e-space
Manchester Metropolitan University's Research Repository

FISHnet: Learning to Segment the Silhouettes of Swimmers

Ascenso, Guido and Yap, Moi Hoon and Allen, Thomas Bruce and Choppin, Simon S and Payton, Carl (2020) FISHnet: Learning to Segment the Silhouettes of Swimmers. IEEE Access, 8. pp. 178311-178321. ISSN 2169-3536

Available under License Creative Commons Attribution.


Abstract

We present a novel silhouette extraction algorithm designed for the binary segmentation of swimmers underwater. The intended use of this algorithm is within a 2D-to-3D pipeline for the markerless motion capture of swimmers, a task that has not yet been achieved satisfactorily, partly due to the absence of silhouette extraction methods that work well on images of swimmers. Our algorithm, FISHnet, was trained on the novel Scylla dataset, which contains 3,100 images (and corresponding hand-traced silhouettes) of swimmers underwater, and achieved a Dice score of 0.9712 on its test data. Our algorithm uses a U-Net-like architecture with VGG16 as a backbone and introduces two novel modules: a modified version of the Semantic Embedding Branch module from ExFuse, which increases the complexity of the features learned by the layers of the encoder; and the Spatial Resolution Enhancer module, which increases the spatial resolution of the features of the decoder before they are skip-connected with the features of the encoder. The contribution of these two modules to the performance of our network was marginal, which we attribute to the limited amount of data on which the network was trained. Nevertheless, our model outperformed state-of-the-art silhouette extraction algorithms (namely DeepLabv3+) on Scylla, and it is the first algorithm developed specifically for the task of accurately segmenting the silhouettes of swimmers.
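To make the architecture described in the abstract more concrete, the sketch below shows one plausible arrangement in PyTorch: a U-Net-like encoder-decoder with a VGG16 backbone, an ExFuse-style Semantic Embedding Branch applied to the encoder features, and a Spatial Resolution Enhancer applied to the decoder features before each skip connection. The class names (`FISHnetSketch`, `SEB`, `SRE`), channel widths, upsampling operator, and all other internals are illustrative assumptions inferred from the abstract, not the authors' released implementation.

```python
# Minimal sketch; module internals are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16


class SEB(nn.Module):
    """ExFuse-style Semantic Embedding Branch (simplified): a deeper feature
    map is upsampled, projected, and multiplied element-wise into a shallower
    encoder feature to enrich it with semantic context."""
    def __init__(self, high_ch, low_ch):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(high_ch, low_ch, 3, padding=1),
            nn.BatchNorm2d(low_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, low, high):
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear",
                             align_corners=False)
        return low * self.project(high)


class SRE(nn.Module):
    """Hypothetical Spatial Resolution Enhancer: doubles the resolution of a
    decoder feature (here via sub-pixel convolution) before the skip."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.expand = nn.Conv2d(in_ch, out_ch * 4, 3, padding=1)
        self.shuffle = nn.PixelShuffle(2)          # 2x spatial upsampling
        self.refine = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, 3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.refine(self.shuffle(self.expand(x)))


class FISHnetSketch(nn.Module):
    """U-Net-like encoder-decoder with a VGG16 backbone (64/128/256/512/512)."""
    def __init__(self, num_classes=1):
        super().__init__()
        feats = vgg16(weights=None).features  # pretrained weights optional
        self.enc1 = feats[:4]     # 64  ch, full resolution
        self.enc2 = feats[4:9]    # 128 ch, 1/2
        self.enc3 = feats[9:16]   # 256 ch, 1/4
        self.enc4 = feats[16:23]  # 512 ch, 1/8
        self.enc5 = feats[23:30]  # 512 ch, 1/16

        # SEB-style enrichment of the shallower encoder features.
        self.seb4, self.seb3 = SEB(512, 512), SEB(512, 256)
        self.seb2, self.seb1 = SEB(256, 128), SEB(128, 64)

        # Decoder: SRE upsamples, then a conv block fuses with the skip.
        self.sre4, self.dec4 = SRE(512, 512), self._block(1024, 256)
        self.sre3, self.dec3 = SRE(256, 256), self._block(512, 128)
        self.sre2, self.dec2 = SRE(128, 128), self._block(256, 64)
        self.sre1, self.dec1 = SRE(64, 64), self._block(128, 64)
        self.head = nn.Conv2d(64, num_classes, 1)

    @staticmethod
    def _block(in_ch, out_ch):
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        e5 = self.enc5(e4)

        # Enrich encoder features with deeper semantic context (SEB).
        e4 = self.seb4(e4, e5)
        e3 = self.seb3(e3, e4)
        e2 = self.seb2(e2, e3)
        e1 = self.seb1(e1, e2)

        # Decode: enhance resolution (SRE), concatenate skip, fuse.
        d4 = self.dec4(torch.cat([self.sre4(e5), e4], dim=1))
        d3 = self.dec3(torch.cat([self.sre3(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self.sre2(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.sre1(d2), e1], dim=1))
        return torch.sigmoid(self.head(d1))        # per-pixel swimmer mask


if __name__ == "__main__":
    model = FISHnetSketch()
    mask = model(torch.randn(1, 3, 256, 256))      # -> torch.Size([1, 1, 256, 256])
    print(mask.shape)
```

Sub-pixel convolution is used for the resolution-enhancement step purely as an example of one way to upsample decoder features before the skip connection; the abstract does not specify which upsampling operator the Spatial Resolution Enhancer actually uses.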
