1.
IEEE Trans Pattern Anal Mach Intell ; 45(7): 8743-8756, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37015515

ABSTRACT

We introduce a new image segmentation task, called Entity Segmentation (ES), which aims to segment all visual entities (things and stuff) in an image without predicting their semantic labels. By removing the need for class-label prediction, models trained for this task can focus on improving segmentation quality. ES has many practical applications, such as image manipulation and editing, where the quality of segmentation masks is crucial but class labels are less important. We conduct the first study of the feasibility of a convolutional, center-based representation for segmenting things and stuff in a unified manner, and show that this representation fits the ES setting exceptionally well. More specifically, we propose a CondInst-like fully convolutional architecture with two novel modules designed to exploit the class-agnostic and non-overlapping requirements of ES. Experiments show that models designed and trained for ES significantly outperform popular class-specific panoptic segmentation models in terms of segmentation quality. Moreover, an ES model can easily be trained on a combination of multiple datasets without resolving label conflicts during dataset merging, and a model trained for ES on one or more datasets generalizes well to test datasets from unseen domains. The code has been released at https://github.com/dvlab-research/Entity.
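
A minimal sketch of the class-agnostic, non-overlapping requirement the abstract describes: each pixel is assigned to at most one entity regardless of category. This is an illustrative assumption, not the released code at https://github.com/dvlab-research/Entity; the function and parameter names are hypothetical.

import torch

def resolve_overlaps(masks: torch.Tensor, scores: torch.Tensor, thresh: float = 0.5):
    # masks: (N, H, W) soft mask logits for N predicted entities (no class labels)
    # scores: (N,) per-entity confidence scores
    # Weight each mask by its confidence, then let every pixel pick the
    # highest-scoring entity, or background if no entity is confident enough.
    weighted = masks.sigmoid() * scores.view(-1, 1, 1)          # (N, H, W)
    best_val, best_idx = weighted.max(dim=0)                    # both (H, W)
    keep = best_val > thresh                                    # foreground pixels
    entity_map = torch.where(keep, best_idx, torch.full_like(best_idx, -1))
    # Rebuild hard, mutually exclusive binary masks per entity.
    hard_masks = torch.stack([(entity_map == i) for i in range(masks.shape[0])])
    return entity_map, hard_masks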

2.
IEEE Trans Pattern Anal Mach Intell ; 42(8): 1957-1967, 2020 Aug.
Article in English | MEDLINE | ID: mdl-30908256

ABSTRACT

In this work, we propose a motion-guided cascaded refinement network for video object segmentation. Assuming that foreground objects exhibit motion patterns different from the background, we apply an active contour model to the optical flow of each video frame to obtain a coarse foreground segmentation. The proposed Cascaded Refinement Network (CRN) then takes this coarse segmentation as guidance and generates an accurate full-resolution segmentation. In this way, motion information and deep CNNs complement each other to accurately segment foreground objects from video frames. To handle multi-instance cases, we extend our method with a spatial-temporal instance embedding model that further segments the foreground regions into instances and propagates instance labels. We also introduce a single-channel residual attention module in CRN that incorporates the coarse segmentation map as attention, making the network effective and efficient in both training and testing. Experiments on popular benchmarks show that our method achieves state-of-the-art performance with high time efficiency.
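
A hedged sketch of the single-channel residual attention idea described above: the coarse segmentation map acts as a one-channel spatial attention over the features, applied in residual form. The module name, layer choices, and (1 + attention) formulation are assumptions for illustration, not the authors' exact design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.refine = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # feat:   (B, C, H, W) deep features
        # coarse: (B, 1, h, w) coarse foreground map from the motion cue
        # Resize the coarse map to the feature resolution and squash it to [0, 1]
        # so it can serve as a single-channel spatial attention map.
        attn = torch.sigmoid(F.interpolate(coarse, size=feat.shape[-2:],
                                           mode="bilinear", align_corners=False))
        # Residual formulation: modulate features by (1 + attention) so regions
        # missed by the coarse map are attenuated rather than zeroed out.
        return self.refine(feat * (1.0 + attn))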
