FusionVision: A Comprehensive Approach of 3D Object Reconstruction and Segmentation from RGB-D Cameras Using YOLO and Fast Segment Anything.
El Ghazouali, Safouane; Mhirit, Youssef; Oukhrid, Ali; Michelucci, Umberto; Nouira, Hichem.
Affiliation
  • El Ghazouali S; TOELT LLC, AI Lab, 8406 Winterthur, Switzerland.
  • Mhirit Y; Independent Researcher, 75000 Paris, France.
  • Oukhrid A; Independent Researcher, 2502 Biel/Bienne, Switzerland.
  • Michelucci U; TOELT LLC, AI Lab, 8406 Winterthur, Switzerland.
  • Nouira H; LNE Laboratoire National de Métrologie et d'Essais, 75015 Paris, France.
Sensors (Basel) ; 24(9)2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38732995
ABSTRACT
In computer vision, integrating advanced techniques into the pre-processing of RGB-D camera inputs remains a significant challenge, given the complexities arising from diverse environmental conditions and varying object appearances. This paper therefore introduces FusionVision, a comprehensive pipeline for robust 3D segmentation of objects in RGB-D imagery. Traditional computer vision systems, designed primarily for RGB cameras, struggle to simultaneously capture precise object boundaries and achieve high-precision object detection on depth maps. To address this challenge, FusionVision merges state-of-the-art object detection with advanced instance segmentation. Integrating these components enables a holistic interpretation of RGB-D data, i.e., a unified analysis of the information obtained from both the color (RGB) and depth (D) channels, and facilitates the extraction of comprehensive and accurate object information to improve downstream tasks such as 6D object pose estimation, Simultaneous Localization and Mapping (SLAM), and accurate 3D dataset extraction. The proposed FusionVision pipeline employs YOLO to identify objects within the RGB image domain. FastSAM, an innovative semantic segmentation model, is then applied to delineate object boundaries, yielding refined segmentation masks. The synergy between these components and their integration into 3D scene understanding ensures a cohesive fusion of object detection and segmentation, enhancing the overall precision of 3D object segmentation.
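
The pipeline described in the abstract (YOLO detection on the RGB frame, box-prompted FastSAM segmentation, and back-projection of the masked depth pixels into 3D) can be illustrated with the following minimal sketch. This is not the authors' implementation: the ultralytics weight file names and the box-prompt argument are assumptions that may differ across library versions, and the camera intrinsics (fx, fy, cx, cy) must come from the RGB-D sensor.

    # Minimal sketch of a FusionVision-style pipeline (illustrative, not the authors' code).
    # Assumptions: ultralytics exposes YOLO and FastSAM with the weight files named below,
    # and FastSAM accepts bounding-box prompts via the `bboxes` argument; the exact prompt
    # API may differ across ultralytics versions.
    import numpy as np
    from ultralytics import YOLO, FastSAM

    def detect_and_segment(rgb):
        """Detect objects with YOLO, then prompt FastSAM with the detected boxes."""
        boxes = YOLO("yolov8n.pt")(rgb)[0].boxes.xyxy.cpu().numpy()      # (N, 4) boxes, xyxy
        result = FastSAM("FastSAM-s.pt")(rgb, bboxes=boxes.tolist())[0]  # box-prompted masks
        masks = result.masks.data.cpu().numpy()                          # (N, H, W) binary masks
        return boxes, masks

    def masks_to_point_clouds(masks, depth, fx, fy, cx, cy):
        """Back-project each masked region of the depth map into a 3D point set using the
        pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
        clouds = []
        for mask in masks:
            v, u = np.nonzero((mask > 0) & (depth > 0))  # valid pixels inside the mask
            z = depth[v, u]
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            clouds.append(np.stack([x, y, z], axis=1))   # one (M, 3) cloud per object
        return clouds

The per-object point clouds produced this way are what downstream steps such as 6D pose estimation or 3D dataset extraction would consume.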
Keywords

Full text: 1 Databases: MEDLINE Language: En Journal: Sensors (Basel) Publication year: 2024 Document type: Article Affiliation country: Switzerland
