ABSTRACT
In recent years, several deep learning models have been proposed to accurately quantify and diagnose cardiac pathologies. These automated tools rely heavily on the accurate segmentation of cardiac structures in MR images. However, segmentation of the right ventricle is challenging due to its highly complex shape and ill-defined borders. Hence, new methods are needed to handle this structure's geometric and textural complexity, notably in the presence of pathologies such as dilated right ventricle, tricuspid regurgitation, arrhythmogenesis, tetralogy of Fallot, and inter-atrial communication. The last MICCAI challenge on right ventricle segmentation was held in 2012 and included only 48 cases from a single clinical center. As part of the 12th Workshop on Statistical Atlases and Computational Models of the Heart (STACOM 2021), the M&Ms-2 challenge was organized to promote research interest in right ventricle segmentation across multi-disease, multi-view, and multi-center cardiac MRI. Three hundred sixty CMR cases, including short-axis and long-axis 4-chamber views, were collected from three Spanish hospitals using nine scanners from three vendors, and covered a diverse set of right and left ventricle pathologies. The solutions provided by the participants show that nnU-Net achieved the best results overall. However, multi-view approaches were able to capture additional information, highlighting the need to integrate multiple cardiac diseases, views, scanners, and acquisition protocols to produce reliable automatic cardiac segmentation algorithms.
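For reference, segmentation quality in challenges of this kind is commonly summarized with the Dice similarity coefficient. The abstract does not spell out the M&Ms-2 evaluation protocol, so the minimal sketch below is illustrative rather than official; the label value, array shapes, and toy masks are assumptions.

```python
# Hedged sketch: Dice overlap for one labeled structure, the standard
# metric for cardiac segmentation benchmarks (assumed here, not taken
# from the M&Ms-2 protocol).
import numpy as np

def dice_coefficient(pred, truth, label=3):
    """Dice overlap between predicted and ground-truth label maps
    for one structure (e.g., a hypothetical right-ventricle label)."""
    p = (pred == label)
    t = (truth == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(p, t).sum() / denom

# Toy 2D label maps standing in for one short-axis slice (hypothetical):
truth = np.zeros((8, 8), dtype=int)
truth[2:6, 2:6] = 3              # "right ventricle" region
pred = np.zeros_like(truth)
pred[3:7, 2:6] = 3               # prediction shifted one row down
print(f"RV Dice: {dice_coefficient(pred, truth):.3f}")  # -> 0.750
```

The same computation applies unchanged to 3D volumes; challenge leaderboards typically average per-structure Dice over patients.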
Subject(s)
Deep Learning, Heart Ventricles, Humans, Heart Ventricles/diagnostic imaging, Magnetic Resonance Imaging/methods, Algorithms, Heart Atria
ABSTRACT
OBJECTIVE: Robotic endoscopes have the potential to dramatically improve endoscopy procedures; however, current attempts remain limited by mobility and sensing challenges and have yet to offer the full capabilities of traditional tools. Endoscopic intervention (e.g., biopsy) for robotic systems remains an understudied problem and must be addressed prior to clinical adoption. This paper presents an autonomous intervention technique onboard a Robotic Endoscope Platform (REP) using endoscopy forceps, an auto-feeding mechanism, and positional feedback. METHODS: A workspace model is established for estimating tool position, while a Structure from Motion (SfM) approach is used to estimate the target-polyp position from the onboard camera and positional sensor. Using these data, a visual system for controlling the REP position and forceps extension is developed and tested in multiple anatomical environments. RESULTS: The workspace model demonstrates an accuracy of 5.5%, while target-polyp estimates are within 5 mm of absolute error. A successful intervention takes only 15 seconds once the polyp has been located, with success rates of 43% for a 1 cm polyp, 67% for a 2 cm polyp, and 81% for a 3 cm polyp. CONCLUSION: Workspace modeling and visual sensing techniques enable autonomous endoscopic intervention and demonstrate the potential for similar strategies onboard mobile robotic endoscopic devices. SIGNIFICANCE: To the authors' knowledge, this is the first attempt at automating the task of colonoscopy intervention onboard a mobile robot. While the REP is not sized for actual procedures, these techniques are translatable to devices suitable for in vivo application.
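To make the target-localization step concrete, the sketch below triangulates a polyp position from two views with OpenCV, standing in for the SfM approach the abstract names. The intrinsics, camera poses, and pixel detections are hypothetical placeholders; the paper's actual pipeline is not reproduced here.

```python
# Hedged sketch: two-view triangulation of a target (polyp) position.
# All numeric values are illustrative assumptions, not data from the paper.
import numpy as np
import cv2

# Assumed pinhole intrinsics (fx, fy, cx, cy):
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

def projection_matrix(K, R, C):
    """3x4 projection P = K [R | t] with t = -R C, C = camera center (m)."""
    t = -R @ C
    return K @ np.hstack([R, t.reshape(3, 1)])

# Two viewpoints: the second camera center reflects a hypothetical 10 mm
# sideways move, as might be reported by the REP's positional sensor.
P1 = projection_matrix(K, np.eye(3), np.zeros(3))
P2 = projection_matrix(K, np.eye(3), np.array([0.01, 0.0, 0.0]))

# Pixel coordinates of the detected polyp centroid in each frame (hypothetical):
uv1 = np.array([[352.0], [256.0]])
uv2 = np.array([[192.0], [256.0]])

# Linear triangulation; OpenCV returns homogeneous 4x1 coordinates.
X_h = cv2.triangulatePoints(P1, P2, uv1, uv2)
X = (X_h[:3] / X_h[3]).ravel()
print("Estimated polyp position (m):", X)  # ~ [0.002, 0.001, 0.050]
```

A full SfM pipeline would also recover the relative camera pose from image correspondences; here that pose is assumed known, the simplification a positional sensor onboard the robot would permit.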