ABSTRACT
PURPOSE: This study evaluates the nnU-Net for segmenting brain, skin, tumors, and ventricles in contrast-enhanced T1 (T1CE) images, benchmarking it against an established mesh growing algorithm (MGA). METHODS: We used 67 retrospectively collected, annotated, single-center T1CE brain scans to train models for brain, skin, tumor, and ventricle segmentation. An additional 32 scans from two centers were used to test performance against that of the MGA. Performance was measured using the Dice-Sørensen coefficient (DSC), intersection over union (IoU), 95th percentile Hausdorff distance (HD95), and average symmetric surface distance (ASSD) metrics; segmentation times were also compared. RESULTS: The nnU-Net models significantly outperformed the MGA (p < 0.0125), with a median DSC for brain of 0.971 [95% CI: 0.945-0.979], skin: 0.997 [95% CI: 0.984-0.999], tumor: 0.926 [95% CI: 0.508-0.968], and ventricles: 0.910 [95% CI: 0.812-0.968], compared to the MGA's median DSC for brain: 0.936 [95% CI: 0.890-0.958], skin: 0.991 [95% CI: 0.964-0.996], tumor: 0.723 [95% CI: 0.000-0.926], and ventricles: 0.856 [95% CI: 0.216-0.916]. nnU-Net performance did not differ significantly between centers except for the skin segmentations. Additionally, the nnU-Net models were faster (mean: 1139 s [95% CI: 685.0-1616]) than the MGA (mean: 2851 s [95% CI: 1482-6246]). CONCLUSIONS: The nnU-Net is a fast, reliable tool for creating automatic deep learning-based segmentation pipelines, reducing the need for extensive manual tuning and iteration. The models achieve this performance despite a modestly sized training set. The ability to create high-quality segmentations in a short timespan can prove invaluable in neurosurgical settings.
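The overlap metrics named in the methods (DSC and IoU) can be computed directly from binary masks, and the significance threshold of 0.0125 is consistent with a Bonferroni correction of 0.05 over four structures. The following is a minimal, illustrative Python sketch under those assumptions; the per-case score arrays and the choice of a paired Wilcoxon signed-rank test are hypothetical and not taken from the study.

import numpy as np
from scipy.stats import wilcoxon

def dice(pred, gt):
    # Dice-Sørensen coefficient between two binary segmentation masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def iou(pred, gt):
    # Intersection over union (Jaccard index) between two binary masks.
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return np.logical_and(pred, gt).sum() / union if union else 1.0

# Hypothetical paired per-case DSCs for the same test scans.
dsc_nnunet = np.array([0.971, 0.965, 0.978, 0.952, 0.969, 0.940, 0.958, 0.981])
dsc_mga = np.array([0.938, 0.921, 0.950, 0.890, 0.915, 0.902, 0.912, 0.960])

stat, p = wilcoxon(dsc_nnunet, dsc_mga)  # paired signed-rank test
alpha = 0.05 / 4                         # Bonferroni correction over four structures
print(f"p = {p:.4f}, significant at alpha = {alpha}: {p < alpha}")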
Subject(s)
Neoplasms, Surgical Mesh, Humans, Retrospective Studies, Magnetic Resonance Imaging, Algorithms
ABSTRACT
BACKGROUND: Visualization, analysis, and characterization of the angioarchitecture of a brain arteriovenous malformation (bAVM) are crucial steps in the understanding and management of these complex lesions. Three-dimensional (3D) segmentation and 3D visualization of bAVMs play a significant role in this process. We performed a systematic review of currently available 3D segmentation and visualization techniques for bAVMs. METHODS: PubMed, Embase, and Google Scholar were searched to identify studies reporting 3D segmentation techniques applied to bAVM characterization. The category of input scan, segmentation approach (automatic, semiautomatic, manual), time needed for segmentation, and 3D visualization techniques were noted. RESULTS: Thirty-three studies were included. Thirteen (39%) used MRI as the baseline imaging modality, 9 (27%) used DSA, and 7 (21%) used CT. Segmentation through automatic algorithms was used in 20 (61%) studies, semiautomatic segmentation in 6 (18%), and manual segmentation in 7 (21%). Median automatic segmentation time was 10 min (IQR 33), and median semiautomatic segmentation time was 25 min (IQR 73). Manual segmentation time was reported in only one study, with a mean of 5-10 min. Thirty-two (97%) studies used screens to visualize the 3D segmentation outcomes, and 1 (3%) study used a heads-up display (HUD). Integration with mixed reality was used in 4 studies (12%). CONCLUSIONS: A gold standard for 3D visualization of bAVMs does not exist. This review describes a tendency over time to base segmentation on algorithms trained with machine learning. Unsupervised fuzzy-based algorithms stand out as a potentially preferred strategy. Continued efforts will be necessary to improve algorithms, integrate complete hemodynamic assessment, and find innovative tools for three-dimensional visualization.
Subject(s)
Imaging, Three-Dimensional, Intracranial Arteriovenous Malformations, Humans, Imaging, Three-Dimensional/methods, Intracranial Arteriovenous Malformations/diagnostic imaging, Intracranial Arteriovenous Malformations/pathology, Algorithms, Brain/pathology, Magnetic Resonance Imaging
ABSTRACT
OBJECTIVE: For currently available augmented reality workflows, 3D models need to be created with manual or semiautomatic segmentation, which is a time-consuming process. The authors created an automatic segmentation algorithm that generates 3D models of skin, brain, ventricles, and contrast-enhancing tumor from a single T1-weighted MR sequence and embedded this algorithm into an automatic workflow for 3D evaluation of anatomical structures with augmented reality in a cloud environment. In this study, the authors validated the accuracy and efficiency of this automatic segmentation algorithm for brain tumors by comparing it with a manually segmented ground truth set. METHODS: Fifty contrast-enhanced T1-weighted sequences of patients with contrast-enhancing lesions measuring at least 5 cm3 were included. All slices of the ground truth set were manually segmented. The same scans were subsequently run in the cloud environment for automatic segmentation. Segmentation times were recorded. The accuracy of the algorithm was compared with that of manual segmentation and evaluated in terms of the Sørensen-Dice similarity coefficient (DSC), average symmetric surface distance (ASSD), and 95th percentile Hausdorff distance (HD95). RESULTS: The mean ± SD computation time of the automatic segmentation algorithm was 753 ± 128 seconds. The mean ± SD DSC was 0.868 ± 0.07, ASSD was 1.31 ± 0.63 mm, and HD95 was 4.80 ± 3.18 mm. Meningiomas (mean 0.89 and median 0.92) showed a greater DSC than metastases (mean 0.84 and median 0.85). Automatic segmentation was more accurate for supratentorial metastases (mean 0.86 and median 0.87 for DSC; mean 3.62 mm and median 3.11 mm for HD95) than for infratentorial metastases (mean 0.82 and median 0.81 for DSC; mean 5.26 mm and median 4.72 mm for HD95). CONCLUSIONS: The automatic cloud-based segmentation algorithm is reliable, accurate, and fast enough to aid neurosurgeons in everyday clinical practice by providing 3D augmented reality visualization of contrast-enhancing intracranial lesions measuring at least 5 cm3. The next steps involve incorporating other sequences and improving accuracy with 3D fine-tuning in order to expand the scope of the augmented reality workflow.
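The surface-distance metrics used above (ASSD and HD95) are defined by the distances between the boundary voxels of the automatic and manual masks. Below is a minimal sketch, assuming binary NumPy masks and an explicitly supplied voxel spacing; it illustrates the metric definitions rather than reproducing the study's evaluation code, and HD95 conventions vary slightly between implementations.

import numpy as np
from scipy import ndimage

def surface_voxels(mask):
    # Boundary voxels: mask voxels that touch the background.
    mask = mask.astype(bool)
    return mask & ~ndimage.binary_erosion(mask)

def surface_distances(a, b, spacing):
    # Distance from each surface voxel of a to the nearest surface voxel of b.
    dt_b = ndimage.distance_transform_edt(~surface_voxels(b), sampling=spacing)
    return dt_b[surface_voxels(a)]

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    # Average symmetric surface distance, in the units of `spacing` (e.g., mm).
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    return (d_ab.sum() + d_ba.sum()) / (d_ab.size + d_ba.size)

def hd95(a, b, spacing=(1.0, 1.0, 1.0)):
    # 95th percentile Hausdorff distance (maximum of the two directed percentiles).
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))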
Subject(s)
Augmented Reality, Brain Neoplasms, Algorithms, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/surgery, Humans, Image Processing, Computer-Assisted
ABSTRACT
PURPOSE: Leptomeningeal metastases (LM) are a rare but often debilitating complication of advanced cancer that can severely impact a patient's quality of life. LM can result in hydrocephalus (HC) and lead to a range of neurologic sequelae, including weakness, headaches, and altered mental status. Given that patients with LM generally have quite poor prognoses, how to manage this HC remains unclear, and the decision is not only a medical but also an ethical one. METHODS: We first provide a brief overview of management options for hydrocephalus secondary to LM. We then apply general ethical principles to decision making in LM-associated hydrocephalus that can help guide physicians and patients. RESULTS: Management options for LM-associated hydrocephalus include shunt placement, repeated lumbar punctures, intraventricular reservoir placement, endoscopic third ventriculostomy, and pain management alone without intervention. While these options may offer symptomatic relief in the short term, each is also associated with risks to the patient. Moreover, data on survival and quality of life following intervention are sparse. We propose that the pros and cons of each option should be evaluated not only from a clinical standpoint but also within a larger framework that incorporates ethical principles and individual patient values. CONCLUSIONS: The decision of how to manage LM-associated hydrocephalus is complex and requires close collaboration among the physician, the patient, and/or the patient's family, friends, and community leaders. Ultimately, the decision should be rooted in the patient's values and should aim to optimize the patient's quality of life.
Subject(s)
Clinical Decision-Making/ethics, Hydrocephalus/etiology, Hydrocephalus/therapy, Meningeal Neoplasms/complications, Meningeal Neoplasms/secondary, Humans, Meningeal Neoplasms/therapy, Neurosurgical Procedures/ethics
ABSTRACT
OBJECTIVE: Effective image segmentation of cerebral structures is fundamental to 3-dimensional techniques such as augmented reality. To be clinically viable, segmentation algorithms should be fully automatic and easily integrated into existing digital infrastructure. We created a fully automatic, adaptive-meshing-based segmentation system for T1-weighted magnetic resonance imaging (MRI) that automatically segments the complete ventricular system and runs in a cloud-based environment that can be accessed on an augmented reality device. This study aims to assess the accuracy and segmentation time of the system by comparing it with a manually segmented ground truth dataset. METHODS: A ground truth (GT) dataset of 46 contrast-enhanced and non-contrast-enhanced T1-weighted MRI scans was manually segmented. These scans were also uploaded to our system to create a machine-segmented (MS) dataset. The GT data were compared with the MS data using the Sørensen-Dice similarity coefficient and the 95% Hausdorff distance to determine segmentation accuracy. Furthermore, segmentation times for all GT and MS segmentations were measured. RESULTS: Automatic segmentation was successful in 45 (98%) of 46 cases. The mean Sørensen-Dice similarity coefficient was 0.83 (standard deviation [SD] = 0.08), and the mean 95% Hausdorff distance was 19.06 mm (SD = 11.20). Segmentation time was significantly longer for the GT group (mean = 14,405 seconds, SD = 7,089) than for the MS group (mean = 1,275 seconds, SD = 714), with a mean difference of 13,130 seconds (95% confidence interval 10,130-16,130). CONCLUSIONS: The described adaptive-meshing-based segmentation algorithm provides accurate and time-efficient automatic segmentation of the ventricular system from T1 MRI scans and direct visualization of the rendered surface models in augmented reality.
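A mean difference in segmentation time with a 95% confidence interval, as reported above, can be reproduced from paired per-case timings. The sketch below uses a paired t-based interval; the array names and values are hypothetical, and the t-interval is an assumption made only to make the calculation concrete, not the study's reported analysis.

import numpy as np
from scipy import stats

def paired_mean_difference_ci(gt_times, ms_times, confidence=0.95):
    # Mean paired difference with a t-distribution confidence interval.
    diffs = np.asarray(gt_times, float) - np.asarray(ms_times, float)
    mean_diff = diffs.mean()
    sem = stats.sem(diffs)  # standard error of the mean difference
    lo, hi = stats.t.interval(confidence, len(diffs) - 1, loc=mean_diff, scale=sem)
    return mean_diff, (lo, hi)

# Hypothetical per-case segmentation times in seconds (manual GT vs. machine-segmented MS).
gt_times = np.array([14800, 13900, 15200, 12800, 14100, 16000])
ms_times = np.array([1300, 1250, 1400, 1100, 1320, 1500])
print(paired_mean_difference_ci(gt_times, ms_times))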
Subject(s)
Augmented Reality, Cerebral Ventricles/diagnostic imaging, Imaging, Three-Dimensional/methods, Magnetic Resonance Imaging/methods, Neuronavigation/methods, Databases, Factual, Humans, Imaging, Three-Dimensional/trends, Magnetic Resonance Imaging/trends, Neuronavigation/trends, Prospective Studies, Registries
ABSTRACT
BACKGROUND: Augmented reality neuronavigation (ARN) systems can overlay three-dimensional anatomy and disease without the need for a two-dimensional external monitor. Accuracy is crucial for their clinical applicability. We performed a systematic review of the reported accuracy of ARN systems and compared it with the accuracy of conventional infrared neuronavigation (CIN). METHODS: PubMed and Embase were searched for ARN and CIN systems. For ARN, the type of system, method of patient-to-image registration, method of accuracy assessment, and accuracy of the system were noted. For CIN, navigation accuracy, expressed as target registration error (TRE), was noted. A meta-analysis was performed comparing the TRE of ARN and CIN systems. RESULTS: Thirty-five studies were included, 12 for ARN and 23 for CIN. ARN systems could be divided into head-mounted display and heads-up display systems. In ARN, 4 methods of patient-to-image registration were encountered, of which point-pair matching was the most frequently used. Five methods for assessing accuracy were described. Ninety-four TRE measurements of ARN systems were compared with 9058 TRE measurements of CIN systems. Mean TRE was 2.5 mm (95% confidence interval, 0.7-4.4) for ARN systems and 2.6 mm (95% confidence interval, 2.1-3.1) for CIN systems. CONCLUSIONS: In ARN, there seems to be a lack of agreement regarding the best method of assessing accuracy. Nevertheless, ARN systems appear able to achieve an accuracy comparable to that of CIN systems. Future studies should be prospective and compare TREs, which should be measured in a standardized fashion.
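A pooled mean TRE with a 95% confidence interval, as reported in this meta-analysis, can be obtained by weighting per-study means by their precision. The review does not state which pooling model was used, so the fixed-effect, inverse-variance sketch below, with hypothetical per-study means, SDs, and measurement counts, illustrates only one common approach.

import numpy as np

def pooled_mean_ci(means, sds, ns, z=1.96):
    # Fixed-effect (inverse-variance) pooled mean with a normal-approximation 95% CI.
    means = np.asarray(means, float)
    var_of_means = np.asarray(sds, float) ** 2 / np.asarray(ns, float)  # variance of each study mean
    weights = 1.0 / var_of_means
    pooled = np.sum(weights * means) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return pooled, (pooled - z * se, pooled + z * se)

# Hypothetical per-study TRE summaries (mm): mean, SD, and number of TRE measurements.
tre_means = [2.1, 3.0, 2.4, 2.8]
tre_sds = [0.9, 1.4, 1.1, 1.2]
tre_ns = [20, 15, 30, 29]
print(pooled_mean_ci(tre_means, tre_sds, tre_ns))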