ABSTRACT
Magnetic resonance imaging (MRI) is a ubiquitous medical imaging technology with applications in disease diagnostics, intervention, and treatment planning. Accurate MRI segmentation is critical for diagnosing abnormalities, monitoring diseases, and deciding on a course of treatment. With the advent of advanced deep learning frameworks, fully automated and accurate MRI segmentation is advancing. Traditional supervised deep learning techniques have advanced tremendously, reaching clinical-level accuracy in the field of segmentation. However, these algorithms still require a large amount of annotated data, which is oftentimes unavailable or impractical. One way to circumvent this issue is to utilize algorithms that exploit a limited amount of labeled data. This paper aims to review such state-of-the-art algorithms that use a limited number of annotated samples. We explain the fundamental principles of self-supervised learning, generative models, few-shot learning, and semi-supervised learning and summarize their applications in cardiac, abdominal, and brain MRI segmentation. Throughout this review, we highlight algorithms that can be employed based on the quantity of annotated data available. We also present a comprehensive list of notable publicly available MRI segmentation datasets. To conclude, we discuss possible future directions of the field, including emerging algorithms such as contrastive language-image pretraining and potential combinations of the methods discussed, that can further increase the efficacy of image segmentation with limited labels.
Subjects
Deep Learning, Magnetic Resonance Imaging, Humans, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods, Algorithms, Supervised Machine Learning, Brain/diagnostic imaging
ABSTRACT
Artificial intelligence (AI), particularly deep learning, has made enormous strides in medical imaging analysis. In the field of musculoskeletal radiology, deep-learning models are actively being developed for the identification and evaluation of bone fractures. These methods provide numerous benefits to radiologists, such as increased diagnostic accuracy and efficiency, while also achieving standalone performance comparable or superior to that of clinician readers. Various algorithms are already commercially available for integration into clinical workflows, with the potential to improve healthcare delivery and shape the future practice of radiology. In this systematic review, we explore the performance of current AI methods in the identification and evaluation of fractures, particularly those in the ankle, wrist, hip, and ribs. We also discuss current commercially available products for fracture detection and provide an overview of the current limitations of this technology and future directions of the field.
ABSTRACT
BACKGROUND: Patellofemoral anatomy has not been well characterized. Applying deep learning to automatically measure knee anatomy can provide a better understanding of anatomy, which can be a key factor in improving outcomes. METHODS: A total of 483 patients with knee CT imaging (April 2017-May 2022) from 6 centers were selected from a cohort scheduled for knee arthroplasty and a cohort with healthy knee anatomy. Seven patellofemoral landmarks were annotated on 14,652 images and approved by a senior musculoskeletal radiologist. A two-stage deep learning model was trained to predict landmark coordinates using a modified ResNet50 architecture initialized with self-supervised learning pretrained weights on RadImageNet. Landmark predictions were evaluated with mean absolute error, and derived patellofemoral measurements were analyzed with Bland-Altman plots. Statistical significance of measurements was assessed by paired t-tests. RESULTS: Mean absolute error between predicted and ground truth landmark coordinates was 0.20/0.26 cm in the healthy/arthroplasty cohort. Four knee parameters were calculated: transepicondylar axis length, transepicondylar-posterior femur axis angle, trochlear medial asymmetry, and sulcus angle. There were no statistically significant parameter differences (p > 0.05) between predicted and ground truth measurements in either cohort, except for the sulcus angle in the healthy cohort. CONCLUSION: Our model accurately identifies key trochlear landmarks with ~0.20-0.26 cm accuracy and produces human-comparable measurements on both healthy and pathological knees. This work represents the first deep learning regression model for automated patellofemoral annotation trained on both physiologic and pathologic CT imaging at this scale. This novel model can enhance our ability to analyze the anatomy of the patellofemoral compartment at scale.
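The evaluation scheme described above (mean absolute error over landmark coordinates, then a paired t-test on a derived measurement) can be sketched as follows. This is a minimal illustration with simulated data, not the authors' code: the landmark counts, error magnitudes, and angle values are hypothetical placeholders standing in for the study's actual predictions and measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical ground-truth coordinates (cm) for 7 patellofemoral landmarks,
# and simulated model predictions perturbed by noise.
gt = rng.uniform(0.0, 10.0, size=(7, 2))
pred = gt + rng.normal(0.0, 0.2, size=gt.shape)

# Mean absolute error: average Euclidean distance between predicted
# and ground-truth landmark positions.
mae = float(np.mean(np.linalg.norm(pred - gt, axis=1)))

# Paired t-test on a derived per-case measurement (e.g. sulcus angle, deg):
# each predicted value is paired with the ground-truth value from the same case.
gt_angles = rng.normal(140.0, 5.0, size=30)
pred_angles = gt_angles + rng.normal(0.0, 1.0, size=30)
t_stat, p_value = stats.ttest_rel(pred_angles, gt_angles)

print(f"MAE = {mae:.3f} cm, paired t-test p = {p_value:.3f}")
```

A paired (rather than independent-samples) test is the appropriate choice here because each predicted measurement shares a case with its ground-truth counterpart; p > 0.05 would indicate no detectable systematic difference between model and annotator measurements.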