Results 1 - 20 of 360
1.
Neuroimage Clin ; 42: 103611, 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38703470

ABSTRACT

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D U-Nets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 pairs of T1-weighted and FLAIR images from MS patients, collected in-house on a 3T MRI scanner, with the lesion maps used for training manually segmented by expert neuroradiologists. LST-AI also includes a lesion location annotation tool, labeling lesions as periventricular, infratentorial, and juxtacortical according to the 2017 McDonald criteria, and, additionally, as subcortical. We conduct evaluations on 103 test cases of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63, surpassing all other competing models at the time of the challenge. The lesion detection rate rose rapidly with increasing lesion volume, exceeding 75% for lesions with a volume between 10 mm3 and 100 mm3. Given its higher segmentation performance, we recommend that research groups currently using LST transition to LST-AI. To facilitate broad adoption, we are releasing LST-AI as an open-source model, available as a command-line tool, dockerized container, or Python script, enabling diverse applications across multiple platforms.
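The composite objective described in the abstract (binary cross-entropy plus Tversky loss) can be sketched in a few lines. This is a minimal NumPy illustration of the general loss family, not LST-AI's actual implementation; the term weights `w_bce` and `w_tversky` are illustrative assumptions.

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.5, beta=0.5, eps=1e-7):
    """Tversky loss for soft binary segmentation maps.
    alpha weights false positives, beta false negatives;
    alpha = beta = 0.5 reduces to the Dice loss."""
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1.0 - target))
    fn = np.sum((1.0 - pred) * target)
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)

def bce_loss(pred, target, eps=1e-7):
    """Binary cross-entropy averaged over voxels."""
    pred = np.clip(pred, eps, 1.0 - eps)
    return -np.mean(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))

def composite_loss(pred, target, w_bce=1.0, w_tversky=1.0):
    """Weighted sum of BCE and Tversky terms: a class-imbalance-aware
    segmentation objective of the kind the abstract names."""
    return w_bce * bce_loss(pred, target) + w_tversky * tversky_loss(pred, target)
```

Raising `beta` above 0.5 penalizes missed lesion voxels more heavily, which is one common way such losses counter the lesion/background imbalance.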

2.
Article in English | MEDLINE | ID: mdl-38723059

ABSTRACT

AIMS: Standard methods of heart chamber volume estimation in cardiovascular magnetic resonance (CMR) typically utilize simple geometric formulae based on a limited number of slices. We aimed to evaluate whether an automated deep learning neural network prediction of the 3D anatomy of all four chambers would show stronger associations with cardiovascular risk factors and disease than standard volume estimation methods in the UK Biobank. METHODS: A deep learning network was adapted to predict 3D segmentations of the left and right ventricles (LV, RV) and atria (LA, RA) at ∼1 mm isotropic resolution from CMR short- and long-axis 2D segmentations obtained from a fully automated machine learning pipeline in 4723 individuals with cardiovascular disease (CVD) and 5733 without in the UK Biobank. Relationships between volumes at end-diastole (ED) and end-systole (ES) and risk/disease factors were quantified using univariate, multivariate and logistic regression analyses. The strength of association with disease was compared between deep learning volumes and standard volumes using the area under the receiver operating characteristic curve (AUC). RESULTS: Univariate and multivariate associations between deep learning volumes and most risk and disease factors were stronger than for standard volumes (higher R2 and more significant P values), particularly for sex, age, and body mass index. AUCs for all logistic regressions were higher for deep learning volumes than for standard volumes (p<0.001 for all four chambers at ED and ES). CONCLUSIONS: Neural network reconstructions of whole heart volumes had significantly stronger associations with cardiovascular disease and risk factors than standard volume estimation methods in an automatic processing pipeline.
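The AUC comparison above uses the standard receiver operating characteristic statistic. As a minimal illustration (not the study's pipeline), the AUC equals the Mann-Whitney probability that a randomly chosen diseased case receives a higher predicted risk than a randomly chosen healthy one:

```python
def auc_score(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a random positive is scored above a random
    negative, with ties counted as half a win."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            wins += 1.0 if p > n else (0.5 if p == n else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))
```

This O(n·m) form is only for clarity; production code would use a rank-based computation or a library routine.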

3.
Mult Scler ; : 13524585241247775, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38751230

ABSTRACT

BACKGROUND: Alterations of the superficial retinal vasculature are commonly observed in multiple sclerosis (MS) and can be visualized through optical coherence tomography angiography (OCTA). OBJECTIVES: This study aimed to examine changes in the retinal vasculature during MS and to integrate findings into current concepts of the underlying pathology. METHODS: In this cross-sectional study, including 259 relapsing-remitting MS patients and 78 healthy controls, we analyzed OCTAs using deep-learning-based segmentation algorithm tools. RESULTS: We identified a loss of small-sized vessels (diameter < 10 µm) in the superficial vascular complex in all MS eyes, irrespective of their optic neuritis (ON) history. This alteration was associated with MS disease burden and appears independent of retinal ganglion cell loss. In contrast, an observed reduction of medium-sized vessels (diameter 10-20 µm) was specific to eyes with a history of ON and was closely linked to ganglion cell atrophy. CONCLUSION: These findings suggest distinct atrophy patterns in retinal vessels in patients with MS. Further studies are necessary to investigate retinal vessel alterations and their underlying pathology in MS.

4.
IEEE Trans Med Imaging ; PP, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38635383

ABSTRACT

The lack of reliable biomarkers makes predicting the conversion from intermediate to neovascular age-related macular degeneration (iAMD, nAMD) a challenging task. We develop a Deep Learning (DL) model to predict the future risk of conversion of an eye from iAMD to nAMD from its current OCT scan. Although eye clinics generate vast amounts of longitudinal OCT scans to monitor AMD progression, only a small subset can be manually labeled for supervised DL. To address this issue, we propose Morph-SSL, a novel Self-supervised Learning (SSL) method for longitudinal data. It uses pairs of unlabelled OCT scans from different visits and involves morphing the scan from the previous visit to the next. The Decoder predicts the transformation for morphing and ensures a smooth feature manifold that can generate intermediate scans between visits through linear interpolation. Next, the Morph-SSL trained features are input to a Classifier which is trained in a supervised manner to model the cumulative probability distribution of the time to conversion with a sigmoidal function. Morph-SSL was trained on unlabelled scans of 399 eyes (3570 visits). The Classifier was evaluated with a five-fold cross-validation on 2418 scans from 343 eyes with clinical labels of the conversion date. The Morph-SSL features achieved an AUC of 0.779 in predicting the conversion to nAMD within the next 6 months, outperforming the same network when trained end-to-end from scratch or pre-trained with popular SSL methods. Automated prediction of the future risk of nAMD onset can enable timely treatment and individualized AMD management.
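The Classifier above models the cumulative probability of conversion by time t with a sigmoidal function. A minimal sketch of such a parameterization, where `t0` (predicted conversion time) and `s` (scale) are hypothetical names rather than the paper's notation:

```python
import math

def conversion_cdf(t, t0, s):
    """Sigmoidal cumulative probability that conversion to nAMD has
    occurred by time t, centered at a predicted conversion time t0
    with scale s (larger s = more gradual transition)."""
    return 1.0 / (1.0 + math.exp(-(t - t0) / s))
```

Evaluating this at, say, t = 6 months yields the within-six-months conversion risk the abstract reports AUC for.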

5.
IEEE Trans Med Imaging ; PP, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38656867

ABSTRACT

Self-supervised learning (SSL) has emerged as a powerful technique for improving the efficiency and effectiveness of deep learning models. Contrastive methods are a prominent family of SSL that extract similar representations of two augmented views of an image while pushing away others in the representation space as negatives. However, the state-of-the-art contrastive methods require large batch sizes and augmentations designed for natural images that are impractical for 3D medical images. To address these limitations, we propose a new longitudinal SSL method, 3DTINC, based on non-contrastive learning. It is designed to learn perturbation-invariant features for 3D optical coherence tomography (OCT) volumes, using augmentations specifically designed for OCT. We introduce a new non-contrastive similarity loss term that learns temporal information implicitly from intra-patient scans acquired at different times. Our experiments show that this temporal information is crucial for predicting progression of retinal diseases, such as age-related macular degeneration (AMD). After pretraining with 3DTINC, we evaluated the learned representations and the prognostic models on two large-scale longitudinal datasets of retinal OCTs where we predict the conversion to wet-AMD within a six-month interval. Our results demonstrate that each component of our contributions is crucial for learning meaningful representations useful in predicting disease progression from longitudinal volumetric scans.
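Non-contrastive methods of the kind referenced above typically score agreement between two views with a negative cosine similarity rather than a contrastive objective that needs large batches of negatives. A minimal sketch of such a similarity term (an illustration of the family, not 3DTINC's actual loss):

```python
import numpy as np

def negative_cosine_similarity(z1, z2, eps=1e-12):
    """Non-contrastive similarity loss between two embedding vectors:
    the negative cosine similarity, minimized (at -1) when the two
    augmented views' representations align."""
    z1 = z1 / (np.linalg.norm(z1) + eps)
    z2 = z2 / (np.linalg.norm(z2) + eps)
    return -float(np.dot(z1, z2))
```

In a longitudinal setting, `z1` and `z2` could be embeddings of intra-patient scans from different visits, which is how temporal structure can enter the objective implicitly.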

6.
Nat Methods ; 2024 Apr 22.
Article in English | MEDLINE | ID: mdl-38649742

ABSTRACT

Automated detection of specific cells in three-dimensional datasets such as whole-brain light-sheet image stacks is challenging. Here, we present DELiVR, a virtual reality-trained deep-learning pipeline for detecting c-Fos+ cells as markers for neuronal activity in cleared mouse brains. Virtual reality annotation substantially accelerated training data generation, enabling DELiVR to outperform state-of-the-art cell-segmenting approaches. Our pipeline is available in a user-friendly Docker container that runs with a standalone Fiji plugin. DELiVR features a comprehensive toolkit for data visualization and can be customized to other cell types of interest, as we did here for microglia somata, using Fiji for dataset-specific training. We applied DELiVR to investigate cancer-related brain activity, unveiling an activation pattern that distinguishes weight-stable cancer from cancers associated with weight loss. Overall, DELiVR is a robust deep-learning tool that does not require advanced coding skills to analyze whole-brain imaging data in health and disease.

7.
Eur Radiol ; 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38488971

ABSTRACT

OBJECTIVES: To develop an algorithm to link undiagnosed patients to previous patient histories based on radiographs, and simultaneous classification of multiple bone tumours to enable early and specific diagnosis. MATERIALS AND METHODS: For this retrospective study, data from 2000 to 2021 were curated from our database by two orthopaedic surgeons, a radiologist and a data scientist. Patients with complete clinical and pre-therapy radiographic data were eligible. To ensure feasibility, the ten most frequent primary tumour entities, confirmed histologically or by tumour board decision, were included. We implemented a ResNet and transformer model to establish baseline results. Our method extracts image features using deep learning and then clusters the k most similar images to the target image using a hash-based nearest-neighbour recommender approach that performs simultaneous classification by majority voting. The results were evaluated with precision-at-k, accuracy, precision and recall. Discrete parameters were described by incidence and percentage ratios. For continuous parameters, based on a normality test, respective statistical measures were calculated. RESULTS: Included were data from 809 patients (1792 radiographs; mean age 33.73 ± 18.65, range 3-89 years; 443 men), with Osteochondroma (28.31%) and Ewing sarcoma (1.11%) as the most and least common entities, respectively. The dataset was split into training (80%) and test subsets (20%). For k = 3, our model achieved the highest mean accuracy, precision and recall (92.86%, 92.86% and 34.08%), significantly outperforming state-of-the-art models (54.10%, 55.57%, 19.85% and 62.80%, 61.33%, 23.05%). CONCLUSION: Our novel approach surpasses current models in tumour classification and links to past patient data, leveraging expert insights. 
CLINICAL RELEVANCE STATEMENT: The proposed algorithm could serve as a vital support tool for clinicians and general practitioners with limited experience in bone tumour classification by identifying similar cases and classifying bone tumour entities. KEY POINTS: • Addressed accurate bone tumour classification using radiographic features. • Model achieved 92.86%, 92.86% and 34.08% mean accuracy, precision and recall, respectively, significantly surpassing state-of-the-art models. • Enhanced diagnosis by integrating prior expert patient assessments.
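The nearest-neighbour recommender with majority voting can be illustrated with a plain exhaustive k-NN over feature vectors. The actual system extracts deep image features and uses hashing for fast lookup, so this NumPy sketch shows only the retrieval-and-voting logic, with hypothetical labels:

```python
import numpy as np
from collections import Counter

def knn_majority_vote(query_feat, gallery_feats, gallery_labels, k=3):
    """Return the indices of the k gallery items most similar to the
    query (Euclidean distance in feature space) and a class prediction
    by majority vote among those k neighbours."""
    dists = np.linalg.norm(gallery_feats - query_feat, axis=1)
    topk = np.argsort(dists)[:k]
    votes = Counter(gallery_labels[i] for i in topk)
    return topk, votes.most_common(1)[0][0]
```

The returned indices correspond to the "k most similar past cases" a clinician could review alongside the predicted entity.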

8.
ArXiv ; 2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38495563

ABSTRACT

Biophysical modeling, particularly involving partial differential equations (PDEs), offers significant potential for tailoring disease treatment protocols to individual patients. However, the inverse problem-solving aspect of these models presents a substantial challenge, either due to the high computational requirements of model-based approaches or the limited robustness of deep learning (DL) methods. We propose a novel framework that leverages the unique strengths of both approaches in a synergistic manner. Our method incorporates a DL ensemble for initial parameter estimation, facilitating efficient downstream evolutionary sampling initialized with this DL-based prior. We showcase the effectiveness of integrating a rapid deep-learning algorithm with a high-precision evolution strategy in estimating brain tumor cell concentrations from magnetic resonance images. The DL-Prior plays a pivotal role, significantly constraining the effective sampling-parameter space. This reduction results in a fivefold convergence acceleration and a Dice-score of 95.
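The idea of a DL prior seeding an evolutionary sampler can be sketched with a toy (mu, lambda)-style evolution strategy. The actual work uses a far more sophisticated sampler driven by a PDE tumor model; this NumPy sketch only illustrates how a prior mean and spread constrain and accelerate the search:

```python
import numpy as np

def evolve(objective, prior_mean, prior_std, pop=16, elite=4, iters=40, rng=None):
    """Toy evolution strategy seeded with a DL-style prior: sample a
    population around the current mean, keep the elite (lowest objective),
    recentre on it, and shrink the search spread (with a small floor)."""
    rng = np.random.default_rng(0) if rng is None else rng
    mean = np.asarray(prior_mean, dtype=float)
    std = np.asarray(prior_std, dtype=float)
    for _ in range(iters):
        samples = rng.normal(mean, std, size=(pop, mean.size))
        scores = np.array([objective(s) for s in samples])
        best = samples[np.argsort(scores)[:elite]]   # lower is better
        mean = best.mean(axis=0)
        std = np.maximum(best.std(axis=0), 0.05)     # floor avoids collapse
    return mean
```

A better-centred prior means fewer iterations wasted exploring implausible parameter regions, which is the source of the convergence acceleration the abstract reports.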

9.
Eye (Lond) ; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514853

ABSTRACT

OBJECTIVES: To study the changes in vessel densities (VD) stratified by vessel diameter in the retinal superficial and deep vascular complexes (SVC/DVC) using optical coherence tomography angiography (OCTA) images obtained from people with diabetes and age-matched healthy controls. METHODS: We quantified the VD based on vessel diameter categorized as <10, 10-20 and >20 µm in the SVC/DVC obtained on 3 × 3 mm2 OCTA scans using a deep learning-based segmentation and vascular graph extraction tool in people with diabetes and age-matched healthy controls. RESULTS: OCTA images obtained from 854 eyes of 854 subjects were divided into 5 groups: healthy controls (n = 555); people with diabetes with no diabetic retinopathy (DR, n = 90), mild and moderate non-proliferative DR (NPDR) (n = 96), severe NPDR (n = 42) and proliferative DR (PDR) (n = 71). Both SVC and DVC showed significant decrease in VD with increasing DR severity (p < 0.001). The largest difference was observed in the <10 µm vessels of the SVC between healthy controls and no DR (13.9% lower in no DR, p < 0.001). Progressive decrease in <10 µm vessels of the SVC and DVC was seen with increasing DR severity (p < 0.001). However, 10-20 µm vessels only showed decline in the DVC, but not the SVC (p < 0.001) and there was no change observed in the >20 µm vessels in either plexus. CONCLUSIONS: Our findings suggest that OCTA is able to demonstrate a distinct vulnerability of the smallest retinal vessels in both plexuses that worsens with increasing severity of DR.

10.
Commun Med (Lond) ; 4(1): 46, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38486100

ABSTRACT

BACKGROUND: Artificial intelligence (AI) models are increasingly used in the medical domain. However, as medical data is highly sensitive, special precautions to ensure its protection are required. The gold standard for privacy preservation is the introduction of differential privacy (DP) to model training. Prior work indicates that DP has negative implications on model accuracy and fairness, which are unacceptable in medicine and represent a main barrier to the widespread use of privacy-preserving techniques. In this work, we evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training. METHODS: We used two datasets: (1) A large dataset (N = 193,311) of high quality clinical chest radiographs, and (2) a dataset (N = 1625) of 3D abdominal computed tomography (CT) images, with the task of classifying the presence of pancreatic ductal adenocarcinoma (PDAC). Both were retrospectively collected and manually labeled by experienced radiologists. We then compared non-private deep convolutional neural networks (CNNs) and privacy-preserving (DP) models with respect to privacy-utility trade-offs measured as area under the receiver operating characteristic curve (AUROC), and privacy-fairness trade-offs, measured as Pearson's r or Statistical Parity Difference. RESULTS: We find that, while the privacy-preserving training yields lower accuracy, it largely does not amplify discrimination against age, sex or co-morbidity. However, we find an indication that difficult diagnoses and subgroups suffer stronger performance hits in private training. CONCLUSIONS: Our study shows that - under the challenging realistic circumstances of a real-life clinical dataset - the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.


Artificial intelligence (AI), in which computers can learn to do tasks that normally require human intelligence, is particularly useful in medical imaging. However, AI should be used in a way that preserves patient privacy. We explored the balance between maintaining patient data privacy and AI performance in medical imaging. We use an approach called differential privacy to protect the privacy of patients' images. We show that, although training AI with differential privacy leads to a slight decrease in accuracy, it does not substantially increase bias against different age groups, genders, or patients with multiple health conditions. However, we notice that AI faces more challenges in accurately diagnosing complex cases and specific subgroups when trained under these privacy constraints. These findings highlight the importance of designing AI systems that are both privacy-conscious and capable of reliable diagnoses across patient groups.
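Differential privacy for model training is typically introduced via DP-SGD: each per-sample gradient is clipped to a fixed norm, the clipped gradients are summed, and calibrated Gaussian noise is added before averaging. A schematic NumPy sketch of that step (illustrative constants, not the study's settings):

```python
import numpy as np

def private_gradient(per_sample_grads, clip_norm=1.0, noise_mult=1.1, rng=None):
    """One DP-SGD gradient step ingredient: clip each per-sample gradient
    to clip_norm, sum, add Gaussian noise with scale noise_mult * clip_norm,
    and return the noisy average."""
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(per_sample_grads)
```

The clipping bounds any single patient's influence on the update, and the noise is what yields the formal privacy guarantee at some cost in accuracy, matching the trade-off the study measures.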

11.
Front Digit Health ; 6: 1341475, 2024.
Article in English | MEDLINE | ID: mdl-38510279

ABSTRACT

Introduction: Today, modern technology is used to diagnose and treat cardiovascular disease. These medical devices provide exact measures and raw data such as imaging data or biosignals. So far, the broad integration of these health data into hospital information technology structures, especially in Germany, is lacking, and if data integration takes place, usually only non-evaluable findings are integrated into the hospital information technology structures. A comprehensive integration of raw data and structured medical information has not yet been established. The aim of this project was to design and implement an interoperable database (cardio-vascular-information-system, CVIS) for the automated integration of all medical device data (parameters and raw data) in cardiovascular medicine. Methods: The CVIS serves as a data integration and preparation system at the interface between the various devices and the hospital IT infrastructure. In our project, we were able to establish a database with integration of proprietary device interfaces, which could be integrated into the electronic health record (EHR) via various HL7 and web interfaces. Results: In the period between 1.7.2020 and 30.6.2022, the data integrated into this database were evaluated. During this time, 114,858 patients were automatically included in the database and medical data of 50,295 of them were entered. For technical examinations, more than 4.5 million readings (an average of 28.5 per examination) and 684,696 image data and raw signals (28,935 ECG files, 655,761 structured reports, 91,113 x-ray objects, 559,648 ultrasound objects in 54 different examination types, 5,000 endoscopy objects) were integrated into the database. Over 10.2 million bidirectional HL7 messages (approximately 14,000/day) were successfully processed.
98,458 documents were transferred to the central document management system, 55,154 materials (average 7.77 per order) were recorded and stored in the database, 21,196 diagnoses and 50,353 services/OPS were recorded and transferred. On average, 3.3 examinations per patient were recorded; in addition, there are an average of 13 laboratory examinations. Discussion: Fully automated data integration from medical devices including the raw data is feasible and already creates a comprehensive database for multimodal modern analysis approaches in a short time. This is the basis for national and international projects by extracting research data using FHIR.

12.
Int J Comput Assist Radiol Surg ; 19(4): 655-664, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38498132

ABSTRACT

PURPOSE: Pancreatic duct dilation is associated with an increased risk of pancreatic cancer, the most lethal malignancy with the lowest 5-year relative survival rate. Automatic segmentation of the dilated pancreatic duct from contrast-enhanced CT scans would facilitate early diagnosis. However, pancreatic duct segmentation poses challenges due to its small anatomical structure and poor contrast in abdominal CT. In this work, we investigate an anatomical attention strategy to address this issue. METHODS: Our proposed anatomical attention strategy consists of two steps: pancreas localization and pancreatic duct segmentation. The coarse pancreatic mask segmentation is used to guide the fully convolutional networks (FCNs) to concentrate on the pancreas' anatomy and disregard unnecessary features. We further apply a multi-scale aggregation scheme to leverage the information from different scales. Moreover, we integrate the tubular structure enhancement as an additional input channel of the FCN. RESULTS: We performed extensive experiments on 30 cases of contrast-enhanced abdominal CT volumes. To evaluate the pancreatic duct segmentation performance, we employed four measurements: the Dice similarity coefficient (DSC), sensitivity, normalized surface distance, and the 95th percentile Hausdorff distance. The average DSC reaches 55.7%, surpassing other pancreatic duct segmentation methods that use only single-phase CT scans. CONCLUSIONS: We proposed an anatomical attention-based strategy for dilated pancreatic duct segmentation. Our proposed strategy significantly outperforms earlier approaches. The attention mechanism helps to focus on the pancreas region, while the enhancement of the tubular structure enables the FCNs to capture the vessel-like structure. The proposed technique might be applied to other tube-like structure segmentation tasks within targeted anatomies.
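The Dice similarity coefficient used for evaluation above is straightforward to compute from two binary masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A intersect B| / (|A| + |B|). Ranges from 0 (disjoint) to 1
    (identical); eps guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)
```

For thin tubular structures like the pancreatic duct, small boundary errors cost proportionally more overlap, which is part of why DSC values in the 50% range can still represent competitive performance.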


Subjects
Abdomen; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Pancreas; Tomography, X-Ray Computed; Pancreatic Ducts/diagnostic imaging
13.
IEEE Trans Med Imaging ; PP, 2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38354077

ABSTRACT

In cardiac CINE, motion-compensated MR reconstruction (MCMR) is an effective approach to address highly undersampled acquisitions by incorporating motion information between frames. In this work, we propose a novel perspective for addressing the MCMR problem and a more integrated and efficient solution to the MCMR field. Contrary to state-of-the-art (SOTA) MCMR methods which break the original problem into two sub-optimization problems, i.e. motion estimation and reconstruction, we formulate this problem as a single entity with one single optimization. Our approach is unique in that the motion estimation is directly driven by the ultimate goal, reconstruction, but not by the canonical motion-warping loss (similarity measurement between motion-warped images and target images). We align the objectives of motion estimation and reconstruction, eliminating the drawbacks of artifacts-affected motion estimation and therefore error-propagated reconstruction. Further, we can deliver high-quality reconstruction and realistic motion without applying any regularization/smoothness loss terms, circumventing the non-trivial weighting factor tuning. We evaluate our method on two datasets: 1) an in-house acquired 2D CINE dataset for the retrospective study and 2) the public OCMR cardiac dataset for the prospective study. The conducted experiments indicate that the proposed MCMR framework can deliver artifact-free motion estimation and high-quality MR images even for imaging accelerations up to 20x, outperforming SOTA non-MCMR and MCMR methods in both qualitative and quantitative evaluation across all experiments.

15.
IEEE Trans Med Imaging ; PP, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38224512

ABSTRACT

Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images.

16.
Comput Biol Med ; 169: 107929, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38184862

ABSTRACT

In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.


Subjects
Robotic Surgical Procedures; Surgery, Computer-Assisted; Endoscopy; Minimally Invasive Surgical Procedures; Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Surgical Instruments; Image Processing, Computer-Assisted/methods
17.
Magn Reson Med ; 92(1): 289-302, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38282254

ABSTRACT

PURPOSE: To estimate pixel-wise predictive uncertainty for deep learning-based MR image reconstruction and to examine the impact of domain shifts and architecture robustness. METHODS: Uncertainty prediction could provide a measure of robustness for deep learning (DL)-based MR image reconstruction from undersampled data. DL methods bear the risk of inducing reconstruction errors such as in-painting of unrealistic structures or missing pathologies. These errors may be obscured by the visual realism of DL reconstructions and thus remain undiscovered. Furthermore, most methods are task-agnostic and not well calibrated to domain shifts. We propose a strategy that estimates aleatoric (data) and epistemic (model) uncertainty, which entails training a deep ensemble (epistemic) with a nonnegative log-likelihood (aleatoric) loss in addition to the conventionally applied loss terms. The proposed procedure can be paired with any DL reconstruction, enabling investigation of its predictive uncertainty at the pixel level. Five different architectures were investigated on the fastMRI database. The impact on the examined uncertainty of in-distribution and out-of-distribution data, with changes to undersampling pattern, imaging contrast, imaging orientation, anatomy, and pathology, was explored. RESULTS: Predictive uncertainty could be captured and showed good correlation with the normalized mean squared error. Uncertainty was primarily focused along the aliased anatomies and on hyperintense and hypointense regions. The proposed uncertainty measure was able to detect disease prevalence shifts. Distinct predictive uncertainty patterns were observed for changing network architectures. CONCLUSION: The proposed approach enables aleatoric and epistemic uncertainty prediction for DL-based MR reconstruction, with an interpretable examination at the pixel level.
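The aleatoric/epistemic split from a deep ensemble described above is commonly computed as the mean of the per-model predicted variances (aleatoric) and the variance of the per-model predicted means (epistemic). A schematic sketch under that standard decomposition, not necessarily the paper's exact estimator:

```python
import numpy as np

def ensemble_uncertainty(means, variances):
    """Pixel-wise uncertainty from a deep ensemble.
    means, variances: arrays of shape (n_models, H, W) holding each
    model's predicted mean and (aleatoric) variance per pixel.
    Returns (prediction, aleatoric, epistemic) maps."""
    prediction = means.mean(axis=0)     # ensemble reconstruction
    aleatoric = variances.mean(axis=0)  # average predicted data noise
    epistemic = means.var(axis=0)       # disagreement across models
    return prediction, aleatoric, epistemic
```

Epistemic uncertainty vanishes where all ensemble members agree, which is why it is the component expected to grow under the out-of-distribution shifts the study probes.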


Subjects
Deep Learning; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Humans; Magnetic Resonance Imaging/methods; Image Processing, Computer-Assisted/methods; Uncertainty; Algorithms; Brain/diagnostic imaging; Databases, Factual
18.
Radiology ; 310(1): e230764, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38165245

ABSTRACT

While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, the wide implementation of AI-supported data acquisition methods in clinical practice requires establishing trusted and reliable results. This implementation will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing the reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.


Subjects
Artificial Intelligence; Commerce; Humans; Radionuclide Imaging; Physical Examination; Radiologists
20.
ArXiv ; 2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38235066

ABSTRACT

The Circle of Willis (CoW) is an important network of arteries connecting major circulations of the brain. Its vascular architecture is believed to affect the risk, severity, and clinical outcome of serious neuro-vascular diseases. However, characterizing the highly variable CoW anatomy is still a manual and time-consuming expert task. The CoW is usually imaged by two angiographic imaging modalities, magnetic resonance angiography (MRA) and computed tomography angiography (CTA), but there exist limited public datasets with annotations on CoW anatomy, especially for CTA. Therefore we organized the TopCoW Challenge in 2023 with the release of an annotated CoW dataset. The TopCoW dataset was the first public dataset with voxel-level annotations for thirteen possible CoW vessel components, enabled by virtual-reality (VR) technology. It was also the first large dataset with paired MRA and CTA from the same patients. TopCoW challenge formalized the CoW characterization problem as a multiclass anatomical segmentation task with an emphasis on topological metrics. We invited submissions worldwide for the CoW segmentation task, which attracted over 140 registered participants from four continents. The top performing teams managed to segment many CoW components to Dice scores around 90%, but with lower scores for communicating arteries and rare variants. There were also topological mistakes for predictions with high Dice scores. Additional topological analysis revealed further areas for improvement in detecting certain CoW components and matching CoW variant topology accurately. TopCoW represented a first attempt at benchmarking the CoW anatomical segmentation task for MRA and CTA, both morphologically and topologically.
