1.
medRxiv ; 2024 May 13.
Article in English | MEDLINE | ID: mdl-38798581

ABSTRACT

Background/purpose: The use of artificial intelligence (AI) in radiotherapy (RT) is expanding rapidly. However, there exists a notable lack of clinician trust in AI models, underscoring the need for effective uncertainty quantification (UQ) methods. The purpose of this study was to scope existing literature related to UQ in RT, identify areas of improvement, and determine future directions. Methods: We followed the PRISMA-ScR scoping review reporting guidelines. We utilized the population (human cancer patients), concept (utilization of AI UQ), context (radiotherapy applications) framework to structure our search and screening process. We conducted a systematic search spanning seven databases, supplemented by manual curation, up to January 2024. Our search yielded a total of 8980 articles for initial review. Manuscript screening and data extraction were performed in Covidence. Data extraction categories included general study characteristics, RT characteristics, AI characteristics, and UQ characteristics. Results: We identified 56 articles published from 2015-2024. Ten domains of RT applications were represented; most studies evaluated auto-contouring (50%), followed by image synthesis (13%) and multiple applications simultaneously (11%). Twelve disease sites were represented, with head and neck cancer being the most common disease site independent of application space (32%). Imaging data were used in 91% of studies, while only 13% incorporated RT dose information. Most studies focused on failure detection as the main application of UQ (60%), with Monte Carlo dropout being the most commonly implemented UQ method (32%), followed by ensembling (16%). Overall, 55% of studies did not share code or datasets. Conclusion: Our review revealed a lack of diversity in UQ for RT applications beyond auto-contouring. Moreover, there was a clear need to study additional UQ methods, such as conformal prediction. Our results may incentivize the development of guidelines for reporting and implementation of UQ in RT.
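The review's most common UQ method, Monte Carlo dropout, can be illustrated with a minimal NumPy sketch (my own toy example, not code from any reviewed study): dropout is kept active at test time, the model is run T times with different random masks, and the spread of the predictions serves as an uncertainty estimate for failure detection. The weights and input here are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

W = rng.normal(size=(16, 1))   # hypothetical "trained" weights of a linear layer
x = rng.normal(size=(1, 16))   # one hypothetical input feature vector

def mc_dropout_predict(x, W, T=100, p=0.5):
    """Run T stochastic forward passes with dropout active at test time."""
    preds = []
    for _ in range(T):
        mask = rng.random(x.shape) > p            # drop each unit with prob p
        preds.append(((x * mask) / (1 - p)) @ W)  # inverted-dropout scaling
    preds = np.array(preds)
    # Mean is the prediction; standard deviation is the uncertainty estimate.
    return preds.mean(), preds.std()

mean, std = mc_dropout_predict(x, W)
```

A high `std` flags inputs on which the model's prediction is unstable, which is how most reviewed studies used UQ for failure detection.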

2.
ArXiv ; 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38711427

ABSTRACT

Recent advancements in machine learning have led to the development of novel medical imaging systems and algorithms that address ill-posed problems. Assessing their trustworthiness and understanding how to deploy them safely at test time remains an important and open problem. In this work, we propose using conformal prediction to compute valid and distribution-free bounds on downstream metrics given reconstructions generated by one algorithm, and retrieve upper/lower bounds and inlier/outlier reconstructions according to the adjusted bounds. Our work offers 1) test time image reconstruction evaluation without ground truth, 2) downstream performance guarantees, 3) meaningful upper/lower bound reconstructions, and 4) meaningful statistical inliers/outlier reconstructions. We demonstrate our method on post-mastectomy radiotherapy planning using 3D breast CT reconstructions, and show 1) that metric-guided bounds have valid coverage for downstream metrics while conventional pixel-wise bounds do not and 2) anatomical differences of upper/lower bounds between metric-guided and pixel-wise methods. Our work paves the way for more meaningful and trustworthy test-time evaluation of medical image reconstructions. Code available at https://github.com/matthewyccheung/conformal-metric.
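The core mechanism here, split conformal prediction, can be sketched in a few lines (an illustrative toy, not the authors' implementation; the exponential "metric scores" are synthetic stand-ins for a real downstream metric such as a dose error): a calibration set of scores yields a bound that covers a new case's score with probability at least 1 - alpha, with no distributional assumptions beyond exchangeability.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for downstream metric scores on a held-out calibration set.
cal_scores = rng.exponential(size=200)
alpha = 0.1                                   # target miscoverage rate
n = len(cal_scores)

# Conformal quantile with the finite-sample correction: take the
# ceil((n + 1) * (1 - alpha))-th smallest calibration score as the bound.
k = int(np.ceil((n + 1) * (1 - alpha)))       # = 181 for n = 200, alpha = 0.1
bound = np.sort(cal_scores)[k - 1]

# Fresh test scores from the same distribution: empirical coverage should
# be close to (and, in expectation, at least) 1 - alpha = 0.9.
test_scores = rng.exponential(size=1000)
coverage = np.mean(test_scores <= bound)
```

The same recipe applies whatever the metric is, which is what makes the guarantee "distribution-free" in the abstract's sense.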

3.
Med Image Anal ; 57: 226-236, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31351389

ABSTRACT

Classical deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have facilitated fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we build a connection between classical and learning-based methods. We present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that uses insights from classical registration methods and makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task for both images and anatomical surfaces, and provide extensive empirical analyses of the algorithm. Our principled approach results in state-of-the-art accuracy and very fast runtimes, while providing diffeomorphic guarantees. Our implementation is available online at http://voxelmorph.csail.mit.edu.


Subject(s)
Brain/diagnostic imaging , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional , Magnetic Resonance Imaging , Pattern Recognition, Automated/methods , Unsupervised Machine Learning , Algorithms , Humans , Models, Statistical , Neural Networks, Computer
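The diffeomorphic guarantee in this line of work typically comes from integrating a stationary velocity field by "scaling and squaring". A minimal 1D NumPy sketch of that integration step (my own illustration, not the VoxelMorph code): scale the velocity field down by 2^K, then compose the resulting map with itself K times; the result stays monotone, i.e. topology-preserving.

```python
import numpy as np

# Regular 1D grid and a toy stationary velocity field that vanishes at the
# boundary (so the domain endpoints stay fixed).
x = np.linspace(0.0, 1.0, 101)
v = 0.1 * np.sin(np.pi * x)

# Scaling and squaring: phi = exp(v) is approximated by starting from the
# small displacement v / 2^K and squaring (self-composing) K times.
K = 6
phi = x + v / 2**K
for _ in range(K):
    # phi <- phi o phi, with composition approximated by linear interpolation
    # of the map phi (defined on grid x) at the points phi.
    phi = np.interp(phi, x, phi)

# A diffeomorphic map in 1D is strictly monotone: np.diff(phi) > 0 everywhere,
# meaning the deformation introduces no foldings.
```

In the 3D papers the same squaring loop runs over a dense velocity volume with trilinear resampling in place of `np.interp`, but the principle is identical.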
4.
Article in English | MEDLINE | ID: mdl-30716034

ABSTRACT

We present VoxelMorph, a fast learning-based framework for deformable, pairwise medical image registration. Traditional registration methods optimize an objective function for each pair of images, which can be time-consuming for large datasets or rich deformation models. In contrast to this approach, and building on recent learning-based methods, we formulate registration as a function that maps an input image pair to a deformation field that aligns these images. We parameterize the function via a convolutional neural network (CNN), and optimize the parameters of the neural network on a set of images. Given a new pair of scans, VoxelMorph rapidly computes a deformation field by directly evaluating the function. In this work, we explore two different training strategies. In the first (unsupervised) setting, we train the model to maximize standard image matching objective functions that are based on the image intensities. In the second setting, we leverage auxiliary segmentations available in the training data. We demonstrate that the unsupervised model's accuracy is comparable to state-of-the-art methods, while operating orders of magnitude faster. We also show that VoxelMorph trained with auxiliary data improves registration accuracy at test time, and evaluate the effect of training set size on registration. Our method promises to speed up medical image analysis and processing pipelines, while facilitating novel directions in learning-based registration and its applications. Our code is freely available at https://github.com/voxelmorph/voxelmorph.
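The unsupervised training objective described here can be sketched as an intensity-matching term plus a smoothness penalty on the displacement field (an illustrative 2D toy with synthetic arrays, not the actual VoxelMorph loss code; MSE stands in for the paper's image-matching objectives, and `lam` is a hypothetical regularization weight):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: a fixed image, a warped moving image that nearly
# matches it, and a toy 2D displacement field u (one 2-vector per pixel).
fixed = rng.random((32, 32))
warped_moving = fixed + 0.01 * rng.standard_normal((32, 32))
u = 0.05 * rng.random((32, 32, 2))

def registration_loss(fixed, warped, u, lam=0.01):
    """Image-matching term + smoothness penalty on the displacement field."""
    similarity = np.mean((fixed - warped) ** 2)      # intensity MSE
    du_dy, du_dx = np.gradient(u, axis=(0, 1))       # spatial gradients of u
    smoothness = np.mean(du_dy**2) + np.mean(du_dx**2)
    return similarity + lam * smoothness

loss = registration_loss(fixed, warped_moving, u)
```

Training the CNN to minimize this over many image pairs is what lets registration at test time collapse to a single forward pass, which is the source of the "orders of magnitude faster" claim.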

5.
Article in English | MEDLINE | ID: mdl-22254772

ABSTRACT

To make it viable for remote monitoring to scale to large patient populations, the accuracy of detectors used to identify patient states of interest must improve. Patient-specific detectors hold the promise of higher accuracy than generic detectors, but the need to train these detectors individually for each patient using expert labeled data limits their scalability. We explore a solution to this challenge in the context of atrial fibrillation (AF) detection. Using patient recordings from the MIT-BIH AF database, we demonstrate the importance of patient specificity and present a scalable method of constructing a personalized detector based on active learning. Using a generic detector having a sensitivity of 76% and a specificity of 57% as its seed, our active learning approach constructs a detector with a sensitivity of 90% and specificity of 85%. This performance approaches that of a patient-specific detector, which has a sensitivity of 94% and specificity of 85%. By selectively choosing examples for training, the active learning approach reduces the amount of expert labeling needed by almost eightfold (compared to the patient-specific detector) while achieving accuracy within 99% of that detector's performance.


Subject(s)
Algorithms , Artificial Intelligence , Atrial Fibrillation/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Pattern Recognition, Automated/methods , Precision Medicine/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
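The selection step at the heart of active learning can be sketched with uncertainty sampling (a generic illustration, not this paper's query strategy, which I have not reproduced): rather than asking the expert to label every beat, the current detector queries only the examples whose scores fall nearest its decision boundary, then retrains on those labels. The `scores` array here is a synthetic stand-in for the generic detector's AF probabilities.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the seed detector's AF probability on 1000 beats.
scores = rng.random(1000)

def select_queries(scores, budget=50, threshold=0.5):
    """Pick the `budget` examples closest to the decision boundary,
    i.e. the ones the current detector is least certain about."""
    uncertainty = -np.abs(scores - threshold)   # higher = more uncertain
    return np.argsort(uncertainty)[-budget:]    # indices to send to the expert

queries = select_queries(scores)
```

Labeling only these 50 beats instead of all 1000 is the mechanism behind the near-eightfold reduction in expert labeling the abstract reports.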
6.
Article in English | MEDLINE | ID: mdl-21096007

ABSTRACT

The electroencephalogram (EEG) is widely used in the investigation of neurological disorders. Continuous long-term EEG data offers the opportunity to assess patient health over long periods of time, and to discover previously unknown physiological phenomena. However, the sheer volume of information generated by long-term EEG monitoring also poses a serious challenge for both analysis and visualization. Symbolization has been successful in addressing information overload in many disciplines. In this paper, we present different approaches to transform EEG signals into symbolic sequences. This discrete symbolic representation reduces the amount of EEG data by several orders of magnitude and makes the task of discovering and visualizing interesting activity more manageable. We describe alternate methodologies to symbolize EEG data from patients with epilepsy. When evaluated on long-term intracranial data from 10 patients, our symbolization produced results that were consistent with clinical labels of seizures (for 97% of the seizures and 68% of the seizure segments), and often produced finer-grained distinctions.


Subject(s)
Diagnosis, Computer-Assisted/methods , Electrocardiography/classification , Electrocardiography/methods , Seizures/classification , Seizures/diagnosis , Terminology as Topic , Humans , Reproducibility of Results , Sensitivity and Specificity
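One family of symbolization approaches the abstract alludes to can be sketched SAX-style (an illustrative toy on a synthetic signal, not the paper's exact method): window the signal, z-normalize the window means, and quantize them into a small alphabet by amplitude quantile, shrinking the data by orders of magnitude into a short symbol string.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for an EEG trace: a slow oscillation plus noise.
eeg = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.1 * rng.standard_normal(2000)

def symbolize(signal, window=100, alphabet="abcd"):
    """Reduce a signal to one symbol per window by quantile-binned mean amplitude."""
    n = len(signal) // window * window
    means = signal[:n].reshape(-1, window).mean(axis=1)   # piecewise means
    z = (means - means.mean()) / means.std()              # z-normalize
    # Interior quantile edges split z into len(alphabet) equal-mass bins.
    edges = np.quantile(z, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[i] for i in np.digitize(z, edges))

symbols = symbolize(eeg)   # 2000 samples -> a 20-character string
```

On long-term recordings, clinicians or mining algorithms can then scan the symbol string for recurring motifs (e.g. candidate seizure patterns) instead of the raw waveform.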