Results 1 - 7 of 7
1.
Front Comput Neurosci ; 18: 1365727, 2024.
Article in English | MEDLINE | ID: mdl-38784680

ABSTRACT

Automatic segmentation of vestibular schwannoma (VS) from routine clinical MRI has the potential to improve clinical workflow, facilitate treatment decisions, and assist patient management. Previous work demonstrated reliable automatic segmentation performance on datasets of standardized MRI images acquired for stereotactic surgery planning. However, diagnostic clinical datasets are generally more diverse and pose a larger challenge to automatic segmentation algorithms, especially when post-operative images are included. In this work, we show for the first time that automatic segmentation of VS on routine MRI datasets is also possible with high accuracy. We acquired and publicly release a curated multi-center routine clinical (MC-RC) dataset of 160 patients with a single sporadic VS. For each patient, up to three longitudinal MRI exams with contrast-enhanced T1-weighted (ceT1w) (n = 124) and T2-weighted (T2w) (n = 363) images were included, and the VS was manually annotated. Segmentations were produced and verified in an iterative process: (1) initial segmentations by a specialized company; (2) review by one of three trained radiologists; and (3) validation by an expert team. Inter- and intra-observer reliability experiments were performed on a subset of the dataset. A state-of-the-art deep learning framework was used to train segmentation models for VS. Model performance was evaluated on an MC-RC hold-out testing set, another public VS dataset, and a partially public dataset. The generalizability and robustness of the VS deep learning segmentation models increased significantly when trained on the MC-RC dataset. Dice similarity coefficients (DSC) achieved by our model are comparable to those achieved by trained radiologists in the inter-observer experiment. On the MC-RC testing set, median DSCs were 86.2(9.5) for ceT1w, 89.4(7.0) for T2w, and 86.4(8.6) for combined ceT1w+T2w input images. On another public dataset, acquired for Gamma Knife stereotactic radiosurgery, our model achieved median DSCs of 95.3(2.9), 92.8(3.8), and 95.5(3.3), respectively. In contrast, models trained on the Gamma Knife dataset did not generalize well, as illustrated by significant underperformance on the MC-RC routine MRI dataset, highlighting the importance of data variability in the development of robust VS segmentation models. The MC-RC dataset and all trained deep learning models were made available online.
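Segmentation overlap in the abstract above is reported as the Dice similarity coefficient (DSC). A minimal sketch of how DSC is computed on binary masks (illustrative NumPy helper with toy data, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none.
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 2-D masks standing in for a segmentation and its ground truth
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1   # 4 voxels
b = np.zeros((4, 4), dtype=int); b[1:3, 1:4] = 1   # 6 voxels, 4 shared
print(round(dice_coefficient(a, b), 3))  # 2*4 / (4+6) = 0.8
```

In practice the same computation is applied voxel-wise to 3-D MRI masks; papers often report DSC as a percentage, as in the abstract above.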

2.
Hum Brain Mapp ; 45(4): e26625, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38433665

ABSTRACT

Estimated age from brain MRI data has emerged as a promising biomarker of neurological health. However, the absence of large, diverse, and clinically representative training datasets, along with the complexity of managing heterogeneous MRI data, presents significant barriers to the development of accurate and generalisable models appropriate for clinical use. Here, we present a deep learning framework trained on routine clinical data (N up to 18,890, age range 18-96 years). We trained five separate models for accurate brain age prediction (all with mean absolute error ≤ 4.0 years, R² ≥ 0.86) across five different MRI sequences (T2-weighted, T2-FLAIR, T1-weighted, diffusion-weighted, and gradient-recalled echo T2*-weighted). Our trained models offer dual functionality. First, they have the potential to be directly employed on clinical data. Second, they can be used as foundation models for further refinement to accommodate a range of other MRI sequences (and therefore a range of clinical scenarios which employ such sequences). This adaptation process, enabled by transfer learning, proved effective in our study across a range of MRI sequences and scan orientations, including those which differed considerably from the original training datasets. Crucially, our findings suggest that this approach remains viable even with limited data availability (as low as N = 25 for fine-tuning), thus broadening the application of brain age estimation to more diverse clinical contexts and patient populations. By making these models publicly available, we aim to provide the scientific community with a versatile toolkit, promoting further research in brain age prediction and related areas.
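The accuracy figures quoted above (mean absolute error in years, R²) are standard regression metrics; a minimal sketch with hypothetical ages, not the study's evaluation code:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, in the same units as the target (here: years)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return np.abs(y_true - y_pred).mean()

def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = ((y_true - y_pred) ** 2).sum()
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

ages      = [25, 40, 55, 70, 85]   # chronological ages (hypothetical)
predicted = [28, 38, 57, 66, 88]   # hypothetical model output
print(mae(ages, predicted))        # 2.8
print(round(r_squared(ages, predicted), 3))
```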


Subject(s)
Brain, Mental Recall, Humans, Adolescent, Young Adult, Adult, Middle Aged, Aged, Aged 80 and over, Preschool Child, Brain/diagnostic imaging, Diffusion, Neuroimaging, Machine Learning
3.
Front Radiol ; 3: 1251825, 2023.
Article in English | MEDLINE | ID: mdl-38089643

ABSTRACT

Unlocking the vast potential of deep learning-based computer vision classification systems necessitates large datasets for model training. Natural Language Processing (NLP), involving automation of dataset labelling, represents a potential avenue to achieve this. However, many aspects of NLP for dataset labelling remain unvalidated. Expert radiologists manually labelled over 5,000 MRI head reports to develop a deep learning-based neuroradiology NLP report classifier. Our results demonstrate that binary labels (normal vs. abnormal) showed high rates of accuracy, even when only two MRI sequences (T2-weighted and those based on diffusion-weighted imaging) were employed, as opposed to all sequences in an examination. Meanwhile, the accuracy of more specific labelling for multiple disease categories was variable and dependent on the category. Finally, resultant model performance was shown to be dependent on the expertise of the original labeller, with worse performance seen with non-expert vs. expert labellers.
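The labeller-expertise effect reported above comes down to comparing each labeller's annotations against a reference standard; a toy sketch with hypothetical binary report labels (not the study's data or code):

```python
def label_accuracy(labels, reference):
    """Fraction of reports where the labeller agrees with the reference standard."""
    assert len(labels) == len(reference)
    return sum(a == b for a, b in zip(labels, reference)) / len(reference)

# 1 = abnormal, 0 = normal; all labels below are hypothetical
reference  = [1, 0, 1, 1, 0, 0, 1, 0]
expert     = [1, 0, 1, 1, 0, 0, 0, 0]   # one disagreement
non_expert = [1, 0, 0, 1, 1, 0, 0, 0]   # three disagreements
print(label_accuracy(expert, reference))      # 0.875
print(label_accuracy(non_expert, reference))  # 0.625
```

A model trained on the noisier non-expert labels inherits that disagreement, which is one way to read the abstract's finding.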

4.
Br J Cancer ; 129(12): 1949-1955, 2023 12.
Article in English | MEDLINE | ID: mdl-37932513

ABSTRACT

BACKGROUND: Methods to improve stratification of small (≤15 mm) lung nodules are needed. We aimed to develop a radiomics model to assist lung cancer diagnosis. METHODS: Patients were retrospectively identified using health records from January 2007 to December 2018. The external test set was obtained from the national LIBRA study and a prospective Lung Cancer Screening programme. Radiomics features were extracted from multi-region CT segmentations using TexLab2.0. LASSO regression generated the 5-feature small nodule radiomics-predictive-vector (SN-RPV). K-means clustering was used to split patients into risk groups according to SN-RPV. Model performance was compared to 6 thoracic radiologists. SN-RPV and radiologist risk groups were combined to generate "Safety-Net" and "Early Diagnosis" decision-support tools. RESULTS: In total, 810 patients with 990 nodules were included. The AUC for malignancy prediction was 0.85 (95% CI: 0.82-0.87), 0.78 (95% CI: 0.70-0.85) and 0.78 (95% CI: 0.59-0.92) for the training, test and external test datasets, respectively. The test set accuracy was 73% (95% CI: 65-81%) and resulted in a 66.67% improvement in potentially missed [8/12] or delayed [6/9] cancers, compared to the radiologist whose performance was closest to the mean of the six readers. CONCLUSIONS: SN-RPV may provide net benefit in terms of earlier cancer diagnosis.
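The risk stratification above clusters a scalar score (SN-RPV) into groups with k-means. A minimal 1-D k-means sketch on hypothetical scores (the published pipeline uses TexLab2.0 features and LASSO, neither reproduced here):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=100, seed=0):
    """Minimal 1-D k-means: cluster scalar risk scores into k groups.

    Returns (centroids sorted ascending, group index per value), where
    group 0 has the lowest centroid (lowest-risk group).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(values, float)
    centroids = rng.choice(x, size=k, replace=False)  # random initial centroids
    for _ in range(iters):
        # assign each value to its nearest centroid
        labels = np.argmin(np.abs(x[:, None] - centroids[None, :]), axis=1)
        # move each centroid to the mean of its assigned values
        new = np.array([x[labels == j].mean() if np.any(labels == j) else centroids[j]
                        for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    order = np.argsort(centroids)
    relabel = np.argsort(order)          # old cluster id -> ascending rank
    return centroids[order], relabel[labels]

# Hypothetical SN-RPV scores: low values ~ benign, high values ~ malignant
scores = [0.1, 0.15, 0.2, 0.8, 0.85, 0.9]
centroids, groups = kmeans_1d(scores, k=2)
print(groups.tolist())  # [0, 0, 0, 1, 1, 1]
```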


Subject(s)
Early Detection of Cancer, Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Prospective Studies, Retrospective Studies, Radiologists, Lung
5.
Med Image Anal ; 78: 102391, 2022 05.
Article in English | MEDLINE | ID: mdl-35183876

ABSTRACT

The growing demand for head magnetic resonance imaging (MRI) examinations, along with a global shortage of radiologists, has led to an increase in the time taken to report head MRI scans in recent years. For many neurological conditions, this delay can result in poorer patient outcomes and inflated healthcare costs. Potentially, computer vision models could help reduce reporting times for abnormal examinations by flagging abnormalities at the time of imaging, allowing radiology departments to prioritise limited resources into reporting these scans first. To date, however, the difficulty of obtaining large, clinically-representative labelled datasets has been a bottleneck to model development. In this work, we present a deep learning framework, based on convolutional neural networks, for detecting clinically-relevant abnormalities in minimally processed, hospital-grade axial T2-weighted and axial diffusion-weighted head MRI scans. The models were trained at scale using a Transformer-based neuroradiology report classifier to generate a labelled dataset of 70,206 examinations from two large UK hospital networks, and demonstrate fast (< 5 s), accurate (area under the receiver operating characteristic curve (AUC) > 0.9), and interpretable classification, with good generalisability between hospitals (ΔAUC ≤ 0.02). Through a simulation study we show that our best model would reduce the mean reporting time for abnormal examinations from 28 days to 14 days and from 9 days to 5 days at the two hospital networks, demonstrating feasibility for use in a clinical triage environment.
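The simulation result above rests on a simple mechanism: scans flagged abnormal jump the reporting queue. A toy queue simulation illustrating why triage shortens the mean wait for abnormal scans (all parameters hypothetical, not the paper's simulation study):

```python
import random

def mean_abnormal_wait(scans, per_day, triage):
    """Toy reporting-queue model.

    scans: list of (arrival_day, is_abnormal) tuples. Each day the department
    reports up to `per_day` scans; with triage=True, scans flagged abnormal
    are read first, otherwise the queue is strictly first-in, first-out.
    Returns the mean wait (in days) from arrival to report for abnormal scans.
    """
    scans = sorted(scans)
    queue, waits, day, i = [], [], 0, 0
    while i < len(scans) or queue:
        while i < len(scans) and scans[i][0] <= day:  # today's arrivals join
            queue.append(scans[i])
            i += 1
        if triage:  # flagged scans move to the front of the queue
            queue.sort(key=lambda s: (not s[1], s[0]))
        for arrival, abnormal in queue[:per_day]:     # reported today
            if abnormal:
                waits.append(day - arrival)
        queue = queue[per_day:]
        day += 1
    return sum(waits) / len(waits)

random.seed(0)
# 20 scans arrive per day for 30 days, ~10% flagged abnormal; 18 reads/day
scans = [(d, random.random() < 0.1) for d in range(30) for _ in range(20)]
no_triage = mean_abnormal_wait(scans, per_day=18, triage=False)
with_triage = mean_abnormal_wait(scans, per_day=18, triage=True)
print(with_triage < no_triage)  # True: triage shortens abnormal-report delay
```

When reporting capacity lags behind demand, the backlog delays every scan equally under FIFO, whereas triage keeps the delay for flagged scans near zero; the paper's much richer simulation reaches the same qualitative conclusion.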


Subject(s)
Deep Learning, Diffusion Magnetic Resonance Imaging, Hospitals, Humans, Magnetic Resonance Imaging/methods, Triage/methods
6.
Neuroimage ; 249: 118871, 2022 04 01.
Article in English | MEDLINE | ID: mdl-34995797

ABSTRACT

Convolutional neural networks (CNN) can accurately predict chronological age in healthy individuals from structural MRI brain scans. Potentially, these models could be applied during routine clinical examinations to detect deviations from healthy ageing, including early-stage neurodegeneration. This could have important implications for patient care, drug development, and optimising MRI data collection. However, existing brain-age models are typically optimised for scans which are not part of routine examinations (e.g., volumetric T1-weighted scans), generalise poorly (e.g., to data from different scanner vendors and hospitals etc.), or rely on computationally expensive pre-processing steps which limit real-time clinical utility. Here, we sought to develop a brain-age framework suitable for use during routine clinical head MRI examinations. Using a deep learning-based neuroradiology report classifier, we generated a dataset of 23,302 'radiologically normal for age' head MRI examinations from two large UK hospitals for model training and testing (age range = 18-95 years), and demonstrate fast (< 5 s), accurate (mean absolute error [MAE] < 4 years) age prediction from clinical-grade, minimally processed axial T2-weighted and axial diffusion-weighted scans, with generalisability between hospitals and scanner vendors (Δ MAE < 1 year). The clinical relevance of these brain-age predictions was tested using 228 patients whose MRIs were reported independently by neuroradiologists as showing atrophy 'excessive for age'. These patients had systematically higher brain-predicted age than chronological age (mean predicted age difference = +5.89 years, 'radiologically normal for age' mean predicted age difference = +0.05 years, p < 0.0001). 
Our brain-age framework demonstrates feasibility for use as a screening tool during routine hospital examinations to automatically detect older-appearing brains in real-time, with relevance for clinical decision-making and optimising patient pathways.
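The group comparison above is based on the brain-predicted age difference (predicted minus chronological age, often called brain-PAD); a minimal sketch with hypothetical values, not the study's data:

```python
def predicted_age_difference(predicted, chronological):
    """Brain-predicted age difference per patient: predicted minus chronological age."""
    return [p - c for p, c in zip(predicted, chronological)]

def mean_pad(predicted, chronological):
    """Group-level mean predicted age difference, in years."""
    pads = predicted_age_difference(predicted, chronological)
    return sum(pads) / len(pads)

# Hypothetical groups: 'atrophy excessive for age' vs 'radiologically normal for age'
atrophy_chron = [60, 65, 70, 75]
atrophy_pred  = [67, 70, 76, 80]   # these brains look older than their age
normal_chron  = [60, 65, 70, 75]
normal_pred   = [60, 66, 69, 75]   # predictions track chronological age

print(mean_pad(atrophy_pred, atrophy_chron))  # 5.75
print(mean_pad(normal_pred, normal_chron))    # 0.0
```

A systematically positive mean difference in the atrophy group, against a near-zero difference in the normal group, is exactly the pattern the abstract reports (+5.89 vs +0.05 years).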


Subject(s)
Aging, Brain/diagnostic imaging, Human Development, Magnetic Resonance Imaging, Neuroimaging, Adolescent, Adult, Age Factors, Aged, Aged 80 and over, Aging/pathology, Aging/physiology, Deep Learning, Human Development/physiology, Humans, Magnetic Resonance Imaging/methods, Magnetic Resonance Imaging/standards, Middle Aged, Neuroimaging/methods, Neuroimaging/standards, Young Adult
7.
Eur Radiol ; 32(1): 725-736, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34286375

ABSTRACT

OBJECTIVES: The purpose of this study was to build a deep learning model to derive labels from neuroradiology reports and assign these to the corresponding examinations, overcoming a bottleneck to computer vision model development. METHODS: Reference-standard labels were generated by a team of neuroradiologists for model training and evaluation. Three thousand examinations were labelled for the presence or absence of any abnormality by manually scrutinising the corresponding radiology reports ('reference-standard report labels'); a subset of these examinations (n = 250) were assigned 'reference-standard image labels' by interrogating the actual images. Separately, 2000 reports were labelled for the presence or absence of 7 specialised categories of abnormality (acute stroke, mass, atrophy, vascular abnormality, small vessel disease, white matter inflammation, encephalomalacia), with a subset of these examinations (n = 700) also assigned reference-standard image labels. A deep learning model was trained using labelled reports and validated in two ways: comparing predicted labels to (i) reference-standard report labels and (ii) reference-standard image labels. The area under the receiver operating characteristic curve (AUC-ROC) was used to quantify model performance. Accuracy, sensitivity, specificity, and F1 score were also calculated. RESULTS: Accurate classification (AUC-ROC > 0.95) was achieved for all categories when tested against reference-standard report labels. A drop in performance (ΔAUC-ROC > 0.02) was seen for three categories (atrophy, encephalomalacia, vascular) when tested against reference-standard image labels, highlighting discrepancies in the original reports. Once trained, the model assigned labels to 121,556 examinations in under 30 min. CONCLUSIONS: Our model accurately classifies head MRI examinations, enabling automated dataset labelling for downstream computer vision applications. 
KEY POINTS:
• Deep learning is poised to revolutionise image recognition tasks in radiology; however, a barrier to clinical adoption is the difficulty of obtaining large labelled datasets for model training.
• We demonstrate a deep learning model which can derive labels from neuroradiology reports and assign these to the corresponding examinations at scale, facilitating the development of downstream computer vision models.
• We rigorously tested our model by comparing labels predicted on the basis of neuroradiology reports with two sets of reference-standard labels: (1) labels derived by manually scrutinising each radiology report and (2) labels derived by interrogating the actual images.
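The evaluation metrics reported in the abstract above (AUC-ROC, accuracy, sensitivity, specificity, F1) can be sketched for binary labels as follows (illustrative helpers with toy data, not the study's code):

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity and F1 score from binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy    = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1          = 2 * tp / (2 * tp + fp + fn)
    return accuracy, sensitivity, specificity, f1

def auc_roc(y_true, scores):
    """AUC via the rank (Mann-Whitney) formulation: the probability that a
    randomly chosen positive is scored higher than a randomly chosen negative."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y_true = [1, 1, 1, 0, 0, 0]                 # toy abnormality labels
scores = [0.9, 0.8, 0.4, 0.5, 0.2, 0.1]     # toy classifier probabilities
print(round(auc_roc(y_true, scores), 3))    # 8/9 ≈ 0.889
print(confusion_metrics(y_true, [s > 0.5 for s in scores]))
```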


Subject(s)
Deep Learning, Area Under Curve, Humans, Magnetic Resonance Imaging, Radiography, Radiologists