Results 1 - 8 of 8
1.
Eur Radiol ; 34(2): 810-822, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37606663

ABSTRACT

OBJECTIVES: Non-contrast computed tomography of the brain (NCCTB) is commonly used to detect intracranial pathology but is subject to interpretation errors. Machine learning can augment clinical decision-making and improve NCCTB scan interpretation. This retrospective detection accuracy study assessed the performance of radiologists assisted by a deep learning model and compared the standalone performance of the model with that of unassisted radiologists. METHODS: A deep learning model was trained on 212,484 NCCTB scans drawn from a private radiology group in Australia. Scans from inpatient, outpatient, and emergency settings were included. Scan inclusion criteria were age ≥ 18 years and series slice thickness ≤ 1.5 mm. Thirty-two radiologists reviewed 2848 scans with and without the assistance of the deep learning system and rated their confidence in the presence of each finding using a 7-point scale. Differences in AUC and Matthews correlation coefficient (MCC) were calculated using a ground-truth gold standard. RESULTS: The model demonstrated an average area under the receiver operating characteristic curve (AUC) of 0.93 across 144 NCCTB findings and significantly improved radiologist interpretation performance. Assisted and unassisted radiologists demonstrated an average AUC of 0.79 and 0.73 across 22 grouped parent findings and 0.72 and 0.68 across 189 child findings, respectively. When assisted by the model, radiologist AUC was significantly improved for 91 findings (158 findings were non-inferior), and reading time was significantly reduced. CONCLUSIONS: The assistance of a comprehensive deep learning model significantly improved radiologist detection accuracy across a wide range of clinical findings and demonstrated the potential to improve NCCTB interpretation. CLINICAL RELEVANCE STATEMENT: This study evaluated a comprehensive CT brain deep learning model, which performed strongly, improved the performance of radiologists, and reduced interpretation time. 
The model may reduce errors, improve efficiency, facilitate triage, and better enable the delivery of timely patient care. KEY POINTS: • This study demonstrated that the use of a comprehensive deep learning system assisted radiologists in the detection of a wide range of abnormalities on non-contrast brain computed tomography scans. • The deep learning model demonstrated an average area under the receiver operating characteristic curve of 0.93 across 144 findings and significantly improved radiologist interpretation performance. • The assistance of the comprehensive deep learning model significantly reduced the time required for radiologists to interpret computed tomography scans of the brain.
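The two headline metrics in this abstract, AUC and the Matthews correlation coefficient (MCC), can be computed directly from per-finding ground-truth labels and model scores. The sketch below is illustrative only; it is not the study's code, and the function names are our own:

```python
import numpy as np

def auc_from_scores(y_true, y_score):
    """AUC via the rank-sum identity: the probability that a randomly
    chosen positive case scores higher than a randomly chosen negative."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Count pairwise wins; ties count half.
    wins = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def mcc(y_true, y_pred):
    """Matthews correlation coefficient from the 2x2 confusion matrix."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

In a multi-finding study such as this one, these would be computed per finding and then averaged across findings.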


Subject(s)
Deep Learning; Adolescent; Humans; Radiography; Radiologists; Retrospective Studies; Tomography, X-Ray Computed/methods; Adult
2.
J Clin Neurosci ; 99: 217-223, 2022 May.
Article in English | MEDLINE | ID: mdl-35290937

ABSTRACT

Brain computed tomography (CTB) scans are widely used to evaluate intracranial pathology. The implementation and adoption of CTB has led to clinical improvements. However, interpretation errors occur and may have substantial morbidity and mortality implications for patients. Deep learning has shown promise for facilitating improved diagnostic accuracy and triage. This research charts the potential of deep learning applied to the analysis of CTB scans. It draws on the experience of practicing clinicians and technologists involved in development and implementation of deep learning-based clinical decision support systems. We consider the past, present and future of the CTB, along with limitations of existing systems as well as untapped beneficial use cases. Implementing deep learning CTB interpretation systems and effectively navigating development and implementation risks can deliver many benefits to clinicians and patients, ultimately improving efficiency and safety in healthcare.


Subject(s)
Decision Support Systems, Clinical; Deep Learning; Humans; Neuroimaging; Tomography, X-Ray Computed/methods
3.
Magn Reson Imaging ; 86: 28-36, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34715290

ABSTRACT

Automated brain tumour segmentation from post-operative images is a clinically relevant yet challenging problem. In this study, an automated method for segmenting brain tumour into its subregions was developed. The dataset consists of multimodal post-operative brain scans (T1 MRI, post-gadolinium T1 MRI, and T2-FLAIR images) of 15 patients who were treated with post-operative radiation therapy, along with manual annotations of their tumour subregions. A 3D densely-connected U-net was developed for segmentation of brain tumour regions, and extensive experiments were conducted to enhance model accuracy. A model was initially developed using the publicly available BraTS dataset of pre-operative brain scans. This model achieved Dice Scores of 0.90, 0.83 and 0.78 for predicting whole tumour, tumour core, and enhancing tumour subregions when tested on the BraTS20 blind validation dataset. The knowledge acquired from BraTS was then transferred to the local dataset. For augmentation purposes, the local dataset was registered to a dataset of MRI brain scans of healthy subjects. To improve the robustness of the model and enhance its accuracy, ensemble learning was used to combine the outputs of all the trained models. Even though the local dataset is very small, the final model segments brain tumours with Dice Scores of 0.83, 0.77 and 0.60 for whole tumour, tumour core and enhancing core, respectively.
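The Dice Score reported above is the standard overlap metric for comparing a predicted segmentation mask against a manual annotation. A minimal illustrative implementation (not the study's code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). Ranges from 0 (no overlap) to 1."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

For the multi-class setting above, this would be evaluated once per subregion (whole tumour, tumour core, enhancing core) by binarizing each label separately.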


Subject(s)
Brain Neoplasms; Deep Learning; Brain/diagnostic imaging; Brain/pathology; Brain Neoplasms/diagnostic imaging; Brain Neoplasms/pathology; Brain Neoplasms/surgery; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
4.
Transl Vis Sci Technol ; 10(8): 2, 2021 Jul 1.
Article in English | MEDLINE | ID: mdl-34228106

ABSTRACT

Purpose: This study describes the development of a deep learning algorithm based on the U-Net architecture for automated segmentation of geographic atrophy (GA) lesions in fundus autofluorescence (FAF) images. Methods: Image preprocessing and normalization by modified adaptive histogram equalization were used for image standardization to improve the effectiveness of deep learning. A U-Net-based deep learning algorithm was developed, then trained and tested by fivefold cross-validation using FAF images from clinical datasets. The following metrics were used for evaluating lesion segmentation performance in GA: Dice similarity coefficient (DSC), DSC loss, sensitivity, specificity, mean absolute error (MAE), accuracy, recall, and precision. Results: In total, 702 FAF images from 51 patients were analyzed. After fivefold cross-validation for lesion segmentation, the average training and validation scores were, respectively: 0.9874 and 0.9779 for the most important metric, DSC; 0.9912 and 0.9815 for accuracy; 0.9955 and 0.9928 for sensitivity; and 0.8686 and 0.7261 for specificity. Scores for testing were all similar to the validation scores. The algorithm segmented GA lesions six times more quickly than human graders. Conclusions: The deep learning algorithm can be implemented using clinical data with a very high level of performance for lesion segmentation. Automation of diagnostics for GA assessment has the potential to provide savings with respect to patient visit duration, operational cost, and measurement reliability in routine GA assessments. Translational Relevance: A deep learning algorithm based on the U-Net architecture and image preprocessing appears to be suitable for automated segmentation of GA lesions on clinical data, producing fast and accurate results.
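The preprocessing step above standardizes image contrast before training. As a simplified sketch, global histogram equalization can be written as below; the study used a modified *adaptive* variant, which applies the same idea within local tiles (for which an implementation such as skimage.exposure.equalize_adapthist could be substituted). This is an assumption-laden illustration, not the authors' pipeline:

```python
import numpy as np

def equalize_histogram(img, n_bins=256):
    """Global histogram equalization: map each intensity to its empirical
    CDF value, spreading intensities over the full [0, 1] range."""
    flat = img.ravel()
    hist, bin_edges = np.histogram(
        flat, bins=n_bins, range=(flat.min(), flat.max() + 1e-9))
    cdf = hist.cumsum().astype(float)
    cdf /= cdf[-1]                                 # normalize CDF to [0, 1]
    bin_idx = np.digitize(flat, bin_edges[1:-1])   # bin index per pixel
    return cdf[bin_idx].reshape(img.shape)
```

Equalization is monotone, so pixel ordering is preserved while the contrast between lesion and background is stretched.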


Subject(s)
Deep Learning; Geographic Atrophy; Algorithms; Geographic Atrophy/diagnosis; Humans; Optical Imaging; Reproducibility of Results
5.
J Med Imaging Radiat Oncol ; 65(5): 578-595, 2021 Aug.
Article in English | MEDLINE | ID: mdl-34313006

ABSTRACT

Segmentation of organs and structures, as either targets or organs-at-risk, has a significant influence on the success of radiation therapy. Manual segmentation is a tedious and time-consuming task for clinicians, and inter-observer variability can affect the outcomes of radiation therapy. Recent enthusiasm for deep neural networks has produced many powerful auto-segmentation methods, most of them variations of convolutional neural networks (CNNs). This paper presents a descriptive review of the literature on deep learning techniques for segmentation in radiation therapy planning. The most common CNN architecture across the four clinical subsites considered was U-net, with the majority of deep learning segmentation articles focussed on head and neck normal tissue structures. The most common data sets were CT images from an in-house source, along with some public data sets. N-fold cross-validation was commonly employed; however, not all work separated training, test and validation data sets. This area of research is expanding rapidly. To facilitate comparisons of proposed methods and benchmarking, consistent use of appropriate metrics and independent validation should be carefully considered.
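The review's caveat that not all work separated training, test and validation data can be made concrete: a sound n-fold protocol holds out each fold as a test set and carves a validation split only from the remaining data, so the test fold never influences model selection. An illustrative sketch (the function name and validation fraction are our own choices, not from the review):

```python
import numpy as np

def kfold_indices(n_samples, n_folds=5, val_frac=0.2, seed=0):
    """Yield (train_idx, val_idx, test_idx) per fold. The held-out fold is
    the test set; a validation split is carved from the remaining data."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    for k in range(n_folds):
        test_idx = folds[k]
        rest = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        n_val = max(1, int(len(rest) * val_frac))
        val_idx, train_idx = rest[:n_val], rest[n_val:]
        yield train_idx, val_idx, test_idx
```

With this arrangement, early stopping and hyperparameter tuning use only the validation split, and the test fold provides an unbiased per-fold estimate.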


Subject(s)
Deep Learning; Humans; Image Processing, Computer-Assisted; Organs at Risk; Radiotherapy Planning, Computer-Assisted; Tomography, X-Ray Computed
6.
Transl Vis Sci Technol ; 10(6): 2, 2021 May 3.
Article in English | MEDLINE | ID: mdl-34111247

ABSTRACT

Purpose: To identify the most suitable model for assessing the rate of growth of total geographic atrophy (GA) by analysis of model structure uncertainty. Methods: Model structure uncertainty refers to unexplained variability arising from the choice of mathematical model and represents an example of epistemic uncertainty. In this study, we quantified this uncertainty to help identify a model most representative of GA progression. Fundus autofluorescence (FAF) images and GA progression data (i.e., total GA area estimation at each presentation) were acquired using Spectralis HRA+OCT instrumentation and RegionFinder software. Six regression models were evaluated. Models were compared using various statistical tests (i.e., the coefficient of determination, r²; an uncertainty metric, U; and a test of significance for the correlation coefficient, r), as well as adherence to expected physical and clinical assumptions of GA growth. Results: Analysis was carried out for 81 GA-affected eyes, 531 FAF images (range: 3-17 images per eye), over a median of 57 months (IQR: 42, 74), with a mean baseline lesion size of 2.62 ± 4.49 mm² (range: 0.11-20.69 mm²). The linear model proved to be the most representative of total GA growth, with the lowest average uncertainty (original scale: U = 0.025, square root scale: U = 0.014) and a high average r² (original scale: 0.92, square root scale: 0.93), and the applicability of the model was supported by a high correlation coefficient, r, with statistical significance (P = 0.01). Conclusions: Statistical analysis of uncertainty suggests that the linear model provides an effective and practical representation of the rate and progression of total GA growth based on data from patient presentations in clinical settings.
Translational Relevance: Identification of correct model structure to characterize rate of growth of total GA in the retina using FAF images provides an objective metric for comparing interventions and charting GA progression in clinical presentations.
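The linear model favoured by this analysis is an ordinary least-squares fit of total GA area against time; fitting on the square root of area instead gives the square-root-scale variant the study also reports. A minimal sketch of such a fit with its r² (not the authors' code; names are illustrative):

```python
import numpy as np

def fit_linear_growth(t_months, area_mm2):
    """Least-squares fit of area = a + b*t. Returns the slope b (growth
    rate), intercept a, and coefficient of determination r^2."""
    t = np.asarray(t_months, dtype=float)
    y = np.asarray(area_mm2, dtype=float)
    b, a = np.polyfit(t, y, 1)                # slope, intercept
    resid = y - (a + b * t)
    ss_res = np.sum(resid ** 2)               # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)      # total sum of squares
    return b, a, 1.0 - ss_res / ss_tot
```

Applying the same routine to np.sqrt(area) would yield the square-root-scale growth rate, which is often preferred because it is less dependent on baseline lesion size.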


Subject(s)
Geographic Atrophy; Disease Progression; Fluorescein Angiography; Geographic Atrophy/diagnosis; Humans; Retina; Uncertainty
7.
Transl Vis Sci Technol ; 9(2): 57, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33173613

ABSTRACT

Purpose: The purpose of this study was to summarize and evaluate artificial intelligence (AI) algorithms used in geographic atrophy (GA) diagnostic processes (e.g., isolating lesions or tracking disease progression). Methods: The search strategy and selection of publications were both conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. PubMed and Web of Science were used to extract bibliographic data. The algorithms were summarized by objective, performance, and scope of coverage of GA diagnosis (e.g., lesion segmentation and GA progression). Results: Twenty-seven studies were identified for this review. A total of 18 publications focused on lesion segmentation only, 2 were designed to detect and classify GA, 2 were designed to predict future overall GA progression, 3 focused on prediction of future spatial GA progression, and 2 focused on prediction of visual function in GA. GA-related algorithms reported sensitivities from 0.47 to 0.98, specificities from 0.73 to 0.99, accuracies from 0.42 to 0.995, and Dice coefficients from 0.66 to 0.89. Conclusions: Current GA-AI publications have a predominant focus on lesion segmentation and a minor focus on classification and progression analysis. AI could be applied to other facets of GA diagnosis, such as understanding the role of hyperfluorescent areas in GA. Using AI for GA has several advantages, including improved diagnostic accuracy and faster processing speeds. Translational Relevance: AI can be used to quantify GA lesions and therefore allows one to impute visual function and quality of life. However, there is a need for the development of reliable and objective models and software to predict the rate of GA progression and to quantify improvements due to interventions.


Subject(s)
Geographic Atrophy; Algorithms; Artificial Intelligence; Humans
8.
Trends Ecol Evol ; 34(3): 224-238, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30580972

ABSTRACT

We propose a new framework for research synthesis of both evidence and influence, named research weaving. It summarizes and visualizes information content, history, and networks among a collection of documents on any given topic. Research weaving achieves this feat by combining the power of two methods: systematic mapping and bibliometrics. Systematic mapping provides a snapshot of the current state of knowledge, identifying areas needing more research attention and those ready for full synthesis. Bibliometrics enables researchers to see how pieces of evidence are connected, revealing the structure and development of a field. We explain how researchers can use some or all of these tools to gain a deeper, more nuanced understanding of the scientific literature.


Subject(s)
Bibliometrics; Biological Evolution; Ecology/methods; Meta-Analysis as Topic; Research Design; Review Literature as Topic; Systematic Reviews as Topic