Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-38806239

ABSTRACT

BACKGROUND AND PURPOSE: Mass effect and vasogenic edema are critical findings on CT of the head. This study compared the accuracy of an artificial intelligence model (Annalise Enterprise CTB) with consensus neuroradiologist interpretations in detecting mass effect and vasogenic edema.
MATERIALS AND METHODS: A retrospective standalone performance assessment was conducted on datasets of non-contrast head CT cases acquired between 2016 and 2022 for each finding. The cases were obtained from patients aged 18 years or older at five hospitals in the United States. Positive cases were selected consecutively from the original clinical reports using natural language processing and manual confirmation. Negative cases were selected by taking the next negative case acquired from the same CT scanner after each positive case. Each case was interpreted independently by up to three neuroradiologists to establish consensus interpretations, and then by the AI model for the presence of the relevant finding. The neuroradiologists were provided with the entire CT study; the AI model separately received thin (≤1.5 mm) and/or thick (>1.5 and ≤5 mm) axial series.
RESULTS: The two cohorts included 818 cases for mass effect and 310 cases for vasogenic edema. The AI model identified mass effect with a sensitivity of 96.6% (95% CI, 94.9-98.2) and specificity of 89.8% (95% CI, 84.7-94.2) for the thin series, and 95.3% (95% CI, 93.5-96.8) and 93.1% (95% CI, 89.1-96.6) for the thick series. It identified vasogenic edema with a sensitivity of 90.2% (95% CI, 82.0-96.7) and specificity of 93.5% (95% CI, 88.9-97.2) for the thin series, and 90.0% (95% CI, 84.0-96.0) and 95.5% (95% CI, 92.5-98.0) for the thick series. The corresponding areas under the curve were at least 0.980.
CONCLUSIONS: The assessed AI model accurately identified mass effect and vasogenic edema in this CT dataset. It could assist the clinical workflow by prioritizing interpretation of abnormal cases, which could benefit patients through earlier identification and subsequent treatment.
ABBREVIATIONS: AI = artificial intelligence; AUC = area under the curve; CADt = computer-assisted triage device; FDA = Food and Drug Administration; NPV = negative predictive value; PPV = positive predictive value; SD = standard deviation.
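As an aside to the standalone performance figures above, the sketch below shows one conventional way to compute sensitivity and specificity with Wilson 95% confidence intervals from binary AI outputs and consensus labels. The data and array names are hypothetical; this is illustrative and not the study's code.

```python
import numpy as np

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    if total == 0:
        return (float("nan"), float("nan"))
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * np.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return (center - half, center + half)

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Sensitivity and specificity of binary AI calls against consensus ground truth."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    tn = int(np.sum((y_true == 0) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    return {
        "sensitivity": tp / (tp + fn),
        "sensitivity_95ci": wilson_ci(tp, tp + fn),
        "specificity": tn / (tn + fp),
        "specificity_95ci": wilson_ci(tn, tn + fp),
    }

# Hypothetical consensus labels (1 = finding present) and AI calls for a small cohort.
y_true = np.array([1, 1, 1, 0, 0, 1, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 0, 1, 1, 0])
print(sensitivity_specificity(y_true, y_pred))
```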

2.
Sci Rep ; 13(1): 189, 2023 01 05.
Article in English | MEDLINE | ID: mdl-36604467

ABSTRACT

Non-contrast head CT (NCCT) is extremely insensitive for early (<3-6 h) acute infarct identification. We developed a deep learning model that detects and delineates suspected early acute infarcts on NCCT, using diffusion MRI as ground truth (3,566 NCCT/MRI training patient pairs). The model substantially outperformed 3 expert neuroradiologists on a test set of 150 CT scans of patients who were potential candidates for thrombectomy (60 stroke-negative, 90 stroke-positive, middle cerebral artery territory infarcts only), with a sensitivity of 96% (specificity 72%) for the model versus 61-66% (specificity 90-92%) for the experts; model infarct volume estimates also correlated strongly with those of diffusion MRI (r² > 0.98). When this 150-CT test set was expanded to a total of 364 CT scans with a more heterogeneous distribution of infarct locations (94 stroke-negative, 270 stroke-positive, mixed-territory infarcts), model sensitivity was 97% and specificity 99% for detection of infarcts larger than the 70 mL volume threshold used for patient selection in several major randomized controlled trials of thrombectomy treatment.


Subject(s)
Deep Learning, Stroke, Humans, X-Ray Computed Tomography, Stroke/diagnostic imaging, Magnetic Resonance Imaging, Middle Cerebral Artery Infarction
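A minimal sketch of the two summary statistics quoted in the abstract above: the r² between model and diffusion-MRI volume estimates, and sensitivity/specificity for identifying infarcts above a 70 mL threshold. The per-patient volumes are hypothetical; this is not the study's code.

```python
import numpy as np

# Hypothetical per-patient infarct volumes (mL): model estimates vs. diffusion-MRI ground truth.
model_vol = np.array([0.0, 12.5, 85.0, 3.2, 110.4, 0.0, 72.1, 41.0])
mri_vol = np.array([0.0, 10.8, 90.2, 2.9, 104.9, 0.5, 68.3, 39.5])

# Coefficient of determination between the two volume estimates (the r² reported above).
r = np.corrcoef(model_vol, mri_vol)[0, 1]
print(f"r^2 = {r**2:.3f}")

# Detection of infarcts above the 70 mL threshold used in thrombectomy trials.
THRESHOLD_ML = 70.0
truth_large = mri_vol > THRESHOLD_ML
pred_large = model_vol > THRESHOLD_ML
sens = np.mean(pred_large[truth_large])    # fraction of true >70 mL infarcts flagged
spec = np.mean(~pred_large[~truth_large])  # fraction of <=70 mL cases correctly not flagged
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```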
3.
Radiology ; 306(2): e220101, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36125375

ABSTRACT

Background: Adrenal masses are common, but radiology reporting and recommendations for management can be variable.
Purpose: To create a machine learning algorithm to segment adrenal glands on contrast-enhanced CT images and classify glands as normal or mass-containing, and to assess algorithm performance.
Materials and Methods: This retrospective study included two groups of contrast-enhanced abdominal CT examinations (a development data set and a secondary test set). Adrenal glands in the development data set were manually segmented by radiologists. Images in both the development data set and the secondary test set were manually classified as normal or mass-containing. Deep learning segmentation and classification models were trained on the development data set and evaluated on both data sets. Segmentation performance was evaluated with the Dice similarity coefficient (DSC), and classification performance with sensitivity and specificity.
Results: The development data set contained 274 CT examinations (251 patients; median age, 61 years; 133 women), and the secondary test set contained 991 CT examinations (991 patients; median age, 62 years; 578 women). The median model DSC on the development test set was 0.80 (IQR, 0.78-0.89) for normal glands and 0.84 (IQR, 0.79-0.90) for adrenal masses. On the development reader set, the median interreader DSC was 0.89 (IQR, 0.78-0.93) for normal glands and 0.89 (IQR, 0.85-0.97) for adrenal masses. Interreader DSC for radiologist manual segmentation did not differ from automated machine segmentation (P = .35). On the development test set, the model had a classification sensitivity of 83% (95% CI: 55, 95) and specificity of 89% (95% CI: 75, 96). On the secondary test set, the model had a classification sensitivity of 69% (95% CI: 58, 79) and specificity of 91% (95% CI: 90, 92).
Conclusion: A two-stage machine learning pipeline was able to segment the adrenal glands and differentiate normal adrenal glands from those containing masses. © RSNA, 2022. Online supplemental material is available for this article.


Subject(s)
Machine Learning, X-Ray Computed Tomography, Humans, Female, Middle Aged, X-Ray Computed Tomography/methods, Retrospective Studies, Algorithms, Adrenal Glands
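For context on the DSC values quoted above, here is a minimal sketch of the Dice similarity coefficient between a predicted and a reference binary segmentation mask. The masks are hypothetical; this is not the study's pipeline.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / denom

# Hypothetical 3D masks for one adrenal gland (e.g., model output vs. radiologist contour).
rng = np.random.default_rng(42)
reference = rng.random((64, 64, 32)) > 0.7
predicted = reference.copy()
predicted[0:5] = False  # simulate a small under-segmentation error
print(f"DSC = {dice_coefficient(predicted, reference):.3f}")
```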
4.
JAMA Netw Open ; 5(12): e2247172, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36520432

ABSTRACT

Importance: Early detection of pneumothorax, most often via chest radiography, can help determine the need for emergent clinical intervention. The ability to accurately detect and rapidly triage pneumothorax with an artificial intelligence (AI) model could assist with earlier identification and improve care.
Objective: To compare the accuracy of an AI model vs consensus thoracic radiologist interpretations in detecting any pneumothorax (incorporating both nontension and tension pneumothorax) and tension pneumothorax.
Design, Setting, and Participants: This diagnostic study was a retrospective standalone performance assessment using a data set of 1000 chest radiographs captured between June 1, 2015, and May 31, 2021. The radiographs were obtained from patients aged at least 18 years at 4 hospitals in the Mass General Brigham hospital network in the United States. Included radiographs were selected using 2 strategies from all chest radiography performed at the hospitals, including inpatient and outpatient studies. The first strategy identified consecutive radiographs with pneumothorax through a manual review of radiology reports, and the second strategy identified consecutive radiographs with tension pneumothorax using natural language processing. For both strategies, negative radiographs were selected by taking the next negative radiograph acquired from the same radiography machine as each positive radiograph. The final data set was an amalgamation of these processes. Each radiograph was interpreted independently by up to 3 radiologists to establish consensus ground-truth interpretations, and then by the AI model for the presence of pneumothorax and tension pneumothorax. This study was conducted between July and October 2021, with the primary analysis performed between October and November 2021.
Main Outcomes and Measures: The primary end points were the areas under the receiver operating characteristic curves (AUCs) for the detection of pneumothorax and tension pneumothorax. The secondary end points were the sensitivities and specificities for the detection of pneumothorax and tension pneumothorax.
Results: The final analysis included radiographs from 985 patients (mean [SD] age, 60.8 [19.0] years; 436 [44.3%] female patients), including 307 patients with nontension pneumothorax, 128 patients with tension pneumothorax, and 550 patients without pneumothorax. The AI model detected any pneumothorax with an AUC of 0.979 (95% CI, 0.970-0.987), sensitivity of 94.3% (95% CI, 92.0%-96.3%), and specificity of 92.0% (95% CI, 89.6%-94.2%), and tension pneumothorax with an AUC of 0.987 (95% CI, 0.980-0.992), sensitivity of 94.5% (95% CI, 90.6%-97.7%), and specificity of 95.3% (95% CI, 93.9%-96.6%).
Conclusions and Relevance: These findings suggest that the assessed AI model accurately detected pneumothorax and tension pneumothorax in this chest radiograph data set. The model's use in the clinical workflow could lead to earlier identification and improved care for patients with pneumothorax.


Subject(s)
Deep Learning, Pneumothorax, Humans, Female, Adolescent, Adult, Middle Aged, Male, Pneumothorax/diagnostic imaging, Thoracic Radiography, Artificial Intelligence, Retrospective Studies, Radiography
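As an illustration of the primary end point quoted above, the sketch below computes an AUC from continuous AI scores and consensus labels, with a nonparametric bootstrap 95% confidence interval. The scores and labels are simulated; this is a generic sketch, not the study's analysis code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical consensus labels (1 = pneumothorax present) and continuous AI scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.7 + rng.normal(0.2, 0.25, size=1000), 0, 1)

auc = roc_auc_score(y_true, y_score)

# Nonparametric bootstrap for a 95% CI on the AUC.
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))
    if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.3f} (95% CI, {lo:.3f}-{hi:.3f})")
```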
5.
Sci Rep ; 12(1): 2154, 2022 02 09.
Article in English | MEDLINE | ID: mdl-35140277

ABSTRACT

Stroke is a leading cause of death and disability. The ability to quickly identify the presence of acute infarct and quantify its volume on magnetic resonance imaging (MRI) has important treatment implications. We developed a machine learning model that used the apparent diffusion coefficient and diffusion-weighted imaging series. It was trained on 6,657 MRI studies from Massachusetts General Hospital (MGH; Boston, USA). All studies were labelled positive or negative for infarct (classification annotation), with 377 having the region of interest outlined (segmentation annotation). The different annotation types facilitated training on more studies while not requiring the extensive time needed to manually segment every study. We initially validated the model on studies sequestered from the training set. We then tested the model on studies from three clinical scenarios: consecutive stroke team activations for 6 months at MGH, consecutive stroke team activations for 6 months at a hospital that did not provide training data (Brigham and Women's Hospital [BWH]; Boston, USA), and an international site (Diagnósticos da América SA [DASA]; Brazil). The model results were compared to radiologist ground-truth interpretations. The model performed better when trained on both classification and segmentation annotations (area under the receiver operating characteristic curve [AUROC] 0.995 [95% CI 0.992-0.998] and median Dice coefficient for segmentation overlap of 0.797 [IQR 0.642-0.861]) than on segmentation annotations alone (AUROC 0.982 [95% CI 0.972-0.990] and Dice coefficient 0.776 [IQR 0.584-0.857]). The model accurately identified infarcts for MGH stroke team activations (AUROC 0.964 [95% CI 0.943-0.982], 381 studies), BWH stroke team activations (AUROC 0.981 [95% CI 0.966-0.993], 247 studies), and at DASA (AUROC 0.998 [95% CI 0.993-1.000], 171 studies). The model accurately segmented infarcts, with Pearson correlations between model output and ground-truth volumes of 0.968 to 0.986 across the three scenarios. Acute infarct can be accurately detected and segmented on MRI in real-world clinical scenarios using a machine learning model.
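As a minimal illustration of the volume comparison described above, the sketch below converts binary segmentation masks to volumes and computes the Pearson correlation between model and ground-truth volumes. The masks, voxel spacing, and volumes are hypothetical; this is not the authors' code.

```python
import numpy as np
from scipy.stats import pearsonr

def mask_volume_ml(mask: np.ndarray, voxel_spacing_mm: tuple[float, float, float]) -> float:
    """Infarct volume in mL from a binary mask and voxel spacing in millimetres."""
    voxel_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Hypothetical per-study volumes derived from model and radiologist masks.
rng = np.random.default_rng(1)
truth_ml = rng.uniform(0, 120, size=50)
model_ml = truth_ml + rng.normal(0, 5, size=50)  # model tracks ground truth closely

r, p_value = pearsonr(model_ml, truth_ml)
print(f"Pearson r = {r:.3f} (p = {p_value:.1e})")
```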
