Results 1 - 20 of 39
1.
Clin Imaging; 112: 110207, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38838448

ABSTRACT

PURPOSE: We created the infrastructure for a no-code machine learning (NML) platform that allows non-programming physicians to build NML models. We tested the platform by creating an NML model for classifying radiographs by the presence or absence of clavicle fractures. METHODS: Our IRB-approved retrospective study included 4135 clavicle radiographs from 2039 patients (mean age 52 ± 20 years, F:M 1022:1017) from 13 hospitals. Each patient had two-view clavicle radiographs with axial and anterior-posterior projections. The positive radiographs had either displaced or non-displaced clavicle fractures. We configured the NML platform to automatically retrieve the eligible exams from the hospital virtual network archive via Web Access to DICOM Objects (WADO), using each series' unique identifier. The platform trained a model until the validation loss plateaued. Once testing was complete, the platform provided the receiver operating characteristic curve and confusion matrix for estimating sensitivity, specificity, and accuracy. RESULTS: The NML platform successfully retrieved 3917 radiographs (3917/4135, 94.7%) and parsed them to create an ML classifier, with 2151 radiographs in the training dataset, 100 in the validation dataset, and 1666 in the testing dataset (772 radiographs with clavicle fracture, 894 without). The network identified clavicle fracture with 90% sensitivity, 87% specificity, and 88% accuracy, with an AUC of 0.95 (confidence interval 0.94-0.96). CONCLUSION: An NML platform can help physicians create and test machine learning models from multicenter imaging datasets, such as the one in our study, for classifying radiographs by the presence of clavicle fracture.


Subject(s)
Clavicle, Bone Fractures, Machine Learning, Humans, Clavicle/injuries, Clavicle/diagnostic imaging, Bone Fractures/diagnostic imaging, Bone Fractures/classification, Female, Middle Aged, Male, Retrospective Studies, Sensitivity and Specificity, Adult, Radiography/methods
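The study above describes automatic retrieval of eligible series from the hospital archive via Web Access to DICOM Objects. A minimal sketch of what such a retrieval request can look like against a DICOMweb (WADO-RS) endpoint is shown below; the base URL and UIDs are hypothetical placeholders, not the platform's actual configuration.

```python
# Hypothetical WADO-RS series retrieval; endpoint and UIDs are placeholders.
import requests

WADO_RS_BASE = "https://archive.example.org/dicomweb"   # assumed DICOMweb base URL
STUDY_UID = "1.2.840.0000.1.1"                          # placeholder Study Instance UID
SERIES_UID = "1.2.840.0000.1.2"                         # placeholder Series Instance UID

def retrieve_series(study_uid: str, series_uid: str) -> bytes:
    """Fetch all instances of one series as a multipart/related DICOM payload."""
    url = f"{WADO_RS_BASE}/studies/{study_uid}/series/{series_uid}"
    headers = {"Accept": 'multipart/related; type="application/dicom"'}
    response = requests.get(url, headers=headers, timeout=60)
    response.raise_for_status()
    return response.content   # multipart body; split on the MIME boundary to get each instance

if __name__ == "__main__":
    payload = retrieve_series(STUDY_UID, SERIES_UID)
    print(f"Retrieved {len(payload)} bytes for series {SERIES_UID}")
```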
2.
Clin Imaging; 112: 110210, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38850710

ABSTRACT

BACKGROUND: Clinical adoption of AI applications requires that stakeholders see value in their use. AI-enabled opportunistic CT screening (OS) capitalizes on incidentally detected findings within CTs for potential health benefit. This study evaluates primary care providers' (PCP) perspectives on OS. METHODS: A survey was distributed to US Internal and Family Medicine residencies. It assessed familiarity with AI and OS, perspectives on potential value and costs, communication of results, and technology implementation. RESULTS: 62% of respondents (n = 71) were in Family Medicine, and 64.8% practiced in community hospitals. Although 74.6% of respondents had heard of AI/machine learning, 95.8% had little-to-no familiarity with OS. The majority reported little-to-no trust in AI. Reported concerns included AI accuracy (74.6%) and unknown liability (73.2%). 78.9% of respondents reported that OS applications would require radiologist oversight. 53.5% preferred that OS results be included in a separate "screening" section within the radiology report, accompanied by condition risks and management recommendations. The majority of respondents reported that results would likely affect clinical management for all queried applications, and that atherosclerotic cardiovascular disease risk, abdominal aortic aneurysm, and liver fibrosis should be included in every CT report regardless of the reason for examination. 70.5% felt that PCP practices are unlikely to pay for OS. Added costs to the patient (91.5%), added costs to the healthcare provider (77.5%), and unknown liability (74.6%) were the most frequently reported concerns. CONCLUSION: PCP preferences and concerns around AI-enabled OS offer insights into its perceived clinical value and costs. As AI applications grow, feedback from end users should be considered in the development of such technology to optimize implementation and adoption. Increasing stakeholder familiarity with AI may be a critical first step before stakeholders consider implementation.


Subject(s)
Tomography, X-Ray Computed, Humans, Primary Health Care, Surveys and Questionnaires, Attitude of Health Personnel, Mass Screening, United States, Male, Female, Artificial Intelligence, Incidental Findings
3.
Article in English | MEDLINE | ID: mdl-38806239

ABSTRACT

BACKGROUND AND PURPOSE: Mass effect and vasogenic edema are critical findings on CT of the head. This study compared the accuracy of an artificial intelligence model (Annalise Enterprise CTB) to consensus neuroradiologist interpretations in detecting mass effect and vasogenic edema. MATERIALS AND METHODS: A retrospective standalone performance assessment was conducted on datasets of non-contrast head CT cases acquired between 2016 and 2022 for each finding. The cases were obtained from patients aged 18 years or older at five hospitals in the United States. The positive cases were selected consecutively based on the original clinical reports using natural language processing and manual confirmation. The negative cases were selected by taking the next negative case acquired from the same CT scanner after each positive case. Each case was interpreted independently by up to three neuroradiologists to establish consensus interpretations, and then by the AI model for the presence of the relevant finding. The neuroradiologists were provided with the entire CT study; the AI model separately received thin (≤1.5 mm) and/or thick (>1.5 and ≤5 mm) axial series. RESULTS: The two cohorts included 818 cases for mass effect and 310 cases for vasogenic edema. The AI model identified mass effect with sensitivity 96.6% (95% CI, 94.9-98.2) and specificity 89.8% (95% CI, 84.7-94.2) for the thin series, and 95.3% (95% CI, 93.5-96.8) and 93.1% (95% CI, 89.1-96.6) for the thick series. It identified vasogenic edema with sensitivity 90.2% (95% CI, 82.0-96.7) and specificity 93.5% (95% CI, 88.9-97.2) for the thin series, and 90.0% (95% CI, 84.0-96.0) and 95.5% (95% CI, 92.5-98.0) for the thick series. The corresponding areas under the curve were at least 0.980. CONCLUSIONS: The assessed AI model accurately identified mass effect and vasogenic edema in this CT dataset. It could assist the clinical workflow by prioritizing interpretation of abnormal cases, which could benefit patients through earlier identification and subsequent treatment. ABBREVIATIONS: AI = artificial intelligence; AUC = area under the curve; CADt = computer-assisted triage device; FDA = Food and Drug Administration; NPV = negative predictive value; PPV = positive predictive value; SD = standard deviation.
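A minimal sketch of how the finding-level sensitivity and specificity with 95% CIs reported above can be computed from case-level labels follows; the exact (Clopper-Pearson) interval is one common choice, although the study does not state which interval method it used, and the label arrays are toy values.

```python
# Toy example: sensitivity/specificity with exact (Clopper-Pearson) 95% CIs.
import numpy as np
from scipy.stats import beta

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple[float, float]:
    """Exact binomial confidence interval for k successes out of n trials."""
    lo = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lo, hi

truth = np.array([1, 1, 1, 0, 0, 0, 1, 0])   # consensus labels (toy)
pred  = np.array([1, 1, 0, 0, 0, 1, 1, 0])   # model outputs (toy)

tp = int(np.sum((truth == 1) & (pred == 1)))
fn = int(np.sum((truth == 1) & (pred == 0)))
tn = int(np.sum((truth == 0) & (pred == 0)))
fp = int(np.sum((truth == 0) & (pred == 1)))

sens, sens_ci = tp / (tp + fn), clopper_pearson(tp, tp + fn)
spec, spec_ci = tn / (tn + fp), clopper_pearson(tn, tn + fp)
print(f"sensitivity {sens:.3f} (95% CI {sens_ci[0]:.3f}-{sens_ci[1]:.3f})")
print(f"specificity {spec:.3f} (95% CI {spec_ci[0]:.3f}-{spec_ci[1]:.3f})")
```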

5.
Acad Radiol; 30(12): 2921-2930, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37019698

ABSTRACT

RATIONALE AND OBJECTIVES: Suboptimal chest radiographs (CXR) can limit interpretation of critical findings. Radiologist-trained AI models were evaluated for differentiating suboptimal (sCXR) and optimal (oCXR) chest radiographs. MATERIALS AND METHODS: Our IRB-approved study included 3278 CXRs from adult patients (mean age 55 ± 20 years) identified from a retrospective search of radiology reports at 5 sites. A chest radiologist reviewed all CXRs for the cause of suboptimality. The de-identified CXRs were uploaded into an AI server application for training and testing 5 AI models. The training set consisted of 2202 CXRs (n = 807 oCXR; n = 1395 sCXR), while 1076 CXRs (n = 729 sCXR; n = 347 oCXR) were used for testing. Data were analyzed with the area under the curve (AUC) for each model's ability to classify oCXR and sCXR correctly. RESULTS: For the two-class classification into sCXR or oCXR across all sites, AI identified CXRs with missing anatomy with 78% sensitivity, 95% specificity, 91% accuracy, and an AUC of 0.87 (95% CI 0.82-0.92). AI identified obscured thoracic anatomy with 91% sensitivity, 97% specificity, 95% accuracy, and 0.94 AUC (95% CI 0.90-0.97), and inadequate exposure with 90% sensitivity, 93% specificity, 92% accuracy, and an AUC of 0.91 (95% CI 0.88-0.95). The presence of low lung volume was identified with 96% sensitivity, 92% specificity, 93% accuracy, and 0.94 AUC (95% CI 0.92-0.96). The sensitivity, specificity, accuracy, and AUC of AI in identifying patient rotation were 92%, 96%, 95%, and 0.94 (95% CI 0.91-0.98), respectively. CONCLUSION: Radiologist-trained AI models can accurately classify optimal and suboptimal CXRs. Deployed at the front end of radiographic equipment, such models could enable radiographers to repeat suboptimal CXRs when necessary.


Subject(s)
Lung, Radiography, Thoracic, Adult, Humans, Middle Aged, Aged, Lung/diagnostic imaging, Retrospective Studies, Radiography, Radiologists
6.
J Am Coll Radiol; 20(3): 352-360, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36922109

ABSTRACT

The multitude of artificial intelligence (AI)-based solutions, vendors, and platforms poses a challenging proposition to an already complex clinical radiology practice. Apart from assessing and ensuring acceptable local performance and workflow fit to improve imaging services, AI tools require multiple stakeholders, including clinical, technical, and financial, to collaborate in moving potentially deployable applications to full clinical deployment in a structured and efficient manner. Postdeployment monitoring and surveillance of such tools require an infrastructure that ensures their proper and safe use. Herein, the authors describe their experience and framework for implementing and supporting the use of AI applications in the radiology workflow.


Subject(s)
Artificial Intelligence, Radiology, Radiology/methods, Diagnostic Imaging, Workflow, Commerce
7.
Diagnostics (Basel); 13(4), 2023 Feb 18.
Article in English | MEDLINE | ID: mdl-36832266

ABSTRACT

Purpose: Motion-impaired CT images can result in limited or suboptimal diagnostic interpretation (with missed or miscalled lesions) and patient recall. We trained and tested an artificial intelligence (AI) model for identifying substantial motion artifacts on CT pulmonary angiography (CTPA) that have a negative impact on diagnostic interpretation. Methods: With IRB approval and HIPAA compliance, we queried our multicenter radiology report database (mPower, Nuance) for CTPA reports between July 2015 and March 2022 containing the following terms: "motion artifacts", "respiratory motion", "technically inadequate", and "suboptimal" or "limited exam". The CTPA reports were from two quaternary care sites (Site A, n = 335; Site B, n = 259) and one community healthcare site (Site C, n = 199). A thoracic radiologist reviewed the CT images of all positive hits for motion artifacts (present or absent) and their severity (no diagnostic effect or major diagnostic impairment). Coronal multiplanar images from 793 CTPA exams were de-identified and exported offline into an AI model-building prototype (Cognex Vision Pro, Cognex Corporation) to train an AI model to perform two-class classification ("motion" or "no motion") with data from the three sites (70% training dataset, n = 554; 30% validation dataset, n = 239). Separately, data from Site A and Site C were used for training and validation, and testing was performed on the Site B CTPA exams. Five-fold repeated cross-validation was performed to evaluate model performance with accuracy and receiver operating characteristic (ROC) analysis. Results: Among the CTPA images from 793 patients (mean age 63 ± 17 years; 391 males, 402 females), 372 had no motion artifacts and 421 had substantial motion artifacts. The average performance of the AI model after five-fold repeated cross-validation for the two-class classification was 94% sensitivity, 91% specificity, 93% accuracy, and 0.93 area under the ROC curve (AUC: 95% CI 0.89-0.97). Conclusion: The AI model used in this study can successfully identify CTPA exams with motion artifacts that limit diagnostic interpretation, in multicenter training and test datasets. Clinical relevance: The AI model can help alert technologists to the presence of substantial motion artifacts on CTPA, where repeat image acquisition can help salvage diagnostic information.
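The study above evaluates its classifier with five-fold repeated cross-validation and ROC analysis. The sketch below illustrates that evaluation scheme with scikit-learn on synthetic features and a stand-in classifier; the study itself used a commercial vision tool, so none of these objects correspond to its actual code.

```python
# Illustration of five-fold repeated cross-validation with ROC-based scoring.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = make_classification(n_samples=793, n_features=20, random_state=0)  # synthetic stand-in data

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
clf = LogisticRegression(max_iter=1000)   # stand-in for the image classifier

auc_scores = cross_val_score(clf, X, y, scoring="roc_auc", cv=cv)
acc_scores = cross_val_score(clf, X, y, scoring="accuracy", cv=cv)
print(f"mean AUC {auc_scores.mean():.3f} +/- {auc_scores.std():.3f}")
print(f"mean accuracy {acc_scores.mean():.3f} +/- {acc_scores.std():.3f}")
```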

8.
Clin Imaging; 95: 47-51, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36610270

ABSTRACT

PURPOSE: To assess the feasibility of automated segmentation and measurement of tracheal collapsibility for detecting tracheomalacia on inspiratory and expiratory chest CT images. METHODS: Our study included 123 patients (age 67 ± 11 years; female:male 69:54) who underwent clinically indicated chest CT examinations in both inspiration and expiration phases. A thoracic radiologist measured the anteroposterior dimension of the trachea on the inspiration- and expiration-phase images at the level of maximum collapsibility or, in the absence of luminal change, at the level of the aortic arch. Separately, another investigator processed the inspiratory and expiratory DICOM CT images with the Airway Segmentation component of a commercial COPD software (IntelliSpace Portal, Philips Healthcare). Upon segmentation, the software automatically estimated the average lumen diameter (mm) and lumen area (mm²), both along the entire length of the trachea and at the level of the aortic arch. Data were analyzed with independent t-tests and the area under the receiver operating characteristic curve (AUC). RESULTS: Of the 123 patients, 48 had tracheomalacia and 75 did not. The inspiration-to-expiration ratios of average lumen area and lumen diameter over the entire tracheal length had the highest AUC, 0.93 (95% CI = 0.88-0.97), for differentiating the presence and absence of tracheomalacia. A decrease of ≥25% in average lumen diameter had a sensitivity of 82% and specificity of 87% for detecting tracheomalacia, and a decrease of ≥40% in average lumen area had a sensitivity and specificity of 86%. CONCLUSION: Automated segmentation and measurement of tracheal dimensions over the entire tracheal length is more accurate than a single-level measurement for detecting tracheomalacia.


Subject(s)
Tracheomalacia, Humans, Male, Female, Middle Aged, Aged, Tracheomalacia/diagnostic imaging, Trachea/diagnostic imaging, Tomography, X-Ray Computed/methods, Sensitivity and Specificity, ROC Curve
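The decision rule implied by the study above reduces to a percent decrease in the software-derived average lumen measurements from inspiration to expiration, thresholded at ≥25% (diameter) or ≥40% (area). A small sketch with made-up measurements:

```python
# Collapsibility as percent decrease from inspiration to expiration (toy values).
def percent_decrease(inspiration: float, expiration: float) -> float:
    return 100.0 * (inspiration - expiration) / inspiration

avg_diameter_insp, avg_diameter_exp = 18.0, 12.5   # mm, averaged over the trachea (toy)
avg_area_insp, avg_area_exp = 250.0, 140.0         # mm^2 (toy)

diam_drop = percent_decrease(avg_diameter_insp, avg_diameter_exp)
area_drop = percent_decrease(avg_area_insp, avg_area_exp)
print(f"diameter decrease {diam_drop:.1f}% -> tracheomalacia flag: {diam_drop >= 25}")
print(f"area decrease {area_drop:.1f}% -> tracheomalacia flag: {area_drop >= 40}")
```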
9.
Sci Rep; 13(1): 189, 2023 Jan 5.
Article in English | MEDLINE | ID: mdl-36604467

ABSTRACT

Non-contrast head CT (NCCT) is extremely insensitive for early (<3-6 h) acute infarct identification. We developed a deep learning model that detects and delineates suspected early acute infarcts on NCCT, using diffusion MRI as ground truth (3566 NCCT/MRI training patient pairs). The model substantially outperformed 3 expert neuroradiologists on a test set of 150 CT scans of patients who were potential candidates for thrombectomy (60 stroke-negative, 90 stroke-positive with middle cerebral artery territory infarcts only), with a sensitivity of 96% (specificity 72%) for the model versus 61-66% (specificity 90-92%) for the experts; model infarct volume estimates also correlated strongly with those of diffusion MRI (r² > 0.98). When this 150-CT test set was expanded to a total of 364 CT scans with a more heterogeneous distribution of infarct locations (94 stroke-negative, 270 stroke-positive with mixed-territory infarcts), model sensitivity was 97% and specificity 99% for detection of infarcts larger than the 70 mL volume threshold used for patient selection in several major randomized controlled trials of thrombectomy treatment.


Subject(s)
Deep Learning, Stroke, Humans, Tomography, X-Ray Computed, Stroke/diagnostic imaging, Magnetic Resonance Imaging, Infarction, Middle Cerebral Artery
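The study above reports detection of infarcts above a 70 mL volume threshold. One way an infarct volume can be derived from a model's binary segmentation mask is sketched below, using an assumed voxel spacing; the mask and spacing are synthetic, not study outputs.

```python
# Infarct volume (mL) from a binary mask and voxel spacing, compared to 70 mL.
import numpy as np

mask = np.zeros((60, 512, 512), dtype=bool)      # synthetic segmentation output
mask[20:40, 200:300, 200:300] = True             # pretend infarct region

voxel_spacing_mm = (5.0, 0.45, 0.45)             # slice thickness, row, column (assumed)
voxel_volume_ml = float(np.prod(voxel_spacing_mm)) / 1000.0   # mm^3 -> mL

infarct_volume_ml = mask.sum() * voxel_volume_ml
print(f"infarct volume: {infarct_volume_ml:.1f} mL "
      f"({'above' if infarct_volume_ml > 70 else 'below'} the 70 mL threshold)")
```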
10.
Radiology; 306(2): e220101, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36125375

ABSTRACT

Background Adrenal masses are common, but radiology reporting and recommendations for management can be variable. Purpose To create a machine learning algorithm to segment adrenal glands on contrast-enhanced CT images and classify glands as normal or mass-containing and to assess algorithm performance. Materials and Methods This retrospective study included two groups of contrast-enhanced abdominal CT examinations (development data set and secondary test set). Adrenal glands in the development data set were manually segmented by radiologists. Images in both the development data set and the secondary test set were manually classified as normal or mass-containing. Deep learning segmentation and classification models were trained on the development data set and evaluated on both data sets. Segmentation performance was evaluated with use of the Dice similarity coefficient (DSC), and classification performance with use of sensitivity and specificity. Results The development data set contained 274 CT examinations (251 patients; median age, 61 years; 133 women), and the secondary test set contained 991 CT examinations (991 patients; median age, 62 years; 578 women). The median model DSC on the development test set was 0.80 (IQR, 0.78-0.89) for normal glands and 0.84 (IQR, 0.79-0.90) for adrenal masses. On the development reader set, the median interreader DSC was 0.89 (IQR, 0.78-0.93) for normal glands and 0.89 (IQR, 0.85-0.97) for adrenal masses. Interreader DSC for radiologist manual segmentation did not differ from automated machine segmentation (P = .35). On the development test set, the model had a classification sensitivity of 83% (95% CI: 55, 95) and specificity of 89% (95% CI: 75, 96). On the secondary test set, the model had a classification sensitivity of 69% (95% CI: 58, 79) and specificity of 91% (95% CI: 90, 92). Conclusion A two-stage machine learning pipeline was able to segment the adrenal glands and differentiate normal adrenal glands from those containing masses. © RSNA, 2022. Online supplemental material is available for this article.


Subject(s)
Machine Learning, Tomography, X-Ray Computed, Humans, Female, Middle Aged, Tomography, X-Ray Computed/methods, Retrospective Studies, Algorithms, Adrenal Glands
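Segmentation agreement in the study above is summarized with the Dice similarity coefficient. A minimal implementation on toy masks:

```python
# Dice similarity coefficient between two binary masks (toy random masks).
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """DSC = 2 * |A and B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

rng = np.random.default_rng(0)
manual = rng.random((64, 64, 64)) > 0.7    # stand-in for radiologist segmentation
auto   = rng.random((64, 64, 64)) > 0.7    # stand-in for model segmentation
print(f"DSC = {dice(manual, auto):.3f}")
```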
11.
JAMA Netw Open; 5(12): e2247172, 2022 Dec 1.
Article in English | MEDLINE | ID: mdl-36520432

ABSTRACT

Importance: Early detection of pneumothorax, most often via chest radiography, can help determine need for emergent clinical intervention. The ability to accurately detect and rapidly triage pneumothorax with an artificial intelligence (AI) model could assist with earlier identification and improve care. Objective: To compare the accuracy of an AI model vs consensus thoracic radiologist interpretations in detecting any pneumothorax (incorporating both nontension and tension pneumothorax) and tension pneumothorax. Design, Setting, and Participants: This diagnostic study was a retrospective standalone performance assessment using a data set of 1000 chest radiographs captured between June 1, 2015, and May 31, 2021. The radiographs were obtained from patients aged at least 18 years at 4 hospitals in the Mass General Brigham hospital network in the United States. Included radiographs were selected using 2 strategies from all chest radiography performed at the hospitals, including inpatient and outpatient. The first strategy identified consecutive radiographs with pneumothorax through a manual review of radiology reports, and the second strategy identified consecutive radiographs with tension pneumothorax using natural language processing. For both strategies, negative radiographs were selected by taking the next negative radiograph acquired from the same radiography machine as each positive radiograph. The final data set was an amalgamation of these processes. Each radiograph was interpreted independently by up to 3 radiologists to establish consensus ground-truth interpretations. Each radiograph was then interpreted by the AI model for the presence of pneumothorax and tension pneumothorax. This study was conducted between July and October 2021, with the primary analysis performed between October and November 2021. Main Outcomes and Measures: The primary end points were the areas under the receiver operating characteristic curves (AUCs) for the detection of pneumothorax and tension pneumothorax. The secondary end points were the sensitivities and specificities for the detection of pneumothorax and tension pneumothorax. Results: The final analysis included radiographs from 985 patients (mean [SD] age, 60.8 [19.0] years; 436 [44.3%] female patients), including 307 patients with nontension pneumothorax, 128 patients with tension pneumothorax, and 550 patients without pneumothorax. The AI model detected any pneumothorax with an AUC of 0.979 (95% CI, 0.970-0.987), sensitivity of 94.3% (95% CI, 92.0%-96.3%), and specificity of 92.0% (95% CI, 89.6%-94.2%) and tension pneumothorax with an AUC of 0.987 (95% CI, 0.980-0.992), sensitivity of 94.5% (95% CI, 90.6%-97.7%), and specificity of 95.3% (95% CI, 93.9%-96.6%). Conclusions and Relevance: These findings suggest that the assessed AI model accurately detected pneumothorax and tension pneumothorax in this chest radiograph data set. The model's use in the clinical workflow could lead to earlier identification and improved care for patients with pneumothorax.


Subject(s)
Deep Learning, Pneumothorax, Humans, Female, Adolescent, Adult, Middle Aged, Male, Pneumothorax/diagnostic imaging, Radiography, Thoracic, Artificial Intelligence, Retrospective Studies, Radiography
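The study above reports AUCs with 95% CIs. It does not state how its intervals were derived; a case-level bootstrap, sketched below on synthetic labels and scores, is one common way to obtain such an interval.

```python
# Bootstrap 95% CI for AUC on synthetic labels and scores.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)                                    # synthetic ground truth
y_score = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=1000), 0, 1)  # synthetic model scores

boot_aucs = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), size=len(y_true))   # resample cases with replacement
    if len(np.unique(y_true[idx])) < 2:                    # skip resamples with one class only
        continue
    boot_aucs.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(boot_aucs, [2.5, 97.5])
print(f"AUC {roc_auc_score(y_true, y_score):.3f} (bootstrap 95% CI {lo:.3f}-{hi:.3f})")
```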
12.
Diagnostics (Basel); 12(10), 2022 Sep 30.
Article in English | MEDLINE | ID: mdl-36292071

ABSTRACT

BACKGROUND: Missed findings in chest X-ray interpretation are common and can have serious consequences. METHODS: Our study included 2407 chest radiographs (CXRs) acquired at three Indian and five US sites. To identify CXRs reported as normal, we used a proprietary radiology report search engine based on natural language processing (mPower, Nuance). Two thoracic radiologists reviewed all CXRs and recorded the presence and clinical significance of abnormal findings on a 5-point scale (1 = not important; 5 = critical importance). All CXRs were processed with the AI model (Qure.ai), and its outputs were recorded for the presence of findings. Data were analyzed to obtain the area under the ROC curve (AUC). RESULTS: Of the 410 CXRs (410/2407, 18.9%) with unreported or missed findings, 312 (312/410, 76.1%) had clinically important findings: pulmonary nodules (n = 157), consolidation (60), linear opacities (37), mediastinal widening (21), hilar enlargement (17), pleural effusions (11), rib fractures (6), and pneumothoraces (3). AI detected 69 missed findings (69/131, 53%) with an AUC of up to 0.935. The AI model was generalizable across different sites, geographic locations, patient genders, and age groups. CONCLUSION: A substantial number of important CXR findings are missed; the AI model can help identify and reduce the frequency of important missed findings in a generalizable manner.

14.
Diagnostics (Basel); 12(9), 2022 Aug 28.
Article in English | MEDLINE | ID: mdl-36140488

ABSTRACT

Purpose: We assessed whether a CXR AI algorithm was able to detect missed or mislabeled chest radiograph (CXR) findings in radiology reports. Methods: We queried a multi-institutional radiology report search database of 13 million reports to identify all CXR reports with addenda from 1999-2021. Of the 3469 CXR reports with an addendum, a thoracic radiologist excluded those whose addenda were created for typographic errors, the wrong report template, missing sections, or uninterpreted signoffs. The remaining reports (279 patients) contained addenda with errors related to side discrepancies or missed findings such as pulmonary nodules, consolidation, pleural effusions, pneumothorax, and rib fractures. All CXRs were processed with an AI algorithm. Descriptive statistics were performed to determine the sensitivity, specificity, and accuracy of the AI in detecting missed or mislabeled findings. Results: The AI had high sensitivity (96%), specificity (100%), and accuracy (96%) for detecting all missed and mislabeled CXR findings. The corresponding finding-specific statistics (sensitivity, specificity, accuracy) were: nodules (96%, 100%, 96%), pneumothorax (84%, 100%, 85%), pleural effusion (100%, 17%, 67%), consolidation (98%, 100%, 98%), and rib fractures (87%, 100%, 94%). Conclusions: The CXR AI could accurately detect mislabeled and missed findings. Clinical Relevance: The CXR AI can reduce the frequency of errors in the detection and side-labeling of radiographic findings.

15.
JAMA Netw Open; 5(8): e2229289, 2022 Aug 1.
Article in English | MEDLINE | ID: mdl-36044215

ABSTRACT

Importance: The efficient and accurate interpretation of radiologic images is paramount. Objective: To evaluate whether a deep learning-based artificial intelligence (AI) engine used concurrently can improve reader performance and efficiency in interpreting chest radiograph abnormalities. Design, Setting, and Participants: This multicenter cohort study was conducted from April to November 2021 and involved radiologists, including attending radiologists, thoracic radiology fellows, and residents, who independently participated in 2 observer performance test sessions. The sessions included a reading session with AI and a session without AI, in a randomized crossover manner with a 4-week washout period in between. The AI produced a heat map and the image-level probability of the presence of the referable lesion. The data used were collected at 2 quaternary academic hospitals in Boston, Massachusetts: Beth Israel Deaconess Medical Center (The Medical Information Mart for Intensive Care Chest X-Ray [MIMIC-CXR]) and Massachusetts General Hospital (MGH). Main Outcomes and Measures: The ground truths for the labels were created via consensus reading by 2 thoracic radiologists. Each reader documented their findings in a customized report template, in which the 4 target chest radiograph findings and the reader's confidence in the presence of each finding were recorded. The time taken for reporting each chest radiograph was also recorded. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC) were calculated for each target finding. Results: A total of 6 radiologists (2 attending radiologists, 2 thoracic radiology fellows, and 2 residents) participated in the study. The study involved a total of 497 frontal chest radiographs from adult patients with and without 4 target findings (pneumonia, nodule, pneumothorax, and pleural effusion): 247 from the MIMIC-CXR data set (demographic data for patients were not available) and 250 from MGH (mean [SD] age, 63 [16] years; 133 men [53.2%]). The target findings were found in 351 of 497 chest radiographs. The AI was associated with higher sensitivity for all findings compared with the readers (nodule, 0.816 [95% CI, 0.732-0.882] vs 0.567 [95% CI, 0.524-0.611]; pneumonia, 0.887 [95% CI, 0.834-0.928] vs 0.673 [95% CI, 0.632-0.714]; pleural effusion, 0.872 [95% CI, 0.808-0.921] vs 0.889 [95% CI, 0.862-0.917]; pneumothorax, 0.988 [95% CI, 0.932-1.000] vs 0.792 [95% CI, 0.756-0.827]). AI-aided interpretation was associated with significantly improved reader sensitivities for all target findings, without negative impacts on the specificity. Overall, the AUROCs of readers improved for all 4 target findings, with significant improvements in detection of pneumothorax and nodule. The reporting time was 10% lower with AI than without AI (40.8 seconds without AI vs 36.9 seconds with AI; difference, 3.9 seconds; 95% CI, 2.9-5.2 seconds; P < .001). Conclusions and Relevance: These findings suggest that AI-aided interpretation was associated with improved reader performance and efficiency for identifying major thoracic findings on a chest radiograph.


Subject(s)
Deep Learning, Pleural Effusion, Pneumonia, Pneumothorax, Adult, Artificial Intelligence, Cohort Studies, Humans, Male, Middle Aged, Pneumonia/diagnostic imaging
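The study above reports a per-case reporting-time difference with a 95% CI. It does not specify its statistical test; a paired comparison on simulated reading times, as sketched below, is one way such a difference and interval can be produced.

```python
# Paired comparison of simulated reporting times with and without AI assistance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
time_without_ai = rng.normal(40.8, 8.0, size=497)                 # seconds, simulated
time_with_ai = time_without_ai - rng.normal(3.9, 2.0, size=497)   # simulated per-case speed-up

diff = time_without_ai - time_with_ai
t_stat, p_value = stats.ttest_rel(time_without_ai, time_with_ai)
ci = stats.t.interval(0.95, df=len(diff) - 1, loc=diff.mean(), scale=stats.sem(diff))
print(f"mean difference {diff.mean():.1f} s (95% CI {ci[0]:.1f}-{ci[1]:.1f} s), p = {p_value:.2g}")
```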
16.
Radiology; 305(3): 555-563, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35916673

ABSTRACT

As the role of artificial intelligence (AI) in clinical practice evolves, governance structures oversee the implementation, maintenance, and monitoring of clinical AI algorithms to enhance quality, manage resources, and ensure patient safety. This article establishes a framework for the infrastructure required for clinical AI implementation and presents a road map for governance. The road map answers four key questions: Who decides which tools to implement? What factors should be considered when assessing an application for implementation? How should applications be implemented in clinical practice? Finally, how should tools be monitored and maintained after clinical implementation? Among the many challenges for the implementation of AI in clinical practice, devising flexible governance structures that can quickly adapt to a changing environment will be essential to meeting quality patient care and practice improvement objectives.


Subject(s)
Artificial Intelligence, Radiology, Humans, Radiography, Algorithms, Quality of Health Care
17.
Diagnostics (Basel); 12(8), 2022 Jul 30.
Article in English | MEDLINE | ID: mdl-36010194

ABSTRACT

(1) Background: Optimal anatomic coverage is important for radiation dose optimization. We trained and tested two deep learning (DL) algorithms on a machine vision tool library platform (Cognex Vision Pro Deep Learning software) to recognize anatomic landmarks and classify chest CT scan length as optimal, under-scanned, or over-scanned. (2) Methods: To test our hypothesis, we performed a study with 428 consecutive chest CT examinations (mean age 70 ± 14 years; male:female 190:238) performed at one of four hospitals. CT examinations from two hospitals were used to train the DL classification algorithms to identify lung apices and bases. The developed algorithms were then tested on data from the remaining two hospitals. For each CT, we recorded the scan lengths above and below the lung apices and bases. Model performance was assessed with receiver operating characteristic (ROC) analysis. (3) Results: The two DL models for the lung apices and bases had high sensitivity, specificity, accuracy, and areas under the curve (AUC) for identifying under-scanning (100%, 99%, 99%, and 0.999 (95% CI 0.996-1.000)) and over-scanning (99%, 99%, 99%, and 0.998 (95% CI 0.992-1.000)). (4) Conclusions: Our DL models can accurately identify markers of missing anatomic coverage and over-scanning in chest CT.
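Once the lung apices and bases have been localized, the coverage labels described above amount to comparing the scan's z-extent against those landmarks. The sketch below illustrates one such rule; the tolerance and coordinates are illustrative assumptions, not the study's values.

```python
# Toy coverage rule from detected landmark positions (z increases toward the head, in mm).
def classify_coverage(scan_top_mm: float, scan_bottom_mm: float,
                      apex_mm: float, base_mm: float,
                      margin_mm: float = 10.0) -> str:
    if scan_top_mm < apex_mm or scan_bottom_mm > base_mm:
        return "under-scanned"    # anatomy cut off at the top or bottom
    if (scan_top_mm - apex_mm) > margin_mm or (base_mm - scan_bottom_mm) > margin_mm:
        return "over-scanned"     # excess coverage beyond the lungs
    return "optimal"

print(classify_coverage(scan_top_mm=320.0, scan_bottom_mm=-10.0,
                        apex_mm=300.0, base_mm=0.0))   # -> over-scanned (20 mm above the apex)
```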

18.
Medicine (Baltimore); 101(29): e29587, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35866818

ABSTRACT

To tune and test the generalizability of a deep learning-based model for assessment of COVID-19 lung disease severity on chest radiographs (CXRs) from different patient populations. A published convolutional Siamese neural network-based model previously trained on hospitalized patients with COVID-19 was tuned using 250 outpatient CXRs. This model produces a quantitative measure of COVID-19 lung disease severity (pulmonary x-ray severity (PXS) score). The model was evaluated on CXRs from 4 test sets, including 3 from the United States (patients hospitalized at an academic medical center (N = 154), patients hospitalized at a community hospital (N = 113), and outpatients (N = 108)) and 1 from Brazil (patients at an academic medical center emergency department (N = 303)). Radiologists from both countries independently assigned reference standard CXR severity scores, which were correlated with the PXS scores as a measure of model performance (Pearson R). The Uniform Manifold Approximation and Projection (UMAP) technique was used to visualize the neural network results. Tuning the deep learning model with outpatient data showed high model performance in 2 United States hospitalized patient datasets (R = 0.88 and R = 0.90, compared to baseline R = 0.86). Model performance was similar, though slightly lower, when tested on the United States outpatient and Brazil emergency department datasets (R = 0.86 and R = 0.85, respectively). UMAP showed that the model learned disease severity information that generalized across test sets. A deep learning model that extracts a COVID-19 severity score on CXRs showed generalizable performance across multiple populations from 2 continents, including outpatients and hospitalized patients.


Subject(s)
COVID-19, Deep Learning, COVID-19/diagnostic imaging, Humans, Lung, Radiography, Thoracic/methods, Radiologists
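Model performance in the study above is summarized as a Pearson correlation between model PXS scores and reference radiologist severity scores. A minimal sketch on fabricated score pairs:

```python
# Pearson correlation between reference severity scores and model scores (toy values).
import numpy as np
from scipy.stats import pearsonr

radiologist_scores = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)        # reference (toy)
model_pxs_scores = np.array([0.4, 1.2, 1.8, 3.4, 3.9, 5.2, 5.8, 7.5, 7.9, 9.3])   # model output (toy)

r, p_value = pearsonr(radiologist_scores, model_pxs_scores)
print(f"Pearson R = {r:.2f} (p = {p_value:.3g})")
```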
19.
Semin Neurol; 42(1): 39-47, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35576929

ABSTRACT

Artificial intelligence is already innovating in the provision of neurologic care. This review explores key artificial intelligence concepts; their application to neurologic diagnosis, prognosis, and treatment; and challenges that await their broader adoption. The development of new diagnostic biomarkers, individualization of prognostic information, and improved access to treatment are among the plethora of possibilities. These advances, however, reflect only the tip of the iceberg for the ways in which artificial intelligence may transform neurologic care in the future.


Subject(s)
Artificial Intelligence, Neurology, Humans, Prognosis
20.
PLoS One; 17(4): e0267213, 2022.
Article in English | MEDLINE | ID: mdl-35486572

ABSTRACT

A standardized, objective evaluation method is needed to compare machine learning (ML) algorithms as these tools become available for clinical use. Therefore, we designed, built, and tested an evaluation pipeline with the goal of normalizing performance measurement of independently developed algorithms, using a common test dataset of our clinical imaging. Three vendor applications for detecting solid, part-solid, and ground-glass lung nodules in chest CT examinations were assessed in this retrospective study using our data-preprocessing and algorithm assessment chain. The pipeline included tools for image cohort creation and de-identification; report and image annotation for ground-truth labeling; server partitioning to receive vendor "black box" algorithms and to enable model testing on our internal clinical data (100 chest CTs with 243 nodules) from within our security firewall; model validation and result visualization; and performance assessment calculating algorithm recall, precision, and receiver operating characteristic curves (ROC). Algorithm true positives, false positives, false negatives, recall, and precision for detecting lung nodules were as follows: Vendor-1 (194, 23, 49, 0.80, 0.89); Vendor-2 (182, 270, 61, 0.75, 0.40); Vendor-3 (75, 120, 168, 0.32, 0.39). The AUCs for detection of solid (0.61-0.74), ground-glass (0.66-0.86), and part-solid (0.52-0.86) nodules varied among the three vendors. Our ML model validation pipeline enabled testing of multi-vendor algorithms within the institutional firewall. The wide variation in algorithm performance for detection as well as classification of lung nodules justifies the premise of a standardized, objective ML algorithm evaluation process.


Subject(s)
Lung Neoplasms, Algorithms, Humans, Lung Neoplasms/diagnosis, Machine Learning, Retrospective Studies, Tomography, X-Ray Computed/methods
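The recall and precision quoted in the study above follow directly from the reported true-positive, false-positive, and false-negative counts. A quick check (the recomputed values closely match the reported figures):

```python
# Recall and precision recomputed from the reported TP, FP, FN counts.
def recall_precision(tp: int, fp: int, fn: int) -> tuple[float, float]:
    return tp / (tp + fn), tp / (tp + fp)

vendors = {"Vendor-1": (194, 23, 49), "Vendor-2": (182, 270, 61), "Vendor-3": (75, 120, 168)}
for name, (tp, fp, fn) in vendors.items():
    rec, prec = recall_precision(tp, fp, fn)
    print(f"{name}: recall {rec:.2f}, precision {prec:.2f}")
```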