Results 1 - 20 of 87
1.
Article in English | MEDLINE | ID: mdl-38806239

ABSTRACT

BACKGROUND AND PURPOSE: Mass effect and vasogenic edema are critical findings on CT of the head. This study compared the accuracy of an artificial intelligence model (Annalise Enterprise CTB) to consensus neuroradiologist interpretations in detecting mass effect and vasogenic edema. MATERIALS AND METHODS: A retrospective standalone performance assessment was conducted on datasets of non-contrast CT head cases acquired between 2016 and 2022 for each finding. The cases were obtained from patients aged 18 years or older from five hospitals in the United States. The positive cases were selected consecutively based on the original clinical reports using natural language processing and manual confirmation. The negative cases were selected by taking the next negative case acquired from the same CT scanner after positive cases. Each case was interpreted independently by up to three neuroradiologists to establish consensus interpretations. Each case was then interpreted by the AI model for the presence of the relevant finding. The neuroradiologists were provided with the entire CT study. The AI model separately received thin (≤1.5 mm) and/or thick (>1.5 and ≤5 mm) axial series. RESULTS: The two cohorts included 818 cases for mass effect and 310 cases for vasogenic edema. The AI model identified mass effect with sensitivity 96.6% (95% CI, 94.9-98.2) and specificity 89.8% (95% CI, 84.7-94.2) for the thin series, and 95.3% (95% CI, 93.5-96.8) and 93.1% (95% CI, 89.1-96.6) for the thick series. It identified vasogenic edema with sensitivity 90.2% (95% CI, 82.0-96.7) and specificity 93.5% (95% CI, 88.9-97.2) for the thin series, and 90.0% (95% CI, 84.0-96.0) and 95.5% (95% CI, 92.5-98.0) for the thick series. The corresponding areas under the curve were at least 0.980. CONCLUSIONS: The assessed AI model accurately identified mass effect and vasogenic edema in this CT dataset. It could assist the clinical workflow by prioritizing interpretation of abnormal cases, which could benefit patients through earlier identification and subsequent treatment. ABBREVIATIONS: AI = artificial intelligence; AUC = area under the curve; CADt = computer-assisted triage devices; FDA = Food and Drug Administration; NPV = negative predictive value; PPV = positive predictive value; SD = standard deviation.
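
A rough illustration of how such accuracy figures are derived: the sketch below computes sensitivity, specificity, and Wilson score 95% confidence intervals from a 2x2 confusion table in Python. The counts are hypothetical, since the abstract reports only summary metrics, so the printed values will not match the study's.

```python
import math

def wilson_ci(successes: int, total: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% interval for a binomial proportion (e.g., sensitivity)."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return center - half, center + half

def sens_spec(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity and specificity with Wilson CIs from confusion-table counts."""
    return (tp / (tp + fn), wilson_ci(tp, tp + fn),
            tn / (tn + fp), wilson_ci(tn, tn + fp))

# Hypothetical counts for illustration only.
sens, sens_ci, spec, spec_ci = sens_spec(tp=570, fn=20, tn=185, fp=21)
print(f"sensitivity {sens:.1%} (95% CI {sens_ci[0]:.1%}-{sens_ci[1]:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_ci[0]:.1%}-{spec_ci[1]:.1%})")
```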

3.
J Med Syst ; 48(1): 41, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38632172

ABSTRACT

Polypharmacy remains an important challenge for patients with extensive medical complexity. Given the primary care shortage and an aging population, effective polypharmacy management is crucial to manage the growing burden of care. The capacity of large language model (LLM)-based artificial intelligence to aid in polypharmacy management has yet to be evaluated. Here, we evaluate ChatGPT's performance in polypharmacy management via its deprescribing decisions in standardized clinical vignettes. We inputted several clinical vignettes, originally from a study of general practitioners' deprescribing decisions, into ChatGPT 3.5, a publicly available LLM, and evaluated its capacity for yes/no binary deprescribing decisions as well as list-based prompts in which the model was prompted to choose which of several medications to deprescribe. We recorded ChatGPT responses to yes/no binary deprescribing prompts and the number and types of medications deprescribed. In yes/no binary deprescribing decisions, ChatGPT universally recommended deprescribing medications regardless of activities of daily living (ADL) status in patients with no underlying cardiovascular disease (CVD) history; in patients with CVD history, ChatGPT's answers varied by technical replicate. The average number of medications deprescribed ranged from 2.67 to 3.67 (out of 7) and did not vary with CVD status, but increased linearly with severity of ADL impairment. Among medication types, ChatGPT preferentially deprescribed pain medications. ChatGPT's deprescribing decisions vary along the axes of ADL status, CVD history, and medication type, indicating some concordance of internal logic between general practitioners and the model. These results indicate that specifically trained LLMs may provide useful clinical support in polypharmacy management for primary care physicians.
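
A minimal sketch of the replicate-prompting setup, assuming the OpenAI Python SDK: the study used the public ChatGPT 3.5 interface, so the client, model name, vignette wording, and prompt format below are illustrative assumptions, not the authors' protocol.

```python
from collections import Counter

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical vignette text; the study's vignettes came from a prior GP survey.
VIGNETTE = (
    "An 80-year-old patient with severe ADL impairment and a history of CVD "
    "takes 7 medications, including a statin. Would you deprescribe the statin? "
    "Answer yes or no."
)

def ask_once() -> str:
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": VIGNETTE}],
    )
    return resp.choices[0].message.content.strip().lower()

# Three technical replicates, mirroring the repeated-prompt design.
answers = Counter("yes" if ask_once().startswith("yes") else "no" for _ in range(3))
print(answers)  # e.g., Counter({'yes': 2, 'no': 1})
```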


Subject(s)
Cardiovascular Diseases , Deprescriptions , General Practitioners , Humans , Aged , Polypharmacy , Artificial Intelligence
5.
J Am Coll Radiol ; 21(2): 329-340, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37196818

ABSTRACT

PURPOSE: To evaluate the real-world performance of two FDA-cleared artificial intelligence (AI)-based computer-aided triage and notification (CADt) detection devices and compare them with the manufacturer-reported performance testing in the instructions for use. MATERIALS AND METHODS: Clinical performance of two FDA-cleared CADt large-vessel occlusion (LVO) devices was retrospectively evaluated at two separate stroke centers. Consecutive "code stroke" CT angiography examinations were included and assessed for patient demographics, scanner manufacturer, presence or absence of a CADt result, the CADt result itself, and LVO in the following vessel segments: internal carotid artery (ICA), horizontal middle cerebral artery (MCA) segment (M1), Sylvian MCA segments after the bifurcation (M2), precommunicating and postcommunicating parts of the cerebral arteries, vertebral artery, and basilar artery. The original radiology report served as the reference standard, and a study radiologist extracted the above data elements from the imaging examination and radiology report. RESULTS: At hospital A, the CADt algorithm manufacturer reports assessment of the intracranial ICA and MCA with sensitivity of 97% and specificity of 95.6%. Real-world performance comprised 704 cases, including 79 in which no CADt result was available. Sensitivity and specificity in the ICA and M1 segments were 85.3% and 91.9%. Sensitivity decreased to 68.5% when M2 segments were included and to 59.9% when all proximal vessel segments were included. At hospital B, the CADt algorithm manufacturer reports sensitivity of 87.8% and specificity of 89.6%, without specifying the vessel segments. Real-world performance comprised 642 cases, including 20 in which no CADt result was available. Sensitivity and specificity in the ICA and M1 segments were 90.7% and 97.9%. Sensitivity decreased to 76.4% when M2 segments were included and to 59.4% when all proximal vessel segments were included. DISCUSSION: Real-world testing of two CADt LVO detection algorithms identified gaps in the detection and communication of potentially treatable LVOs when considering vessels beyond the intracranial ICA and M1 segments, and in cases with absent or uninterpretable data.
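
The drop in sensitivity as more distal or posterior segments count as positives can be made concrete with a toy calculation; the case records below are synthetic stand-ins, since the study data are not public.

```python
# Each record: (occluded segment, or None if no LVO; whether the CADt device flagged it)
CASES = [
    ("ICA", True), ("M1", True), ("M1", False), ("M2", True),
    ("M2", False), ("basilar", False), (None, False), (None, False),
]

def sensitivity(cases, positive_segments):
    """Fraction of LVO-positive cases (under the given definition) that were flagged."""
    flags = [flagged for seg, flagged in cases if seg in positive_segments]
    return sum(flags) / len(flags)

for label, segs in [
    ("ICA + M1", {"ICA", "M1"}),
    ("+ M2", {"ICA", "M1", "M2"}),
    ("all proximal segments", {"ICA", "M1", "M2", "A1", "P1", "vertebral", "basilar"}),
]:
    print(f"{label}: sensitivity {sensitivity(CASES, segs):.0%}")
```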


Subject(s)
Artificial Intelligence , Stroke , Humans , Triage , Retrospective Studies , Stroke/diagnostic imaging , Algorithms , Computers
6.
J Am Coll Radiol ; 21(2): 225-226, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37659452
7.
J Am Coll Radiol ; 21(4): 617-623, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37843483

ABSTRACT

PURPOSE: Medical imaging accounts for 85% of digital health's venture capital funding. As funding grows, the number of artificial intelligence (AI) products is expected to increase commensurately. This study's objective was to project the number of new AI products given the statistical association between historical funding and FDA-approved AI products. METHODS: The study used data from the ACR Data Science Institute for the number of FDA-approved AI products (2008-2022) and data from Rock Health for AI funding (2013-2022). Employing a 6-year lag between funding and product approval, we used linear regression to estimate the association between the number of new products approved in a given year and the lagged funding (ie, product-year funding). Using this statistical relationship, we forecasted the number of new FDA-approved products. RESULTS: The results show that there are 11.33 (95% confidence interval: 7.03-15.64) new AI products for every $1 billion in funding, assuming a 6-year lag between funding and product approval. In 2022 there were 69 new FDA-approved products associated with $4.8 billion in funding. In 2035, product-year funding is projected to reach $30.8 billion, resulting in 350 new products that year. CONCLUSIONS: FDA-approved AI products are expected to grow from 69 in 2022 to 350 in 2035 given the expected funding growth in the coming years. AI is likely to change the practice of diagnostic radiology as new products are developed and integrated into practice. As more AI products are integrated, this may incentivize increased investment in future AI products.
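
The projection arithmetic can be checked directly, and the underlying model is a one-variable linear regression on lagged funding; the funding and approval series below are placeholders rather than Rock Health or ACR data.

```python
import numpy as np

# Check of the abstract's arithmetic: 11.33 products per $1B of lagged funding
# times $30.8B of projected product-year funding gives ~349, i.e., ~350 products.
print(11.33 * 30.8)  # 348.964

# Form of the model: regress approvals in year t on funding in year t - 6.
funding_lagged = np.array([0.9, 1.4, 2.1, 2.9, 3.8, 4.8])  # $B, placeholder values
new_approvals = np.array([20, 26, 37, 45, 56, 69])          # counts, placeholder values
slope, intercept = np.polyfit(funding_lagged, new_approvals, 1)
print(f"{slope:.2f} new products per $1B (illustrative fit)")
```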


Subject(s)
Artificial Intelligence , Capital Financing , Academies and Institutes , Data Science , Investments
8.
JMIR Med Educ ; 9: e51199, 2023 Dec 28.
Article in English | MEDLINE | ID: mdl-38153778

ABSTRACT

The growing presence of large language models (LLMs) in health care applications holds significant promise for innovative advancements in patient care. However, concerns about ethical implications and potential biases have been raised by various stakeholders. Here, we evaluate the ethics of LLMs in medicine along 2 key axes: empathy and equity. We outline the importance of these factors in novel models of care and develop frameworks for addressing these alongside LLM deployment.


Subject(s)
Empathy , Medicine , Humans , Health Facilities , Language , Delivery of Health Care
9.
J Med Internet Res ; 25: e48659, 2023 08 22.
Article in English | MEDLINE | ID: mdl-37606976

ABSTRACT

BACKGROUND: Large language model (LLM)-based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated. OBJECTIVE: This study aimed to evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes. METHODS: We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. Accuracy was measured by the proportion of correct responses to the questions posed within the clinical vignettes tested, as calculated by human scorers. We further conducted linear regression to assess the contributing factors toward ChatGPT's performance on clinical tasks. RESULTS: ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI 67.8%-86.1%) and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=-15.8%; P<.001) and clinical management (β=-7.4%; P=.02) question types. CONCLUSIONS: ChatGPT achieves impressive accuracy in clinical decision-making, with increasing strength as it gains more clinical information at its disposal. In particular, ChatGPT demonstrates the greatest accuracy in tasks of final diagnosis as compared to initial diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT's training data set.
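
The β coefficients suggest an ordinary least squares model of accuracy on question type relative to a general-knowledge baseline; a sketch of that specification using statsmodels, on invented per-question scores rather than the study's data:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented per-question scores; the study's scored responses are not public.
df = pd.DataFrame({
    "score": [0.90, 0.85, 0.70, 0.75, 0.80, 0.60, 0.65, 0.88, 0.72, 0.78],
    "qtype": ["general", "general", "differential", "differential", "management",
              "differential", "management", "general", "differential", "management"],
})

# Dummy-coded question types with "general" as the reference category.
model = smf.ols("score ~ C(qtype, Treatment(reference='general'))", data=df).fit()
print(model.params)  # negative coefficients play the role of the reported betas
```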


Subject(s)
Artificial Intelligence , Humans , Clinical Decision-Making , Organizations , Workflow , User-Centered Design
10.
J Am Coll Radiol ; 20(10): 990-997, 2023 10.
Article in English | MEDLINE | ID: mdl-37356806

ABSTRACT

OBJECTIVE: Despite rising popularity and performance, studies evaluating the use of large language models for clinical decision support are lacking. Here, we evaluate the capacity of ChatGPT-3.5 and ChatGPT-4 (Generative Pre-trained Transformer; OpenAI, San Francisco, California) for clinical decision support in radiology via the identification of appropriate imaging services for two important clinical presentations: breast cancer screening and breast pain. METHODS: We compared ChatGPT's responses to the ACR Appropriateness Criteria for breast pain and breast cancer screening. Our prompt formats included an open-ended (OE) and a select-all-that-apply (SATA) format. Scoring criteria evaluated whether proposed imaging modalities were in accordance with ACR guidelines. Three replicate entries were conducted for each prompt, and their average was used to determine final scores. RESULTS: Both ChatGPT-3.5 and ChatGPT-4 achieved an average OE score of 1.830 (out of 2) for breast cancer screening prompts. ChatGPT-3.5 achieved a SATA average percentage correct of 88.9%, compared with ChatGPT-4's average percentage correct of 98.4% for breast cancer screening prompts. For breast pain, ChatGPT-3.5 achieved an average OE score of 1.125 (out of 2) and a SATA average percentage correct of 58.3%, as compared with ChatGPT-4's average OE score of 1.666 (out of 2) and SATA average percentage correct of 77.7%. DISCUSSION: Our results demonstrate the feasibility of using large language models like ChatGPT for radiologic decision making, with the potential to improve clinical workflow and promote responsible use of radiology services. More use cases and greater accuracy are necessary to evaluate and implement such tools.
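
A minimal sketch of the SATA scoring logic, assuming an answer key derived from the ACR Appropriateness Criteria: percent correct per replicate, averaged over three replicates. The key and responses below are invented for illustration.

```python
# True means the modality is appropriate per the (hypothetical) ACR-derived key.
ANSWER_KEY = {"mammography": True, "ultrasound": True, "MRI": False, "CT": False}

def sata_percent_correct(selected: set) -> float:
    """Fraction of modalities whose selection status matches the answer key."""
    return sum((m in selected) == ok for m, ok in ANSWER_KEY.items()) / len(ANSWER_KEY)

replicates = [
    {"mammography", "ultrasound"},
    {"mammography"},
    {"mammography", "ultrasound", "MRI"},
]
average = sum(map(sata_percent_correct, replicates)) / len(replicates)
print(f"average percent correct: {average:.1%}")
```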


Subject(s)
Breast Neoplasms , Mastodynia , Radiology , Humans , Female , Breast Neoplasms/diagnostic imaging , Decision Making
11.
Radiology ; 307(5): e222044, 2023 06.
Article in English | MEDLINE | ID: mdl-37219444

ABSTRACT

Radiologic tests often contain rich imaging data not relevant to the clinical indication. Opportunistic screening refers to the practice of systematically leveraging these incidental imaging findings. Although opportunistic screening can apply to imaging modalities such as conventional radiography, US, and MRI, most attention to date has focused on body CT by using artificial intelligence (AI)-assisted methods. Body CT represents an ideal high-volume modality whereby a quantitative assessment of tissue composition (eg, bone, muscle, fat, and vascular calcium) can provide valuable risk stratification and help detect unsuspected presymptomatic disease. The emergence of "explainable" AI algorithms that fully automate these measurements could eventually lead to their routine clinical use. Potential barriers to widespread implementation of opportunistic CT screening include the need for buy-in from radiologists, referring providers, and patients. Standardization of acquiring and reporting measures is needed, in addition to expanded normative data according to age, sex, and race and ethnicity. Regulatory and reimbursement hurdles are not insurmountable but pose substantial challenges to commercialization and clinical use. Through demonstration of improved population health outcomes and cost-effectiveness, these opportunistic CT-based measures should be attractive to both payers and health care systems as value-based reimbursement models mature. If highly successful, opportunistic screening could eventually justify a practice of standalone "intended" CT screening.


Subject(s)
Artificial Intelligence , Radiology , Humans , Algorithms , Radiologists , Mass Screening/methods , Radiology/methods
12.
Acad Radiol ; 30(12): 2921-2930, 2023 12.
Article in English | MEDLINE | ID: mdl-37019698

ABSTRACT

RATIONALE AND OBJECTIVES: Suboptimal chest radiographs (CXR) can limit interpretation of critical findings. Radiologist-trained AI models were evaluated for differentiating suboptimal (sCXR) and optimal (oCXR) chest radiographs. MATERIALS AND METHODS: Our IRB-approved study included 3278 CXRs from adult patients (mean age 55 ± 20 years) identified from a retrospective search of radiology reports at 5 sites. A chest radiologist reviewed all CXRs for the cause of suboptimality. The de-identified CXRs were uploaded into an AI server application for training and testing 5 AI models. The training set consisted of 2202 CXRs (n = 807 oCXR; n = 1395 sCXR), while 1076 CXRs (n = 729 sCXR; n = 347 oCXR) were used for testing. Data were analyzed with the area under the curve (AUC) for the models' ability to classify oCXR and sCXR correctly. RESULTS: For two-class classification into sCXR or oCXR across all sites, the AI model identified missing anatomy with 78% sensitivity, 95% specificity, 91% accuracy, and an AUC of 0.87 (95% CI 0.82-0.92). AI identified obscured thoracic anatomy with 91% sensitivity, 97% specificity, 95% accuracy, and an AUC of 0.94 (95% CI 0.90-0.97), and inadequate exposure with 90% sensitivity, 93% specificity, 92% accuracy, and an AUC of 0.91 (95% CI 0.88-0.95). The presence of low lung volume was identified with 96% sensitivity, 92% specificity, 93% accuracy, and an AUC of 0.94 (95% CI 0.92-0.96). The sensitivity, specificity, accuracy, and AUC of AI in identifying patient rotation were 92%, 96%, 95%, and 0.94 (95% CI 0.91-0.98), respectively. CONCLUSION: Radiologist-trained AI models can accurately classify optimal and suboptimal CXRs. Such AI models at the front end of radiographic equipment could enable radiographers to repeat sCXRs when necessary.
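
For reference, the AUC metric reported throughout can be computed from predicted probabilities with scikit-learn; the labels and scores here are synthetic.

```python
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]   # 1 = suboptimal (sCXR), 0 = optimal (oCXR)
y_score = [0.92, 0.81, 0.67, 0.40, 0.12, 0.75, 0.55, 0.88, 0.30, 0.22]  # model outputs

print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
fpr, tpr, thresholds = roc_curve(y_true, y_score)  # operating points along the curve
```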


Subject(s)
Lung , Radiography, Thoracic , Adult , Humans , Middle Aged , Aged , Lung/diagnostic imaging , Retrospective Studies , Radiography , Radiologists
13.
medRxiv ; 2023 Feb 26.
Article in English | MEDLINE | ID: mdl-36865204

ABSTRACT

IMPORTANCE: Large language model (LLM) artificial intelligence (AI) chatbots direct the power of large training datasets towards successive, related tasks, as opposed to single-ask tasks, for which AI already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as virtual physicians, has not yet been evaluated. OBJECTIVE: To evaluate ChatGPT's capacity for ongoing clinical decision support via its performance on standardized clinical vignettes. DESIGN: We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. SETTING: ChatGPT, a publicly available LLM. PARTICIPANTS: Clinical vignettes featured hypothetical patients with a variety of age and gender identities and a range of Emergency Severity Indices (ESIs) based on initial clinical presentation. EXPOSURES: MSD Clinical Manual vignettes. MAIN OUTCOMES AND MEASURES: We measured the proportion of correct responses to the questions posed within the clinical vignettes tested. RESULTS: ChatGPT achieved 71.7% (95% CI, 69.3% to 74.1%) accuracy overall across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI, 67.8% to 86.1%), and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI, 54.2% to 66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=-15.8%, p<0.001) and clinical management (β=-7.4%, p=0.02) type questions. CONCLUSIONS AND RELEVANCE: ChatGPT achieves impressive accuracy in clinical decision making, with particular strengths emerging as it has more clinical information at its disposal.

14.
J Am Coll Radiol ; 20(3): 352-360, 2023 03.
Article in English | MEDLINE | ID: mdl-36922109

ABSTRACT

The multitude of artificial intelligence (AI)-based solutions, vendors, and platforms poses a challenging proposition to an already complex clinical radiology practice. Beyond assessing and ensuring acceptable local performance and workflow fit to improve imaging services, deploying AI tools requires multiple stakeholders, including clinical, technical, and financial, who collaborate to move potentially deployable applications to full clinical deployment in a structured and efficient manner. Postdeployment monitoring and surveillance of such tools require an infrastructure that ensures proper and safe use. Herein, the authors describe their experience and framework for implementing and supporting the use of AI applications in the radiology workflow.


Subject(s)
Artificial Intelligence , Radiology , Radiology/methods , Diagnostic Imaging , Workflow , Commerce
15.
Diagnostics (Basel) ; 13(3)2023 Jan 23.
Article in English | MEDLINE | ID: mdl-36766516

ABSTRACT

Chest radiographs (CXR) are the most commonly performed imaging tests and rank high among radiographic exams with suboptimal quality and high rejection rates. Suboptimal CXRs can cause delays in patient care and pitfalls in radiographic interpretation, given their ubiquitous use in the diagnosis and management of acute and chronic ailments. Suboptimal CXRs can also compound interpretive error and contribute to high inter-radiologist variation in CXR interpretation. While advances in radiography, with transitions to computerized and digital radiography, have reduced the prevalence of suboptimal exams, the problem persists. Advances in machine learning and artificial intelligence (AI), particularly in the radiographic acquisition, triage, and interpretation of CXRs, could offer a plausible solution for suboptimal CXRs. We review the literature on suboptimal CXRs and the potential use of AI to help reduce their prevalence.

16.
Diagnostics (Basel) ; 13(4)2023 Feb 18.
Article in English | MEDLINE | ID: mdl-36832266

ABSTRACT

Purpose: Motion-impaired CT images can result in limited or suboptimal diagnostic interpretation (with missed or miscalled lesions) and patient recall. We trained and tested an artificial intelligence (AI) model for identifying substantial motion artifacts on CT pulmonary angiography (CTPA) that have a negative impact on diagnostic interpretation. Methods: With IRB approval and HIPAA compliance, we queried our multicenter radiology report database (mPower, Nuance) for CTPA reports between July 2015 and March 2022 for the following terms: "motion artifacts", "respiratory motion", "technically inadequate", and "suboptimal" or "limited exam". All CTPA reports were from two quaternary care sites (Site A, n = 335; Site B, n = 259) and one community healthcare site (Site C, n = 199). A thoracic radiologist reviewed CT images of all positive hits for motion artifacts (present or absent) and their severity (no diagnostic effect or major diagnostic impairment). Coronal multiplanar images from 793 CTPA exams were de-identified and exported offline into an AI model-building prototype (Cognex Vision Pro, Cognex Corporation) to train an AI model to perform two-class classification ("motion" or "no motion") with data from the three sites (70% training dataset, n = 554; 30% validation dataset, n = 239). Separately, data from Site A and Site C were used for training and validation; testing was performed on the Site B CTPA exams. A five-fold repeated cross-validation was performed to evaluate model performance with accuracy and receiver operating characteristic (ROC) analysis. Results: Among the CTPA images from 793 patients (mean age 63 ± 17 years; 391 males, 402 females), 372 had no motion artifacts and 421 had substantial motion artifacts. The average performance of the AI model after five-fold repeated cross-validation for the two-class classification was 94% sensitivity, 91% specificity, 93% accuracy, and an area under the ROC curve (AUC) of 0.93 (95% CI 0.89-0.97). Conclusion: The AI model used in this study can successfully identify CTPA exams with motion artifacts that limit diagnostic interpretation in multicenter training and test datasets. Clinical relevance: The AI model can help alert technologists to the presence of substantial motion artifacts on CTPA, so that a repeat image acquisition can salvage diagnostic information.
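
A sketch of the five-fold repeated cross-validation protocol in scikit-learn; the classifier, feature matrix, and number of repeats are assumptions standing in for the Cognex prototype's internals, with only the class counts (372 vs 421) taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(793, 16))          # placeholder image-derived features
y = np.array([0] * 372 + [1] * 421)     # 0 = no motion, 1 = substantial motion

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         scoring="roc_auc", cv=cv)
print(f"mean AUC across repeated folds: {scores.mean():.2f}")
```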

18.
Sci Rep ; 13(1): 189, 2023 01 05.
Article in English | MEDLINE | ID: mdl-36604467

ABSTRACT

Non-contrast head CT (NCCT) is extremely insensitive for early (< 3-6 h) acute infarct identification. We developed a deep learning model that detects and delineates suspected early acute infarcts on NCCT, using diffusion MRI as ground truth (3566 NCCT/MRI training patient pairs). The model substantially outperformed 3 expert neuroradiologists on a test set of 150 CT scans of patients who were potential candidates for thrombectomy (60 stroke-negative, 90 stroke-positive with middle cerebral artery territory infarcts only), with sensitivity 96% (specificity 72%) for the model versus 61-66% (specificity 90-92%) for the experts; model infarct volume estimates also strongly correlated with those of diffusion MRI (r² > 0.98). When this 150 CT test set was expanded to a total of 364 CT scans with a more heterogeneous distribution of infarct locations (94 stroke-negative, 270 stroke-positive with mixed-territory infarcts), model sensitivity was 97% and specificity 99% for detection of infarcts larger than the 70 mL volume threshold used for patient selection in several major randomized controlled trials of thrombectomy treatment.
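
Two of the reported quantities, infarct volume from a segmentation mask and the r² agreement with MRI volumes, reduce to short computations; the mask and paired volumes below are synthetic.

```python
import numpy as np

def infarct_volume_ml(mask: np.ndarray, voxel_mm3: float) -> float:
    """Lesion volume in mL from a binary mask and per-voxel volume in mm^3."""
    return mask.sum() * voxel_mm3 / 1000.0

mask = np.zeros((40, 40, 20), dtype=bool)
mask[10:30, 10:30, 5:15] = True                      # toy 20 x 20 x 10 voxel lesion
print(f"{infarct_volume_ml(mask, voxel_mm3=0.45 * 0.45 * 5.0):.1f} mL")

ct_volumes = np.array([12.0, 33.5, 71.2, 90.4, 120.8])    # model estimates, mL
mri_volumes = np.array([11.4, 35.0, 69.8, 94.1, 118.5])   # diffusion MRI, mL
r = np.corrcoef(ct_volumes, mri_volumes)[0, 1]
print(f"r^2 = {r**2:.3f}")
print("exceeds 70 mL trial threshold:", ct_volumes > 70.0)
```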


Subject(s)
Deep Learning , Stroke , Humans , Tomography, X-Ray Computed , Stroke/diagnostic imaging , Magnetic Resonance Imaging , Infarction, Middle Cerebral Artery
19.
Clin Imaging ; 95: 47-51, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36610270

ABSTRACT

PURPOSE: To assess the feasibility of automated segmentation and measurement of tracheal collapsibility for detecting tracheomalacia on inspiratory and expiratory chest CT images. METHODS: Our study included 123 patients (age 67 ± 11 years; female:male 69:54) who underwent clinically indicated chest CT examinations in both inspiration and expiration phases. A thoracic radiologist measured the anteroposterior dimension of the trachea on inspiration and expiration phase images at the level of maximum collapsibility or the aortic arch (in the absence of luminal change). Separately, another investigator processed the inspiratory and expiratory DICOM CT images with the Airway Segmentation component of a commercial COPD software (IntelliSpace Portal, Philips Healthcare). Upon segmentation, the software automatically estimated average lumen diameter (in mm) and lumen area (in mm²), both along the entire length of the trachea and at the level of the aortic arch. Data were analyzed with independent t-tests and the area under the receiver operating characteristic curve (AUC). RESULTS: Of the 123 patients, 48 had tracheomalacia and 75 did not. Ratios of inspiration-to-expiration average lumen area and lumen diameter over the entire tracheal length had the highest AUC, 0.93 (95% CI = 0.88-0.97), for differentiating the presence and absence of tracheomalacia. A decrease of ≥25% in average lumen diameter had a sensitivity of 82% and a specificity of 87% for detecting tracheomalacia. A decrease of ≥40% in average lumen area had a sensitivity and specificity of 86%. CONCLUSION: Automated segmentation and measurement of tracheal dimensions over the entire tracheal length is more accurate than a single-level measurement for detecting tracheomalacia.
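
The collapsibility rule itself is a one-line percentage; a sketch with hypothetical whole-trachea averages, applying the ≥25% diameter and ≥40% area thresholds from the abstract:

```python
def percent_collapse(inspiration: float, expiration: float) -> float:
    """Percent decrease in a lumen measurement from inspiration to expiration."""
    return 100.0 * (inspiration - expiration) / inspiration

avg_diam_insp, avg_diam_exp = 18.2, 12.9     # mm, hypothetical averages
avg_area_insp, avg_area_exp = 260.0, 150.0   # mm^2, hypothetical averages

diam_drop = percent_collapse(avg_diam_insp, avg_diam_exp)
area_drop = percent_collapse(avg_area_insp, avg_area_exp)
print(f"diameter decrease {diam_drop:.0f}% -> tracheomalacia: {diam_drop >= 25}")
print(f"area decrease {area_drop:.0f}% -> tracheomalacia: {area_drop >= 40}")
```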


Subject(s)
Tracheomalacia , Humans , Male , Female , Middle Aged , Aged , Tracheomalacia/diagnostic imaging , Trachea/diagnostic imaging , Tomography, X-Ray Computed/methods , Sensitivity and Specificity , ROC Curve
20.
Jpn J Radiol ; 41(2): 194-200, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36331701

ABSTRACT

PURPOSE: Knowledge of kidney stone composition can help in patient management; urine composition analysis and dual-energy CT are frequently used to assess stone type. We assessed whether threshold-based stone segmentation and radiomics can determine the composition of kidney stones from single-energy, non-contrast abdomen-pelvis CT. METHODS: With IRB approval, we identified 218 consecutive patients (mean age 64 ± 13 years; male:female 138:80) with kidney stones on non-contrast abdomen-pelvis CT and surgical or biochemical proof of their stone composition. CT examinations were performed on one of seven multidetector-row scanners from four vendors (GE, Philips, Siemens, Toshiba). De-identified CT images were processed with a radiomics prototype (Frontier, Siemens Healthineers) to segment the entire kidney volumes with an AI-based organ segmentation tool. We applied a threshold of 130 HU to isolate stones in the segmented kidneys and to estimate radiomics over the segmented stone volume. A coinvestigator verified kidney stone segmentation and adjusted the volume of interest to include the entire stone volume when necessary. We applied multiple logistic regression tests with precision-recall plots to obtain the area under the curve (AUC) using a built-in R statistical package. RESULTS: The threshold-based stone segmentation successfully isolated kidney stones (uric acid: n = 102 patients; calcium oxalate/phosphate: n = 116 patients) in all patients. Radiomics differentiated between calcium and uric acid stones with an AUC of 0.78 (95% CI 0.73-0.83; p < 0.01), 0.79 sensitivity, and 0.90 specificity, regardless of CT vendor (GE CT: AUC = 0.82, 95% CI 0.740-0.896, p < 0.01; Siemens CT: AUC = 0.77, 95% CI 0.700-0.846, p < 0.01). CONCLUSION: Automated threshold-based stone segmentation and radiomics can differentiate between calcium oxalate/phosphate and urate stones on non-contrast, single-energy abdomen CT.
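
The 130 HU threshold step is straightforward to sketch with NumPy; the volumes here are synthetic arrays rather than DICOM data, and the kidney mask stands in for the AI organ segmentation.

```python
import numpy as np

rng = np.random.default_rng(1)
hu = rng.normal(30, 20, size=(64, 64, 32))    # soft-tissue-like background, in HU
hu[30:34, 30:34, 14:18] = 600                 # toy calcific stone
kidney_mask = np.zeros(hu.shape, dtype=bool)
kidney_mask[20:44, 20:44, 8:24] = True        # stand-in for the organ segmentation

stone_mask = kidney_mask & (hu >= 130)        # threshold used in the study
voxel_mm3 = 0.8 * 0.8 * 1.0
print(f"stone volume: {stone_mask.sum() * voxel_mm3:.1f} mm^3")
# Radiomic features (shape, intensity, texture) would then be computed over stone_mask.
```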


Subject(s)
Calcium Oxalate , Kidney Calculi , Humans , Male , Female , Middle Aged , Aged , Calcium Oxalate/analysis , Uric Acid/analysis , Kidney Calculi/diagnostic imaging , Kidney Calculi/chemistry , Tomography, X-Ray Computed/methods , Oxalates , Phosphates