ABSTRACT
Precision medicine aims to provide personalized care based on individual patient characteristics, rather than guideline-directed therapies for groups of diseases or patient demographics. Images, both radiology- and pathology-derived, are a major source of information on presence, type, and status of disease. Exploring the mathematical relationship of pixels in medical imaging ("radiomics") and cellular-scale structures in digital pathology slides ("pathomics") offers powerful tools for extracting both qualitative and, increasingly, quantitative data. These analytical approaches, however, may be significantly enhanced by applying additional methods arising from fields of mathematics such as differential geometry and algebraic topology that remain underexplored in this context. Geometry's strength lies in its ability to provide precise local measurements, such as curvature, that can be crucial for identifying abnormalities at multiple spatial levels. These measurements can augment the quantitative features extracted in conventional radiomics, leading to more nuanced diagnostics. By contrast, topology serves as a robust shape descriptor, capturing essential features such as connected components and holes. The field of topological data analysis was initially founded to explore the shape of data, with functional network connectivity in the brain being a prominent example. Increasingly, its tools are now being used to explore organizational patterns of physical structures in medical images and digitized pathology slides. By leveraging tools from both differential geometry and algebraic topology, researchers and clinicians may be able to obtain a more comprehensive, multi-layered understanding of medical images and contribute to precision medicine's armamentarium.
Subject(s)
Precision Medicine , Precision Medicine/methods , Humans , Radiology/methods , Image Processing, Computer-Assisted/methods
ABSTRACT
This editorial explores the emerging role of Graph Filtration Learning (GFL) in revolutionizing hepatocellular carcinoma (HCC) imaging analysis. As traditional pixel-based methods reach their limits, GFL offers a novel approach to capture complex topological features in medical images. By representing imaging data as graphs and leveraging persistent homology, GFL unveils new dimensions of information that were previously inaccessible. This paradigm shift holds promise for enhancing HCC diagnosis, treatment planning, and prognostication. We discuss the principles of GFL, its potential applications in HCC imaging, and the challenges in translating this innovative technique into clinical practice.
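Graph Filtration Learning builds on 0-dimensional persistent homology of a graph filtration: edges are added in order of increasing weight, and each time two connected components merge, one component "dies." The bookkeeping behind that idea can be sketched with a union-find structure; this is a minimal illustration of the filtration step only, not the GFL model itself (whose learnable filtration functions and readout layers are beyond a toy example).

```python
class UnionFind:
    """Disjoint-set structure for tracking connected components."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return False
        self.parent[ra] = rb
        return True

def h0_barcode(n_nodes, weighted_edges):
    """0-dimensional persistence barcode of a graph filtration.

    Every node is born at filtration value 0; a component dies at the
    weight of the edge that merges it into another component. The one
    surviving component gets death = inf.
    """
    uf = UnionFind(n_nodes)
    bars = []
    for w, a, b in sorted(weighted_edges):
        if uf.union(a, b):                # merge event: a component dies at w
            bars.append((0.0, w))
    bars.append((0.0, float("inf")))      # the last surviving component
    return bars

# Toy graph: two clusters (0-1-2 and 3-4) joined late by a heavy edge
edges = [(0.1, 0, 1), (0.2, 1, 2), (0.15, 3, 4), (0.9, 2, 3)]
print(h0_barcode(5, edges))
```

The long-lived bar created by the 0.9 edge is exactly the kind of feature a filtration-based model can exploit: it records that the graph consists of two well-separated pieces.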
Subject(s)
Carcinoma, Hepatocellular , Liver Neoplasms , Carcinoma, Hepatocellular/diagnostic imaging , Carcinoma, Hepatocellular/pathology , Carcinoma, Hepatocellular/diagnosis , Humans , Liver Neoplasms/diagnostic imaging , Liver Neoplasms/pathology , Liver Neoplasms/diagnosis , Image Processing, Computer-Assisted/methods , Machine Learning , Algorithms , Image Interpretation, Computer-Assisted/methods
ABSTRACT
In an era of rapid technological progress, this Special Issue aims to provide a comprehensive overview of the state-of-the-art in tomographic imaging [...].
Subject(s)
Tomography , Humans , Tomography/methods , Tomography, X-Ray Computed/methods
ABSTRACT
Topological deep learning (TDL) introduces a novel approach to enhancing diagnostic and monitoring processes for metabolic dysfunction-associated fatty liver disease (MAFLD), a condition that is increasingly prevalent globally and a leading cause of liver transplantation. This editorial explores the integration of topology, a branch of mathematics focused on spatial properties preserved under continuous transformations, with deep learning models to improve the accuracy and efficacy of MAFLD diagnosis and staging from medical imaging. TDL's ability to recognize complex patterns in imaging data that traditional methods might miss can lead to earlier and more precise detection, personalized treatment, and potentially better patient outcomes. Challenges remain, particularly regarding the computational demands and the interpretability of TDL outputs, which necessitate further research and development for clinical application. The potential of TDL to transform the gastroenterological landscape marks a significant step toward the incorporation of advanced mathematical methodologies in medical practice.
ABSTRACT
Generative AI is revolutionizing oncological imaging, enhancing cancer detection and diagnosis. This editorial explores its impact on expanding datasets, improving image quality, and enabling predictive oncology. We discuss ethical considerations and introduce a unique perspective on personalized cancer screening using AI-generated digital twins. This approach could optimize screening protocols, improve early detection, and tailor treatment plans. While challenges remain, generative AI in oncological imaging offers unprecedented opportunities to advance cancer care and improve patient outcomes.
Subject(s)
Artificial Intelligence , Neoplasms , Humans , Neoplasms/diagnosis , Neoplasms/diagnostic imaging , Early Detection of Cancer/methods , Diagnostic Imaging/methods , Precision Medicine/methods
ABSTRACT
Adult ingestion of foreign bodies in the digestive system is a common clinical challenge, often involving mentally impaired individuals, criminals, and drug dealers or occurring accidentally. Encounters with multiple sharp foreign bodies are infrequent and pose significant risks, including gastrointestinal (GI) bleeding, perforation, internal fistulas, and infection. The choice between endoscopy and emergency surgery for removal is contentious, with the less invasive endoscopy typically favored as the first line of management, depending on the foreign body's location and endoscopic accessibility. The current literature on the treatment of numerous sharp foreign bodies is sparse. This case report illustrates the successful endoscopic removal of a large quantity of sharp foreign bodies (35 half blades) from the upper GI tract, utilizing various extraction tools. It also aims to contribute to the existing literature regarding management strategies for ingested sharp foreign bodies. A comprehensive account is provided of the clinical presentation, imaging studies, consultations, and endoscopic procedures performed, culminating in the patient's safe discharge from our facility.
ABSTRACT
To provide accurate predictions, current machine learning-based solutions require large, manually labeled training datasets. We implement persistent homology (PH), a topological tool for studying the pattern of data, to analyze echocardiography-based strain data and differentiate between rare diseases such as constrictive pericarditis (CP) and restrictive cardiomyopathy (RCM). The patient population (retrospectively registered) included those presenting with heart failure due to CP (n = 51), RCM (n = 47), and patients without heart failure symptoms (n = 53). Longitudinal, radial, and circumferential strains/strain rates for left ventricular segments were processed into topological feature vectors using a machine learning PH workflow. In differentiating CP and RCM, the PH workflow model had a ROC AUC of 0.94 (Sensitivity = 92%, Specificity = 81%), compared with the global longitudinal strain (GLS) model AUC of 0.69 (Sensitivity = 65%, Specificity = 66%). In differentiating between all three conditions, the PH workflow model had an AUC of 0.83 (Sensitivity = 68%, Specificity = 84%), compared with the GLS model AUC of 0.68 (Sensitivity = 52%, Specificity = 76%). By employing persistent homology to differentiate the "pattern" of cardiac deformations, our machine-learning approach provides reasonable accuracy when evaluating small datasets and aids in understanding and visualizing patterns of cardiac imaging data in clinically challenging disease states.
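The 0-dimensional persistence underlying such a workflow can be illustrated on a single 1D signal such as a strain curve: sweep a threshold upward, track connected components of the sublevel sets, and pair each local minimum (birth) with the merge value at which its component is absorbed by an older one (death). This is a minimal pure-Python sketch of that idea; the study's actual feature-vector pipeline is not specified in the abstract and is not reproduced here.

```python
def sublevel_persistence_1d(values):
    """(birth, death) pairs of connected components of the sublevel sets
    of a 1D signal — 0-dimensional persistent homology. When two
    components merge, the younger (higher-birth) one dies ("elder rule");
    the global minimum's component never dies (death = inf)."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # process by value
    comp = [None] * n      # component representative (a local-min index)
    birth = {}             # representative -> birth value
    bars = []
    for i in order:
        left = comp[i - 1] if i > 0 else None
        right = comp[i + 1] if i < n - 1 else None
        neighbors = [r for r in (left, right) if r is not None]
        if not neighbors:                      # local minimum: birth
            comp[i] = i
            birth[i] = values[i]
        elif len(neighbors) == 1 or neighbors[0] == neighbors[1]:
            comp[i] = neighbors[0]             # extend an existing component
        else:                                  # merge: younger component dies
            older, younger = sorted(neighbors, key=lambda r: birth[r])
            bars.append((birth[younger], values[i]))
            comp[i] = older
            for j in range(n):                 # relabel the absorbed component
                if comp[j] == younger:
                    comp[j] = older
    survivor = min(birth, key=lambda r: birth[r])
    bars.append((birth[survivor], float("inf")))
    return bars

print(sublevel_persistence_1d([2, 1, 3, 0, 4]))  # [(1, 3), (0, inf)]
```

The short bars of such a barcode capture small oscillations, the long bars capture the dominant troughs of the deformation curve; vectorizing the barcode yields the kind of topological feature vector the abstract describes.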
Subject(s)
Echocardiography , Machine Learning , Humans , Male , Echocardiography/methods , Female , Middle Aged , Rare Diseases/diagnostic imaging , Pericarditis, Constrictive/diagnostic imaging , Pericarditis, Constrictive/diagnosis , Cardiomyopathy, Restrictive/diagnostic imaging , Retrospective Studies , Aged , Heart Ventricles/diagnostic imaging , Heart Ventricles/physiopathology , Heart Failure/diagnostic imaging , Adult
ABSTRACT
BACKGROUND AND PURPOSE: Recent advances in deep learning have shown promising results in medical image analysis and segmentation. However, most brain MRI segmentation models are limited by the size of their datasets and/or the number of structures they can identify. This study evaluates the performance of six advanced deep learning models in segmenting 122 brain structures from T1-weighted MRI scans, aiming to identify the most effective model for clinical and research applications. MATERIALS AND METHODS: 1,510 T1-weighted MRIs were used to compare six deep-learning models for the segmentation of 122 distinct gray matter structures: nnU-Net, SegResNet, SwinUNETR, UNETR, U-Mamba_Bot, and U-Mamba_Enc. Each model was rigorously tested for accuracy using the Dice Similarity Coefficient (DSC) and the 95th percentile Hausdorff Distance (HD95). Additionally, the volume of each structure was calculated and compared between normal control (NC) and Alzheimer's Disease (AD) patients. RESULTS: U-Mamba_Bot achieved the highest performance with a median DSC of 0.9112 [IQR: 0.8957, 0.9250]. nnU-Net achieved a median DSC of 0.9027 [IQR: 0.8847, 0.9205] and had the highest HD95, at 1.392 [IQR: 1.174, 2.029]. All HD95 values were below 3 mm, indicating a strong capability to capture detailed brain structures accurately. Following segmentation, volume calculations were performed, and the resultant volumes of normal controls and AD patients were compared. The volume changes observed in thirteen brain substructures were all consistent with those reported in the existing literature, reinforcing the reliability of the segmentation outputs. CONCLUSIONS: This study underscores the efficacy of U-Mamba_Bot as a robust tool for detailed brain structure segmentation in T1-weighted MRI scans. The congruence of our volumetric analysis with the literature further validates the potential of advanced deep-learning models to enhance the understanding of neurodegenerative diseases such as AD.
Future research should consider larger datasets to validate these findings further and explore the applicability of these models in other neurological conditions. ABBREVIATIONS: AD = Alzheimer's Disease; ADNI = Alzheimer's Disease Neuroimaging Initiative; DSC = Dice Similarity Coefficient; HD95 = the 95th Percentile Hausdorff Distance; IQR = Interquartile Range; NC = Normal Control; SSMs = State-space Sequence Models.
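The Dice Similarity Coefficient used to score these segmentations has a direct set-based definition, DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on flattened binary masks (the convention of returning 1.0 for two empty masks is an assumption; implementations vary):

```python
def dice_coefficient(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks
    (given as flat 0/1 sequences of equal length):
    DSC = 2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 1.0 if size == 0 else 2.0 * inter / size

# Two voxels overlap out of three foreground voxels in each mask
a = [1, 1, 0, 1, 0, 0]
b = [1, 0, 0, 1, 1, 0]
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```

HD95, the companion metric above, instead measures boundary error: the 95th percentile of surface-to-surface distances, which is why values under 3 mm indicate tight boundary agreement even when DSC is already high.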
ABSTRACT
BACKGROUND: Percutaneous endoscopic gastrostomy (PEG) tube placement is generally safe but is associated with a range of complications. Minor complications include infections, granuloma formation, leakage, and blockages, while major complications encompass aspiration pneumonia, hemorrhage, and more serious conditions such as necrotizing fasciitis and colonic fistula. AIM: This study aimed to assess the rate of short-term complications within one month of endoscopic PEG insertion, focusing on their correlation with patient characteristics. METHODOLOGY: This retrospective cohort study analyzed data from patients who underwent PEG insertion between January 2020 and December 2022. It evaluated the incidence of complications in relation to variables such as the indication for the procedure, the patient's immune status, albumin and CRP levels, and the setting of the procedure (inpatient vs. outpatient). RESULTS: The study included 121 patients, with a mean age of 69.73 years, comprising 71 males (58.7%) and 50 females (41.3%). Neurological indications accounted for 64.5% of the cases. Notably, 67.8% of the patients were immunocompromised. Within 30 days of PEG insertion, 16.5% experienced complications, including GI bleeding (4.1%), infection at the PEG site (11.6%), and peritonitis (0.8%). Complications were significantly higher in immunocompromised patients and those with non-neurological indications. Higher serum albumin and lower CRP levels were associated with fewer complications, though the association was not statistically significant. CONCLUSION: The study highlights that gastrostomy site infection is the most common short-term complication following PEG insertion. Immune status and the reason for PEG insertion emerged as key factors influencing the likelihood of complications.
ABSTRACT
Background Gastroparesis, characterized by delayed gastric emptying without mechanical obstruction, is a significant complication, especially in diabetic individuals. It manifests through symptoms such as abdominal bloating, feelings of fullness, and pain. This study investigates the prevalence of gastroparesis among non-diabetic and diabetic patients, exploring associations with demographic data, hemoglobin A1C (HbA1C) levels, and symptoms. Methodology This retrospective, observational, cohort study included patients with gastroparesis symptoms who underwent a nuclear gastric emptying study from January 2021 to April 2023. The study analyzed demographic data, symptoms, and HbA1c levels to identify correlations with delayed gastric emptying. Results Of 157 patients, 34.4% exhibited delayed gastric emptying. Diabetic patients comprised 29.3% of the sample, with a notable disease duration of over 10 years in 77.3% of cases. Symptoms such as nausea, vomiting, epigastric pain, and early satiety were prevalent, with significant associations between delayed emptying and female gender, higher HbA1c, and vomiting. Conclusions Delayed gastric emptying is significantly associated with female gender, elevated HbA1c levels, and vomiting as the presenting symptom. Highlighting the importance of awareness among healthcare providers and the community, the findings encourage collaborative efforts for further gastroparesis research to better understand the predictive factors and mechanisms.
ABSTRACT
Introduction: In the evolving landscape of healthcare and medicine, the merging of extensive medical datasets with the powerful capabilities of machine learning (ML) models presents a significant opportunity for transforming diagnostics, treatments, and patient care. Methods: This research paper delves into the realm of data-driven healthcare, placing a special focus on identifying the most effective ML models for diabetes prediction and uncovering the critical features that aid in this prediction. The prediction performance is analyzed using a variety of ML models, such as Random Forest (RF), XGBoost (XGB), Linear Regression (LR), Gradient Boosting (GB), and Support Vector Machine (SVM), across numerous medical datasets. The study of feature importance is conducted using filter-based and wrapper-based techniques as well as Explainable Artificial Intelligence (Explainable AI). By utilizing Explainable AI techniques, specifically Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), the decision-making process of the models is made transparent, thereby bolstering trust in AI-driven decisions. Results: Features identified by RF in wrapper-based selection and by chi-square in filter-based selection were shown to enhance prediction performance. Notable precision and recall values, reaching up to 0.9, were achieved in predicting diabetes. Discussion: Both approaches assign considerable importance to features like age, family history of diabetes, polyuria, polydipsia, and high blood pressure, which are strongly associated with diabetes. In this age of data-driven healthcare, the research presented here aspires to substantially improve healthcare outcomes.
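The chi-square filter mentioned above scores each feature by how far its observed co-occurrence with the label departs from independence. For a binary feature and binary label this reduces to a Pearson chi-square statistic on a 2x2 contingency table; a self-contained sketch (real pipelines would use a library routine such as scikit-learn's chi2):

```python
def chi_square_score(feature, label):
    """Pearson chi-square statistic for a binary (0/1) feature against a
    binary (0/1) label, computed from the 2x2 contingency table.
    Higher scores indicate stronger association, the basis of
    filter-based feature selection."""
    n = len(feature)
    obs = [[0, 0], [0, 0]]                       # obs[feature][label]
    for f, y in zip(feature, label):
        obs[f][y] += 1
    row = [sum(obs[0]), sum(obs[1])]             # feature marginals
    col = [obs[0][0] + obs[1][0], obs[0][1] + obs[1][1]]  # label marginals
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n       # counts under independence
            if expected > 0:
                chi2 += (obs[i][j] - expected) ** 2 / expected
    return chi2

# Perfect association scores high; an independent feature scores 0
print(chi_square_score([0, 0, 1, 1], [0, 0, 1, 1]))  # 4.0
print(chi_square_score([0, 1, 0, 1], [0, 0, 1, 1]))  # 0.0
```

Ranking features by this score and keeping the top-k is the filter step; the wrapper step (RF-based selection above) instead evaluates subsets by retraining the model.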
ABSTRACT
The application of deep learning (DL) in medicine introduces transformative tools with the potential to enhance prognosis, diagnosis, and treatment planning. However, ensuring transparent documentation is essential for researchers to enhance reproducibility and refine techniques. Our study addresses the unique challenges presented by DL in medical imaging by developing a comprehensive checklist using the Delphi method to enhance reproducibility and reliability in this dynamic field. We compiled a preliminary checklist based on a comprehensive review of existing checklists and relevant literature. A panel of 11 experts in medical imaging and DL assessed these items using Likert scales, with two survey rounds to refine responses and gauge consensus. We also employed the content validity ratio, with a cutoff of 0.59, to determine item face and content validity. Round 1 included a 27-item questionnaire; 12 items demonstrated high consensus for face and content validity and were therefore left out of round 2. Round 2 involved refining the checklist, resulting in an additional 17 items. In the last round, 3 items were deemed non-essential or infeasible, while 2 newly suggested items received unanimous agreement for inclusion, resulting in a final 26-item DL model reporting checklist derived from the Delphi process. The 26-item checklist facilitates the reproducible reporting of DL tools and enables scientists to replicate the study's results.
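The content validity ratio follows Lawshe's formula, CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the panel size. A short sketch showing why, for an 11-expert panel, the 0.59 cutoff amounts to requiring at least 9 essential ratings:

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2).
    Ranges from -1 (no panelist rates the item essential)
    to +1 (every panelist does)."""
    half = n_panelists / 2
    return (n_essential - half) / half

# With an 11-expert panel, 9 "essential" ratings clear the 0.59 cutoff
# (CVR = 0.636), while 8 do not (CVR = 0.455).
for n_e in range(8, 12):
    print(n_e, round(content_validity_ratio(n_e, 11), 3))
```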
Subject(s)
Checklist , Deep Learning , Delphi Technique , Diagnostic Imaging , Humans , Reproducibility of Results , Diagnostic Imaging/methods , Diagnostic Imaging/standards , Surveys and Questionnaires
ABSTRACT
OBJECTIVE: Scar tissue is an identified cause of malignant ventricular arrhythmias in patients with myocardial infarction, which can ultimately lead to cardiac death. We aim to evaluate the left ventricular endocardial scar tissue pattern using Radon descriptor-based machine learning. We performed automated left ventricle (LV) segmentation to find the LV endocardial wall, performed morphological operations, and marked the region of the scar tissue on the endocardial wall of the LV. Motivated by a Radon descriptor-based machine learning approach, patches from cardiac computed tomography (CT) images of 17 patients were used and categorized into "endocardial scar tissue" and "normal tissue" groups. Ten feature vectors are extracted from the patches using Radon descriptors and fed into a traditional machine learning model. RESULTS: The decision tree showed the best performance with 98.07% accuracy. This study is the first attempt to provide a Radon transform-based machine learning method to distinguish patterns between "endocardial scar tissue" and "normal tissue" groups. Our proposed research method could potentially be used in advanced interventions.
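The Radon transform underlying such descriptors integrates image intensity along lines at many angles, turning a 2D patch into a set of 1D projection profiles. As a heavily simplified, dependency-free sketch, the two axis-aligned projections (0° row sums and 90° column sums) already illustrate the idea; a real Radon descriptor would sample many angles (e.g. via scikit-image's radon transform), and the concatenated-projection descriptor here is only illustrative.

```python
def axis_projections(patch):
    """Two line-integral projections of an image patch: row sums
    (the 0-degree projection) and column sums (the 90-degree
    projection), concatenated into one descriptor vector. A full
    Radon transform takes such projections over many angles."""
    rows = [sum(r) for r in patch]          # project along x
    cols = [sum(c) for c in zip(*patch)]    # project along y
    return rows + cols

# A bright vertical streak is invisible in the row projection
# but dominates the column projection.
patch = [
    [0, 9, 0],
    [0, 9, 0],
    [0, 9, 0],
]
print(axis_projections(patch))  # [9, 9, 9, 0, 27, 0]
```

Because texture differences between scar and normal tissue change the shape of these projection profiles, summary features of the profiles can feed a conventional classifier such as the decision tree reported above.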
Subject(s)
Heart Ventricles , Radon , Humans , Heart Ventricles/diagnostic imaging , Cicatrix/diagnostic imaging , Heart , Machine Learning
ABSTRACT
PURPOSE: Distinguishing stage 1-2 adrenocortical carcinoma (ACC) from large, lipid-poor adrenal adenoma (LPAA) via imaging is challenging due to overlapping imaging characteristics. This study investigated the ability of deep learning to distinguish ACC and LPAA on single time-point CT images. METHODS: Retrospective cohort study from 1994 to 2022. Imaging studies of patients with adrenal masses who had adequate CT studies and histology as the reference standard (obtained by adrenal biopsy and/or adrenalectomy) were included, as well as four patients with LPAA determined by stability or regression on follow-up imaging. Forty-eight subjects with pathology-proven, stage 1-2 ACC and 43 subjects with adrenal adenoma >3 cm in size demonstrating a mean non-contrast CT attenuation >20 Hounsfield units centrally were included. We used annotated single time-point contrast-enhanced CT images of these adrenal masses as input to a 3D DenseNet121 model for classification as ACC or LPAA with five-fold cross-validation. For each fold, two checkpoints were reported: highest accuracy with highest sensitivity (accuracy-focused) and highest sensitivity with highest accuracy (sensitivity-focused). RESULTS: We trained a deep learning model (3D DenseNet121) to predict ACC versus LPAA. The sensitivity-focused model achieved a mean accuracy of 87.2% and a mean sensitivity of 100%. The accuracy-focused model achieved a mean accuracy of 91% and a mean sensitivity of 96%. CONCLUSION: Deep learning demonstrates promising results in distinguishing between ACC and large LPAA using single time-point CT images. Before being widely adopted in clinical practice, multicentric and external validation are needed.
Subject(s)
Adenoma , Adrenal Cortex Neoplasms , Adrenal Gland Neoplasms , Adrenocortical Adenoma , Adrenocortical Carcinoma , Deep Learning , Humans , Adrenal Gland Neoplasms/pathology , Retrospective Studies , Sensitivity and Specificity , Adrenocortical Adenoma/pathology , Adrenocortical Carcinoma/pathology , Tomography, X-Ray Computed/methods
ABSTRACT
Machine learning, and especially deep learning, is rapidly gaining acceptance and clinical usage in a wide range of image analysis applications and is regarded as providing high performance in detecting anatomical structures and identification and classification of patterns of disease in medical images. However, there are many roadblocks to the widespread implementation of machine learning in clinical image analysis, including differences in data capture leading to different measurements, high dimensionality of imaging and other medical data, and the black-box nature of machine learning, with a lack of insight into relevant features. Techniques such as radiomics have been used in traditional machine learning approaches to model the mathematical relationships between adjacent pixels in an image and provide an explainable framework for clinicians and researchers. Newer paradigms, such as topological data analysis (TDA), have recently been adopted to design and develop innovative image analysis schemes that go beyond the abilities of pixel-to-pixel comparisons. TDA can automatically construct filtrations of topological shapes of image texture through a technique known as persistent homology (PH); these features can then be fed into machine learning models that provide explainable outputs and can distinguish different image classes in a computationally more efficient way, when compared to other currently used methods. The aim of this review is to introduce PH and its variants and to review TDA's recent successes in medical imaging studies.
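The "filtrations of topological shapes of image texture" described above can be made concrete with a sublevel-set filtration: threshold the image at increasing intensity values and count connected components (the 0th Betti number) at each threshold. The resulting component-count profile is the simplest instance of what persistent homology summarizes; this self-contained sketch uses 4-connectivity and is only a minimal illustration, not a full PH computation (which also pairs births with deaths and handles higher-dimensional holes).

```python
def betti0(binary):
    """Number of connected components (4-connectivity) in a binary
    image — the 0th Betti number."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                count += 1
                stack = [(i, j)]            # flood fill one component
                seen[i][j] = True
                while stack:
                    y, x = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
    return count

def sublevel_betti0_curve(image, thresholds):
    """Betti-0 at each threshold t of the sublevel-set filtration
    {pixels with intensity <= t}."""
    return [betti0([[1 if v <= t else 0 for v in row] for row in image])
            for t in thresholds]

# Four dark corners separated by a bright cross: four components
# at the low threshold merge into one as the threshold rises.
image = [
    [1, 5, 1],
    [5, 5, 5],
    [1, 5, 1],
]
print(sublevel_betti0_curve(image, [1, 5]))  # [4, 1]
```

Texture classes with different spatial organization produce different component-count (and merge) profiles, which is exactly the signal TDA-based image features feed into downstream classifiers.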
ABSTRACT
Atrial fibrillation (AF) is a common complication in patients who underwent transcatheter aortic valve implantation, and some of these patients have preexisting AF as well. The management of these patients is complex, especially after the procedure, when there is a sudden change in hemodynamics. There are no established guidelines on the management of patients who underwent transcatheter aortic valve replacement with preexisting or new-onset AF. This review article discusses the management of these patients with rate and rhythm control strategies using medications. It also highlights the role of newer oral anticoagulation medications and left atrial appendage occlusion devices to prevent stroke after the procedure. We will also discuss new advances in the care of this patient population to prevent the occurrence of AF after transcatheter aortic valve implantation. In conclusion, this article is a synopsis of both pharmacologic and device interventions for the management of AF in patients who underwent transcatheter aortic valve replacement.
Subject(s)
Aortic Valve Stenosis , Atrial Fibrillation , Stroke , Transcatheter Aortic Valve Replacement , Humans , Transcatheter Aortic Valve Replacement/adverse effects , Atrial Fibrillation/drug therapy , Aortic Valve Stenosis/complications , Treatment Outcome , Risk Factors , Stroke/epidemiology , Stroke/etiology , Stroke/prevention & control , Aortic Valve/surgery
ABSTRACT
Background: Diagnosis of gastroesophageal reflux disease (GERD) relies on recognizing symptoms of reflux and mucosal changes during esophagogastroduodenoscopy. The desired response to acid suppression therapy is resolution of GERD symptoms; however, symptom response is not always reliable, hence the need for pH testing in unclear cases. Our objective was to identify potential predictors of a high DeMeester score among patients with potential GERD symptoms, to identify those most likely to have pathological GERD. Methods: We conducted a retrospective case-control study on patients who underwent wireless pH monitoring from January 2020 to April 2022. Cases were patients with a high DeMeester score (more than 14.7), indicating pathological reflux, and controls were those without. We collected clinical and demographic data, including age, sex, body mass index (BMI), smoking status, non-steroidal anti-inflammatory drug (NSAID) use, and presence of atypical symptoms. Results: Eighty-six patients were enrolled in the study: 46 patients with high DeMeester scores were considered cases, and 40 patients with DeMeester scores less than 14.7 were considered controls. Esophagitis (grade A) was found in 41.1% of the cases and in 22.5% of the control group. In our study, age of more than 50 years (compared with age of 20-29 years) and being overweight appeared to be predictors of true pathological reflux among patients with reflux symptoms who underwent wireless pH monitoring. Conclusion: Age above 50 years (compared with age between 20 and 29 years) and being overweight appeared to be predictors of true pathological reflux among patients with reflux symptoms who underwent wireless esophageal pH monitoring. The presence of esophagitis was approximately four times more likely to be associated with true pathological reflux.
ABSTRACT
Heart disease has a higher fatality rate than any other disease. Increased atrial fat on the left atrium has been found to cause atrial fibrillation (AF) in many patients. AF can be life-threatening and may worsen over time; early diagnosis and treatment are therefore crucial. Our aim was to evaluate the left atrial fat tissue pattern using Radon descriptor-based machine learning. This study developed a bridge between the Radon transform framework and machine learning to distinguish two distinct patterns. Motivated by a Radon descriptor-based machine learning approach, patches from CT images of the hearts of eight patients were used and categorized into "epicardial fat tissue" and "nonfat tissue" groups. Ten feature vectors are extracted from each patch using Radon descriptors and then fed into a traditional machine learning model. The results show that the proposed methodology clearly discriminates between fat tissues and nonfat tissues. KNN showed the best performance with 96.77% specificity, 98.28% sensitivity, and 97.50% accuracy. To our knowledge, this study is the first attempt to provide a Radon transform-based machine learning method to distinguish between fat tissue and nonfat tissue in the left atrium. Our proposed research method could potentially be used in advanced interventions.
Subject(s)
Atrial Fibrillation , Radon , Atrial Fibrillation/diagnostic imaging , Atrial Fibrillation/etiology , Heart Atria/diagnostic imaging , Humans , Machine Learning , Tomography, X-Ray Computed
ABSTRACT
OBJECTIVE: Atrial fibrillation (A-fib) is an abnormal heart rhythm in which the heart races and beats uncontrollably. The presence of increased epicardial fat/fatty tissue in the atrium has been observed to lead to A-fib. Persistent homology, using topological features, can recapitulate enormous amounts of spatially complicated medical data into a visual code that identifies the pattern distinguishing epicardial fat tissue from non-fat tissue. Our aim is to evaluate the topological pattern of left atrium epicardial fat tissue versus non-fat tissue. RESULTS: A topological data analysis approach was applied to study the imaging pattern of left atrium epicardial fat tissue and non-fat tissue patches. Patches from CT images of the left atrium of eight patients were used and categorized into "left atrium epicardial fat tissue" and "non-fat tissue" groups. The features that distinguish the "epicardial fat tissue" and "non-fat tissue" groups were extracted using persistent homology (PH). Our results reveal that the proposed approach can discriminate between left atrium epicardial fat tissue and non-fat tissue. Specifically, the range of Betti numbers in the epicardial fat tissue is smaller (0-30) than in the non-fat tissue (0-100), indicating that non-fat tissue has a richer topological structure.
Subject(s)
Atrial Fibrillation , Pericardium , Adipose Tissue/diagnostic imaging , Atrial Fibrillation/diagnostic imaging , Heart Atria/diagnostic imaging , Humans , Pericardium/diagnostic imaging
ABSTRACT
Minimizing bias is critical to the adoption and implementation of machine learning (ML) in clinical practice. Systematic mathematical biases produce consistent and reproducible differences between the observed and expected performance of ML systems, resulting in suboptimal performance. Such biases can be traced back to various phases of ML development: data handling, model development, and performance evaluation. This report presents 12 suboptimal practices during data handling of an ML study, explains how those practices can lead to biases, and describes what may be done to mitigate them. The authors employ an arbitrary and simplified framework that splits ML data handling into four steps: data collection, data investigation, data splitting, and feature engineering. Examples from the available research literature are provided. A Google Colaboratory Jupyter notebook includes code examples to demonstrate the suboptimal practices and steps to prevent them. Keywords: Data Handling, Bias, Machine Learning, Deep Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) © RSNA, 2022.
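A classic data-splitting bias of the kind this report catalogs is leakage from splitting at the image level when one patient contributes several images: near-duplicate images of the same patient end up on both sides of the split and inflate test performance. The guard is to split by patient identifier. This is a minimal sketch with a hypothetical record format of (patient_id, image) pairs, not the report's own notebook code.

```python
import random

def split_by_patient(records, test_fraction=0.3, seed=0):
    """Split records into train/test at the *patient* level, so no
    patient contributes images to both sets — a guard against data
    leakage when one patient has several images.

    `records` is a list of (patient_id, image) pairs.
    """
    patients = sorted({pid for pid, _ in records})
    rng = random.Random(seed)                 # reproducible shuffle
    rng.shuffle(patients)
    n_test = max(1, int(len(patients) * test_fraction))
    test_ids = set(patients[:n_test])
    train = [r for r in records if r[0] not in test_ids]
    test = [r for r in records if r[0] in test_ids]
    return train, test

records = [("p1", "img_a"), ("p1", "img_b"), ("p2", "img_c"),
           ("p3", "img_d"), ("p3", "img_e")]
train, test = split_by_patient(records, test_fraction=0.34)
# The guard holds: no patient id appears in both sets
assert {p for p, _ in train}.isdisjoint({p for p, _ in test})
```

The same group-wise principle extends to any clustered data (multiple slices per scan, multiple scans per site) and is also available off the shelf, e.g. as scikit-learn's GroupShuffleSplit.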