Results 1 - 9 of 9
1.
Am J Clin Pathol ; 2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38642073

ABSTRACT

OBJECTIVES: Iron-deficiency anemia (IDA) is a common health problem worldwide, and up to 10% of adult patients with incidental IDA may have gastrointestinal cancer. A diagnosis of IDA can be established through a combination of laboratory tests, but it is often underrecognized until a patient becomes symptomatic. Based on advances in machine learning, we hypothesized that we could reduce the time to diagnosis by developing an IDA prediction model. Our goal was to develop 3 neural networks by using retrospective longitudinal outpatient laboratory data to predict the risk of IDA 3 to 6 months before traditional diagnosis. METHODS: We analyzed retrospective outpatient electronic health record data between 2009 and 2020 from an academic medical center in northern Texas. We included laboratory features from 30,603 patients to develop 3 types of neural networks: artificial neural networks, long short-term memory cells, and gated recurrent units. The classifiers were trained using the Adam optimizer across 200 random training-validation splits. We calculated accuracy, area under the receiver operating characteristic curve, sensitivity, and specificity in the testing split. RESULTS: Although all models demonstrated comparable performance, the gated recurrent unit model outperformed the other 2, achieving an accuracy of 0.83, an area under the receiver operating characteristic curve of 0.89, a sensitivity of 0.75, and a specificity of 0.85 across 200 epochs. CONCLUSIONS: Our results showcase the feasibility of employing deep learning techniques for early prediction of IDA in the outpatient setting based on sequences of laboratory data, offering a substantial lead time for clinical intervention.
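The gated recurrent unit at the core of the best-performing model can be sketched in a few lines of numpy. Everything below (layer sizes, the 8 lab features, 6 encounters, random weights) is illustrative, not the authors' architecture or trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal gated recurrent unit processing one lab panel per step."""

    def __init__(self, n_in, n_hidden, rng):
        s = 1.0 / np.sqrt(n_hidden)
        def w(rows, cols):
            return rng.uniform(-s, s, (rows, cols))
        self.Wz, self.Wr, self.Wn = w(n_hidden, n_in), w(n_hidden, n_in), w(n_hidden, n_in)
        self.Uz, self.Ur, self.Un = w(n_hidden, n_hidden), w(n_hidden, n_hidden), w(n_hidden, n_hidden)
        self.bz = np.zeros(n_hidden)
        self.br = np.zeros(n_hidden)
        self.bn = np.zeros(n_hidden)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h + self.bz)        # update gate
        r = sigmoid(self.Wr @ x + self.Ur @ h + self.br)        # reset gate
        n = np.tanh(self.Wn @ x + self.Un @ (r * h) + self.bn)  # candidate state
        return (1.0 - z) * n + z * h                            # mix candidate and old state

def predict_risk(cell, panels, w_out, b_out):
    """Fold a time-ordered sequence of lab panels into one IDA risk score."""
    h = np.zeros(cell.bz.shape[0])
    for x in panels:
        h = cell.step(x, h)
    return sigmoid(w_out @ h + b_out)

rng = np.random.default_rng(0)
cell = GRUCell(n_in=8, n_hidden=16, rng=rng)   # 8 hypothetical lab features
panels = rng.normal(size=(6, 8))               # 6 outpatient encounters
risk = predict_risk(cell, panels, rng.normal(size=16), 0.0)
```

The recurrent state lets the model weigh trends across visits rather than a single panel, which is what gives a sequence model its lead time over threshold-based rules.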

2.
Radiographics ; 44(5): e230067, 2024 May.
Article in English | MEDLINE | ID: mdl-38635456

ABSTRACT

Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with potential for exacerbating health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. In contrast, cognitive bias refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or exacerbate health inequities due to differing performance among patient populations. However, while inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, a tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists using AI tools in practice or collaborating with data scientists and engineers for AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI. Published under a CC BY 4.0 license. Test Your Knowledge questions for this article are available in the supplemental material. See the invited commentary by Rouzrokh and Erickson in this issue.
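One concrete quality control measure implied by the statistical-bias definition above is auditing model performance per subgroup. A minimal sketch follows; the labels, predictions, and group assignments are invented for illustration, and a real audit would use confidence intervals rather than point estimates:

```python
import numpy as np

def sens_spec(y_true, y_pred):
    """Sensitivity and specificity from binary labels and predictions."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)
    tn = np.sum(~y_true & ~y_pred)
    return tp / np.sum(y_true), tn / np.sum(~y_true)

def subgroup_audit(y_true, y_pred, groups):
    """Per-subgroup (sensitivity, specificity); a large gap flags potential bias."""
    out = {}
    groups = np.asarray(groups)
    for g in set(groups.tolist()):
        mask = groups == g
        out[g] = sens_spec(np.asarray(y_true)[mask], np.asarray(y_pred)[mask])
    return out

# Hypothetical audit data: the model misses far more positives in group B.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0]
groups = ["A"] * 8 + ["B"] * 8
audit = subgroup_audit(y_true, y_pred, groups)
```

Here aggregate performance looks acceptable, but the per-group view shows sensitivity of 0.75 in group A against 0.25 in group B, exactly the kind of differing performance among patient populations the article warns about.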


Subject(s)
Algorithms , Artificial Intelligence , Humans , Automation , Machine Learning , Bias
3.
Clin Genitourin Cancer ; 22(1): 33-37, 2024 02.
Article in English | MEDLINE | ID: mdl-37468341

ABSTRACT

INTRODUCTION: Testicular germ cell tumors are the most common malignancy in young adult males. Patients with metastatic disease receive standard of care chemotherapy followed by retroperitoneal lymph node dissection for residual masses >1 cm. However, there is a need for better preoperative tools to discern which patients will have persistent disease after chemotherapy, given low rates of metastatic germ cell tumor after chemotherapy. The purpose of this study was to use radiomics to predict which patients would have viable germ cell tumor or teratoma after chemotherapy at the time of retroperitoneal lymph node dissection. PATIENTS AND METHODS: Patients with nonseminomatous germ cell tumor undergoing postchemotherapy retroperitoneal lymph node dissection (PC-RPLND) between 2008 and 2019 were queried from our institutional database. Patients were included if prechemotherapy computed tomography (CT) scans and postchemotherapy imaging were available. Semiqualitative and quantitative assessment of residual masses and nodal regions of interest, as well as radiomic feature extraction, was performed by 2 board-certified radiologists. Radiomic feature analysis was used to extract first-order, shape, and second-order statistics from each region of interest. Post-RPLND pathology was compared with the radiomic analysis using multiple t-tests. RESULTS: 45 patients underwent PC-RPLND at our institution, with the majority (28 patients) having stage III disease. 24 (53%) patients had teratoma on RPLND pathology, while 2 (4%) had viable germ cell tumor. After chemotherapy, 78%, 53%, and 33% of patients had cystic regions, fat stranding, and local infiltration, respectively, on imaging. On radiomic analysis, the first-order statistics mean, median, 90th percentile, and root mean square were significant. Strong correlations were observed among these 4 features; a lower signal was associated with positive pathology at RPLND. CONCLUSIONS: Testicular radiomics is an emerging tool that may help predict persistent disease after chemotherapy.
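The four first-order statistics reported as significant reduce to a few lines of numpy over the voxel intensities of a region of interest. This is a generic sketch, not the authors' extraction pipeline (which would typically follow a radiomics standard such as IBSI, with binning and resampling steps omitted here):

```python
import numpy as np

def first_order_features(roi):
    """First-order intensity statistics from a region of interest.

    `roi` is any array of voxel intensities (e.g., HU values) inside the
    segmented mass; only the four features the study reports as
    significant are computed here.
    """
    vals = np.asarray(roi, dtype=float).ravel()
    return {
        "mean": float(vals.mean()),
        "median": float(np.median(vals)),
        "p90": float(np.percentile(vals, 90)),
        "rms": float(np.sqrt(np.mean(vals ** 2))),
    }

feats = first_order_features([1.0, 2.0, 3.0, 4.0])  # toy intensities
```

In the study's finding, lower values of these intensity summaries correlated with positive pathology, so each feature would then be compared between pathology-positive and pathology-negative groups.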


Subject(s)
Neoplasms, Germ Cell and Embryonal , Teratoma , Testicular Neoplasms , Male , Young Adult , Humans , Radiomics , Treatment Outcome , Retroperitoneal Space/diagnostic imaging , Neoplasms, Germ Cell and Embryonal/diagnostic imaging , Neoplasms, Germ Cell and Embryonal/drug therapy , Neoplasms, Germ Cell and Embryonal/surgery , Testicular Neoplasms/diagnostic imaging , Testicular Neoplasms/drug therapy , Testicular Neoplasms/surgery , Lymph Node Excision/methods , Teratoma/diagnostic imaging , Teratoma/drug therapy , Teratoma/surgery
4.
J Digit Imaging ; 35(1): 21-28, 2022 02.
Article in English | MEDLINE | ID: mdl-34997374

ABSTRACT

In this article, we demonstrate the use of a software-based radiologist reporting tool for the implementation of American College of Radiology Thyroid Imaging Reporting and Data System (TI-RADS) thyroid nodule risk stratification. The technical details are described with emphasis on addressing information security and patient privacy issues while allowing the tool to integrate with the electronic health record and radiology reporting dictation software. Its practical implementation was assessed in a quality improvement project in which guideline adherence and recommendation congruence were measured before and after implementation. The description of our solution and the release of the open-source code may be helpful for future implementations of similar web-based calculators.
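A calculator of this kind ultimately wraps the ACR TI-RADS point table. The sketch below follows the published ACR chart, but it is not the authors' tool: the feature spellings are this sketch's own convention, the handling of edge cases (e.g., a 1-point total) is a simplifying assumption, and size-based follow-up thresholds are omitted entirely. Verify against the official ACR document before any clinical use:

```python
# Simplified ACR TI-RADS point lookup (sketch; not the authors' schema).
POINTS = {
    "composition": {"cystic": 0, "spongiform": 0, "mixed": 1, "solid": 2},
    "echogenicity": {"anechoic": 0, "hyperechoic": 1, "isoechoic": 1,
                     "hypoechoic": 2, "very_hypoechoic": 3},
    "shape": {"wider_than_tall": 0, "taller_than_wide": 3},
    "margin": {"smooth": 0, "ill_defined": 0, "lobulated": 2,
               "irregular": 2, "extrathyroidal_extension": 3},
}
# Echogenic foci are additive: a nodule may have several kinds at once.
FOCI = {"none": 0, "comet_tail": 0, "macrocalcifications": 1,
        "peripheral": 2, "punctate": 3}

def tirads_level(features, foci=("none",)):
    """Total points and TR level for one nodule."""
    total = sum(POINTS[cat][val] for cat, val in features.items())
    total += sum(FOCI[f] for f in foci)
    if total == 0:
        level = "TR1"
    elif total <= 2:
        level = "TR2"   # chart lists 2 points; folding a 1-point total in here is an assumption
    elif total == 3:
        level = "TR3"
    elif total <= 6:
        level = "TR4"
    else:
        level = "TR5"
    return total, level

total, level = tirads_level(
    {"composition": "solid", "echogenicity": "hypoechoic",
     "shape": "wider_than_tall", "margin": "smooth"},
    foci=("none",),
)
```

Centralizing the lookup table like this is what makes a web calculator attractive for guideline adherence: the scoring logic lives in one audited place rather than in each radiologist's head.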


Subject(s)
Thyroid Nodule , Humans , Internet , Retrospective Studies , Software , Thyroid Nodule/diagnostic imaging , Ultrasonography/methods
5.
Front Artif Intell ; 4: 694875, 2021.
Article in English | MEDLINE | ID: mdl-34268489

ABSTRACT

Since the outbreak of the COVID-19 pandemic, worldwide research efforts have focused on using artificial intelligence (AI) technologies on various medical data of COVID-19-positive patients in order to identify or classify various aspects of the disease, with promising reported results. However, concerns have been raised over their generalizability, given the heterogeneous factors in training datasets. This study aims to examine the severity of this problem by evaluating deep learning (DL) classification models trained to identify COVID-19-positive patients on 3D computed tomography (CT) datasets from different countries. We collected one dataset at UT Southwestern (UTSW) and three external datasets from different countries: CC-CCII Dataset (China), COVID-CTset (Iran), and MosMedData (Russia). We divided the data into two classes: COVID-19-positive and COVID-19-negative patients. We trained nine identical DL-based classification models by using combinations of datasets with a 72% train, 8% validation, and 20% test data split. The models trained on a single dataset achieved accuracy/area under the receiver operating characteristic curve (AUC) values of 0.87/0.826 (UTSW), 0.97/0.988 (CC-CCII), and 0.86/0.873 (COVID-CTset) when evaluated on their own dataset. The models trained on multiple datasets and evaluated on a test set from one of the datasets used for training performed better. However, the performance dropped to close to an AUC of 0.5 (random guess) for all models when evaluated on a dataset outside of their training datasets. Including MosMedData, which contained only positive labels, in the training datasets did not necessarily improve performance on the other datasets. Multiple factors likely contributed to these results, such as patient demographics and differences in image acquisition or reconstruction, causing a data shift among the different study cohorts.
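The 72%/8%/20% train-validation-test partition described above reduces to one shuffled index split per dataset. A minimal sketch; the proportions mirror the study, while seeding and rounding are this sketch's own choices:

```python
import numpy as np

def split_indices(n, rng, train_frac=0.72, val_frac=0.08):
    """Shuffle n sample indices into disjoint train/validation/test partitions."""
    idx = rng.permutation(n)
    n_train = int(round(train_frac * n))
    n_val = int(round(val_frac * n))
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

rng = np.random.default_rng(42)
train, val, test = split_indices(100, rng)
```

The study's key point is that this split only guards against overfitting *within* a cohort; a model can ace its own 20% test partition and still collapse to chance on a dataset acquired elsewhere, which is why held-out external datasets are the stronger generalizability test.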

6.
Radiol Artif Intell ; 3(2): e200024, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33937858

ABSTRACT

PURPOSE: To determine how to optimize the delivery of machine learning results for detection of intracranial hemorrhage (ICH) on non-contrast-enhanced CT images to radiologists in a clinical setting to improve workflow. MATERIALS AND METHODS: In this study, a commercially available machine learning algorithm that flags abnormal noncontrast CT examinations for ICH was implemented in a busy academic neuroradiology practice between September 2017 and March 2019. The algorithm was introduced in three phases: (a) as a "pop-up" widget on ancillary monitors, (b) as a marked examination in reading worklists, and (c) as a marked examination for reprioritization based on the presence of the flag. A statistical approach based on queuing theory was implemented to assess the impact of each intervention on queue-adjusted wait and turnaround time compared with historical controls. RESULTS: Notification with a widget or flagging the examination had no effect on queue-adjusted image wait (P > .99) or turnaround time (P = .6). However, a reduction in queue-adjusted wait time was observed between negative (15.45 minutes; 95% CI: 15.07, 15.38) and positive (12.02 minutes; 95% CI: 11.06, 12.97; P < .0001) artificial intelligence-detected ICH examinations with reprioritization. Reduced wait time was present for all order classes but was greatest for examinations ordered as routine for both inpatients and outpatients because of their low priority. CONCLUSION: The approach used to present flags from artificial intelligence and machine learning algorithms to the radiologist can reduce image wait time and turnaround times. © RSNA, 2021. See also the commentary by O'Connor and Bhalla in this issue.
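Why only phase (c) moved the needle can be illustrated with a toy single-reader priority queue; the arrival pattern and read time below are invented, and this is a didactic sketch, not the queuing-theory model used in the study:

```python
import heapq

def simulate(exams, read_time, prioritize):
    """Mean wait time by flag status for one reader working a queue.

    exams: list of (arrival_time, flagged) tuples.
    prioritize: if True, flagged (AI-positive) exams jump the queue,
    mirroring the study's reprioritization phase.
    """
    exams = sorted(exams)
    queue, waits = [], {True: [], False: []}
    t, i = 0.0, 0
    while i < len(exams) or queue:
        if not queue:                      # reader idle: jump to next arrival
            t = max(t, exams[i][0])
        while i < len(exams) and exams[i][0] <= t:
            arrival, flagged = exams[i]
            rank = 0 if (prioritize and flagged) else 1
            heapq.heappush(queue, ((rank, arrival), arrival, flagged))
            i += 1
        _, arrival, flagged = heapq.heappop(queue)
        waits[flagged].append(t - arrival)   # time spent waiting to be read
        t += read_time                       # reader busy for one read
    return {k: sum(v) / len(v) if v else 0.0 for k, v in waits.items()}

# Three exams arrive at once; only the last is AI-flagged for ICH.
exams = [(0.0, False), (0.0, False), (0.0, True)]
fifo = simulate(exams, read_time=10.0, prioritize=False)
repri = simulate(exams, read_time=10.0, prioritize=True)
```

With identical reader throughput, merely marking the flagged exam changes nothing about service order, whereas reordering the queue moves its wait from the back to the front; this matches the study's finding that notification alone had no effect but reprioritization did.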

7.
J Comput Assist Tomogr ; 44(2): 197-203, 2020.
Article in English | MEDLINE | ID: mdl-32195798

ABSTRACT

INTRODUCTION: Liver segmentation and volumetry have traditionally been performed using computed tomography (CT) attenuation to discriminate liver from other tissues. In this project, we evaluated whether spectral detector CT (SDCT) can improve liver segmentation over conventional CT on 2 segmentation methods. MATERIALS AND METHODS: In this Health Insurance Portability and Accountability Act-compliant, institutional review board-approved retrospective study, 30 contrast-enhanced SDCT scans with healthy livers were selected. The first segmentation method is based on Gaussian mixture models of the SDCT data. The second method is a convolutional neural network (CNN)-based technique called U-Net. Both methods were compared against equivalent algorithms that used conventional CT attenuation, with hand segmentation as the reference standard. Agreement with the reference standard was assessed using the Dice similarity coefficient. RESULTS: Dice similarity coefficients against the reference standard were 0.93 ± 0.02 for the Gaussian mixture model method and 0.90 ± 0.04 for the CNN-based method (both methods applied on SDCT). These were significantly higher than those of the equivalent algorithms applied on conventional CT, with Dice coefficients of 0.90 ± 0.06 (P = 0.007) and 0.86 ± 0.06 (P < 0.001), respectively. CONCLUSION: For both liver segmentation methods tested, we demonstrated higher segmentation performance when the algorithms were applied on SDCT data compared with equivalent algorithms applied on conventional CT data.
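The Dice similarity coefficient used as the agreement metric here is 2|A ∩ B| / (|A| + |B|) over the two binary masks, and is a few lines to compute (the masks below are toy stand-ins for liver segmentations):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 when both masks are empty."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy 1D masks standing in for an automated and a hand segmentation.
auto = [1, 1, 1, 0, 0]
hand = [1, 1, 0, 1, 0]
score = dice_coefficient(auto, hand)
```

Dice ranges from 0 (no overlap) to 1 (identical masks), so the reported jump from 0.86-0.90 on conventional CT to 0.90-0.93 on SDCT is a direct measure of improved voxel-level agreement with the hand-segmented reference.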


Subject(s)
Liver/diagnostic imaging , Liver/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Tomography, X-Ray Computed/methods , Contrast Media , Humans , Organ Size , Radiographic Image Enhancement/methods , Retrospective Studies
9.
J Cell Biochem ; 113(5): 1714-23, 2012 May.
Article in English | MEDLINE | ID: mdl-22213010

ABSTRACT

MicroRNAs (miRNAs) are short noncoding ribonucleic acids known to affect gene expression at the translational level, and there is mounting evidence that miRNAs play a role in the function of tumor-associated macrophages (TAMs). To aid the functional analysis of miRNAs in an in vitro model of TAMs known as M2 macrophages, a transfection method to introduce artificial miRNA constructs or miRNA molecules into primary human monocytes is needed. Unlike differentiated macrophages or dendritic cells, undifferentiated primary human monocytes are known to resist lentiviral transduction. To circumvent this challenge, other techniques such as electroporation and chemical transfection have been used in other applications to deliver small gene constructs into human monocytes. To date, no studies have compared these two methods objectively to evaluate their suitability for the miRNA functional analysis of M2 macrophages. Of the methods tested, the electroporation of miRNA-construct-containing plasmids and the chemical transfection of miRNA precursor molecules were the most efficient approaches. The use of a silencer siRNA labeling kit (Ambion) to conjugate Cy3 fluorescent dyes to the precursor molecules allowed the isolation of successfully transfected cells by fluorescence-activated cell sorting. The chemical transfection of these dye-conjugated miRNA precursors yielded an efficiency of 37.5 ± 0.6% and a cell viability of 74 ± 1%. RNA purified from the isolated cells was of good quality and fit for subsequent mRNA expression qPCR analysis. While electroporation of plasmids containing miRNA constructs yielded transfection efficiencies comparable to chemical transfection of miRNA precursors, the electroporated primary monocytes seemed to have lost their potential for differentiation. Among the most common methods of transfection, the chemical transfection of dye-conjugated miRNA precursors was determined to be the best-suited approach for the functional analysis of M2 macrophages.


Subject(s)
Macrophages/metabolism , MicroRNAs/genetics , Carbocyanines , Cell Differentiation , Cell Line, Tumor , Cell Survival , Cells, Cultured , Electroporation , Fluorescent Dyes , Humans , Macrophages/classification , Macrophages/pathology , MicroRNAs/chemistry , Monocytes/metabolism , Monocytes/pathology , RNA Precursors/chemistry , RNA Precursors/genetics , Transfection/methods , U937 Cells