Results 1 - 20 of 22
1.
Article in English | MEDLINE | ID: mdl-38703880

ABSTRACT

BACKGROUND & AIMS: Changes in body composition and metabolic factors may serve as biomarkers for the early detection of pancreatic ductal adenocarcinoma (PDAC). The aim of this study was to capture the longitudinal changes in body composition and metabolic factors before diagnosis of PDAC. METHODS: We performed a retrospective cohort study in which all patients (≥18 years) diagnosed with PDAC from 2002 to 2021 were identified. We collected all abdominal computed tomography scans and 10 different blood-based biomarkers up to 36 months before diagnosis. We applied a fully automated abdominal segmentation algorithm previously developed by our group for 3-dimensional quantification of body composition on computed tomography scans. Longitudinal trends of body composition and blood-based biomarkers before PDAC diagnosis were estimated using linear mixed models, compared across different time windows, and visualized using spline regression. RESULTS: We included 1690 patients in body composition analysis, of whom 516 (30.5%) had ≥2 prediagnostic computed tomography scans. For analysis of longitudinal trends of blood-based biomarkers, 3332 individuals were included. As an early manifestation of PDAC, we observed a significant decrease in visceral and subcutaneous adipose tissue (β = -1.94 [95% confidence interval (CI), -2.39 to -1.48] and β = -2.59 [95% CI, -3.17 to -2.02]) in area (cm2)/height (m2) per 6 months closer to diagnosis, accompanied by a decrease in serum lipids (eg, low-density lipoprotein [β = -2.83; 95% CI, -3.31 to -2.34], total cholesterol [β = -2.69; 95% CI, -3.18 to -2.20], and triglycerides [β = -1.86; 95% CI, -2.61 to -1.11]), and an increase in blood glucose levels. Loss of muscle tissue and bone volume was predominantly observed in the last 6 months before diagnosis. CONCLUSIONS: This study identified significant alterations in a variety of soft tissue and metabolic markers that occur in the development of PDAC.
Early recognition of these metabolic changes may provide an opportunity for early detection.
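The study estimated trends with linear mixed models (per-patient random effects); as a simplified sketch of the core idea, the closed-form least-squares slope below estimates one biomarker's trend against time before diagnosis. The measurement values are hypothetical.

```python
def ols_slope(times, values):
    """Closed-form least-squares slope of values against times."""
    n = len(times)
    mean_t = sum(times) / n
    mean_v = sum(values) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in zip(times, values))
    den = sum((t - mean_t) ** 2 for t in times)
    return num / den

# Hypothetical VAT area (cm2)/height (m2) values at months before diagnosis
months = [-36, -30, -24, -18, -12, -6, 0]
vat_index = [60.0, 59.0, 57.5, 55.0, 52.0, 48.0, 43.0]

# Per-6-month change, matching the paper's reporting unit (negative = loss)
beta_per_6mo = 6 * ols_slope(months, vat_index)
print(round(beta_per_6mo, 2))
```

A mixed model would additionally share information across patients with subject-level intercepts and slopes; this single-series slope only illustrates the trend estimate itself.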

2.
Mayo Clin Proc ; 99(2): 260-270, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38309937

ABSTRACT

OBJECTIVE: To evaluate a machine learning (ML)-based model for pulmonary hypertension (PH) prediction using measurements and impressions made during echocardiography. METHODS: A total of 7853 consecutive patients with right-sided heart catheterization and transthoracic echocardiography performed within 1 week from January 1, 2012, through December 31, 2019, were included. The data were split into training (n=5024 [64%]), validation (n=1275 [16%]), and testing (n=1554 [20%]). A gradient boosting machine with enumerated grid search for optimization was selected to allow missing data in the boosted trees without imputation. The training target was PH, defined by right-sided heart catheterization as mean pulmonary artery pressure above 20 mm Hg; model performance was maximized relative to area under the receiver operating characteristic curve using 5-fold cross-validation. RESULTS: Cohort age was 64±14 years; 3467 (44%) were female, and 81% (6323/7853) had PH. The final trained model included 19 characteristics, measurements, or impressions derived from the echocardiogram. In the testing data, the model had high discrimination for the detection of PH (area under the receiver operating characteristic curve, 0.83; 95% CI, 0.80 to 0.85). The model's accuracy, sensitivity, positive predictive value, and negative predictive value were 82% (1267/1554), 88% (1098/1242), 89% (1098/1241), and 54% (169/313), respectively. CONCLUSION: By use of ML, PH could be predicted on the basis of clinical and echocardiographic variables, without tricuspid regurgitation velocity. Machine learning methods appear promising for identifying patients with low likelihood of PH.
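The reported fractions imply a full confusion matrix (TP = 1098, FP = 143, FN = 144, TN = 169; these cells are back-calculated from the abstract's numbers, not taken from the paper directly). A short sketch reproduces the test-set metrics from those cells:

```python
def classification_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from confusion-matrix cells."""
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Cells implied by the reported fractions 1267/1554, 1098/1242, 1098/1241, 169/313
m = classification_metrics(tp=1098, fp=143, fn=144, tn=169)
print({k: f"{v:.0%}" for k, v in m.items()})
```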


Subjects
Hypertension, Pulmonary , Humans , Middle Aged , Aged , Hypertension, Pulmonary/diagnostic imaging , Echocardiography/methods , Cardiac Catheterization/methods , ROC Curve , Machine Learning , Retrospective Studies
3.
Article in English | MEDLINE | ID: mdl-38373180

ABSTRACT

BACKGROUND: Body composition can be accurately quantified from abdominal computed tomography (CT) exams and is a predictor for the development of aging-related conditions and for mortality. However, reference ranges for CT-derived body composition measures of obesity, sarcopenia, and bone loss have yet to be defined in the general population. METHODS: We identified a population-representative sample of 4900 persons aged 20 to 89 years who underwent an abdominal CT exam from 2010 to 2020. The sample was constructed by propensity score matching an age- and sex-stratified sample of persons residing in the 27-county region of Southern Minnesota and Western Wisconsin. The matching included race, ethnicity, education level, region of residence, and the presence of 20 chronic conditions. We used a validated deep learning-based algorithm to calculate subcutaneous adipose tissue area, visceral adipose tissue area, skeletal muscle area, skeletal muscle density, vertebral bone area, and vertebral bone density from a CT abdominal section. RESULTS: We report CT-based body composition reference ranges on 4649 persons representative of our geographic region. Older age was associated with a decrease in skeletal muscle area and density, and an increase in visceral adiposity. All chronic conditions were associated with a statistically significant difference in at least one body composition biomarker. The presence of a chronic condition was generally associated with greater subcutaneous and visceral adiposity, and lower muscle density and vertebral bone density. CONCLUSIONS: We report reference ranges for CT-based body composition biomarkers in a population-representative cohort of 4649 persons by age, sex, body mass index, and chronic conditions.
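A reference range is essentially a pair of percentile cutoffs computed within each age/sex stratum. The sketch below illustrates that computation; the strata (age decade, sex) and the 5th/95th percentile cutoffs are illustrative assumptions, not the paper's exact methodology.

```python
import statistics
from collections import defaultdict

def reference_ranges(records, lo=5, hi=95):
    """Stratified percentile reference ranges.

    records: iterable of (age_decade, sex, value) tuples.
    Returns {(age_decade, sex): (lo-th percentile, hi-th percentile)}.
    """
    strata = defaultdict(list)
    for decade, sex, value in records:
        strata[(decade, sex)].append(value)
    out = {}
    for key, vals in strata.items():
        cuts = statistics.quantiles(vals, n=100, method="inclusive")
        out[key] = (cuts[lo - 1], cuts[hi - 1])
    return out

# Hypothetical skeletal muscle areas (cm2) for one stratum: 80..180
demo = [(60, "F", float(v)) for v in range(80, 181)]
low, high = reference_ranges(demo)[(60, "F")]
print(round(low, 1), round(high, 1))
```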


Subjects
Body Composition , Sarcopenia , Humans , Reference Values , Muscle, Skeletal , Sarcopenia/diagnostic imaging , Sarcopenia/epidemiology , Body Mass Index , Intra-Abdominal Fat , Biomarkers , Obesity, Abdominal
4.
Mayo Clin Proc ; 99(6): 878-890, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38310501

ABSTRACT

OBJECTIVE: To determine whether body composition derived from medical imaging may be useful for assessing biologic age at the tissue level because people of the same chronologic age may vary with respect to their biologic age. METHODS: We identified an age- and sex-stratified cohort of 4900 persons with an abdominal computed tomography scan from January 1, 2010, to December 31, 2020, who were 20 to 89 years old and representative of the general population in Southeast Minnesota and West Central Wisconsin. We constructed a model for estimating tissue age that included 6 body composition biomarkers calculated from abdominal computed tomography using a previously validated deep learning model. RESULTS: Older tissue age was associated with intermediate subcutaneous fat area, higher visceral fat area, lower muscle area, lower muscle density, higher bone area, and lower bone density. A tissue age older than chronologic age was associated with chronic conditions that result in reduced physical fitness (including chronic obstructive pulmonary disease, arthritis, cardiovascular disease, and behavioral disorders). Furthermore, a tissue age older than chronologic age was associated with an increased risk of death (hazard ratio, 1.56; 95% CI, 1.33 to 1.84) that was independent of demographic characteristics, county of residency, education, body mass index, and baseline chronic conditions. CONCLUSION: Imaging-based body composition measures may be useful in understanding the biologic processes underlying accelerated aging.
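A toy version of the "tissue age vs chronologic age" comparison is sketched below. The coefficient values are made-up illustrations; only their signs follow the directions reported above (higher visceral fat → older; lower muscle area, muscle density, and bone density → older), and the paper's actual model differs.

```python
# Hypothetical linear tissue-age model (illustrative coefficients only)
COEF = {
    "visceral_fat_area": 0.08,   # higher visceral fat -> older tissue age
    "muscle_area": -0.15,        # lower muscle area -> older tissue age
    "muscle_density": -0.30,     # lower muscle density -> older tissue age
    "bone_density": -0.10,       # lower bone density -> older tissue age
}
INTERCEPT = 90.0

def tissue_age(features):
    """Linear predictor of tissue age from body composition biomarkers."""
    return INTERCEPT + sum(COEF[name] * value for name, value in features.items())

def accelerated_aging(features, chronologic_age):
    """A tissue age older than chronologic age flags accelerated aging."""
    return tissue_age(features) > chronologic_age

fit_person = {"visceral_fat_area": 100, "muscle_area": 170, "muscle_density": 45, "bone_density": 180}
frail_person = {"visceral_fat_area": 250, "muscle_area": 120, "muscle_density": 25, "bone_density": 110}
print(tissue_age(fit_person), tissue_age(frail_person))
```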


Subjects
Body Composition , Tomography, X-Ray Computed , Humans , Male , Female , Aged , Middle Aged , Chronic Disease , Adult , Aged, 80 and over , Tomography, X-Ray Computed/methods , Biomarkers/analysis , Aging/physiology , Minnesota/epidemiology , Wisconsin/epidemiology , Young Adult , Muscle, Skeletal/diagnostic imaging , Age Factors
5.
J Allergy Clin Immunol Pract ; 12(5): 1181-1191.e10, 2024 May.
Article in English | MEDLINE | ID: mdl-38242531

ABSTRACT

BACKGROUND: Use of the reaction history in logistic regression and machine learning (ML) models to predict penicillin allergy has been reported based on non-US data. OBJECTIVE: We developed ML positive penicillin allergy testing prediction models from multisite US data. METHODS: Retrospective data from 4 US-based hospitals were grouped into 4 datasets: enriched training (1:3 case-control matched cohort), enriched testing, nonenriched internal testing, and nonenriched external testing. ML algorithms were used for model development. We determined area under the curve (AUC) and applied the Shapley Additive exPlanations (SHAP) framework to interpret risk drivers. RESULTS: Of 4777 patients (mean age 60 [standard deviation: 17] years; 68% women, 91% White, and 86% non-Hispanic) evaluated for penicillin allergy labels, 513 (11%) had positive penicillin allergy testing. Model input variables were frequently missing: immediate or delayed onset (71%), signs or symptoms (13%), and treatment (31%). The gradient-boosted model was the strongest model with an AUC of 0.67 (95% confidence interval [CI]: 0.57-0.77), which improved to 0.87 (95% CI: 0.73-1) when only cases with complete data were used. Top SHAP drivers for positive testing were reactions within the last year and reactions requiring medical attention; female sex and reaction of hives/urticaria were also positive drivers. CONCLUSIONS: An ML prediction model for positive penicillin allergy skin testing using US-based retrospective data did not achieve performance strong enough for acceptance and adoption. The optimal ML prediction model for positive penicillin allergy testing was driven by time since reaction, seeking medical attention, female sex, and hives/urticaria.
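The "enriched training" set is a 1:3 case-control matched cohort. A minimal sketch of that construction follows; the matching keys (age decade and sex) are assumptions for illustration, not the study's exact criteria.

```python
from collections import defaultdict

def match_controls(cases, controls, ratio=3):
    """1:`ratio` case-control matching on age decade and sex.

    Each case draws up to `ratio` unused controls from the pool sharing
    its (age decade, sex) key.
    """
    pools = defaultdict(list)
    for c in controls:
        pools[(c["age"] // 10, c["sex"])].append(c)
    matched = []
    for case in cases:
        key = (case["age"] // 10, case["sex"])
        picks = [pools[key].pop() for _ in range(ratio) if pools[key]]
        matched.append((case, picks))
    return matched

# Hypothetical records
cases = [{"age": 65, "sex": "F"}]
controls = [{"age": 60 + i, "sex": "F"} for i in range(5)]
matched = match_controls(cases, controls)
print(len(matched[0][1]))
```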


Subjects
Drug Hypersensitivity , Machine Learning , Penicillins , Humans , Female , Penicillins/adverse effects , Male , Drug Hypersensitivity/epidemiology , Drug Hypersensitivity/diagnosis , Retrospective Studies , Middle Aged , United States/epidemiology , Aged , Adult , Anti-Bacterial Agents/adverse effects , Case-Control Studies , Skin Tests
6.
J Arthroplasty ; 38(10): 1943-1947, 2023 10.
Article in English | MEDLINE | ID: mdl-37598784

ABSTRACT

Electronic health records have facilitated the extraction and analysis of a vast amount of data with many variables for clinical care and research. Conventional regression-based statistical methods may not capture all the complexities in high-dimensional data analysis. Therefore, researchers are increasingly using machine learning (ML)-based methods to better handle these more challenging datasets for the discovery of hidden patterns in patients' data and for classification and predictive purposes. This article describes commonly used ML methods in structured data analysis with examples in orthopedic surgery. We present practical considerations in starting an ML project and appraising published studies in this field.


Subjects
Electronic Health Records , Machine Learning , Humans
7.
Eur Heart J Digit Health ; 4(3): 188-195, 2023 May.
Article in English | MEDLINE | ID: mdl-37265866

ABSTRACT

Aims: The current guidelines recommend aortic valve intervention in patients with severe aortic regurgitation (AR) with the onset of symptoms, left ventricular enlargement, or systolic dysfunction. Recent studies have suggested that we might be missing the window of early intervention in a significant number of patients by following the guidelines. Methods and results: The overarching goal was to determine if machine learning (ML)-based algorithms could be trained to identify patients at risk for death from AR independent of aortic valve replacement (AVR). Models were trained with five-fold cross-validation on a dataset of 1035 patients, and performance was reported on an independent dataset of 207 patients. Optimal predictive performance was observed with a conditional random survival forest model. A subset of 19/41 variables was selected for inclusion in the final model. Variable selection was performed with 10-fold cross-validation using a random survival forest model. The top variables included age, body surface area, body mass index, diastolic blood pressure, New York Heart Association class, AVR, comorbidities, ejection fraction, end-diastolic volume, and end-systolic dimension; the relative variable importance averaged across five splits of cross-validation in each repeat was evaluated. The concordance index of the best-performing model for predicting survival was 0.84 at 1 year, 0.86 at 2 years, and 0.87 overall. Conclusion: Using common echocardiographic parameters and patient characteristics, we successfully trained multiple ML models to predict survival in patients with severe AR. This technique could be applied to identify high-risk patients who would benefit from early intervention, thereby improving patient outcomes.
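The concordance index reported above measures how often the model ranks pairs of patients correctly by risk. A standard pure-Python sketch of Harrell's C (not the study's code) is:

```python
def concordance_index(times, events, risks):
    """Harrell's concordance index.

    A pair (i, j) is comparable when subject i has an observed event and a
    shorter time than subject j; it is concordant when i also carries the
    higher predicted risk (ties count half).
    """
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        if not events[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(n):
            if times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ranked toy data: higher predicted risk -> earlier event
print(concordance_index([1, 2, 3, 4], [1, 1, 1, 0], [0.9, 0.7, 0.5, 0.1]))
```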

8.
Front Med (Lausanne) ; 9: 992703, 2022.
Article in English | MEDLINE | ID: mdl-36250077

ABSTRACT

Liver disease such as cirrhosis is known to cause changes in the composition of volatile organic compounds (VOC) present in patient breath samples. Previous studies have demonstrated the diagnosis of liver cirrhosis from these breath samples, but studies are limited to a handful of discrete, well-characterized compounds. We utilized VOC profiles from breath samples from 46 individuals, 35 with cirrhosis and 11 healthy controls. A deep neural network was optimized to discriminate between healthy controls and individuals with cirrhosis. A 1D convolutional neural network (CNN) was accurate in predicting which patients had cirrhosis with an AUC of 0.90 (95% CI: 0.75, 0.99). Shapley Additive Explanations characterized the presence of discrete, observable peaks which were implicated in prediction, and the top peaks (based on the average SHAP profiles on the test dataset) were noted. CNNs demonstrate the ability to predict the presence of cirrhosis based on a full volatolomics profile of patient breath samples. SHAP values indicate the presence of discrete, detectable peaks in the VOC signal.
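The building block of a 1D CNN over a VOC trace is a sliding dot product. A minimal sketch (the kernel and toy trace are illustrative, not learned weights from the study):

```python
def conv1d(signal, kernel, stride=1):
    """Valid-mode 1D convolution (cross-correlation, as in most DL frameworks)."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(0, len(signal) - k + 1, stride)
    ]

def relu(xs):
    """Elementwise rectified linear activation."""
    return [max(0.0, x) for x in xs]

# Toy VOC-like trace: an edge-detecting kernel highlights a peak onset
trace = [0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0]
feature_map = relu(conv1d(trace, [-1.0, 0.0, 1.0]))
print(feature_map)
```

In the trained network many such kernels are stacked and their weights are learned; SHAP then attributes the final prediction back to regions of the input trace.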

9.
Clin Nutr ; 41(8): 1676-1679, 2022 08.
Article in English | MEDLINE | ID: mdl-35777106

ABSTRACT

BACKGROUND & AIMS: The association between body composition parameters measured on computed tomography (CT) and severity of acute pancreatitis (AP) is conflicting because these composition parameters vary considerably by sex and age. We previously developed normative body composition data in healthy subjects. A Z-score calculated from the normative data gives age- and sex-adjusted body composition parameters. We studied the above association using this novel Z-score in a large cohort of patients with AP. METHODS: Between January 2014 and March 2018, patients admitted with AP who had CT scans within a week of admission were enrolled. Body composition data including skeletal muscle (SM), subcutaneous adipose tissue (SAT), and visceral adipose tissue (VAT) were calculated from the CT scans using an automated deep learning algorithm. Each value was then converted to a Z-score and compared across mild AP, moderately severe AP, and severe AP as defined by the revised Atlanta criteria. RESULTS: Of 514 patients, 336 (65.4%) had mild AP, 130 (25.3%) moderately severe AP, and 48 (9.3%) severe AP. Patients with moderately severe AP had a significantly lower SM Z-score than those with mild AP (1.21 vs 1.73, p = 0.048), and patients with severe AP had a significantly lower SAT Z-score than those with mild AP (0.70 vs 1.29, p = 0.016). The VAT Z-score was not significantly different among the three groups (p = 0.76). CONCLUSION: Lower SM and SAT Z-scores were associated with moderately severe and severe AP, respectively. Future prospective studies in patients with AP using Z-scores may define the association between body composition and severity of AP and explain the inconsistencies reported in previous studies.
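The Z-score adjustment is a lookup of the normative mean and SD for the patient's age/sex stratum. A sketch, in which the normative table values are made-up illustrations (the study used its own normative data from healthy subjects):

```python
# Hypothetical normative table: (sex, age decade) -> (mean, SD) of
# skeletal muscle area (cm2). Values are illustrative assumptions.
SM_NORMS = {
    ("M", 50): (160.0, 25.0),
    ("F", 50): (110.0, 20.0),
}

def sm_z_score(value, sex, age):
    """Age- and sex-adjusted Z-score for skeletal muscle area."""
    mean, sd = SM_NORMS[(sex, age // 10 * 10)]
    return (value - mean) / sd

print(sm_z_score(135.0, "M", 54))  # 1 SD below the male 50s norm
```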


Subjects
Pancreatitis , Acute Disease , Body Composition , Humans , Obesity/complications , Pancreatitis/diagnostic imaging , Prospective Studies , Tomography, X-Ray Computed/methods
10.
Pancreatology ; 21(8): 1524-1530, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34507900

ABSTRACT

BACKGROUND & AIMS: Increased intrapancreatic fat is associated with pancreatic diseases; however, there are no established objective diagnostic criteria for fatty pancreas. On non-contrast computed tomography (CT), adipose tissue shows negative Hounsfield Unit (HU) attenuations (-150 to -30 HU). Using whole organ segmentation on non-contrast CT, we aimed to describe whole gland pancreatic attenuation and establish 5th and 10th percentile thresholds across a spectrum of age and sex. Subsequently, we aimed to evaluate the association between low pancreatic HU and risk of pancreatic ductal adenocarcinoma (PDAC). METHODS: The whole pancreas was segmented in 19,456 images from 469 non-contrast CT scans. A convolutional neural network was trained to assist pancreas segmentation. Mean pancreatic HU, volume, and body composition metrics were calculated. The lower 5th and 10th percentiles for mean pancreatic HU were identified, examining the association with age and sex. Pre-diagnostic CT scans from patients who later developed PDAC were compared to cancer-free controls. RESULTS: A mean pancreatic HU below the 5th percentile was significantly associated with increased BMI (OR 1.07; 1.03-1.11), visceral fat (OR 1.37; 1.15-1.64), total abdominal fat (OR 1.12; 1.03-1.22), and diabetes mellitus type 1 (OR 6.76; 1.68-27.28). Compared to controls, pre-diagnostic scans in PDAC cases had lower mean whole gland pancreatic HU (-0.2 vs 7.8, p = 0.026). CONCLUSION: In this study, we report the age- and sex-specific distribution of pancreatic whole-gland CT attenuation. Compared to controls, mean whole gland pancreatic HU is significantly lower in the pre-diagnostic phase of PDAC.
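Once the pancreas is segmented, the criterion reduces to comparing the gland's mean attenuation against a percentile threshold from a normative distribution. A sketch; the threshold and voxel values below are made-up illustrations:

```python
def mean_pancreatic_hu(voxel_hus):
    """Mean attenuation over all voxels in the pancreas segmentation."""
    return sum(voxel_hus) / len(voxel_hus)

def below_percentile_threshold(voxel_hus, threshold_hu):
    """Flag a gland whose mean HU falls below an age/sex-specific percentile
    threshold. The threshold must come from a normative distribution; the
    value used in the example is hypothetical."""
    return mean_pancreatic_hu(voxel_hus) < threshold_hu

# Fat replacement pulls the mean toward adipose attenuation (-150 to -30 HU)
normal_gland = [45.0, 40.0, 50.0, 38.0]
fatty_gland = [20.0, -40.0, 5.0, -10.0]
print(below_percentile_threshold(normal_gland, threshold_hu=10.0),
      below_percentile_threshold(fatty_gland, threshold_hu=10.0))
```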


Subjects
Carcinoma, Pancreatic Ductal , Pancreatic Diseases , Pancreatic Neoplasms , Artificial Intelligence , Body Composition , Female , Humans , Male , Pancreas/diagnostic imaging , Pancreatic Neoplasms/diagnostic imaging , Tomography, X-Ray Computed , Pancreatic Neoplasms
12.
J Arthroplasty ; 36(7): 2510-2517.e6, 2021 07.
Article in English | MEDLINE | ID: mdl-33678445

ABSTRACT

BACKGROUND: Inappropriate acetabular component angular position is believed to increase the risk of hip dislocation after total hip arthroplasty. However, manual measurement of these angles is time consuming and prone to interobserver variability. The purpose of this study was to develop a deep learning tool to automate the measurement of acetabular component angles on postoperative radiographs. METHODS: Two cohorts of 600 anteroposterior (AP) pelvis and 600 cross-table lateral hip postoperative radiographs were used to develop deep learning models to segment the acetabular component and the ischial tuberosities. Cohorts were manually annotated, augmented, and randomly split to train-validation-test data sets on an 8:1:1 basis. Two U-Net convolutional neural network models (one for AP and one for cross-table lateral radiographs) were trained for 50 epochs. Image processing was then deployed to measure the acetabular component angles on the predicted masks for anatomical landmarks. Performance of the tool was tested on 80 AP and 80 cross-table lateral radiographs. RESULTS: The convolutional neural network models achieved a mean Dice similarity coefficient of 0.878 and 0.903 on AP and cross-table lateral test data sets, respectively. The mean difference between human-level and machine-level measurements was 1.35° (σ = 1.07°) and 1.39° (σ = 1.27°) for the inclination and anteversion angles, respectively. Differences of 5° or more between human-level and machine-level measurements were observed in less than 2.5% of cases. CONCLUSION: We developed a highly accurate deep learning tool to automate the measurement of angular position of acetabular components for use in both clinical and research settings. LEVEL OF EVIDENCE: III.
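The final image-processing step reduces to geometry: the angle between the cup's long axis and an anatomical reference line (e.g., through the ischial tuberosities). A sketch of that step, assuming landmark extraction from the predicted masks is already done; the coordinates are hypothetical:

```python
import math

def line_angle_deg(p, q):
    """Angle of the line through p and q, in degrees from horizontal."""
    return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

def inclination_angle(cup_axis, reference_line):
    """Acute angle between the acetabular component's long axis and the
    inter-ischial reference line."""
    a = abs(line_angle_deg(*cup_axis) - line_angle_deg(*reference_line)) % 180.0
    return min(a, 180.0 - a)

# Hypothetical landmark coordinates (pixels)
cup = ((100.0, 200.0), (180.0, 280.0))     # cup axis at 45 degrees
ischial = ((50.0, 300.0), (250.0, 300.0))  # horizontal reference line
print(round(inclination_angle(cup, ischial), 1))
```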


Subjects
Arthroplasty, Replacement, Hip , Deep Learning , Hip Prosthesis , Acetabulum/diagnostic imaging , Acetabulum/surgery , Arthroplasty, Replacement, Hip/adverse effects , Hip Prosthesis/adverse effects , Humans , Radiography
13.
Radiology ; 299(2): 313-323, 2021 05.
Article in English | MEDLINE | ID: mdl-33687284

ABSTRACT

Background Missing MRI sequences represent an obstacle in the development and use of deep learning (DL) models that require multiple inputs. Purpose To determine if synthesizing brain MRI scans using generative adversarial networks (GANs) allows for the use of a DL model for brain lesion segmentation that requires T1-weighted images, postcontrast T1-weighted images, fluid-attenuated inversion recovery (FLAIR) images, and T2-weighted images. Materials and Methods In this retrospective study, brain MRI scans obtained between 2011 and 2019 were collected, and scenarios were simulated in which the T1-weighted images and FLAIR images were missing. Two GANs were trained, validated, and tested using 210 glioblastomas (GBMs) (Multimodal Brain Tumor Image Segmentation Benchmark [BRATS] 2017) to generate T1-weighted images from postcontrast T1-weighted images and FLAIR images from T2-weighted images. The quality of the generated images was evaluated with mean squared error (MSE) and the structural similarity index (SSI). The segmentations obtained with the generated scans were compared with those obtained with the original MRI scans using the dice similarity coefficient (DSC). The GANs were validated on sets of GBMs and central nervous system lymphomas from the authors' institution to assess their generalizability. Statistical analysis was performed using the Mann-Whitney, Friedman, and Dunn tests. Results Two hundred ten GBMs from the BRATS data set and 46 GBMs (mean patient age, 58 years ± 11 [standard deviation]; 27 men [59%] and 19 women [41%]) and 21 central nervous system lymphomas (mean patient age, 67 years ± 13; 12 men [57%] and nine women [43%]) from the authors' institution were evaluated. The median MSE for the generated T1-weighted images ranged from 0.005 to 0.013, and the median MSE for the generated FLAIR images ranged from 0.004 to 0.103. 
The median SSI ranged from 0.82 to 0.92 for the generated T1-weighted images and from 0.76 to 0.92 for the generated FLAIR images. The median DSCs for the segmentation of the whole lesion, the FLAIR hyperintensities, and the contrast-enhanced areas using the generated scans were 0.82, 0.71, and 0.92, respectively, when replacing both T1-weighted and FLAIR images; 0.84, 0.74, and 0.97 when replacing only the FLAIR images; and 0.97, 0.95, and 0.92 when replacing only the T1-weighted images. Conclusion Brain MRI scans generated using generative adversarial networks can be used as deep learning model inputs when MRI sequences are missing. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Zhong in this issue. An earlier incorrect version of this article appeared online. This article was corrected on April 12, 2021.
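The two evaluation metrics used above, MSE for image fidelity and the Dice similarity coefficient for segmentation overlap, are simple to state. A sketch on toy flattened arrays (the example values are illustrative, not from the study):

```python
def mse(a, b):
    """Mean squared error between two images (flattened to lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def dice(a, b):
    """Dice similarity coefficient between two binary masks (flattened)."""
    intersection = sum(1 for x, y in zip(a, b) if x and y)
    return 2.0 * intersection / (sum(a) + sum(b))

# Toy generated-vs-original intensities
generated = [0.1, 0.0, 0.9, 0.8]
original = [0.0, 0.0, 1.0, 1.0]
print(round(mse(generated, original), 3))

# Toy predicted-vs-reference segmentation masks
pred_mask = [1, 1, 1, 0, 0]
true_mask = [0, 1, 1, 1, 0]
print(dice(pred_mask, true_mask))
```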


Subjects
Brain Neoplasms/diagnostic imaging , Deep Learning , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Lymphoma/diagnostic imaging , Magnetic Resonance Imaging/methods , Aged , Contrast Media , Female , Humans , Male , Middle Aged , Retrospective Studies
14.
Med Phys ; 47(11): 5609-5618, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32740931

ABSTRACT

PURPOSE: Organ segmentation of computed tomography (CT) imaging is essential for radiotherapy treatment planning. Treatment planning requires segmentation not only of the affected tissue, but nearby healthy organs-at-risk, which is laborious and time-consuming. We present a fully automated segmentation method based on the three-dimensional (3D) U-Net convolutional neural network (CNN) capable of whole abdomen and pelvis segmentation into 33 unique organ and tissue structures, including tissues that may be overlooked by other automated segmentation approaches such as adipose tissue, skeletal muscle, and connective tissue and vessels. Whole abdomen segmentation is capable of quantifying exposure beyond a handful of organs-at-risk to all tissues within the abdomen. METHODS: Sixty-six (66) CT examinations of 64 individuals were included in the training and validation sets and 18 CT examinations from 16 individuals were included in the test set. All pixels in each examination were segmented by image analysts (with physician correction) and assigned one of 33 labels. Segmentation was performed with a 3D U-Net variant architecture which included residual blocks, and model performance was quantified on 18 test cases. Human interobserver variability (using semiautomated segmentation) was also reported on two scans, and manual interobserver variability of three individuals was reported on one scan. Model performance was also compared to several of the best models reported in the literature for multiple organ segmentation. RESULTS: Dice coefficients for the 3D U-Net model ranged from 0.95 in the liver to 0.51 in the renal arteries (0.93 in the kidneys, 0.79 in the pancreas, and 0.69 in the adrenals). Model accuracy is within 5% of human segmentation in eight of 19 organs and within 10% accuracy in 13 of 19 organs.
Fully automated deep learning-based segmentation of CT abdomen has the potential to improve both the speed and accuracy of radiotherapy dose prediction for organs-at-risk.


Assuntos
Abdome , Redes Neurais de Computação , Abdome/diagnóstico por imagem , Humanos , Processamento de Imagem Assistida por Computador , Órgãos em Risco , Pelve/diagnóstico por imagem , Tomografia Computadorizada por Raios X
15.
Radiol Artif Intell ; 2(5): e190183, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33937839

ABSTRACT

PURPOSE: To develop a deep learning model that segments intracranial structures on head CT scans. MATERIALS AND METHODS: In this retrospective study, a primary dataset containing 62 normal noncontrast head CT scans from 62 patients (mean age, 73 years; age range, 27-95 years) acquired between August and December 2018 was used for model development. Eleven intracranial structures were manually annotated on the axial oblique series. The dataset was split into 40 scans for training, 10 for validation, and 12 for testing. After initial training, eight model configurations were evaluated on the validation dataset and the highest performing model was evaluated on the test dataset. Interobserver variability was reported using multirater consensus labels obtained from the test dataset. To ensure that the model learned generalizable features, it was further evaluated on two secondary datasets containing 12 volumes with idiopathic normal pressure hydrocephalus (iNPH) and 30 normal volumes from a publicly available source. Statistical significance was determined using categorical linear regression with P < .05. RESULTS: Overall Dice coefficient on the primary test dataset was 0.84 ± 0.05 (standard deviation). Performance ranged from 0.96 ± 0.01 (brainstem and cerebrum) to 0.74 ± 0.06 (internal capsule). Dice coefficients were comparable to expert annotations and exceeded those of existing segmentation methods. The model remained robust on external CT scans and scans demonstrating ventricular enlargement. The use of within-network normalization and class weighting facilitated learning of underrepresented classes. CONCLUSION: Automated segmentation of CT neuroanatomy is feasible with a high degree of accuracy. The model generalized to external CT scans as well as scans demonstrating iNPH. Supplemental material is available for this article. © RSNA, 2020.
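One common way to implement the class weighting mentioned above is inverse-frequency weights normalized to a mean of 1, so small structures (like the internal capsule) contribute more to the loss; the exact scheme the authors used may differ, and the voxel counts below are hypothetical.

```python
def class_weights(voxel_counts):
    """Inverse-frequency class weights, normalized so the mean weight is 1.0."""
    total = sum(voxel_counts.values())
    raw = {label: total / count for label, count in voxel_counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {label: w / mean for label, w in raw.items()}

# Hypothetical voxel counts: underrepresented structures get larger weights
counts = {"cerebrum": 900_000, "internal_capsule": 10_000, "brainstem": 90_000}
weights = class_weights(counts)
print({k: round(v, 3) for k, v in weights.items()})
```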

16.
J Am Coll Radiol ; 16(9 Pt B): 1318-1328, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31492410

ABSTRACT

Ultrasound is the most commonly used imaging modality in clinical practice because it is a nonionizing, low-cost, and portable point-of-care imaging tool that provides real-time images. Artificial intelligence (AI)-powered ultrasound is maturing and moving closer to routine clinical application, driven by an increased need for efficient and objective acquisition and evaluation of ultrasound images. Because ultrasound images involve operator-, patient-, and scanner-dependent variations, the adaptation of classical machine learning methods to clinical applications becomes challenging. With their self-learning ability, deep-learning (DL) methods are able to harness exponentially growing graphics processing unit computing power to identify abstract and complex imaging features. This has given rise to tremendous opportunities such as providing robust and generalizable AI models for improving image acquisition, real-time assessment of image quality, objective diagnosis and detection of diseases, and optimizing ultrasound clinical workflow. In this report, the authors review current DL approaches and research directions in rapidly advancing ultrasound technology and present their outlook on future directions and trends for DL techniques to further improve diagnosis, reduce health care cost, and optimize ultrasound clinical workflow.


Subjects
Deep Learning/trends , Quality Improvement , Ultrasonography, Doppler, Color/methods , Workflow , Algorithms , Artificial Intelligence , Breast Neoplasms/diagnostic imaging , Female , Forecasting , Humans , Liver Neoplasms/diagnostic imaging , Male , Surveys and Questionnaires , Thyroid Neoplasms/diagnostic imaging , United States
17.
J Digit Imaging ; 32(4): 571-581, 2019 08.
Article in English | MEDLINE | ID: mdl-31089974

ABSTRACT

Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to "learn" from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep-learning. A major goal driving the development of the software was to create an environment which enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports using fully automated deep-learning methods, semi-automated methods, and manual methods to annotate medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms to enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.


Subjects
Datasets as Topic, Deep Learning, Diagnostic Imaging/methods, Computer-Assisted Image Processing/methods, Radiology Information Systems, Humans
18.
Gastroenterology ; 156(6): 1742-1752, 2019 05.
Article in English | MEDLINE | ID: mdl-30677401

ABSTRACT

BACKGROUND & AIMS: Identifying metabolic abnormalities that occur before pancreatic ductal adenocarcinoma (PDAC) diagnosis could increase chances for early detection. We collected data on changes in metabolic parameters (glucose; serum lipids; triglycerides; total, low-density, and high-density cholesterol; and total body weight) and soft tissues (abdominal subcutaneous adipose tissue [SAT], visceral adipose tissue [VAT], and muscle) from patients during the 5 years before they received a diagnosis of PDAC. METHODS: We collected data from 219 patients with a diagnosis of PDAC (patients) and 657 healthy individuals (controls) from the Rochester Epidemiology Project, from 2000 through 2015. We compared metabolic profiles of patients with those of age- and sex-matched controls, constructing temporal profiles of fasting blood glucose, serum lipids including triglycerides, cholesterol profiles, and body weight and temperature for 60 months before the diagnosis of PDAC (index date). To construct the temporal profile of soft tissue changes, we collected computed tomography scans from 68 patients, comparing baseline (>18 months before diagnosis) areas of SAT, VAT, and muscle at the L2/L3 vertebrae with those of later scans until the time of diagnosis. SAT and VAT, isolated from healthy individuals, were exposed to exosomes isolated from PDAC cell lines and analyzed by RNA sequencing. SAT was collected from KRAS+/LSLG12D P53flox/flox mice with PDACs, C57/BL6 (control) mice, and 5 patients and analyzed by histology and immunohistochemistry. RESULTS: There were no significant differences in metabolic or soft tissue features of patients vs controls until 30 months before PDAC diagnosis. In the 30 to 18 months before PDAC diagnosis (phase 1, hyperglycemia), a significant proportion of patients developed hyperglycemia, compared with controls, without soft tissue changes.
In the 18 to 6 months before PDAC diagnosis (phase 2, pre-cachexia), patients had significant increases in hyperglycemia and decreases in serum lipids, body weight, and SAT, with preserved VAT and muscle. In the 6 to 0 months before PDAC diagnosis (phase 3, cachexia), a significant proportion of patients had hyperglycemia compared with controls, and patients had significant reductions in all serum lipids, SAT, VAT, and muscle. We believe the patients had browning of SAT, based on increases in body temperature, starting 18 months before PDAC diagnosis. We observed expression of uncoupling protein 1 (UCP1) in SAT exposed to PDAC exosomes, SAT from mice with PDACs, and SAT from all 5 patients but only 1 of 4 controls. CONCLUSIONS: We identified 3 phases of metabolic and soft tissue changes that precede a diagnosis of PDAC. Loss of SAT starts 18 months before PDAC identification, and is likely due to browning. Overexpression of UCP1 in SAT might be a biomarker of early-stage PDAC, but further studies are needed.


Subjects
Cachexia/etiology, Pancreatic Ductal Carcinoma/blood, Pancreatic Ductal Carcinoma/diagnosis, Hyperglycemia/blood, Pancreatic Neoplasms/blood, Pancreatic Neoplasms/diagnosis, Adipocytes/metabolism, Adipocytes/pathology, Animals, Blood Glucose/metabolism, Body Temperature, Body Weight, Pancreatic Ductal Carcinoma/complications, Pancreatic Ductal Carcinoma/genetics, Case-Control Studies, Cultured Cells, HDL Cholesterol/blood, LDL Cholesterol/blood, Exosomes, Humans, Hyperglycemia/etiology, Intra-Abdominal Fat/diagnostic imaging, Intra-Abdominal Fat/pathology, Mice, Middle Aged, Skeletal Muscle/diagnostic imaging, Pancreatic Neoplasms/complications, Pancreatic Neoplasms/genetics, Messenger RNA/metabolism, Retrospective Studies, Abdominal Subcutaneous Fat/diagnostic imaging, Abdominal Subcutaneous Fat/pathology, Time Factors, X-Ray Computed Tomography, Triglycerides/blood, Uncoupling Protein 1/genetics, Up-Regulation
19.
Radiology ; 290(3): 669-679, 2019 03.
Article in English | MEDLINE | ID: mdl-30526356

ABSTRACT

Purpose To develop and evaluate a fully automated algorithm for segmenting the abdomen from CT to quantify body composition. Materials and Methods For this retrospective study, a convolutional neural network based on the U-Net architecture was trained to perform abdominal segmentation on a data set of 2430 two-dimensional CT examinations and was tested on 270 CT examinations. It was further tested on a separate data set of 2369 patients with hepatocellular carcinoma (HCC). CT examinations were performed between 1997 and 2015. The mean age of patients was 67 years; for male patients, it was 67 years (range, 29-94 years), and for female patients, it was 66 years (range, 31-97 years). Differences in segmentation performance were assessed by using two-way analysis of variance with Bonferroni correction. Results Compared with reference segmentation, the model for this study achieved Dice scores (mean ± standard deviation) of 0.98 ± 0.03, 0.96 ± 0.02, and 0.97 ± 0.01 in the test set, and 0.94 ± 0.05, 0.92 ± 0.04, and 0.98 ± 0.02 in the HCC data set, for the subcutaneous, muscle, and visceral adipose tissue compartments, respectively. Performance met or exceeded that of expert manual segmentation. Conclusion Model performance met or exceeded the accuracy of expert manual segmentation of CT examinations for both the test data set and the hepatocellular carcinoma data set. The model generalized well to multiple levels of the abdomen and may be capable of fully automated quantification of body composition metrics in three-dimensional CT examinations. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Chang in this issue.
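The Dice score used above to evaluate segmentation quality measures overlap between a predicted mask and a reference mask. A minimal sketch for binary masks (the function name and toy arrays are illustrative; NumPy assumed):

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND ref| / (|pred| + |ref|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: count as perfect agreement
    return float(2.0 * intersection / denom)

# toy 2-D "segmentations": 2 overlapping voxels, mask sizes 3 and 2
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 1, 0], [0, 0, 0]])
score = dice_score(a, b)  # 2*2 / (3+2) = 0.8
```

A Dice score of 1.0 means perfect overlap with the reference segmentation; the 0.92-0.98 scores reported above indicate near-expert agreement per tissue compartment.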


Subjects
Body Composition, Deep Learning, Automated Pattern Recognition, Computer-Assisted Radiographic Image Interpretation/methods, Abdominal Radiography, X-Ray Computed Tomography, Adult, Aged, Aged 80 and Over, Algorithms, Hepatocellular Carcinoma/diagnostic imaging, Humans, Liver Neoplasms/diagnostic imaging, Middle Aged, Retrospective Studies
20.
AJR Am J Roentgenol ; 211(6): 1184-1193, 2018 12.
Article in English | MEDLINE | ID: mdl-30403527

ABSTRACT

OBJECTIVE: Deep learning has shown great promise for improving medical image classification tasks. However, knowing what aspects of an image the deep learning system uses or, in a manner of speaking, sees to make its prediction is difficult. MATERIALS AND METHODS: Within a radiologic imaging context, we investigated the utility of methods designed to identify features within images on which deep learning activates. In this study, we developed a classifier to identify contrast enhancement phase from whole-slice CT data. We then used this classifier as an easily interpretable system to explore the utility of class activation maps (CAMs), gradient-weighted class activation maps (Grad-CAMs), saliency maps, guided backpropagation maps, and the saliency activation map (SAM), a novel map reported here, to identify image features the model used when performing prediction. RESULTS: All techniques identified voxels within imaging that the classifier used. SAMs had greater specificity than did guided backpropagation maps, CAMs, and Grad-CAMs at identifying voxels within imaging that the model used to perform prediction. At shallow network layers, SAMs had greater specificity than Grad-CAMs at identifying input voxels that the layers within the model used to perform prediction. CONCLUSION: As a whole, voxel-level visualizations and visualizations of the imaging features that activate shallow network layers are powerful techniques to identify features that deep learning models use when performing prediction.
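The Grad-CAM technique compared above weights a layer's feature maps by their gradient-pooled importance and keeps only positive evidence. A minimal NumPy sketch of that weighting step (array shapes and toy values are assumptions, not the authors' implementation):

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Grad-CAM-style heat map from one layer's activations and gradients.

    feature_maps: (K, H, W) activations A_k for K channels
    gradients:    (K, H, W) gradients of the class score w.r.t. A_k
    """
    alpha = gradients.mean(axis=(1, 2))              # channel weights: pooled gradients
    cam = np.tensordot(alpha, feature_maps, axes=1)  # weighted sum over channels -> (H, W)
    cam = np.maximum(cam, 0.0)                       # ReLU keeps only positive evidence
    if cam.max() > 0:
        cam /= cam.max()                             # normalize to [0, 1]
    return cam

# toy layer with two channels: the first supports the class, the second opposes it
fmap = np.array([[[1.0, 0.0], [0.0, 0.0]],
                 [[0.0, 2.0], [0.0, 0.0]]])
grad = np.array([[[1.0, 1.0], [1.0, 1.0]],
                 [[-1.0, -1.0], [-1.0, -1.0]]])
heat = grad_cam(fmap, grad)
```

In a real network the activations and gradients come from a forward and backward pass through a chosen convolutional layer; the resulting heat map is then upsampled to the input image to show where the model "looked."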


Subjects
Deep Learning, Computer-Assisted Image Processing, X-Ray Computed Tomography, Algorithms, Humans, Sensitivity and Specificity