Results 1 - 20 of 20
1.
J Arthroplasty ; 38(10): 1943-1947, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37598784

ABSTRACT

Electronic health records have facilitated the extraction and analysis of a vast amount of data with many variables for clinical care and research. Conventional regression-based statistical methods may not capture all the complexities in high-dimensional data analysis. Therefore, researchers are increasingly using machine learning (ML)-based methods to better handle these more challenging datasets for the discovery of hidden patterns in patients' data and for classification and predictive purposes. This article describes commonly used ML methods in structured data analysis with examples in orthopedic surgery. We present practical considerations in starting an ML project and appraising published studies in this field.


Subjects
Electronic Health Records, Machine Learning, Humans
2.
Radiology ; 299(2): 313-323, 2021 May.
Article in English | MEDLINE | ID: mdl-33687284

ABSTRACT

Background Missing MRI sequences represent an obstacle in the development and use of deep learning (DL) models that require multiple inputs. Purpose To determine if synthesizing brain MRI scans using generative adversarial networks (GANs) allows for the use of a DL model for brain lesion segmentation that requires T1-weighted images, postcontrast T1-weighted images, fluid-attenuated inversion recovery (FLAIR) images, and T2-weighted images. Materials and Methods In this retrospective study, brain MRI scans obtained between 2011 and 2019 were collected, and scenarios were simulated in which the T1-weighted images and FLAIR images were missing. Two GANs were trained, validated, and tested using 210 glioblastomas (GBMs) (Multimodal Brain Tumor Image Segmentation Benchmark [BRATS] 2017) to generate T1-weighted images from postcontrast T1-weighted images and FLAIR images from T2-weighted images. The quality of the generated images was evaluated with mean squared error (MSE) and the structural similarity index (SSI). The segmentations obtained with the generated scans were compared with those obtained with the original MRI scans using the dice similarity coefficient (DSC). The GANs were validated on sets of GBMs and central nervous system lymphomas from the authors' institution to assess their generalizability. Statistical analysis was performed using the Mann-Whitney, Friedman, and Dunn tests. Results Two hundred ten GBMs from the BRATS data set and 46 GBMs (mean patient age, 58 years ± 11 [standard deviation]; 27 men [59%] and 19 women [41%]) and 21 central nervous system lymphomas (mean patient age, 67 years ± 13; 12 men [57%] and nine women [43%]) from the authors' institution were evaluated. The median MSE for the generated T1-weighted images ranged from 0.005 to 0.013, and the median MSE for the generated FLAIR images ranged from 0.004 to 0.103. The median SSI ranged from 0.82 to 0.92 for the generated T1-weighted images and from 0.76 to 0.92 for the generated FLAIR images. The median DSCs for the segmentation of the whole lesion, the FLAIR hyperintensities, and the contrast-enhanced areas using the generated scans were 0.82, 0.71, and 0.92, respectively, when replacing both T1-weighted and FLAIR images; 0.84, 0.74, and 0.97 when replacing only the FLAIR images; and 0.97, 0.95, and 0.92 when replacing only the T1-weighted images. Conclusion Brain MRI scans generated using generative adversarial networks can be used as deep learning model inputs in case MRI sequences are missing. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Zhong in this issue. An earlier incorrect version of this article appeared online. This article was corrected on April 12, 2021.
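
For readers reproducing this kind of evaluation, the three metrics above reduce to a few lines of NumPy/scikit-image. This is a minimal sketch, not the authors' code; it assumes images are arrays normalized to [0, 1] and masks are binary.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mse(original: np.ndarray, generated: np.ndarray) -> float:
    """Mean squared error between two intensity-normalized images."""
    return float(np.mean((original - generated) ** 2))

def ssim(original: np.ndarray, generated: np.ndarray) -> float:
    """Structural similarity index; data_range assumes images scaled to [0, 1]."""
    return float(structural_similarity(original, generated, data_range=1.0))

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom
```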


Subjects
Brain Neoplasms/diagnostic imaging, Deep Learning, Glioblastoma/diagnostic imaging, Computer-Assisted Image Processing/methods, Lymphoma/diagnostic imaging, Magnetic Resonance Imaging/methods, Aged, Contrast Media, Female, Humans, Male, Middle Aged, Retrospective Studies
3.
Pancreatology ; 21(8): 1524-1530, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34507900

ABSTRACT

BACKGROUND & AIMS: Increased intrapancreatic fat is associated with pancreatic diseases; however, there are no established objective diagnostic criteria for fatty pancreas. On non-contrast computed tomography (CT), adipose tissue shows negative Hounsfield Unit (HU) attenuations (-150 to -30 HU). Using whole organ segmentation on non-contrast CT, we aimed to describe whole gland pancreatic attenuation and establish 5th and 10th percentile thresholds across a spectrum of age and sex. Subsequently, we aimed to evaluate the association between low pancreatic HU and risk of pancreatic ductal adenocarcinoma (PDAC). METHODS: The whole pancreas was segmented in 19,456 images from 469 non-contrast CT scans. A convolutional neural network was trained to assist with pancreas segmentation. Mean pancreatic HU, volume, and body composition metrics were calculated. The lower 5th and 10th percentiles for mean pancreatic HU were identified, and the association with age and sex was examined. Pre-diagnostic CT scans from patients who later developed PDAC were compared to cancer-free controls. RESULTS: Mean pancreatic HU below the 5th percentile was significantly associated with increased BMI (OR 1.07; 1.03-1.11), visceral fat (OR 1.37; 1.15-1.64), total abdominal fat (OR 1.12; 1.03-1.22), and type 1 diabetes mellitus (OR 6.76; 1.68-27.28). Compared to controls, pre-diagnostic scans in PDAC cases had lower mean whole gland pancreatic HU (-0.2 vs 7.8, p = 0.026). CONCLUSION: In this study, we report the age- and sex-specific distribution of pancreatic whole-gland CT attenuation. Compared to controls, mean whole gland pancreatic HU is significantly lower in the pre-diagnostic phase of PDAC.
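
As a rough illustration of the measurement described above (not the study's pipeline), the mean pancreatic attenuation is simply the average HU inside the segmentation mask, and the thresholds are age- and sex-stratified percentiles; the column names below are illustrative.

```python
import numpy as np
import pandas as pd

def mean_pancreatic_hu(ct_hu: np.ndarray, pancreas_mask: np.ndarray) -> float:
    """Mean Hounsfield Unit attenuation of the voxels inside the pancreas mask."""
    return float(ct_hu[pancreas_mask.astype(bool)].mean())

def hu_percentile_thresholds(cohort: pd.DataFrame) -> pd.DataFrame:
    """5th/10th percentile of mean pancreatic HU by age decade and sex.

    Expects columns 'age', 'sex', and 'mean_hu' (illustrative names, not from the study).
    """
    grouped = (cohort.assign(age_decade=(cohort["age"] // 10) * 10)
                     .groupby(["sex", "age_decade"])["mean_hu"])
    return grouped.quantile([0.05, 0.10]).unstack()
```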


Subjects
Pancreatic Ductal Carcinoma, Pancreatic Diseases, Pancreatic Neoplasms, Artificial Intelligence, Body Composition, Female, Humans, Male, Pancreas/diagnostic imaging, Pancreatic Neoplasms/diagnostic imaging, X-Ray Computed Tomography
4.
J Arthroplasty ; 36(7): 2510-2517.e6, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33678445

ABSTRACT

BACKGROUND: Inappropriate acetabular component angular position is believed to increase the risk of hip dislocation after total hip arthroplasty. However, manual measurement of these angles is time consuming and prone to interobserver variability. The purpose of this study was to develop a deep learning tool to automate the measurement of acetabular component angles on postoperative radiographs. METHODS: Two cohorts of 600 anteroposterior (AP) pelvis and 600 cross-table lateral hip postoperative radiographs were used to develop deep learning models to segment the acetabular component and the ischial tuberosities. Cohorts were manually annotated, augmented, and randomly split to train-validation-test data sets on an 8:1:1 basis. Two U-Net convolutional neural network models (one for AP and one for cross-table lateral radiographs) were trained for 50 epochs. Image processing was then deployed to measure the acetabular component angles on the predicted masks for anatomical landmarks. Performance of the tool was tested on 80 AP and 80 cross-table lateral radiographs. RESULTS: The convolutional neural network models achieved a mean Dice similarity coefficient of 0.878 and 0.903 on AP and cross-table lateral test data sets, respectively. The mean difference between human-level and machine-level measurements was 1.35° (σ = 1.07°) and 1.39° (σ = 1.27°) for the inclination and anteversion angles, respectively. Differences of 5° or more between human-level and machine-level measurements were observed in less than 2.5% of cases. CONCLUSION: We developed a highly accurate deep learning tool to automate the measurement of angular position of acetabular components for use in both clinical and research settings. LEVEL OF EVIDENCE: III.
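
One plausible way to turn predicted masks into an inclination angle (the abstract does not specify the exact image-processing steps, so this is an assumption, not the authors' method) is to take the principal axis of the cup mask and reference it to the trans-ischial line joining the two ischial tuberosity centroids.

```python
import numpy as np

def principal_axis_angle(mask: np.ndarray) -> float:
    """Orientation (degrees) of a binary mask's long axis via PCA of its pixel coordinates."""
    ys, xs = np.nonzero(mask)
    coords = np.column_stack([xs, ys]).astype(float)
    coords -= coords.mean(axis=0)
    # The leading eigenvector of the coordinate covariance gives the long axis.
    _, vecs = np.linalg.eigh(np.cov(coords.T))
    major = vecs[:, -1]
    return float(np.degrees(np.arctan2(major[1], major[0])))

def inclination_angle(cup_mask: np.ndarray, left_ischium: np.ndarray,
                      right_ischium: np.ndarray) -> float:
    """Cup inclination relative to the line joining the ischial tuberosity centroids."""
    def centroid(m: np.ndarray) -> np.ndarray:
        ys, xs = np.nonzero(m)
        return np.array([xs.mean(), ys.mean()])
    dx, dy = centroid(right_ischium) - centroid(left_ischium)
    reference = np.degrees(np.arctan2(dy, dx))
    angle = abs(principal_axis_angle(cup_mask) - reference) % 180.0
    return min(angle, 180.0 - angle)
```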


Subjects
Hip Arthroplasty, Deep Learning, Hip Prosthesis, Acetabulum/diagnostic imaging, Acetabulum/surgery, Hip Arthroplasty/adverse effects, Hip Prosthesis/adverse effects, Humans, Radiography
5.
Radiology ; 290(3): 669-679, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30526356

ABSTRACT

Purpose To develop and evaluate a fully automated algorithm for segmenting the abdomen from CT to quantify body composition. Materials and Methods For this retrospective study, a convolutional neural network based on the U-Net architecture was trained to perform abdominal segmentation on a data set of 2430 two-dimensional CT examinations and was tested on 270 CT examinations. It was further tested on a separate data set of 2369 patients with hepatocellular carcinoma (HCC). CT examinations were performed between 1997 and 2015. The mean age of patients was 67 years; for male patients, it was 67 years (range, 29-94 years), and for female patients, it was 66 years (range, 31-97 years). Differences in segmentation performance were assessed by using two-way analysis of variance with Bonferroni correction. Results Compared with reference segmentation, the model for this study achieved Dice scores (mean ± standard deviation) of 0.98 ± 0.03, 0.96 ± 0.02, and 0.97 ± 0.01 in the test set, and 0.94 ± 0.05, 0.92 ± 0.04, and 0.98 ± 0.02 in the HCC data set, for the subcutaneous, muscle, and visceral adipose tissue compartments, respectively. Performance met or exceeded that of expert manual segmentation. Conclusion Model performance met or exceeded the accuracy of expert manual segmentation of CT examinations for both the test data set and the hepatocellular carcinoma data set. The model generalized well to multiple levels of the abdomen and may be capable of fully automated quantification of body composition metrics in three-dimensional CT examinations. © RSNA, 2018 Online supplemental material is available for this article. See also the editorial by Chang in this issue.
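
Once the compartments are segmented, the body composition metric itself is pixel counting scaled by pixel spacing; a minimal sketch (illustrative, not the published code) is shown below.

```python
import numpy as np

def compartment_area_cm2(mask: np.ndarray, pixel_spacing_mm: tuple[float, float]) -> float:
    """Cross-sectional area (cm^2) of one segmented compartment on a single CT slice."""
    pixel_area_mm2 = pixel_spacing_mm[0] * pixel_spacing_mm[1]
    return float(mask.astype(bool).sum() * pixel_area_mm2 / 100.0)

# Example usage with hypothetical masks and a 0.78 x 0.78 mm pixel spacing:
# areas = {name: compartment_area_cm2(m, (0.78, 0.78))
#          for name, m in {"subcutaneous": sat_mask, "muscle": muscle_mask, "visceral": vat_mask}.items()}
```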


Subjects
Body Composition, Deep Learning, Automated Pattern Recognition, Computer-Assisted Radiographic Image Interpretation/methods, Abdominal Radiography, X-Ray Computed Tomography, Adult, Aged, Aged 80 and over, Algorithms, Hepatocellular Carcinoma/diagnostic imaging, Humans, Liver Neoplasms/diagnostic imaging, Middle Aged, Retrospective Studies
6.
J Digit Imaging ; 32(4): 571-581, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31089974

ABSTRACT

Deep-learning algorithms typically fall within the domain of supervised artificial intelligence and are designed to "learn" from annotated data. Deep-learning models require large, diverse training datasets for optimal model convergence. The effort to curate these datasets is widely regarded as a barrier to the development of deep-learning systems. We developed RIL-Contour to accelerate medical image annotation for and with deep-learning. A major goal driving the development of the software was to create an environment which enables clinically oriented users to utilize deep-learning models to rapidly annotate medical imaging. RIL-Contour supports using fully automated deep-learning methods, semi-automated methods, and manual methods to annotate medical imaging with voxel and/or text annotations. To reduce annotation error, RIL-Contour promotes the standardization of image annotations across a dataset. RIL-Contour accelerates medical imaging annotation through the process of annotation by iterative deep learning (AID). The underlying concept of AID is to iteratively annotate, train, and utilize deep-learning models during the process of dataset annotation and model development. To enable this, RIL-Contour supports workflows in which multiple image analysts annotate medical images, radiologists approve the annotations, and data scientists utilize these annotations to train deep-learning models. To automate the feedback loop between data scientists and image analysts, RIL-Contour provides mechanisms to enable data scientists to push newly trained deep-learning models to other users of the software. RIL-Contour and the AID methodology accelerate dataset annotation and model development by facilitating rapid collaboration between analysts, radiologists, and engineers.


Subjects
Datasets as Topic, Deep Learning, Diagnostic Imaging/methods, Computer-Assisted Image Processing/methods, Radiology Information Systems, Humans
7.
AJR Am J Roentgenol ; 211(6): 1184-1193, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30403527

ABSTRACT

OBJECTIVE: Deep learning has shown great promise for improving medical image classification tasks. However, knowing what aspects of an image the deep learning system uses or, in a manner of speaking, sees to make its prediction is difficult. MATERIALS AND METHODS: Within a radiologic imaging context, we investigated the utility of methods designed to identify features within images on which deep learning activates. In this study, we developed a classifier to identify contrast enhancement phase from whole-slice CT data. We then used this classifier as an easily interpretable system to explore the utility of class activation maps (CAMs), gradient-weighted class activation maps (Grad-CAMs), saliency maps, guided backpropagation maps, and the saliency activation map (SAM), a novel map reported here, to identify image features the model used when performing prediction. RESULTS: All techniques identified voxels within imaging that the classifier used. SAMs had greater specificity than did guided backpropagation maps, CAMs, and Grad-CAMs at identifying voxels within imaging that the model used to perform prediction. At shallow network layers, SAMs had greater specificity than Grad-CAMs at identifying input voxels that the layers within the model used to perform prediction. CONCLUSION: As a whole, voxel-level visualizations and visualizations of the imaging features that activate shallow network layers are powerful techniques to identify features that deep learning models use when performing prediction.
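
For readers unfamiliar with these visualizations, a minimal Grad-CAM sketch in PyTorch is shown below; it follows the standard formulation (a ReLU of the gradient-weighted sum of a convolutional layer's activations), not the specific implementation used in the study.

```python
import torch
import torch.nn.functional as F

def grad_cam(model: torch.nn.Module, target_layer: torch.nn.Module,
             image: torch.Tensor, class_idx: int) -> torch.Tensor:
    """Gradient-weighted class activation map for one image of shape (1, C, H, W)."""
    activations, gradients = {}, {}

    def fwd_hook(_, __, output):
        activations["value"] = output.detach()

    def bwd_hook(_, __, grad_output):
        gradients["value"] = grad_output[0].detach()

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        model.zero_grad()
        score = model(image)[0, class_idx]  # class logit for the chosen class
        score.backward()
    finally:
        h1.remove()
        h2.remove()

    # Channel weights = global average of the gradients; CAM = ReLU of the weighted sum.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
```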


Subjects
Deep Learning, Computer-Assisted Image Processing, X-Ray Computed Tomography, Algorithms, Humans, Sensitivity and Specificity
9.
Mayo Clin Proc ; 99(2): 260-270, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38309937

ABSTRACT

OBJECTIVE: To evaluate a machine learning (ML)-based model for pulmonary hypertension (PH) prediction using measurements and impressions made during echocardiography. METHODS: A total of 7853 consecutive patients with right-sided heart catheterization and transthoracic echocardiography performed within 1 week from January 1, 2012, through December 31, 2019, were included. The data were split into training (n=5024 [64%]), validation (n=1275 [16%]), and testing (n=1554 [20%]). A gradient boosting machine with enumerated grid search for optimization was selected to allow missing data in the boosted trees without imputation. The training target was PH, defined by right-sided heart catheterization as mean pulmonary artery pressure above 20 mm Hg; model performance was maximized relative to area under the receiver operating characteristic curve using 5-fold cross-validation. RESULTS: Cohort age was 64±14 years; 3467 (44%) were female, and 81% (6323/7853) had PH. The final trained model included 19 characteristics, measurements, or impressions derived from the echocardiogram. In the testing data, the model had high discrimination for the detection of PH (area under the receiver operating characteristic curve, 0.83; 95% CI, 0.80 to 0.85). The model's accuracy, sensitivity, positive predictive value, and negative predictive value were 82% (1267/1554), 88% (1098/1242), 89% (1098/1241), and 54% (169/313), respectively. CONCLUSION: By use of ML, PH could be predicted on the basis of clinical and echocardiographic variables, without tricuspid regurgitation velocity. Machine learning methods appear promising for identifying patients with low likelihood of PH.
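
A hedged sketch of this kind of pipeline follows: scikit-learn's HistGradientBoostingClassifier also accepts missing values natively, so it can stand in for the gradient boosting machine described above. This is an illustration under that assumption, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV

def train_ph_classifier(X_train: np.ndarray, y_train: np.ndarray,
                        X_test: np.ndarray, y_test: np.ndarray):
    """Gradient boosting that accepts NaNs in X directly (no imputation), tuned by grid search."""
    grid = GridSearchCV(
        HistGradientBoostingClassifier(),
        param_grid={"learning_rate": [0.05, 0.1], "max_depth": [3, 5, None]},
        scoring="roc_auc",
        cv=5,
    )
    grid.fit(X_train, y_train)
    auc = roc_auc_score(y_test, grid.predict_proba(X_test)[:, 1])
    return grid.best_estimator_, auc
```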


Subjects
Pulmonary Hypertension, Humans, Middle Aged, Aged, Pulmonary Hypertension/diagnostic imaging, Echocardiography/methods, Cardiac Catheterization/methods, ROC Curve, Machine Learning, Retrospective Studies
10.
J Am Heart Assoc ; 13(20): e032195, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39392139

ABSTRACT

BACKGROUND: We developed a simplified ABC/2-derived method to estimate total subarachnoid hemorrhage volume (SAHV) on noncontrast computed tomography in patients with aneurysmal SAH and compared the clinical and radiographic outcomes. METHODS AND RESULTS: In this retrospective observational cohort study, we analyzed 277 patients with SAH admitted to our Comprehensive Stroke Center between 2012 and 2022. We derived a mathematical model (model 1) by measuring SAH basal cisternal blood volume using an ABC/2-derived ellipsoid formula (A=width/thickness, B=length, C=vertical extension) on head noncontrast computed tomography in 5 major SAH cisternal compartments. We compared model 1 against a manual segmentation method (model 2) on noncontrast computed tomography. Data were analyzed using logistic regression analysis, t test, receiver operating characteristic curves, and area under the curve analysis. There was no significant difference in cisternal SAHV analysis between the 2 models (P=0.14). Mean SAHV by the simplified method was 7.0 mL (95% CI, 5.89-8.09) for good outcome and 16.6 mL (95% CI, 13.49-19.77) for poor outcome. Patients with delayed cerebral ischemia (DCI) had higher SAHV, with a cutoff value of 10 mL. CONCLUSIONS: Our simplified ABC/2-derived method to estimate SAHV is comparable to manual segmentation and can be performed in low-resource settings. Higher total SAHV was associated with worse outcomes and higher risk of DCI. A potential dose-response relationship was observed, with SAHV >10 mL predicting worse outcomes and higher risk of DCI.
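
The ABC/2 estimate itself is simple arithmetic; a minimal sketch summing ellipsoid volumes over the measured cisternal compartments is shown below (measurements in cm, so the result is in mL; the example values are hypothetical).

```python
def abc2_volume_ml(width_cm: float, length_cm: float, height_cm: float) -> float:
    """Ellipsoid blood volume estimate (mL) for one cisternal compartment: (A x B x C) / 2."""
    return (width_cm * length_cm * height_cm) / 2.0

def total_sahv_ml(compartments: list[tuple[float, float, float]]) -> float:
    """Sum the ABC/2 estimates over the measured cisternal compartments (A, B, C in cm)."""
    return sum(abc2_volume_ml(a, b, c) for a, b, c in compartments)

# Example with five hypothetical compartment measurements on noncontrast head CT:
# total = total_sahv_ml([(1.2, 3.0, 1.0), (0.8, 2.5, 1.1), (1.0, 2.0, 0.9),
#                        (0.7, 1.8, 1.0), (0.9, 2.2, 1.2)])
```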


Subjects
Brain Ischemia, Subarachnoid Hemorrhage, Humans, Subarachnoid Hemorrhage/diagnostic imaging, Subarachnoid Hemorrhage/etiology, Subarachnoid Hemorrhage/diagnosis, Female, Male, Retrospective Studies, Middle Aged, Brain Ischemia/etiology, Brain Ischemia/diagnostic imaging, Aged, Predictive Value of Tests, X-Ray Computed Tomography, Blood Volume, Blood Volume Determination/methods, Prognosis, Time Factors
11.
Article in English | MEDLINE | ID: mdl-38373180

ABSTRACT

BACKGROUND: Body composition can be accurately quantified from abdominal computed tomography (CT) exams and is a predictor for the development of aging-related conditions and for mortality. However, reference ranges for CT-derived body composition measures of obesity, sarcopenia, and bone loss have yet to be defined in the general population. METHODS: We identified a population-representative sample of 4900 persons aged 20 to 89 years who underwent an abdominal CT exam from 2010 to 2020. The sample was constructed using propensity score matching of an age- and sex-stratified sample of persons residing in the 27-county region of Southern Minnesota and Western Wisconsin. The matching included race, ethnicity, education level, region of residence, and the presence of 20 chronic conditions. We used a validated deep learning-based algorithm to calculate subcutaneous adipose tissue area, visceral adipose tissue area, skeletal muscle area, skeletal muscle density, vertebral bone area, and vertebral bone density from a CT abdominal section. RESULTS: We report CT-based body composition reference ranges on 4649 persons representative of our geographic region. Older age was associated with a decrease in skeletal muscle area and density, and an increase in visceral adiposity. All chronic conditions were associated with a statistically significant difference in at least one body composition biomarker. The presence of a chronic condition was generally associated with greater subcutaneous and visceral adiposity, and lower muscle density and vertebral bone density. CONCLUSIONS: We report reference ranges for CT-based body composition biomarkers in a population-representative cohort of 4649 persons by age, sex, body mass index, and chronic conditions.
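
A minimal sketch of how such reference ranges can be tabulated with pandas, assuming a cohort table with illustrative column names ('age', 'sex', and one biomarker column); this is not the study's code.

```python
import pandas as pd

def reference_ranges(cohort: pd.DataFrame, biomarker: str) -> pd.DataFrame:
    """Age- and sex-stratified percentile reference ranges for one CT body composition biomarker.

    Expects columns 'age', 'sex', and the named biomarker column (illustrative layout).
    """
    strata = cohort.assign(
        age_group=pd.cut(cohort["age"], bins=range(20, 100, 10), right=False)
    )
    return (strata.groupby(["sex", "age_group"], observed=True)[biomarker]
                  .quantile([0.05, 0.25, 0.50, 0.75, 0.95])
                  .unstack())
```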


Subjects
Body Composition, Sarcopenia, Humans, Reference Values, Skeletal Muscle, Sarcopenia/diagnostic imaging, Sarcopenia/epidemiology, Body Mass Index, Intra-Abdominal Fat, Biomarkers, Abdominal Obesity
12.
Mayo Clin Proc ; 99(6): 878-890, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38310501

ABSTRACT

OBJECTIVE: To determine whether body composition derived from medical imaging may be useful for assessing biologic age at the tissue level because people of the same chronologic age may vary with respect to their biologic age. METHODS: We identified an age- and sex-stratified cohort of 4900 persons with an abdominal computed tomography scan from January 1, 2010, to December 31, 2020, who were 20 to 89 years old and representative of the general population in Southeast Minnesota and West Central Wisconsin. We constructed a model for estimating tissue age that included 6 body composition biomarkers calculated from abdominal computed tomography using a previously validated deep learning model. RESULTS: Older tissue age was associated with intermediate subcutaneous fat area, higher visceral fat area, lower muscle area, lower muscle density, higher bone area, and lower bone density. A tissue age older than chronologic age was associated with chronic conditions that result in reduced physical fitness (including chronic obstructive pulmonary disease, arthritis, cardiovascular disease, and behavioral disorders). Furthermore, a tissue age older than chronologic age was associated with an increased risk of death (hazard ratio, 1.56; 95% CI, 1.33 to 1.84) that was independent of demographic characteristics, county of residence, education, body mass index, and baseline chronic conditions. CONCLUSION: Imaging-based body composition measures may be useful in understanding the biologic processes underlying accelerated aging.
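
The abstract does not give the functional form of the tissue-age model, so the sketch below only illustrates the general idea under a simplifying assumption: predict age from the body composition biomarkers and interpret the prediction minus chronologic age as a tissue-age gap. It is not the published model.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def tissue_age_gap(biomarkers: np.ndarray, chronologic_age: np.ndarray) -> np.ndarray:
    """Fit age ~ body composition biomarkers and return (predicted 'tissue age' - chronologic age).

    Positive values indicate tissue that appears older than chronologic age.
    Generic illustration only; the study's actual model form is not specified here.
    """
    model = LinearRegression().fit(biomarkers, chronologic_age)
    tissue_age = model.predict(biomarkers)
    return tissue_age - chronologic_age
```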


Subjects
Body Composition, X-Ray Computed Tomography, Humans, Male, Female, Aged, Middle Aged, Chronic Disease, Adult, Aged 80 and over, X-Ray Computed Tomography/methods, Biomarkers/analysis, Aging/physiology, Minnesota/epidemiology, Wisconsin/epidemiology, Young Adult, Skeletal Muscle/diagnostic imaging, Age Factors
13.
J Allergy Clin Immunol Pract ; 12(5): 1181-1191.e10, 2024 May.
Article in English | MEDLINE | ID: mdl-38242531

ABSTRACT

BACKGROUND: Using the reaction history in logistic regression and machine learning (ML) models to predict penicillin allergy has been reported based on non-US data. OBJECTIVE: We developed ML positive penicillin allergy testing prediction models from multisite US data. METHODS: Retrospective data from 4 US-based hospitals were grouped into 4 datasets: enriched training (1:3 case-control matched cohort), enriched testing, nonenriched internal testing, and nonenriched external testing. ML algorithms were used for model development. We determined area under the curve (AUC) and applied the Shapley Additive exPlanations (SHAP) framework to interpret risk drivers. RESULTS: Of 4777 patients (mean age 60 [standard deviation: 17] years; 68% women, 91% White, and 86% non-Hispanic) evaluated for penicillin allergy labels, 513 (11%) had positive penicillin allergy testing. Model input variables were frequently missing: immediate or delayed onset (71%), signs or symptoms (13%), and treatment (31%). The gradient-boosted model was the strongest model with an AUC of 0.67 (95% confidence interval [CI]: 0.57-0.77), which improved to 0.87 (95% CI: 0.73-1) when only cases with complete data were used. Top SHAP drivers for positive testing were reactions within the last year and reactions requiring medical attention; female sex and reaction of hives/urticaria were also positive drivers. CONCLUSIONS: An ML prediction model for positive penicillin allergy skin testing using US-based retrospective data did not achieve performance strong enough for acceptance and adoption. The optimal ML prediction model for positive penicillin allergy testing was driven by time since reaction, seeking medical attention, female sex, and hives/urticaria.
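
A hedged sketch of how SHAP-based risk drivers are typically ranked for a fitted tree-based model (an illustration of the general technique, not the study's code; model type and feature names are assumptions):

```python
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

def shap_risk_drivers(model: GradientBoostingClassifier, X: np.ndarray,
                      feature_names: list[str]) -> list[tuple[str, float]]:
    """Rank features by mean absolute SHAP value for a fitted tree-based classifier."""
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)
    mean_abs = np.abs(shap_values).mean(axis=0)
    return sorted(zip(feature_names, mean_abs), key=lambda kv: kv[1], reverse=True)
```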


Subjects
Drug Hypersensitivity, Machine Learning, Penicillins, Humans, Female, Penicillins/adverse effects, Male, Drug Hypersensitivity/epidemiology, Drug Hypersensitivity/diagnosis, Retrospective Studies, Middle Aged, United States/epidemiology, Aged, Adult, Anti-Bacterial Agents/adverse effects, Case-Control Studies, Skin Tests
14.
Eur Heart J Digit Health ; 4(3): 188-195, 2023 May.
Article in English | MEDLINE | ID: mdl-37265866

ABSTRACT

Aims: The current guidelines recommend aortic valve intervention in patients with severe aortic regurgitation (AR) with the onset of symptoms, left ventricular enlargement, or systolic dysfunction. Recent studies have suggested that we might be missing the window of early intervention in a significant number of patients by following the guidelines. Methods and results: The overarching goal was to determine if machine learning (ML)-based algorithms could be trained to identify patients at risk for death from AR independent of aortic valve replacement (AVR). Models were trained with five-fold cross-validation on a dataset of 1035 patients, and performance was reported on an independent dataset of 207 patients. Optimal predictive performance was observed with a conditional random survival forest model. A subset of 19/41 variables was selected for inclusion in the final model. Variable selection was performed with 10-fold cross-validation using a random survival forest model. The top variables included age, body surface area, body mass index, diastolic blood pressure, New York Heart Association class, AVR, comorbidities, ejection fraction, end-diastolic volume, and end-systolic dimension; relative variable importance was averaged across the five cross-validation splits in each repeat. The concordance index of the best-performing model for predicting survival was 0.84 at 1 year, 0.86 at 2 years, and 0.87 overall. Conclusion: Using common echocardiographic parameters and patient characteristics, we successfully trained multiple ML models to predict survival in patients with severe AR. This technique could be applied to identify high-risk patients who would benefit from early intervention, thereby improving patient outcomes.
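
As a rough analogue of this approach: the study used a conditional random survival forest, while the sketch below uses the standard random survival forest from scikit-survival and reports the concordance index on the training data. It is an assumption-laden illustration, not the authors' implementation.

```python
import pandas as pd
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

def fit_survival_forest(features: pd.DataFrame, event: pd.Series, time: pd.Series):
    """Fit a random survival forest and report Harrell's concordance index."""
    y = Surv.from_arrays(event=event.astype(bool), time=time)
    rsf = RandomSurvivalForest(n_estimators=500, min_samples_leaf=10, random_state=0)
    rsf.fit(features, y)
    return rsf, rsf.score(features, y)  # .score() returns the concordance index
```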

15.
Clin Nutr ; 41(8): 1676-1679, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35777106

ABSTRACT

BACKGROUND & AIMS: Reported associations between body composition parameters measured on computed tomography (CT) and severity of acute pancreatitis (AP) are conflicting, in part because these parameters vary considerably by sex and age. We previously developed normative body composition data in healthy subjects. Z-scores calculated from these normative data give age- and sex-adjusted body composition parameters. We studied the above association using this Z-score approach in a large cohort of patients with AP. METHODS: Patients admitted with AP between January 2014 and March 2018 who had CT scans within a week of admission were enrolled. Body composition data, including skeletal muscle (SM), subcutaneous adipose tissue (SAT), and visceral adipose tissue (VAT), were calculated from the CT scans using an automated deep learning algorithm. These values were converted to Z-scores and compared between mild, moderately severe, and severe AP as defined by the revised Atlanta criteria. RESULTS: Of 514 patients, 336 (65.4%) had mild AP, 130 (25.3%) moderately severe AP, and 48 (9.3%) severe AP. Patients with moderately severe AP had a significantly lower SM Z-score than those with mild AP (1.21 vs 1.73, p = 0.048), and patients with severe AP had a significantly lower SAT Z-score than those with mild AP (0.70 vs 1.29, p = 0.016). The VAT Z-score was not significantly different among the three groups (p = 0.76). CONCLUSION: Lower SM and SAT Z-scores were associated with moderately severe and severe AP, respectively. Future prospective studies in patients with AP using Z-scores may define the association between body composition and severity of AP and explain the inconsistencies reported in previous studies.
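
The Z-score adjustment itself is a one-line calculation once age- and sex-specific normative means and standard deviations are available; a minimal sketch with an illustrative table layout (not the study's data structure):

```python
import pandas as pd

def body_composition_z_score(value: float, age: int, sex: str,
                             normative: pd.DataFrame) -> float:
    """Age- and sex-adjusted Z-score for one body composition parameter.

    `normative` is assumed to be indexed by (sex, age) with columns 'mean' and 'sd'.
    """
    mean = normative.loc[(sex, age), "mean"]
    sd = normative.loc[(sex, age), "sd"]
    return (value - mean) / sd
```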


Subjects
Pancreatitis, Acute Disease, Body Composition, Humans, Obesity/complications, Pancreatitis/diagnostic imaging, Prospective Studies, X-Ray Computed Tomography/methods
16.
Med Phys ; 47(11): 5609-5618, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32740931

ABSTRACT

PURPOSE: Organ segmentation of computed tomography (CT) imaging is essential for radiotherapy treatment planning. Treatment planning requires segmentation not only of the affected tissue, but of nearby healthy organs-at-risk, which is laborious and time-consuming. We present a fully automated segmentation method based on the three-dimensional (3D) U-Net convolutional neural network (CNN) capable of whole abdomen and pelvis segmentation into 33 unique organ and tissue structures, including tissues that may be overlooked by other automated segmentation approaches such as adipose tissue, skeletal muscle, and connective tissue and vessels. Whole abdomen segmentation is capable of quantifying exposure beyond a handful of organs-at-risk to all tissues within the abdomen. METHODS: Sixty-six (66) CT examinations of 64 individuals were included in the training and validation sets and 18 CT examinations from 16 individuals were included in the test set. All pixels in each examination were segmented by image analysts (with physician correction) and assigned one of 33 labels. Segmentation was performed with a 3D U-Net variant architecture which included residual blocks, and model performance was quantified on 18 test cases. Human interobserver variability (using semiautomated segmentation) was also reported on two scans, and manual interobserver variability of three individuals was reported on one scan. Model performance was also compared to several of the best models reported in the literature for multiple organ segmentation. RESULTS: Dice coefficients for the 3D U-Net model ranged from 0.95 in the liver and 0.93 in the kidneys to 0.79 in the pancreas, 0.69 in the adrenals, and 0.51 in the renal arteries. Model accuracy is within 5% of human segmentation in eight of 19 organs and within 10% in 13 of 19 organs. CONCLUSIONS: The CNN approaches the accuracy of human tracers and, on certain complex organs, displays more consistent predictions. Fully automated deep learning-based segmentation of CT abdomen has the potential to improve both the speed and accuracy of radiotherapy dose prediction for organs-at-risk.
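
For context, a residual block in a 3D U-Net variant commonly looks like the PyTorch sketch below; the channel counts, normalization, and activation choices here are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    """3D convolutional residual block of the kind used in U-Net variants."""

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        self.conv1 = nn.Conv3d(in_channels, out_channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv3d(out_channels, out_channels, kernel_size=3, padding=1)
        self.norm1 = nn.InstanceNorm3d(out_channels)
        self.norm2 = nn.InstanceNorm3d(out_channels)
        self.act = nn.ReLU(inplace=True)
        # 1x1x1 projection so the skip connection matches the output channel count.
        self.skip = (nn.Identity() if in_channels == out_channels
                     else nn.Conv3d(in_channels, out_channels, kernel_size=1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.norm1(self.conv1(x)))
        out = self.norm2(self.conv2(out))
        return self.act(out + self.skip(x))
```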


Subjects
Abdomen, Neural Networks (Computer), Abdomen/diagnostic imaging, Humans, Computer-Assisted Image Processing, Organs at Risk, Pelvis/diagnostic imaging, X-Ray Computed Tomography
17.
Radiol Artif Intell ; 2(5): e190183, 2020 Sep.
Article in English | MEDLINE | ID: mdl-33937839

ABSTRACT

PURPOSE: To develop a deep learning model that segments intracranial structures on head CT scans. MATERIALS AND METHODS: In this retrospective study, a primary dataset containing 62 normal noncontrast head CT scans from 62 patients (mean age, 73 years; age range, 27-95 years) acquired between August and December 2018 was used for model development. Eleven intracranial structures were manually annotated on the axial oblique series. The dataset was split into 40 scans for training, 10 for validation, and 12 for testing. After initial training, eight model configurations were evaluated on the validation dataset and the highest performing model was evaluated on the test dataset. Interobserver variability was reported using multirater consensus labels obtained from the test dataset. To ensure that the model learned generalizable features, it was further evaluated on two secondary datasets containing 12 volumes with idiopathic normal pressure hydrocephalus (iNPH) and 30 normal volumes from a publicly available source. Statistical significance was determined using categorical linear regression with P < .05. RESULTS: Overall Dice coefficient on the primary test dataset was 0.84 ± 0.05 (standard deviation). Performance ranged from 0.96 ± 0.01 (brainstem and cerebrum) to 0.74 ± 0.06 (internal capsule). Dice coefficients were comparable to expert annotations and exceeded those of existing segmentation methods. The model remained robust on external CT scans and scans demonstrating ventricular enlargement. The use of within-network normalization and class weighting facilitated learning of underrepresented classes. CONCLUSION: Automated segmentation of CT neuroanatomy is feasible with a high degree of accuracy. The model generalized to external CT scans as well as scans demonstrating iNPH. Supplemental material is available for this article. © RSNA, 2020.
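
Class weighting of the kind mentioned above is often implemented as inverse-frequency weights passed to the loss; a minimal PyTorch sketch follows (an illustration, not necessarily the scheme used in the study).

```python
import torch
import torch.nn as nn

def inverse_frequency_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Per-class weights proportional to inverse voxel frequency, normalized to mean 1.

    `labels` is an integer tensor of class indices (e.g., a stack of annotation volumes).
    """
    counts = torch.bincount(labels.flatten(), minlength=num_classes).float().clamp(min=1)
    return counts.sum() / (num_classes * counts)

# Example usage (train_labels and num_classes are placeholders):
# criterion = nn.CrossEntropyLoss(weight=inverse_frequency_weights(train_labels, num_classes))
```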

18.
J Am Coll Radiol ; 16(9 Pt B): 1318-1328, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31492410

ABSTRACT

Ultrasound is the most commonly used imaging modality in clinical practice because it is a nonionizing, low-cost, and portable point-of-care imaging tool that provides real-time images. Artificial intelligence (AI)-powered ultrasound is becoming more mature and getting closer to routine clinical applications in recent times because of an increased need for efficient and objective acquisition and evaluation of ultrasound images. Because ultrasound images involve operator-, patient-, and scanner-dependent variations, the adaptation of classical machine learning methods to clinical applications becomes challenging. With their self-learning ability, deep-learning (DL) methods are able to harness exponentially growing graphics processing unit computing power to identify abstract and complex imaging features. This has given rise to tremendous opportunities such as providing robust and generalizable AI models for improving image acquisition, real-time assessment of image quality, objective diagnosis and detection of diseases, and optimizing ultrasound clinical workflow. In this report, the authors review current DL approaches and research directions in rapidly advancing ultrasound technology and present their outlook on future directions and trends for DL techniques to further improve diagnosis, reduce health care cost, and optimize ultrasound clinical workflow.


Subjects
Deep Learning/trends, Quality Improvement, Color Doppler Ultrasonography/methods, Workflow, Algorithms, Artificial Intelligence, Breast Neoplasms/diagnostic imaging, Female, Forecasting, Humans, Liver Neoplasms/diagnostic imaging, Male, Surveys and Questionnaires, Thyroid Neoplasms/diagnostic imaging, United States
19.
J Am Coll Radiol ; 15(3 Pt B): 521-526, 2018 Mar.
Article in English | MEDLINE | ID: mdl-29396120

ABSTRACT

Deep learning (DL) is a popular method that is used to perform many important tasks in radiology and medical imaging. Some forms of DL are able to accurately segment organs (essentially, trace the boundaries, enabling volume measurements or calculation of other properties). Other DL networks are able to predict important properties from regions of an image: for instance, whether something is malignant, molecular markers for tissue in a region, or even prognostic markers. DL is easier to train than traditional machine learning methods, but requires more data and much more care in analyzing results. It will automatically find the features of importance, but understanding what those features are can be a challenge. This article describes the basic concepts of DL systems, some of the traps that exist in building them, and how to identify those traps.


Subjects
Deep Learning, Radiology/methods, Computer-Assisted Diagnosis, Humans, Machine Learning
20.
Article in English | MEDLINE | ID: mdl-26067038

ABSTRACT

Tissues such as skeletal muscle and kidneys have well-defined structure that affects the measurements of mechanical properties. As an approach to characterize the material properties of these tissues, different groups have assumed that they are transversely isotropic (TI) and measure the shear wave velocity as it varies with angle with respect to the structural architecture of the organ. To refine measurements in these organs, it is desirable to have tissue-mimicking phantoms that exhibit similar anisotropic characteristics. Some approaches involve embedding fibers into a material matrix. However, if a homogeneous solid is under compression due to a static stress, an acoustoelastic effect can manifest that makes the measured wave velocities change with the compression stress. We propose to exploit this characteristic to demonstrate that stressed tissue-mimicking phantoms can be characterized as a TI material. We tested six phantoms made with different concentrations of gelatin and agar. Stress was applied by the weight of a water container centered on a plate placed on top of the phantom. A linear array transducer and a V-1 Verasonics system were used to induce and measure shear waves in the phantoms. The shear wave motion was measured using a compound plane wave imaging technique. Autocorrelation was applied to the received in-phase/quadrature data. The shear wave velocity, c, was estimated using a Radon transform method. The transducer was mounted on a rotating stage so that measurements were made every 10° over a range of 0° to 360°, where the stress was applied along the 0° to 180° direction. The shear moduli were estimated. A TI model was fit to the data and the fractional anisotropy was evaluated. This approach can be used to explore many configurations of transverse isotropy with the same phantom, simply by applying stress to the tissue-mimicking phantom.
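
For orientation, shear wave speed converts to shear modulus through the standard relation μ = ρc². The sketch below computes moduli along and across the stress direction and a simple along/across anisotropy ratio; it does not attempt the full TI model fit or the fractional anisotropy calculation reported in the paper, and the angle keys are illustrative.

```python
import numpy as np

def shear_modulus_kpa(speed_m_s: float, density_kg_m3: float = 1000.0) -> float:
    """Shear modulus mu = rho * c^2, reported in kPa (density default is an assumption)."""
    return density_kg_m3 * speed_m_s ** 2 / 1000.0

def anisotropy_ratio(speeds_by_angle: dict[float, float]) -> float:
    """Ratio of mean shear modulus along (0/180 deg) versus across (90/270 deg) the stress direction.

    Assumes the dictionary contains measurements at those angles (degrees -> m/s).
    """
    mu = {angle: shear_modulus_kpa(c) for angle, c in speeds_by_angle.items()}
    along = np.mean([mu[a] for a in (0.0, 180.0) if a in mu])
    across = np.mean([mu[a] for a in (90.0, 270.0) if a in mu])
    return float(along / across)
```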


Subjects
Elasticity Imaging Techniques/instrumentation, Biological Models, Imaging Phantoms, Agar/chemistry, Elasticity Imaging Techniques/methods, Gelatin/chemistry