Results 1 - 11 of 11
1.
bioRxiv ; 2024 May 14.
Article in English | MEDLINE | ID: mdl-38798566

ABSTRACT

Aortic structure and function impact cardiovascular health through multiple mechanisms. Aortic structural degeneration increases left ventricular afterload and pulse pressure and promotes target organ damage. Despite the impact of aortic structure on cardiovascular health, aortic 3D geometry has yet to be comprehensively assessed. Using a convolutional neural network (U-Net) combined with morphological operations, we quantified aortic 3D-geometric phenotypes (AGPs) from 53,612 participants in the UK Biobank and 8,066 participants in the Penn Medicine Biobank. AGPs reflective of structural aortic degeneration, characterized by arch unfolding, descending aortic lengthening, and luminal dilation, exhibited cross-sectional associations with hypertension and cardiac diseases and were predictive of new-onset hypertension, heart failure, cardiomyopathy, and atrial fibrillation. We identified 237 novel genetic loci associated with 3D-AGPs. Fibrillin-2 gene polymorphisms were identified as key determinants of aortic arch 3D structure. Mendelian randomization identified putative causal effects of aortic geometry on the risk of chronic kidney disease and stroke.
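A minimal sketch of how simple 3D geometric phenotypes (luminal volume, per-slice equivalent diameter, craniocaudal extent) could be read out of a binary aortic segmentation such as a U-Net output. The function name, voxel spacing, and synthetic cylindrical mask are illustrative assumptions, not the AGP pipeline used in the study.

```python
import numpy as np

def aortic_geometry(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Derive simple geometric phenotypes from a binary 3D aortic mask.

    mask    : array of shape (z, y, x), 1 inside the aortic lumen, 0 elsewhere.
    spacing : voxel size in mm along (z, y, x).
    """
    voxel_vol = float(np.prod(spacing))                   # mm^3 per voxel
    lumen_volume = mask.sum() * voxel_vol                 # total luminal volume

    # Per-slice cross-sectional area -> equivalent circular diameter (mm).
    pixel_area = spacing[1] * spacing[2]
    areas = mask.reshape(mask.shape[0], -1).sum(axis=1) * pixel_area
    eq_diameters = 2.0 * np.sqrt(areas / np.pi)

    # Crude length proxy: number of lumen-containing slices times slice thickness.
    extent = (areas > 0).sum() * spacing[0]

    return {
        "lumen_volume_mm3": lumen_volume,
        "max_equivalent_diameter_mm": float(eq_diameters.max(initial=0.0)),
        "craniocaudal_extent_mm": float(extent),
    }

# Example on a synthetic cylindrical "aorta"
mask = np.zeros((40, 64, 64), dtype=np.uint8)
yy, xx = np.mgrid[:64, :64]
mask[:, ((yy - 32) ** 2 + (xx - 32) ** 2) <= 10 ** 2] = 1
print(aortic_geometry(mask, spacing=(2.0, 1.0, 1.0)))
```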

2.
Med Phys ; 51(3): 1997-2006, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37523254

ABSTRACT

PURPOSE: To clarify the causal relationships between factors contributing to the postoperative survival of patients with esophageal cancer. METHODS: A cohort of 195 patients who underwent surgery for esophageal cancer between 2008 and 2021 was used in the study. All patients had preoperative chest computed tomography (CT) and positron emission tomography-CT (PET-CT) scans prior to receiving any treatment. From these images, high-throughput, quantitative radiomic features, tumor features, and various body composition features were automatically extracted. Causal relationships among these image features, patient demographics, and other clinicopathological variables were analyzed and visualized using a novel score-based directed graph called "Grouped Greedy Equivalence Search" (GGES) while taking prior knowledge into consideration. After supplementing and screening the causal variables, intervention do-calculus adjustment (IDA) scores were calculated to determine the degree of impact of each variable on survival. Based on this IDA score, a GGES prediction formula was generated. Ten-fold cross-validation was used to assess the performance of the models. The prediction results were evaluated using the R-squared score (R2 score). RESULTS: The final causal graphical model was formed by two PET-based image variables, ten body composition variables, four pathological variables, four demographic variables, two tumor variables, and one radiological variable (Percentile 10). Intramuscular fat mass was found to have the greatest impact on overall survival (months). Percentile 10 and overall TNM (T: tumor, N: nodes, M: metastasis) stage were identified as direct causes of overall survival (months). The GGES causal model outperformed GES in regression prediction (R2 = 0.251) (p < 0.05) and was able to avoid unreasonable causal links that may contradict common sense. CONCLUSION: The GGES causal model can provide a reliable and straightforward representation of the intricate causal relationships among the variables that impact the postoperative survival of patients with esophageal cancer.
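A toy illustration (hypothetical variables and coefficients) of the adjustment idea behind IDA-style effect estimates: for a given graph, the effect of an exposure on the outcome is estimated by regressing the outcome on the exposure plus the exposure's parents. This is not the GGES algorithm itself, only the per-graph effect-estimation step it builds on.

```python
import numpy as np

def linear_causal_effect(data, exposure, outcome, parents):
    """Estimate the total causal effect of `exposure` on `outcome` by regressing
    the outcome on the exposure and the exposure's parents in the assumed graph."""
    cols = [exposure] + list(parents)
    X = np.column_stack([np.ones(len(data))] + [data[:, c] for c in cols])
    y = data[:, outcome]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(beta[1])  # coefficient on the exposure

# Toy example: Z -> X -> Y and Z -> Y; the true effect of X on Y is 2.0.
rng = np.random.default_rng(0)
Z = rng.normal(size=5000)
X = 0.8 * Z + rng.normal(size=5000)
Y = 2.0 * X + 1.5 * Z + rng.normal(size=5000)
data = np.column_stack([X, Y, Z])
print(linear_causal_effect(data, exposure=0, outcome=1, parents=[2]))  # ~2.0
```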


Subject(s)
Esophageal Neoplasms , Positron Emission Tomography Computed Tomography , Humans , Positron Emission Tomography Computed Tomography/methods , Fluorodeoxyglucose F18 , Esophageal Neoplasms/diagnostic imaging , Esophageal Neoplasms/surgery , Positron-Emission Tomography , Tomography, X-Ray Computed , Retrospective Studies
3.
J Med Imaging (Bellingham) ; 10(5): 051809, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37361550

ABSTRACT

Purpose: To validate the effectiveness of an approach called batch-balanced focal loss (BBFL) in enhancing convolutional neural network (CNN) classification performance on imbalanced datasets. Materials and Methods: BBFL combines two strategies to tackle class imbalance: (1) batch-balancing to equalize model learning of class samples and (2) focal loss to add hard-sample importance to the learning gradient. BBFL was validated on two imbalanced fundus image datasets: a binary retinal nerve fiber layer defect (RNFLD) dataset (n=7,258) and a multiclass glaucoma dataset (n=7,873). BBFL was compared to several imbalanced learning techniques, including random oversampling (ROS), cost-sensitive learning, and thresholding, based on three state-of-the-art CNNs. Accuracy, F1-score, and the area under the receiver operating characteristic curve (AUC) were used as the performance metrics for binary classification. Mean accuracy and mean F1-score were used for multiclass classification. Confusion matrices, t-distributed stochastic neighbor embedding plots, and GradCAM were used for the visual assessment of performance. Results: In binary classification of RNFLD, BBFL with InceptionV3 (93.0% accuracy, 84.7% F1, 0.971 AUC) outperformed ROS (92.6% accuracy, 83.7% F1, 0.964 AUC), cost-sensitive learning (92.5% accuracy, 83.8% F1, 0.962 AUC), thresholding (91.9% accuracy, 83.0% F1, 0.962 AUC), and the other compared methods. In multiclass classification of glaucoma, BBFL with MobileNetV2 (79.7% accuracy, 69.6% average F1 score) outperformed ROS (76.8% accuracy, 64.7% F1), cost-sensitive learning (78.3% accuracy, 67.8% F1), and random undersampling (76.5% accuracy, 66.5% F1). Conclusion: The BBFL-based learning method can improve the performance of a CNN model in both binary and multiclass disease classification when the data are imbalanced.
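A minimal PyTorch sketch of the two ingredients named above: a binary focal loss and class-balanced batches via a weighted sampler. The toy data, linear model, and hyperparameter values (alpha, gamma) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss: down-weights easy examples so hard ones dominate the gradient."""
    bce = F.binary_cross_entropy_with_logits(logits, targets.float(), reduction="none")
    p_t = torch.exp(-bce)                                 # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Batch balancing: oversample the minority class so each batch is roughly class-balanced.
labels = torch.tensor([0] * 900 + [1] * 100)              # imbalanced toy labels
features = torch.randn(1000, 16)
class_counts = torch.bincount(labels).float()
sample_weights = 1.0 / class_counts[labels]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(labels), replacement=True)
loader = DataLoader(TensorDataset(features, labels), batch_size=32, sampler=sampler)

model = torch.nn.Linear(16, 1)
for x, y in loader:
    loss = focal_loss(model(x).squeeze(1), y)
    loss.backward()
    break  # one illustrative step
```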

4.
Saf Sci ; 164, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37206436

ABSTRACT

Objective: To investigate the feasibility of predicting the risk of underground coal mine operations using data from the National Institute for Occupational Safety and Health (NIOSH). Methods: A total of 22,068 data entries from 3,982 unique underground coal mines from 1990 to 2020 were extracted from the NIOSH mine employment database. We defined the risk index of a mine as the ratio between the number of injuries and the size of the mine. Several machine learning models were used to predict the risk of a mine based on its employment demographics (i.e., number of underground employees, number of surface employees, and coal production). Based on these models, a mine was classified into a "low-risk" or "high-risk" category and assigned a fuzzy risk index. Risk probabilities were then computed to generate risk profiles and identify mines with potential hazards. Results: NIOSH mine demographic features yielded a prediction performance with an AUC of 0.724 (95% CI: 0.717-0.731) based on mine data from the last 31 years and an AUC of 0.738 (95% CI: 0.726-0.749) on mine data from the last 16 years. The fuzzy risk score shows that risk is greatest in mines with an average of 621 underground employees and a production of 4,210,150 tons. Risk peaks at a production-to-employee ratio of 16,342.18 tons/employee. Conclusion: It is possible to predict the risk of underground coal mines based on their employee demographics; optimizing the allocation and distribution of employees in coal mines can help minimize the risk of accidents and injuries.
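A small scikit-learn sketch of the workflow described above: compute a risk index (injuries normalized by mine size), dichotomize it into low/high risk, fit a classifier on employment features, and use the predicted probability as a continuous ("fuzzy") risk score. The synthetic data and the median split are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
underground = rng.integers(5, 800, n)
surface = rng.integers(0, 200, n)
production = rng.integers(1_000, 5_000_000, n)            # tons
injuries = rng.poisson(0.002 * underground + 1e-7 * production)

# Risk index: injuries normalized by mine size; split at the median into low/high risk.
risk_index = injuries / (underground + surface + 1)
high_risk = (risk_index > np.median(risk_index)).astype(int)

X = np.column_stack([underground, surface, production])
X_tr, X_te, y_tr, y_te = train_test_split(X, high_risk, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# predict_proba serves as a continuous ("fuzzy") risk score per mine.
fuzzy_risk = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, fuzzy_risk))
```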

5.
Am J Respir Cell Mol Biol ; 69(2): 126-134, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37236629

ABSTRACT

Chord length is an indirect measure of alveolar size and a critical endpoint in animal models of chronic obstructive pulmonary disease (COPD). In assessing chord length, the lumens of nonalveolar structures are eliminated from measurement by various methods, including manual masking. However, manual masking is resource intensive and can introduce variability and bias. We created Deep-Masker (available at http://47.93.0.75:8110/login), a fully automated deep learning-based tool that masks murine lung images and assesses chord length, to facilitate mechanistic and therapeutic discovery in COPD. We trained the deep learning algorithm for Deep-Masker using 1,217 images from 137 mice from 12 strains exposed to room air or cigarette smoke for 6 months. We validated this algorithm against manual masking. Deep-Masker demonstrated high accuracy, with an average difference in chord length compared with manual masking of -0.3 ± 1.4% (rs = 0.99) for room-air-exposed mice and 0.7 ± 1.9% (rs = 0.99) for cigarette-smoke-exposed mice. The difference between Deep-Masker and manually masked images for the change in chord length due to cigarette smoke exposure was 6.0 ± 9.2% (rs = 0.95). These values exceed published estimates of interobserver variability for manual masking (rs = 0.65) and the accuracy of published algorithms by a significant margin. We validated the performance of Deep-Masker using an independent set of images. Deep-Masker can be an accurate, precise, fully automated method to standardize chord length measurement in murine models of lung disease.
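For orientation, a minimal NumPy sketch of how mean chord length can be computed from a binary airspace image once nonalveolar lumens have been masked out: measure the lengths of contiguous airspace runs along horizontal test lines. The function name, pixel size, and toy image are assumptions; the study's measurement protocol may differ in detail.

```python
import numpy as np

def mean_chord_length(airspace: np.ndarray, pixel_size_um: float = 1.0) -> float:
    """Mean chord length from a binary airspace image (1 = airspace, 0 = tissue),
    using horizontal test lines: average length of contiguous airspace runs."""
    chords = []
    for row in airspace:
        padded = np.concatenate(([0], row.astype(np.int8), [0]))  # pad with tissue
        diff = np.diff(padded)
        starts = np.flatnonzero(diff == 1)                         # tissue -> airspace
        ends = np.flatnonzero(diff == -1)                          # airspace -> tissue
        chords.extend(ends - starts)
    return float(np.mean(chords) * pixel_size_um) if chords else 0.0

# Toy image: alternating airspace/tissue stripes with 10-pixel airspace runs.
img = np.tile(np.array([1] * 10 + [0] * 3), (5, 8))
print(mean_chord_length(img, pixel_size_um=2.0))   # 20.0 um
```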


Subject(s)
Deep Learning , Pulmonary Disease, Chronic Obstructive , Animals , Mice , Lung , Pulmonary Disease, Chronic Obstructive/diagnostic imaging
6.
Lung Cancer ; 179: 107189, 2023 May.
Article in English | MEDLINE | ID: mdl-37058786

ABSTRACT

OBJECTIVES: To evaluate the impact of body composition derived from computed tomography (CT) scans on postoperative lung cancer recurrence. METHODS: We created a retrospective cohort of 363 lung cancer patients who underwent lung resections and had verified recurrence, death, or at least 5-year follow-up without either event. Five key body tissues and ten tumor features were automatically segmented and quantified based on preoperative whole-body CT scans (acquired as part of a PET-CT scan) and chest CT scans, respectively. Time-to-event analysis accounting for the competing event of death was performed to analyze the impact of body composition, tumor features, clinical information, and pathological features on lung cancer recurrence after surgery. The hazard ratio (HR) of normalized factors was used to assess individual significance univariately and in the combined models. Five-fold cross-validated time-dependent receiver operating characteristic (ROC) analysis, with an emphasis on the area under the 3-year ROC curve (AUC), was used to characterize the ability to predict lung cancer recurrence. RESULTS: Body tissues that showed a standalone potential to predict lung cancer recurrence include visceral adipose tissue (VAT) volume (HR = 0.88, p = 0.047), subcutaneous adipose tissue (SAT) density (HR = 1.14, p = 0.034), intermuscular adipose tissue (IMAT) volume (HR = 0.83, p = 0.002), muscle density (HR = 1.27, p < 0.001), and total fat volume (HR = 0.89, p = 0.050). The CT-derived muscular and tumor features significantly contributed to a model including clinicopathological factors, resulting in an AUC of 0.78 (95% CI: 0.75-0.83) for predicting recurrence at 3 years. CONCLUSIONS: Body composition features (e.g., muscle density, or muscle and intermuscular adipose tissue volumes) can improve the prediction of recurrence when combined with clinicopathological factors.
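As a simplified sketch of the hazard-ratio screening step, the lifelines snippet below fits a Cox model of time to recurrence on standardized body-composition features using synthetic data. It treats death as censoring rather than fitting a formal competing-risks model, and all variable names and values are placeholders.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 363
df = pd.DataFrame({
    "muscle_density": rng.normal(0, 1, n),   # standardized body-composition features
    "imat_volume":    rng.normal(0, 1, n),
    "vat_volume":     rng.normal(0, 1, n),
    "time_months":    rng.exponential(36, n),
    "recurrence":     rng.integers(0, 2, n), # 1 = recurrence; death treated as censoring here
})

# Cause-specific Cox model: exp(coef) is the hazard ratio per standardized feature.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="recurrence")
print(cph.summary[["exp(coef)", "p"]])
```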


Subject(s)
Lung Neoplasms , Humans , Lung Neoplasms/pathology , Retrospective Studies , Positron Emission Tomography Computed Tomography , Neoplasm Recurrence, Local , Lung/pathology , Body Composition/physiology , Tomography, X-Ray Computed/methods
7.
J Clin Med ; 12(6), 2023 Mar 08.
Article in English | MEDLINE | ID: mdl-36983109

ABSTRACT

BACKGROUND: Body composition can be accurately quantified based on computed tomography (CT) and typically reflects an individual's overall health status. However, there is a dearth of research examining the relationship between body composition and survival following esophagectomy. METHODS: We created a cohort consisting of 183 patients who underwent esophagectomy for esophageal cancer without neoadjuvant therapy. The cohort included preoperative PET-CT scans, along with pathologic and clinical data, which were collected prospectively. Radiomic, tumor, PET, and body composition features were automatically extracted from the images. Cox regression models were utilized to identify variables associated with survival. Logistic regression and machine learning models were developed to predict one-, three-, and five-year survival rates. Model performance was evaluated based on the area under the receiver operating characteristic curve (ROC/AUC). To test the statistical significance of the impact of body composition on survival, body composition features were excluded from the best-performing models, and the DeLong test was used. RESULTS: The one-year survival model contained 10 variables, including three body composition variables (bone mass, bone density, and visceral adipose tissue (VAT) density), and demonstrated an AUC of 0.817 (95% CI: 0.738-0.897). The three-year survival model incorporated 14 variables, including three body composition variables (intermuscular adipose tissue (IMAT) volume, IMAT mass, and bone mass), with an AUC of 0.693 (95% CI: 0.594-0.792). The five-year survival model included 10 variables, of which two were body composition variables (intramuscular adipose tissue (IMAT) volume and visceral adipose tissue (VAT) mass), with an AUC of 0.861 (95% CI: 0.783-0.938). The one- and five-year survival models exhibited significantly inferior performance when body composition features were not incorporated. CONCLUSIONS: Body composition features derived from preoperative CT scans should be considered when predicting survival following esophagectomy.
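A scikit-learn sketch of the core comparison described above: cross-validated logistic regression AUC for an n-year survival endpoint with and without body-composition features. The feature names and random data are placeholders, and the formal DeLong comparison of the two AUCs would require a separate implementation (it is not part of scikit-learn).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(7)
n = 183
X_clinical = rng.normal(size=(n, 5))      # hypothetical clinicopathological features
X_bodycomp = rng.normal(size=(n, 3))      # e.g., bone mass, bone density, VAT density
y_1yr = rng.integers(0, 2, n)             # 1 = alive at one year (toy labels)

def cv_auc(X, y):
    probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                              cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, probs)

auc_full = cv_auc(np.hstack([X_clinical, X_bodycomp]), y_1yr)
auc_reduced = cv_auc(X_clinical, y_1yr)
print(f"with body composition: {auc_full:.3f}  without: {auc_reduced:.3f}")
```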

8.
Med Phys ; 50(1): 449-464, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36184848

ABSTRACT

OBJECTIVE: To develop and validate a novel deep learning architecture to classify retinal vein occlusion (RVO) on color fundus photographs (CFPs) and reveal the image features contributing to the classification. METHODS: The neural understanding network (NUN) is formed by two components: (1) convolutional neural network (CNN)-based feature extraction and (2) graph neural network (GNN)-based feature understanding. The CNN-based image features were transformed into a graph representation to encode and visualize long-range feature interactions and to identify the image regions that significantly contributed to the classification decision. A total of 7062 CFPs were classified into three categories: (1) no vein occlusion ("normal"), (2) central RVO, and (3) branch RVO. The area under the receiver operating characteristic (ROC) curve (AUC) was used as the metric to assess the performance of the trained classification models. RESULTS: The AUC, accuracy, sensitivity, and specificity for NUN to classify CFPs as normal, central occlusion, or branch occlusion were 0.975 (± 0.003), 0.911 (± 0.007), 0.983 (± 0.010), and 0.803 (± 0.005), respectively, which outperformed available classical CNN models. CONCLUSION: The NUN architecture can provide better classification performance and a straightforward visualization of the results compared to CNNs.
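A toy PyTorch sketch of the general two-stage idea, not the published NUN architecture: CNN feature-map cells become graph nodes, a similarity-weighted adjacency mixes long-range information in one message-passing step, and pooled node features feed a classifier. Layer sizes and the similarity-based adjacency are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TinyCNNGraphClassifier(nn.Module):
    """CNN feature extraction followed by one round of graph-style message passing."""

    def __init__(self, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.cnn(x)                               # (B, C, H, W)
        B, C, H, W = fmap.shape
        nodes = fmap.flatten(2).transpose(1, 2)          # (B, H*W, C): one node per cell
        sim = torch.softmax(nodes @ nodes.transpose(1, 2) / C ** 0.5, dim=-1)  # soft adjacency
        nodes = nodes + sim @ nodes                      # one message-passing step
        return self.head(nodes.mean(dim=1))              # pool nodes -> class logits

model = TinyCNNGraphClassifier()
print(model(torch.randn(2, 3, 64, 64)).shape)            # torch.Size([2, 3])
```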


Subject(s)
Nuns , Retinal Vein Occlusion , Humans , Retinal Vein Occlusion/diagnostic imaging , Neural Networks, Computer , Fundus Oculi , Diagnostic Techniques, Ophthalmological
9.
Pattern Recognit ; 128, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35528144

ABSTRACT

Objective: To develop and validate a novel convolutional neural network (CNN) termed "Super U-Net" for medical image segmentation. Methods: Super U-Net integrates a dynamic receptive field module and a fusion upsampling module into the classical U-Net architecture. The model was developed and tested to segment retinal vessels, gastrointestinal (GI) polyps, and skin lesions on several image types (i.e., fundus images, endoscopic images, and dermoscopic images). We also trained and tested the traditional U-Net architecture, seven U-Net variants, and two non-U-Net segmentation architectures. K-fold cross-validation was used to evaluate performance. The performance metrics included the Dice similarity coefficient (DSC), accuracy, positive predictive value (PPV), and sensitivity. Results: Super U-Net achieved average DSCs of 0.808±0.021, 0.752±0.019, 0.804±0.239, and 0.877±0.135 for segmenting retinal vessels, pediatric retinal vessels, GI polyps, and skin lesions, respectively. Super U-Net consistently outperformed U-Net, the seven U-Net variants, and the two non-U-Net segmentation architectures (p < 0.05). Conclusion: Dynamic receptive fields and fusion upsampling can significantly improve image segmentation performance.
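One plausible reading of a "dynamic receptive field" block, sketched in PyTorch as parallel dilated 3x3 convolutions blended by input-dependent weights. The abstract does not specify the module's internals, so the structure, dilation rates, and gating below are assumptions rather than the published design.

```python
import torch
import torch.nn as nn

class DynamicReceptiveField(nn.Module):
    """Parallel dilated convolutions mixed by a learned, input-dependent gate."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d) for d in dilations
        )
        self.gate = nn.Sequential(               # predicts one blending weight per branch
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, len(dilations), 1),
            nn.Softmax(dim=1),
        )

    def forward(self, x):
        weights = self.gate(x)                   # (B, n_branches, 1, 1)
        return sum(w.unsqueeze(1) * b(x)
                   for w, b in zip(weights.unbind(dim=1), self.branches))

block = DynamicReceptiveField(32)
print(block(torch.randn(1, 32, 64, 64)).shape)   # torch.Size([1, 32, 64, 64])
```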

10.
Med Image Anal ; 77: 102367, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35066393

ABSTRACT

We present a novel integrative computerized solution to automatically identify and differentiate pulmonary arteries and veins depicted on chest computed tomography (CT) without iodinated contrast agents. We first identified the central extrapulmonary arteries and veins using a convolutional neural network (CNN) model. Then, a computational differential geometry method was used to automatically identify the high-density tubular structures in the lungs, which we believe are the intrapulmonary vessels. Beginning with the extrapulmonary arteries and veins, we progressively traced the intrapulmonary vessels by following their skeletons and differentiated them into arteries and veins. Instead of manually labeling the numerous arteries and veins in the lungs for machine learning, this integrative strategy limits the manual effort to the large extrapulmonary vessels only. We used a dataset consisting of 120 chest CT scans acquired from different subjects using various protocols to develop, train, and test the algorithms. Our experiments on an independent test set (n = 15) showed promising performance. The computer algorithm achieved a sensitivity of ∼98% in labeling the pulmonary artery and vein branches when compared with a human expert's results, demonstrating the feasibility of our computerized solution in pulmonary artery/vein labeling.
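A compact NumPy sketch of the tracing idea: artery/vein labels assigned to seeded extrapulmonary vessels are flooded through the connected intrapulmonary vessel mask with a breadth-first search over 26-connected voxels. The toy volume and seed placement are assumptions; the paper's skeleton-following method is more sophisticated.

```python
from collections import deque
import numpy as np

def propagate_vessel_labels(vessel_mask: np.ndarray, seeds: np.ndarray) -> np.ndarray:
    """Propagate artery (1) / vein (2) labels from seeded voxels through a
    connected vessel mask using a breadth-first flood (26-connectivity)."""
    labels = seeds.copy()
    queue = deque(zip(*np.nonzero(seeds)))
    offsets = [(dz, dy, dx) for dz in (-1, 0, 1) for dy in (-1, 0, 1)
               for dx in (-1, 0, 1) if (dz, dy, dx) != (0, 0, 0)]
    shape = vessel_mask.shape
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in offsets:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < shape[0] and 0 <= ny < shape[1] and 0 <= nx < shape[2]
                    and vessel_mask[nz, ny, nx] and labels[nz, ny, nx] == 0):
                labels[nz, ny, nx] = labels[z, y, x]
                queue.append((nz, ny, nx))
    return labels

# Toy volume: two straight vessels, each seeded with its label at one end.
vol = np.zeros((3, 5, 20), dtype=bool)
vol[1, 1, :] = vol[1, 3, :] = True
seeds = np.zeros_like(vol, dtype=np.uint8)
seeds[1, 1, 0], seeds[1, 3, 0] = 1, 2                        # artery seed, vein seed
out = propagate_vessel_labels(vol, seeds)
print(np.unique(out[1, 1, :]), np.unique(out[1, 3, :]))      # [1] [2]
```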


Subject(s)
Pulmonary Artery , Tomography, X-Ray Computed , Algorithms , Humans , Neural Networks, Computer , Pulmonary Artery/diagnostic imaging , Thorax , Tomography, X-Ray Computed/methods
11.
Med Phys ; 48(10): 6237-6246, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34382221

ABSTRACT

PURPOSE: To investigate the relationship between macrovasculature features and the standardized uptake value (SUV) of positron emission tomography (PET), which is a surrogate for the metabolic activity of a lung tumor. METHODS: We retrospectively analyzed a cohort of 90 lung cancer patients who had both chest CT and PET-CT examinations before receiving cancer treatment. The SUVs recorded in the medical reports were used. We quantified three macrovasculature features depicted on CT images (i.e., vessel number, vessel volume, and vessel tortuosity) and several tumor features (i.e., volume, maximum diameter, mean diameter, surface area, and density). Tumor size (e.g., volume) was used as a covariate to adjust for possible confounding factors. Backward stepwise multiple regression analysis was performed to develop a model for predicting PET SUV from the relevant image features. The Bonferroni correction was used for multiple comparisons. RESULTS: PET SUV was positively correlated with vessel volume (R = 0.44, p < 0.001) and vessel number (R = 0.44, p < 0.001) but not with vessel tortuosity (R = 0.124, p > 0.05). After adjusting for tumor size, PET SUV was significantly correlated with vessel tortuosity (R = 0.299, p = 0.004) and vessel number (R = 0.224, p = 0.035), but only marginally correlated with vessel volume (R = 0.187, p = 0.079). The multiple regression model achieved an R-squared of 0.391 and an adjusted R-squared of 0.355 (p < 0.001). CONCLUSIONS: Our investigations demonstrate a potential relationship between macrovasculature and PET SUV and suggest the possibility of inferring the metabolic activity of a lung tumor from chest CT images.
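A statsmodels sketch of backward stepwise multiple regression as described above: repeatedly drop the least significant predictor until all remaining predictors fall below a p-value threshold. The feature names, threshold, and synthetic data are illustrative assumptions, not the study's actual variables or selection criteria.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(X: pd.DataFrame, y: pd.Series, alpha: float = 0.05):
    """Drop the predictor with the largest p-value one at a time until every
    remaining predictor has p < alpha; return the final OLS fit."""
    predictors = list(X.columns)
    while predictors:
        model = sm.OLS(y, sm.add_constant(X[predictors])).fit()
        pvals = model.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return model
        predictors.remove(worst)
    return sm.OLS(y, np.ones((len(y), 1))).fit()           # intercept-only fallback

rng = np.random.default_rng(3)
n = 90
X = pd.DataFrame({
    "vessel_volume": rng.normal(size=n),
    "vessel_number": rng.normal(size=n),
    "vessel_tortuosity": rng.normal(size=n),
    "tumor_volume": rng.normal(size=n),
})
suv = 2.0 + 0.6 * X["vessel_number"] + 0.4 * X["vessel_tortuosity"] + rng.normal(scale=0.5, size=n)
fit = backward_stepwise(X, suv)
print(fit.params.index.tolist(), round(fit.rsquared_adj, 3))
```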


Subject(s)
Lung Neoplasms , Positron Emission Tomography Computed Tomography , Fluorodeoxyglucose F18 , Humans , Lung Neoplasms/diagnostic imaging , Positron-Emission Tomography , Retrospective Studies