Results 1 - 4 of 4
1.
Comput Methods Programs Biomed ; 256: 108382, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39213898

ABSTRACT

OBJECTIVE: In patients with diabetes mellitus, hyperuricemia may lead to the development of diabetic complications, including macrovascular and microvascular dysfunction. However, blood uric acid levels in diabetic patients are obtained by sampling peripheral blood, an invasive procedure that is not conducive to routine monitoring. We therefore developed a deep learning algorithm to noninvasively detect hyperuricemia from retinal photographs and metadata of patients with diabetes, and evaluated its performance in multiethnic populations and different subgroups. MATERIALS AND METHODS: Because blood uric acid metabolism is directly related to the estimated glomerular filtration rate (eGFR), we first performed a regression task for the eGFR value before the classification task for hyperuricemia, and reintroduced the regressed eGFR values into the baseline information. We trained 3 deep learning models: (1) a metadata model adjusted for sex, age, body mass index, duration of diabetes, HbA1c, systolic blood pressure, and diastolic blood pressure; (2) an image model based on fundus photographs; and (3) a hybrid model combining the image and metadata models. Data from the Shanghai General Hospital Diabetes Management Center (ShDMC) were used to develop (6091 participants with diabetes) and internally validate (using 5-fold cross-validation) the models. External testing was performed on an independent dataset (UK Biobank) of 9327 participants with diabetes. RESULTS: For the eGFR regression task on the ShDMC dataset, the coefficient of determination (R2) was 0.684±0.07 (95 % CI) for the image model, 0.501±0.04 for the metadata model, and 0.727±0.002 for the hybrid model. On the external UK Biobank dataset, R2 was 0.647±0.06 for the image model, 0.627±0.03 for the metadata model, and 0.697±0.07 for the hybrid model. Our method was demonstrably superior to previous methods.
For the classification of hyperuricemia, in ShDMC validation the area under the curve (AUC) was 0.86±0.013 for the image model, 0.86±0.013 for the metadata model, and 0.92±0.026 for the hybrid model. Estimates on UK Biobank were 0.82±0.017 for the image model, 0.79±0.024 for the metadata model, and 0.89±0.032 for the hybrid model. CONCLUSION: A deep learning algorithm using fundus photographs has potential as a noninvasive screening adjunct for hyperuricemia among individuals with diabetes, and combining it with patient metadata enables higher screening accuracy. Visualization showed that the network identifying hyperuricemia focuses mainly on the optic disc region of the fundus.
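The two-stage design described in the abstract (regress eGFR first, then reinject the regressed value alongside the image and metadata features for hyperuricemia classification) can be sketched as a toy forward pass. The feature dimensions, random weights, and single hidden layer are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical sizes: a 64-d fundus-image embedding and 7 metadata fields
# (sex, age, BMI, diabetes duration, HbA1c, systolic BP, diastolic BP).
IMG_DIM, META_DIM, HID = 64, 7, 16

# Random weights stand in for trained parameters.
W_egfr = rng.normal(scale=0.1, size=(IMG_DIM + META_DIM, 1))       # regression head
W_hid = rng.normal(scale=0.1, size=(IMG_DIM + META_DIM + 1, HID))  # shared layer
W_cls = rng.normal(scale=0.1, size=(HID, 1))                       # classification head

def hybrid_forward(img_feat, meta):
    """Stage 1: regress eGFR from image + metadata features.
    Stage 2: append the regressed eGFR to the inputs, then classify."""
    x = np.concatenate([img_feat, meta])
    egfr_pred = x @ W_egfr                # predicted eGFR value, shape (1,)
    x2 = np.concatenate([x, egfr_pred])   # reinject eGFR into the feature vector
    h = np.tanh(x2 @ W_hid)
    p = sigmoid(h @ W_cls)                # probability of hyperuricemia
    return egfr_pred.item(), p.item()

egfr, p_hu = hybrid_forward(rng.normal(size=IMG_DIM), rng.normal(size=META_DIM))
```

The key point of the sketch is the reinjection step: the regression output becomes one extra input feature for the classifier, mirroring the paper's use of eGFR as an intermediate target.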


Subject(s)
Algorithms , Deep Learning , Diabetes Mellitus , Glomerular Filtration Rate , Hyperuricemia , Metadata , Neural Networks, Computer , Humans , Middle Aged , Hyperuricemia/complications , Male , Female , Diabetes Mellitus/blood , Fundus Oculi , Aged , Adult , Uric Acid/blood , Image Processing, Computer-Assisted/methods
2.
Article in English | MEDLINE | ID: mdl-38083742

ABSTRACT

Positron emission tomography (PET) is the most sensitive molecular imaging modality routinely applied in modern healthcare. The high radioactivity caused by the injected tracer dose is a major concern in PET imaging and limits its clinical applications. However, reducing the dose leads to image quality inadequate for diagnostic practice. Motivated by the need to produce high-quality images from a minimal dose, convolutional neural network (CNN) based methods have been developed for high-quality PET synthesis from low-dose counterparts. Previous CNN-based studies usually map low-dose PET directly into feature space without considering the different dose reduction levels. In this study, a novel approach named CG-3DSRGAN (Classification-Guided Generative Adversarial Network with Super-Resolution Refinement) is presented. Specifically, a multi-tasking coarse generator, guided by a classification head, allows a more comprehensive understanding of the noise-level features present in the low-dose data, resulting in improved image synthesis. Moreover, to recover the spatial details of standard-dose PET, an auxiliary super-resolution network, Contextual-Net, is proposed as a second training stage to narrow the gap between the coarse prediction and standard-dose PET. We compared our method to state-of-the-art methods on whole-body PET with different dose reduction factors (DRFs). Experiments demonstrate that our method outperforms the others at all DRFs. Clinical Relevance: low-dose PET, PET recovery, GAN, task-driven image synthesis, super resolution.
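The classification-guided multi-task idea (a shared encoder feeding both an image-synthesis head and a dose-reduction-factor classifier) can be illustrated with a minimal numpy forward pass. The flattened-volume size, hidden width, number of DRF classes, and random weights are all assumptions for illustration, not the CG-3DSRGAN implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

N_VOX, HID, N_DRF = 256, 32, 4   # flattened toy volume, hidden width, DRF classes

W_enc = rng.normal(scale=0.1, size=(N_VOX, HID))   # shared encoder
W_syn = rng.normal(scale=0.1, size=(HID, N_VOX))   # synthesis head (coarse PET)
W_drf = rng.normal(scale=0.1, size=(HID, N_DRF))   # classification head (noise level)

def coarse_generator(low_dose):
    """Shared features feed both the image-synthesis head and a
    dose-reduction-factor classifier that guides the synthesis."""
    h = np.tanh(low_dose @ W_enc)
    coarse_pet = h @ W_syn            # coarse standard-dose estimate
    drf_probs = softmax(h @ W_drf)    # which noise level is this input?
    return coarse_pet, drf_probs

coarse, probs = coarse_generator(rng.normal(size=N_VOX))
```

Because both heads share the encoder, gradients from the DRF classifier push the encoder to represent noise-level information explicitly, which is the guidance mechanism the abstract describes; the second-stage super-resolution refinement is omitted here.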


Subject(s)
Image Processing, Computer-Assisted , Positron-Emission Tomography , Image Processing, Computer-Assisted/methods , Positron-Emission Tomography/methods , Neural Networks, Computer
3.
Phys Med Biol ; 66(24)2021 12 07.
Article in English | MEDLINE | ID: mdl-34818637

ABSTRACT

Objective. Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the management of soft-tissue sarcomas (STSs). Distant metastases (DM) are the leading cause of death in STS patients, and early detection is important for effectively managing tumors with surgery, radiotherapy, and chemotherapy. In this study, we aim to detect DM early in patients with STS using their PET-CT data. Approach. We derive a new convolutional neural network method for early DM detection. The novelty of our method is the introduction of a constrained hierarchical multi-modality feature learning approach to integrate functional imaging (PET) features with anatomical imaging (CT) features. In addition, we removed the reliance on manual input, e.g. tumor delineation, for extracting imaging features. Main results. Our experimental results on a well-established benchmark PET-CT dataset show that our method achieved the highest accuracy (0.896) and AUC (0.903) scores compared with state-of-the-art methods (unpaired Student's t-test, p-value < 0.05). Significance. Our method could be an effective and supportive tool to aid physicians in tumor quantification and in identifying image biomarkers for cancer treatment.
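The core multi-modality idea (learn modality-specific features from PET and CT separately, then fuse them for classification without any manual tumor delineation) can be sketched as a two-branch forward pass. The dimensions, fusion scheme, and random weights are simplifying assumptions, not the paper's constrained hierarchical architecture:

```python
import numpy as np

rng = np.random.default_rng(2)

D, HID = 128, 32   # flattened toy slice size and feature width (illustrative)

W_pet = rng.normal(scale=0.1, size=(D, HID))        # functional (PET) branch
W_ct = rng.normal(scale=0.1, size=(D, HID))         # anatomical (CT) branch
W_fuse = rng.normal(scale=0.1, size=(2 * HID, HID)) # cross-modality fusion
W_out = rng.normal(scale=0.1, size=(HID, 1))        # DM-risk head

def fuse_pet_ct(pet, ct):
    """Extract modality-specific features, concatenate, and classify.
    The whole image goes in; no tumor mask is required."""
    f_pet = np.tanh(pet @ W_pet)
    f_ct = np.tanh(ct @ W_ct)
    fused = np.tanh(np.concatenate([f_pet, f_ct]) @ W_fuse)
    z = (fused @ W_out).item()
    return 1.0 / (1.0 + np.exp(-z))   # DM probability

score = fuse_pet_ct(rng.normal(size=D), rng.normal(size=D))
```

Keeping separate branches before fusion lets each modality develop its own representation (metabolic activity vs. anatomy) before the network learns how they interact.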


Subject(s)
Deep Learning , Sarcoma , Soft Tissue Neoplasms , Humans , Neural Networks, Computer , Positron Emission Tomography Computed Tomography/methods , Sarcoma/diagnostic imaging , Soft Tissue Neoplasms/diagnostic imaging
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3658-3688, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946670

ABSTRACT

Soft-tissue sarcomas (STS) are a heterogeneous group of malignant neoplasms with a relatively high mortality rate from distant metastases. Early prediction or quantitative evaluation of distant metastasis risk for patients with STS is an important step that can provide better personalized treatments and thereby improve survival rates. Positron emission tomography-computed tomography (PET-CT) is regarded as the imaging modality of choice for the evaluation, staging, and assessment of STS. Radiomics, which refers to the extraction and analysis of quantitative, high-dimensional mineable data from medical images, is foreseen as an important prognostic tool for cancer risk assessment. However, conventional radiomics methods depend heavily on hand-crafted features (e.g. shape and texture) and prior knowledge (e.g. tuning of many parameters) and therefore cannot fully represent the semantic information of the image. In addition, convolutional neural network (CNN) based radiomics methods show potential for improvement, but they are currently designed mainly for a single modality (e.g. CT) or a particular body region (e.g. lung). In this work, we propose a deep multi-modality collaborative learning method to iteratively derive an optimal ensemble of deep and conventional features from PET-CT images. In addition, we introduce an end-to-end volumetric deep learning architecture to learn complementary PET-CT features optimized for image radiomics. Our experimental results on a public PET-CT dataset of STS patients demonstrate that our method performs better than state-of-the-art methods.
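The ensemble idea (combine hand-crafted radiomics statistics with learned deep features into one feature vector for a downstream risk model) can be sketched minimally. The specific statistics, the stand-in "deep" embedding, and the sub-volume size are illustrative assumptions; the paper's iterative feature-selection step is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)

def handcrafted_features(volume):
    """A small illustrative subset of first-order radiomics statistics."""
    v = volume.ravel()
    return np.array([v.mean(), v.std(), v.min(), v.max(),
                     np.median(v), np.percentile(v, 90)])

def deep_features(volume, w):
    """Stand-in for a learned volumetric CNN embedding."""
    return np.tanh(volume.ravel() @ w)

vol = rng.normal(size=(8, 8, 8))                 # toy PET-CT sub-volume
w = rng.normal(scale=0.05, size=(vol.size, 16))  # random stand-in weights

# Ensemble: concatenate conventional and deep features; a risk classifier
# would consume this combined vector.
feats = np.concatenate([handcrafted_features(vol), deep_features(vol, w)])
```

The complementarity argued for in the abstract shows up here as two feature families living side by side in one vector: interpretable intensity statistics plus a learned embedding that can capture semantics the hand-crafted set misses.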


Subject(s)
Deep Learning , Positron Emission Tomography Computed Tomography , Sarcoma/diagnostic imaging , Humans , Neural Networks, Computer