Results 1 - 20 of 21,068
1.
IEEE J Biomed Health Inform ; 28(7): 3997-4009, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954559

ABSTRACT

Magnetic resonance imaging (MRI)-based deep neural networks (DNNs) have been widely developed to perform prostate cancer (PCa) classification. However, in real-world clinical situations, prostate MRIs can easily be impacted by rectal artifacts, which have been found to lead to incorrect PCa classification. Existing DNN-based methods typically do not consider the interference of rectal artifacts on PCa classification and do not include specific strategies to address this problem. In this study, we proposed a novel Targeted adversarial training with Proprietary Adversarial Samples (TPAS) strategy to defend the PCa classification model against the influence of rectal artifacts. Specifically, based on clinical prior knowledge, we generated proprietary adversarial samples with rectal artifact-pattern adversarial noise, which can severely mislead PCa classification models optimized by the ordinary training strategy. We then jointly exploited the generated proprietary adversarial samples and original samples to train the models. To demonstrate the effectiveness of our strategy, we conducted analytical experiments on multiple PCa classification models. Compared with the ordinary training strategy, TPAS can effectively improve single- and multi-parametric PCa classification at the patient, slice and lesion levels, and bring substantial gains to recent advanced models. In conclusion, the TPAS strategy can be identified as a valuable way to mitigate the influence of rectal artifacts on deep learning models for PCa classification.
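The joint training scheme described above — fitting on original samples together with adversarial samples crafted from the model's own gradients — can be illustrated generically. The sketch below uses a plain logistic model and FGSM-style sign-gradient noise as a stand-in for the paper's artifact-pattern noise; the function names and all hyperparameters (`eps`, `lr`, `epochs`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One-step sign-gradient perturbation of input x for a logistic model
    (a generic stand-in for the paper's artifact-pattern adversarial noise)."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    grad_x = (p - y) * w              # d(logistic loss)/dx
    return x + eps * np.sign(grad_x)

def train_joint(X, y, epochs=200, lr=0.1, eps=0.2):
    """Jointly fit on original samples and freshly generated adversarial samples."""
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]), 0.0
    for _ in range(epochs):
        # regenerate adversarial counterparts against the current parameters
        X_adv = np.vstack([fgsm_perturb(x, w, b, t, eps) for x, t in zip(X, y)])
        Xa, ya = np.vstack([X, X_adv]), np.concatenate([y, y])
        p = 1.0 / (1.0 + np.exp(-(Xa @ w + b)))
        w -= lr * Xa.T @ (p - ya) / len(ya)   # gradient step on the joint batch
        b -= lr * np.mean(p - ya)
    return w, b
```

The same two-step loop (perturb against the current model, then train on the union) carries over to DNN classifiers, where the perturbation would be computed by backpropagation.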


Subject(s)
Artifacts; Magnetic Resonance Imaging; Prostatic Neoplasms; Rectum; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Rectum/diagnostic imaging; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Deep Learning
2.
IEEE J Biomed Health Inform ; 28(7): 3798-3809, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954560

ABSTRACT

Major depressive disorder (MDD) is a chronic mental illness which affects people's well-being and is often detected at a later stage of depression, with a likelihood of suicidal ideation. Early detection of MDD is thus necessary to reduce its impact; however, it requires monitoring vitals in daily living conditions. EEG is generally multi-channel and, due to the difficulty of signal acquisition, unsuitable for home-based monitoring, whereas wearable sensors can collect single-channel ECG. Classical machine-learning based MDD detection studies commonly use various heart rate variability features. Feature generation requires domain knowledge, is often challenging, and demands computation power, making it unsuitable for real-time processing. MDDBranchNet is a proposed parallel-branch deep learning model for MDD binary classification from a single-channel ECG which uses additional ECG-derived signals such as the R-R signal and the degree distribution time series of the horizontal visibility graph. The use of derived branches increased the model's accuracy by around 7%. An optimal 20-second overlapped segmentation of the ECG recording was found to be beneficial, with a 70% prediction threshold for maximum MDD detection at a minimum false positive rate. The proposed model evaluated MDD prediction from signal excerpts, irrespective of location (first, middle or last one-third of the recording), instead of considering the entire ECG signal, with minimal performance variation, supporting the idea that MDD phenomena are likely to manifest uniformly throughout the recording.
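One of the derived branches is the degree-distribution time series of the horizontal visibility graph (HVG). In an HVG, two samples are linked when every sample strictly between them lies below both endpoints. A brute-force O(n²) sketch of the degree sequence (the paper presumably uses a faster variant for long ECG segments):

```python
def hvg_degrees(series):
    """Degree of each node in the horizontal visibility graph of a series:
    samples i < j are linked iff every sample strictly between them is
    lower than min(series[i], series[j]). Adjacent samples always link."""
    n = len(series)
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < min(series[i], series[j]) for k in range(i + 1, j)):
                deg[i] += 1
                deg[j] += 1
    return deg
```

For example, `hvg_degrees([1, 2, 1, 3])` returns `[1, 3, 2, 2]`: the second sample "sees" all three others, while the first is shadowed by its taller neighbor.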


Subject(s)
Deep Learning; Depressive Disorder, Major; Electrocardiography; Signal Processing, Computer-Assisted; Humans; Electrocardiography/methods; Depressive Disorder, Major/physiopathology; Depressive Disorder, Major/diagnosis; Algorithms; Adult; Male
3.
IEEE J Biomed Health Inform ; 28(7): 4170-4183, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954557

ABSTRACT

Efficient medical image segmentation aims to provide accurate pixel-wise predictions with a lightweight implementation framework. However, existing lightweight networks generally overlook the generalizability of cross-domain medical segmentation tasks. In this paper, we propose Generalizable Knowledge Distillation (GKD), a novel framework for enhancing the performance of lightweight networks on cross-domain medical segmentation by distilling generalizable knowledge from powerful teacher networks. Considering the domain gaps between different medical datasets, we propose the Model-Specific Alignment Networks (MSAN) to obtain domain-invariant representations. Meanwhile, a customized Alignment Consistency Training (ACT) strategy is designed to promote MSAN training. Based on the domain-invariant vectors in MSAN, we propose two generalizable distillation schemes, Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD). In DCGD, two implicit contrastive graphs are designed to model the intra-coupling and inter-coupling semantic correlations. Then, in DICD, the domain-invariant semantic vectors are reconstructed from the two networks (i.e., teacher and student) in a crossover manner to achieve simultaneous, hierarchical generalization of lightweight networks. Moreover, a metric named Fréchet Semantic Distance (FSD) is tailored to verify the effectiveness of the regularized domain-invariant features. Extensive experiments conducted on the Liver, Retinal Vessel and Colonoscopy segmentation datasets demonstrate the superiority of our method in terms of performance and generalization ability on lightweight networks.


Subject(s)
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer; Databases, Factual; Deep Learning
4.
Ultrasound Q ; 40(3)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38958999

ABSTRACT

ABSTRACT: The objective of the study was to use a deep learning model to differentiate between benign and malignant sentinel lymph nodes (SLNs) in patients with breast cancer, compared to radiologists' assessments. Seventy-nine women with breast cancer were enrolled and underwent lymphosonography and contrast-enhanced ultrasound (CEUS) examination after subcutaneous injection of ultrasound contrast agent around their tumor to identify SLNs. Google AutoML was used to develop the image classification model. Grayscale and CEUS images acquired during the ultrasound examination were uploaded with a data distribution of 80% for training/20% for testing. The performance metric used was the area under the precision/recall curve (AuPRC). In addition, 3 radiologists assessed SLNs as normal or abnormal based on a clinically established classification. Two hundred seventeen SLNs were divided in two for model development; model 1 included all SLNs and model 2 had an equal number of benign and malignant SLNs. Validation results were AuPRC 0.84 (grayscale)/0.91 (CEUS) for model 1 and AuPRC 0.91 (grayscale)/0.87 (CEUS) for model 2. The comparison between artificial intelligence (AI) and readers showed statistically significant differences between all models and ultrasound modes: model 1 grayscale AI versus readers, P = 0.047, and model 1 CEUS AI versus readers, P < 0.001; model 2 grayscale AI versus readers, P = 0.032, and model 2 CEUS AI versus readers, P = 0.041. The overall interreader agreement showed κ values of 0.20 for grayscale and 0.17 for CEUS. In conclusion, AutoML showed improved diagnostic performance on balanced datasets. Radiologist performance was not influenced by the dataset's distribution.
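The reported metric, area under the precision/recall curve, can be computed from scores and labels by stepwise integration of precision over recall. A minimal sketch (AutoML computes this internally; tie-handling and interpolation conventions vary between implementations, so treat this as one common convention):

```python
def auprc(y_true, scores):
    """Stepwise area under the precision/recall curve.
    Assumes at least one positive label and no ties in scores."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    total_pos = sum(y_true)
    area, prev_recall = 0.0, 0.0
    for i in order:                      # sweep the threshold downwards
        if y_true[i]:
            tp += 1
        else:
            fp += 1
        precision = tp / (tp + fp)
        recall = tp / total_pos
        area += precision * (recall - prev_recall)
        prev_recall = recall
    return area
```

A perfect ranking (all positives scored above all negatives) yields 1.0; a classifier that puts a negative first is penalized on the earliest recall step.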


Subject(s)
Breast Neoplasms; Deep Learning; Sentinel Lymph Node; Humans; Female; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Sentinel Lymph Node/diagnostic imaging; Middle Aged; Aged; Adult; Radiologists/statistics & numerical data; Ultrasonography, Mammary/methods; Contrast Media; Lymphatic Metastasis/diagnostic imaging; Ultrasonography/methods; Sentinel Lymph Node Biopsy/methods; Breast/diagnostic imaging; Reproducibility of Results
5.
Neurosurg Rev ; 47(1): 300, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951288

ABSTRACT

The diagnosis of Moyamoya disease (MMD) relies heavily on imaging, which could benefit from standardized machine learning tools. This study aims to evaluate the diagnostic efficacy of deep learning (DL) algorithms for MMD by analyzing sensitivity, specificity, and the area under the curve (AUC) compared to expert consensus. We conducted a systematic search of PubMed, Embase, and Web of Science for articles published from inception to February 2024. Eligible studies were required to report diagnostic accuracy metrics such as sensitivity, specificity, and AUC, excluding those not in English or using traditional machine learning methods. Seven studies were included, comprising a sample of 4,416 patients, of whom 1,358 had MMD. The pooled sensitivity for common and random effects models was 0.89 (95% CI: 0.85 to 0.92) and 0.92 (95% CI: 0.85 to 0.96), respectively. The pooled specificity was 0.89 (95% CI: 0.86 to 0.91) in the common effects model and 0.91 (95% CI: 0.75 to 0.97) in the random effects model. Two studies reported the AUC alongside their confidence intervals. A meta-analysis synthesizing these findings aggregated a mean AUC of 0.94 (95% CI: 0.92 to 0.96) for common effects and 0.89 (95% CI: 0.76 to 1.02) for random effects models. Deep learning models significantly enhance the diagnosis of MMD by efficiently extracting and identifying complex image patterns with high sensitivity and specificity. Trial registration: CRD42024524998 https://www.crd.york.ac.uk/prospero/displayrecord.php?RecordID=524998.
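Random-effects pooling of the kind reported here is commonly done with the DerSimonian-Laird estimator; the abstract does not state the exact estimator used, so the sketch below is a generic illustration (in practice sensitivities would typically be pooled on the logit scale and back-transformed):

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird random-effects pooled estimate.
    effects: per-study effect sizes; variances: their within-study variances."""
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic around the fixed-effect estimate
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)        # between-study variance estimate
    w_re = [1.0 / (v + tau2) for v in variances]
    return sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
```

When the studies agree exactly, the between-study variance collapses to zero and the pooled value equals the common effect; heterogeneous studies are down-weighted toward a wider-interval average, which is why the random-effects estimates above differ from the common-effects ones.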


Subject(s)
Deep Learning; Moyamoya Disease; Moyamoya Disease/diagnosis; Humans; Algorithms; Sensitivity and Specificity
6.
Cell Metab ; 36(7): 1482-1493.e7, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38959862

ABSTRACT

Although human core body temperature is known to decrease with age, the age dependency of facial temperature and its potential to indicate aging rate or aging-related diseases remains uncertain. Here, we collected thermal facial images of 2,811 Han Chinese individuals 20-90 years old, developed the ThermoFace method to automatically process and analyze images, and then generated thermal age and disease prediction models. The ThermoFace deep learning model for thermal facial age has a mean absolute deviation of about 5 years in cross-validation and 5.18 years in an independent cohort. The difference between predicted and chronological age is highly associated with metabolic parameters, sleep time, and gene expression pathways like DNA repair, lipolysis, and ATPase in the blood transcriptome, and it is modifiable by exercise. Consistently, ThermoFace disease predictors forecast metabolic diseases like fatty liver with high accuracy (AUC > 0.80), with predicted disease probability correlated with metabolic parameters.


Subject(s)
Aging; Face; Metabolic Diseases; Humans; Middle Aged; Aged; Adult; Male; Female; Aged, 80 and over; Young Adult; Deep Learning; Body Temperature; Image Processing, Computer-Assisted
7.
Vestn Oftalmol ; 140(3): 82-87, 2024.
Article in Russian | MEDLINE | ID: mdl-38962983

ABSTRACT

This article reviews literature on the use of artificial intelligence (AI) for screening, diagnosis, monitoring and treatment of glaucoma. The first part of the review provides information on how AI methods improve the effectiveness of glaucoma screening, and presents technologies that use deep learning, including neural networks, for the analysis of big data obtained by ocular imaging methods (fundus imaging, optical coherence tomography of the anterior and posterior eye segments, digital gonioscopy, ultrasound biomicroscopy, etc.), including a multimodal approach. The results found in the reviewed literature are contradictory, indicating that improvement of the AI models requires further research and a standardized approach. The use of neural networks for timely detection of glaucoma based on multimodal imaging will reduce the risk of blindness associated with glaucoma.


Subject(s)
Artificial Intelligence; Deep Learning; Glaucoma; Neural Networks, Computer; Humans; Glaucoma/diagnosis; Tomography, Optical Coherence/methods; Mass Screening/methods; Diagnostic Techniques, Ophthalmological
8.
Sci Rep ; 14(1): 15056, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38956075

ABSTRACT

Celiac Disease (CD) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. CD negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. Therefore, an effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. This study utilizes Raman spectroscopy combined with deep learning models to achieve a non-invasive, rapid, and accurate method for distinguishing celiac disease patients from healthy controls. A total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. Convolutional Neural Network (CNN), Multi-Scale Convolutional Neural Network (MCNN), Residual Network (ResNet), and Deep Residual Shrinkage Network (DRSN) classification models were employed. The accuracy rates for these models were found to be 86.67%, 90.76%, 86.67% and 95.00%, respectively. Comparative validation results revealed that the DRSN model exhibited the best performance, with an AUC value and accuracy of 97.60% and 95%, respectively. This confirms the superiority of Raman spectroscopy combined with deep learning in the diagnosis of celiac disease.


Subject(s)
Celiac Disease; Deep Learning; Spectrum Analysis, Raman; Celiac Disease/diagnosis; Celiac Disease/blood; Humans; Spectrum Analysis, Raman/methods; Female; Male; Adult; Neural Networks, Computer; Case-Control Studies; Middle Aged
9.
Sci Data ; 11(1): 722, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956115

ABSTRACT

Around 20% of complete blood count samples necessitate visual review using light microscopes or digital pathology scanners. There is currently no technological alternative to the visual examination of red blood cell (RBC) morphology/shapes. True/non-artifact teardrop-shaped RBCs and schistocytes/fragmented RBCs are commonly associated with serious medical conditions that could be fatal, while increased ovalocytes are associated with almost all types of anemias. 25 distinct blood smears, each from a different patient, were manually prepared, stained, and then sorted into four groups. Each group underwent imaging using different cameras integrated into light microscopes with 40X microscopic lenses, resulting in a total of 47K+ field images/patches. Two hematologists worked cell by cell to provide one million+ segmented RBCs with their XYWH coordinates and classified 240K+ RBCs into nine shapes. This dataset (Elsafty_RBCs_for_AI) enables the development/testing of deep learning-based (DL) automation of RBC morphology/shape examination, including specific normalization of blood smear stains (different from histopathology stains), detection/counting, segmentation, and classification. Two codes are provided (Elsafty_Codes_for_AI), one for semi-automated image processing and another for training/testing of a DL-based image classifier.


Subject(s)
Erythrocytes; Erythrocytes/cytology; Humans; Microscopy; Deep Learning; Image Processing, Computer-Assisted
10.
Sci Rep ; 14(1): 15219, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956117

ABSTRACT

Blinding eye diseases are often related to changes in retinal structure, which can be detected by analysing retinal blood vessels in fundus images. However, existing techniques struggle to accurately segment these delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on specific operations can limit its ability to capture crucial details such as the edges of the vessel. This paper introduces LMBiS-Net, a lightweight convolutional neural network designed for the segmentation of retinal vessels. LMBiS-Net achieves exceptional performance with a remarkably low number of learnable parameters (only 0.172 million). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for the information flow between the encoder and decoder. In addition, we have optimised the efficiency of the model by carefully selecting the number of filters to avoid filter overlap. This optimisation significantly reduces training time and improves computational efficiency. To assess LMBiS-Net's robustness and ability to generalise to unseen data, we conducted comprehensive evaluations on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. The proposed LMBiS-Net achieves significant performance metrics across these datasets. It obtains sensitivity values of 83.60%, 84.37%, 86.05%, and 83.48%, specificity values of 98.83%, 98.77%, 98.96%, and 98.77%, accuracy (acc) scores of 97.08%, 97.69%, 97.75%, and 96.90%, and AUC values of 98.80%, 98.82%, 98.71%, and 88.77% on the DRIVE, STARE, CHASE_DB1, and HRF datasets, respectively. In addition, it records F1 scores of 83.43%, 84.44%, 83.54%, and 78.73% on the same datasets. Our evaluations demonstrate that LMBiS-Net achieves high segmentation accuracy while exhibiting both robustness and generalisability across various retinal image datasets. This combination of qualities makes LMBiS-Net a promising tool for various clinical applications.
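The reported sensitivity, specificity, accuracy and F1 all derive from the pixel-wise confusion counts between a predicted vessel mask and the ground truth. A flattened-mask sketch of those standard definitions:

```python
def seg_metrics(pred, truth):
    """Sensitivity, specificity, accuracy and F1 from two flattened binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if not p and t)
    sensitivity = tp / (tp + fn)              # vessel pixels found
    specificity = tn / (tn + fp)              # background pixels kept
    accuracy = (tp + tn) / len(pred)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, accuracy, f1
```

Because retinal vessels occupy only a few percent of the image, accuracy and specificity run high while sensitivity and F1 stay lower — exactly the pattern in the numbers above.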


Subject(s)
Deep Learning; Image Processing, Computer-Assisted; Neural Networks, Computer; Retinal Vessels; Retinal Vessels/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Algorithms
11.
Sci Rep ; 14(1): 15245, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956183

ABSTRACT

In hybrid automatic insulin delivery (HAID) systems, meal disturbance is compensated by feedforward control, which requires the announcement of the meal by the patient with type 1 diabetes (DM1) to achieve the desired glycemic control performance. The calculation of the insulin bolus in the HAID system is based on the amount of carbohydrates (CHO) in the meal and patient-specific parameters, i.e. the carbohydrate-to-insulin ratio (CR) and the insulin sensitivity-related correction factor (CF). The estimation of CHO in a meal is prone to errors and is burdensome for patients. This study proposes a fully automatic insulin delivery (FAID) system that eliminates patient intervention by compensating for unannounced meals. This study exploits the deep reinforcement learning (DRL) algorithm to calculate the insulin bolus for unannounced meals without utilizing information on CHO content. The DRL bolus calculator is integrated with a closed-loop controller and a meal detector (both previously developed by our group) to implement the FAID system. An adult cohort of 68 virtual patients based on the modified UVa/Padova simulator was used for in-silico trials. The percentage of the overall duration spent in the target range of 70-180 mg/dL was 71.2% and 76.2%, <70 mg/dL was 0.9% and 0.1%, and >180 mg/dL was 26.7% and 21.1%, respectively, for the FAID system and the HAID system utilizing a standard bolus calculator (SBC) including CHO misestimation. The proposed algorithm can be exploited to realize FAID systems in the future.
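The outcome metrics quoted above are simple time-in-range percentages over the simulated glucose trace. A minimal sketch of that bookkeeping:

```python
def glycemic_ranges(glucose_mg_dl):
    """Percent of samples in target (70-180 mg/dL), below (<70) and above (>180)."""
    n = len(glucose_mg_dl)
    tir = sum(1 for g in glucose_mg_dl if 70 <= g <= 180) / n * 100  # time in range
    tbr = sum(1 for g in glucose_mg_dl if g < 70) / n * 100          # time below range
    tar = sum(1 for g in glucose_mg_dl if g > 180) / n * 100         # time above range
    return tir, tbr, tar
```

The three percentages partition the recording, so they always sum to 100.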


Subject(s)
Deep Learning; Diabetes Mellitus, Type 1; Insulin Infusion Systems; Insulin; Insulin/administration & dosage; Humans; Diabetes Mellitus, Type 1/drug therapy; Diabetes Mellitus, Type 1/blood; Algorithms; Blood Glucose/analysis; Adult; Hypoglycemic Agents/administration & dosage
12.
Nat Commun ; 15(1): 5566, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956442

ABSTRACT

Accurately modeling the protein fitness landscapes holds great importance for protein engineering. Pre-trained protein language models have achieved state-of-the-art performance in predicting protein fitness without wet-lab experimental data, but their accuracy and interpretability remain limited. On the other hand, traditional supervised deep learning models require abundant labeled training examples for performance improvements, posing a practical barrier. In this work, we introduce FSFP, a training strategy that can effectively optimize protein language models under extreme data scarcity for fitness prediction. By combining meta-transfer learning, learning to rank, and parameter-efficient fine-tuning, FSFP can significantly boost the performance of various protein language models using merely tens of labeled single-site mutants from the target protein. In silico benchmarks across 87 deep mutational scanning datasets demonstrate FSFP's superiority over both unsupervised and supervised baselines. Furthermore, we successfully apply FSFP to engineer the Phi29 DNA polymerase through wet-lab experiments, achieving a 25% increase in the positive rate. These results underscore the potential of our approach in aiding AI-guided protein engineering.


Subject(s)
Protein Engineering; Protein Engineering/methods; Deep Learning; Proteins/genetics; Proteins/metabolism; Mutation; DNA-Directed DNA Polymerase/metabolism; Computer Simulation; Models, Molecular; Algorithms
13.
BMC Med Imaging ; 24(1): 162, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38956470

ABSTRACT

BACKGROUND: The image quality of computed tomography angiography (CTA) images following endovascular aneurysm repair (EVAR) is not satisfactory, since artifacts resulting from metallic implants obstruct the clear depiction of stent and isolation lumens, and also adjacent soft tissues. However, current techniques to reduce these artifacts still need further advancements due to higher radiation doses, longer processing times and so on. Thus, the aim of this study is to assess the impact of utilizing Single-Energy Metal Artifact Reduction (SEMAR) alongside a novel deep learning image reconstruction technique, known as the Advanced Intelligent Clear-IQ Engine (AiCE), on image quality of CTA follow-ups conducted after EVAR. MATERIALS: This retrospective study included 47 patients (mean age ± standard deviation: 68.6 ± 7.8 years; 37 males) who underwent CTA examinations following EVAR. Images were reconstructed using four different methods: hybrid iterative reconstruction (HIR), AiCE, the combination of HIR and SEMAR (HIR + SEMAR), and the combination of AiCE and SEMAR (AiCE + SEMAR). Two radiologists, blinded to the reconstruction techniques, independently evaluated the images. Quantitative assessments included measurements of image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), the longest length of artifacts (AL), and artifact index (AI). These parameters were subsequently compared across different reconstruction methods. RESULTS: The subjective results indicated that AiCE + SEMAR performed the best in terms of image quality. The mean image noise intensity was significantly lower in the AiCE + SEMAR group (25.35 ± 6.51 HU) than in the HIR (47.77 ± 8.76 HU), AiCE (42.93 ± 10.61 HU), and HIR + SEMAR (30.34 ± 4.87 HU) groups (p < 0.001). Additionally, AiCE + SEMAR exhibited the highest SNRs and CNRs, as well as the lowest AIs and ALs. Importantly, endoleaks and thrombi were most clearly visualized using AiCE + SEMAR. 
CONCLUSIONS: In comparison to other reconstruction methods, the combination of AiCE + SEMAR demonstrates superior image quality, thereby enhancing the detection capabilities and diagnostic confidence for potential complications such as early minor endoleaks and thrombi following EVAR. This improvement in image quality could lead to more accurate diagnoses and better patient outcomes.
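The quantitative assessments (image noise, SNR, CNR) follow standard ROI-statistics definitions; the abstract does not give its exact formulas, so the sketch below uses one common CT convention as an assumption: noise as the standard deviation of a background ROI, SNR as ROI mean over that noise, and CNR as the ROI-background mean difference over the noise.

```python
import statistics

def snr_cnr(roi_hu, background_hu):
    """SNR = mean(ROI) / sd(background); CNR = (mean(ROI) - mean(bg)) / sd(bg).
    Inputs are lists of Hounsfield-unit pixel values from two ROIs."""
    mu_roi = statistics.mean(roi_hu)
    mu_bg = statistics.mean(background_hu)
    noise = statistics.stdev(background_hu)   # image noise estimate
    return mu_roi / noise, (mu_roi - mu_bg) / noise
```

Under these definitions, the lower noise reported for AiCE + SEMAR directly raises both SNR and CNR for a fixed contrast level.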


Subject(s)
Artifacts; Computed Tomography Angiography; Endovascular Procedures; Humans; Retrospective Studies; Female; Computed Tomography Angiography/methods; Aged; Male; Endovascular Procedures/methods; Middle Aged; Aortic Aneurysm, Abdominal/surgery; Aortic Aneurysm, Abdominal/diagnostic imaging; Deep Learning; Radiographic Image Interpretation, Computer-Assisted/methods; Stents; Endovascular Aneurysm Repair
14.
BMC Med Imaging ; 24(1): 165, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956579

ABSTRACT

BACKGROUND: Pneumoconiosis has a significant impact on the quality of patient survival due to its difficult staging diagnosis and poor prognosis. This study aimed to develop a computer-aided diagnostic system for the screening and staging of pneumoconiosis based on a multi-stage joint deep learning approach using X-ray chest radiographs of pneumoconiosis patients. METHODS: In this study, a total of 498 medical chest radiographs were obtained from the Department of Radiology of West China Fourth Hospital. The dataset was randomly divided into a training set and a test set at a ratio of 4:1. Following histogram equalization for image enhancement, the images were segmented using the U-Net model, and staging was predicted using a convolutional neural network classification model. We first used Efficient-Net for multi-classification staging diagnosis, but the results showed that stage I/II pneumoconiosis was difficult to diagnose. Therefore, based on clinical practice, we continued to improve the model by using the Res-Net 34 multi-stage joint method. RESULTS: Of the 498 cases collected, the classification model using Efficient-Net achieved an accuracy of 83% with a Quadratic Weighted Kappa (QWK) score of 0.889. The classification model using the multi-stage joint approach of Res-Net 34 achieved an accuracy of 89% with an area under the curve (AUC) of 0.98 and a high QWK score of 0.94. CONCLUSIONS: In this study, the diagnostic accuracy of pneumoconiosis staging was significantly improved by an innovative combined multi-stage approach, which provides a reference for clinical application and pneumoconiosis screening.
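Quadratic Weighted Kappa, the agreement score reported above, penalizes disagreements by the squared distance between ordinal stage labels, so predicting stage III for a stage 0 case costs far more than an off-by-one error. A from-scratch sketch of the standard definition:

```python
def quadratic_weighted_kappa(a, b, n_classes):
    """QWK between two integer rating lists with labels in [0, n_classes)."""
    n = len(a)
    # observed confusion matrix
    obs = [[0] * n_classes for _ in range(n_classes)]
    for i, j in zip(a, b):
        obs[i][j] += 1
    hist_a = [a.count(k) for k in range(n_classes)]
    hist_b = [b.count(k) for k in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2   # quadratic disagreement weight
            expected = hist_a[i] * hist_b[j] / n      # chance-agreement matrix
            num += w * obs[i][j]
            den += w * expected
    return 1 - num / den
```

Perfect agreement gives 1.0, chance-level agreement gives 0, and systematic maximal disagreement is negative.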


Subject(s)
Deep Learning; Pneumoconiosis; Humans; Pneumoconiosis/diagnostic imaging; Pneumoconiosis/pathology; Male; Middle Aged; Female; Radiography, Thoracic/methods; Aged; Adult; Neural Networks, Computer; China; Diagnosis, Computer-Assisted/methods; Radiographic Image Interpretation, Computer-Assisted/methods
15.
BMC Med Imaging ; 24(1): 163, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38956583

ABSTRACT

PURPOSE: To examine whether there is a significant difference in image quality between the deep learning reconstruction (DLR [AiCE, Advanced Intelligent Clear-IQ Engine]) and hybrid iterative reconstruction (HIR [AIDR 3D, adaptive iterative dose reduction three dimensional]) algorithms on the conventional enhanced and CE-boost (contrast-enhancement-boost) images of indirect computed tomography venography (CTV) of the lower extremities. MATERIALS AND METHODS: In this retrospective study, seventy patients who underwent CTV from June 2021 to October 2022 to assess deep vein thrombosis and varicose veins were included. Unenhanced and enhanced images were reconstructed for AIDR 3D and AiCE; AIDR 3D-boost and AiCE-boost images were obtained using subtraction software. Objective and subjective image qualities were assessed, and radiation doses were recorded. RESULTS: The CT values of the inferior vena cava (IVC), femoral vein (FV), and popliteal vein (PV) in the CE-boost images were approximately 1.3 (1.31-1.36) times higher than those of the enhanced images. There were no significant differences in mean CT values of the IVC, FV, and PV between AIDR 3D and AiCE, or between AIDR 3D-boost and AiCE-boost images. Noise in AiCE and AiCE-boost images was significantly lower than in AIDR 3D and AIDR 3D-boost images (P < 0.05). The SNR (signal-to-noise ratio), CNR (contrast-to-noise ratio), and subjective scores of AiCE-boost images were the highest among the 4 groups, surpassing AiCE, AIDR 3D, and AIDR 3D-boost images (all P < 0.05). CONCLUSION: In indirect CTV images of the lower extremities, DLR with the CE-boost technique could decrease image noise and improve CT values, SNR, CNR, and subjective image scores. AiCE-boost images received the highest subjective image quality score and were more readily accepted by radiologists.
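The CE-boost idea is to subtract the unenhanced from the enhanced acquisition to isolate an iodine map, then add a portion of that map back onto the enhanced image, raising vascular attenuation. The sketch below is schematic only: the vendor subtraction software registers the volumes and its weighting/filtering details are not given in the abstract, so `weight` is an illustrative assumption.

```python
import numpy as np

def ce_boost(enhanced_hu, unenhanced_hu, weight=1.0):
    """Schematic contrast-enhancement boost on registered HU arrays:
    boosted = enhanced + weight * (enhanced - unenhanced)."""
    iodine_map = enhanced_hu - unenhanced_hu   # contrast-only signal
    return enhanced_hu + weight * iodine_map
```

With this form, non-enhancing tissue (where the subtraction is near zero) is left unchanged while iodinated vessels gain attenuation, consistent with the ~1.3x venous CT values reported above.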


Subject(s)
Contrast Media; Deep Learning; Lower Extremity; Phlebography; Humans; Male; Retrospective Studies; Female; Middle Aged; Lower Extremity/blood supply; Lower Extremity/diagnostic imaging; Aged; Phlebography/methods; Adult; Algorithms; Venous Thrombosis/diagnostic imaging; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Popliteal Vein/diagnostic imaging; Varicose Veins/diagnostic imaging; Vena Cava, Inferior/diagnostic imaging; Femoral Vein/diagnostic imaging; Radiation Dosage; Computed Tomography Angiography/methods; Aged, 80 and over; Radiographic Image Enhancement/methods
16.
Radiat Oncol ; 19(1): 87, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956690

ABSTRACT

BACKGROUND AND PURPOSE: Various deep learning auto-segmentation (DLAS) models have been proposed, some of which have been commercialized. However, performance degradation is a notable issue when pretrained models are deployed in the clinic. This study aims to enhance the precision of a popular commercial DLAS product in rectal cancer radiotherapy through localized fine-tuning, addressing challenges in practicality and generalizability in real-world clinical settings. MATERIALS AND METHODS: A total of 120 Stage II/III mid-low rectal cancer patients were retrospectively enrolled and divided into three datasets: training (n = 60), external validation (ExVal, n = 30), and generalizability evaluation (GenEva, n = 30). The patients in the training and ExVal datasets were acquired on the same CT simulator, while those in GenEva were acquired on a different CT simulator. The commercial DLAS software first underwent localized fine-tuning (LFT) for the clinical target volume (CTV) and organs-at-risk (OAR) using the training data, and was then validated on ExVal and GenEva. Performance evaluation compared the LFT model with the vendor-provided pretrained model (VPM) against ground-truth contours, using metrics such as the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (95HD), sensitivity, and specificity. RESULTS: LFT significantly improved CTV delineation accuracy (p < 0.05), outperforming the VPM in target volume, DSC, 95HD, and specificity. Both models exhibited adequate accuracy for the bladder and femoral heads, and LFT demonstrated significant enhancement in segmenting the more complex small intestine. We did not identify performance degradation when the LFT and VPM models were applied to the GenEva dataset. CONCLUSIONS: The necessity and potential benefits of LFT DLAS for institution-specific model adaptation are underscored. The commercial DLAS software exhibits superior accuracy after localized fine-tuning and is highly robust to imaging-equipment changes.
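As a hedged illustration of the overlap metrics named above (not code from the study), the Dice similarity coefficient, sensitivity, and specificity can be computed directly from a predicted and a ground-truth binary mask; the toy masks below are made up:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def sensitivity(pred, gt):
    """True-positive rate: fraction of ground-truth voxels recovered."""
    tp = np.logical_and(pred, gt).sum()
    return tp / gt.sum()

def specificity(pred, gt):
    """True-negative rate: fraction of background voxels left as background."""
    tn = np.logical_and(pred == 0, gt == 0).sum()
    return tn / (gt == 0).sum()

# Toy 3x3 "contours": 2 of the 3 ground-truth voxels overlap the prediction.
pred = np.array([[1, 1, 0], [0, 1, 0], [0, 0, 0]])
gt   = np.array([[1, 0, 0], [0, 1, 1], [0, 0, 0]])
print(dice(pred, gt))         # 2*2 / (3+3) ≈ 0.667
print(sensitivity(pred, gt))  # 2/3
print(specificity(pred, gt))  # 5/6
```

The 95HD reported in the paper additionally requires surface-distance computation (e.g. via a distance transform) and is omitted here for brevity.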


Subject(s)
Deep Learning , Organs at Risk , Radiotherapy Planning, Computer-Assisted , Rectal Neoplasms , Humans , Rectal Neoplasms/radiotherapy , Rectal Neoplasms/pathology , Organs at Risk/radiation effects , Retrospective Studies , Radiotherapy Planning, Computer-Assisted/methods , Female , Male , Middle Aged , Aged , Radiotherapy Dosage , Tomography, X-Ray Computed , Adult , Radiotherapy, Intensity-Modulated/methods
17.
PLoS One ; 19(7): e0305864, 2024.
Article in English | MEDLINE | ID: mdl-38959272

ABSTRACT

This research aims to establish a practical stress-detection framework by integrating physiological indicators and deep learning techniques. Using a virtual reality (VR) interview paradigm mirroring real-world scenarios, our focus is on classifying stress states from accessible single-channel electroencephalogram (EEG) and galvanic skin response (GSR) data. Thirty participants underwent stress-inducing VR interviews, with biosignals recorded for the deep learning models. Five convolutional neural network (CNN) architectures and one Vision Transformer model were evaluated; a multiple-column structure combining EEG and GSR features showed heightened predictive capability and an enhanced area under the receiver operating characteristic curve (AUROC) for stress prediction compared to the single-column models. Our experimental protocol effectively elicited stress responses, observed through fluctuations in the stress visual analogue scale (VAS), EEG, and GSR metrics. Among the single-column architectures, ResNet-152 excelled on GSR with an AUROC of 0.944 (±0.027), while the Vision Transformer performed best on EEG, achieving a peak AUROC of 0.886 (±0.069). Notably, the multiple-column structure, based on ResNet-50, achieved the highest AUROC of 0.954 (±0.018) in stress classification. Through VR-based simulated interviews, our study induced social stress responses, leading to significant changes in GSR and EEG measurements. The deep learning models classified stress levels precisely, with the multiple-column strategy proving superior. Additionally, discreetly placing the single-channel EEG sensor behind the ear enhances the convenience and accuracy of stress detection in everyday situations.
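The AUROC values reported above can be computed from raw classifier scores via pairwise score comparison (the Mann-Whitney rank formulation); a minimal sketch with made-up labels and scores, not the study's pipeline:

```python
def auroc(labels, scores):
    """Area under the ROC curve: fraction of (positive, negative) pairs
    where the positive sample scores higher (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical stress (1) vs. baseline (0) scores from a classifier.
labels = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]
print(auroc(labels, scores))  # 3 of 4 pairs ranked correctly -> 0.75
```

A perfect ranking gives 1.0 and a random one about 0.5, which is why the reported 0.944-0.954 values indicate strong separation of stress from baseline.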


Subject(s)
Deep Learning , Electroencephalography , Galvanic Skin Response , Stress, Psychological , Virtual Reality , Humans , Electroencephalography/methods , Female , Male , Adult , Stress, Psychological/physiopathology , Stress, Psychological/diagnosis , Galvanic Skin Response/physiology , Young Adult , ROC Curve , Neural Networks, Computer
18.
F1000Res ; 13: 691, 2024.
Article in English | MEDLINE | ID: mdl-38962692

ABSTRACT

Background: Non-contrast Computed Tomography (NCCT) plays a pivotal role in assessing central nervous system disorders and is a crucial diagnostic method. Iterative reconstruction (IR) methods have enhanced image quality (IQ) but may result in a blotchy appearance and decreased resolution for subtle contrasts. The deep-learning image reconstruction (DLIR) algorithm, which integrates a convolutional neural network (CNN) into the reconstruction process, generates high-quality images with minimal noise. Hence, the objective of this study was to assess the IQ of Precise Image (DLIR) and the IR technique (iDose4) for NCCT brain. Methods: This is a prospective study. Thirty patients who underwent NCCT brain were included. The images were reconstructed using DLIR-standard and iDose4. Qualitative IQ analysis parameters, such as overall image quality (OQ), subjective image noise (SIN), and artifacts, were measured. Quantitative IQ analysis parameters, such as Computed Tomography (CT) attenuation (HU), image noise (IN), posterior fossa index (PFI), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in the basal ganglia (BG) and centrum semiovale (CSO), were measured. Paired t-tests were performed for qualitative and quantitative IQ analyses between iDose4 and DLIR-standard. Kappa statistics were used to assess inter-observer agreement for the qualitative analysis. Results: Quantitative IQ analysis showed significant differences (p<0.05) in IN, SNR, and CNR between iDose4 and DLIR-standard at the BG and CSO levels. With DLIR-standard, IN was reduced (41.8-47.6%) while SNR (65-82%) and CNR (68-78.8%) were increased. PFI was reduced (27.08%) with DLIR-standard. Qualitative IQ analysis showed significant differences (p<0.05) in OQ, SIN, and artifacts between DLIR-standard and iDose4, with DLIR-standard receiving higher qualitative IQ scores. Conclusion: DLIR-standard yielded superior quantitative and qualitative IQ compared to the IR technique (iDose4), significantly reducing IN and artifacts relative to iDose4 in NCCT brain.
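SNR and CNR conventions vary between studies; one common definition (ROI mean attenuation over noise SD, and between-tissue contrast over noise SD) can be sketched as follows, with made-up HU values rather than data from this paper:

```python
import statistics

def snr(roi_hu):
    """Signal-to-noise ratio: mean attenuation / image noise (SD) in an ROI."""
    return statistics.fmean(roi_hu) / statistics.pstdev(roi_hu)

def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio between two tissue ROIs (e.g. BG vs. CSO)."""
    return abs(statistics.fmean(roi_a) - statistics.fmean(roi_b)) / noise_sd

bg  = [40.0, 42.0, 38.0, 40.0]   # illustrative basal ganglia ROI, HU
cso = [30.0, 31.0, 29.0, 30.0]   # illustrative centrum semiovale ROI, HU
print(snr(bg))                               # 40 / sqrt(2) ≈ 28.28
print(cnr(bg, cso, statistics.pstdev(bg)))   # 10 / sqrt(2) ≈ 7.07
```

Under this convention, the reported gains follow directly: a reconstruction that lowers the noise SD at unchanged mean attenuation raises both SNR and CNR.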


Subject(s)
Brain , Deep Learning , Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Humans , Pilot Projects , Female , Tomography, X-Ray Computed/methods , Male , Prospective Studies , Middle Aged , Brain/diagnostic imaging , Adult , Image Processing, Computer-Assisted/methods , Aged , Signal-To-Noise Ratio , Algorithms
20.
Sci Rep ; 14(1): 15433, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38965354

ABSTRACT

The COVID-19 pandemic continues to challenge healthcare systems globally, necessitating advanced tools for clinical decision support. Amidst the complexity of COVID-19 symptomatology and disease severity prediction, there is a critical need for robust decision support systems to aid healthcare professionals in timely and informed decision-making. In response to this pressing demand, we introduce BayesCovid, a novel decision support system integrating Bayesian network models and deep learning techniques. BayesCovid automates data preprocessing and leverages advanced computational methods to unravel intricate patterns in COVID-19 symptom dynamics. By combining Bayesian networks and Bayesian deep learning models, BayesCovid offers a comprehensive solution for uncovering hidden relationships between symptoms and predicting disease severity. Experimental validation demonstrates BayesCovid's high prediction accuracy (83.52-98.97%). Our work represents a significant stride in addressing the urgent need for clinical decision support systems tailored to the complexities of managing COVID-19 cases. By providing healthcare professionals with actionable insights derived from sophisticated computational analysis, BayesCovid aims to enhance clinical decision-making, optimise resource allocation, and improve patient outcomes in the ongoing battle against the COVID-19 pandemic.
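The abstract does not disclose BayesCovid's network structure or parameters; as a hedged sketch of the underlying idea only, here is posterior inference by Bayes' rule in a hypothetical two-node network (severity → symptom) with entirely illustrative probabilities:

```python
# Hypothetical two-node Bayesian network: Severity -> Fever.
# All probabilities below are made up for illustration.
p_severe = 0.2                  # prior P(severe)
p_fever_given = {True: 0.8,     # P(fever | severe)
                 False: 0.3}    # P(fever | not severe)

def posterior_severe(fever_observed=True):
    """P(severe | fever evidence) by Bayes' rule over the binary prior."""
    num = p_severe * (p_fever_given[True] if fever_observed
                      else 1 - p_fever_given[True])
    den = num + (1 - p_severe) * (p_fever_given[False] if fever_observed
                                  else 1 - p_fever_given[False])
    return num / den

print(posterior_severe(True))   # 0.16 / 0.40 = 0.4
print(posterior_severe(False))  # 0.04 / 0.60 ≈ 0.067
```

Observing the symptom doubles the severity belief from the 0.2 prior to 0.4; a full system like BayesCovid performs the same kind of update over many symptom variables at once.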


Subject(s)
Bayes Theorem , COVID-19 , Deep Learning , Pandemics , COVID-19/epidemiology , COVID-19/virology , Humans , SARS-CoV-2/isolation & purification , Decision Support Systems, Clinical , Artificial Intelligence