Results 1 - 20 of 20,746
1.
J Med Primatol ; 53(4): e12722, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38949157

ABSTRACT

BACKGROUND: Tuberculosis (TB) kills approximately 1.6 million people yearly despite the fact that anti-TB drugs are generally curative. TB case detection and monitoring of therapy therefore need a comprehensive approach. Automated radiological analysis by machine learning (ML), combined with clinical, microbiological, and immunological data, can help achieve this. METHODS: Six rhesus macaques were experimentally inoculated with pathogenic Mycobacterium tuberculosis in the lung. Data, including computed tomography (CT), were collected at 0, 2, 4, 8, 12, 16, and 20 weeks. RESULTS: Our ML-based CT analysis (TB-Net) efficiently and accurately tracked disease progression, performing better than a standard deep learning baseline (OpenAI's CLIP ViT). TB-Net results were more consistent than, and were independently confirmed by, blinded manual disease scoring by two radiologists, and they exhibited strong correlations with blood biomarkers, TB lesion volumes, and disease signs during disease pathogenesis. CONCLUSION: The proposed approach is valuable for early disease detection, monitoring the efficacy of therapy, and clinical decision making.


Subject(s)
Biomarkers; Deep Learning; Macaca mulatta; Mycobacterium tuberculosis; Tomography, X-Ray Computed; Animals; Biomarkers/blood; Tomography, X-Ray Computed/veterinary; Tuberculosis/veterinary; Tuberculosis/diagnostic imaging; Disease Models, Animal; Tuberculosis, Pulmonary/diagnostic imaging; Male; Female; Lung/diagnostic imaging; Lung/pathology; Lung/microbiology; Monkey Diseases/diagnostic imaging; Monkey Diseases/microbiology
2.
Neurosurg Rev ; 47(1): 300, 2024 Jun 29.
Article in English | MEDLINE | ID: mdl-38951288

ABSTRACT

The diagnosis of Moyamoya disease (MMD) relies heavily on imaging, which could benefit from standardized machine learning tools. This study aims to evaluate the diagnostic efficacy of deep learning (DL) algorithms for MMD by analyzing sensitivity, specificity, and the area under the curve (AUC) compared to expert consensus. We conducted a systematic search of PubMed, Embase, and Web of Science for articles published from inception to February 2024. Eligible studies were required to report diagnostic accuracy metrics such as sensitivity, specificity, and AUC, excluding those not in English or using traditional machine learning methods. Seven studies were included, comprising a sample of 4,416 patients, of whom 1,358 had MMD. The pooled sensitivity for common and random effects models was 0.89 (95% CI: 0.85 to 0.92) and 0.92 (95% CI: 0.85 to 0.96), respectively. The pooled specificity was 0.89 (95% CI: 0.86 to 0.91) in the common effects model and 0.91 (95% CI: 0.75 to 0.97) in the random effects model. Two studies reported the AUC alongside their confidence intervals. A meta-analysis synthesizing these findings aggregated a mean AUC of 0.94 (95% CI: 0.92 to 0.96) for common effects and 0.89 (95% CI: 0.76 to 1.02) for random effects models. Deep learning models significantly enhance the diagnosis of MMD by efficiently extracting and identifying complex image patterns with high sensitivity and specificity. Trial registration: CRD42024524998 https://www.crd.york.ac.uk/prospero/displayrecord.php?RecordID=524998.
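The pooled estimates above are typically obtained by inverse-variance weighting of per-study values; the following is a minimal sketch of common-effects (fixed-effect) pooling of sensitivities on the logit scale. The study sensitivities and sample sizes are invented for illustration and are not taken from the review.

```python
import math

def pool_logit(props, ns):
    """Common-effects (fixed-effect) inverse-variance pooling of
    proportions (e.g. per-study sensitivities) on the logit scale."""
    num = den = 0.0
    for p, n in zip(props, ns):
        logit = math.log(p / (1 - p))
        w = n * p * (1 - p)          # inverse of the delta-method variance of the logit
        num += w * logit
        den += w
    pooled = num / den
    return 1 / (1 + math.exp(-pooled))  # back-transform to a proportion

# hypothetical per-study sensitivities and sample sizes
sens = [0.88, 0.92, 0.85, 0.90]
sizes = [300, 150, 500, 220]
print(round(pool_logit(sens, sizes), 3))
```

A random-effects pooled estimate, as also reported above, would additionally add a between-study variance component (e.g. DerSimonian-Laird) into each weight.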


Subject(s)
Deep Learning; Moyamoya Disease; Moyamoya Disease/diagnosis; Humans; Algorithms; Sensitivity and Specificity
3.
Trop Anim Health Prod ; 56(6): 192, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38954103

ABSTRACT

Accurate breed identification in dairy cattle is essential for optimizing herd management and improving genetic standards. A smart method for correctly identifying phenotypically similar breeds can empower farmers to enhance herd productivity. A convolutional neural network (CNN) based model was developed for the identification of Sahiwal and Red Sindhi cows. To increase classification accuracy, cow pixels were first segmented from the background using a CNN model. From this segmented image, a masked image was produced by retaining the cow pixels of the original image while eliminating the background. To further improve accuracy, models were trained on four different images of each cow: front view, side view, grayscale front view, and grayscale side view. The masked images of these views were fed to a multi-input CNN model that predicts the class of the input images. The segmentation model achieved intersection-over-union (IoU) and F1-score values of 81.75% and 85.26%, respectively, with an inference time of 296 ms. For the classification task, multiple variants of MobileNet and EfficientNet models were used as the backbone along with pre-trained weights. The MobileNet model achieved 80.0% accuracy for both breeds, while MobileNetV2 and MobileNetV3 reached 82.0% accuracy. CNN models with EfficientNet backbones outperformed the MobileNet models, with accuracy ranging from 84.0% to 86.0%. The F1-scores for these models were above 83.0%, indicating effective breed classification with few false positives and negatives. The present study thus demonstrates that deep learning models can be used effectively to identify phenotypically similar cattle breeds. By accurately identifying zebu breeds, this approach will reduce farmers' dependence on experts.
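The IoU and F1 figures quoted for the segmentation model are standard overlap metrics between a predicted and a ground-truth binary mask; a minimal sketch on toy flattened masks (real use would operate on full-resolution images):

```python
def iou_f1(pred, truth):
    """Intersection-over-union and F1 (Dice) for two flat binary masks."""
    tp = sum(p and t for p, t in zip(pred, truth))      # overlapping foreground pixels
    fp = sum(p and not t for p, t in zip(pred, truth))  # predicted but not true
    fn = sum(t and not p for p, t in zip(pred, truth))  # true but missed
    return tp / (tp + fp + fn), 2 * tp / (2 * tp + fp + fn)

pred  = [1, 1, 1, 0, 0, 1]   # toy predicted cow mask, flattened
truth = [1, 1, 0, 0, 1, 1]   # toy ground-truth mask
print(iou_f1(pred, truth))   # → (0.6, 0.75)
```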


Subject(s)
Deep Learning; Phenotype; Animals; Cattle; Breeding; Neural Networks, Computer; Female; Dairying/methods
4.
IEEE J Biomed Health Inform ; 28(7): 3997-4009, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954559

ABSTRACT

Magnetic resonance imaging (MRI)-based deep neural networks (DNNs) have been widely developed to perform prostate cancer (PCa) classification. However, in real-world clinical situations, prostate MRIs can easily be affected by rectal artifacts, which have been found to lead to incorrect PCa classification. Existing DNN-based methods typically do not consider the interference of rectal artifacts with PCa classification and include no specific strategy to address this problem. In this study, we proposed a novel Targeted adversarial training with Proprietary Adversarial Samples (TPAS) strategy to defend the PCa classification model against the influence of rectal artifacts. Specifically, based on clinical prior knowledge, we generated proprietary adversarial samples with rectal artifact-pattern adversarial noise, which can severely mislead PCa classification models optimized by the ordinary training strategy. We then jointly exploited the generated proprietary adversarial samples and original samples to train the models. To demonstrate the effectiveness of our strategy, we conducted analytical experiments on multiple PCa classification models. Compared with the ordinary training strategy, TPAS effectively improves single- and multi-parametric PCa classification at the patient, slice, and lesion levels, and brings substantial gains to recent advanced models. In conclusion, the TPAS strategy is a valuable way to mitigate the influence of rectal artifacts on deep learning models for PCa classification.


Subject(s)
Artifacts; Magnetic Resonance Imaging; Prostatic Neoplasms; Rectum; Humans; Male; Prostatic Neoplasms/diagnostic imaging; Magnetic Resonance Imaging/methods; Rectum/diagnostic imaging; Neural Networks, Computer; Image Interpretation, Computer-Assisted/methods; Deep Learning
5.
IEEE J Biomed Health Inform ; 28(7): 3798-3809, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954560

ABSTRACT

Major depressive disorder (MDD) is a chronic mental illness which affects people's well-being and is often detected at a later stage of depression, with a likelihood of suicidal ideation. Early detection of MDD is thus necessary to reduce its impact; however, it requires monitoring vitals in daily living conditions. EEG is generally multi-channel and, owing to the difficulty of signal acquisition, unsuitable for home-based monitoring, whereas wearable sensors can collect single-channel ECG. Classical machine-learning based MDD detection studies commonly use various heart rate variability features. Feature generation requires domain knowledge, is often challenging, and demands computational power, making it unsuitable for real-time processing. MDDBranchNet is a proposed parallel-branch deep learning model for MDD binary classification from a single-channel ECG that uses additional ECG-derived signals, such as the R-R signal and the degree-distribution time series of the horizontal visibility graph. The use of derived branches increased the model's accuracy by around 7%. An optimal 20-second overlapped segmentation of the ECG recording was found to be beneficial, with a 70% prediction threshold for maximum MDD detection at a minimum false positive rate. The proposed model evaluated MDD prediction from signal excerpts, irrespective of location (first, middle, or last one-third of the recording), instead of considering the entire ECG signal, with minimal performance variation, supporting the idea that MDD phenomena are likely to manifest uniformly throughout the recording.
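The overlapped windowing described above can be sketched as follows. The 20-second window length comes from the abstract; the 10-second overlap and 128 Hz sampling rate are assumptions chosen for illustration only.

```python
def segment(signal, fs, win_s=20, overlap_s=10):
    """Split a 1-D signal into overlapping fixed-length windows."""
    win = win_s * fs                      # samples per window
    step = (win_s - overlap_s) * fs       # hop between window starts
    return [signal[i:i + win] for i in range(0, len(signal) - win + 1, step)]

fs = 128                       # assumed sampling rate in Hz
ecg = [0.0] * (60 * fs)        # one minute of dummy ECG samples
windows = segment(ecg, fs)
print(len(windows), len(windows[0]))  # → 5 2560
```

Each window would then be classified independently, with the 70% threshold applied to the per-window predictions.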


Subject(s)
Deep Learning; Depressive Disorder, Major; Electrocardiography; Signal Processing, Computer-Assisted; Humans; Electrocardiography/methods; Depressive Disorder, Major/physiopathology; Depressive Disorder, Major/diagnosis; Algorithms; Adult; Male
6.
IEEE J Biomed Health Inform ; 28(7): 4170-4183, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38954557

ABSTRACT

Efficient medical image segmentation aims to provide accurate pixel-wise predictions with a lightweight implementation framework. However, existing lightweight networks generally overlook the generalizability of cross-domain medical segmentation tasks. In this paper, we propose Generalizable Knowledge Distillation (GKD), a novel framework for enhancing the performance of lightweight networks on cross-domain medical segmentation by generalizable knowledge distillation from powerful teacher networks. Considering the domain gaps between different medical datasets, we propose the Model-Specific Alignment Networks (MSAN) to obtain domain-invariant representations. Meanwhile, a customized Alignment Consistency Training (ACT) strategy is designed to promote MSAN training. Based on the domain-invariant vectors in MSAN, we propose two generalizable distillation schemes, Dual Contrastive Graph Distillation (DCGD) and Domain-Invariant Cross Distillation (DICD). In DCGD, two implicit contrastive graphs are designed to model the intra-coupling and inter-coupling semantic correlations. Then, in DICD, the domain-invariant semantic vectors are reconstructed from the two networks (i.e., teacher and student) in a crossover manner to achieve simultaneous, hierarchical generalization of the lightweight networks. Moreover, a metric named Fréchet Semantic Distance (FSD) is tailored to verify the effectiveness of the regularized domain-invariant features. Extensive experiments conducted on the Liver, Retinal Vessel and Colonoscopy segmentation datasets demonstrate the superiority of our method in terms of performance and generalization ability on lightweight networks.


Subject(s)
Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods; Algorithms; Neural Networks, Computer; Databases, Factual; Deep Learning
7.
BMC Med Inform Decis Mak ; 24(1): 187, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38951831

ABSTRACT

BACKGROUND: Accurate measurement of hemoglobin concentration is essential for various medical scenarios, including preoperative evaluations and determining blood loss. Traditional invasive methods are inconvenient and not suitable for rapid, point-of-care testing. Moreover, current models, due to their complex parameters, are not well-suited for mobile medical settings, which limits the ability to conduct frequent and rapid testing. This study aims to introduce a novel, compact, and efficient system that leverages deep learning and smartphone technology to accurately estimate hemoglobin levels, thereby facilitating rapid and accessible medical assessments. METHODS: The study employed a smartphone application to capture images of the eye, which were subsequently analyzed by a deep neural network trained on invasive blood test data. Specifically, the EGE-Unet model was utilized for eyelid segmentation, while the DHA(C3AE) model was employed for hemoglobin level prediction. The performance of the EGE-Unet was evaluated using statistical metrics including mean intersection over union (MIOU), F1 score, accuracy, specificity, and sensitivity. The DHA(C3AE) model's performance was assessed using mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), and R^2. RESULTS: The EGE-Unet model demonstrated robust performance in eyelid segmentation, achieving an MIOU of 0.78, an F1 score of 0.87, an accuracy of 0.97, a specificity of 0.98, and a sensitivity of 0.86. The DHA(C3AE) model for hemoglobin level prediction yielded promising outcomes with an MAE of 1.34, an MSE of 2.85, an RMSE of 1.69, and an R^2 of 0.34. The overall size of the model is modest at 1.08 M, with a computational complexity of 0.12 GFLOPs.
CONCLUSIONS: This system presents a groundbreaking approach that eliminates the need for supplementary devices, providing a cost-effective, swift, and accurate method for healthcare professionals to enhance treatment planning and improve patient care in perioperative environments. The proposed system has the potential to enable frequent and rapid testing of hemoglobin levels, which can be particularly beneficial in mobile medical settings. TRIAL REGISTRATION: The clinical trial was registered on the Chinese Clinical Trial Registry (No. ChiCTR2100044138) on 20/02/2021.
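The regression metrics used above (MAE, MSE, RMSE, R^2) can be computed directly from paired values; a minimal sketch, with hemoglobin values invented purely for illustration:

```python
import math

def regression_metrics(y_true, y_pred):
    """MAE, MSE, RMSE and R^2 for a set of paired predictions."""
    n = len(y_true)
    err = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in err) / n
    mse = sum(e * e for e in err) / n
    mean_t = sum(y_true) / n
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)   # total variance of the truth
    r2 = 1 - sum(e * e for e in err) / ss_tot
    return mae, mse, math.sqrt(mse), r2

# hypothetical hemoglobin values (g/dL): lab truth vs. model estimate
truth = [12.0, 13.5, 11.2, 14.8, 10.5]
pred  = [12.5, 13.0, 11.8, 14.0, 11.0]
mae, mse, rmse, r2 = regression_metrics(truth, pred)
print(round(mae, 3), round(mse, 3), round(rmse, 3), round(r2, 3))
```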


Subject(s)
Deep Learning; Hemoglobins; Smartphone; Humans; Hemoglobins/analysis; Middle Aged; Male; Mobile Applications; Female
8.
Proc Natl Acad Sci U S A ; 121(28): e2320870121, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38959033

ABSTRACT

Efficient storage and sharing of massive biomedical data would open up their wide accessibility to different institutions and disciplines. However, compressors tailored for natural photos/videos are of limited use for biomedical data, while emerging deep learning-based methods demand huge training data and are difficult to generalize. Here, we propose to conduct Biomedical data compRession with Implicit nEural Function (BRIEF) by representing the target data with compact neural networks, which are data-specific and thus have no generalization issues. Benefiting from the strong representation capability of implicit neural functions, BRIEF achieves 2-3 orders of magnitude compression on diverse biomedical data at significantly higher fidelity than existing techniques. Besides, BRIEF offers consistent performance across the whole data volume and supports customized, spatially varying fidelity. BRIEF's multifold advantageous features also serve reliable downstream tasks at low bandwidth. Our approach will facilitate low-bandwidth data sharing and promote collaboration and progress in the biomedical field.


Subject(s)
Information Dissemination; Neural Networks, Computer; Humans; Information Dissemination/methods; Data Compression/methods; Deep Learning; Biomedical Research/methods
9.
Cell Metab ; 36(7): 1482-1493.e7, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38959862

ABSTRACT

Although human core body temperature is known to decrease with age, the age dependency of facial temperature and its potential to indicate aging rate or aging-related diseases remains uncertain. Here, we collected thermal facial images of 2,811 Han Chinese individuals 20-90 years old, developed the ThermoFace method to automatically process and analyze images, and then generated thermal age and disease prediction models. The ThermoFace deep learning model for thermal facial age has a mean absolute deviation of about 5 years in cross-validation and 5.18 years in an independent cohort. The difference between predicted and chronological age is highly associated with metabolic parameters, sleep time, and gene expression pathways like DNA repair, lipolysis, and ATPase in the blood transcriptome, and it is modifiable by exercise. Consistently, ThermoFace disease predictors forecast metabolic diseases like fatty liver with high accuracy (AUC > 0.80), with predicted disease probability correlated with metabolic parameters.


Subject(s)
Aging; Face; Metabolic Diseases; Humans; Middle Aged; Aged; Adult; Male; Female; Aged, 80 and over; Young Adult; Deep Learning; Body Temperature; Image Processing, Computer-Assisted
10.
Ultrasound Q ; 40(3)2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38958999

ABSTRACT

The objective of the study was to use a deep learning model to differentiate between benign and malignant sentinel lymph nodes (SLNs) in patients with breast cancer, compared to radiologists' assessments. Seventy-nine women with breast cancer were enrolled and underwent lymphosonography and contrast-enhanced ultrasound (CEUS) examination after subcutaneous injection of an ultrasound contrast agent around their tumor to identify SLNs. Google AutoML was used to develop the image classification model. Grayscale and CEUS images acquired during the ultrasound examination were uploaded with a data distribution of 80% for training and 20% for testing. The performance metric used was the area under the precision/recall curve (AuPRC). In addition, 3 radiologists assessed SLNs as normal or abnormal based on a clinically established classification. Two hundred seventeen SLNs were divided in two for model development; model 1 included all SLNs and model 2 had an equal number of benign and malignant SLNs. Validation results were an AuPRC of 0.84 (grayscale) / 0.91 (CEUS) for model 1 and 0.91 (grayscale) / 0.87 (CEUS) for model 2. The comparison between artificial intelligence (AI) and the readers showed statistically significant differences for all models and ultrasound modes: model 1 grayscale AI versus readers, P = 0.047; model 1 CEUS AI versus readers, P < 0.001; model 2 grayscale AI versus readers, P = 0.032; and model 2 CEUS AI versus readers, P = 0.041. The overall interreader agreement showed κ values of 0.20 for grayscale and 0.17 for CEUS. In conclusion, AutoML showed improved diagnostic performance on the balanced dataset. Radiologist performance was not influenced by the dataset's distribution.
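The interreader κ values above are Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch for two raters' binary SLN calls (the ratings below are invented for illustration):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters' binary labels (0 = normal, 1 = abnormal)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n        # observed agreement
    pe = (sum(a) / n) * (sum(b) / n) \
       + (1 - sum(a) / n) * (1 - sum(b) / n)          # chance agreement
    return (po - pe) / (1 - pe)

r1 = [1, 0, 1, 1, 0, 0, 1, 0]   # hypothetical reader 1 calls
r2 = [1, 0, 0, 1, 0, 1, 1, 0]   # hypothetical reader 2 calls
print(cohens_kappa(r1, r2))     # → 0.5
```

Values near 0.2, as reported above, indicate only slight-to-fair agreement beyond chance.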


Subject(s)
Breast Neoplasms; Deep Learning; Sentinel Lymph Node; Humans; Female; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Sentinel Lymph Node/diagnostic imaging; Middle Aged; Aged; Adult; Radiologists/statistics & numerical data; Ultrasonography, Mammary/methods; Contrast Media; Lymphatic Metastasis/diagnostic imaging; Ultrasonography/methods; Sentinel Lymph Node Biopsy/methods; Breast/diagnostic imaging; Reproducibility of Results
11.
Vestn Oftalmol ; 140(3): 82-87, 2024.
Article in Russian | MEDLINE | ID: mdl-38962983

ABSTRACT

This article reviews the literature on the use of artificial intelligence (AI) for screening, diagnosis, monitoring, and treatment of glaucoma. The first part of the review describes how AI methods improve the effectiveness of glaucoma screening and presents technologies that use deep learning, including neural networks, for the analysis of big data obtained by ocular imaging methods (fundus imaging, optical coherence tomography of the anterior and posterior eye segments, digital gonioscopy, ultrasound biomicroscopy, etc.), including a multimodal approach. The results found in the reviewed literature are contradictory, indicating that improvement of the AI models requires further research and a standardized approach. The use of neural networks for timely detection of glaucoma based on multimodal imaging will reduce the risk of blindness associated with glaucoma.


Subject(s)
Artificial Intelligence; Deep Learning; Glaucoma; Neural Networks, Computer; Humans; Glaucoma/diagnosis; Tomography, Optical Coherence/methods; Mass Screening/methods; Diagnostic Techniques, Ophthalmological
12.
Sci Rep ; 14(1): 15317, 2024 07 03.
Article in English | MEDLINE | ID: mdl-38961218

ABSTRACT

The hippocampus is a critical component of the brain and is associated with many neurological disorders. It can be further subdivided into several subfields, and accurate segmentation of these subfields is of great significance for diagnosis and research. However, the structures of hippocampal subfields are irregular and have complex boundaries, and their voxel values are close to surrounding brain tissues, making the segmentation task highly challenging. Currently, many automatic segmentation tools exist for hippocampal subfield segmentation, but they suffer from high time costs and low segmentation accuracy. In this paper, we propose a new dual-branch segmentation network structure (DSnet) based on deep learning for hippocampal subfield segmentation. While traditional convolutional neural network-based methods are effective in capturing hierarchical structures, they struggle to establish long-term dependencies. The DSnet integrates the Transformer architecture and a hybrid attention mechanism, enhancing the network's global perceptual capabilities. Moreover, the dual-branch structure of DSnet leverages the segmentation results of the hippocampal region to facilitate the segmentation of its subfields. We validate the efficacy of our algorithm on the public Kulaga-Yoskovitz dataset. Experimental results indicate that our method is more effective in segmenting hippocampal subfields than conventional single-branch network structures. Compared to the classic 3D U-Net, our proposed DSnet improves the average Dice accuracy of hippocampal subfield segmentation by 0.57%.
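The Dice accuracy reported above is, per subfield, twice the overlap between predicted and true voxels divided by their summed sizes; a toy sketch averaging Dice over two hypothetical subfield labels (the voxel labelings are invented):

```python
def mean_dice(pred, truth, labels):
    """Average per-label Dice over a set of segmentation labels."""
    scores = []
    for c in labels:
        tp = sum(p == c and t == c for p, t in zip(pred, truth))  # shared voxels
        size = sum(p == c for p in pred) + sum(t == c for t in truth)
        scores.append(2 * tp / size if size else 1.0)  # empty-empty counts as perfect
    return sum(scores) / len(scores)

# toy flattened voxel labelings: 0 = background, 1/2 = two subfields
pred  = [0, 1, 1, 2, 2, 2, 0, 1]
truth = [0, 1, 2, 2, 2, 1, 0, 1]
print(round(mean_dice(pred, truth, labels=[1, 2]), 3))  # → 0.667
```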


Subject(s)
Algorithms; Deep Learning; Hippocampus; Neural Networks, Computer; Hippocampus/diagnostic imaging; Hippocampus/anatomy & histology; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods
14.
PLoS One ; 19(7): e0305864, 2024.
Article in English | MEDLINE | ID: mdl-38959272

ABSTRACT

This research aims to establish a practical stress detection framework by integrating physiological indicators and deep learning techniques. Utilizing a virtual reality (VR) interview paradigm mirroring real-world scenarios, our focus is on classifying stress states through accessible single-channel electroencephalogram (EEG) and galvanic skin response (GSR) data. Thirty participants underwent stress-inducing VR interviews, with biosignals recorded for deep learning models. Five convolutional neural network (CNN) architectures and one Vision Transformer model, including a multiple-column structure combining EEG and GSR features, showed heightened predictive capabilities and an enhanced area under the receiver operating characteristic curve (AUROC) in stress prediction compared to single-column models. Our experimental protocol effectively elicited stress responses, observed through fluctuations in the stress visual analogue scale (VAS), EEG, and GSR metrics. In the single-column architecture, ResNet-152 excelled on GSR with an AUROC of 0.944 (±0.027), while the Vision Transformer performed well on EEG, achieving a peak AUROC of 0.886 (±0.069). Notably, the multiple-column structure, based on ResNet-50, achieved the highest AUROC value of 0.954 (±0.018) in stress classification. Through VR-based simulated interviews, our study induced social stress responses, leading to significant modifications in GSR and EEG measurements. Deep learning models precisely classified stress levels, with the multiple-column strategy demonstrating superiority. Additionally, discreetly placing single-channel EEG measurements behind the ear enhances the convenience and accuracy of stress detection in everyday situations.
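AUROC, the headline metric above, equals the probability that a randomly chosen stressed sample receives a higher score than a randomly chosen non-stressed one; a rank-based sketch with invented scores and labels:

```python
def auroc(scores, labels):
    """AUROC via the rank-based (Mann-Whitney U) formulation."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    # count positive-over-negative wins, ties counted as half
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical model stress probabilities and true stress labels
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(auroc(scores, labels), 3))  # → 0.889
```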


Subject(s)
Deep Learning; Electroencephalography; Galvanic Skin Response; Stress, Psychological; Virtual Reality; Humans; Electroencephalography/methods; Female; Male; Adult; Stress, Psychological/physiopathology; Stress, Psychological/diagnosis; Galvanic Skin Response/physiology; Young Adult; ROC Curve; Neural Networks, Computer
15.
F1000Res ; 13: 691, 2024.
Article in English | MEDLINE | ID: mdl-38962692

ABSTRACT

Background: Non-contrast Computed Tomography (NCCT) plays a pivotal role in assessing central nervous system disorders and is a crucial diagnostic method. Iterative reconstruction (IR) methods have enhanced image quality (IQ) but may result in a blotchy appearance and decreased resolution for subtle contrasts. The deep-learning image reconstruction (DLIR) algorithm, which integrates a convolutional neural network (CNN) into the reconstruction process, generates high-quality images with minimal noise. Hence, the objective of this study was to assess the IQ of the Precise Image (DLIR) and the IR technique (iDose 4) for the NCCT brain. Methods: This is a prospective study. Thirty patients who underwent NCCT brain were included. The images were reconstructed using DLIR-standard and iDose 4. Qualitative IQ analysis parameters, such as overall image quality (OQ), subjective image noise (SIN), and artifacts, were measured. Quantitative IQ analysis parameters, such as Computed Tomography (CT) attenuation (HU), image noise (IN), posterior fossa index (PFI), signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in the basal ganglia (BG) and centrum semiovale (CSO), were measured. Paired t-tests were performed for qualitative and quantitative IQ analyses between iDose 4 and DLIR-standard. Kappa statistics were used to assess inter-observer agreement for qualitative analysis. Results: Quantitative IQ analysis showed significant differences (p<0.05) in IN, SNR, and CNR between iDose 4 and DLIR-standard at the BG and CSO levels. IN was reduced (41.8-47.6%), while SNR (65-82%) and CNR (68-78.8%) were increased with DLIR-standard. PFI was reduced (27.08%) with DLIR-standard. Qualitative IQ analysis showed significant differences (p<0.05) in OQ, SIN, and artifacts between DLIR-standard and iDose 4. DLIR-standard showed higher qualitative IQ scores than iDose 4.
Conclusion: DLIR-standard yielded superior quantitative and qualitative IQ compared to the IR technique (iDose 4). DLIR-standard significantly reduced the IN and artifacts compared to iDose 4 in the NCCT brain.
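The SNR and CNR figures above are typically computed from ROI statistics in Hounsfield units; a sketch with invented attenuation samples, using one common convention (the standard deviation of a uniform background ROI as the noise term):

```python
from statistics import mean, pstdev

def snr_cnr(roi, ref, noise_roi):
    """SNR = mean(ROI)/SD(noise); CNR = |mean(ROI) - mean(ref)|/SD(noise)."""
    noise = pstdev(noise_roi)             # image noise from a uniform region
    return mean(roi) / noise, abs(mean(roi) - mean(ref)) / noise

# toy HU samples: grey-matter ROI, white-matter reference, uniform noise region
gm = [38, 40, 39, 41, 42]
wm = [28, 30, 29, 31, 27]
bg = [33, 35, 34, 36, 32]
snr, cnr = snr_cnr(gm, wm, bg)
print(round(snr, 2), round(cnr, 2))  # → 28.28 7.78
```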


Subject(s)
Brain; Deep Learning; Image Processing, Computer-Assisted; Tomography, X-Ray Computed; Humans; Pilot Projects; Female; Tomography, X-Ray Computed/methods; Male; Prospective Studies; Middle Aged; Brain/diagnostic imaging; Adult; Image Processing, Computer-Assisted/methods; Aged; Signal-To-Noise Ratio; Algorithms
16.
PLoS One ; 19(7): e0306090, 2024.
Article in English | MEDLINE | ID: mdl-38954714

ABSTRACT

Diabetes is a chronic disease, which is characterized by abnormally high blood sugar levels. It may affect various organs and tissues, and even lead to life-threatening complications. Accurate prediction of diabetes can significantly reduce its incidence. However, the current prediction methods struggle to accurately capture the essential characteristics of nonlinear data, and the black-box nature of these methods hampers their clinical application. To address these challenges, we propose KCCAM_DNN, a diabetes prediction method that integrates Kendall's correlation coefficient and an attention mechanism within a deep neural network. In KCCAM_DNN, Kendall's correlation coefficient is initially employed for feature selection, which effectively filters out key features influencing diabetes prediction. For missing values in the data, polynomial regression is utilized for imputation, ensuring data completeness. Subsequently, we construct a deep neural network (KCCAM_DNN) based on the self-attention mechanism, which assigns greater weight to crucial features affecting diabetes and enhances the model's predictive performance. Finally, we employ the SHAP model to analyze the impact of each feature on diabetes prediction, augmenting the model's interpretability. Experimental results show that KCCAM_DNN exhibits superior performance on both the PIMA Indian and LMCH diabetes datasets, achieving test accuracies of 99.090% and 99.333%, respectively, approximately 2% higher than the best existing method. These results suggest that KCCAM_DNN is proficient in diabetes prediction, providing a foundation for informed decision-making in the diagnosis and prevention of diabetes.
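Kendall's correlation coefficient, used above for feature selection, measures how consistently a feature's ordering agrees with the target's ordering across sample pairs. A minimal tau-a sketch (invented glucose/label pairs; a real pipeline would rank all features by |tau| and keep the strongest):

```python
def kendall_tau(x, y):
    """Kendall's tau-a rank correlation between a feature and the target."""
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])   # sign of the pair agreement
            concordant += s > 0
            discordant += s < 0
    return (concordant - discordant) / (n * (n - 1) / 2)

# hypothetical feature (plasma glucose) vs. diabetes label
glucose = [85, 148, 110, 183, 95, 160]
label   = [0, 1, 0, 1, 0, 1]
print(round(kendall_tau(glucose, label), 3))  # → 0.6
```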


Subject(s)
Neural Networks, Computer; Humans; Diabetes Mellitus/diagnosis; Deep Learning; Blood Glucose/analysis
17.
Nat Commun ; 15(1): 5538, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956032

ABSTRACT

The dynamics of proteins are crucial for understanding their mechanisms. However, computationally predicting protein dynamic information has proven challenging. Here, we propose a neural network model, RMSF-net, which outperforms previous methods and produces the best results in a large-scale protein dynamics dataset; this model can accurately infer the dynamic information of a protein in only a few seconds. By learning effectively from experimental protein structure data and cryo-electron microscopy (cryo-EM) data integration, our approach is able to accurately identify the interactive bidirectional constraints and supervision between cryo-EM maps and PDB models in maximizing the dynamic prediction efficacy. Rigorous 5-fold cross-validation on the dataset demonstrates that RMSF-net achieves test correlation coefficients of 0.746 ± 0.127 at the voxel level and 0.765 ± 0.109 at the residue level, showcasing its ability to deliver dynamic predictions closely approximating molecular dynamics simulations. Additionally, it offers real-time dynamic inference with minimal storage overhead on the order of megabytes. RMSF-net is a freely accessible tool and is anticipated to play an essential role in the study of protein dynamics.
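The correlation coefficients reported above are Pearson correlations between predicted and simulation-derived fluctuation profiles; a minimal sketch with invented per-residue RMSF values:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length value sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical per-residue RMSF values (Å): MD reference vs. model prediction
md_rmsf   = [0.5, 1.2, 0.8, 2.1, 1.7]
pred_rmsf = [0.6, 1.0, 0.9, 1.9, 1.8]
print(round(pearson(md_rmsf, pred_rmsf), 3))
```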


Subject(s)
Cryoelectron Microscopy; Deep Learning; Protein Conformation; Proteins; Cryoelectron Microscopy/methods; Proteins/chemistry; Molecular Dynamics Simulation; Neural Networks, Computer; Databases, Protein; Computational Biology/methods
18.
Sci Rep ; 14(1): 15056, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38956075

ABSTRACT

Celiac Disease (CD) is a primary malabsorption syndrome resulting from the interplay of genetic, immune, and dietary factors. CD negatively impacts daily activities and may lead to conditions such as osteoporosis, malignancies in the small intestine, ulcerative jejunitis, and enteritis, ultimately causing severe malnutrition. Therefore, an effective and rapid differentiation between healthy individuals and those with celiac disease is crucial for early diagnosis and treatment. This study utilizes Raman spectroscopy combined with deep learning models to achieve a non-invasive, rapid, and accurate diagnostic method for celiac disease and healthy controls. A total of 59 plasma samples, comprising 29 celiac disease cases and 30 healthy controls, were collected for experimental purposes. Convolutional Neural Network (CNN), Multi-Scale Convolutional Neural Network (MCNN), Residual Network (ResNet), and Deep Residual Shrinkage Network (DRSN) classification models were employed. The accuracy rates for these models were found to be 86.67%, 90.76%, 86.67% and 95.00%, respectively. Comparative validation results revealed that the DRSN model exhibited the best performance, with an AUC value and accuracy of 97.60% and 95%, respectively. This confirms the superiority of Raman spectroscopy combined with deep learning in the diagnosis of celiac disease.


Asunto(s)
Enfermedad Celíaca , Aprendizaje Profundo , Espectrometría Raman , Enfermedad Celíaca/diagnóstico , Enfermedad Celíaca/sangre , Humanos , Espectrometría Raman/métodos , Femenino , Masculino , Adulto , Redes Neurales de la Computación , Estudios de Casos y Controles , Persona de Mediana Edad
19.
Sci Data ; 11(1): 722, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956115

ABSTRACT

Around 20% of complete blood count samples necessitate visual review using light microscopes or digital pathology scanners. There is currently no technological alternative to the visual examination of red blood cell (RBC) morphology/shapes. True (non-artifact) teardrop-shaped RBCs and schistocytes/fragmented RBCs are commonly associated with serious, potentially fatal medical conditions, while increased ovalocytes are associated with almost all types of anemia. Twenty-five distinct blood smears, each from a different patient, were manually prepared, stained, and sorted into four groups. Each group was imaged using different cameras integrated into light microscopes with 40X objective lenses, yielding a total of 47,000+ field images/patches. Two hematologists worked cell by cell to provide one million+ segmented RBCs with their XYWH coordinates and classified 240,000+ RBCs into nine shapes. This dataset (Elsafty_RBCs_for_AI) enables the development and testing of deep learning (DL)-based automation of RBC morphology/shape examination, including normalization specific to blood smear stains (different from histopathology stains), detection/counting, segmentation, and classification. Two codes are provided (Elsafty_Codes_for_AI): one for semi-automated image processing and another for training/testing a DL-based image classifier.
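Each segmented cell in the dataset is described by XYWH coordinates. A hypothetical sketch of how such boxes can be used to crop per-cell patches from a field image (the top-left-corner box convention and array shapes are assumptions for illustration, not taken from the dataset's documentation):

```python
import numpy as np

def crop_cells(image, boxes):
    """Crop cell patches from an (H, W, C) image given (x, y, w, h) boxes.

    Assumes x, y give the top-left corner in pixel coordinates.
    """
    patches = []
    h_img, w_img = image.shape[:2]
    for x, y, w, h in boxes:
        # Clamp to image bounds so boxes touching the edge cannot fail.
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w_img, x + w), min(h_img, y + h)
        patches.append(image[y0:y1, x0:x1])
    return patches

smear = np.zeros((100, 200, 3), dtype=np.uint8)   # stand-in field image
boxes = [(10, 20, 32, 32), (180, 90, 32, 32)]     # second box overruns the edge
cells = crop_cells(smear, boxes)
```

Clamping rather than rejecting out-of-bounds boxes keeps every annotated cell usable, at the cost of ragged patch sizes near image borders.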


Subject(s)
Erythrocytes , Erythrocytes/cytology , Humans , Microscopy , Deep Learning , Computer-Assisted Image Processing
20.
Sci Rep ; 14(1): 15219, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38956117

ABSTRACT

Blinding eye diseases are often related to changes in retinal structure, which can be detected by analysing retinal blood vessels in fundus images. However, existing techniques struggle to accurately segment these delicate vessels. Although deep learning has shown promise in medical image segmentation, its reliance on specific operations can limit its ability to capture crucial details such as vessel edges. This paper introduces LMBiS-Net, a lightweight convolutional neural network designed for retinal vessel segmentation. LMBiS-Net achieves exceptional performance with a remarkably low number of learnable parameters (only 0.172 million). The network uses multipath feature extraction blocks and incorporates bidirectional skip connections for information flow between the encoder and decoder. In addition, we optimised the efficiency of the model by carefully selecting the number of filters to avoid filter overlap, which significantly reduces training time and improves computational efficiency. To assess LMBiS-Net's robustness and ability to generalise to unseen data, we conducted comprehensive evaluations on four publicly available datasets: DRIVE, STARE, CHASE_DB1, and HRF. The proposed LMBiS-Net obtains sensitivity values of 83.60%, 84.37%, 86.05%, and 83.48%, specificity values of 98.83%, 98.77%, 98.96%, and 98.77%, accuracy scores of 97.08%, 97.69%, 97.75%, and 96.90%, and AUC values of 98.80%, 98.82%, 98.71%, and 88.77% on the DRIVE, STARE, CHASE_DB1, and HRF datasets, respectively. In addition, it records F1 scores of 83.43%, 84.44%, 83.54%, and 78.73% on the same datasets. Our evaluations demonstrate that LMBiS-Net achieves high segmentation accuracy while exhibiting robustness and generalisability across various retinal image datasets, making it a promising tool for various clinical applications.
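The segmentation scores reported above (sensitivity, specificity, accuracy) all derive from the pixel-wise confusion matrix between a predicted binary vessel mask and the ground-truth mask. A minimal sketch with toy masks, not the paper's evaluation code:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity, and accuracy for binary masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    tp = np.sum(pred & truth)     # vessel pixels correctly found
    tn = np.sum(~pred & ~truth)   # background correctly rejected
    fp = np.sum(pred & ~truth)    # background mislabelled as vessel
    fn = np.sum(~pred & truth)    # vessel pixels missed
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy":    (tp + tn) / pred.size,
    }

truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0]])
pred  = np.array([[1, 0, 0, 0],
                  [1, 0, 1, 0]])
m = segmentation_metrics(pred, truth)
```

Because vessel pixels are a small minority of a fundus image, specificity and accuracy run high almost automatically; sensitivity is the stricter number to watch, which is why papers in this area report all three.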


Asunto(s)
Aprendizaje Profundo , Procesamiento de Imagen Asistido por Computador , Redes Neurales de la Computación , Vasos Retinianos , Vasos Retinianos/diagnóstico por imagen , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Algoritmos