Results 1 - 20 of 19,688
1.
Sci Rep ; 14(1): 10560, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38720020

ABSTRACT

Research on video analytics, especially in the area of human behavior recognition, has become increasingly popular in recent years. It is widely applied in virtual reality, video surveillance, and video retrieval. With the advancement of deep learning algorithms and computer hardware, the conventional two-dimensional convolution technique for training video models has been replaced by three-dimensional convolution, which enables the extraction of spatio-temporal features. Specifically, the use of 3D convolution in human behavior recognition has been the subject of growing interest. However, the increased dimensionality has led to challenges such as a dramatic increase in the number of parameters, higher time complexity, and a strong dependence on GPUs for effective spatio-temporal feature extraction. Training can be considerably slow without powerful GPU hardware. To address these issues, this study proposes an Adaptive Time Compression (ATC) module. Functioning as an independent component, ATC can be seamlessly integrated into existing architectures and achieves data compression by eliminating redundant frames within video data. The ATC module effectively reduces GPU computing load and time complexity with negligible loss of accuracy, thereby facilitating real-time human behavior recognition.
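The frame-elimination idea behind ATC can be illustrated with a minimal sketch. The abstract does not specify the module's actual redundancy criterion, so a cosine-similarity test between each frame and the last kept frame is assumed here purely for illustration:

```python
import numpy as np

def compress_frames(frames, sim_threshold=0.995):
    """Keep a frame only if it differs enough from the last kept frame.

    A simplified stand-in for adaptive time compression: near-duplicate
    frames are dropped before the clip is fed to a 3D-conv model.
    """
    kept = [0]  # always keep the first frame
    last = frames[0].ravel().astype(float)
    for i in range(1, len(frames)):
        cur = frames[i].ravel().astype(float)
        denom = np.linalg.norm(last) * np.linalg.norm(cur)
        if denom > 0:
            sim = float(last @ cur) / denom
        else:
            # zero-energy frames: identical if both are (near-)zero
            sim = 1.0 if np.allclose(last, cur) else 0.0
        if sim < sim_threshold:  # frame carries new information
            kept.append(i)
            last = cur
    return kept

# Toy clip: 10 identical blank frames, then a sudden change at frame 10.
clip = np.zeros((12, 8, 8))
clip[10:] = 1.0
print(compress_frames(clip))  # [0, 10]
```

Only two of the twelve frames survive, which is the kind of reduction in GPU load the module is described as providing.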


Subject(s)
Algorithms , Data Compression , Video Recording , Humans , Data Compression/methods , Human Activities , Deep Learning , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods
2.
Platelets ; 35(1): 2344512, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38722090

ABSTRACT

The last decade has seen increasing use of advanced imaging techniques in platelet research. However, there has been a lag in the development of image analysis methods, leaving much of the information trapped in images. Herein, we present a robust analytical pipeline for finding and following individual platelets over time in growing thrombi. Our pipeline covers four steps: detection, tracking, estimation of tracking accuracy, and quantification of platelet metrics. We detect platelets using a deep learning network for image segmentation, which we validated with proofreading by multiple experts. We then track platelets using a standard particle tracking algorithm and validate the tracks with custom image sampling - essential when following platelets within a dense thrombus. We show that our pipeline is more accurate than previously described methods. To demonstrate the utility of our analytical platform, we use it to show that in vivo thrombus formation is much faster than that ex vivo. Furthermore, platelets in vivo exhibit less passive movement in the direction of blood flow. Our tools are free and open source and written in the popular and user-friendly Python programming language. They empower researchers to accurately find and follow platelets in fluorescence microscopy experiments.


In this paper we describe computational tools to find and follow individual platelets in blood clots recorded with fluorescence microscopy. Our tools work in a diverse range of conditions, both in living animals and in artificial flow chamber models of thrombosis. Our work uses deep learning methods to achieve excellent accuracy. We also provide tools for visualizing data and estimating error rates, so you don't have to just trust the output. Our workflow measures platelet density, shape, and speed, which we use to demonstrate differences in the kinetics of clotting in living vessels versus a synthetic environment. The tools we wrote are open source, written in the popular Python programming language, and freely available to all. We hope they will be of use to other platelet researchers.
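The detection-then-tracking pipeline can be sketched in simplified form. The published tools use a deep segmentation network and an established particle-tracking algorithm; the greedy nearest-neighbour linker below is only an illustrative stand-in for the frame-to-frame linking step:

```python
import numpy as np

def link_tracks(detections, max_dist=5.0):
    """Greedy nearest-neighbour linking of per-frame (x, y) detections.

    A minimal illustration of track building: each existing track is
    extended with its closest unclaimed detection in the next frame,
    provided it lies within max_dist pixels.
    """
    tracks = [[pt] for pt in detections[0]]
    for frame in detections[1:]:
        unclaimed = list(frame)
        for track in tracks:
            if not unclaimed:
                break
            last = np.asarray(track[-1], dtype=float)
            dists = [np.linalg.norm(last - np.asarray(p, float)) for p in unclaimed]
            j = int(np.argmin(dists))
            if dists[j] <= max_dist:  # extend the track with its closest match
                track.append(unclaimed.pop(j))
    return tracks

# Two platelets drifting right by 1 px per frame.
frames = [[(0, 0), (10, 0)], [(1, 0), (11, 0)], [(2, 0), (12, 0)]]
for t in link_tracks(frames):
    print(t)
```

Real thrombi are far denser than this toy example, which is exactly why the authors validate their tracks with custom image sampling.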


Subject(s)
Blood Platelets , Deep Learning , Thrombosis , Blood Platelets/metabolism , Thrombosis/blood , Humans , Image Processing, Computer-Assisted/methods , Animals , Mice , Algorithms
3.
Radiol Cardiothorac Imaging ; 6(3): e230177, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38722232

ABSTRACT

Purpose To develop a deep learning model for increasing cardiac cine frame rate while maintaining spatial resolution and scan time. Materials and Methods A transformer-based model was trained and tested on a retrospective sample of cine images from 5840 patients (mean age, 55 years ± 19 [SD]; 3527 male patients) referred for clinical cardiac MRI from 2003 to 2021 at nine centers; images were acquired using 1.5- and 3-T scanners from three vendors. Data from three centers were used for training and testing (4:1 ratio). The remaining data were used for external testing. Cines with downsampled frame rates were restored using linear, bicubic, and model-based interpolation. The root mean square error between interpolated and original cine images was modeled using ordinary least squares regression. In a prospective study of 49 participants referred for clinical cardiac MRI (mean age, 56 years ± 13; 25 male participants) and 12 healthy participants (mean age, 51 years ± 16; eight male participants), the model was applied to cines acquired at 25 frames per second (fps), thereby doubling the frame rate, and these interpolated cines were compared with actual 50-fps cines. The preference of two readers based on perceived temporal smoothness and image quality was evaluated using a noninferiority margin of 10%. Results The model generated artifact-free interpolated images. Ordinary least squares regression analysis accounting for vendor and field strength showed lower error (P < .001) with model-based interpolation compared with linear and bicubic interpolation in internal and external test sets. The highest proportion of reader choices was "no preference" (84 of 122) between actual and interpolated 50-fps cines. The 90% CI for the difference between reader proportions favoring collected (15 of 122) and interpolated (23 of 122) high-frame-rate cines was -0.01 to 0.14, indicating noninferiority. 
Conclusion A transformer-based deep learning model increased cardiac cine frame rates while preserving both spatial resolution and scan time, resulting in images with quality comparable to that of images obtained at actual high frame rates. Keywords: Functional MRI, Heart, Cardiac, Deep Learning, High Frame Rate Supplemental material is available for this article. © RSNA, 2024.
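The linear-interpolation baseline the model is compared against is straightforward to sketch. The toy cine below is an illustrative assumption (a linear intensity ramp, for which midpoint interpolation is exact):

```python
import numpy as np

def linear_midframes(cine):
    """Insert a linearly interpolated frame between each consecutive
    pair, doubling the effective frame rate (the simplest baseline the
    deep learning model is compared against in the study)."""
    out = [cine[0]]
    for a, b in zip(cine, cine[1:]):
        out.append(0.5 * (a + b))  # midpoint frame
        out.append(b)
    return np.stack(out)

def rmse(x, y):
    """Root mean square error between interpolated and reference cines."""
    return float(np.sqrt(np.mean((x - y) ** 2)))

# Toy "cine": intensity ramps linearly, so the midpoint is exact.
low = np.stack([np.full((4, 4), v) for v in np.arange(0.0, 6.0, 2.0)])
truth = np.stack([np.full((4, 4), v) for v in np.arange(0.0, 5.0, 1.0)])
print(rmse(linear_midframes(low), truth))  # 0.0 for a linear ramp
```

On real cardiac motion, which is far from linear between frames, this baseline accrues the error that the transformer-based model is reported to reduce.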


Subject(s)
Deep Learning , Magnetic Resonance Imaging, Cine , Humans , Male , Magnetic Resonance Imaging, Cine/methods , Middle Aged , Female , Prospective Studies , Retrospective Studies , Heart/diagnostic imaging , Image Interpretation, Computer-Assisted/methods
4.
Neurosurg Rev ; 47(1): 200, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38722409

ABSTRACT

Appropriate needle manipulation to avoid abrupt deformation of fragile vessels is a critical determinant of the success of microvascular anastomosis. However, no study has yet evaluated the area changes in surgical objects using surgical videos. The present study therefore aimed to develop a deep learning-based semantic segmentation algorithm to assess the area change of vessels during microvascular anastomosis for objective surgical skill assessment with regard to the "respect for tissue." The semantic segmentation algorithm was trained based on a ResNet-50 network using microvascular end-to-side anastomosis training videos with artificial blood vessels. Using the created model, video parameters during a single stitch completion task, including the coefficient of variation of vessel area (CV-VA), relative change in vessel area per unit time (ΔVA), and the number of tissue deformation errors (TDE), as defined by a ΔVA threshold, were compared between expert and novice surgeons. A high validation accuracy (99.1%) and Intersection over Union (0.93) were obtained for the auto-segmentation model. During the single-stitch task, the expert surgeons displayed lower values of CV-VA (p < 0.05) and ΔVA (p < 0.05). Additionally, experts committed significantly fewer TDEs than novices (p < 0.05), and completed the task in a shorter time (p < 0.01). Receiver operating characteristic (ROC) curve analyses indicated relatively strong discriminative capabilities for each video parameter and task completion time, while the combined use of the task completion time and video parameters demonstrated complete discriminative power between experts and novices. In conclusion, the assessment of changes in the vessel area during microvascular anastomosis using a deep learning-based semantic segmentation algorithm is presented as a novel concept for evaluating microsurgical performance. This will be useful in future computer-aided devices to enhance surgical education and patient safety.
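The video parameters defined above (CV-VA, ΔVA, TDE) are simple to compute once a per-frame vessel-area time series is available from the segmentation model. The ΔVA threshold of 0.15 below is arbitrary, chosen for illustration only, since the study's actual threshold is not given here:

```python
import numpy as np

def vessel_metrics(areas, dt=1.0, tde_threshold=0.15):
    """Compute CV-VA, the per-step relative area change, and the number
    of tissue deformation errors (steps whose |relative change| exceeds
    the threshold) from a vessel-area time series."""
    areas = np.asarray(areas, dtype=float)
    cv_va = float(areas.std() / areas.mean())      # coefficient of variation
    dva = np.diff(areas) / areas[:-1] / dt          # relative change per unit time
    tde = int(np.sum(np.abs(dva) > tde_threshold))  # deformation-error count
    return cv_va, dva, tde

# A mostly stable vessel with one abrupt 40% collapse and recovery.
areas = [100, 101, 99, 60, 100, 101]
cv, dva, tde = vessel_metrics(areas)
print(round(cv, 3), tde)  # 0.16 2
```

The abrupt collapse and recovery register as two deformation errors, while the small stitch-to-stitch fluctuations stay below the threshold, matching the intuition that experts produce smoother area traces.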


Subject(s)
Algorithms , Anastomosis, Surgical , Deep Learning , Humans , Anastomosis, Surgical/methods , Pilot Projects , Microsurgery/methods , Microsurgery/education , Needles , Clinical Competence , Semantics , Vascular Surgical Procedures/methods , Vascular Surgical Procedures/education
5.
Environ Monit Assess ; 196(6): 527, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38722419

ABSTRACT

Understanding the connections between human activities and the natural environment depends heavily on information about land use and land cover (LULC) in the form of accurate LULC maps. Environmental monitoring using deep learning (DL) is rapidly growing to preserve a sustainable environment in the long term. For establishing effective policies, regulations, and implementation, DL can be a valuable tool for assessing environmental conditions and natural resources that will positively impact the ecosystem. This paper presents the assessment of land use and land cover change detection (LULCCD) and prediction using DL techniques for the southwestern coastal region of Goa, a well-known tourist destination in India. It consists of three components: (i) change detection (CD), (ii) quantification of LULC changes, and (iii) prediction. A new CD assessment framework, the Spatio-Temporal Encoder-Decoder Self Attention Network (STEDSAN), is proposed for the LULCCD process. A dual-branch encoder-decoder network is constructed using strided convolution with downsampling for the encoder and transpose convolution with upsampling for the decoder to assess the bitemporal images spatially. The self-attention (SA) mechanism captures the complex global spatial-temporal (ST) interactions between individual pixels over space-time to produce more distinct features. Each branch accepts the LULC map of one year as one of its inputs to determine binary and multiclass changes between the bitemporal images. The STEDSAN model determines the patterns, trends, and conversion from one LULC type to another for the assessment period from 2005 to 2018. The binary change maps were also compared with existing state-of-the-art (SOTA) CD methods, with STEDSAN achieving an overall accuracy of 94.93%. The prediction for the year 2025 was made using a recurrent neural network (RNN) known as the long short-term memory (LSTM) network.
Experiments were conducted to determine area-wise changes in several LULC classes, such as built-up (BU), crops (kharif crop (KC), rabi crop (RC), zaid crop (ZC), double/triple crop (D/TC)), current fallow (CF), plantation (PL), forests (evergreen forest (EF), deciduous forest (DF), degraded/scrub forest (D/SF)), littoral swamp (LS), grassland (GL), wasteland (WL), waterbodies max (Wmx), and waterbodies min (Wmn). As per the analysis, over the period of 13 years there has been a net increase in BU (1.25%), RC (1.17%), and D/TC (2.42%), and a net decrease in DF (3.29%) and WL (1.44%), these being the most prominently changed classes. These findings offer a thorough description of trends in coastal areas and may provide methodological hints for future studies. This study also promotes handling the spatial and temporal complexity of remotely sensed data employed in categorizing the coastal LULC of a heterogeneous landscape.
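The class-wise quantification step can be sketched as a pixel-counting comparison between two classified LULC maps. The class IDs and the toy 4x4 maps below are illustrative assumptions, not the study's data:

```python
import numpy as np

def class_area_change(map_t1, map_t2, class_ids):
    """Net per-class area change between two LULC maps, expressed as a
    percentage of total map area — the kind of figure reported above
    (e.g. a net +1.25% in built-up land)."""
    total = map_t1.size
    changes = {}
    for c in class_ids:
        before = np.sum(map_t1 == c)
        after = np.sum(map_t2 == c)
        changes[c] = 100.0 * (after - before) / total
    return changes

# Toy 4x4 maps: class 1 (say, built-up) grows from 4 to 6 pixels,
# at the expense of class 0 (say, wasteland).
t1 = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
t2 = np.array([[1, 1, 1, 0], [1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(class_area_change(t1, t2, class_ids=[0, 1]))
```

In practice the per-pixel class labels would come from the STEDSAN change maps rather than being given directly, but the bookkeeping is the same.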


Subject(s)
Conservation of Natural Resources , Deep Learning , Environmental Monitoring , India , Environmental Monitoring/methods , Conservation of Natural Resources/methods , Ecosystem , Agriculture/methods
6.
Science ; 384(6696): eadm7168, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38723062

ABSTRACT

Despite a half-century of advancements, global magnetic resonance imaging (MRI) accessibility remains limited and uneven, hindering its full potential in health care. Initially, MRI development focused on low fields around 0.05 Tesla, but progress halted after the introduction of the 1.5 Tesla whole-body superconducting scanner in 1983. Using a permanent 0.05 Tesla magnet and deep learning for electromagnetic interference elimination, we developed a whole-body scanner that operates using a standard wall power outlet and without radiofrequency and magnetic shielding. We demonstrated its wide-ranging applicability for imaging various anatomical structures. Furthermore, we developed three-dimensional deep learning reconstruction to boost image quality by harnessing extensive high-field MRI data. These advances pave the way for affordable deep learning-powered ultra-low-field MRI scanners, addressing unmet clinical needs in diverse health care settings worldwide.


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Whole Body Imaging , Magnetic Resonance Imaging/methods , Whole Body Imaging/methods , Humans , Imaging, Three-Dimensional/methods
7.
Water Sci Technol ; 89(9): 2326-2341, 2024 May.
Article in English | MEDLINE | ID: mdl-38747952

ABSTRACT

In this paper, we address the critical task of 24-h streamflow forecasting using advanced deep-learning models, with a primary focus on the transformer architecture which has seen limited application in this specific task. We compare the performance of five different models, including persistence, long short-term memory (LSTM), Seq2Seq, GRU, and transformer, across four distinct regions. The evaluation is based on three performance metrics: Nash-Sutcliffe Efficiency (NSE), Pearson's r, and normalized root mean square error (NRMSE). Additionally, we investigate the impact of two data extension methods: zero-padding and persistence, on the model's predictive capabilities. Our findings highlight the transformer's superiority in capturing complex temporal dependencies and patterns in the streamflow data, outperforming all other models in terms of both accuracy and reliability. Specifically, the transformer model demonstrated a substantial improvement in NSE scores by up to 20% compared to other models. The study's insights emphasize the significance of leveraging advanced deep learning techniques, such as the transformer, in hydrological modeling and streamflow forecasting for effective water resource management and flood prediction.
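The NSE and NRMSE metrics used in the evaluation are standard hydrological measures and can be sketched directly (the toy observation series is illustrative):

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 is no better
    than predicting the observed mean, negative is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2))

def nrmse(obs, sim):
    """Root mean square error normalized by the observed range."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    rmse = np.sqrt(np.mean((obs - sim) ** 2))
    return float(rmse / (obs.max() - obs.min()))

obs = [10, 12, 15, 20, 18, 14]            # toy streamflow observations
print(nse(obs, obs))                       # perfect forecast -> 1.0
print(nse(obs, [np.mean(obs)] * 6))        # mean-only forecast -> 0.0
print(nrmse(obs, obs))                     # 0.0
```

These definitions make the reported "up to 20% improvement in NSE" concrete: it is a shift of the sum-of-squared-errors ratio relative to the variance of the observed flow.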


Subject(s)
Hydrology , Models, Theoretical , Hydrology/methods , Rivers , Water Movements , Forecasting/methods , Deep Learning
8.
J Chem Phys ; 160(17)2024 May 07.
Article in English | MEDLINE | ID: mdl-38748013

ABSTRACT

Several enhanced sampling techniques rely on the definition of collective variables to effectively explore free energy landscapes. The existing variables that describe the progression along a reactive pathway offer an elegant solution but face a number of limitations. In this paper, we address these challenges by introducing a new path-like collective variable called the "deep-locally non-linear-embedding" (DeepLNE), which is inspired by principles of the locally linear embedding technique and is trained on a reactive trajectory. The variable mimics the ideal reaction coordinate by automatically generating a non-linear combination of features through a differentiable generalized autoencoder that combines a neural network with a continuous k-nearest neighbor selection. Among the key advantages of this method is its capability to automatically choose the metric for searching neighbors and to learn the path from state A to state B without the need to handpick landmarks a priori. We demonstrate the effectiveness of DeepLNE by showing that the progression along the path variable closely approximates the ideal reaction coordinate in toy models, such as the Müller-Brown potential and alanine dipeptide. Then, we use it in molecular dynamics simulations of an RNA tetraloop, where we highlight its capability to accelerate transitions and estimate the free energy of folding.


Subject(s)
Deep Learning , Molecular Dynamics Simulation , RNA/chemistry , Thermodynamics , Dipeptides/chemistry
9.
Sci Data ; 11(1): 475, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724595

ABSTRACT

InsectSound1000 is a dataset comprising more than 169,000 labelled sound samples of 12 insects. The insect sound level spans from very loud (Bombus terrestris) to inaudible to human ears (Aphidoletes aphidimyza). The samples were extracted from more than 1000 h of recordings made in an anechoic box with a four-channel low-noise measurement microphone array. Each sample is a four-channel wave file of 2500 ms length, at a 16 kHz sample rate and 32 bit resolution. Acoustic insect recognition holds great potential to form the basis of a digital insect sensor. Such sensors are desperately needed to automate pest monitoring and ecological monitoring. With its significant size and high-quality recordings, InsectSound1000 can be used to train data-hungry deep learning models. Used to pretrain models, it can also be leveraged to enable the development of acoustic insect recognition systems on different hardware or for different insects. Further, the methodology employed to create the dataset is presented in detail to allow for the extension of the published dataset.
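The stated file layout (four channels, 16 kHz sample rate, 32-bit samples) can be reproduced with Python's standard-library wave module. Whether the published files store integer or float samples is not stated here, so 32-bit integer PCM is assumed for this sketch:

```python
import wave
import struct

# Write a tiny synthetic file with the dataset's stated layout, then
# read the header back to confirm the format.
path = "toy_insect.wav"
n_frames = 160  # 10 ms of audio at 16 kHz
with wave.open(path, "wb") as w:
    w.setnchannels(4)
    w.setsampwidth(4)        # 4 bytes per sample = 32 bit
    w.setframerate(16000)
    frame = struct.pack("<4i", 0, 0, 0, 0)  # one frame of silence, all channels
    w.writeframes(frame * n_frames)

with wave.open(path, "rb") as r:
    nch, width, rate, nframes = (r.getnchannels(), r.getsampwidth(),
                                 r.getframerate(), r.getnframes())
print(nch, rate, 8 * width, nframes)  # 4 16000 32 160
```

A real 2500 ms sample from the dataset would contain 40,000 frames (2.5 s × 16 kHz) in the same layout.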


Subject(s)
Acoustics , Deep Learning , Sound , Animals , Insects
10.
BMC Med Imaging ; 24(1): 107, 2024 May 11.
Article in English | MEDLINE | ID: mdl-38734629

ABSTRACT

This study addresses the critical challenge of detecting brain tumors using MRI images, a pivotal task in medical diagnostics that demands high accuracy and interpretability. While deep learning has shown remarkable success in medical image analysis, there remains a substantial need for models that are not only accurate but also interpretable to healthcare professionals. The existing methodologies, predominantly deep learning-based, often act as black boxes, providing little insight into their decision-making process. This research introduces an integrated approach using ResNet50, a deep learning model, combined with Gradient-weighted Class Activation Mapping (Grad-CAM) to offer a transparent and explainable framework for brain tumor detection. We employed a dataset of MRI images, enhanced through data augmentation, to train and validate our model. The results demonstrate a significant improvement in model performance, with a testing accuracy of 98.52% and precision-recall metrics exceeding 98%, showcasing the model's effectiveness in distinguishing tumor presence. The application of Grad-CAM provides insightful visual explanations, illustrating the model's focus areas in making predictions. This fusion of high accuracy and explainability holds profound implications for medical diagnostics, offering a pathway towards more reliable and interpretable brain tumor detection tools.
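The Grad-CAM combination step itself is compact. The sketch below assumes the activations and gradients of the last convolutional layer (e.g. of a ResNet50) have already been captured by the framework's hooks, and shows only the weighting, summation, ReLU, and normalization that produce the heatmap:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Core Grad-CAM computation: channel weights are the spatially
    pooled gradients, and the map is the ReLU of the weighted sum of
    activation channels, normalized to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))              # one weight per channel
    cam = np.tensordot(weights, activations, axes=1)   # weighted channel sum
    cam = np.maximum(cam, 0)                           # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                          # normalize for overlay
    return cam

# Toy example: two 3x3 channels; the gradients say only channel 0
# (a diagonal pattern) contributed to the predicted class.
acts = np.stack([np.eye(3), np.ones((3, 3))])
grads = np.stack([np.ones((3, 3)), np.zeros((3, 3))])
cam = grad_cam(acts, grads)
print(cam)
```

The resulting map highlights only the diagonal, i.e. the spatial locations of the channel the gradients marked as class-relevant, which is what gives radiologists the "focus area" visualization described above.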


Subject(s)
Brain Neoplasms , Deep Learning , Magnetic Resonance Imaging , Humans , Brain Neoplasms/diagnostic imaging , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods
11.
Sci Rep ; 14(1): 10812, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734714

ABSTRACT

Cervical cancer, the second most prevalent cancer affecting women, arises from abnormal cell growth in the cervix, a crucial anatomical structure within the uterus. The significance of early detection cannot be overstated, prompting the use of various screening methods such as Pap smears, colposcopy, and Human Papillomavirus (HPV) testing to identify potential risks and initiate timely intervention. These screening procedures encompass visual inspections, Pap smears, colposcopies, biopsies, and HPV-DNA testing, each demanding the specialized knowledge and skills of experienced physicians and pathologists due to the inherently subjective nature of cancer diagnosis. In response to the imperative for efficient and intelligent screening, this article introduces a methodology that leverages pre-trained deep neural network models, including AlexNet, ResNet-101, ResNet-152, and InceptionV3, for feature extraction. The fine-tuning of these models is accompanied by the integration of diverse machine learning algorithms, with ResNet-152 showcasing exceptional performance, achieving an accuracy rate of 98.08%. It is noteworthy that the SIPaKMeD dataset, publicly accessible and utilized in this study, contributes to the transparency and reproducibility of our findings. The proposed hybrid methodology combines aspects of DL and ML for cervical cancer classification. DL can extract the most intricate and complicated features from images, and various ML algorithms can then be applied to the extracted features. This approach not only holds promise for significantly improving cervical cancer detection but also underscores the transformative potential of intelligent automation within the realm of medical diagnostics, paving the way for more accurate and timely interventions.


Subject(s)
Deep Learning , Early Detection of Cancer , Uterine Cervical Neoplasms , Humans , Uterine Cervical Neoplasms/diagnosis , Uterine Cervical Neoplasms/pathology , Female , Early Detection of Cancer/methods , Neural Networks, Computer , Algorithms , Papanicolaou Test/methods , Colposcopy/methods
12.
Sci Rep ; 14(1): 10801, 2024 05 11.
Article in English | MEDLINE | ID: mdl-38734727

ABSTRACT

The non-perfusion area (NPA) of the retina is an important indicator in the visual prognosis of patients with branch retinal vein occlusion (BRVO). However, the current evaluation method of NPA, fluorescein angiography (FA), is invasive and burdensome. In this study, we examined the use of deep learning models for detecting NPA in color fundus images, bypassing the need for FA, and we also investigated the utility of synthetic FA generated from color fundus images. The models were evaluated using the Dice score and Monte Carlo dropout uncertainty. We retrospectively collected 403 sets of color fundus and FA images from 319 BRVO patients. We trained three deep learning models on FA, color fundus images, and synthetic FA. As a result, though the FA model achieved the highest score, the other two models also performed comparably. We found no statistically significant difference in median Dice scores between the models. However, the color fundus model showed significantly higher uncertainty than the other models (p < 0.05). In conclusion, deep learning models can detect NPAs from color fundus images with reasonable accuracy, though with somewhat less prediction stability. Synthetic FA stabilizes the prediction and reduces misleading uncertainty estimates by enhancing image quality.
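The two evaluation quantities, the Dice score and Monte Carlo dropout uncertainty, can be sketched as follows. The stochastic forward passes are assumed to come from a model run with dropout enabled at test time; here they are just toy arrays:

```python
import numpy as np

def dice(a, b):
    """Dice score between two binary masks: 2|A∩B| / (|A| + |B|)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def mc_uncertainty(prob_maps):
    """Monte Carlo dropout uncertainty: mean per-pixel variance across
    repeated stochastic forward passes of the same input."""
    return float(np.var(np.stack(prob_maps), axis=0).mean())

pred = [[1, 1, 0], [1, 0, 0]]
truth = [[1, 1, 0], [0, 0, 0]]
print(round(dice(pred, truth), 3))            # 0.8

passes = [np.full((2, 3), 0.5), np.full((2, 3), 0.5)]
print(mc_uncertainty(passes))                 # identical passes -> 0.0
```

A model whose dropout passes disagree strongly (high variance) is flagged as uncertain, which is the property the study uses to show that synthetic FA stabilizes predictions.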


Subject(s)
Deep Learning , Fluorescein Angiography , Fundus Oculi , Retinal Vein Occlusion , Humans , Fluorescein Angiography/methods , Retrospective Studies , Retinal Vein Occlusion/diagnostic imaging , Male , Female , Aged , Middle Aged , Image Processing, Computer-Assisted/methods
13.
Radiat Oncol ; 19(1): 55, 2024 May 12.
Article in English | MEDLINE | ID: mdl-38735947

ABSTRACT

BACKGROUND: Currently, automatic esophagus segmentation remains a challenging task due to its small size, low contrast, and large shape variation. We aimed to improve the performance of esophagus segmentation in deep learning by applying a strategy that involves locating the object first and then performing the segmentation task. METHODS: A total of 100 cases with thoracic computed tomography scans from two publicly available datasets were used in this study. A modified CenterNet, an object location network, was employed to locate the center of the esophagus for each slice. Subsequently, the 3D U-net and 2D U-net_coarse models were trained to segment the esophagus based on the predicted object center. A 2D U-net_fine model was trained based on the updated object center according to the 3D U-net model. The dice similarity coefficient and the 95% Hausdorff distance were used as quantitative evaluation indexes for the delineation performance. The characteristics of the automatically delineated esophageal contours by the 2D U-net and 3D U-net models were summarized. Additionally, the impact of the accuracy of object localization on the delineation performance was analyzed. Finally, the delineation performance in different segments of the esophagus was also summarized. RESULTS: The mean dice coefficients of the 3D U-net, 2D U-net_coarse, and 2D U-net_fine models were 0.77, 0.81, and 0.82, respectively. The 95% Hausdorff distances for the above models were 6.55, 3.57, and 3.76, respectively. Compared with the 2D U-net, the 3D U-net has a lower incidence of delineating wrong objects and a higher incidence of missing objects. After using the fine object center, the average dice coefficient was improved by 5.5% in the cases with a dice coefficient less than 0.75, while that value was only 0.3% in the cases with a dice coefficient greater than 0.75.
The dice coefficients were lower for the esophagus between the orifice of the inferior and the pulmonary bifurcation compared with the other regions. CONCLUSION: The 3D U-net model tended to delineate fewer incorrect objects but also to miss more objects. A two-stage strategy with accurate object location could enhance the robustness of the segmentation model and significantly improve the esophageal delineation performance, especially for cases with poor delineation results.
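The 95th-percentile Hausdorff distance used as the second evaluation index can be sketched by brute force on small binary masks (real pipelines would use boundary extraction and spatial data structures, but the definition is the same):

```python
import numpy as np

def hd95(mask_a, mask_b):
    """95th-percentile Hausdorff distance between two binary masks:
    pool the nearest-neighbour distances in both directions and take
    the 95th percentile, which discards outlier spikes that the plain
    (maximum) Hausdorff distance would report."""
    pa = np.argwhere(np.asarray(mask_a, bool))
    pb = np.argwhere(np.asarray(mask_b, bool))
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    ab = d.min(axis=1)   # each point of A to its nearest point of B
    ba = d.min(axis=0)   # each point of B to its nearest point of A
    return float(np.percentile(np.concatenate([ab, ba]), 95))

a = np.zeros((8, 8)); a[2:5, 2:5] = 1
b = np.zeros((8, 8)); b[2:5, 3:6] = 1   # same square, shifted right by 1 px
print(hd95(a, b))
```

A one-pixel shift yields a distance of 1.0, giving a sense of scale for the reported values of 6.55, 3.57, and 3.76.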


Subject(s)
Deep Learning , Esophagus , Humans , Esophagus/diagnostic imaging , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods
14.
J Transl Med ; 22(1): 438, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720336

ABSTRACT

BACKGROUND: Advanced unresectable gastric cancer (GC) patients were previously treated with chemotherapy alone as the first-line therapy. However, with the Food and Drug Administration's (FDA) 2022 approval of programmed cell death protein 1 (PD-1) inhibitor combined with chemotherapy as the first-line treatment for advanced unresectable GC, patients have significantly benefited. However, the significant costs and potential adverse effects necessitate precise patient selection. In recent years, the advent of deep learning (DL) has revolutionized the medical field, particularly in predicting tumor treatment responses. Our study utilizes DL to analyze pathological images, aiming to predict first-line PD-1 combined chemotherapy response for advanced-stage GC. METHODS: In this multicenter retrospective analysis, Hematoxylin and Eosin (H&E)-stained slides were collected from advanced GC patients across four medical centers. Treatment response was evaluated according to iRECIST 1.1 criteria after a comprehensive first-line PD-1 immunotherapy combined with chemotherapy. Three DL models were employed in an ensemble approach to create the immune checkpoint inhibitors Response Score (ICIsRS) as a novel histopathological biomarker derived from Whole Slide Images (WSIs). RESULTS: Analyzing 148,181 patches from 313 WSIs of 264 advanced GC patients, the ensemble model exhibited superior predictive accuracy, leading to the creation of ICIsNet. The model demonstrated robust performance across four testing datasets, achieving AUC values of 0.92, 0.95, 0.96, and 1, respectively. The boxplot, constructed from the ICIsRS, reveals statistically significant disparities between the good-response and poor-response groups (all p-values ≤ 0.001).
CONCLUSION: ICIsRS, a DL-derived biomarker from WSIs, effectively predicts advanced GC patients' responses to PD-1 combined chemotherapy, offering a novel approach for personalized treatment planning and allowing for more individualized and potentially effective treatment strategies based on a patient's unique response situations.
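The aggregation from patch-level predictions to a slide-level score, and the AUC used to evaluate it, can be sketched in simplified form. The actual ICIsRS construction from three ensembled models is more involved; this is only an illustration of the aggregation and evaluation pattern:

```python
import numpy as np

def ensemble_score(patch_probs_per_model):
    """Slide-level response score: average patch probabilities within
    each model, then average across the model ensemble."""
    return float(np.mean([np.mean(p) for p in patch_probs_per_model]))

def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: the probability that a
    responder scores higher than a non-responder (ties count half)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Three models' patch probabilities for one slide.
slide = [[0.9, 0.8], [0.7, 0.9], [0.8, 0.7]]
print(round(ensemble_score(slide), 3))      # 0.8

# Toy cohort: responder scores vs non-responder scores.
print(auc([0.9, 0.8, 0.7], [0.4, 0.6]))     # perfectly separated -> 1.0
```

An AUC of 1, as reported for one of the external test sets, corresponds to this perfectly-separated case: every responder's score exceeds every non-responder's.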


Subject(s)
Deep Learning , Immune Checkpoint Inhibitors , Programmed Cell Death 1 Receptor , Stomach Neoplasms , Humans , Stomach Neoplasms/drug therapy , Stomach Neoplasms/pathology , Male , Female , Treatment Outcome , Middle Aged , Immune Checkpoint Inhibitors/therapeutic use , Programmed Cell Death 1 Receptor/antagonists & inhibitors , Aged , Retrospective Studies , ROC Curve , Adult
15.
J Transl Med ; 22(1): 434, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720370

ABSTRACT

BACKGROUND: Cardiometabolic disorders pose significant health risks globally. Metabolic syndrome, characterized by a cluster of potentially reversible metabolic abnormalities, is a known risk factor for these disorders. Early detection and intervention for individuals with metabolic abnormalities can help mitigate the risk of developing more serious cardiometabolic conditions. This study aimed to develop an image-derived phenotype (IDP) for metabolic abnormality from unenhanced abdominal computed tomography (CT) scans using deep learning. We used this IDP to classify individuals with metabolic syndrome and predict future occurrence of cardiometabolic disorders. METHODS: A multi-stage deep learning approach was used to extract the IDP from the liver region of unenhanced abdominal CT scans. In a cohort of over 2,000 individuals, the IDP was used to classify individuals with metabolic syndrome. In a subset of over 1,300 individuals, the IDP was used to predict future occurrence of hypertension, type II diabetes, and fatty liver disease. RESULTS: For metabolic syndrome (MetS) classification, we compared the performance of the proposed IDP to liver attenuation and visceral adipose tissue area (VAT). The proposed IDP showed the strongest performance (AUC 0.82) compared to attenuation (AUC 0.70) and VAT (AUC 0.80). For disease prediction, we compared the performance of the IDP to baseline MetS diagnosis. The models including the IDP outperformed MetS for type II diabetes (AUCs 0.91 and 0.90) and fatty liver disease (AUCs 0.67 and 0.62) prediction and performed comparably for hypertension prediction (AUC 0.77 for both). CONCLUSIONS: This study demonstrated the superior performance of a deep learning IDP compared to traditional radiomic features to classify individuals with metabolic syndrome. Additionally, the IDP outperformed the clinical definition of metabolic syndrome in predicting future morbidities.
Our findings underscore the utility of data-driven imaging phenotypes as valuable tools in the assessment and management of metabolic syndrome and cardiometabolic disorders.


Subject(s)
Deep Learning , Metabolic Syndrome , Phenotype , Humans , Metabolic Syndrome/diagnostic imaging , Metabolic Syndrome/complications , Female , Male , Middle Aged , Tomography, X-Ray Computed , Cardiovascular Diseases/diagnostic imaging , Adult , Image Processing, Computer-Assisted/methods
16.
Cancer Imaging ; 24(1): 60, 2024 May 09.
Artículo en Inglés | MEDLINE | ID: mdl-38720391

ABSTRACT

BACKGROUND: This study systematically compares the impact of innovative deep learning image reconstruction (DLIR, TrueFidelity) to conventionally used iterative reconstruction (IR) on nodule volumetry and subjective image quality (IQ) at highly reduced radiation doses. This is essential in the context of low-dose CT lung cancer screening, where accurate volumetry and characterization of pulmonary nodules in repeated CT scanning are indispensable. MATERIALS AND METHODS: A standardized CT dataset was established using an anthropomorphic chest phantom (Lungman, Kyoto Kagaku Inc., Kyoto, Japan) containing a set of 3D-printed lung nodules covering six diameters (4 to 9 mm) and three morphology classes (lobular, spiculated, smooth), with an established ground truth. Images were acquired at varying radiation doses (6.04, 3.03, 1.54, 0.77, 0.41 and 0.20 mGy) and reconstructed with combinations of reconstruction kernels (soft and hard) and reconstruction algorithms (ASIR-V and DLIR at low, medium and high strength). Semi-automatic volumetry measurements and subjective image quality scores recorded by five radiologists were analyzed with multiple linear regression and mixed-effect ordinal logistic regression models. RESULTS: Volumetric errors of nodules imaged with DLIR are up to 50% lower compared to ASIR-V, especially at radiation doses below 1 mGy and when reconstructed with a hard kernel. Across all nodule diameters and morphologies, volumetric errors are also commonly lower with DLIR. Furthermore, DLIR renders higher subjective IQ, especially at sub-mGy doses: radiologists were up to nine times more likely to assign the highest IQ score to these images compared to those reconstructed with ASIR-V. Lung nodules with irregular margins and small diameters were also more likely (up to five times) to be ascribed the best IQ scores when reconstructed with DLIR.
CONCLUSION: We observed that DLIR performs as well as, or even outperforms, conventionally used reconstruction algorithms in terms of volumetric accuracy and subjective IQ of nodules in an anthropomorphic chest phantom. As such, DLIR potentially allows lowering the radiation dose delivered to lung cancer screening participants without compromising accurate measurement and characterization of lung nodules.
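The volumetric errors discussed above are defined relative to the known ground-truth volume of each printed nodule. A minimal sketch of that computation, modeling a nodule as an ideal sphere (the 6 mm diameter and the 8% overestimate below are hypothetical values for illustration, not measurements from the study):

```python
import math

def sphere_volume(diameter_mm):
    # Volume of an idealized spherical nodule, in mm^3: (pi/6) * d^3
    return math.pi * diameter_mm ** 3 / 6.0

def percent_volume_error(measured_mm3, truth_mm3):
    # Signed relative error against the ground-truth volume, in percent
    return 100.0 * (measured_mm3 - truth_mm3) / truth_mm3

truth = sphere_volume(6.0)   # 6 mm ground-truth nodule
measured = truth * 1.08      # hypothetical 8% segmentation overestimate
print(round(percent_volume_error(measured, truth), 1))  # → 8.0
```

Because volume scales with the cube of the diameter, even a small error in the segmented boundary of a 4 mm nodule produces a large relative volume error, which is why accuracy at small diameters matters so much in screening.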


Subject(s)
Deep Learning , Lung Neoplasms , Multiple Pulmonary Nodules , Phantoms, Imaging , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Multiple Pulmonary Nodules/diagnostic imaging , Multiple Pulmonary Nodules/pathology , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Solitary Pulmonary Nodule/diagnostic imaging , Solitary Pulmonary Nodule/pathology , Radiographic Image Interpretation, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods
17.
Microbiome ; 12(1): 84, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38725076

ABSTRACT

BACKGROUND: The emergence of antibiotic resistance in bacteria is an important threat to global health. Antibiotic resistance genes (ARGs) are key components for defining bacterial resistance and tracking its spread across environments. Identifying ARGs, particularly from high-throughput sequencing data of specimens, is the state-of-the-art method for comprehensively monitoring their spread and evolution. Current computational methods to identify ARGs rely mainly on alignment-based sequence similarity to known ARGs. Such approaches are limited by the choice of reference databases and may miss novel ARGs. Their similarity thresholds are usually simple and cannot accommodate variation across gene families and regions, and they are difficult to scale as sequence data grow. RESULTS: In this study, we developed ARGNet, a deep neural network that combines an unsupervised autoencoder model to identify ARGs with a multiclass convolutional neural network to classify them, without depending on sequence alignment. This approach enables a more efficient discovery of both known and novel ARGs. ARGNet accepts both amino acid and nucleotide sequences of variable length, from partial sequences (30-50 aa; 100-150 nt) to full-length proteins or genes, allowing its application to both targeted sequencing and metagenomic sequencing. Our performance evaluation showed that ARGNet outperformed other deep learning models, including DeepARG and HMD-ARG, in most application scenarios, especially the quasi-negative test and the analysis of prediction consistency with the phylogenetic tree. ARGNet also reduced inference runtime by up to 57% relative to DeepARG. CONCLUSIONS: ARGNet is flexible, efficient, and accurate at predicting a broad range of ARGs from sequencing data. ARGNet is freely available at https://github.com/id-bioinfo/ARGNet , with an online service provided at https://ARGNet.hku.hk . Video Abstract.
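Accepting variable-length sequences, as ARGNet does, requires mapping each input to a fixed-size representation before it reaches the network. The paper's own encoding is not reproduced here; purely as a generic illustration of the idea, a k-mer count vector gives any nucleotide sequence the same dimensionality regardless of its length:

```python
from itertools import product

def kmer_counts(seq, k=2):
    """Map a variable-length nucleotide sequence to a fixed-length vector
    of overlapping k-mer counts (4^k dimensions over the ACGT alphabet),
    a common alignment-free encoding for sequence classifiers."""
    keys = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = dict.fromkeys(keys, 0)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        if kmer in counts:  # skip k-mers containing ambiguous bases
            counts[kmer] += 1
    return [counts[key] for key in keys]

vec = kmer_counts("ACGTACGT")
print(len(vec), sum(vec))  # 16 dimensions, 7 overlapping 2-mers
```

Fixed-size vectors like this (or learned embeddings of them) are what allow a single model to handle both 100 nt fragments and full-length genes.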


Subject(s)
Bacteria , Neural Networks, Computer , Bacteria/genetics , Bacteria/drug effects , Bacteria/classification , Drug Resistance, Bacterial/genetics , Anti-Bacterial Agents/pharmacology , High-Throughput Nucleotide Sequencing/methods , Computational Biology/methods , Genes, Bacterial/genetics , Drug Resistance, Microbial/genetics , Humans , Deep Learning
18.
F1000Res ; 13: 274, 2024.
Article in English | MEDLINE | ID: mdl-38725640

ABSTRACT

Background: Deep learning image reconstruction (DLIR) algorithms are the most recent advance in Computed Tomography (CT) image reconstruction. Because of drawbacks of iterative reconstruction (IR) techniques, such as unfavorable image texture and nonlinear spatial resolution, DLIR is gradually replacing them. However, the potential use of DLIR in head and chest CT has to be examined further. Hence, the purpose of this study is to review the influence of DLIR on radiation dose (RD), image noise (IN), and study outcomes compared with IR and filtered back projection (FBP) in head and chest CT examinations. Methods: We performed a detailed search in PubMed, Scopus, Web of Science, Cochrane Library, and Embase to find articles reporting the use of DLIR for head and chest CT examinations between 2017 and 2023. Data were retrieved from the short-listed studies following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Results: Of 196 articles retrieved, 15 were included, with a total sample size of 1292. Fourteen articles were rated as high quality and one as moderate quality. All studies compared DLIR to IR techniques; five also compared DLIR with both IR and FBP. The review showed that DLIR improved image quality (IQ) and reduced RD and IN for head and chest CT examinations. Conclusions: DLIR algorithms demonstrated a notable enhancement in IQ with reduced IN for head and chest CT examinations at lower doses compared with IR and FBP. DLIR shows potential for enhancing patient care by reducing radiation risks and increasing diagnostic accuracy.


Subject(s)
Algorithms , Deep Learning , Head , Radiation Dosage , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Head/diagnostic imaging , Image Processing, Computer-Assisted/methods , Thorax/diagnostic imaging , Radiography, Thoracic/methods , Signal-To-Noise Ratio
19.
J Med Internet Res ; 26: e49848, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38728685

ABSTRACT

BACKGROUND: Acute myocardial infarction (AMI) is one of the most severe cardiovascular diseases and is associated with a high risk of in-hospital mortality. However, the current deep learning models for in-hospital mortality prediction lack interpretability. OBJECTIVE: This study aims to establish an explainable deep learning model to provide individualized in-hospital mortality prediction and risk factor assessment for patients with AMI. METHODS: In this retrospective multicenter study, we used data for consecutive patients hospitalized with AMI from the Chongqing University Central Hospital between July 2016 and December 2022 and the Electronic Intensive Care Unit Collaborative Research Database. These patients were randomly divided into training (7668/10,955, 70%) and internal test (3287/10,955, 30%) data sets. In addition, data of patients with AMI from the Medical Information Mart for Intensive Care database were used for external validation. Deep learning models were used to predict in-hospital mortality in patients with AMI, and they were compared with linear and tree-based models. The Shapley Additive Explanations method was used to explain the model with the highest area under the receiver operating characteristic curve in both the internal test and external validation data sets to quantify and visualize the features that drive predictions. RESULTS: A total of 10,955 patients with AMI who were admitted to Chongqing University Central Hospital or included in the Electronic Intensive Care Unit Collaborative Research Database were randomly divided into a training data set of 7668 (70%) patients and an internal test data set of 3287 (30%) patients. A total of 9355 patients from the Medical Information Mart for Intensive Care database were included for independent external validation. 
In-hospital mortality occurred in 8.74% (670/7668), 8.73% (287/3287), and 9.12% (853/9355) of the patients in the training, internal test, and external validation cohorts, respectively. Among the 9 prediction models, the Self-Attention and Intersample Attention Transformer model performed best in both the internal test data set and the external validation data set, with the highest areas under the receiver operating characteristic curve of 0.86 (95% CI 0.84-0.88) and 0.85 (95% CI 0.84-0.87), respectively. According to the explanations of the Self-Attention and Intersample Attention Transformer model, older age, high heart rate, and low body temperature were the 3 most important predictors of increased mortality. CONCLUSIONS: The explainable deep learning model that we developed can provide an estimate of mortality and a visualization of each feature's contribution to the prediction for an individual patient with AMI. The explanations suggested that older age, unstable vital signs, and metabolic disorders may increase the risk of mortality in patients with AMI.
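The Shapley Additive Explanations method used in this study attributes a prediction to individual features by averaging each feature's marginal contribution over all orderings in which features can be added. The study applied the SHAP library to its transformer model; purely as an illustration of the underlying computation, here is an exact brute-force Shapley attribution for a tiny, invented linear risk score (the feature names and coefficients are hypothetical, not from the paper):

```python
from itertools import permutations

def shapley_values(predict, x, baseline):
    """Exact Shapley attribution for a small feature set: average each
    feature's marginal contribution to the prediction over all feature
    orderings (feasible only for a handful of features)."""
    n = len(x)
    phi = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        current = list(baseline)
        prev = predict(current)
        for i in order:
            current[i] = x[i]       # reveal feature i
            now = predict(current)
            phi[i] += now - prev    # its marginal contribution here
            prev = now
    return [p / len(orders) for p in phi]

# Hypothetical linear risk score over scaled (age, heart rate, body temp)
risk = lambda f: 0.5 * f[0] + 0.3 * f[1] - 0.2 * f[2]
phi = shapley_values(risk, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
print([round(p, 2) for p in phi])  # → [0.5, 0.3, -0.2]
```

For an additive model the attributions simply recover the coefficient effects, and in general they sum to the difference between the prediction and the baseline prediction, which is what makes per-patient bar charts of feature contributions interpretable.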


Subject(s)
Deep Learning , Hospital Mortality , Myocardial Infarction , Humans , Myocardial Infarction/mortality , Female , Male , Retrospective Studies , Middle Aged , Aged , Algorithms , Risk Factors , ROC Curve
20.
PLoS One ; 19(5): e0299696, 2024.
Article in English | MEDLINE | ID: mdl-38728335

ABSTRACT

The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) causes COVID-19, a new life-threatening disease. Many therapeutics, such as vaccines and receptor decoys, have been investigated to curb the epidemic. However, the continuously mutating coronavirus, especially the Delta and Omicron variants, tends to render such biological products ineffective. It is therefore necessary to develop small-molecule entities as broad-spectrum antiviral drugs. Coronavirus replication is controlled by the viral 3-chymotrypsin-like cysteine protease (3CLpro), an enzyme required for the virus's life cycle. In severe acute respiratory syndrome coronavirus (SARS-CoV) and Middle East respiratory syndrome coronavirus (MERS-CoV), 3CLpro has been shown to be a promising target for therapeutic development. Here we propose an attention-based deep learning framework for molecular graphs and sequences, trained on the BindingDB 3CLpro dataset (114,555 compounds). After constructing the model, we conducted large-scale screening of an in vivo/in vitro dataset (276,003 compounds) from the ZINC database and visualized the candidate compounds with attention scores. Geometry-based affinity prediction was employed for validation. Finally, we established a 3CLpro-specific deep learning framework, GraphDPI-3CL (AUROC: 0.958), that achieved performance beyond the existing state-of-the-art model and discovered 10 molecules with high binding affinity for 3CLpro and superior binding modes.
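Visualizing candidate compounds with attention scores, as described above, rests on normalizing raw attention logits into weights that sum to one, so each atom's or residue's share of the model's focus can be read directly. A minimal, numerically stable softmax sketch (the logits below are hypothetical, not outputs of the paper's model):

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the maximum logit before
    exponentiating so large values cannot overflow; the resulting
    attention weights are positive and sum to 1."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical attention logits over four atoms of a candidate molecule
weights = softmax([2.0, 1.0, 0.5, 0.1])
print([round(w, 3) for w in weights])
```

Because the weights form a probability distribution, they can be mapped straight onto the molecular graph as a heatmap, highlighting which substructures drove a high predicted 3CLpro affinity.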


Subject(s)
Antiviral Agents , COVID-19 Drug Treatment , Deep Learning , SARS-CoV-2 , SARS-CoV-2/drug effects , SARS-CoV-2/metabolism , SARS-CoV-2/genetics , Antiviral Agents/pharmacology , Antiviral Agents/therapeutic use , Humans , Coronavirus 3C Proteases/metabolism , Coronavirus 3C Proteases/antagonists & inhibitors , Protein Binding , COVID-19/virology , Molecular Docking Simulation