Results 1 - 20 of 16,009
1.
Food Chem ; 431: 137109, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-37582325

ABSTRACT

Blended vegetable oils are highly prized by consumers for their comprehensive nutritional profile. Therefore, there is an urgent need for a rapid and accurate method to identify the true composition of blended oils. This study combined Raman spectroscopy with three deep learning models (CNN-LSTM, improved AlexNet, and ResNet) to simultaneously quantify extra virgin olive oil (EVOO), soybean oil, and sunflower oil in olive blended oil. The results demonstrate that all three deep learning models exhibited superior predictive ability compared to traditional chemometric methods. Specifically, the CNN-LSTM model achieved a coefficient of determination (R2p) of over 0.995 for each oil in the quantitative analysis of three-component blended oils, with a root mean square error of prediction (RMSEP) of less than 2%. This study presents a novel approach for the simultaneous quantitative analysis of multi-component blended oils, providing a rapid and accurate method for the identification of falsely labeled blended oils.
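The two figures of merit quoted in this abstract can be reproduced from paired reference and predicted concentrations. A minimal sketch with invented EVOO contents (not the paper's data):

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination (R2) between reference and predicted values."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

def rmsep(y_true, y_pred):
    """Root mean square error of prediction, in the same units as y (here, %)."""
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)) ** 0.5

# Invented EVOO contents (%) in a blended-oil prediction set
reference = [10.0, 25.0, 40.0, 55.0, 70.0]
predicted = [10.8, 24.1, 41.2, 54.3, 70.9]
print(round(r_squared(reference, predicted), 4), round(rmsep(reference, predicted), 2))  # → 0.9981 0.92
```

These invented predictions meet the paper's reported thresholds (R2 > 0.995, RMSEP < 2%).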


Subjects
Deep Learning , Spectrum Analysis, Raman , Olive Oil/chemistry , Plant Oils/chemistry , Soybean Oil/analysis , Sunflower Oil
2.
Methods Mol Biol ; 2714: 215-234, 2024.
Article in English | MEDLINE | ID: mdl-37676602

ABSTRACT

Identification and optimization of small molecules that bind to and modulate protein function is a crucial step in the early stages of drug development. For decades, this process has benefited greatly from the use of computational models that can provide insights into molecular binding affinity and optimization. Over the past several years, various types of deep learning models have shown great potential in improving and enhancing the performance of traditional computational methods. In this chapter, we provide an overview of recent deep learning-based developments with applications in drug discovery. We classify these methods into four subcategories depending on the task each method aims to solve. For each subcategory, we provide the general framework of the approach and discuss individual methods.


Subjects
Deep Learning , Drug Design , Drug Development , Drug Discovery
3.
Medicina (Kaunas) ; 59(9)2023 Sep 17.
Article in English | MEDLINE | ID: mdl-37763796

ABSTRACT

Background and Objectives: We attempted to determine the optimal radiation dose to maintain image quality using a deep learning application in a physical human phantom. Materials and Methods: Three 5 × 5 × 5 mm3 uric acid stones were placed at various locations in a physical human phantom. Three tube voltages (120, 100, and 80 kV) and four current-time products (100, 70, 30, and 15 mAs) were implemented in 12 scans. Each scan was reconstructed with filtered back projection (FBP), statistical iterative reconstruction (IR, iDose), and knowledge-based iterative model reconstruction (IMR). Applying a deep learning tool to each image yielded 12 additional image sets. Objective image assessments were calculated using the standard deviation of the Hounsfield unit (HU). Subjective image assessments were performed by one radiologist and one urologist. Two radiologists, blinded to the scan information, assessed the images and searched for the stone; these data were used to calculate diagnostic accuracy. Results: Objective image noise decreased after applying the deep learning tool in all FBP, iDose, and IMR images. There was no statistical difference between iDose and deep learning-applied FBP images (10.1 ± 11.9 and 9.5 ± 18.5 HU, respectively; p = 0.583). At a 100 kV-30 mAs setting, deep learning-applied FBP obtained similar objective noise at approximately one third of the radiation dose compared with FBP. At settings lower than 100 kV-30 mAs, the subjective image assessment (image quality, confidence level, and noise) showed deteriorated scores. Diagnostic accuracy increased with deep learning at settings lower than 100 kV-30 mAs, except at 80 kV-15 mAs. Conclusions: At settings of 100 kV-30 mAs or higher, deep learning-applied FBP did not differ in image quality from IR. At the 100 kV-30 mAs setting, the radiation dose can be decreased by about one third while maintaining objective noise.
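The objective noise used in this study is simply the standard deviation of HU values inside a region of interest. A minimal sketch with made-up pixel samples (not the phantom data):

```python
def objective_noise(hu_values):
    """Image noise as the sample standard deviation of HU values in an ROI."""
    n = len(hu_values)
    mean = sum(hu_values) / n
    return (sum((v - mean) ** 2 for v in hu_values) / (n - 1)) ** 0.5

# Hypothetical HU samples drawn from one ROI of a reconstructed slice
roi = [32, 35, 29, 41, 38, 33, 30, 36]
print(round(objective_noise(roi), 1))  # → 4.1
```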


Subjects
Deep Learning , Urolithiasis , Humans , Urolithiasis/diagnostic imaging , Mental Processes , Tomography, X-Ray Computed
4.
Sensors (Basel) ; 23(18)2023 Sep 19.
Article in English | MEDLINE | ID: mdl-37766026

ABSTRACT

Historically, individuals with hearing impairments have faced neglect, lacking the necessary tools to facilitate effective communication. However, advancements in modern technology have paved the way for the development of various tools and software aimed at improving the quality of life for hearing-disabled individuals. This research paper presents a comprehensive study employing five distinct deep learning models to recognize hand gestures for the American Sign Language (ASL) alphabet. The primary objective of this study was to leverage contemporary technology to bridge the communication gap between hearing-impaired individuals and individuals with no hearing impairment. The models utilized in this research, namely AlexNet, ConvNeXt, EfficientNet, ResNet-50, and VisionTransformer, were trained and tested using an extensive dataset comprising over 87,000 images of ASL alphabet hand gestures. Numerous experiments were conducted, involving modifications to the architectural design parameters of the models to obtain maximum recognition accuracy. The experimental results of our study revealed that ResNet-50 achieved an exceptional accuracy rate of 99.98%, the highest among all models. EfficientNet attained an accuracy rate of 99.95%, ConvNeXt achieved 99.51% accuracy, AlexNet attained 99.50% accuracy, while VisionTransformer yielded the lowest accuracy of 88.59%.


Subjects
Deep Learning , Sign Language , Humans , United States , Quality of Life , Gestures , Technology
5.
J Vis ; 23(11): 29, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37733549

ABSTRACT

INTRODUCTION: Multiple Sclerosis (MS) is a chronic immune-mediated inflammatory disease (IMID) of the central nervous system (CNS). Early identification of MS, especially as a screening method for at-risk individuals, is crucial to delay disease progression and improve patient outcomes by preventing future irreversible neurologic damage. In this work, we utilize well-validated tracking scanning laser ophthalmoscope (TSLO) images to predict MS compared to unaffected controls. While traditional Machine Learning (ML) methods, such as Logistic Regression (LR), have demonstrated strong predictive power [Mauro F. Pinto et al., 2020] in disease identification, we propose the use of a novel DL-based model. Through the use of a Deep Neural Network (DNN), this model can have a much higher learning capacity to capture latent features embedded in the retinal images. We hypothesize that such latent information, often hidden from ML feature engineering processes, plays an important role in the prediction of disease and can be well represented by DL models. OBJECTIVES: To establish a DL-based model capable of learning latent image features to provide predictive power for the presence of MS. AIMS: Utilize a deep convolutional neural network to extract the retinal coding and implement a recurrent neural network to learn the temporal correlations in video sequences. METHODS: Our approaches were tested using a 250-subject MS/control database collected at UCSF. Patients with an Expanded Disability Status Scale (EDSS) < 4 are compared to healthy subjects. Both raw retinal images and the frequency and spatial patterns of the eye motion are combined to construct a hybrid image, denoted as "retinal coding", and directly fed to the DL model for training and testing. RESULTS: Preliminary results on predictive power were measured using the area under the curve (AUC) of the receiver operating characteristic (ROC), sensitivity, and specificity, as well as an F-1 score.
We observe an AUC of 0.920, sensitivity of 0.90, specificity of 0.89, and an F-1 score of 0.89 using the DL model to distinguish MS from controls, which outperforms the baseline LR model by 24%. CONCLUSIONS: This work can be considered a proof of concept concerning the possibility of identifying MS using a DL-based approach. The results demonstrate the possibility of predicting early-stage MS and understanding the disease's dynamics. Such an end-to-end model could be generalizable and trained on other disease states.
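The sensitivity, specificity, and F-1 score reported above follow directly from confusion-matrix counts. A sketch with hypothetical counts (not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and F-1 score from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate
    specificity = tn / (tn + fp)          # true negative rate
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1

# Hypothetical MS-vs-control test split: 50 MS subjects, 50 controls
sens, spec, f1 = binary_metrics(tp=45, fp=5, tn=45, fn=5)
print(sens, spec, round(f1, 2))  # → 0.9 0.9 0.9
```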


Subjects
Deep Learning , Multiple Sclerosis , Humans , Multiple Sclerosis/diagnostic imaging , Ophthalmoscopy , Ophthalmoscopes , Lasers
6.
J Vis ; 23(11): 4, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37733574

ABSTRACT

Visual neuroprostheses are emerging as a promising technology to restore a rudimentary form of vision to people living with incurable blindness. However, phosphenes elicited by current devices often appear artificial and distorted. Although current computational models can predict the neural or perceptual response to an electrical stimulus, an optimal stimulation strategy needs to solve the inverse problem: what is the required stimulus to produce a desired response? Here we frame this as an end-to-end optimization problem, where a deep neural network encoder is trained to invert a psychophysically validated phosphene model that predicts phosphene appearance as a function of stimulus amplitude, frequency, and pulse duration. As a proof of concept, we show that our strategy can produce high-fidelity, patient-specific stimuli representing handwritten digits and segmented images of everyday objects that drastically outperform conventional encoding strategies by relying on smaller stimulus amplitudes at the expense of higher frequencies and longer pulse durations. Overall, this work is an important first step towards improving visual outcomes in visual prosthesis users across a wide range of stimuli.


Subjects
Deep Learning , Visual Prostheses , Humans , Blindness/therapy , Neural Networks, Computer
7.
Sci Rep ; 13(1): 15719, 2023 09 21.
Article in English | MEDLINE | ID: mdl-37735599

ABSTRACT

Surface-enhanced Raman spectroscopy (SERS), as a rapid, non-invasive and reliable spectroscopic detection technique, has promising applications in disease screening and diagnosis. In this paper, an annealed silver nanoparticles/porous silicon Bragg reflector (AgNPs/PSB) composite SERS substrate with high sensitivity and strong stability was prepared by immersion plating and heat treatment using a porous silicon Bragg reflector (PSB) as the substrate. The substrate was combined with five deep learning algorithms: an improved AlexNet, ResNet, SqueezeNet, a temporal convolutional network (TCN), and a multiscale fusion convolutional neural network (MCNN). We constructed rapid screening models for patients with primary Sjögren's syndrome (pSS) versus healthy controls (HC) and for diabetic nephropathy patients (DN) versus healthy controls (HC). The results showed that the annealed AgNPs/PSB composite SERS substrates performed well in diagnosis. Among them, the MCNN model had the best classification performance in the two groups of experiments, with accuracy rates of 94.7% and 92.0%, respectively. Previous studies have indicated that the AgNPs/PSB composite SERS substrate, combined with machine learning algorithms, achieves promising classification results in disease diagnosis. This study shows that SERS technology based on an annealed AgNPs/PSB composite substrate combined with deep learning algorithms has great development prospects and research value in the early identification and screening of immune diseases and chronic kidney disease, providing reference ideas for non-invasive and rapid clinical diagnosis.


Subjects
Deep Learning , Immune System Diseases , Metal Nanoparticles , Renal Insufficiency, Chronic , Humans , Silicon , Silver , Algorithms , Spectrum Analysis, Raman , Renal Insufficiency, Chronic/diagnosis
8.
PLoS One ; 18(9): e0291415, 2023.
Article in English | MEDLINE | ID: mdl-37738269

ABSTRACT

This work presents the Multi-Bees-Tracker (MBT3D) algorithm, a Python framework implementing a deep association tracker for tracking-by-detection, to address the challenging task of tracking the flight paths of bumblebees in a social group. While tracking algorithms for bumblebees exist, they often come with significant restrictions, such as the need for sufficient lighting, high contrast between the animal and background, absence of occlusion, substantial user input, etc. Tracking the flight paths of bumblebees in a social group is challenging: the bees suddenly adjust their movements and change their appearance during different wing beat states while exhibiting significant similarities in their individual appearance. The MBT3D tracker, developed in this research, is an adaptation of an existing ant tracking algorithm for bumblebee tracking. It incorporates an offline trained appearance descriptor along with a Kalman filter for appearance and motion matching. Different detector architectures for upstream detections (You Only Look Once (YOLOv5), Faster Region Proposal Convolutional Neural Network (Faster R-CNN), and RetinaNet) are investigated in a comparative study to optimize performance. The detection models were trained on a dataset containing 11,359 labeled bumblebee images. YOLOv5 reaches an average precision of AP = 53.8%, Faster R-CNN achieves AP = 45.3%, and RetinaNet AP = 38.4% on the bumblebee validation dataset, which consists of 1,323 labeled bumblebee images. The tracker's appearance model is trained on 144 samples. The tracker (with Faster R-CNN detections) reaches a Multiple Object Tracking Accuracy of MOTA = 93.5% and a Multiple Object Tracking Precision of MOTP = 75.6% on a validation dataset containing 2,000 images, competing with state-of-the-art computer vision methods. The framework allows reliable tracking of different bumblebees in the same video stream with rarely occurring identity switches (IDS).
MBT3D has a much lower IDS count than other commonly used algorithms and one of the lowest false positive rates, competing with state-of-the-art animal tracking algorithms. The developed framework reconstructs the 3-dimensional (3D) flight paths of the bumblebees by triangulation. It can also handle and compare two alternative stereo camera pairs if desired.
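MOTA aggregates misses, false positives, and identity switches over all frames relative to the number of ground-truth objects. A sketch of the standard definition with invented counts (chosen to land on the paper's figure, not taken from it):

```python
def mota(false_negatives, false_positives, id_switches, num_ground_truth):
    """Multiple Object Tracking Accuracy: 1 - (FN + FP + IDS) / GT."""
    return 1 - (false_negatives + false_positives + id_switches) / num_ground_truth

# Invented example: 5000 ground-truth boxes across a video sequence
score = mota(false_negatives=200, false_positives=100, id_switches=25, num_ground_truth=5000)
print(f"MOTA = {score:.1%}")  # → MOTA = 93.5%
```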


Subjects
Deep Learning , Bees , Animals , Algorithms , Neural Networks, Computer , Lighting , Motion
9.
Sci Rep ; 13(1): 15879, 2023 09 23.
Article in English | MEDLINE | ID: mdl-37741820

ABSTRACT

Hematoxylin and eosin-stained biopsy slides are regularly available for colorectal cancer patients. These slides are often not used to define objective biomarkers for patient stratification and treatment selection. Standard biomarkers often pertain to costly and slow genetic tests. However, recent work has shown that relevant biomarkers can be extracted from these images using convolutional neural networks (CNNs). The CNN-based biomarkers predicted colorectal cancer patient outcomes comparably to gold standards. Extracting CNN-biomarkers is fast, automatic, and of minimal cost. CNN-based biomarkers rely on the ability of CNNs to recognize distinct tissue types from microscope whole slide images. The quality of these biomarkers (coined 'Deep Stroma') depends on the accuracy of CNNs in decomposing all relevant tissue classes. Improving tissue decomposition accuracy is essential for improving the prognostic potential of CNN-biomarkers. In this study, we implemented a novel training strategy to refine an established CNN model, which then surpassed all previous solutions. We obtained a 95.6% average accuracy in the external test set and 99.5% in the internal test set. Our approach reduced errors in biomarker-relevant classes, such as lymphocytes, and was the first to include interpretability methods. These methods were used to better apprehend our model's limitations and capabilities.


Subjects
Colorectal Neoplasms , Deep Learning , Humans , Biopsy , Eosine Yellowish-(YS) , Genetic Testing
10.
Sci Rep ; 13(1): 15930, 2023 09 23.
Article in English | MEDLINE | ID: mdl-37741892

ABSTRACT

Human monkeypox is a very unusual viral disease that can devastate society. Early identification and diagnosis are essential to treat and manage an illness effectively. Human monkeypox disease detection using deep learning models has attracted increasing attention recently. The virus that causes monkeypox may be passed to people, making it a zoonotic illness. The latest monkeypox epidemic has hit more than 40 nations. Computer-assisted approaches using deep learning techniques for automatically identifying skin lesions have shown to be a viable alternative in light of the fast proliferation and ever-growing problems of supplying PCR (polymerase chain reaction) testing in places with limited availability. In this research, we introduce a deep learning model for detecting human monkeypox that is accurate and resilient, obtained by tuning its hyperparameters. We employed a mixture of convolutional neural networks and transfer learning strategies to extract characteristics from medical photos and properly identify them. We also used hyperparameter optimization strategies to fine-tune the model and get the best possible results. This paper proposes a YOLOv5 model-based method for differentiating between chickenpox and monkeypox lesions in skin pictures. The Roboflow skin lesion picture dataset was subjected to three different hyperparameter tuning strategies: the SGD optimizer, the Bayesian optimizer, and Learning without Forgetting. The proposed model had the highest classification accuracy (98.18%) when applied to photos of monkeypox skin lesions. Our findings show that the suggested model surpasses the current best-in-class models and may be used in clinical settings for actual human monkeypox disease detection and diagnosis.


Subjects
Chickenpox , Deep Learning , Epidemics , Monkeypox , Humans , Bayes Theorem , Monkeypox/diagnosis
11.
Jt Dis Relat Surg ; 34(3): 598-604, 2023 Aug 22.
Article in English | MEDLINE | ID: mdl-37750264

ABSTRACT

OBJECTIVES: This study aimed to detect single or multiple fractures in the ulna or radius using deep learning techniques fed with upper-extremity radiographs. MATERIALS AND METHODS: The dataset used in this retrospective study consisted of different types of upper extremity radiographs obtained from an open-source dataset, with 4,480 images with fractures and 4,383 images without fractures. All fractures involved the ulna or radius. The proposed method comprises two distinct stages. The initial phase, referred to as preprocessing, involved the removal of radiographic backgrounds, followed by the elimination of nonbone tissue. In the second phase, images consisting only of bone tissue were processed using deep learning models, such as RegNetX006, EfficientNet B0, and InceptionResNetV2, to determine whether one or more fractures of the ulna or radius were present. To measure the performance of the proposed method, raw images, images generated by background deletion, and images after nonbone tissue elimination were classified separately using the RegNetX006, EfficientNet B0, and InceptionResNetV2 models. Performance was assessed by accuracy, F1 score, Matthews correlation coefficient, receiver operating characteristic area under the curve, sensitivity, specificity, and precision using 10-fold cross-validation, a widely accepted technique in statistical analysis. RESULTS: The best classification performance was obtained with the proposed preprocessing and the RegNetX006 architecture. The values obtained for the various metrics were as follows: accuracy (0.9921), F1 score (0.9918), Matthews correlation coefficient (0.9842), area under the curve (0.9918), sensitivity (0.9974), specificity (0.9863), and precision (0.9923). CONCLUSION: The proposed preprocessing method enables detection of ulna and radius fractures by artificial intelligence.
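The Matthews correlation coefficient quoted in the results can be computed from the four confusion-matrix counts. A sketch with hypothetical fracture/no-fracture counts (not the study's data):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient for binary classification."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts on a near-balanced test fold
print(round(mcc(tp=440, tn=438, fp=5, fn=3), 4))
```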


Subjects
Deep Learning , Fractures, Bone , Humans , Radius/diagnostic imaging , Artificial Intelligence , Retrospective Studies , Upper Extremity , Ulna/diagnostic imaging
12.
Radiology ; 308(3): e230427, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37750774

ABSTRACT

Background Deep learning (DL) reconstructions can enhance image quality while decreasing MRI acquisition time. However, DL reconstruction methods combined with compressed sensing for prostate MRI have not been well studied. Purpose To use an industry-developed DL algorithm to reconstruct low-resolution T2-weighted turbo spin-echo (TSE) prostate MRI scans and compare these with standard sequences. Materials and Methods In this prospective study, participants with suspected prostate cancer underwent prostate MRI with a Cartesian standard-resolution T2-weighted TSE sequence (T2C) and non-Cartesian standard-resolution T2-weighted TSE sequence (T2NC) between August and November 2022. Additionally, a low-resolution Cartesian DL-reconstructed T2-weighted TSE sequence (T2DL) with compressed sensing DL denoising and resolution upscaling reconstruction was acquired. Image sharpness was assessed qualitatively by two readers using a five-point Likert scale (from 1 = nondiagnostic to 5 = excellent) and quantitatively by calculating edge rise distance. The Friedman test and one-way analysis of variance with post hoc Bonferroni and Tukey tests, respectively, were used for group comparisons. Prostate Imaging Reporting and Data System (PI-RADS) score agreement between sequences was compared by using Cohen κ. Results This study included 109 male participants (mean age, 68 years ± 8 [SD]). Acquisition time of T2DL was 36% and 29% lower compared with that of T2C and T2NC (mean duration, 164 seconds ± 20 vs 257 seconds ± 32 and 230 seconds ± 28; P < .001 for both). T2DL showed improved image sharpness compared with standard sequences using both qualitative (median score, 5 [IQR, 4-5] vs 4 [IQR, 3-4] for T2C and 4 [IQR, 3-4] for T2NC; P < .001 for both) and quantitative (mean edge rise distance, 0.75 mm ± 0.39 vs 1.15 mm ± 0.68 for T2C and 0.98 mm ± 0.65 for T2NC; P < .001 and P = .01) methods. 
PI-RADS score agreement between T2NC and T2DL was excellent (κ range, 0.92-0.94 [95% CI: 0.87, 0.98]). Conclusion DL reconstruction of low-resolution T2-weighted TSE sequences enabled accelerated acquisition times and improved image quality compared with standard acquisitions while showing excellent agreement with conventional sequences for PI-RADS ratings. Clinical trial registration no. NCT05820113 © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Turkbey in this issue.
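Cohen's κ, used above for PI-RADS agreement between sequences, corrects the observed agreement for chance agreement. A minimal two-rater sketch over hypothetical category labels (not the study's ratings):

```python
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters assigning categorical labels to the same items."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical PI-RADS scores from two sequences for ten lesions
t2nc = [2, 3, 4, 4, 5, 3, 2, 4, 5, 3]
t2dl = [2, 3, 4, 4, 5, 3, 2, 4, 5, 4]
print(round(cohen_kappa(t2nc, t2dl), 2))  # → 0.86
```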


Subjects
Deep Learning , Prostatic Neoplasms , Humans , Male , Aged , Magnetic Resonance Imaging , Prospective Studies , Prostate/diagnostic imaging , Prostatic Neoplasms/diagnostic imaging , Prostatic Neoplasms/surgery
14.
Eur J Radiol ; 167: 111047, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37690351

ABSTRACT

PURPOSE: To evaluate the effectiveness of automated liver segmental volume quantification and calculation of the liver segmental volume ratio (LSVR) on a non-contrast T1-vibe Dixon liver MRI sequence using a deep learning segmentation pipeline. METHOD: A dataset of 200 liver MRI examinations with a non-contrast 3 mm T1-vibe Dixon sequence was manually labeled slice-by-slice by an expert for Couinaud liver segments, while portal and hepatic veins were labeled separately. A convolutional neural network was trained using 170 liver MRI examinations for training and 30 for evaluation. Liver segmental volumes without liver vessels were retrieved, and LSVR was calculated as the liver segmental volumes I-III divided by the liver segmental volumes IV-VIII. LSVR was compared with the expert manual LSVR calculation and with the LSVR calculated on CT scans in 30 patients with CT and MRI within 6 months. RESULTS: The convolutional neural network classified the Couinaud segments I-VIII with an average Dice score of 0.770 ± 0.03, ranging between 0.726 ± 0.13 (segment IVb) and 0.810 ± 0.09 (segment V). The calculated mean LSVR with liver MRI unseen by the model was 0.32 ± 0.14, compared with a manually quantified LSVR of 0.33 ± 0.15, resulting in a mean absolute error (MAE) of 0.02. A comparable LSVR of 0.35 ± 0.14 with an MAE of 0.04 resulted from the LSVR retrieved from the CT scans. The automated LSVR showed significant correlation with the manual MRI LSVR (Spearman r = 0.97, p < 0.001) and the CT LSVR (Spearman r = 0.95, p < 0.001). CONCLUSIONS: A convolutional neural network allowed for accurate automated liver segmental volume quantification and calculation of LSVR based on a non-contrast T1-vibe Dixon sequence.
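Both quantities defined in this abstract are simple ratios: LSVR divides the volume of segments I-III by that of segments IV-VIII, and the Dice score measures mask overlap. A sketch under those definitions, with invented segment volumes and toy voxel sets:

```python
def lsvr(segment_volumes_ml):
    """Liver segmental volume ratio: volume of segments I-III over IV-VIII."""
    left = sum(segment_volumes_ml[s] for s in ("I", "II", "III"))
    right = sum(v for s, v in segment_volumes_ml.items() if s not in ("I", "II", "III"))
    return left / right

def dice(mask_a, mask_b):
    """Dice score between two binary masks given as sets of voxel indices."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

volumes = {"I": 30, "II": 60, "III": 90, "IVa": 70, "IVb": 60,
           "V": 120, "VI": 110, "VII": 100, "VIII": 140}  # invented, in mL
print(round(lsvr(volumes), 2))              # → 0.3
print(dice({1, 2, 3, 4}, {2, 3, 4, 5}))     # → 0.75
```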


Subjects
Deep Learning , Humans , Liver/diagnostic imaging , Radiography , Radionuclide Imaging , Magnetic Resonance Imaging
15.
PLoS Comput Biol ; 19(9): e1011444, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37695793

ABSTRACT

Different genes form complex networks within cells to carry out critical cellular functions, while network alterations in this process can potentially introduce downstream transcriptome perturbations and phenotypic variations. Therefore, developing efficient and interpretable methods to quantify network changes and pinpoint driver genes across conditions is crucial. We propose a hierarchical graph representation learning method, called iHerd. Given a set of networks, iHerd first hierarchically generates a series of coarsened sub-graphs in a data-driven manner, representing network modules at different resolutions (e.g., the level of signaling pathways). Then, it sequentially learns low-dimensional node representations at all hierarchical levels via efficient graph embedding. Lastly, iHerd projects separate gene embeddings onto the same latent space in its graph alignment module to calculate a rewiring index for driver gene prioritization. To demonstrate its effectiveness, we applied iHerd on a tumor-to-normal GRN rewiring analysis and cell-type-specific GCN analysis using single-cell multiome data of the brain. We showed that iHerd can effectively pinpoint novel and well-known risk genes in different diseases. Distinct from existing models, iHerd's graph coarsening for hierarchical learning allows us to successfully classify network driver genes into early and late divergent genes (EDGs and LDGs), emphasizing genes with extensive network changes across and within signaling pathway levels. This unique approach for driver gene classification can provide us with deeper molecular insights. The code is freely available at https://github.com/aicb-ZhangLabs/iHerd. All other relevant data are within the manuscript and supporting information files.


Subjects
Deep Learning , Brain , Learning , Problem Solving , Records
16.
Sci Rep ; 13(1): 15504, 2023 09 19.
Article in English | MEDLINE | ID: mdl-37726378

ABSTRACT

Real-time and accurate estimation of surgical hemoglobin (Hb) loss is essential for fluid resuscitation management and evaluation of surgical techniques. In this study, we aimed to explore a novel surgical Hb loss estimation method using deep learning-based medical sponge image analysis. Whole blood samples of pre-measured Hb concentration were collected, and normal saline was added to simulate varying levels of Hb concentration. These blood samples were distributed across blank medical sponges to generate blood-soaked sponges. Eight hundred fifty-one blood-soaked sponges representing a wide range of blood dilutions were randomly divided 7:3 into a training group (n = 595) and a testing group (n = 256). A deep learning model based on the YOLOv5 network was used for target region extraction and detection, and three models (feature extraction technology, ResNet-50, and SE-ResNet50) were trained to predict surgical Hb loss. Mean absolute error (MAE), mean absolute percentage error (MAPE), coefficient of determination (R2), and Bland-Altman analysis were used to evaluate the predictive performance in the testing group. The deep learning model based on SE-ResNet50 predicted surgical Hb loss with the best performance (R2 = 0.99, MAE = 11.09 mg, MAPE = 8.6%) compared with the other predictive models, and Bland-Altman analysis showed a bias of 1.343 mg with narrow limits of agreement (-29.81 to 32.5 mg) between predicted and actual Hb loss. An interactive interface was also designed to display the real-time prediction of surgical Hb loss more intuitively. Thus, real-time estimation of surgical Hb loss using deep learning-based medical sponge image analysis is feasible and helpful for clinical decisions and technical evaluation.
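The Bland-Altman bias and limits of agreement quoted above are the mean of the paired differences and that mean ± 1.96 standard deviations. A sketch with invented predicted/actual Hb-loss pairs (not the study's measurements):

```python
import statistics

def bland_altman(predicted, actual):
    """Bias and 95% limits of agreement between paired measurements."""
    diffs = [p - a for p, a in zip(predicted, actual)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented predicted vs. actual Hb loss (mg) for six sponges
pred = [102, 215, 330, 448, 561, 680]
true = [100, 220, 325, 450, 555, 690]
bias, lower, upper = bland_altman(pred, true)
print(round(bias, 2), round(lower, 1), round(upper, 1))
```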


Subjects
Deep Learning , Fluid Therapy , Hemoglobins , Indicator Dilution Techniques , Resuscitation
17.
Sci Rep ; 13(1): 15506, 2023 09 19.
Article in English | MEDLINE | ID: mdl-37726392

ABSTRACT

This study aimed to propose a fully automatic posteroanterior (PA) cephalometric landmark identification model using deep learning algorithms and compare its accuracy and reliability with those of expert human examiners. In total, 1032 PA cephalometric images were used for model training and validation. Two human expert examiners independently and manually identified 19 landmarks on 82 test set images. Similarly, the constructed artificial intelligence (AI) algorithm automatically identified the landmarks on the images. The mean radial error (MRE) and successful detection rate (SDR) were calculated to evaluate the performance of the model. The performance of the model was comparable with that of the examiners. The MRE of the model was 1.87 ± 1.53 mm, and the SDR was 34.7%, 67.5%, and 91.5% within error ranges of < 1.0, < 2.0, and < 4.0 mm, respectively. The sphenoid points and mastoid processes had the lowest MRE and highest SDR in auto-identification; the condyle points had the highest MRE and lowest SDR. Comparable with human examiners, the fully automatic PA cephalometric landmark identification model showed promising accuracy and reliability and can help clinicians perform cephalometric analysis more efficiently while saving time and effort. Future advancements in AI could further improve the model accuracy and efficiency.
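The two landmark metrics above can be written compactly: MRE is the mean Euclidean distance between predicted and reference points, and SDR is the fraction of landmarks falling within a tolerance. A sketch with invented 2D coordinates (not the cephalometric data):

```python
import math

def mean_radial_error(pred_points, true_points):
    """Mean Euclidean distance (mm) between predicted and reference landmarks."""
    dists = [math.dist(p, t) for p, t in zip(pred_points, true_points)]
    return sum(dists) / len(dists)

def sdr(pred_points, true_points, tol_mm):
    """Successful detection rate: share of landmarks within tol_mm of reference."""
    hits = sum(math.dist(p, t) < tol_mm for p, t in zip(pred_points, true_points))
    return hits / len(pred_points)

pred = [(10.0, 10.0), (20.5, 30.0), (41.0, 25.0)]   # invented coordinates (mm)
true = [(10.6, 10.8), (20.0, 30.0), (40.0, 22.0)]
print(round(mean_radial_error(pred, true), 2), round(sdr(pred, true, tol_mm=2.0), 2))
```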


Subjects
Artificial Intelligence , Deep Learning , Humans , Reproducibility of Results , Algorithms , Cephalometry
18.
Tomography ; 9(5): 1629-1637, 2023 Aug 31.
Article in English | MEDLINE | ID: mdl-37736983

ABSTRACT

This exploratory retrospective study aims to quantitatively compare the image quality of unenhanced brain computed tomography (CT) reconstructed with an iterative (AIDR-3D) and a deep learning-based (AiCE) reconstruction algorithm. After a preliminary phantom study, AIDR-3D and AiCE reconstructions (0.5 mm thickness) of 100 consecutive brain CTs acquired in the emergency setting on the same 320-detector row CT scanner were retrospectively analyzed, calculating image noise reduction attributable to the AiCE algorithm, artifact indexes in the posterior cranial fossa, and contrast-to-noise ratios (CNRs) at the cortical and thalamic levels. In the phantom study, the spatial resolution of the two datasets proved to be comparable; conversely, AIDR-3D reconstructions showed a broader noise pattern. In the human study, median image noise was lower with AiCE compared to AIDR-3D (4.7 vs. 5.3, p < 0.001, median 19.6% noise reduction), whereas AIDR-3D yielded a lower artifact index than AiCE (7.5 vs. 8.4, p < 0.001). AiCE also showed higher median CNRs at the cortical (2.5 vs. 1.8, p < 0.001) and thalamic levels (2.8 vs. 1.7, p < 0.001). These results highlight how image quality improvements granted by deep learning-based (AiCE) and iterative (AIDR-3D) image reconstruction algorithms vary according to different brain areas.
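The contrast-to-noise ratios above follow the usual definition: the difference between the mean attenuations of two tissues divided by the image noise. A sketch with invented HU figures (not the study's measurements):

```python
def cnr(mean_tissue_a, mean_tissue_b, noise_sd):
    """Contrast-to-noise ratio between two tissues (HU means, noise as SD)."""
    return abs(mean_tissue_a - mean_tissue_b) / noise_sd

# Invented values: cortical grey matter vs. white matter at AiCE-level noise
print(round(cnr(mean_tissue_a=38.0, mean_tissue_b=26.3, noise_sd=4.7), 1))  # → 2.5
```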


Subjects
Deep Learning , Humans , Retrospective Studies , Tomography, X-Ray Computed , Brain/diagnostic imaging , Image Processing, Computer-Assisted
19.
PLoS Comput Biol ; 19(9): e1011428, 2023 09.
Article in English | MEDLINE | ID: mdl-37672551

ABSTRACT

Accurate prediction of nucleic binding residues is essential for the understanding of transcription and translation processes. Integration of feature- and template-based strategies could improve the prediction of these key residues in proteins. Nevertheless, traditional hybrid algorithms have been surpassed by recently developed deep learning-based methods, and the possibility of integrating deep learning- and template-based approaches to improve performance remains to be explored. To address these issues, we developed a novel structure-based integrative algorithm called NABind that can accurately predict DNA- and RNA-binding residues. A deep learning module was built based on the diversified sequence and structural descriptors and edge aggregated graph attention networks, while a template module was constructed by transforming the alignments between the query and its multiple templates into features for supervised learning. Furthermore, the stacking strategy was adopted to integrate the above two modules for improving prediction performance. Finally, a post-processing module dependent on the random walk algorithm was proposed to further correct the integrative predictions. Extensive evaluations indicated that our approach could not only achieve excellent performance on both native and predicted structures but also outperformed existing hybrid algorithms and recent deep learning methods. The NABind server is available at http://liulab.hzau.edu.cn/NABind/.


Subjects
Deep Learning , Nucleic Acids , Algorithms , Cell Nucleus , Walking
20.
Sci Adv ; 9(38): eadi9327, 2023 Sep 22.
Article in English | MEDLINE | ID: mdl-37738341

ABSTRACT

In recent years, there has been intensive development of portable ultralow-field magnetic resonance imaging (MRI) for low-cost, shielding-free, and point-of-care applications. However, its image quality is poor and scan times are long. We propose a fast acquisition and deep learning reconstruction framework to accelerate brain MRI at 0.055 tesla. The acquisition consists of a single average three-dimensional (3D) encoding with 2D partial Fourier sampling, reducing the scan time of T1- and T2-weighted imaging protocols to 2.5 and 3.2 minutes, respectively. The 3D deep learning leverages the homogeneous brain anatomy available in high-field human brain data to enhance image quality, reduce artifacts and noise, and improve spatial resolution to a synthetic 1.5-mm isotropic resolution. Our method successfully overcomes the low-signal barrier, reconstructing fine anatomical structures that are reproducible within subjects and consistent across the two protocols. It enables fast, good-quality whole-brain MRI at 0.055 tesla, with potential for widespread biomedical applications.


Subjects
Deep Learning , Humans , Brain/diagnostic imaging , Magnetic Resonance Imaging , Point-of-Care Systems