Results 1 - 20 of 33,757
1.
J Biomed Opt ; 30(Suppl 1): S13706, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39295734

ABSTRACT

Significance: Oral cancer surgery requires accurate margin delineation to balance complete resection with post-operative functionality. Current in vivo fluorescence imaging systems provide two-dimensional margin assessment yet fail to quantify tumor depth prior to resection. Harnessing structured light in combination with deep learning (DL) may provide near real-time three-dimensional margin detection. Aim: A DL-enabled fluorescence spatial frequency domain imaging (SFDI) system trained with in silico tumor models was developed to quantify the depth of oral tumors. Approach: A convolutional neural network was designed to produce tumor depth and concentration maps from SFDI images. Three in silico representations of oral cancer lesions were developed to train the DL architecture: cylinders, spherical harmonics, and composite spherical harmonics (CSHs). Each model was validated with in silico SFDI images of patient-derived tongue tumors, and the CSH model was further validated with optical phantoms. Results: The performance of the CSH model was superior when presented with patient-derived tumors (P-value < 0.05). The CSH model could predict depth and concentration within 0.4 mm and 0.4 µg/mL, respectively, for in silico tumors with depths less than 10 mm. Conclusions: A DL-enabled SFDI system trained with in silico CSHs demonstrates promise in defining the deep margins of oral tumors.


Subjects
Computer Simulation; Deep Learning; Mouth Neoplasms; Optical Imaging; Phantoms, Imaging; Surgery, Computer-Assisted; Optical Imaging/methods; Humans; Mouth Neoplasms/diagnostic imaging; Mouth Neoplasms/surgery; Mouth Neoplasms/pathology; Surgery, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods; Neural Networks, Computer; Margins of Excision
2.
Methods Mol Biol ; 2847: 121-135, 2025.
Article in English | MEDLINE | ID: mdl-39312140

ABSTRACT

Fundamental to the diverse biological functions of RNA are its 3D structure and conformational flexibility, which enable single sequences to adopt a variety of distinct 3D states. Currently, computational RNA design tasks are often posed as inverse problems, where sequences are designed based on adopting a single desired secondary structure without considering 3D geometry and conformational diversity. In this tutorial, we present gRNAde, a geometric RNA design pipeline operating on sets of 3D RNA backbone structures to design sequences that explicitly account for RNA 3D structure and dynamics. gRNAde is a graph neural network that uses an SE(3) equivariant encoder-decoder framework for generating RNA sequences conditioned on backbone structures where the identities of the bases are unknown. We demonstrate the utility of gRNAde for fixed-backbone re-design of existing RNA structures of interest from the PDB, including riboswitches, aptamers, and ribozymes. gRNAde is more accurate in terms of native sequence recovery while being significantly faster compared to existing physics-based tools for 3D RNA inverse design, such as Rosetta.
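Native sequence recovery, the headline metric above, is simply the fraction of designed bases that match the native sequence over an aligned backbone. A minimal sketch (the two 8-nt sequences are invented; this is not gRNAde's own code):

```python
def sequence_recovery(designed: str, native: str) -> float:
    """Fraction of positions where the designed sequence matches the native one."""
    if len(designed) != len(native):
        raise ValueError("sequences must be aligned and equal in length")
    matches = sum(d == n for d, n in zip(designed, native))
    return matches / len(native)

print(sequence_recovery("GCAUGGCA", "GCAUGGUA"))  # 7/8 matches -> 0.875
```

Higher recovery means the model reproduces more of the evolved sequence from backbone geometry alone.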


Subjects
Deep Learning; Nucleic Acid Conformation; RNA; Software; RNA/chemistry; RNA/genetics; Computational Biology/methods; RNA, Catalytic/chemistry; RNA, Catalytic/genetics; Models, Molecular; Neural Networks, Computer
3.
Spectrochim Acta A Mol Biomol Spectrosc ; 324: 125001, 2025 Jan 05.
Article in English | MEDLINE | ID: mdl-39180971

ABSTRACT

Utilizing visible and near-infrared (Vis-NIR) spectroscopy in conjunction with chemometrics methods has become widespread for identifying plant diseases. However, a key obstacle involves the extraction of relevant spectral characteristics. This study aimed to improve the accuracy of sugarcane disease recognition by combining a convolutional neural network (CNN) with continuous wavelet transform (CWT) spectrograms for spectral feature extraction within the Vis-NIR range (380-1400 nm). Using 130 sugarcane leaf samples, the one-dimensional CWT coefficients obtained from Vis-NIR spectra were transformed into two-dimensional spectrograms. Spectrogram features were extracted with the CNN and incorporated into decision tree, K-nearest neighbour, partial least squares discriminant analysis, and random forest (RF) calibration models. The RF model, integrating spectrogram-derived features, demonstrated the best performance with an average precision of 0.9111, sensitivity of 0.9733, specificity of 0.9791, and accuracy of 0.9487. This study may offer a non-destructive, rapid, and accurate means to detect sugarcane diseases, enabling farmers to receive timely and actionable insights on their crops' health, thus minimizing crop loss and optimizing yields.
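The CWT step above maps each 1D spectrum to a 2D array of coefficients over scales and positions. A naive, illustrative sketch using an unnormalized Ricker (Mexican-hat) mother wavelet; the test signal and scales are invented and this is not the study's preprocessing pipeline:

```python
import math

def ricker(t: float) -> float:
    """Ricker (Mexican-hat) mother wavelet, unnormalized."""
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def cwt(signal, scales):
    """Naive continuous wavelet transform: one row of coefficients per scale."""
    n = len(signal)
    coeffs = []
    for a in scales:
        row = []
        for b in range(n):
            # Correlate the signal with the wavelet shifted to b and dilated by a
            acc = sum(signal[t] * ricker((t - b) / a) for t in range(n))
            row.append(acc / math.sqrt(a))
        coeffs.append(row)
    return coeffs  # 2D "spectrogram": scales x positions

sig = [math.sin(2 * math.pi * k / 16) for k in range(64)]
spec = cwt(sig, scales=[2, 4, 8])
print(len(spec), len(spec[0]))  # 3 64
```

The resulting scales-by-wavelength array is what gets rendered as a spectrogram image for the CNN.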


Subjects
Deep Learning; Plant Diseases; Saccharum; Spectroscopy, Near-Infrared; Wavelet Analysis; Saccharum/chemistry; Spectroscopy, Near-Infrared/methods; Plant Leaves/chemistry; Least-Squares Analysis; Discriminant Analysis
4.
Methods Mol Biol ; 2856: 357-400, 2025.
Article in English | MEDLINE | ID: mdl-39283464

ABSTRACT

Three-dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, topologically associating domains (TADs), and A/B compartments, play critical roles in a wide range of cellular processes by regulating gene expression. Recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even in single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technology, tools, and low data resolution. Machine learning methods have emerged as an alternative to obtain missing 3D interactions and/or improve resolution. Such methods frequently use genome annotation data (ChIP-seq, DNase-seq, etc.), DNA sequence information (k-mers and transcription factor binding site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, and TAD boundaries) and analyze their pros and cons. We also point out obstacles to the computational prediction of 3D interactions and suggest future research directions.


Subjects
Chromatin; Deep Learning; Chromatin/genetics; Chromatin/metabolism; Humans; Computational Biology/methods; Machine Learning; Genomics/methods; Enhancer Elements, Genetic; Promoter Regions, Genetic; Binding Sites; Genome; Software
5.
Ophthalmol Sci ; 5(1): 100587, 2025.
Article in English | MEDLINE | ID: mdl-39380882

ABSTRACT

Purpose: To apply methods for quantifying uncertainty of deep learning segmentation of geographic atrophy (GA). Design: Retrospective analysis of OCT images and model comparison. Participants: One hundred twenty-six eyes from 87 participants with GA in the SWAGGER cohort of the Nonexudative Age-Related Macular Degeneration Imaged with Swept-Source OCT (SS-OCT) study. Methods: Manual segmentations of GA lesions were conducted on structural subretinal pigment epithelium en face images from the SS-OCT images. Models were developed for 2 approximate Bayesian deep learning techniques, Monte Carlo dropout and ensemble, to assess the uncertainty of GA semantic segmentation, and were compared with a traditional deep learning model. Main Outcome Measures: Model performance (Dice score) was compared. Uncertainty was calculated using the formula for Shannon entropy. Results: The output of both Bayesian technique models showed a greater number of pixels with high entropy than the standard model. Dice scores for the Monte Carlo dropout method (0.90, 95% confidence interval 0.87-0.93) and the ensemble method (0.88, 95% confidence interval 0.85-0.91) were significantly higher (P < 0.001) than for the traditional model (0.82, 95% confidence interval 0.78-0.86). Conclusions: Quantifying the uncertainty in a prediction of GA may improve the trustworthiness of the models and aid clinicians in decision-making. The Bayesian deep learning techniques generated pixel-wise estimates of model uncertainty for segmentation, while also improving model performance compared with traditionally trained deep learning models. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
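The uncertainty computation described above, averaging stochastic forward passes and taking the Shannon entropy of the mean per-pixel probability, can be sketched as follows; the three "dropout passes" are simulated numbers, not model output:

```python
import math

def shannon_entropy(p: float) -> float:
    """Binary Shannon entropy (in bits) of a foreground probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def pixelwise_uncertainty(mc_predictions):
    """mc_predictions: T stochastic forward passes, each a flat list of per-pixel
    foreground probabilities. Returns the mean prediction and entropy maps."""
    T = len(mc_predictions)
    n = len(mc_predictions[0])
    mean = [sum(run[i] for run in mc_predictions) / T for i in range(n)]
    return mean, [shannon_entropy(p) for p in mean]

# Three simulated dropout passes over four pixels
runs = [[0.9, 0.5, 0.1, 1.0],
        [0.8, 0.4, 0.2, 1.0],
        [1.0, 0.6, 0.0, 1.0]]
mean, ent = pixelwise_uncertainty(runs)
print(ent[3])  # all passes agree confidently -> entropy 0.0
```

Pixels where the passes disagree (probability near 0.5) receive entropy near 1 bit, flagging unreliable segmentation boundaries.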

6.
J Colloid Interface Sci ; 677(Pt A): 273-281, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39094488

ABSTRACT

Wearable electronics based on conductive hydrogels (CHs) offer remarkable flexibility, conductivity, and versatility. However, the flexibility, adhesiveness, and conductivity of traditional CHs deteriorate when they freeze, thereby limiting their utility in challenging environments. In this work, we introduce a PHEA-NaSS/G hydrogel that can be conveniently fabricated into a freeze-resistant conductive hydrogel by weakening the hydrogen bonds between water molecules. This is achieved through the synergistic interaction between the charged polar end group (-SO3-) and the glycerol-water binary solvent system. The conductive hydrogel is simultaneously endowed with tunable mechanical properties and conductive pathways through modulation of the material composition. Due to the uniform interconnectivity of the network structure resulting from strong intermolecular interactions and the enhancement effect of charged polar end groups, the resulting hydrogel exhibits 174 kPa tensile strength, 2105% tensile strain, and excellent sensing ability (GF = 2.86, response time: 121 ms), and the sensor is well suited for repeatable and stable monitoring of human motion. Additionally, using a fully convolutional network (FCN) algorithm, the sensor can be used to recognize English-letter handwriting with an accuracy of 96.4%. This hydrogel strain sensor provides a simple method for creating multifunctional electronic devices, with significant potential in fields such as soft robotics, health monitoring, and human-computer interaction.
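For context, the gauge factor quoted above is defined as GF = (ΔR/R0)/ε, the relative resistance change per unit strain. The resistance values below are invented so that the result matches the reported GF of 2.86:

```python
def gauge_factor(r0: float, r: float, strain: float) -> float:
    """Strain-sensor gauge factor: relative resistance change per unit strain,
    GF = (ΔR / R0) / ε."""
    return ((r - r0) / r0) / strain

# Toy numbers: resistance rises from 100 Ω to 128.6 Ω at 10% strain
print(round(gauge_factor(100.0, 128.6, 0.10), 2))  # -> 2.86
```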

7.
Methods Mol Biol ; 2847: 63-93, 2025.
Article in English | MEDLINE | ID: mdl-39312137

ABSTRACT

Machine learning algorithms, and in particular deep learning approaches, have recently garnered attention in the field of molecular biology due to remarkable results. In this chapter, we describe machine learning approaches specifically developed for the design of RNAs, with a focus on the learna_tools Python package, a collection of automated deep reinforcement learning algorithms for secondary structure-based RNA design. We explain the basic concepts of reinforcement learning and its extension, automated reinforcement learning, and outline how these concepts can be successfully applied to the design of RNAs. The chapter is structured to guide through the usage of the different programs with explicit examples, highlighting particular applications of the individual tools.


Subjects
Algorithms; Machine Learning; Nucleic Acid Conformation; RNA; Software; RNA/chemistry; RNA/genetics; Computational Biology/methods; Deep Learning
8.
Methods Mol Biol ; 2847: 153-161, 2025.
Article in English | MEDLINE | ID: mdl-39312142

ABSTRACT

Understanding the connection between complex structural features of RNA and biological function is a fundamental challenge in evolutionary studies and in RNA design. However, building datasets of RNA 3D structures and making appropriate modeling choices remain time-consuming and lack standardization. In this chapter, we describe the use of rnaglib to train supervised and unsupervised machine learning-based function prediction models on datasets of RNA 3D structures.


Subjects
Computational Biology; Nucleic Acid Conformation; RNA; Software; RNA/chemistry; RNA/genetics; Computational Biology/methods; Machine Learning; Models, Molecular
9.
Methods Mol Biol ; 2847: 241-300, 2025.
Article in English | MEDLINE | ID: mdl-39312149

ABSTRACT

Nucleic acid tests (NATs) are considered the gold standard in molecular diagnosis. To meet the demand for onsite, point-of-care, specific, sensitive, trace-level, and genotype-resolving detection of pathogens and pathogenic variants, various types of NATs have been developed since the discovery of PCR. As alternatives to traditional NATs (e.g., PCR), isothermal nucleic acid amplification techniques (INAATs) such as LAMP, RPA, SDA, HDR, NASBA, and HCA were invented gradually. PCR and most of these techniques depend highly on efficient and optimal primer and probe design to deliver accurate and specific results. This chapter starts with a discussion of traditional NATs and INAATs in concert with a description of the computational tools available to aid primer/probe design for NATs and INAATs. Besides briefly covering nanoparticle-assisted NATs, a more comprehensive presentation is given on the role that CRISPR-based technologies have played in molecular diagnosis. Here we provide examples of a few groundbreaking CRISPR assays that have been developed to counter epidemics and pandemics and outline CRISPR biology, highlighting the role of CRISPR guide RNA and its design in any successful CRISPR-based application. In this respect, we tabulate computational tools that are available to aid the design of guide RNAs in CRISPR-based applications. In the second part of our chapter, we discuss machine learning (ML)- and deep learning (DL)-based computational approaches that facilitate the design of efficient primers and probes for NATs/INAATs and guide RNAs for CRISPR-based applications. Given the role of microRNAs (miRNAs) as potential future biomarkers of disease diagnosis, we also discuss ML/DL-based computational approaches for miRNA-target prediction.
Our chapter presents the evolution of nucleic acid-based diagnosis techniques from PCR and INAATs to more advanced CRISPR/Cas-based methodologies, in concert with the evolution of deep learning (DL)- and machine learning (ML)-based computational tools in the most relevant application domains.
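As a concrete taste of primer screening, two standard hand-computable heuristics, GC content and the Wallace-rule melting-temperature approximation (Tm = 2(A+T) + 4(G+C) °C for short primers), can be sketched as follows; the 12-mer and the choice of heuristics are illustrative and not taken from the chapter:

```python
def gc_content(seq: str) -> float:
    """Fraction of G/C bases in a primer."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def wallace_tm(seq: str) -> int:
    """Wallace-rule melting temperature, a rough estimate for short (<14 nt)
    primers: Tm = 2*(A+T) + 4*(G+C), in degrees Celsius."""
    seq = seq.upper()
    at = seq.count("A") + seq.count("T")
    gc = seq.count("G") + seq.count("C")
    return 2 * at + 4 * gc

primer = "ACGTGCTAGCTA"  # hypothetical 12-mer
print(gc_content(primer), wallace_tm(primer))  # 0.5 36
```

Real primer-design tools add many more constraints (hairpins, dimers, specificity), which is exactly where the ML/DL approaches surveyed above come in.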


Subjects
Deep Learning; Humans; CRISPR-Cas Systems; Molecular Diagnostic Techniques/methods; Nucleic Acid Amplification Techniques/methods; RNA/genetics; Machine Learning; Clustered Regularly Interspaced Short Palindromic Repeats/genetics
10.
Methods Mol Biol ; 2834: 3-39, 2025.
Article in English | MEDLINE | ID: mdl-39312158

ABSTRACT

Quantitative structure-activity relationships (QSAR) is a method for predicting the physical and biological properties of small molecules; it is in use in industry and public services. However, like any scientific method, it is challenged by more and more requests, especially considering its possible role in assessing the safety of new chemicals. To answer the question of whether QSAR, by exploiting available knowledge, can build new knowledge, this chapter reviews QSAR methods in search of a QSAR epistemology. QSAR stands on three pillars: biological data, chemical knowledge, and modeling algorithms. Usually the biological data, resulting from good experimental practice, are taken as a true picture of the world; chemical knowledge has scientific bases; so if a QSAR model is not working, the blame falls on modeling. The role of modeling in developing scientific theories, and in producing knowledge, is thus analyzed. QSAR is a mature technology and is part of a large body of in silico methods and other computational methods. An active debate about the acceptability of QSAR models, the way to communicate them, and the explanations to provide accompanies the development of today's QSAR models. An example of predicting possible endocrine-disrupting chemicals (EDCs) shows the many faces of modern QSAR methods.


Subjects
Quantitative Structure-Activity Relationship; Algorithms; Humans; Endocrine Disruptors/chemistry
11.
Integr Zool ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39350466

ABSTRACT

Facial expressions in nonhuman primates are complex processes involving psychological, emotional, and physiological factors, and may use subtle signals to communicate significant information. However, uncertainty surrounds the functional significance of subtle facial expressions in animals. Using artificial intelligence (AI), this study found that nonhuman primates exhibit subtle facial expressions that are undetectable by human observers. We focused on the golden snub-nosed monkey (Rhinopithecus roxellana), a primate species with a multilevel society. We collected 3427 front-facing images of monkeys from 275 video clips captured in both wild and laboratory settings. Three deep learning models, EfficientNet, RepMLP, and Tokens-To-Token ViT, were utilized for AI recognition. To compare with human performance, two groups were recruited: one with prior animal observation experience and one without any such experience. The results showed that human observers correctly detected facial expressions at rates of only 32.1% (inexperienced) and 45.0% (experienced) on average, against a chance level of 33%. In contrast, the AI deep learning models achieved significantly higher accuracy rates, with the best-performing model reaching 94.5%. Our results provide evidence that golden snub-nosed monkeys exhibit subtle facial expressions. The results further our understanding of animal facial expressions and of how such modes of communication may contribute to the origin of complex primate social systems.

12.
Clin Transl Oncol ; 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39354269

ABSTRACT

PURPOSE: The aim of this study was to develop a radiomics model based on magnetic resonance imaging (MRI) for predicting metastasis in soft tissue sarcomas (STSs) treated with surgery. METHODS/PATIENTS: MRI and clinical data of 73 patients with STSs of the extremities and trunk were obtained from the TCIA database and Jiangsu Cancer Hospital as the training set; data from another 40 patients were retrospectively collected at our institution as the external validation set. Radiomics features were extracted from both intratumoral and peritumoral regions of patients' fat-suppressed T2-weighted images (FS-T2WIs), and 3D ResNet10 was used to extract deep learning features. Recursive feature elimination (RFE) and least absolute shrinkage and selection operator (LASSO) algorithms were used for feature selection. Based on 4 different sets of features, 5 machine learning algorithms were used to construct intratumoral, peritumoral, and combined intratumoral-peritumoral radiomics models as well as a deep learning radiomics (DLR) model. The area under the ROC curve (AUC) and decision curve analysis (DCA) were used to evaluate the ability of the models to predict metastasis. RESULTS AND CONCLUSIONS: Based on 20 features selected from the combined deep learning and radiomics feature set, the DLR model was able to predict metastasis in the validation dataset, with an AUC of 0.9770. The DCA and Hosmer-Lemeshow test revealed that the DLR model had good clinical benefit and consistency. By extracting richer information from MRI, the DLR model is a noninvasive, low-cost method for predicting the risk of metastasis in STSs and can help develop appropriate treatment programs.

13.
Radiat Oncol J ; 42(3): 181-191, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39354821

ABSTRACT

PURPOSE: To generate and investigate a supervised deep learning algorithm for creating synthetic computed tomography (sCT) images from kilovoltage cone-beam computed tomography (kV-CBCT) images for adaptive radiation therapy (ART) in head and neck cancer (HNC). MATERIALS AND METHODS: This study generated a supervised U-Net deep learning model using 3,491 image pairs from planning computed tomography (pCT) and kV-CBCT datasets obtained from 40 HNC patients. The dataset was split into 80% for training and 20% for testing. The evaluation of the sCT images compared to pCT images focused on three aspects: Hounsfield unit accuracy, assessed using mean absolute error (MAE) and root mean square error (RMSE); image quality, evaluated using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) between sCT and pCT images; and dosimetric accuracy, encompassing 3D gamma passing rates for dose distribution and percentage dose difference. RESULTS: MAE, RMSE, PSNR, and SSIM improved from their initial values of 53.15 ± 40.09, 153.99 ± 79.78, 47.91 ± 4.98 dB, and 0.97 ± 0.02 to 41.47 ± 30.59, 130.39 ± 78.06, 49.93 ± 6.00 dB, and 0.98 ± 0.02, respectively. Regarding dose evaluation, 3D gamma passing rates for dose distribution within sCT images under 2%/2 mm, 3%/2 mm, and 3%/3 mm criteria were 92.1% ± 3.8%, 93.8% ± 3.0%, and 96.9% ± 2.0%, respectively. The sCT images exhibited minor variations in the percentage dose distribution of the investigated target and structure volumes. However, it is worth noting that the sCT images exhibited anatomical variations when compared to the pCT images. CONCLUSION: These findings highlight the potential of the supervised U-Net deep learning model in generating kV-CBCT-based sCT images for ART in patients with HNC.
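The HU-accuracy and image-quality metrics above (MAE, RMSE, PSNR) have compact definitions; this sketch evaluates toy four-pixel "images", not the study's data:

```python
import math

def mae(a, b):
    """Mean absolute error between two flattened images."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def rmse(a, b):
    """Root mean square error between two flattened images."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

def psnr(a, b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer images."""
    m = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)  # MSE
    return float("inf") if m == 0 else 10 * math.log10(max_val ** 2 / m)

pct = [100.0, 50.0, 0.0, 25.0]   # planning-CT-like values (toy)
sct = [102.0, 48.0, 1.0, 25.0]   # synthetic-CT-like values (toy)
print(mae(pct, sct), round(rmse(pct, sct), 3), round(psnr(pct, sct), 2))
```

SSIM additionally compares local luminance, contrast, and structure and is usually taken from an image library rather than hand-coded.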

14.
Front Artif Intell ; 7: 1387936, 2024.
Article in English | MEDLINE | ID: mdl-39355147

ABSTRACT

Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. An important factor contributing to the long training times is the increasing dataset complexity required to reach state-of-the-art performance in real-world applications. To address this challenge, we explore the use of input mixing, where multiple inputs are combined into a single composite input with an associated composite label for training. The goal is for training on the mixed input to achieve a similar effect as training separately on each of the constituent inputs that it represents. This results in a lower number of inputs (or mini-batches) to be processed in each epoch, proportionally reducing training time. We find that naive input mixing leads to a considerable drop in learning performance and model accuracy due to interference between the forward/backward propagation of the mixed inputs. We propose two strategies to address this challenge and realize training speedups from input mixing with minimal impact on accuracy. First, we reduce the impact of inter-input interference by exploiting the spatial separation between the features of the constituent inputs in the network's intermediate representations. We also adaptively vary the mixing ratio of constituent inputs based on their loss in previous epochs. Second, we propose heuristics to automatically identify the subset of the training dataset that is subject to mixing in each epoch. Across ResNets of varying depth, MobileNetV2, and two Vision Transformer networks, we obtain up to 1.6× and 1.8× speedups in training for the ImageNet and CIFAR-10 datasets, respectively, on an Nvidia RTX 2080Ti GPU, with negligible loss in classification accuracy.
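The core idea, one composite input paired with a matching composite soft label, reduces to convex combinations; this sketch uses invented toy vectors and omits the paper's interference-reduction and subset-selection strategies:

```python
def mix_inputs(x1, x2, lam: float):
    """Combine two inputs into one composite input, lam in [0, 1]."""
    return [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]

def mix_labels(y1, y2, lam: float):
    """Composite soft label matching the mixed input."""
    return [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]

# Two toy 4-pixel inputs with one-hot labels for classes 0 and 1
xa, ya = [1.0, 0.0, 1.0, 0.0], [1.0, 0.0]
xb, yb = [0.0, 1.0, 0.0, 1.0], [0.0, 1.0]
print(mix_inputs(xa, xb, 0.75))  # -> [0.75, 0.25, 0.75, 0.25]
print(mix_labels(ya, yb, 0.75))  # -> [0.75, 0.25]
```

Training on one mixed pair instead of two separate examples is what halves the per-epoch input count in this two-way case.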

15.
Med Image Anal ; 99: 103356, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39378568

ABSTRACT

Breast cancer is a significant global public health concern, with various treatment options available based on tumor characteristics. Pathological examination of excision specimens after surgery provides essential information for treatment decisions. However, the manual selection of representative sections for histological examination is laborious and subjective, leading to potential sampling errors and variability, especially in carcinomas that have been previously treated with chemotherapy. Furthermore, the accurate identification of residual tumors presents significant challenges, emphasizing the need for systematic or assisted methods to address this issue. In order to enable the development of deep-learning algorithms for automated cancer detection on radiology images, it is crucial to perform radiology-pathology registration, which ensures the generation of accurately labeled ground truth data. The alignment of radiology and histopathology images plays a critical role in establishing reliable cancer labels for training deep-learning algorithms on radiology images. However, aligning these images is challenging due to their content and resolution differences, tissue deformation, artifacts, and imprecise correspondence. We present a novel deep learning-based pipeline for the affine registration of faxitron images, the x-ray representations of macrosections of ex-vivo breast tissue, and their corresponding histopathology images of tissue segments. The proposed model combines convolutional neural networks and vision transformers, allowing it to effectively capture both local and global information from the entire tissue macrosection as well as its segments. This integrated approach enables simultaneous registration and stitching of image segments, facilitating segment-to-macrosection registration through a puzzling-based mechanism. 
To address the limitations of multi-modal ground truth data, we tackle the problem by training the model using synthetic mono-modal data in a weakly supervised manner. The trained model demonstrated successful performance in multi-modal registration, yielding registration results with an average landmark error of 1.51 mm (±2.40), and stitching distance of 1.15 mm (±0.94). The results indicate that the model performs significantly better than existing baselines, including both deep learning-based and iterative models, and it is also approximately 200 times faster than the iterative approach. This work bridges the gap in the current research and clinical workflow and has the potential to improve efficiency and accuracy in breast cancer evaluation and streamline pathology workflow.

16.
Med Image Anal ; 99: 103359, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39378569

ABSTRACT

Multi-contrast magnetic resonance imaging (MRI) reflects information about human tissues from different perspectives and has wide clinical applications. By utilizing the auxiliary information from reference images (Refs) in the easy-to-obtain modality, multi-contrast MRI super-resolution (SR) methods can synthesize high-resolution (HR) images from their low-resolution (LR) counterparts in the hard-to-obtain modality. In this study, we systematically discussed the potential impacts caused by cross-modal misalignments between LRs and Refs and, based on this discussion, proposed a novel deep-learning-based method with Deformable Attention and Neighborhood-based feature aggregation to be Computationally Efficient (DANCE) and insensitive to misalignments. Our method has been evaluated in two public MRI datasets, i.e., IXI and FastMRI, and an in-house MR metabolic imaging dataset with amide proton transfer weighted (APTW) images. Experimental results reveal that our method consistently outperforms baselines in various scenarios, with significant superiority observed in the misaligned group of the IXI dataset and the prospective study of the clinical dataset. The robustness study proves that our method is insensitive to misalignments, maintaining an average PSNR of 30.67 dB when faced with a maximum range of ±9° and ±9 pixels of rotation and translation on Refs. Given our method's desirable comprehensive performance, good robustness, and moderate computational complexity, it possesses substantial potential for clinical applications.

17.
Comput Biol Med ; 183: 109246, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39378580

ABSTRACT

Difficult tracheal intubation is a major cause of anesthesia-related injuries, including brain damage and death. While deep neural networks have improved difficult airway (DA) predictions over traditional assessment methods, existing models are often black boxes, making them difficult to trust in critical medical settings. Traditional DA assessment relies on facial and neck features, but detecting neck landmarks is particularly challenging. This paper introduces a novel semi-supervised method for landmark prediction, namely G2LCPS, which leverages hierarchical filters and cross-supervised signals. The novelty lies in ensuring that the networks select good unlabeled samples at the image level and generate high-quality pseudo heatmaps at the pixel level for cross-pseudo supervision. Experiments on extended versions of the public AFLW, CFP, CPLFW, and CASIA-3D FaceV1 face datasets show that G2LCPS achieves superior performance compared to other state-of-the-art semi-supervised methods, achieving the lowest normalized mean error (NME) of 3.588 when only 1/8 of the data is labeled. Notably, the inclusion of the local filter improved the prediction by at least 0.199 NME, whereas the global filter contributed an additional improvement of at least 0.216 NME. These findings underscore the effectiveness of our approach, particularly in scenarios with limited labeled data, and suggest that G2LCPS can significantly enhance the reliability and accuracy of DA predictions in clinical practice. The results highlight the potential of our method to improve patient safety by providing more trustworthy and precise predictions for difficult airway management.
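The normalized mean error (NME) reported above is the mean Euclidean landmark error divided by a normalizing length (e.g., inter-ocular distance); a minimal sketch with invented points:

```python
import math

def nme(pred, gt, norm: float) -> float:
    """Normalized mean error for landmark prediction: mean Euclidean distance
    between predicted and ground-truth points, divided by a normalizing
    factor such as the inter-ocular distance."""
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    return (sum(dists) / len(dists)) / norm

gt = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
pred = [(0.0, 1.0), (10.0, 0.0), (5.0, 6.0)]
print(nme(pred, gt, norm=10.0))  # (1 + 0 + 2)/3 / 10 = 0.1
```

Normalizing makes errors comparable across faces of different sizes in the image.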

18.
Comput Biol Med ; 183: 109237, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39378581

ABSTRACT

Ensuring accurate predictions of inpatient length of stay (LoS) and mortality rates is essential for enhancing hospital service efficiency, particularly in light of the constraints posed by limited healthcare resources. Integrative analysis of heterogeneous clinical record data from different sources holds great promise for improving the prognosis and diagnosis of LoS and mortality. Currently, most existing studies focus solely on a single data modality or rely on single-task learning, i.e., training LoS and mortality tasks separately. This limits the utilization of available multi-modal data and prevents the sharing of feature representations that could capture correlations between different tasks, ultimately hindering the model's performance. To address this challenge, this study proposes a novel Multi-Modal Multi-Task learning model, termed M3T-LM, to integrate clinical records to predict inpatients' LoS and mortality simultaneously. The M3T-LM framework incorporates multiple data modalities by constructing sub-models tailored to each modality. Specifically, a novel attention-embedded one-dimensional (1D) convolutional neural network (CNN) is designed to handle numerical data. Clinical notes are converted into sequence data, and two long short-term memory (LSTM) networks are exploited to model the textual sequence data. A two-dimensional (2D) CNN architecture, denoted CRXMDL, is designed to extract high-level features from chest X-ray (CXR) images. Subsequently, the sub-models are integrated to formulate the M3T-LM and capture the correlations between the LoS and mortality prediction tasks. The efficiency of the proposed method is validated on the MIMIC-IV dataset. The proposed method attained a test MAE of 5.54 for LoS prediction and a test F1 of 0.876 for mortality prediction. The experimental results demonstrate that our approach outperforms state-of-the-art (SOTA) methods in tackling mixed regression and classification tasks.

19.
Comput Biol Med ; 183: 109221, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39378579

ABSTRACT

Diagnosing dental caries poses a significant challenge in dentistry, necessitating precise and early detection for effective management. This study utilizes Self-Supervised Learning (SSL) tasks to improve the classification of dental caries in Cone Beam Computed Tomography (CBCT) images, employing the International Caries Detection and Assessment System (ICDAS). Faced with the challenge of scarce annotated medical images, our research employs SSL to exploit unlabeled data, thereby improving model performance. We have developed a pipeline incorporating unlabeled data extraction from CBCT exams and subsequent model training using SSL tasks. A distinctive aspect of our approach is the integration of image processing techniques with SSL tasks, along with exploring the necessity for unlabeled data. Our research aims to identify the most effective image processing techniques for data extraction, the most efficient deep learning architectures for caries classification, the impact of unlabeled dataset sizes on model performance, and the comparative effectiveness of different SSL approaches in this domain. Among the tested architectures, ResNet-18 combined with the SimCLR task demonstrated a macro-average F1-score of 88.42%, macro-average precision of 90.44%, and macro-average sensitivity of 86.67%, a 5.5% increase in F1-score compared to models using only the deep learning architecture. These results suggest that SSL can significantly enhance the accuracy and efficiency of caries classification in CBCT images.
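The macro-averaged metrics reported above weight every class equally regardless of frequency; a minimal macro-F1 sketch on invented labels (not ICDAS data):

```python
def macro_f1(y_true, y_pred, classes):
    """Macro-averaged F1: per-class F1 scores, averaged with equal class weight."""
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 0, 1, 2, 2, 2]
print(round(macro_f1(y_true, y_pred, classes=[0, 1, 2]), 3))  # -> 0.822
```

Macro averaging prevents a dominant class (e.g., caries-free teeth) from masking poor performance on rarer severity grades.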

20.
J Hazard Mater ; 480: 136003, 2024 Oct 02.
Article in English | MEDLINE | ID: mdl-39378597

ABSTRACT

Chronic exposure to arsenic is linked to the development of cancers in the skin, lungs, and bladder. Arsenic exposure manifests as variegated pigmentation and characteristic pitted keratosis on the hands and feet, which often precede the onset of internal cancers. Traditionally, human arsenic exposure is estimated through arsenic levels in biological tissues; however, these methods are invasive and time-consuming. This study aims to develop a noninvasive approach to predict arsenic exposure using artificial intelligence (AI) to analyze photographs of hands and feet. By incorporating well water consumption data and arsenic concentration levels, we developed an AI algorithm trained on 9988 hand and foot photographs from 2497 subjects. This algorithm correlates visual features of palmoplantar hyperkeratosis with arsenic exposure levels. Four pictures per patient, capturing both ventral and dorsal aspects of the hands and feet, were analyzed. The AI model utilized existing arsenic exposure data, including arsenic concentration (AC) and cumulative arsenic exposure (CAE), to make binary predictions of high and low arsenic exposure. The AI model achieved optimal area under the curve (AUC) values of 0.813 for AC and 0.779 for CAE. Recall and precision metrics were 0.729 and 0.705 for CAE, and 0.750 and 0.763 for AC, respectively. While biomarkers have traditionally been used to assess arsenic exposure, efficient noninvasive methods are lacking. To our knowledge, this is the first study to leverage deep learning for noninvasive arsenic exposure assessment. Despite challenges with binary classification due to imbalanced and sparse data, this approach demonstrates the potential for noninvasive estimation of arsenic concentration. Future studies should focus on increasing data volume and categorizing arsenic concentration statistics to enhance model accuracy. This rapid estimation method could significantly contribute to epidemiological studies and aid physicians in diagnosis.
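The AUC values reported above have a rank interpretation: the probability that a randomly chosen high-exposure subject scores above a randomly chosen low-exposure one. A sketch with invented scores and labels:

```python
def auc(scores, labels) -> float:
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    probability that a random positive scores higher than a random negative,
    counting ties as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.3, 0.6, 0.2]
labels = [1, 1, 0, 1, 0]   # 1 = high arsenic exposure (toy)
print(auc(scores, labels))  # every positive outranks every negative -> 1.0
```

This pairwise form is equivalent to integrating the ROC curve and is robust to the score scale, which matters when comparing models calibrated differently.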
