Results 1 - 20 of 34,259
1.
J Biomed Opt ; 30(Suppl 1): S13706, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39295734

ABSTRACT

Significance: Oral cancer surgery requires accurate margin delineation to balance complete resection with post-operative functionality. Current in vivo fluorescence imaging systems provide two-dimensional margin assessment yet fail to quantify tumor depth prior to resection. Harnessing structured light in combination with deep learning (DL) may provide near real-time three-dimensional margin detection. Aim: A DL-enabled fluorescence spatial frequency domain imaging (SFDI) system trained with in silico tumor models was developed to quantify the depth of oral tumors. Approach: A convolutional neural network was designed to produce tumor depth and concentration maps from SFDI images. Three in silico representations of oral cancer lesions were developed to train the DL architecture: cylinders, spherical harmonics, and composite spherical harmonics (CSHs). Each model was validated with in silico SFDI images of patient-derived tongue tumors, and the CSH model was further validated with optical phantoms. Results: The performance of the CSH model was superior when presented with patient-derived tumors (P < 0.05). The CSH model could predict depth and concentration within 0.4 mm and 0.4 µg/mL, respectively, for in silico tumors with depths less than 10 mm. Conclusions: A DL-enabled SFDI system trained with in silico CSHs demonstrates promise in defining the deep margins of oral tumors.


Subject(s)
Computer Simulation , Deep Learning , Mouth Neoplasms , Optical Imaging , Phantoms, Imaging , Surgery, Computer-Assisted , Optical Imaging/methods , Humans , Mouth Neoplasms/diagnostic imaging , Mouth Neoplasms/surgery , Mouth Neoplasms/pathology , Surgery, Computer-Assisted/methods , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Margins of Excision
2.
Methods Mol Biol ; 2847: 121-135, 2025.
Article in English | MEDLINE | ID: mdl-39312140

ABSTRACT

Fundamental to the diverse biological functions of RNA are its 3D structure and conformational flexibility, which enable single sequences to adopt a variety of distinct 3D states. Currently, computational RNA design tasks are often posed as inverse problems, where sequences are designed based on adopting a single desired secondary structure without considering 3D geometry and conformational diversity. In this tutorial, we present gRNAde, a geometric RNA design pipeline operating on sets of 3D RNA backbone structures to design sequences that explicitly account for RNA 3D structure and dynamics. gRNAde is a graph neural network that uses an SE(3)-equivariant encoder-decoder framework for generating RNA sequences conditioned on backbone structures where the identities of the bases are unknown. We demonstrate the utility of gRNAde for fixed-backbone re-design of existing RNA structures of interest from the PDB, including riboswitches, aptamers, and ribozymes. gRNAde is more accurate in terms of native sequence recovery while being significantly faster compared to existing physics-based tools for 3D RNA inverse design, such as Rosetta.
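The headline metric here, native sequence recovery, is simply the fraction of designed positions that match the native sequence. A minimal sketch (the function name is ours, not part of gRNAde's API):

```python
def sequence_recovery(designed: str, native: str) -> float:
    """Fraction of positions at which the designed sequence matches the native one."""
    if len(designed) != len(native):
        raise ValueError("sequences must be the same length")
    return sum(d == n for d, n in zip(designed, native)) / len(native)

print(sequence_recovery("GAUC", "GAAC"))  # 0.75 -- 3 of 4 positions recovered
```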


Subject(s)
Deep Learning , Nucleic Acid Conformation , RNA , Software , RNA/chemistry , RNA/genetics , Computational Biology/methods , RNA, Catalytic/chemistry , RNA, Catalytic/genetics , Models, Molecular , Neural Networks, Computer
3.
Methods Mol Biol ; 2856: 357-400, 2025.
Article in English | MEDLINE | ID: mdl-39283464

ABSTRACT

Three-dimensional (3D) chromatin interactions, such as enhancer-promoter interactions (EPIs), loops, topologically associating domains (TADs), and A/B compartments, play critical roles in a wide range of cellular processes by regulating gene expression. Recent development of chromatin conformation capture technologies has enabled genome-wide profiling of various 3D structures, even with single cells. However, current catalogs of 3D structures remain incomplete and unreliable due to differences in technology, tools, and low data resolution. Machine learning methods have emerged as an alternative to obtain missing 3D interactions and/or improve resolution. Such methods frequently use genome annotation data (ChIP-seq, DNAse-seq, etc.), DNA sequencing information (k-mers and transcription factor binding site (TFBS) motifs), and other genomic properties to learn the associations between genomic features and chromatin interactions. In this review, we discuss computational tools for predicting three types of 3D interactions (EPIs, chromatin interactions, and TAD boundaries) and analyze their pros and cons. We also point out obstacles to the computational prediction of 3D interactions and suggest future research directions.
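As the review notes, k-mer counts are a common sequence-derived feature for these predictors. A minimal, hypothetical featurizer illustrating the idea (not taken from any specific tool discussed here):

```python
from collections import Counter
from itertools import product

def kmer_features(seq: str, k: int = 3) -> dict:
    """Fixed-length k-mer count vector over the ACGT alphabet."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    # One entry per possible k-mer, so every sequence maps to the same feature space.
    return {"".join(kmer): counts["".join(kmer)]
            for kmer in product("ACGT", repeat=k)}

feats = kmer_features("ACGTACG", k=2)
print(feats["AC"], feats["CG"], feats["GT"])  # 2 2 1
```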


Subject(s)
Chromatin , Deep Learning , Chromatin/genetics , Chromatin/metabolism , Humans , Computational Biology/methods , Machine Learning , Genomics/methods , Enhancer Elements, Genetic , Promoter Regions, Genetic , Binding Sites , Genome , Software
4.
Methods Mol Biol ; 2847: 63-93, 2025.
Article in English | MEDLINE | ID: mdl-39312137

ABSTRACT

Machine learning algorithms, and in particular deep learning approaches, have recently garnered attention in the field of molecular biology due to remarkable results. In this chapter, we describe machine learning approaches specifically developed for the design of RNAs, with a focus on the learna_tools Python package, a collection of automated deep reinforcement learning algorithms for secondary structure-based RNA design. We explain the basic concepts of reinforcement learning and its extension, automated reinforcement learning, and outline how these concepts can be successfully applied to the design of RNAs. The chapter is structured to guide through the usage of the different programs with explicit examples, highlighting particular applications of the individual tools.
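In secondary structure-based RNA design, the reinforcement-learning reward is typically derived from the distance between the folded structure of a candidate sequence and the target structure. A toy sketch of such a reward using Hamming distance on dot-bracket strings (names are ours; the actual learna_tools reward differs in detail):

```python
def structure_distance(folded: str, target: str) -> int:
    """Hamming distance between two dot-bracket secondary structures."""
    assert len(folded) == len(target), "structures must be the same length"
    return sum(a != b for a, b in zip(folded, target))

def reward(folded: str, target: str) -> float:
    """Normalised reward in [0, 1]; 1.0 means the fold matches the target exactly."""
    return 1.0 - structure_distance(folded, target) / len(target)

print(reward("((..))", "((..))"))           # 1.0 -- exact match
print(round(reward("((..))", "(....)"), 3))  # 0.667 -- two mismatches out of six
```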


Subject(s)
Algorithms , Machine Learning , Nucleic Acid Conformation , RNA , Software , RNA/chemistry , RNA/genetics , Computational Biology/methods , Deep Learning
5.
Methods Mol Biol ; 2847: 153-161, 2025.
Article in English | MEDLINE | ID: mdl-39312142

ABSTRACT

Understanding the connection between complex structural features of RNA and biological function is a fundamental challenge in evolutionary studies and in RNA design. However, building datasets of RNA 3D structures and making appropriate modeling choices remain time-consuming and lack standardization. In this chapter, we describe the use of rnaglib to train supervised and unsupervised machine learning-based function prediction models on datasets of RNA 3D structures.


Subject(s)
Computational Biology , Nucleic Acid Conformation , RNA , Software , RNA/chemistry , RNA/genetics , Computational Biology/methods , Machine Learning , Models, Molecular
6.
Methods Mol Biol ; 2847: 241-300, 2025.
Article in English | MEDLINE | ID: mdl-39312149

ABSTRACT

Nucleic acid tests (NATs) are considered the gold standard in molecular diagnosis. To meet the demand for onsite, point-of-care, specific and sensitive, trace and genotype detection of pathogens and pathogenic variants, various types of NATs have been developed since the invention of PCR. As alternatives to traditional NATs (e.g., PCR), isothermal nucleic acid amplification techniques (INAATs) such as LAMP, RPA, SDA, HDA, NASBA, and HCA were gradually invented. PCR and most of these techniques depend heavily on efficient and optimal primer and probe design to deliver accurate and specific results. This chapter starts with a discussion of traditional NATs and INAATs in concert with a description of the computational tools available to aid primer/probe design for NATs and INAATs. Besides briefly covering nanoparticle-assisted NATs, a more comprehensive presentation is given on the role CRISPR-based technologies have played in molecular diagnosis. Here we provide examples of a few groundbreaking CRISPR assays that have been developed to counter epidemics and pandemics and outline CRISPR biology, highlighting the role of the CRISPR guide RNA and its design in any successful CRISPR-based application. In this respect, we tabularize computational tools that are available to aid the design of guide RNAs in CRISPR-based applications. In the second part of our chapter, we discuss machine learning (ML)- and deep learning (DL)-based computational approaches that facilitate the design of efficient primers and probes for NATs/INAATs and guide RNAs for CRISPR-based applications. Given the role of microRNAs (miRNAs) as potential future biomarkers for disease diagnosis, we also discuss ML/DL-based computational approaches for miRNA-target prediction.
Our chapter presents the evolution of nucleic acid-based diagnostic techniques from PCR and INAATs to more advanced CRISPR/Cas-based methodologies, in concert with the evolution of DL- and ML-based computational tools in the most relevant application domains.
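Primer design tools of the kind tabulated in the chapter all rest on a few elementary sequence checks. Two of them, GC content and the Wallace-rule melting temperature for short primers, can be sketched as follows (the example primer is hypothetical):

```python
def gc_content(primer: str) -> float:
    """Fraction of G/C bases -- a routine primer-design check."""
    primer = primer.upper()
    return (primer.count("G") + primer.count("C")) / len(primer)

def wallace_tm(primer: str) -> int:
    """Wallace rule of thumb for short primers: Tm = 2*(A+T) + 4*(G+C) degrees C."""
    primer = primer.upper()
    at = primer.count("A") + primer.count("T")
    gc = primer.count("G") + primer.count("C")
    return 2 * at + 4 * gc

p = "ATGCATGCATGCATGCAT"  # hypothetical 18-mer primer
print(wallace_tm(p), round(gc_content(p), 2))  # 52 0.44
```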


Subject(s)
Deep Learning , Humans , CRISPR-Cas Systems , Molecular Diagnostic Techniques/methods , Nucleic Acid Amplification Techniques/methods , RNA/genetics , Machine Learning , Clustered Regularly Interspaced Short Palindromic Repeats/genetics
7.
Methods Mol Biol ; 2834: 3-39, 2025.
Article in English | MEDLINE | ID: mdl-39312158

ABSTRACT

Quantitative structure-activity relationship (QSAR) modeling is a method for predicting the physical and biological properties of small molecules; it is in use in industry and public services. However, like any scientific method, it is challenged by more and more requests, especially considering its possible role in assessing the safety of new chemicals. To answer the question of whether QSAR, by exploiting available knowledge, can build new knowledge, this chapter reviews QSAR methods in search of a QSAR epistemology. QSAR stands on three pillars: biological data, chemical knowledge, and modeling algorithms. Usually the biological data, resulting from good experimental practice, are taken as a true picture of the world, and chemical knowledge has scientific bases; so if a QSAR model is not working, blame the modeling. The role of modeling in developing scientific theories, and in producing knowledge, is thus analyzed. QSAR is a mature technology and is part of a large body of in silico and other computational methods. An active debate about the acceptability of QSAR models, about the way to communicate them, and about the explanations to provide accompanies the development of today's QSAR models. An example of predicting possible endocrine-disrupting chemicals (EDCs) shows the many faces of modern QSAR methods.


Subject(s)
Quantitative Structure-Activity Relationship , Algorithms , Humans , Endocrine Disruptors/chemistry
8.
Spectrochim Acta A Mol Biomol Spectrosc ; 324: 125001, 2025 Jan 05.
Article in English | MEDLINE | ID: mdl-39180971

ABSTRACT

Utilizing visible and near-infrared (Vis-NIR) spectroscopy in conjunction with chemometric methods has become widespread for identifying plant diseases. However, a key obstacle is the extraction of relevant spectral characteristics. This study aimed to enhance sugarcane disease recognition by combining a convolutional neural network (CNN) with continuous wavelet transform (CWT) spectrograms for spectral feature extraction within the Vis-NIR range (380-1400 nm). Using 130 sugarcane leaf samples, the one-dimensional CWT coefficients obtained from Vis-NIR spectra were transformed into two-dimensional spectrograms. Spectrogram features were extracted with the CNN and incorporated into decision tree, K-nearest neighbour, partial least squares discriminant analysis, and random forest (RF) calibration models. The RF model, integrating spectrogram-derived features, demonstrated the best performance, with an average precision of 0.9111, sensitivity of 0.9733, specificity of 0.9791, and accuracy of 0.9487. This study may offer a non-destructive, rapid, and accurate means to detect sugarcane diseases, enabling farmers to receive timely and actionable insights on crop health, thus minimizing crop loss and optimizing yields.
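The CWT-to-spectrogram step can be illustrated with a NumPy-only sketch: each scale of a Morlet wavelet contributes one row of coefficients, turning a 1-D spectrum into a 2-D image suitable for a CNN (an illustrative implementation, not the authors' exact pipeline):

```python
import numpy as np

def morlet(t, w=5.0):
    """Real-valued Morlet mother wavelet."""
    return np.exp(-t**2 / 2) * np.cos(w * t)

def cwt_scalogram(signal, scales):
    """One row of wavelet coefficients per scale: a 1-D spectrum
    becomes a 2-D scale-by-position image."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1)   # support of the scaled wavelet
        psi = morlet(t / s) / np.sqrt(s)   # L2-normalised daughter wavelet
        out[i] = np.convolve(signal, psi, mode="same")
    return out

spectrum = np.sin(np.linspace(0.0, 8 * np.pi, 256))  # stand-in for a Vis-NIR spectrum
img = cwt_scalogram(spectrum, np.arange(1, 31))
print(img.shape)  # (30, 256) -- a 2-D spectrogram ready for a CNN
```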


Subject(s)
Deep Learning , Plant Diseases , Saccharum , Spectroscopy, Near-Infrared , Wavelet Analysis , Saccharum/chemistry , Spectroscopy, Near-Infrared/methods , Plant Leaves/chemistry , Least-Squares Analysis , Discriminant Analysis
9.
J Colloid Interface Sci ; 677(Pt A): 273-281, 2025 Jan.
Article in English | MEDLINE | ID: mdl-39094488

ABSTRACT

Wearable electronics based on conductive hydrogels (CHs) offer remarkable flexibility, conductivity, and versatility. However, the flexibility, adhesiveness, and conductivity of traditional CHs deteriorate when they freeze, limiting their utility in challenging environments. In this work, we introduce a PHEA-NaSS/G hydrogel that can be conveniently fabricated into a freeze-resistant conductive hydrogel by weakening the hydrogen bonds between water molecules. This is achieved through the synergistic interaction between the charged polar end group (-SO3-) and the glycerol-water binary solvent system. The hydrogel is simultaneously endowed with tunable mechanical properties and conductive pathways through modulation of the material composition. Owing to the uniform interconnectivity of the network structure, which results from strong intermolecular interactions and the enhancement effect of the charged polar end groups, the resulting hydrogel exhibits 174 kPa tensile strength, 2105% tensile strain, and excellent sensing ability (GF = 2.86, response time: 121 ms), and the sensor is well suited for repeatable and stable monitoring of human motion. Additionally, using a fully convolutional network (FCN) algorithm, the sensor can recognize handwritten English letters with an accuracy of 96.4%. This hydrogel strain sensor provides a simple route to multi-functional electronic devices, with significant potential in fields such as soft robotics, health monitoring, and human-computer interaction.
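The reported GF = 2.86 is the standard gauge-factor definition for a resistive strain sensor: the relative resistance change per unit strain. A sketch with hypothetical readings (the numbers below are ours, not the paper's measurements):

```python
def gauge_factor(r0: float, r: float, strain: float) -> float:
    """Gauge factor GF = (delta R / R0) / strain -- the sensitivity
    of a resistive strain sensor."""
    return ((r - r0) / r0) / strain

# Hypothetical readings: baseline 10 kOhm rising to 12 kOhm at 7 % strain.
print(round(gauge_factor(10.0, 12.0, 0.07), 2))  # 2.86
```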

10.
Ophthalmol Sci ; 5(1): 100587, 2025.
Article in English | MEDLINE | ID: mdl-39380882

ABSTRACT

Purpose: To apply methods for quantifying the uncertainty of deep learning segmentation of geographic atrophy (GA). Design: Retrospective analysis of OCT images and model comparison. Participants: One hundred twenty-six eyes from 87 participants with GA in the SWAGGER cohort of the Nonexudative Age-Related Macular Degeneration Imaged with Swept-Source OCT (SS-OCT) study. Methods: Manual segmentations of GA lesions were conducted on structural sub-retinal pigment epithelium en face images from the SS-OCT scans. Models were developed for 2 approximate Bayesian deep learning techniques, Monte Carlo dropout and ensemble, to assess the uncertainty of GA semantic segmentation, and were compared to a traditional deep learning model. Main Outcome Measures: Model performance (Dice score) was compared. Uncertainty was calculated as the Shannon entropy of the predictions. Results: The output of both Bayesian technique models showed a greater number of pixels with high entropy than the standard model. Dice scores for the Monte Carlo dropout method (0.90, 95% confidence interval 0.87-0.93) and the ensemble method (0.88, 95% confidence interval 0.85-0.91) were significantly higher (P < 0.001) than for the traditional model (0.82, 95% confidence interval 0.78-0.86). Conclusions: Quantifying the uncertainty in a prediction of GA may improve the trustworthiness of the models and aid clinicians in decision-making. The Bayesian deep learning techniques generated pixel-wise estimates of model uncertainty for segmentation, while also improving model performance compared with traditionally trained deep learning models. Financial Disclosures: Proprietary or commercial disclosure may be found in the Footnotes and Disclosures at the end of this article.
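Pixel-wise uncertainty of the kind described, Shannon entropy over Monte Carlo dropout passes, can be sketched as follows (an illustrative binary-segmentation version, not the study's code):

```python
import numpy as np

def mc_dropout_uncertainty(prob_stack):
    """Pixel-wise mean prediction and Shannon entropy (in bits) from
    T stochastic forward passes of a binary segmentation model.

    prob_stack: array of shape (T, H, W) holding the foreground
    probability map from each Monte Carlo dropout pass.
    """
    p = prob_stack.mean(axis=0)  # mean foreground probability per pixel
    eps = 1e-12                  # guard against log(0)
    entropy = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    return p, entropy

rng = np.random.default_rng(0)
stack = rng.uniform(size=(20, 4, 4))  # toy stack: 20 passes over a 4x4 patch
p, h = mc_dropout_uncertainty(stack)
print(h.shape)  # (4, 4): one uncertainty value per pixel
```

Pixels where the passes disagree (mean probability near 0.5) get entropy near 1 bit; confidently segmented pixels get entropy near 0.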

11.
Integr Zool ; 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39350466

ABSTRACT

Facial expressions in nonhuman primates are complex processes involving psychological, emotional, and physiological factors, and may use subtle signals to communicate significant information. However, uncertainty surrounds the functional significance of subtle facial expressions in animals. Using artificial intelligence (AI), this study found that nonhuman primates exhibit subtle facial expressions that are undetectable by human observers. We focused on the golden snub-nosed monkey (Rhinopithecus roxellana), a primate species with a multilevel society. We collected 3427 front-facing images of monkeys from 275 video clips captured in both wild and laboratory settings. Three deep learning models, EfficientNet, RepMLP, and Tokens-To-Token ViT, were utilized for AI recognition. To compare with human performance, two groups of observers were recruited: one with prior animal observation experience and one without. The results showed that human observers detected these facial expressions at rates close to chance (32.1% on average for inexperienced and 45.0% for experienced observers, against a chance level of 33%). In contrast, the AI deep learning models achieved significantly higher accuracy, with the best-performing model reaching 94.5%. Our results provide evidence that golden snub-nosed monkeys exhibit subtle facial expressions. The results further our understanding of animal facial expressions and of how such modes of communication may contribute to the origin of complex primate social systems.

12.
Eye Vis (Lond) ; 11(1): 38, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39350240

ABSTRACT

BACKGROUND: In recent years, ophthalmology has emerged as a new frontier in medical artificial intelligence (AI), with multi-modal AI in ophthalmology garnering significant attention across interdisciplinary research. The integration of various data types and models holds paramount importance, as it enables the provision of detailed and precise information for diagnosing eye and vision diseases. By leveraging multi-modal ophthalmic AI techniques, clinicians can enhance the accuracy and efficiency of diagnoses, reduce the risks associated with misdiagnosis and oversight, and manage eye and vision health more precisely. However, the widespread adoption of multi-modal ophthalmic AI still poses significant challenges. MAIN TEXT: In this review, we first comprehensively summarize the concept of modalities in ophthalmology, the forms of fusion between modalities, and the progress of multi-modal ophthalmic AI technology. Finally, we discuss the challenges facing current applications of multi-modal AI in ophthalmology and feasible future research directions. CONCLUSION: In the field of ophthalmic AI, evidence suggests that, when utilizing multi-modal data, deep learning-based multi-modal AI technology exhibits excellent diagnostic efficacy in assisting the diagnosis of various ophthalmic diseases. In particular, in the current era marked by the proliferation of large-scale models, multi-modal techniques represent the most promising and advantageous solution for addressing the diagnosis of various ophthalmic diseases from a comprehensive perspective. However, it must be acknowledged that numerous challenges remain before multi-modal techniques can be effectively employed in the clinical setting.

13.
Plant Methods ; 20(1): 153, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39350264

ABSTRACT

Accurate monitoring of wheat phenological stages is essential for effective crop management and informed agricultural decision-making. Traditional methods often rely on labour-intensive field surveys, which are prone to subjective bias and limited temporal resolution. To address these challenges, this study explores the potential of near-surface cameras combined with an advanced deep-learning approach to derive wheat phenological stages from high-quality, real-time RGB image series. Three deep learning models based on three different spatiotemporal feature fusion methods, namely sequential fusion, synchronous fusion, and parallel fusion, were constructed and evaluated for deriving wheat phenological stages from these near-surface RGB image series. Moreover, the impact of different image resolutions, capture perspectives, and model training strategies on the performance of the deep learning models was also investigated. The results indicate that the model using the sequential fusion method is optimal, with an overall accuracy (OA) of 0.935, a mean absolute error (MAE) of 0.069, an F1-score (F1) of 0.936, and a kappa coefficient (Kappa) of 0.924 for wheat phenological stage detection. Besides, an enhanced image resolution of 512 × 512 pixels and a suitable image capture perspective, specifically a sensor viewing angle of 40° to 60° from vertical, introduce more effective features for phenological stage detection, thereby enhancing the model's accuracy. Furthermore, concerning model training, applying a two-step fine-tuning strategy also enhances the model's robustness to random variations in perspective. This research introduces an innovative approach for real-time phenological stage detection and provides a solid foundation for precision agriculture.
By accurately deriving critical phenological stages, the methodology developed in this study supports the optimization of crop management practices, which may result in improved resource efficiency and sustainability across diverse agricultural settings. The implications of this work extend beyond wheat, offering a scalable solution that can be adapted to monitor other crops, thereby contributing to more efficient and sustainable agricultural systems.
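The reported Kappa of 0.924 is Cohen's kappa: accuracy corrected for the agreement expected by chance. A minimal sketch with hypothetical stage labels:

```python
def cohens_kappa(y_true, y_pred):
    """Cohen's kappa: agreement between predicted and true labels,
    corrected for the agreement expected by chance."""
    n = len(y_true)
    labels = set(y_true) | set(y_pred)
    po = sum(t == p for t, p in zip(y_true, y_pred)) / n     # observed agreement
    pe = sum((y_true.count(c) / n) * (y_pred.count(c) / n)   # chance agreement
             for c in labels)
    return (po - pe) / (1 - pe)

# Hypothetical phenological-stage labels for four observations.
truth = ["tillering", "jointing", "heading", "heading"]
pred  = ["tillering", "jointing", "heading", "jointing"]
print(round(cohens_kappa(truth, pred), 3))  # 0.636
```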

14.
Cancer Imaging ; 24(1): 129, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39350284

ABSTRACT

BACKGROUND: Lung cancer (LC) is a leading cause of cancer-related mortality, and immunotherapy (IO) has shown promise in treating advanced-stage LC. However, identifying patients likely to benefit from IO and monitoring treatment response remains challenging. This study aims to develop a predictive model for progression-free survival (PFS) in LC patients with IO based on clinical features and advanced imaging biomarkers. MATERIALS AND METHODS: A retrospective analysis was conducted on a cohort of 206 LC patients receiving IO treatment. Pre-treatment computed tomography images were used to extract advanced imaging biomarkers, including intratumoral and peritumoral-vasculature radiomics. Clinical features, including age, gene status, hematology, and staging, were also collected. Key radiomic and clinical features for predicting IO outcomes were identified using a two-step feature selection process, including univariate Cox regression and chi-squared test, followed by sequential forward selection. The DeepSurv model was constructed to predict PFS based on clinical and radiomic features. Model performance was evaluated using the area under the time-dependent receiver operating characteristic curve (AUC) and concordance index (C-index). RESULTS: Combining radiomics of intratumoral heterogeneity and peritumoral-vasculature with clinical features demonstrated a significant enhancement (p < 0.001) in predicting IO response. The proposed DeepSurv model exhibited a prediction performance with AUCs ranging from 0.76 to 0.80 and a C-index of 0.83. Furthermore, the predicted personalized PFS curves revealed a significant difference (p < 0.05) between patients with favorable and unfavorable prognoses. CONCLUSIONS: Integrating intratumoral and peritumoral-vasculature radiomics with clinical features enabled the development of a predictive model for PFS in LC patients with IO. 
The proposed model's capability to estimate individualized PFS probability and differentiate the prognosis status held promise to facilitate personalized medicine and improve patient outcomes in LC.
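The C-index used to evaluate the DeepSurv model is Harrell's concordance index: among comparable patient pairs, the fraction in which the patient who progressed earlier was assigned the higher predicted risk. A sketch with hypothetical data (censored patients contribute only as the later member of a pair):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index for survival predictions.

    times: observed PFS times; events: 1 if progression observed, 0 if
    censored; risks: predicted risk scores (higher = worse prognosis).
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if patient i progressed before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

times  = [5, 8, 12, 20]        # hypothetical PFS in months
events = [1, 1, 0, 1]          # patient 3 is censored
risks  = [0.9, 0.7, 0.4, 0.2]  # hypothetical model outputs
print(concordance_index(times, events, risks))  # 1.0 -- perfectly ordered
```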


Subject(s)
Deep Learning , Immunotherapy , Lung Neoplasms , Precision Medicine , Tomography, X-Ray Computed , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/pathology , Lung Neoplasms/therapy , Retrospective Studies , Male , Female , Middle Aged , Aged , Immunotherapy/methods , Precision Medicine/methods , Tomography, X-Ray Computed/methods , Progression-Free Survival , Radiomics
15.
Brief Bioinform ; 25(6)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39350339

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) technologies can generate transcriptomic profiles at a single-cell resolution in large patient cohorts, facilitating discovery of gene and cellular biomarkers for disease. Yet, when the number of biomarker genes is large, the translation to clinical applications is challenging due to prohibitive sequencing costs. Here, we introduce scPanel, a computational framework designed to bridge the gap between biomarker discovery and clinical application by identifying a sparse gene panel for patient classification from the cell population(s) most responsive to perturbations (e.g. diseases/drugs). scPanel incorporates a data-driven way to automatically determine a minimal number of informative biomarker genes. Patient-level classification is achieved by aggregating the prediction probabilities of cells associated with a patient using the area under the curve score. Application of scPanel to scleroderma, colorectal cancer, and COVID-19 datasets resulted in high patient classification accuracy using only a small number of genes (<20), automatically selected from the entire transcriptome. In the COVID-19 case study, we demonstrated cross-dataset generalizability in predicting disease state in an external patient cohort. scPanel outperforms other state-of-the-art gene selection methods for patient classification and can be used to identify parsimonious sets of reliable biomarker candidates for clinical translation.
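scPanel's patient-level step, aggregating per-cell probabilities into one score per patient and evaluating with the AUC, can be sketched as follows (the mean aggregation and the data are our illustrative assumptions, not scPanel's exact rule):

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank identity."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def patient_score(cell_probs):
    """Aggregate per-cell disease probabilities into one patient-level score
    (a plain mean here; scPanel's aggregation rule may differ)."""
    return sum(cell_probs) / len(cell_probs)

# Hypothetical per-cell probabilities; label 1 = diseased patient.
patients = {"p1": ([0.90, 0.80, 0.95], 1), "p2": ([0.70, 0.85], 1),
            "p3": ([0.20, 0.30, 0.10], 0), "p4": ([0.40, 0.35], 0)}
scores = [patient_score(cells) for cells, _ in patients.values()]
labels = [label for _, label in patients.values()]
print(auc(labels, scores))  # 1.0 -- patients perfectly separated
```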


Subject(s)
COVID-19 , Single-Cell Analysis , Humans , COVID-19/genetics , COVID-19/virology , Single-Cell Analysis/methods , Computational Biology/methods , Transcriptome , RNA-Seq/methods , Colorectal Neoplasms/genetics , Colorectal Neoplasms/classification , Gene Expression Profiling/methods , SARS-CoV-2/genetics , Sequence Analysis, RNA/methods , Software , Single-Cell Gene Expression Analysis
16.
Front Neurol ; 15: 1396513, 2024.
Article in English | MEDLINE | ID: mdl-39350970

ABSTRACT

Objective: The primary aim of this investigation was to devise an intelligent approach for interpreting and measuring the spatial orientation of semicircular canals based on cranial MRI. The ultimate objective is to employ this intelligent method to construct a precise mathematical model that accurately represents the spatial orientation of the semicircular canals. Methods: Using a dataset of 115 cranial MRI scans, this study employed the nnDetection deep learning algorithm to perform automated segmentation of the semicircular canals and the eyeballs (left and right). The center points of each semicircular canal were organized into an ordered structure using point characteristic analysis. Subsequently, a point-by-point plane fit was performed along these centerlines, and the normal vector of the semicircular canals was computed using the singular value decomposition method and calibrated to a standard spatial coordinate system whose transverse planes were the top of the common crus and the bottom of the eyeballs. Results: The nnDetection target recognition segmentation algorithm achieved Dice values of 0.9585 and 0.9663. The direction angles of the unit normal vectors for the left anterior, lateral, and posterior semicircular canal planes were [80.19°, 124.32°, 36.08°], [169.88°, 100.04°, 91.32°], and [79.33°, 130.63°, 137.4°], respectively. For the right side, the angles were [79.03°, 125.41°, 142.42°], [171.45°, 98.53°, 89.43°], and [80.12°, 132.42°, 44.11°], respectively. Conclusion: This study successfully achieved real-time automated understanding and measurement of the spatial orientation of semicircular canals, providing a solid foundation for personalized diagnosis and treatment optimization of vestibular diseases. It also establishes essential tools and a theoretical basis for future research into vestibular function and related diseases.
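The plane-fitting step described in the Methods, a least-squares plane through each canal's centerline points via singular value decomposition, can be sketched with NumPy (illustrative, not the authors' code; the reported direction angles then follow as the arccosine of each normal component):

```python
import numpy as np

def plane_normal(points):
    """Unit normal of the least-squares plane through 3-D points.

    After centering, the normal is the right singular vector associated
    with the smallest singular value (the direction of least variance).
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]

# Points on a ring in the z = 0 plane: the fitted normal should be +/- z.
ring = [(np.cos(a), np.sin(a), 0.0) for a in np.linspace(0, 2 * np.pi, 12)]
n = plane_normal(ring)
print(np.round(np.abs(n), 6))  # [0. 0. 1.]
```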

17.
EClinicalMedicine ; 76: 102802, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39351025

ABSTRACT

Background: As differentiating between lipomas and atypical lipomatous tumors (ALTs) based on imaging is challenging and requires biopsies, radiomics has been proposed to aid the diagnosis. This study aimed to externally and prospectively validate a radiomics model differentiating between lipomas and ALTs on MRI in three large, multi-center cohorts, and extend it with automatic and minimally interactive segmentation methods to increase clinical feasibility. Methods: Three study cohorts were formed, two for external validation containing data from medical centers in the United States (US) collected from 2008 until 2018 and the United Kingdom (UK) collected from 2011 until 2017, and one for prospective validation consisting of data collected from 2020 until 2021 in the Netherlands. Patient characteristics, MDM2 amplification status, and MRI scans were collected. An automatic segmentation method was developed to segment all tumors on T1-weighted MRI scans of the validation cohorts. Segmentations were subsequently quality scored. In case of insufficient quality, an interactive segmentation method was used. Radiomics performance was evaluated for all cohorts and compared to two radiologists. Findings: The validation cohorts included 150 (54% ALT), 208 (37% ALT), and 86 patients (28% ALT) from the US, UK and NL. Of the 444 cases, 78% were automatically segmented. For 22%, interactive segmentation was necessary due to insufficient quality, with only 3% of all patients requiring manual adjustment. External validation resulted in an AUC of 0.74 (95% CI: 0.66, 0.82) in US data and 0.86 (0.80, 0.92) in UK data. Prospective validation resulted in an AUC of 0.89 (0.83, 0.96). The radiomics model performed similar to the two radiologists (US: 0.79 and 0.76, UK: 0.86 and 0.86, NL: 0.82 and 0.85). 
Interpretation: The radiomics model extended with automatic and minimally interactive segmentation methods accurately differentiated between lipomas and ALTs in two large, multi-center external cohorts, and in prospective validation, performing similar to expert radiologists, possibly limiting the need for invasive diagnostics. Funding: Hanarth fonds.

18.
Front Oncol ; 14: 1431912, 2024.
Article in English | MEDLINE | ID: mdl-39351364

ABSTRACT

Introduction: The rapid advancement of science and technology has significantly expanded the capabilities of artificial intelligence, enhancing diagnostic accuracy for gastric disorders. Methods: This research aims to utilize endoscopic images to identify various gastric disorders using advanced convolutional neural network (CNN) models. The Kvasir dataset, comprising images of the normal Z-line, normal pylorus, ulcerative colitis, stool, and polyps, was used. Images were pre-processed and graphically analyzed to understand pixel intensity patterns, followed by feature extraction using adaptive thresholding and contour analysis for morphological values. Five deep transfer learning models (NASNetMobile, EfficientNetB5, EfficientNetB6, InceptionV3, and DenseNet169) and a hybrid model combining EfficientNetB6 and DenseNet169 were evaluated using various performance metrics. Results & discussion: On the complete image set, EfficientNetB6 achieved the top performance, with 99.88% accuracy at a loss of 0.049. Additionally, InceptionV3 achieved the highest testing accuracy of 97.94% for detecting the normal pylorus, while EfficientNetB6 excelled in detecting ulcerative colitis and the normal Z-line, with accuracies of 98.8% and 97.85%, respectively. EfficientNetB5 performed best for polyps and stool, with accuracies of 98.40% and 96.86%, respectively. The study demonstrates that deep transfer learning techniques can effectively predict and classify different gastric disorders at early stages, aiding experts in diagnosis and detection.

19.
Med Phys ; 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39353140

ABSTRACT

BACKGROUND: Cone beam computed tomography (CBCT) is a widely available modality, but its clinical utility has been limited by low detail conspicuity and quantitative accuracy. Convenient post-reconstruction denoising is subject to back-projected patterned residuals, while joint denoising-reconstruction is typically computationally expensive and complex. PURPOSE: In this study, we develop and evaluate a novel Metric-learning guided wavelet transform reconstruction (MEGATRON) approach that enhances image-domain quality through projection-domain processing. METHODS: Projection-domain processing has the benefit of being simple, efficient, and compatible with various reconstruction toolkits and vendor platforms. However, such methods typically show inferior performance in the final reconstructed image, because the denoising goals in the projection and image domains do not necessarily align. Motivated by these observations, this work aims to translate the demand for quality enhancement from the quantitative image domain to the more easily operable projection domain. Specifically, the proposed paradigm consists of a metric-learning module and a denoising network module. Via metric learning, enhancement objectives on the wavelet-encoded sinogram-domain data are defined to reflect post-reconstruction image discrepancy. The denoising network maps a measured cone-beam projection to its enhanced version, driven by the learnt objective. In doing so, the denoiser operates in the convenient sinogram-to-sinogram fashion while reflecting improvement in the reconstructed image as the final goal. Implementation-wise, metric learning was formalized as optimizing the weighted fitting of wavelet subbands, and a res-Unet, a Unet structure with residual blocks, was used for denoising. To obtain a quantitative reference, cone-beam projections were simulated using the X-ray based Cancer Imaging Simulation Toolkit (XCIST).
In both learning modules, a data set of 123 human thoraxes from the Open-Source Imaging Consortium (OSIC) Pulmonary Fibrosis Progression challenge was used. Reconstructed CBCT thoracic images were compared against the ground truth FB, and performance was assessed in root mean square error (RMSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM). RESULTS: MEGATRON achieved an RMSE (in HU), PSNR, and SSIM of 30.97 ± 4.25, 37.45 ± 1.78, and 93.23 ± 1.62, respectively. These values are on par with reported results from sophisticated physics-driven CBCT enhancement, demonstrating the promise and utility of the proposed MEGATRON method. CONCLUSION: We have demonstrated that incorporating the proposed metric learning into sinogram denoising introduces awareness of the reconstruction goal and improves the final quantitative performance. The proposed approach is compatible with a wide range of denoiser network structures and reconstruction modules, to suit customized needs or further improve performance.
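The "weighted fitting of wavelet subbands" idea can be sketched with a one-level 2-D Haar transform and a weighted subband loss. This is an illustrative NumPy sketch only: the subband weights here are hypothetical stand-ins for the learnt metric, and the paper's actual objective, wavelet basis, and res-Unet denoiser are not reproduced:

```python
import numpy as np

def haar2d(x):
    """One-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # row averages
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def subband_loss(pred, target, weights=(1.0, 2.0, 2.0, 4.0)):
    """Weighted MSE over Haar subbands of a sinogram patch; the weights
    are hypothetical, standing in for the learnt metric."""
    return sum(w * np.mean((p - t) ** 2)
               for w, p, t in zip(weights, haar2d(pred), haar2d(target)))

rng = np.random.default_rng(0)
clean = rng.standard_normal((8, 8))        # toy "clean" sinogram patch
noisy = clean + 0.1 * rng.standard_normal((8, 8))
loss = subband_loss(noisy, clean)
```

In the paper's setting, a loss of this shape would drive the sinogram-to-sinogram denoiser while the subband weights are fitted so that the sinogram-domain objective tracks post-reconstruction image discrepancy.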

20.
Comput Biol Med ; 182: 109088, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39353296

ABSTRACT

Feature attribution methods can visually highlight specific input regions containing influential aspects that affect a deep learning model's prediction. Recently, the use of feature attribution methods in electrocardiogram (ECG) classification has been increasing sharply, as they help clinicians understand the model's decision-making process and assess the model's reliability. However, a careful study to identify suitable methods for ECG datasets has been lacking, leading researchers to select methods without a thorough understanding of their appropriateness. In this work, we conduct a large-scale assessment of eleven popular feature attribution methods across five large ECG datasets using a model based on the ResNet-18 architecture. Our experiments include both automatic and human evaluations: annotated datasets were utilized for the automatic evaluations, and three cardiac experts were involved in the human evaluations. We found that Guided Grad-CAM, particularly when its absolute values are utilized, achieves the best performance. When Guided Grad-CAM was utilized as the feature attribution method, cardiac experts confirmed that it can identify diagnostically relevant electrophysiological characteristics, although its effectiveness varied across the 17 diagnoses we investigated.
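The "absolute values" post-processing this abstract highlights can be sketched as follows. This is a minimal NumPy illustration of rectifying and normalising a 1-D attribution map over ECG samples; the helper names are hypothetical, and computing the underlying Guided Grad-CAM map itself requires a trained network (e.g. via Captum), which is not reproduced here:

```python
import numpy as np

def postprocess_attribution(attr):
    """Take absolute attribution values and min-max normalise to [0, 1],
    mirroring the |Guided Grad-CAM| variant found to perform best."""
    a = np.abs(attr)
    span = a.max() - a.min()
    return (a - a.min()) / span if span > 0 else np.zeros_like(a)

def top_k_samples(attr, k):
    """Indices of the k most influential samples (hypothetical helper)."""
    return np.argsort(postprocess_attribution(attr))[::-1][:k]

attr = np.array([0.1, -0.9, 0.05, 0.7, -0.2])   # toy 1-D attribution map
saliency = postprocess_attribution(attr)
top2 = top_k_samples(attr, 2)
```

Note that taking absolute values treats strongly negative attributions as influential too, which is one plausible reason the rectified variant aligns better with annotated diagnostic regions.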
