Results 1 - 7 of 7
1.
Global Spine J : 21925682231154543, 2023 Jan 28.
Article in English | MEDLINE | ID: mdl-36708281

ABSTRACT

STUDY DESIGN: Retrospective, single-center cohort study.
OBJECTIVES: To validate a novel artificial intelligence (AI)-based algorithm against human-generated ground truth for radiographic parameters of adolescent idiopathic scoliosis (AIS).
METHODS: An AI algorithm was developed that detects the anatomical structures of interest (clavicles, cervical, thoracic, and lumbar spine, and sacrum) and calculates the essential radiographic parameters in AP spine X-rays fully automatically. The evaluated parameters included T1-tilt, clavicle angle (CA), coronal balance (CB), lumbar modifier, and Cobb angles in the proximal thoracic (C-PT), thoracic, and thoracolumbar regions. Measurements from 2 experienced physicians on 100 preoperative AP full-spine X-rays of AIS patients served as ground truth and were used to evaluate inter-rater and intra-rater reliability. Agreement between human raters and the AI was assessed by means of single-measure intraclass correlation coefficients (ICC; absolute agreement; >.75 rated as excellent), mean error, and additional statistical metrics.
RESULTS: The comparison between human raters yielded excellent ICC values for intra-rater (range: .97-1) and inter-rater (.85-.99) reliability. The algorithm determined all parameters in 100% of images, with excellent ICC values (.78-.98). Consistent with the human raters, ICC values were typically smallest for C-PT (eg, rater 1A vs AI: .78, mean error: 4.7°) and largest for CB (.96, -.5 mm) and CA (.98, .2°).
CONCLUSIONS: The AI algorithm shows excellent reliability and agreement with human raters for coronal parameters in preoperative full-spine images. The reliability and speed offered by the algorithm could support the efficient analysis of large datasets (eg, registry studies) and measurements in clinical practice.
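
Once endplate landmarks have been detected, the Cobb angle itself is simple geometry. A minimal Python sketch (not the authors' implementation; the landmark coordinates below are hypothetical stand-ins for detector output):

```python
import numpy as np

def endplate_angle(p_left: np.ndarray, p_right: np.ndarray) -> float:
    """Angle (degrees) of the line through two endplate landmarks vs. horizontal."""
    dx, dy = p_right - p_left
    return np.degrees(np.arctan2(dy, dx))

def cobb_angle(upper_endplate: tuple, lower_endplate: tuple) -> float:
    """Cobb angle between the superior endplate of the upper end vertebra
    and the inferior endplate of the lower end vertebra."""
    a = endplate_angle(*map(np.asarray, upper_endplate))
    b = endplate_angle(*map(np.asarray, lower_endplate))
    return abs(a - b)

# Hypothetical landmark coordinates (x, y) in image pixels:
upper = ((100.0, 210.0), (160.0, 200.0))  # superior endplate, upper end vertebra
lower = ((105.0, 420.0), (165.0, 445.0))  # inferior endplate, lower end vertebra
print(f"Cobb angle: {cobb_angle(upper, lower):.1f} deg")
```

The hard part, and the paper's contribution, is producing those landmarks reliably; the angle computation downstream is deterministic.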

2.
Biom J ; 65(6): e2100379, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36494091

ABSTRACT

In many medical applications, interpretable models with high prediction performance are sought. Often, these models must handle semistructured data, that is, a combination of tabular and image data. We show how to apply deep transformation models (DTMs) for distributional regression that fulfill these requirements. DTMs allow the data analyst to specify (deep) neural networks for different input modalities, making them applicable to a wide range of research questions. Like statistical models, DTMs can provide interpretable effect estimates while achieving the state-of-the-art prediction performance of deep neural networks. In addition, constructing ensembles of DTMs that retain model structure and interpretability allows epistemic and aleatoric uncertainty to be quantified. In this study, we compare several DTMs, including baseline-adjusted models, trained on a semistructured data set of 407 stroke patients, with the aim of predicting ordinal functional outcome three months after stroke. We follow statistical principles of model building to achieve an adequate trade-off between interpretability and flexibility while assessing the relative importance of the involved data modalities. We evaluate the models for an ordinal and a dichotomized version of the outcome, as used in clinical practice. We show that both tabular clinical and brain imaging data are useful for functional outcome prediction, and that models based on tabular data alone outperform those based on imaging data alone. There is no substantial evidence for improved prediction when combining the two data modalities. Overall, we highlight that DTMs provide a powerful, interpretable approach to analyzing semistructured data and have the potential to support clinical decision-making.
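
In its simplest form, a transformation model for an ordinal outcome reduces to a cumulative-link (proportional-odds) model whose ordered cutpoints play the role of the transformation function. A minimal, purely linear sketch on synthetic data (an illustration of the idea, not the authors' deep, semistructured model):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)

# Synthetic data: n patients, 2 tabular predictors, ordinal outcome with K = 4 levels
n, K = 400, 4
X = rng.normal(size=(n, 2))
true_beta = np.array([1.0, -0.5])
latent = X @ true_beta + rng.logistic(size=n)
y = np.digitize(latent, bins=[-1.0, 0.5, 2.0])  # values in {0, 1, 2, 3}

def neg_log_lik(params):
    # params: K-1 cutpoints (ordering enforced via exp-deltas) + regression weights
    raw, beta = params[:K - 1], params[K - 1:]
    cuts = np.cumsum(np.concatenate([[raw[0]], np.exp(raw[1:])]))
    eta = X @ beta
    # Cumulative probabilities P(y <= k | x) = sigmoid(cut_k - eta)
    cdf = expit(cuts[None, :] - eta[:, None])
    cdf = np.hstack([np.zeros((n, 1)), cdf, np.ones((n, 1))])
    probs = np.clip(cdf[np.arange(n), y + 1] - cdf[np.arange(n), y], 1e-12, None)
    return -np.log(probs).sum()

res = minimize(neg_log_lik, x0=np.zeros(K - 1 + 2), method="BFGS")
print("estimated beta:", res.x[K - 1:])  # interpretable log-odds effects
```

Replacing the linear predictor `X @ beta` with the output of modality-specific neural networks yields the deep, semistructured variant while keeping the ordinal likelihood, which is the structural idea behind DTMs.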


Subject(s)
Ischemic Stroke , Stroke , Humans , Neural Networks, Computer , Prognosis
3.
Med Image Anal ; 65: 101790, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32801096

ABSTRACT

At present, the majority of proposed deep learning (DL) methods provide point predictions without quantifying the model's uncertainty. However, quantifying the reliability of automated image analysis is essential, particularly in medicine, where physicians rely on the results for critical treatment decisions. In this work, we provide an entire framework for diagnosing ischemic stroke patients that incorporates Bayesian uncertainty into the analysis procedure. We present a Bayesian convolutional neural network (CNN) that yields a probability for a stroke lesion on 2D magnetic resonance (MR) images together with uncertainty information about the reliability of the prediction. For patient-level diagnoses, different aggregation methods that combine the individual image-level predictions are proposed and evaluated. These methods take advantage of the uncertainty in the image predictions and report model uncertainty at the patient level. In a cohort of 511 patients, our Bayesian CNN achieved an accuracy of 95.33% at the image level, a significant improvement of 2% over a non-Bayesian counterpart. The best patient-level aggregation method yielded an accuracy of 95.89%. Integrating uncertainty information about the image predictions into the aggregation models resulted in higher uncertainty measures for false patient classifications, which made it possible to filter out critical patient diagnoses that should be examined more closely by a medical doctor. We therefore recommend Bayesian approaches not only for improved image-level prediction and uncertainty estimation but also for detecting uncertain aggregations at the patient level.
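
One common way to obtain such image-level uncertainty is Monte Carlo dropout: keep dropout active at test time and aggregate several stochastic forward passes. A toy PyTorch sketch (not the authors' architecture; shapes and rates are illustrative):

```python
import torch
import torch.nn as nn

class SmallBayesianCNN(nn.Module):
    """Toy CNN whose dropout layers stay active at test time (MC dropout)."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.3),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.Dropout2d(0.3),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

@torch.no_grad()
def mc_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    model.train()  # keeps dropout stochastic; safe here since the toy model has no batch norm
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )                                   # (T, batch, classes)
    return probs.mean(0), probs.std(0)  # predictive mean and a simple spread measure

model = SmallBayesianCNN()
images = torch.randn(8, 1, 64, 64)  # stand-ins for 2D MR slices
mean_p, spread = mc_predict(model, images)
print(mean_p.shape, spread.shape)
```

Image-level means and spreads of this kind can then be pooled across all slices of a patient, for example by down-weighting uncertain images, which is the spirit of the aggregation methods compared in the paper.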


Subject(s)
Neural Networks, Computer , Stroke , Bayes Theorem , Humans , Magnetic Resonance Imaging , Reproducibility of Results , Stroke/diagnostic imaging , Uncertainty
4.
Assay Drug Dev Technol ; 16(6): 343-349, 2018.
Article in English | MEDLINE | ID: mdl-30148665

ABSTRACT

Deep convolutional neural networks show outstanding performance in image-based phenotype classification, provided that all existing phenotypes are presented during training of the network. However, in real-world high-content screening (HCS) experiments it is often impossible to know all phenotypes in advance. Moreover, the discovery of novel phenotypes can itself be an HCS outcome of interest. This aspect of HCS is not yet covered by classical deep learning approaches: when presented with an image of a novel phenotype, a trained network does not indicate a novelty but instead assigns the image to a wrong, known phenotype. To tackle this problem and address the need for novelty detection, we use a recently developed Bayesian approach for deep neural networks, Monte Carlo (MC) dropout, to define different uncertainty measures for each phenotype prediction. On real HCS data, we show that these uncertainty measures allow novel or unclear phenotypes to be identified. We also found that the MC dropout method yields a significant improvement in classification accuracy. The proposed procedure used in our HCS case study can easily be transferred to any existing network architecture and is beneficial in terms of both accuracy and novelty detection.
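
Typical uncertainty measures derived from MC dropout samples are the predictive entropy and the mutual information between prediction and model parameters; high values can flag candidate novel phenotypes. A minimal sketch, assuming a stack of softmax outputs from T stochastic forward passes (array shapes and the flagging threshold are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def mc_uncertainties(probs: np.ndarray, eps: float = 1e-12):
    """probs: (T, n_images, n_classes) softmax outputs from T MC dropout passes."""
    mean_p = probs.mean(axis=0)                                  # (n, C)
    pred_entropy = -(mean_p * np.log(mean_p + eps)).sum(axis=1)  # total uncertainty
    exp_entropy = -(probs * np.log(probs + eps)).sum(axis=2).mean(axis=0)
    mutual_info = pred_entropy - exp_entropy                     # model uncertainty
    return mean_p.argmax(axis=1), pred_entropy, mutual_info

# Illustrative usage with random stand-ins for softmax samples:
rng = np.random.default_rng(1)
raw = rng.random((50, 100, 4))
probs = raw / raw.sum(axis=2, keepdims=True)
labels, h, mi = mc_uncertainties(probs)
flagged = np.where(h > np.quantile(h, 0.95))[0]  # most uncertain images for review
print(len(flagged), "images flagged for review")
```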


Subject(s)
Bayes Theorem , Deep Learning , High-Throughput Screening Assays , Neural Networks, Computer , Monte Carlo Method , Phenotype
5.
J Biomol Screen ; 21(9): 998-1003, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26950929

ABSTRACT

Deep learning methods currently outperform traditional state-of-the-art computer vision algorithms in diverse applications and have recently even surpassed human performance in object recognition. Here we demonstrate the potential of deep learning methods for high-content screening-based phenotype classification. We trained a deep learning classifier, in the form of convolutional neural networks, on approximately 40,000 publicly available single-cell images from samples treated with compounds from four classes known to lead to different phenotypes. The input data consisted of multichannel images. The construction of appropriate feature definitions was part of the training and was carried out by the convolutional network itself, without the need for expert knowledge or handcrafted features. We compare our results against the recent state-of-the-art pipeline in which predefined features are extracted from each cell using specialized software and then fed into various machine learning algorithms (support vector machine, Fisher linear discriminant, random forest) for classification. The performance of all classification approaches is evaluated on an untouched test image set with known phenotype classes. Compared to the best reference machine learning algorithm, the misclassification rate is reduced from 8.9% to 6.6%.
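
The reference pipeline is straightforward to sketch with scikit-learn: precomputed per-cell features go into several standard classifiers, and the misclassification rate is measured on a held-out test set. A minimal sketch with synthetic features (for illustration only; the real features came from specialized image-analysis software):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Stand-ins for per-cell features extracted by specialized software
X = rng.normal(size=(2000, 200))
y = rng.integers(0, 4, size=2000)  # four phenotype classes
X[np.arange(2000), y] += 2.0       # inject some class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

classifiers = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Fisher LDA": LinearDiscriminantAnalysis(),
    "Random forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    err = 1.0 - clf.score(X_te, y_te)
    print(f"{name}: misclassification rate = {err:.3f}")
```

The CNN approach replaces the feature-extraction step entirely, learning its representations from raw multichannel pixels during training.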


Subject(s)
Image Processing, Computer-Assisted/statistics & numerical data , Neural Networks, Computer , Single-Cell Analysis/methods , Software , Algorithms , Humans , Machine Learning , Support Vector Machine
6.
Clin Cancer Res ; 21(23): 5253-63, 2015 Dec 01.
Article in English | MEDLINE | ID: mdl-25922429

ABSTRACT

PURPOSE: We aimed to identify gene expression signatures associated with angiogenesis and hypoxia pathways with predictive value for the response of patients with nonsquamous advanced non-small cell lung cancer (NSCLC) to bevacizumab/erlotinib (BE).
EXPERIMENTAL DESIGN: Whole-genome gene expression profiling was performed on 42 biopsy samples (from the SAKK 19/05 trial) using Affymetrix exon arrays, and associations with the following endpoints were investigated: time-to-progression (TTP) under therapy, tumor shrinkage (TS), and overall survival (OS). Next, we performed gene set enrichment analyses using genes associated with the angiogenic process and the hypoxia response to evaluate their predictive value for patient outcome.
RESULTS: Our analysis revealed that both the angiogenic and the hypoxia response signatures were enriched within the genes predictive of BE response, TS, and OS. Higher expression levels of the 10-gene angiogenesis-associated signature and lower levels of the 10-gene hypoxia response signature predicted improved TTP under BE: median TTP 7.1 months versus 2.1 months for low- versus high-risk patients (P = 0.005) and 6.9 months versus 2.9 months (P = 0.016), respectively. The hypoxia response signature was also associated with higher TS at 12 weeks and improved OS (17.8 months vs. 9.9 months for low- vs. high-risk patients, P = 0.001).
CONCLUSIONS: We identified gene expression signatures derived from the angiogenesis and hypoxia response pathways with predictive value for clinical outcome in patients with advanced nonsquamous NSCLC. This could lead to clinically relevant biomarkers that allow selecting the subset of patients who benefit from the treatment and predicting drug response.
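
A common way to turn such a signature into a patient-level risk score is to average the expression of its genes and split the cohort at the median; time-to-event curves of the two groups can then be compared with a log-rank test. A minimal sketch (gene names, data, and the use of the lifelines package are illustrative assumptions, not taken from the paper):

```python
import numpy as np
import pandas as pd
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
signature_genes = [f"GENE{i}" for i in range(10)]  # hypothetical 10-gene signature

# Stand-in expression matrix: 42 samples (rows) x 10 signature genes (columns)
expr = pd.DataFrame(rng.normal(size=(42, 10)), columns=signature_genes)
ttp_months = rng.exponential(scale=6.0, size=42)   # stand-in time-to-progression
progressed = rng.integers(0, 2, size=42).astype(bool)  # event indicator

score = expr[signature_genes].mean(axis=1)  # per-patient signature score
high_risk = (score > score.median()).to_numpy()  # median split into risk groups

res = logrank_test(
    ttp_months[~high_risk], ttp_months[high_risk],
    event_observed_A=progressed[~high_risk],
    event_observed_B=progressed[high_risk],
)
print(f"log-rank P = {res.p_value:.3f}")
```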


Subject(s)
Antineoplastic Combined Chemotherapy Protocols/therapeutic use , Carcinoma, Non-Small-Cell Lung/drug therapy , Carcinoma, Non-Small-Cell Lung/genetics , Lung Neoplasms/drug therapy , Lung Neoplasms/genetics , Transcriptome , Bevacizumab/administration & dosage , Biomarkers, Tumor , Biopsy , Carcinoma, Non-Small-Cell Lung/mortality , Carcinoma, Non-Small-Cell Lung/pathology , Cluster Analysis , Erlotinib Hydrochloride/administration & dosage , Female , Gene Expression Profiling , Humans , Hypoxia/genetics , Kaplan-Meier Estimate , Lung Neoplasms/mortality , Lung Neoplasms/pathology , Male , Neoplasm Staging , Neovascularization, Pathologic/drug therapy , Neovascularization, Pathologic/genetics , Prognosis , Reproducibility of Results , Treatment Outcome
7.
J Biomol Screen ; 12(8): 1042-9, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18087069

ABSTRACT

Recent technological advances in high-content screening instrumentation have increased its ease of use and throughput, expanding the application of high-content screening to the early stages of drug discovery. However, high-content screens produce complex data sets, presenting a challenge for both the extraction and the interpretation of meaningful information. This shifts the bottleneck of the high-content screening process from the experimental to the analytical stage. In this article, the authors discuss different approaches to data analysis, using a phenotypic neurite outgrowth screen as an example. Distance measures and hierarchical clustering methods lead to a deeper understanding of the relationships among high-content screening readouts. In addition, the authors introduce a hit selection procedure based on machine learning methods and demonstrate that it increases the hit verification rate significantly (by up to a factor of 5) compared with conventional hit selection based on single readouts only.
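
Clustering readouts by the similarity of their per-well profiles is a standard first step in this kind of analysis. A minimal SciPy sketch using correlation distance between readout columns (readout names and data are illustrative, not from the screen described here):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
readouts = ["neurite_length", "branch_points", "cell_count", "nucleus_area"]
# Stand-in per-well measurements: 384 wells x 4 readouts
data = rng.normal(size=(384, len(readouts)))
data[:, 1] = data[:, 0] * 0.9 + rng.normal(scale=0.3, size=384)  # a correlated pair

# Correlation distance between readout columns, then average-linkage clustering
dist = pdist(data.T, metric="correlation")
tree = linkage(dist, method="average")
groups = fcluster(tree, t=0.5, criterion="distance")
for name, g in zip(readouts, groups):
    print(f"{name}: cluster {g}")
```

Redundant readouts end up in the same cluster, which helps decide which measurements carry independent information before feeding them into a multivariate hit-selection model.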


Subject(s)
Neurites/metabolism , Tissue Array Analysis/standards , Cluster Analysis , Multivariate Analysis , Quality Control , Reproducibility of Results