Results 1 - 10 of 10
1.
Entropy (Basel); 25(10), 2023 Oct 21.
Article in English | MEDLINE | ID: mdl-37895591

ABSTRACT

When a client holds completely unlabeled data, unsupervised learning struggles to achieve accurate fault diagnosis. Semi-supervised federated learning, which enables interaction between labeled and unlabeled clients, has been developed to overcome this difficulty. However, existing semi-supervised federated learning methods can suffer from negative transfer because they fail to filter out unreliable model information from the unlabeled client. Therefore, this study proposes a dynamic semi-supervised federated learning fault diagnosis method with an attention mechanism (SSFL-ATT) to prevent the federation model from experiencing negative transfer. A federation strategy driven by an attention mechanism filters out the unreliable information hidden in each local model. SSFL-ATT preserves the federation model's performance while enabling the unlabeled client to perform fault classification. When an unlabeled client is present, SSFL-ATT improves fault diagnosis accuracy by 9.06% and 12.53% over existing semi-supervised federated learning methods on verification datasets from Case Western Reserve University and Shanghai Maritime University, respectively.

2.
Sensors (Basel); 21(19), 2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34640886

ABSTRACT

Wearable sensors are widely used in activity recognition (AR) tasks with broad applicability in health and well-being, sports, geriatric care, etc. Deep learning (DL) has been at the forefront of progress in activity classification with wearable sensors. However, most state-of-the-art DL models used for AR are trained to discriminate different activity classes at high accuracy without considering the confidence calibration of their predictive output. This results in probabilistic estimates that might not capture the true likelihood and are thus unreliable; in practice, such models tend to produce overconfident estimates. In this paper, the problem is addressed by proposing deep time ensembles, a novel ensembling method capable of producing calibrated confidence estimates from neural network architectures. In particular, the method trains an ensemble of network models on temporal sequences extracted by varying the window size over the input time series and averages the predictive output. The method is evaluated on four different benchmark HAR datasets and three different neural network architectures. Across all the datasets and architectures, our method improves calibration by reducing the expected calibration error (ECE) by at least 40%, thereby providing superior likelihood estimates. In addition to providing reliable predictions, our method also outperforms the state-of-the-art classification results on the WISDM, UCI HAR, and PAMAP2 datasets and performs as well as the state of the art on the Skoda dataset.
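The calibration metric used in this abstract, the expected calibration error, can be sketched as follows (a minimal NumPy sketch; the function name and equal-width binning scheme are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected Calibration Error (ECE): the sample-weighted average gap
    between mean predicted confidence and empirical accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece, n = 0.0, len(confidences)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.sum() / n * gap
    return ece

# A classifier that is always correct at confidence 1.0 is perfectly calibrated:
print(expected_calibration_error([1.0, 1.0], [1, 1]))  # 0.0
```

A model that claims 95% confidence but is right only half the time would score an ECE of 0.45 with this estimator, which is the kind of overconfidence the deep time ensembles are designed to reduce.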


Subjects
Human Activities, Neural Networks (Computer), Aged, Humans, Probability, Recognition (Psychology), Research Design
3.
J Dairy Sci; 103(6): 5170-5182, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32253036

ABSTRACT

An SNP-BLUP model is computationally scalable even for large numbers of genotyped animals. When genetic variation cannot be completely captured by SNP markers, a more accurate model is obtained by fitting a residual polygenic effect (RPG) as well. However, inclusion of the RPG effect increases the size of the SNP-BLUP mixed model equations (MME) by the number of genotyped animals. Consequently, the calculation of model reliabilities requiring elements of the inverted MME coefficient matrix becomes more computationally challenging with increasing numbers of genotyped animals. We present a Monte Carlo (MC)-based sampling method to estimate the reliability of the SNP-BLUP model including the RPG effect, where the MME size depends on the number of markers and MC samples. We compared reliabilities calculated using different RPG proportions and different MC sample sizes in analyzing 2 data sets. Data set 1 (data set 2) contained 19,757 (222,619) genotyped animals, with 11,729 (50,240) SNP markers, and 231,186 (13.35 million) pedigree animals. Correlations between the correct and the MC-calculated reliabilities were above 98% even with 5,000 MC samples and an 80% RPG proportion in both data sets. However, more MC samples were needed to achieve a small maximum absolute difference and mean squared error, particularly when the RPG proportion exceeded 20%. The computing time for MC SNP-BLUP was shorter than for GBLUP. In conclusion, the MC-based approach can be an effective strategy for calculating SNP-BLUP model reliability with an RPG effect included.
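The Monte Carlo idea can be illustrated with a toy SNP-BLUP model (a hypothetical sketch under simplifying assumptions: no residual polygenic effect, plain ridge-type shrinkage, and synthetic genotypes; `mc_reliability` and its parameterization are not the authors' algorithm):

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_reliability(Z, h2=0.5, n_samples=500):
    """Estimate per-animal reliability of genomic predictions by Monte Carlo:
    simulate phenotypes with known marker effects, re-solve the SNP-BLUP
    equations, and measure the sampled prediction error variance (PEV).
    Z: centered genotype matrix (animals x markers)."""
    n, m = Z.shape
    var_g, var_e = h2, 1.0 - h2
    lam = (var_e / var_g) * m                 # per-marker shrinkage factor
    Cinv_Zt = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T)
    errs = np.empty((n_samples, n))
    for s in range(n_samples):
        a = rng.normal(0.0, np.sqrt(var_g / m), m)      # true marker effects
        y = Z @ a + rng.normal(0.0, np.sqrt(var_e), n)  # simulated phenotypes
        errs[s] = Z @ (Cinv_Zt @ y - a)                 # genomic prediction error
    return 1.0 - errs.var(axis=0) / var_g     # reliability = 1 - PEV / var_g
```

As in the paper, the accuracy of the Monte Carlo estimate improves with the number of samples, while the system to solve scales with the number of markers rather than the number of genotyped animals.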


Subjects
Genome/genetics, Monte Carlo Method, Multifactorial Inheritance/genetics, Single Nucleotide Polymorphism/genetics, Animals, Breeding, Genotype, Genetic Models, Pedigree, Reproducibility of Results
4.
Epilepsy Behav; 101(Pt B): 106581, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31761686

ABSTRACT

Tuberous sclerosis complex (TSC) is a neurodevelopmental disorder caused by deletions in the TSC1 or TSC2 genes and is associated with epilepsy in up to 90% of patients. Seizures are suggested to start in benign brain tumors, cortical tubers, or in the perituberal tissue, making these tubers an interesting target for further research into the mechanisms underlying epileptogenesis in TSC. Animal models of TSC insufficiently capture the neurodevelopmental biology of cortical tubers; hence, human stem cell-based in vitro models of TSC are being increasingly explored in attempts to recapitulate tuber development and epileptogenesis. However, in vitro culture conditions for stem cell-derived neurons do not necessarily mimic physiological conditions. For example, very high glucose concentrations of up to 25 mM are common in culture media formulations. As TSC is potentially caused by a disruption of the mechanistic target of rapamycin (mTOR) pathway, a main integrator of metabolic information and intracellular signaling, we aimed to examine the impact of different glucose concentrations in the culture media on cellular phenotypes implicated in tuber characteristics. Here, we present preliminary data from a pilot study exploring cortical neuronal differentiation of human embryonic stem cells (hES) harboring a TSC2 knockout mutation (TSC2-/-) and an isogenic control line (TSC2+/+). We show that the commonly used high-glucose media profoundly mask cellular phenotypes in TSC2-/- cultures during neuronal differentiation. These phenotypes only become apparent when differentiating TSC2+/+ and TSC2-/- cultures under the more physiologically relevant condition of 5 mM glucose, suggesting that careful consideration of culture conditions is vital to ensuring the biological relevance and translatability of stem cell models for neurological disorders such as TSC.
This article is part of the Special Issue "Proceedings of the 7th London-Innsbruck Colloquium on Status Epilepticus and Acute Seizures".


Subjects
Glucose/pharmacology, Neural Stem Cells/drug effects, Neural Stem Cells/ultrastructure, Tuberous Sclerosis/pathology, Cell Differentiation/drug effects, Cell Proliferation, Cultured Cells, Embryonic Stem Cells/ultrastructure, Gene Knockout Techniques, Humans, Neurological Models, Mutation/drug effects, Neurogenesis, Phenotype, Pilot Projects, TOR Serine-Threonine Kinases/metabolism, Tuberous Sclerosis Complex 2 Protein/genetics
5.
Article in English | MEDLINE | ID: mdl-30628866

ABSTRACT

In silico toxicity prediction plays an important role in regulatory decision making and in the selection of leads in drug design, as in vitro/in vivo methods are often limited by ethics, time, budget, and other resources. Many computational methods have been employed to predict the toxicity profile of chemicals. This review provides a detailed end-to-end overview of the application of machine learning algorithms to Structure-Activity Relationship (SAR)-based predictive toxicology. From raw data to model validation, the importance of data quality is stressed, as it greatly affects the predictive power of derived models. Commonly overlooked challenges such as data imbalance, activity cliffs, model evaluation, and definition of the applicability domain are highlighted, and plausible solutions for alleviating these challenges are discussed.
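One of the challenges the review highlights, data imbalance, is commonly alleviated by inverse-frequency class weighting; a minimal sketch (the function name and the 90/10 toxic/non-toxic split are illustrative assumptions):

```python
import numpy as np

def balanced_class_weights(labels):
    """Reweight each class inversely to its frequency, so the minority
    (e.g. toxic) class contributes as much to the loss as the majority."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    weights = len(labels) / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))

# 90 non-toxic (0) vs 10 toxic (1) compounds:
print(balanced_class_weights([0] * 90 + [1] * 10))  # {0: 0.555..., 1: 5.0}
```

These weights can then be passed to most classifiers' loss functions, which is one of several remedies (alongside resampling) for the imbalance problem discussed in the review.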


Subjects
Environmental Pollutants/toxicity, Toxicity Tests/methods, Algorithms, Computer Simulation, Machine Learning, Quantitative Structure-Activity Relationship, Support Vector Machine
6.
Phys Med Biol; 69(15), 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-38981594

ABSTRACT

Objective. Deep learning models that aid in medical image assessment tasks must be both accurate and reliable to be deployed within clinical settings. While deep learning models have been shown to be highly accurate across a variety of tasks, measures that indicate the reliability of these models are less established. Increasingly, uncertainty quantification (UQ) methods are being introduced to inform users about the reliability of model outputs. However, most existing methods cannot be added to previously validated models because they are not post hoc, and they change a model's output. In this work, we overcome these limitations by introducing a novel post hoc UQ method, termed Local Gradients UQ, and demonstrate its utility for deep learning-based metastatic disease delineation. Approach. This method leverages a trained model's localized gradient space to assess sensitivities to trained model parameters. We compared the Local Gradients UQ method to non-gradient measures defined using model probability outputs. The performance of each uncertainty measure was assessed in four clinically relevant experiments: (1) response to artificially degraded image quality, (2) comparison between matched high- and low-quality clinical images, (3) false positive (FP) filtering, and (4) correspondence with physician-rated disease likelihood. Main results. (1) Response to artificially degraded image quality was enhanced by the Local Gradients UQ method: the median percent difference between matching lesions in non-degraded and most-degraded images was consistently higher for the Local Gradients uncertainty measure than for the non-gradient uncertainty measures (e.g. 62.35% vs. 2.16% for additive Gaussian noise). (2) The Local Gradients UQ measure responded better to high- and low-quality clinical images (p < 0.05 vs. p > 0.1 for both non-gradient uncertainty measures). (3) FP filtering performance was enhanced by the Local Gradients UQ method compared to the non-gradient methods, increasing the area under the receiver operating characteristic curve (ROC AUC) by 20.1% and decreasing the false positive rate by 26%. (4) The Local Gradients UQ method also corresponded more favorably with physician-rated likelihood for malignant lesions, increasing the associated ROC AUC by 16.2%. Significance. In summary, this work introduces and validates a novel gradient-based UQ method for deep learning-based medical image assessments to enhance user trust in deployed clinical models.
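The gradient-space idea can be illustrated on a toy logistic model (a hypothetical sketch, not the paper's Local Gradients UQ formulation: it scores a sample by the norm of the loss gradient with respect to the parameters, using the model's own hard prediction as a pseudo-label, so confident predictions yield near-zero gradients and uncertain ones do not):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_gradient_score(w, b, x):
    """Norm of the cross-entropy gradient w.r.t. (w, b) at input x,
    with the model's hard prediction used as a pseudo-label."""
    p = sigmoid(np.dot(w, x) + b)
    pseudo = float(p > 0.5)
    grad = np.concatenate([(p - pseudo) * x, [p - pseudo]])
    return np.linalg.norm(grad)

w, b = np.array([2.0, -1.0]), 0.0
x_confident = np.array([5.0, -5.0])   # strongly positive logit, p near 1
x_uncertain = np.array([0.0, 0.0])    # logit 0, p = 0.5
print(local_gradient_score(w, b, x_confident) < local_gradient_score(w, b, x_uncertain))  # True
```

The appeal of such gradient-based scores, as the abstract notes, is that they are post hoc: they require no retraining and leave the model's output unchanged.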


Subjects
Deep Learning, Computer-Assisted Image Processing, Uncertainty, Humans, Computer-Assisted Image Processing/methods
7.
Front Oncol; 12: 974467, 2022.
Article in English | MEDLINE | ID: mdl-36313629

ABSTRACT

Background: Using highly robust radiomic features in modeling is recommended, yet the impact of doing so on the radiomic model is unclear. This study evaluated a radiomic model's robustness and generalizability after screening out low-robustness features before modeling. The results were validated with four datasets and two clinically relevant tasks. Materials and methods: A total of 1,419 head-and-neck cancer patients' computed tomography images, gross tumor volume segmentations, and clinically relevant outcomes (distant metastasis and local-regional recurrence) were collected from four publicly available datasets. A perturbation method was implemented to simulate images, and radiomic feature robustness was quantified using the intra-class correlation coefficient (ICC). Three radiomic models were built using all features (ICC > 0), good-robustness features (ICC > 0.75), and excellent-robustness features (ICC > 0.95), respectively. A filter-based feature selection and Ridge classification method were used to construct the radiomic models. Model performance was assessed for both robustness and generalizability. The robustness of the model was evaluated by the ICC, and the generalizability of the model was quantified by the train-test difference in the Area Under the Receiver Operating Characteristic Curve (AUC). Results: The average model robustness ICC improved significantly from 0.65 to 0.78 (P < 0.0001) using good-robustness features and to 0.91 (P < 0.0001) using excellent-robustness features. Model generalizability also increased substantially, with a narrower gap between training and testing AUC: the mean train-test AUC difference was reduced from 0.21 to 0.18 (P < 0.001) with good-robustness features and to 0.12 (P < 0.0001) with excellent-robustness features. Furthermore, good-robustness features yielded the best average AUC on the unseen datasets, 0.58 (P < 0.001), across the four datasets and clinical outcomes. Conclusions: Including only robust features in radiomic modeling significantly improves model robustness and generalizability on unseen datasets. Yet the robustness of a radiomic model still has to be verified even when it is built with robust features, and an overly strict robustness threshold may prevent optimal performance on unseen datasets, as it may lower the discriminative power of the model.
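The robustness screen described above can be sketched with a one-way random-effects ICC computed over perturbed repeats (a minimal NumPy sketch; the synthetic features and the choice of the ICC(1,1) estimator are illustrative assumptions, since the abstract does not specify the exact ICC variant):

```python
import numpy as np

def icc_1_1(X):
    """One-way random-effects ICC(1,1): agreement of a feature across
    k perturbed repeats of n subjects. X has shape (n, k)."""
    n, k = X.shape
    row_means = X.mean(axis=1)
    msb = k * ((row_means - X.mean()) ** 2).sum() / (n - 1)       # between-subject MS
    msw = ((X - row_means[:, None]) ** 2).sum() / (n * (k - 1))   # within-subject MS
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, (50, 1))                # per-subject feature value
robust = signal + rng.normal(0.0, 0.05, (50, 3))      # stable under perturbation
fragile = signal + rng.normal(0.0, 2.0, (50, 3))      # noise swamps the signal
kept = [name for name, f in [("robust", robust), ("fragile", fragile)]
        if icc_1_1(f) > 0.75]                         # the ICC > 0.75 screen
```

With these noise levels the robust feature comfortably clears the 0.75 threshold while the fragile one falls well below it, mirroring the good-robustness screen used in the study.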

8.
Cells; 11(15), 2022 Aug 7.
Article in English | MEDLINE | ID: mdl-35954291

ABSTRACT

Characterizing the metabolic stability of novel radiotracers is an essential part of their development. While in vitro methods such as liver microsome assays or ex vivo blood or tissue samples provide information on overall stability, little or no information is obtained on the cytochrome P450 (CYP) enzyme- and isoform-specific contributions to the metabolic fate of individual radiotracers. Herein, we investigated recently established CYP-overexpressing hepatoblastoma cell lines (HepG2) for their suitability to study the metabolic stability of radiotracers in general and to gain insight into CYP isoform specificity. Wildtype HepG2 and CYP1A2-, CYP2C19-, and CYP3A4-overexpressing HepG2 cells were incubated with radiotracers, and metabolic turnover was analyzed. The optimized protocol, covering cell seeding in 96-well plates and analysis of the supernatant by radio thin-layer chromatography for higher throughput, was transferred to the evaluation of three 18F-labeled celecoxib-derived cyclooxygenase-2 inhibitors (coxibs). These investigations revealed time-dependent degradation of the intact radiotracers, as well as CYP isoform- and substrate-specific differences in their metabolic profiles. HepG2 CYP2C19 proved to be the cell line with the highest metabolic turnover for each radiotracer studied here. Comparison with human and murine liver microsome assays showed good agreement with the human metabolite profile obtained with the HepG2 cell lines. Therefore, CYP-overexpressing HepG2 cells are a good complement for assessing the metabolic stability of radiotracers and allow analysis of the CYP isoform-specific contribution to overall radiotracer metabolism.


Subjects
Hepatocellular Carcinoma, Liver Neoplasms, Animals, Cell Line, Cytochrome P-450 CYP2C19, Cytochrome P-450 Enzyme System/metabolism, Humans, Mice, Protein Isoforms
9.
Sci Total Environ; 791: 148394, 2021 Oct 15.
Article in English | MEDLINE | ID: mdl-34412403

ABSTRACT

Although dimensional analysis suggests sound functional forms (FFs) to calculate longitudinal dispersion coefficient (Kx), no attempt has been made to quantify both reliability of the estimated Kx value and its sensitivity to variation of the FFs' parameters. This paper introduces a new index named bandwidths similarity factor (bws-factor) to quantify the reliability of FFs based on a rigorous analysis of distinct calibration datasets to tune the FFs. We modified the bootstrap approach to ensure that each resampled calibration dataset is representative of available datapoints in a rich, global database of tracer studies. The dimensionless Kx values were calculated by 200 FFs tuned with the generalized reduced gradient algorithm. Correlation coefficients for the tuned FFs varied from 0.60 to 0.98. The bws-factor ranged from 0.11 to 1.00, indicating poor reliability of FFs for Kx calculation, mainly due to different sources of error in the Kx calculation process. The calculated exponent of the river's aspect ratio varied over a wider range (i.e., -0.76 to 1.50) compared to that computed for the river's friction term (i.e., -0.56 to 0.87). Since Kx is used in combination with one-dimensional numerical models in water quality studies, poor reliability in its estimation can result in unrealistic concentrations being simulated by the models downstream of pollutant release into rivers.
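The bootstrap component of the tuning procedure can be sketched in its vanilla form (the paper's modified variant additionally keeps each resample representative of the global tracer database; `bootstrap_exponent` and the synthetic power-law data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_exponent(x, y, n_boot=200):
    """Plain bootstrap: refit the exponent b of y ~ c * x**b on resampled
    (x, y) pairs and report the mean and spread of the estimates."""
    n = len(x)
    exps = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        b, _a = np.polyfit(np.log(x[idx]), np.log(y[idx]), 1)  # log-log fit
        exps.append(b)
    return float(np.mean(exps)), float(np.std(exps))

# Synthetic calibration data with a known exponent of 1.5:
x = rng.uniform(1.0, 10.0, 200)
y = x ** 1.5 * np.exp(rng.normal(0.0, 0.1, 200))
mean_b, sd_b = bootstrap_exponent(x, y)
```

The spread of the refitted exponents across resamples is what drives an index like the bws-factor: wide, dissimilar bands across calibration datasets indicate low reliability of the functional form.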


Subjects
Environmental Pollutants, Rivers, Calibration, Reproducibility of Results, Water Quality
10.
Adv Drug Deliv Rev; 86: 101-11, 2015 Jun 23.
Article in English | MEDLINE | ID: mdl-25794480

ABSTRACT

The use of in silico tools within the drug development process to predict a wide range of properties, including absorption, distribution, metabolism, elimination, and toxicity, has become increasingly important due to changes in legislation and both ethical and economic drivers to reduce animal testing. Whilst in silico tools have been used for decades, there remains reluctance to accept predictions based on these methods, particularly in regulatory settings. This apprehension arises in part from a lack of confidence in the reliability, robustness, and applicability of the models. To address this issue, we propose a scheme for the verification of in silico models that enables end users and modellers to assess the scientific validity of models in accordance with the principles of good computer modelling practice. We report here the implementation of the scheme within the Innovative Medicines Initiative project "eTOX" (electronic toxicity) and its application to the in silico models developed within the frame of this project.


Subjects
Theoretical Models, Computer Simulation, Humans, Pilot Projects, Reproducibility of Results