Results 1 - 20 of 548
1.
Genet Epidemiol ; 48(6): 270-288, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38644517

ABSTRACT

Genome-wide association studies (GWAS) typically use linear or logistic regression models to identify associations between phenotypes (traits) and genotypes (genetic variants) of interest. However, the use of regression with the additive assumption has potential limitations. First, the normality assumption on residuals rarely holds in practice, and deviation from normality increases the Type-I error rate. Second, building a model on this assumption ignores genetic structures such as dominant, recessive, and protective-risk cases. Ignoring these structures may lead to spurious conclusions about the association between a variant and a trait. We propose an assumption-free model built upon data-consistent inversion (DCI), a recently developed measure-theoretic framework for uncertainty quantification. The proposed DCI-derived model builds a nonparametric distribution on model inputs that propagates to the distribution of observed data, without requiring the normality assumption on residuals in the regression model. This characteristic enables the DCI-derived model to cover all genetic variants without emphasizing the additivity of the classic GWAS model. Simulations and a replication GWAS with data from the COPDGene study demonstrate that this model controls the Type-I error rate at least as well as the classic GWAS (additive linear model) approach while having similar or greater power to discover variants across different genetic modes of transmission.
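
A minimal sketch of the rejection-sampling form of data-consistent inversion is given below: an initial distribution on the model input is reweighted by the ratio of the observed-data density to the push-forward density, with no normality assumption on residuals. The forward map, distributions, and sample sizes are illustrative assumptions, not the COPDGene analysis.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical nonlinear genotype-to-trait forward map (not the paper's model)
def forward_map(lam):
    return 0.8 * lam + 0.1 * lam ** 2

# 1) Sample an initial (prior-like) distribution on the model input
lam_init = rng.normal(loc=0.0, scale=1.0, size=20000)
q_init = forward_map(lam_init)

# 2) Push-forward density of the initial samples, estimated nonparametrically
pushforward = gaussian_kde(q_init)

# 3) Density of the observed trait data (here: a KDE of synthetic observations)
observed = gaussian_kde(rng.normal(loc=0.5, scale=0.4, size=500))

# 4) Rejection sampling with the DCI ratio: observed density / push-forward density
ratio = observed(q_init) / pushforward(q_init)
accept = rng.uniform(size=lam_init.size) < ratio / ratio.max()
lam_updated = lam_init[accept]        # samples from the data-consistent update

print(f"accepted {accept.mean():.1%} of the initial samples")
```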


Subjects
Genome-Wide Association Study, Genetic Models, Genome-Wide Association Study/methods, Genome-Wide Association Study/statistics & numerical data, Humans, Computer Simulation, Single Nucleotide Polymorphism, Phenotype, Statistical Models, Genotype, Chronic Obstructive Pulmonary Disease/genetics, Genetic Variation
2.
Brief Bioinform ; 24(6)2023 09 22.
Article in English | MEDLINE | ID: mdl-37985456

ABSTRACT

Blood-brain barrier penetrating peptides (BBBPs) are short peptide sequences that possess the ability to traverse the selective blood-brain interface, making them valuable drug candidates or carriers for various payloads. However, the in vivo or in vitro validation of BBBPs is resource-intensive and time-consuming, driving the need for accurate in silico prediction methods. Unfortunately, the scarcity of experimentally validated BBBPs hinders the efficacy of current machine-learning approaches in generating reliable predictions. In this paper, we present DeepB3P3, a novel framework for BBBP prediction. Our contribution encompasses four key aspects. Firstly, we propose a novel deep learning model consisting of a transformer encoder layer, a convolutional network backbone, and a capsule network classification head. This integrated architecture effectively learns representative features from peptide sequences. Secondly, we introduce masked peptides as a powerful data augmentation technique to compensate for the small training set sizes in BBBP prediction. Thirdly, we develop a novel threshold-tuning method to handle imbalanced data by approximating the optimal decision threshold using the training set. Lastly, DeepB3P3 provides an accurate estimation of the uncertainty level associated with each prediction. Through extensive experiments, we demonstrate that DeepB3P3 achieves state-of-the-art accuracy of up to 98.31% on a benchmarking dataset, solidifying its potential as a promising computational tool for the prediction and discovery of BBBPs.
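
A minimal sketch of training-set threshold tuning for imbalanced data is shown below. The choice of the Matthews correlation coefficient as the tuning objective is our assumption (the paper's exact criterion may differ), and `model`, `X_train`, and `y_train` are hypothetical names.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

def tune_threshold(y_train, p_train, grid=np.linspace(0.05, 0.95, 19)):
    """Approximate the decision threshold on the training set by maximizing the
    Matthews correlation coefficient, which is robust to class imbalance."""
    scores = [matthews_corrcoef(y_train, (p_train >= t).astype(int)) for t in grid]
    return grid[int(np.argmax(scores))]

# Hypothetical usage with an already-trained classifier `model`:
# p_train = model.predict_proba(X_train)[:, 1]
# t_star = tune_threshold(y_train, p_train)
# y_pred_test = (model.predict_proba(X_test)[:, 1] >= t_star).astype(int)
```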


Subjects
Blood-Brain Barrier, Peptides, Machine Learning, Amino Acid Sequence, Computational Biology/methods
3.
Methods ; 225: 74-88, 2024 May.
Article in English | MEDLINE | ID: mdl-38493931

ABSTRACT

Computational modeling and simulation (CM&S) is a key tool in medical device design, development, and regulatory approval. For example, finite element analysis (FEA) is widely used to understand the mechanical integrity and durability of orthopaedic implants. The ASME V&V 40 standard and supporting FDA guidance provide a framework for establishing model credibility, enabling deeper reliance on CM&S throughout the total product lifecycle. Examples of how to apply the principles outlined in the ASME V&V 40 standard are important for facilitating greater adoption by the medical device community, but few published examples are available that demonstrate best practices. Therefore, this paper outlines an end-to-end (E2E) example of the ASME V&V 40 standard applied to an orthopaedic implant. The objective of this study was to illustrate how to establish the credibility of a computational model intended for use as part of regulatory evaluation. In particular, this study focused on whether a design change to a spinal pedicle screw construct (specifically, the addition of a cannulation to an existing non-cannulated pedicle screw) would compromise the rod-screw construct's mechanical performance. This question of interest (QOI) was addressed by establishing model credibility requirements according to the ASME V&V 40 standard. Experimental testing to support model validation was performed using spinal rods and non-cannulated pedicle screw constructs made with medical-grade titanium (Ti-6Al-4V ELI). FEA replicating the experimental tests was performed by three independent modelers and validated through comparisons of common mechanical properties such as stiffness and yield force. The validated model was then used to simulate F1717 compression-bending testing on the new cannulated pedicle screw design to answer the QOI, without performing any additional experimental testing. This E2E example provides a realistic scenario for the application of the ASME V&V 40 standard to orthopaedic medical device applications.


Subjects
Finite Element Analysis, Pedicle Screws, Pedicle Screws/standards, Humans, Computer Simulation, Materials Testing/methods, Materials Testing/standards, Titanium/chemistry, Compressive Strength
4.
Proc Natl Acad Sci U S A ; 119(43): e2204569119, 2022 10 25.
Article in English | MEDLINE | ID: mdl-36256807

ABSTRACT

Many applications of machine-learning methods involve an iterative protocol in which data are collected, a model is trained, and then outputs of that model are used to choose what data to consider next. For example, a data-driven approach for designing proteins is to train a regression model to predict the fitness of protein sequences and then use it to propose new sequences believed to exhibit greater fitness than observed in the training data. Since validating designed sequences in the wet laboratory is typically costly, it is important to quantify the uncertainty in the model's predictions. This is challenging because of a characteristic type of distribution shift between the training and test data that arises in the design setting: one in which the training and test data are statistically dependent, as the latter is chosen based on the former. Consequently, the model's error on the test data (that is, the designed sequences) has an unknown and possibly complex relationship with its error on the training data. We introduce a method to construct confidence sets for predictions in such settings, which accounts for the dependence between the training and test data. The confidence sets we construct have finite-sample guarantees that hold for any regression model, even when it is used to choose the test-time input distribution. As a motivating use case, we use real datasets to demonstrate how our method quantifies uncertainty for the predicted fitness of designed proteins and can therefore be used to select design algorithms that achieve acceptable tradeoffs between high predicted fitness and low predictive uncertainty.
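
For orientation, the sketch below shows plain split conformal regression, which yields finite-sample-valid intervals when the calibration and test data are exchangeable; the paper's contribution is to generalize this construction with data-dependent weights so the guarantee survives the train/test dependence induced by design. Variable names are hypothetical.

```python
import numpy as np

def split_conformal_interval(residuals_cal, y_pred_test, alpha=0.1):
    """Plain split conformal regression: calibrate on held-out absolute
    residuals, then return (lower, upper) bands for new predictions."""
    n = len(residuals_cal)
    # Finite-sample-valid quantile level for exchangeable data
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    q = np.quantile(np.abs(residuals_cal), min(q_level, 1.0), method="higher")
    return y_pred_test - q, y_pred_test + q

# Hypothetical usage:
# residuals_cal = y_cal - model.predict(X_cal)
# lo, hi = split_conformal_interval(residuals_cal, model.predict(X_designed))
```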


Subjects
Algorithms, Machine Learning, Feedback, Uncertainty, Molecular Conformation
5.
Proc Natl Acad Sci U S A ; 119(42): e2208095119, 2022 10 18.
Article in English | MEDLINE | ID: mdl-36215470

ABSTRACT

Uncertainty in climate projections is driven by three components: scenario uncertainty, intermodel uncertainty, and internal variability. Although socioeconomic climate impact studies increasingly take the first two components into account, little attention has been paid to the role of internal variability, even though underestimating this uncertainty may lead to underestimating the socioeconomic costs of climate change. Using large ensembles from seven coupled general circulation models with a total of 414 model runs, we partition the climate uncertainty in classic dose-response models relating county-level corn yield, mortality, and per-capita gross domestic product to temperature in the continental United States. The partitioning of uncertainty depends on the time frame of projection, the impact model, and the geographic region. Internal variability represents more than 50% of the total climate uncertainty in certain projections, including mortality projections for the early 21st century, although its relative influence decreases over time. We recommend including uncertainty due to internal variability in many projections of temperature-driven impacts, including early-century and midcentury projections, projections in regions with high internal variability such as the Upper Midwest United States, and impacts driven by nonlinear relationships.
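
A simple ANOVA-style sketch of the three-way variance partition is given below; the array layout and the exact decomposition are illustrative assumptions rather than the paper's estimator, which works with 414 runs from seven large ensembles.

```python
import numpy as np

def partition_uncertainty(impacts):
    """impacts: array of shape (n_scenarios, n_models, n_members) holding a
    projected impact (e.g., county corn-yield change) for one time window.
    Returns the scenario, intermodel, and internal-variability fractions of
    the total climate uncertainty in a simple ANOVA-style decomposition."""
    member_mean = impacts.mean(axis=2)               # (scenario, model)
    model_mean = member_mean.mean(axis=1)            # (scenario,)
    scenario_var = model_mean.var()                   # spread across scenarios
    intermodel_var = member_mean.var(axis=1).mean()   # spread across models
    internal_var = impacts.var(axis=2).mean()         # spread across members
    total = scenario_var + intermodel_var + internal_var
    return {k: v / total for k, v in
            [("scenario", scenario_var), ("model", intermodel_var),
             ("internal", internal_var)]}

# Synthetic example: 3 scenarios, 7 models, 20 members each
rng = np.random.default_rng(1)
impacts = rng.normal(size=(3, 7, 20)) + np.arange(3)[:, None, None]
print(partition_uncertainty(impacts))
```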


Subjects
Climate Change, Zea mays, Forecasting, Temperature, Uncertainty, United States
6.
BMC Bioinformatics ; 25(1): 240, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39014339

ABSTRACT

BACKGROUND: Identification of human leukocyte antigen (HLA) types from DNA-sequenced human samples is important in organ transplantation and cancer immunotherapy, and remains a challenging task given the sequence homology and extreme polymorphism of HLA genes. RESULTS: We present Orthanq, a novel statistical model and corresponding application for transparent and uncertainty-aware quantification of haplotypes. We utilize our approach to perform HLA typing while, for the first time, reporting the uncertainty of predictions and transparently observing mutations beyond the reported HLA types. Using 99 gold-standard samples from the 1000 Genomes, Illumina Platinum Genomes and Genome in a Bottle projects, we show that Orthanq can provide overall superior accuracy and shorter runtimes than state-of-the-art HLA typers. CONCLUSIONS: Orthanq is the first approach that can directly utilize existing pangenome alignments and type all HLA loci. Moreover, it can be generalized for uses beyond HLA typing, e.g. for virus lineage quantification. Orthanq is available at https://orthanq.github.io .


Subjects
HLA Antigens, Haplotypes, Histocompatibility Testing, Humans, Haplotypes/genetics, HLA Antigens/genetics, Histocompatibility Testing/methods, Software, Uncertainty, DNA Sequence Analysis/methods, Statistical Models, Algorithms
7.
J Struct Biol ; 216(1): 108058, 2024 03.
Article in English | MEDLINE | ID: mdl-38163450

ABSTRACT

In single-particle cryo-electron microscopy (cryo-EM), efficient determination of orientation parameters for particle images poses a significant challenge yet is crucial for reconstructing 3D structures. This task is complicated by the high noise levels in the datasets, which often include outliers, necessitating several time-consuming 2D clean-up processes. Recently, solutions based on deep learning have emerged, offering a more streamlined approach to the traditionally laborious task of orientation estimation. These solutions employ amortized inference, eliminating the need to estimate parameters individually for each image. However, these methods frequently overlook the presence of outliers and may not adequately concentrate on the components used within the network. This paper introduces a novel method using a 10-dimensional feature vector for orientation representation, extracting orientations as unit quaternions with an accompanying uncertainty metric. Furthermore, we propose a unique loss function that considers the pairwise distances between orientations, thereby enhancing the accuracy of our method. Finally, we also comprehensively evaluate the design choices in constructing the encoder network, a topic that has not received sufficient attention in the literature. Our numerical analysis demonstrates that our methodology effectively recovers orientations from 2D cryo-EM images in an end-to-end manner. Notably, the inclusion of uncertainty quantification allows for direct clean-up of the dataset at the 3D level. Lastly, we package our proposed methods into a user-friendly software suite named cryo-forum, designed for easy access by developers.


Subjects
Computer-Assisted Image Processing, Software, Cryoelectron Microscopy/methods, Uncertainty, Computer-Assisted Image Processing/methods, Single Molecule Imaging
8.
Mol Pharm ; 21(9): 4356-4371, 2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39132855

ABSTRACT

We present a novel computational approach for predicting human pharmacokinetics (PK) that addresses the challenges of early-stage drug design. Our study introduces and describes a large-scale data set of 11 clinical PK end points, encompassing over 2700 unique chemical structures, used to train machine learning models. To that end, multiple advanced training strategies are compared, including the integration of in vitro data and a novel self-supervised pretraining task. In addition to the predictions, our final model provides meaningful epistemic uncertainties for every data point. This allows us to successfully identify regions of exceptional predictive performance, with an absolute average fold error (AAFE/geometric mean fold error) of less than 2.5 across multiple end points. Together, these advancements represent a significant leap toward actionable PK predictions, which can be utilized early in the drug design process to expedite development and reduce reliance on nonclinical studies.
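
For reference, the absolute average fold error used as the headline metric can be computed as below; the example values are made up.

```python
import numpy as np

def aafe(predicted, observed):
    """Absolute average fold error (geometric mean fold error):
    10 ** mean(|log10(pred / obs)|). A value below 2.5 means predictions are,
    on average, within 2.5-fold of the observed clinical PK end point."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return 10 ** np.mean(np.abs(np.log10(predicted / observed)))

# Toy example: clearances in L/h for five hypothetical compounds
print(aafe([4.0, 12.0, 0.9, 30.0, 7.0], [5.0, 10.0, 1.1, 18.0, 6.5]))
```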


Subjects
Drug Design, Machine Learning, Humans, Pharmacokinetics, Pharmaceutical Preparations/chemistry
9.
J Theor Biol ; 592: 111895, 2024 09 07.
Article in English | MEDLINE | ID: mdl-38969168

ABSTRACT

In HIV drug therapy, the high variability of CD4+ T cell counts and viral loads brings uncertainty to the choice of treatment options and the ultimate treatment efficacy, which may be the result of poor drug adherence. We develop a dynamical HIV model coupled with pharmacokinetics, driven by drug adherence as a random variable, and systematically study the uncertainty quantification, aiming to construct the relationship between drug adherence and therapeutic effect. Using adaptive generalized polynomial chaos, stochastic solutions are approximated as polynomials of the input random parameters. Numerical simulations show that results obtained by this method agree well with results obtained through Monte Carlo sampling, which helps verify the accuracy of the approximation. Based on these expansions, we calculate the time-dependent probability density functions of this system theoretically and numerically. To verify the applicability of this model, we fit clinical data of four HIV patients, and the goodness-of-fit results demonstrate that the proposed random model depicts the dynamics of HIV well. Sensitivity analyses based on the Sobol index indicate that the randomness of the drug effect has the greatest impact on both CD4+ T cells and viral loads, compared to random initial values, which further highlights the significance of drug adherence. The proposed models and qualitative analysis results, along with monitoring of CD4+ T cell counts and viral loads, evaluate the influence of drug adherence on HIV treatment, which helps to better interpret clinical data with fluctuations and contributes to the design of individual-based optimal antiretroviral strategies.
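
The Sobol sensitivity analysis mentioned above can be sketched as follows. The paper computes the indices from an adaptive generalized polynomial chaos expansion; this illustration instead uses a Monte Carlo pick-freeze estimator and a toy model in place of the HIV dynamics.

```python
import numpy as np

def first_order_sobol(model, sample_params, n=10000, seed=0):
    """Pick-freeze Monte Carlo estimate of first-order Sobol indices.
    `model` maps an (n, d) parameter matrix to a length-n output (e.g., viral
    load at a fixed time); `sample_params(n, rng)` draws (n, d) input samples,
    standing in for the random drug-adherence and initial-value parameters."""
    rng = np.random.default_rng(seed)
    A, B = sample_params(n, rng), sample_params(n, rng)
    yA, yB = model(A), model(B)
    var_y = np.var(np.concatenate([yA, yB]))
    indices = []
    for i in range(A.shape[1]):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                     # swap only the i-th input
        yABi = model(ABi)
        indices.append(np.mean(yB * (yABi - yA)) / var_y)   # Saltelli estimator
    return np.array(indices)

# Toy stand-in: output dominated by the first input, so its index should dominate
sobol = first_order_sobol(lambda X: 5 * X[:, 0] + X[:, 1],
                          lambda n, rng: rng.uniform(0, 1, size=(n, 2)))
print(sobol)
```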


Subjects
Anti-HIV Agents, HIV Infections, Medication Adherence, Viral Load, Humans, Anti-HIV Agents/therapeutic use, CD4-Positive T-Lymphocytes/virology, Computer Simulation, HIV Infections/drug therapy, HIV Infections/virology, Biological Models, Monte Carlo Method, Stochastic Processes, Uncertainty
10.
J Sleep Res ; : e14300, 2024 Aug 07.
Article in English | MEDLINE | ID: mdl-39112022

ABSTRACT

Wearable electroencephalography devices are emerging as a cost-effective and ergonomic alternative to gold-standard polysomnography, paving the way for better health monitoring and sleep disorder screening. Machine learning makes it possible to automate sleep stage classification, but trust and reliability issues have hampered its adoption in clinical applications. Estimating uncertainty is a crucial factor in enhancing reliability by identifying regions of heightened and diminished confidence. In this study, we used an uncertainty-centred machine learning pipeline, U-PASS, to automate sleep staging in a challenging real-world dataset of single-channel electroencephalography and accelerometry collected with a wearable device from an elderly population. We were able to effectively limit the uncertainty of our machine learning model and to reliably inform clinical experts of which predictions were uncertain, improving the machine learning model's reliability. This increased the five-stage sleep-scoring accuracy of a state-of-the-art machine learning model from 63.9% to 71.2% on our dataset. Remarkably, the machine learning approach outperformed the human expert in interpreting these wearable data. Manual review by sleep specialists, without specific training for sleep staging on wearable electroencephalography, proved ineffective. The clinical utility of this automated remote monitoring system was also demonstrated, establishing a strong correlation between the predicted sleep parameters and the reference polysomnography parameters, and reproducing known correlations with the apnea-hypopnea index. In essence, this work presents a promising avenue to revolutionize remote patient care through the power of machine learning, using an automated data-processing pipeline enhanced with uncertainty estimation.

11.
Biometrics ; 80(2)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38888097

ABSTRACT

Convolutional neural networks (CNNs) provide flexible function approximations for a wide variety of applications when the input variables are in the form of images or spatial data. Although CNNs often outperform traditional statistical models in prediction accuracy, statistical inference, such as estimating the effects of covariates and quantifying the prediction uncertainty, is not trivial due to the highly complicated model structure and overparameterization. To address this challenge, we propose a new Bayesian approach that embeds CNNs within the generalized linear model (GLM) framework. We use nodes extracted from the last hidden layer of the CNN with Monte Carlo (MC) dropout as informative covariates in the GLM. This improves accuracy in prediction and regression coefficient inference, allowing for the interpretation of coefficients and uncertainty quantification. By fitting ensemble GLMs across multiple realizations from MC dropout, we can account for uncertainties in extracting the features. We apply our methods to biological and epidemiological problems that have both high-dimensional correlated inputs and vector covariates. Specifically, we consider malaria incidence data, brain tumor image data, and fMRI data. By extracting information from correlated inputs, the proposed method can provide an interpretable Bayesian analysis. The algorithm is broadly applicable to image regression and correlated data analysis, enabling fast and accurate Bayesian inference.
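
A condensed sketch of the embedding idea for a binary outcome is shown below: the CNN's last hidden layer is extracted under Monte Carlo dropout and used, together with vector covariates, as covariates in a logistic GLM fitted once per dropout realization. The architecture, feature dimension, and use of scikit-learn's LogisticRegression are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class SmallCNN(nn.Module):
    """Toy CNN for 1 x 32 x 32 inputs; `features` ends at the last hidden layer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Dropout(p=0.5), nn.Linear(8 * 16 * 16, 32), nn.ReLU())
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return self.head(self.features(x))

def mc_dropout_glm(model, images, z_covariates, y, n_realizations=20):
    """Fit one logistic GLM per MC-dropout realization, using the CNN's last
    hidden layer plus the vector covariates z as GLM covariates. The spread of
    the returned coefficients reflects feature-extraction uncertainty."""
    model.train()                        # keep dropout stochastic at inference
    coef_draws = []
    with torch.no_grad():
        for _ in range(n_realizations):
            h = model.features(images).numpy()        # extracted hidden nodes
            X = np.hstack([h, z_covariates])
            glm = LogisticRegression(max_iter=1000).fit(X, y)
            coef_draws.append(glm.coef_.ravel())
    return np.array(coef_draws)

# Hypothetical usage: coefs = mc_dropout_glm(SmallCNN(), images, z, y)
# coefs.mean(0) and coefs.std(0) summarize the ensemble of GLM fits.
```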


Subjects
Bayes Theorem, Brain Neoplasms, Magnetic Resonance Imaging, Monte Carlo Method, Neural Networks (Computer), Humans, Linear Models, Magnetic Resonance Imaging/statistics & numerical data, Magnetic Resonance Imaging/methods, Malaria/epidemiology, Algorithms
12.
Stat Med ; 43(7): 1384-1396, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38297411

ABSTRACT

Clinical prediction models are estimated using a sample of limited size from the target population, leading to uncertainty in predictions, even when the model is correctly specified. Generally, not all patient profiles are observed uniformly in model development. As a result, sampling uncertainty varies between individual patients' predictions. We aimed to develop an intuitive measure of individual prediction uncertainty. The variance of a patient's prediction can be equated to the variance of the sample mean outcome in n* hypothetical patients with the same predictor values. This hypothetical sample size n* can be interpreted as the number of similar patients n_eff that the prediction is effectively based on, given that the model is correct. For generalized linear models, we derived analytical expressions for the effective sample size. In addition, we illustrated the concept in patients with acute myocardial infarction. In model development, n_eff can be used to balance accuracy versus uncertainty of predictions. In a validation sample, the distribution of n_eff indicates which patients were more and less represented in the development data, and whether predictions might be too uncertain for some to be practically meaningful. In a clinical setting, the effective sample size may facilitate communication of uncertainty about predictions. We propose the effective sample size as a clinically interpretable measure of uncertainty in individual predictions. Its implications should be explored further for the development, validation and clinical implementation of prediction models.
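
A sketch of the logistic-regression case is given below: the delta-method variance of the predicted probability is equated to the variance of a sample mean of n* Bernoulli(p) outcomes, giving n_eff = 1 / (p(1-p) * x'Vx). This is our reading of the construction for one GLM family; the paper derives the general expressions.

```python
import numpy as np

def effective_sample_size_logistic(x, beta, beta_cov):
    """Effective sample size n_eff for one patient's prediction from a fitted
    logistic model with coefficients `beta` and covariance matrix `beta_cov`.
    Delta method: Var(p_hat) ~ (p(1-p))^2 * x'Vx; equating this to p(1-p)/n*
    gives n_eff = 1 / (p(1-p) * x'Vx)."""
    x = np.asarray(x, dtype=float)
    p = 1.0 / (1.0 + np.exp(-x @ beta))
    var_eta = x @ beta_cov @ x          # variance of the linear predictor x'beta
    return 1.0 / (p * (1.0 - p) * var_eta)

# Hypothetical usage with a fitted statsmodels GLM (names are illustrative):
# fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
# n_eff_i = effective_sample_size_logistic(X[i], fit.params, fit.cov_params())
```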


Subjects
Uncertainty, Humans, Linear Models, Sample Size
13.
Europace ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39259657

ABSTRACT

Wolff-Parkinson-White syndrome is a cardiovascular disease characterized by abnormal atrio-ventricular conduction facilitated by accessory pathways (APs). Invasive catheter ablation of the AP represents the primary treatment modality. Accurate localization of APs is crucial for successful ablation outcomes, but current diagnostic algorithms based on the 12-lead electrocardiogram (ECG) often struggle with precise determination of AP locations. In order to gain insight into the mechanisms underlying localization failures observed in current diagnostic algorithms, we employ a virtual cardiac model to elucidate the relationship between AP location and ECG morphology. We first introduce a cardiac model of electrophysiology that was specifically tailored to represent antegrade APs in the form of a short atrio-ventricular bypass tract. Locations of antegrade APs were then automatically swept across both ventricles in the virtual model to generate a synthetic ECG database consisting of 9271 signals. Regional grouping of antegrade APs revealed overarching morphological patterns originating from diverse cardiac regions. We then applied variance-based sensitivity analysis relying on polynomial chaos expansion to the ECG database to mathematically quantify how variation in AP location and timing relates to morphological variation in the 12-lead ECG. We utilized our mechanistic virtual model to showcase limitations of AP localization using standard ECG-based algorithms and provide mechanistic explanations through exemplary simulations. Our findings highlight the potential of virtual models of cardiac electrophysiology not only to deepen our understanding of the underlying mechanisms of Wolff-Parkinson-White syndrome but also to potentially enhance the diagnostic accuracy of ECG-based algorithms and facilitate personalized treatment planning.

14.
Philos Trans A Math Phys Eng Sci ; 382(2279): 20230364, 2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39129401

ABSTRACT

Locally resonant metamaterials (LRMs) have recently emerged in the search for lightweight noise and vibration solutions. These materials have the ability to create stop bands, which arise from the sub-wavelength addition of identical resonators to a host structure and result in strong vibration attenuation. However, their manufacturing inevitably introduces variability, such that the system as manufactured often deviates significantly from the system as designed. This can reduce attenuation performance, but may also broaden the attenuation band. This work focuses on the impact of variability within tolerance ranges in resonator properties on the vibration attenuation in metamaterial beams. Following a qualitative pre-study, two non-intrusive uncertainty propagation approaches are applied to find the upper and lower bounds of three performance metrics, by evaluating deterministic metamaterial models with uncertain parameters defined as interval variables. A global search approach is used and compared with a machine learning (ML)-based uncertainty propagation approach which significantly reduces the required number of simulations. Variability in resonator stiffnesses and masses is found to have the highest impact. Variability in the resonator positions only has a comparable impact for less deep sub-wavelength designs. The broadening potential of varying resonator properties is exploited in broadband optimization, and the robustness of the optimized metamaterial is assessed. This article is part of the theme issue 'Current developments in elastic and acoustic metamaterials science (Part 2)'.

15.
J Biomed Inform ; 157: 104693, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39019301

ABSTRACT

OBJECTIVE: Understanding and quantifying biases when designing and implementing actionable approaches to increase fairness and inclusion is critical for artificial intelligence (AI) in biomedical applications. METHODS: In this Special Communication, we discuss how bias is introduced at different stages of the development and use of AI applications in biomedical sciences and health care. We describe various AI applications and their implications for fairness and inclusion in sections on 1) Bias in Data Source Landscapes, 2) Algorithmic Fairness, 3) Uncertainty in AI Predictions, 4) Explainable AI for Fairness and Equity, and 5) Sociological/Ethnographic Issues in Data and Results Representation. RESULTS: We provide recommendations to address biases when developing and using AI in clinical applications. CONCLUSION: These recommendations can be applied to informatics research and practice to foster more equitable and inclusive health care systems and research discoveries.


Subjects
Artificial Intelligence, Biomedical Research, Humans, Algorithms, Bias, Medical Informatics/methods, Delivery of Health Care
16.
J Biomed Inform ; 149: 104576, 2024 01.
Article in English | MEDLINE | ID: mdl-38101690

ABSTRACT

INTRODUCTION: Machine learning algorithms are expected to work side-by-side with humans in decision-making pipelines. Thus, the ability of classifiers to make reliable decisions is of paramount importance. Deep neural networks (DNNs) represent the state-of-the-art models to address real-world classification. Although the strength of activation in DNNs is often correlated with the network's confidence, in-depth analyses are needed to establish whether they are well calibrated. METHOD: In this paper, we demonstrate the use of DNN-based classification tools to benefit cancer registries by automating information extraction of disease at diagnosis and at surgery from electronic text pathology reports from the US National Cancer Institute (NCI) Surveillance, Epidemiology, and End Results (SEER) population-based cancer registries. In particular, we introduce multiple methods for selective classification that achieve a target level of accuracy on multiple classification tasks while minimizing the rejection amount, that is, the number of electronic pathology reports for which the model's predictions are unreliable. We evaluate the proposed methods by comparing our approach with the current in-house deep learning-based abstaining classifier. RESULTS: Overall, all the proposed selective classification methods effectively achieve the targeted level of accuracy or higher in a trade-off analysis aimed at minimizing the rejection rate. On in-distribution validation and holdout test data, all the proposed methods achieve the required target level of accuracy on all tasks with a lower rejection rate than the deep abstaining classifier (DAC). Interpreting the results for the out-of-distribution test data is more complex; nevertheless, in this case as well, the rejection rate of the best among the proposed methods achieving 97% accuracy or higher is lower than the rejection rate based on the DAC. CONCLUSIONS: We show that although both approaches can flag those samples that should be manually reviewed and labeled by human annotators, the newly proposed methods retain a larger fraction and do so without retraining, thus offering a reduced computational cost compared with the in-house deep learning-based abstaining classifier.
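
One simple selective-classification rule of the kind compared in the paper is sketched below: choose, on validation data, the smallest confidence threshold whose retained accuracy reaches the target, thereby minimizing the rejection rate. The specific confidence score and the variable names are assumptions, not the paper's method.

```python
import numpy as np

def confidence_threshold_for_accuracy(conf_val, correct_val, target=0.97):
    """Return the lowest confidence threshold such that accuracy on the
    retained (non-rejected) validation reports meets the target, which in turn
    minimizes the rejection rate. Returns None if the target is unattainable."""
    order = np.argsort(conf_val)                 # ascending confidence
    conf, correct = conf_val[order], correct_val[order].astype(float)
    for i in range(len(conf)):                   # reject the i least-confident reports
        if correct[i:].mean() >= target:
            return conf[i]
    return None

# Hypothetical usage:
# conf = probs_val.max(axis=1); correct = (probs_val.argmax(axis=1) == y_val)
# t = confidence_threshold_for_accuracy(conf, correct, target=0.97)
# abstain = probs_test.max(axis=1) < t
```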


Subjects
Deep Learning, Humans, Uncertainty, Neural Networks (Computer), Algorithms, Machine Learning
17.
Environ Res ; 249: 118438, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38350546

ABSTRACT

Air pollution constitutes a substantial threat to human health, and this has catalyzed the development of an array of air quality prediction models, spanning mechanistic and statistical strategies as well as machine learning methodologies. Deep learning in particular has produced many advanced models with commendable performance. However, previous investigations have overlooked the importance of quantifying prediction uncertainties and potential future interconnections among air monitoring stations. Moreover, prior research typically utilized static, predetermined spatial relationships, neglecting dynamic dependencies. To address these limitations, we propose a model named Dynamic Spatial-Temporal Denoising Diffusion Probabilistic Model (DST-DDPM) for air quality prediction. Our model is built on the denoising diffusion model, which allows us to characterize prediction uncertainty. In order to capture dynamic patterns, we design a dynamic context encoder to generate dynamic adjacency matrices, whilst maintaining static spatial information. Furthermore, we incorporate a spatial-temporal denoising model to concurrently learn both spatial and temporal dependencies. Evaluating our model's performance on a real-world dataset collected in Beijing, we find that it outperforms baseline models in both short-term and long-term prediction, by 1.36% and 11.62% respectively. Finally, we conduct a case study to demonstrate our model's capacity to quantify uncertainties.


Subjects
Air Pollutants, Air Pollution, Environmental Monitoring, Forecasting, Statistical Models, Uncertainty, Air Pollution/analysis, Environmental Monitoring/methods, Air Pollutants/analysis, Forecasting/methods, Spatio-Temporal Analysis, Beijing, Particulate Matter/analysis
18.
J Arthroplasty ; 39(4): 966-973.e17, 2024 Apr.
Article in English | MEDLINE | ID: mdl-37770007

ABSTRACT

BACKGROUND: Revision total hip arthroplasty (THA) requires preoperatively identifying in situ implants, a time-consuming and sometimes unachievable task. Although deep learning (DL) tools have been used in attempts to automate this process, existing approaches classify only a few femoral and no acetabular components, work only on anterior-posterior (AP) radiographs, and do not report prediction uncertainty or flag outlier data. METHODS: This study introduces the Total Hip Arthroplasty Automated Implant Detector (THA-AID), a DL tool trained on 241,419 radiographs that identifies common designs of 20 femoral and 8 acetabular components from AP, lateral, or oblique views, reports prediction uncertainty using conformal prediction, and performs outlier detection using a custom framework. We evaluated THA-AID using internal, external, and out-of-domain test sets and compared its performance with human experts. RESULTS: THA-AID achieved internal test set accuracies of 98.9% for both femoral and acetabular components, with no significant differences based on radiographic view. The femoral classifier also achieved 97.0% accuracy on the external test set. Adding conformal prediction increased true label prediction by 0.1% for acetabular and 0.7 to 0.9% for femoral components. More than 99% of out-of-domain and more than 89% of in-domain outlier data were correctly identified by THA-AID. CONCLUSIONS: THA-AID is an automated tool for implant identification from radiographs with exceptional performance on internal and external test sets and no decrement in performance based on radiographic view. Importantly, this is, to our knowledge, the first study in orthopedics to include uncertainty quantification and outlier detection of a DL model.
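
The conformal-prediction component can be sketched generically as follows: calibrate a nonconformity quantile on held-out reports and return, for each test radiograph, the set of component classes not rejected at that quantile. This is the standard split-conformal recipe, not the THA-AID code, and the variable names are hypothetical.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.05):
    """Split conformal prediction sets for a multi-class implant classifier.
    Nonconformity score = 1 - probability assigned to the true class on the
    calibration set; at test time, keep every class whose score is below the
    calibrated quantile."""
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    qhat = np.quantile(scores, min(q_level, 1.0), method="higher")
    return (1.0 - test_probs) <= qhat     # boolean mask: classes kept per image

# Hypothetical usage: sets = conformal_prediction_sets(p_cal, y_cal, p_test)
# sets[i] marks which component designs remain plausible for radiograph i.
```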


Subjects
Hip Arthroplasty, Deep Learning, Hip Prosthesis, Humans, Uncertainty, Acetabulum/surgery, Retrospective Studies
19.
Sensors (Basel) ; 24(15)2024 Jul 26.
Article in English | MEDLINE | ID: mdl-39123913

ABSTRACT

Validation is a critical aspect of product development for meeting design goals and mitigating risk in the face of considerable cost and time commitments. In this research article, uncertainty quantification (UQ) for efficiency testing of an Electric Drive Unit (EDU) is demonstrated, considering confidence in simulations with respect to the validation campaign. The methodology used for UQ is consistent with the framework described in the Guide to the Expression of Uncertainty in Measurement (GUM). An analytical evaluation of the measurement chain involved in EDU efficiency testing was performed, and elemental uncertainties were derived and then propagated to the derived quantity of efficiency. Associating uncertainties with the measurements highlighted erroneous measurements made by sensors in the measurement chain. These results were used for the assessment of requirement coverage and the validation of test results.
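
A minimal GUM-style propagation for EDU efficiency is shown below, assuming efficiency is formed as mechanical output power over electrical input power; the sensor values and uncertainties are invented for illustration and are not from the article's test bench.

```python
import numpy as np

def efficiency_uncertainty(torque, speed, voltage, current,
                           u_torque, u_speed, u_voltage, u_current, k=2):
    """First-order (GUM) propagation for eta = (torque * speed) / (voltage * current).
    For a pure product/quotient, the relative standard uncertainties combine in
    quadrature; k=2 gives an expanded uncertainty at roughly 95% coverage."""
    eta = (torque * speed) / (voltage * current)
    rel = np.sqrt((u_torque / torque) ** 2 + (u_speed / speed) ** 2 +
                  (u_voltage / voltage) ** 2 + (u_current / current) ** 2)
    return eta, k * eta * rel

eta, U = efficiency_uncertainty(torque=150.0, speed=300.0,     # N*m, rad/s
                                voltage=400.0, current=120.0,  # V, A
                                u_torque=0.5, u_speed=0.3,
                                u_voltage=0.8, u_current=0.4)
print(f"efficiency = {eta:.3f} +/- {U:.3f} (k=2)")
```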

20.
Sensors (Basel) ; 24(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39123931

ABSTRACT

This paper presents a novel adaptation of the conventional approximate Bayesian computation sequential Monte Carlo (ABC-SMC) sampling algorithm for parameter estimation in the presence of uncertainties, coined combinatorial ABC-SMC. Inference of this type is used in situations where no closed form of the associated likelihood function exists, and the likelihood is replaced by a simulating model capable of producing artificial data. In the literature, conventional ABC-SMC is utilised to perform inference on continuous parameters. The novel scheme presented here has been developed to perform inference on parameters that are high-dimensional and binary, rather than continuous. By altering the form of the proposal distribution from which candidates are sampled in subsequent iterations (referred to as waves), high-dimensional binary variables may be targeted and inferred by the scheme. The efficacy of the proposed scheme is demonstrated through application to vibration data obtained in a structural dynamics experiment on a fibre-optic sensor simulated as a finite plate with uncertain boundary conditions at its edges. Results indicate that the method provides sound inference on the plate boundary conditions, which is validated through subsequent application of the method to multiple vibration datasets. Comparisons between appropriate forms of the metric function used in the scheme are also developed to highlight the effect of this element on the scheme's convergence.
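
The combinatorial idea can be sketched as below: a population of binary parameter vectors is filtered by a shrinking distance tolerance and perturbed by bit-flip proposals between waves. This simplified sketch omits the importance weights and kernel adaptation of a full ABC-SMC scheme, uses a plain Euclidean metric, and replaces the finite-plate vibration model with a toy simulator.

```python
import numpy as np

def abc_smc_binary(simulate, observed, d, n_pop=200, flip_p=0.1,
                   keep_quantile=0.5, n_waves=5, seed=0):
    """Simplified ABC-SMC over binary parameter vectors of length d.
    Each wave keeps the particles whose simulated data fall closest to the
    observation, then proposes new candidates by resampling survivors and
    flipping each bit independently with probability flip_p."""
    rng = np.random.default_rng(seed)
    particles = rng.integers(0, 2, size=(n_pop, d))
    for _ in range(n_waves):
        dist = np.array([np.linalg.norm(simulate(p) - observed) for p in particles])
        eps = np.quantile(dist, keep_quantile)        # shrinking tolerance
        survivors = particles[dist <= eps]
        idx = rng.integers(0, len(survivors), size=n_pop)
        flips = rng.uniform(size=(n_pop, d)) < flip_p
        particles = np.where(flips, 1 - survivors[idx], survivors[idx])
    return particles

# Toy problem: infer which of 10 binary boundary "pins" are active,
# with a noisy simulator standing in for the finite-plate vibration model.
rng_sim = np.random.default_rng(42)
simulate = lambda p: p.astype(float) + 0.05 * rng_sim.normal(size=p.size)
true_pins = np.array([1, 0, 1, 1, 0, 0, 0, 1, 0, 0])
posterior = abc_smc_binary(simulate, simulate(true_pins), d=10)
print(posterior.mean(axis=0).round(2))   # per-bit inclusion frequency
```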
