Results 1 - 20 of 95
1.
Proc Natl Acad Sci U S A ; 119(30): e2122788119, 2022 07 26.
Article in English | MEDLINE | ID: mdl-35867822

ABSTRACT

Compositional analysis is based on the premise that a relatively small proportion of taxa are differentially abundant, while the ratios of the relative abundances of the remaining taxa remain unchanged. Most existing methods use log-transformed data, but log-transformation of data with pervasive zero counts is problematic, and these methods cannot always control the false discovery rate (FDR). Further, high-throughput microbiome data such as 16S amplicon or metagenomic sequencing are subject to experimental biases that are introduced in every step of the experimental workflow. McLaren et al. [eLife 8, e46923 (2019)] have recently proposed a model for how these biases affect relative abundance data. Motivated by this model, we show that the odds ratios in a logistic regression comparing counts in two taxa are invariant to experimental biases. With this motivation, we propose logistic compositional analysis (LOCOM), a robust logistic regression approach to compositional analysis, that does not require pseudocounts. Inference is based on permutation to account for overdispersion and small sample sizes. Traits can be either binary or continuous, and adjustment for confounders is supported. Our simulations indicate that LOCOM always preserved FDR and had much improved sensitivity over existing methods. In contrast, analysis of composition of microbiomes (ANCOM) and ANCOM with bias correction (ANCOM-BC)/ANOVA-Like Differential Expression tool (ALDEx2) had inflated FDR when the effect sizes were small and large, respectively. Only LOCOM was robust to experimental biases in every situation. The flexibility of our method for a variety of microbiome studies is illustrated by the analysis of data from two microbiome studies. Our R package LOCOM is publicly available.
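The bias-invariance motivating LOCOM can be checked numerically: under the McLaren et al. model, each taxon's count is multiplied by a taxon-specific bias factor, which cancels in the cross-sample odds ratio. A minimal sketch with hypothetical counts and bias factors (not the LOCOM implementation itself):

```python
import math

def log_odds(count_a, count_b):
    """Log odds of observing taxon A vs taxon B in one sample."""
    return math.log(count_a / count_b)

# Hypothetical true counts of two taxa in two samples.
sample1 = {"taxonA": 40.0, "taxonB": 10.0}
sample2 = {"taxonA": 20.0, "taxonB": 20.0}

# Taxon-specific multiplicative measurement biases (McLaren et al. model).
bias = {"taxonA": 3.0, "taxonB": 0.5}

def odds_ratio(s1, s2):
    """Odds ratio comparing taxon A vs taxon B across the two samples."""
    return math.exp(log_odds(s1["taxonA"], s1["taxonB"])
                    - log_odds(s2["taxonA"], s2["taxonB"]))

unbiased = odds_ratio(sample1, sample2)
biased1 = {t: c * bias[t] for t, c in sample1.items()}
biased2 = {t: c * bias[t] for t, c in sample2.items()}
biased = odds_ratio(biased1, biased2)
# The per-taxon bias factors multiply numerator and denominator of both
# samples identically, so they cancel and the two odds ratios agree.
```

Here both `unbiased` and `biased` equal 4, illustrating why a logistic regression on taxon pairs is unaffected by experimental bias.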


Assuntos
Microbiota , Modelos Logísticos , Metagenômica/métodos , Microbiota/genética , Análise de Sequência
2.
Biom J ; 66(3): e2200316, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38637311

ABSTRACT

Network meta-analysis (NMA) usually provides estimates of the relative effects with the highest possible precision. However, sparse networks with few available studies and limited direct evidence can arise, threatening the robustness and reliability of NMA estimates. In these cases, the limited amount of available information can hamper the formal evaluation of the underlying NMA assumptions of transitivity and consistency. In addition, NMA estimates from sparse networks are expected to be imprecise and possibly biased as they rely on large-sample approximations that are invalid in the absence of sufficient data. We propose a Bayesian framework that allows sharing of information between two networks that pertain to different population subgroups. Specifically, we use the results from a subgroup with abundant direct evidence (a dense network) to construct informative priors for the relative effects in the target subgroup (a sparse network). This is a two-stage approach: at the first stage, we extrapolate the results of the dense network to those expected from the sparse network. This takes place by using a modified hierarchical NMA model in which we add a location parameter that shifts the distribution of the relative effects to make them applicable to the target population. At the second stage, these extrapolated results are used as prior information for the sparse network. We illustrate our approach through a motivating example of psychiatric patients. Our approach results in more precise and robust estimates of the relative effects and can adequately inform clinical practice in the presence of sparse networks.
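The first extrapolation stage can be illustrated under a normal approximation: shifting the dense-network relative effect by a location parameter yields the prior for the sparse network. A hedged sketch (the function name and the normality assumption are ours, not the paper's):

```python
def extrapolated_prior(dense_mean, dense_sd, shift_mean, shift_sd):
    """Stage 1 sketch: add an independent normal location shift to the
    dense-network relative effect; the shifted distribution serves as the
    informative prior for the sparse network (sum of independent normals:
    means add, variances add)."""
    prior_mean = dense_mean + shift_mean
    prior_sd = (dense_sd ** 2 + shift_sd ** 2) ** 0.5
    return prior_mean, prior_sd

# Example: dense-network log odds ratio 0.5 (sd 0.2), shift -0.1 (sd 0.1).
mu, sd = extrapolated_prior(0.5, 0.2, -0.1, 0.1)
```

The widened standard deviation reflects the extra uncertainty introduced by extrapolating across subgroups.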


Assuntos
Teorema de Bayes , Humanos , Metanálise em Rede , Reprodutibilidade dos Testes , Metanálise como Assunto
3.
J Microsc ; 2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37877157

ABSTRACT

Single-molecule localisation microscopy (SMLM) has the potential to reveal the underlying organisation of specific molecules within supramolecular complexes and their conformations, which is not possible with conventional microscope resolution. However, the detection efficiency for fluorescent molecules in cells can be limited in SMLM, even to below 1% in thick and dense samples. Segmentation of individual complexes can also be challenging. To overcome these problems, we have developed a software package termed PERPL: Pattern Extraction from Relative Positions of Localisations. This software assesses the relative likelihoods of models for underlying patterns behind incomplete SMLM data, based on the relative positions of pairs of localisations. We review its principles and demonstrate its use on the 3D lattice of Z-disk proteins in mammalian cardiomyocytes. We find known and novel features at ~20 nm with localisations of less than 1% of the target proteins, using mEos fluorescent protein constructs.

4.
Sensors (Basel) ; 23(16)2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37631565

ABSTRACT

The projection of a point cloud onto a 2D camera image is relevant for various image analysis and enhancement tasks, e.g., (i) in multimodal image processing for data fusion, (ii) in robotic applications and scene analysis, and (iii) for deep neural networks to generate real datasets with ground truth. We identified the challenges of current single-shot projection methods, such as simple state-of-the-art projection, conventional, polygon, and deep learning-based upsampling methods, or closed-source SDK functions of low-cost depth cameras. We developed a new way to project point clouds onto a dense, accurate 2D raster image, called Triangle-Mesh-Rasterization-Projection (TMRP). The only gaps that the 2D image still contains with our method are valid gaps that result from the physical limits of the capturing cameras. Dense accuracy is achieved by simultaneously using the 2D neighborhood information (rx,ry) of the 3D coordinates in addition to the points P(X,Y,V). In this way, a fast triangulation interpolation can be performed, with interpolation weights determined using sub-triangles. Compared to single-shot methods, our algorithm solves the following challenges: (1) no false gaps or false neighborhoods are generated, (2) the density is XYZ-independent, and (3) ambiguities are eliminated. Our TMRP method is also open source, freely available on GitHub, and can be applied to almost any sensor or modality. We demonstrate the usefulness of our method in four use cases using the KITTI-2012 dataset and sensors with different modalities. Our goal is to improve recognition tasks and processing optimization in the perception of transparent objects for robotic manufacturing processes.
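The sub-triangle weighting mentioned above is standard barycentric interpolation: each vertex's weight is the area of the sub-triangle opposite it, normalized by the full triangle area. A minimal 2D sketch (a simplification of TMRP, with hypothetical coordinates):

```python
def barycentric_weights(p, a, b, c):
    """Interpolation weights for point p inside triangle (a, b, c),
    computed from sub-triangle areas: the weight of vertex a is the area
    of triangle (p, b, c) divided by the area of (a, b, c), etc."""
    def area(p1, p2, p3):
        # Shoelace formula for the (unsigned) area of a 2D triangle.
        return abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                   - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    total = area(a, b, c)
    return (area(p, b, c) / total,
            area(p, a, c) / total,
            area(p, a, b) / total)

# The centroid of a triangle gets equal weights of 1/3 each.
w = barycentric_weights((1/3, 1/3), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
```

For interior points the weights sum to one, so the interpolated value is a convex combination of the three vertex values.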

5.
Microsc Microanal ; : 1-11, 2022 Jun 10.
Article in English | MEDLINE | ID: mdl-35686442

ABSTRACT

Artificial intelligence (AI) promises to reshape scientific inquiry and enable breakthrough discoveries in areas such as energy storage, quantum computing, and biomedicine. Scanning transmission electron microscopy (STEM), a cornerstone of the study of chemical and materials systems, stands to benefit greatly from AI-driven automation. However, present barriers to low-level instrument control, as well as generalizable and interpretable feature detection, make truly automated microscopy impractical. Here, we discuss the design of a closed-loop instrument control platform guided by emerging sparse data analytics. We hypothesize that a centralized controller, informed by machine learning combining limited a priori knowledge and task-based discrimination, could drive on-the-fly experimental decision-making. This platform may unlock practical, automated analysis of a variety of material features, enabling new high-throughput and statistical studies.

6.
BMC Bioinformatics ; 22(1): 89, 2021 Feb 25.
Article in English | MEDLINE | ID: mdl-33632116

ABSTRACT

BACKGROUND: Matrix factorization methods are linear models, with limited capability to model complex relations. In our work, we use the tropical semiring to introduce non-linearity into matrix factorization models. We propose a method called Sparse Tropical Matrix Factorization (STMF) for the estimation of missing (unknown) values in sparse data. RESULTS: We evaluate the efficiency of the STMF method on both synthetic data and biological data in the form of gene expression measurements downloaded from The Cancer Genome Atlas (TCGA) database. Tests on unique synthetic data showed that the STMF approximation achieves a higher correlation than non-negative matrix factorization (NMF), which is unable to recover patterns effectively. On real data, STMF outperforms NMF on six out of nine gene expression datasets. While NMF assumes a normal distribution and tends toward the mean value, STMF can better fit extreme values and distributions. CONCLUSION: STMF is the first work that uses the tropical semiring on sparse data. We show that in certain cases semirings are useful because they capture structure that is different from, and simpler to interpret than, that of standard linear algebra.
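In the tropical (max-plus) semiring, "addition" is max and "multiplication" is +. The matrix product in this semiring, which underlies the STMF approximation, can be sketched as follows (illustrative only, not the STMF algorithm itself):

```python
def tropical_matmul(A, B):
    """Max-plus matrix product: (A @ B)[i][j] = max_k (A[i][k] + B[k][j]).
    Compare the ordinary product sum_k (A[i][k] * B[k][j]): sum -> max,
    product -> plus."""
    n, m, p = len(A), len(B), len(B[0])
    return [[max(A[i][k] + B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

# A rank-2 tropical factorization approximates a data matrix as U (x) V.
U = [[1.0, 0.0],
     [2.0, 3.0]]
V = [[0.0, 1.0],
     [4.0, 2.0]]
X = tropical_matmul(U, V)
```

Because max picks out a single dominant term, tropical factorizations track extreme values rather than averages, matching the behavior contrasted with NMF above.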


Subjects
Algorithms , Neoplasms , Gene Expression , Humans , Neoplasms/genetics
7.
Stat Med ; 40(24): 5276-5297, 2021 10 30.
Article in English | MEDLINE | ID: mdl-34219258

ABSTRACT

Meta-analysis of rare event data has recently received increasing attention due to the challenging issues rare events pose to traditional meta-analytic methods. One specific way to combine information and analyze rare event meta-analysis data utilizes confidence distributions (CDs). While several CD methods exist, no comparisons have been made to determine which method is best suited for homogeneous or heterogeneous meta-analyses with rare events. In this article, we review several CD methods: Fisher's classic P-value combination method, one that combines P-value functions, another that combines confidence intervals, and one that combines confidence log-likelihood functions. We compare these CD approaches, and we propose and compare variations of these methods to determine which method produces reliable results for homogeneous or heterogeneous rare event meta-analyses. We find that for homogeneous rare event data, most CD methods perform very well. On the other hand, for heterogeneous rare event data, there is a clear split in performance between some CD methods, with some performing very poorly and others performing reasonably well.
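Fisher's classic combination method, the first CD approach reviewed, is simple to sketch: for k independent p-values, the statistic -2 Σ log p_i is chi-square distributed with 2k degrees of freedom under the global null. Since 2k is even, the chi-square survival function has a closed form, so no stats library is needed (a minimal sketch, not the article's full CD framework):

```python
import math

def fisher_combine(pvals):
    """Fisher's method: X = -2 * sum(log p_i) ~ chi-square with 2k df
    under the global null. For even df = 2k the survival function is
    P(X2 > x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    half = x / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

# Three moderately small p-values combine to strong overall evidence.
combined = fisher_combine([0.04, 0.10, 0.03])
```

A single p-value of 1.0 combines to exactly 1.0, a useful sanity check on the closed form.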


Subjects
Research Design , Humans , Likelihood Functions
8.
Stat Med ; 40(25): 5587-5604, 2021 11 10.
Article in English | MEDLINE | ID: mdl-34328659

ABSTRACT

The increasingly widespread use of meta-analysis has led to growing interest in meta-analytic methods for rare events and sparse data. Conventional approaches tend to perform very poorly in such settings. Recent work in this area has provided options for sparse data, but these are still often hampered when heterogeneity across the available studies differs based on treatment group. We propose a permutation-based approach based on conditional logistic regression that accommodates this common contingency, providing more reliable statistical tests when such patterns of heterogeneity are observed. We find that commonly used methods can yield highly inflated Type I error rates, low confidence interval coverage, and bias when events are rare and non-negligible heterogeneity is present. Our method often produces much lower Type I error rates and higher confidence interval coverage than traditional methods in these circumstances. We illustrate the utility of our method by comparing it to several other methods via a simulation study and by analyzing an example data set that assesses the use of antibiotics to prevent acute rheumatic fever.


Subjects
Anti-Bacterial Agents , Anti-Bacterial Agents/therapeutic use , Bias , Computer Simulation , Humans , Logistic Models
9.
Biometrics ; 76(3): 834-842, 2020 09.
Article in English | MEDLINE | ID: mdl-31785150

ABSTRACT

Multinomial data arise in many areas of the life sciences, such as mark-recapture studies and phylogenetics, and will often be overdispersed, with the variance being higher than predicted by a multinomial model. The quasi-likelihood approach to modeling this overdispersion involves the assumption that the variance is proportional to that specified by the multinomial model. As this approach does not require specification of the full distribution of the response variable, it can be more robust than fitting a Dirichlet-multinomial model or adding a random effect to the linear predictor. Estimation of the amount of overdispersion is often based on Pearson's statistic X2 or the deviance D. For many types of study, such as mark-recapture, the data will be sparse. The estimator based on X2 can then be highly variable, and that based on D can have a large negative bias. We derive a new estimator, which has a smaller asymptotic variance than that based on X2, the difference being most marked for sparse data. We illustrate the numerical difference between the three estimators using a mark-recapture study of swifts and compare their performance via a simulation study. The new estimator has the lowest root mean squared error across a range of scenarios, especially when the data are very sparse.
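The Pearson-based estimator that the new estimator is compared against can be sketched for a single multinomial sample (a simplified illustration; the paper's setting involves multiple sparse samples):

```python
def pearson_dispersion(observed, probs):
    """Quasi-likelihood dispersion estimate from Pearson's statistic:
    phi_hat = X2 / df, where X2 = sum (O_i - E_i)^2 / E_i with
    E_i = n * p_i, and df = (number of cells - 1) for one multinomial
    sample. phi_hat near 1 indicates no overdispersion."""
    n = sum(observed)
    expected = [n * p for p in probs]
    x2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return x2 / (len(observed) - 1)

# Counts close to their expectations give a dispersion estimate near 1.
phi = pearson_dispersion([30, 30, 40], [1/3, 1/3, 1/3])
```

With sparse data many expected counts E_i are tiny, which is exactly what makes this estimator highly variable, as the abstract notes.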


Subjects
Likelihood Functions , Bias , Computer Simulation
10.
Stat Med ; 39(24): 3329-3346, 2020 10 30.
Article in English | MEDLINE | ID: mdl-32672370

ABSTRACT

In multivariate network meta-analysis (NMA), the piecemeal nature of the evidence base means that there may be treatment-outcome combinations for which no data is available. Most existing multivariate evidence synthesis models are either unable to estimate the missing treatment-outcome combinations, or can only do so under particularly strong assumptions, such as perfect between-study correlations between outcomes or constant effect size across outcomes. Many existing implementations are also limited to two treatments or two outcomes, or rely on model specification that is heavily tailored to the dimensions of the dataset. We present a Bayesian multivariate NMA model that estimates the missing treatment-outcome combinations via mappings between the population mean effects, while allowing the study-specific effects to be imperfectly correlated. The method is designed for aggregate-level data (rather than individual patient data) and is likely to be useful when modeling multiple sparsely reported outcomes, or when varying definitions of the same underlying outcome are adopted by different studies. We implement the model via a novel decomposition of the treatment effect variance, which can be specified efficiently for an arbitrary dataset given some basic assumptions regarding the correlation structure. The method is illustrated using data concerning the efficacy and liver-related safety of eight active treatments for relapsing-remitting multiple sclerosis. The results indicate that fingolimod and interferon beta-1b are the most efficacious treatments but also have some of the worst effects on liver safety. Dimethyl fumarate and glatiramer acetate perform reasonably on all of the efficacy and safety outcomes in the model.


Subjects
Multiple Sclerosis, Relapsing-Remitting , Multiple Sclerosis , Bayes Theorem , Dimethyl Fumarate , Humans , Immunosuppressive Agents/therapeutic use , Multiple Sclerosis, Relapsing-Remitting/drug therapy , Network Meta-Analysis
11.
Sensors (Basel) ; 20(21)2020 Nov 07.
Article in English | MEDLINE | ID: mdl-33171803

ABSTRACT

Learning accurate Bayesian network (BN) structures for high-dimensional and sparse data is difficult because of the high computational complexity. To learn accurate structures for such data faster, this paper adopts a divide-and-conquer strategy and proposes a block learning algorithm with a mutual-information-based K-means algorithm (the BLMKM algorithm). This method utilizes an improved K-means algorithm to block the nodes in the BN and a max-min parents and children (MMPC) algorithm to obtain the whole skeleton of the BN and find possible graph structures based on the separated blocks. Then, a pruned dynamic programming algorithm is performed sequentially for all possible graph structures to obtain candidate BNs, and the best BN is selected by a scoring function. Experiments show that for high-dimensional and sparse data, the BLMKM algorithm can achieve the same accuracy in a reasonable time compared with non-blocking classical learning algorithms. Compared to existing block learning algorithms, the BLMKM algorithm has a time advantage while maintaining accuracy. Analysis of a real radar effect mechanism dataset shows that the BLMKM algorithm can quickly establish a global and accurate causality model to find the cause of interference, predict the detection result, and guide parameter optimization. The BLMKM algorithm is thus efficient for BN learning and has practical application value.

12.
Sensors (Basel) ; 20(21)2020 Oct 24.
Article in English | MEDLINE | ID: mdl-33114275

ABSTRACT

Urban transport traffic surveillance is of great importance for public traffic control and personal travel path planning. Effective and efficient traffic flow prediction is helpful to optimize these real applications. The main challenge of traffic flow prediction is the data sparsity problem, meaning that traffic flow on some roads or during certain periods cannot be monitored. This paper presents a transport traffic prediction method that leverages the spatial and temporal correlation of transportation traffic to tackle this problem. We first propose to model the traffic flow using a fourth-order tensor, which incorporates the location, the time of day, the day of the week, and the week of the month. Based on the constructed traffic flow tensor, we then propose a model to estimate the correlation in each dimension of the tensor. Furthermore, we utilize the gradient descent strategy to design a traffic flow prediction algorithm that is capable of tackling the data sparsity problem from the spatial and temporal perspectives of the traffic pattern. To validate the proposed traffic prediction method, case studies using real-world datasets are constructed, and the results demonstrate that the prediction accuracy of our proposed method outperforms the baselines. Its accuracy decreases the least as the percentage of missing data increases, including when data are missing on neighboring roads for one or several consecutive days. This certifies that the proposed prediction method can be utilized for sparse-data-based transportation traffic surveillance.

13.
Biom J ; 62(2): 492-515, 2020 03.
Article in English | MEDLINE | ID: mdl-32022299

ABSTRACT

Many flexible extensions of the Cox proportional hazards model incorporate time-dependent (TD) and/or nonlinear (NL) effects of time-invariant covariates. In contrast, little attention has been given to the assessment of such effects for continuous time-varying covariates (TVCs). We propose a flexible regression B-spline-based model for TD and NL effects of a TVC. To account for sparse TVC measurements, we added to this model the effect of time elapsed since last observation (TEL), which acts as an effect modifier. TD, NL, and TEL effects are estimated with the iterative alternative conditional estimation algorithm. Furthermore, a simulation extrapolation (SIMEX)-like procedure was adapted to correct the estimated effects for random measurement errors in the observed TVC values. In simulations, TD and NL estimates were unbiased if the TVC was measured with a high frequency. With sparse measurements, the strength of the effects was underestimated but the TEL estimate helped reduce the bias, whereas SIMEX helped further to correct for bias toward the null due to "white noise" measurement errors. We reassessed the effects of systolic blood pressure (SBP) and total cholesterol, measured at two-year intervals, on cardiovascular risks in women participating in the Framingham Heart Study. Accounting for TD effects of SBP, cholesterol and age, the NL effect of cholesterol, and the TEL effect of SBP improved substantially the model's fit to data. Flexible estimates yielded clinically important insights regarding the role of these risk factors. These results illustrate the advantages of flexible modeling of TVC effects.


Subjects
Biometry/methods , Nonlinear Dynamics , Blood Pressure , Cardiovascular Diseases/blood , Cardiovascular Diseases/physiopathology , Cholesterol/blood , Humans , Risk Assessment , Time Factors
14.
Entropy (Basel) ; 22(6)2020 Jun 14.
Article in English | MEDLINE | ID: mdl-33286430

ABSTRACT

Evaluation of the population density in many ecological and biological problems requires a satisfactory degree of accuracy. Insufficient information about the population density obtained from sampling procedures negatively impacts the accuracy of the estimate. When dealing with sparse ecological data, the asymptotic error estimate fails to achieve a reliable degree of accuracy. It is therefore essential to investigate which factors affect the degree of accuracy of numerical integration methods. When the number of traps is less than the recommended threshold, the degree of accuracy is negatively affected, so available numerical integration methods cannot guarantee a satisfactory degree of accuracy, and in this sense the error is probabilistic rather than deterministic. In other words, a probabilistic approach is used instead of a deterministic one: by considering the error as a random variable, the chance of obtaining an accurate estimate can be quantified. In the probabilistic approach, we determine a threshold number of grid nodes required to guarantee a desirable level of accuracy with probability equal to one.

15.
Cytokine ; 120: 191, 2019 08.
Article in English | MEDLINE | ID: mdl-31100683

ABSTRACT

The aim of this study was to highlight some methodological issues in a study that investigated the effect of granulocyte colony-stimulating factor on the development of aortitis.


Subjects
Aortitis , Drug-Related Side Effects and Adverse Reactions , Granulocyte Colony-Stimulating Factor , Humans , Japan
16.
Risk Anal ; 39(8): 1796-1811, 2019 08.
Article in English | MEDLINE | ID: mdl-30893499

ABSTRACT

Several statistical models for salmonella source attribution have been presented in the literature. However, these models have often been found to be sensitive to the model parameterization, as well as the specifics of the data set used. The Bayesian salmonella source attribution model presented here was developed to be generally applicable with small and sparse annual data sets obtained over several years. The full Bayesian model was modularized into three parts (an exposure model, a subtype distribution model, and an epidemiological model) in order to separately estimate unknown parameters in each module. The proposed model takes advantage of the consumption and overall salmonella prevalence of the studied sources, as well as bacteria typing results from adjacent years. The latter were used for a smoothed estimation of the annual relative proportions of different salmonella subtypes in each of the sources. The source-specific effects and the salmonella subtype-specific effects were included in the epidemiological model to describe the differences between sources and between subtypes in their ability to infect humans. The estimation of these parameters was based on data from multiple years. Finally, the model combines the total evidence from different modules to proportion human salmonellosis cases according to their sources. The model was applied to allocate reported human salmonellosis cases from the years 2008 to 2015 to eight food sources.


Subjects
Bayes Theorem , Models, Biological , Salmonella/isolation & purification , Food Microbiology , Humans , Salmonella/classification , Salmonella Food Poisoning/epidemiology , Salmonella Food Poisoning/microbiology
17.
Sensors (Basel) ; 18(6)2018 Jun 07.
Article in English | MEDLINE | ID: mdl-29880756

ABSTRACT

An Optical Wide-field patroL-Network (OWL-Net) has been developed for maintaining the orbital ephemerides of Korean low Earth orbit (LEO) satellites. The OWL-Net consists of five optical tracking stations. Brightness signals of sunlight reflected from the targets were detected with a charge-coupled device (CCD). A chopper system was adopted for fast astrometric data sampling, up to 50 Hz, within a short observation time. The astrometric accuracy of the optical observation data was validated against precise orbital ephemerides, such as Consolidated Prediction File (CPF) data, and precise orbit determination results based on onboard Global Positioning System (GPS) data from the target satellite. In the optical observation simulation of the OWL-Net for 2017, the average observation span for a single arc of 11 LEO observation targets was about 5 min, while the average separation between optical observations was 5 h. We estimated the position and velocity, with an atmospheric drag coefficient, of the LEO observation targets using a sequential-batch orbit estimation technique after multi-arc batch orbit estimation. Post-fit residuals for the multi-arc batch and sequential-batch orbit estimation were analyzed for the optical measurements and the reference orbits (CPF and GPS data). Relative to the reference orbits, the post-fit residuals show errors of a few tens of meters in the in-track direction for both the multi-arc batch and sequential-batch orbit estimation results.

18.
Eur Radiol ; 27(3): 1004-1011, 2017 Mar.
Article in English | MEDLINE | ID: mdl-27300194

ABSTRACT

OBJECTIVES: To assess the image quality of sparsely sampled contrast-enhanced MR angiography (sparse CE-MRA) providing high spatial resolution and whole-head coverage. MATERIALS AND METHODS: Twenty-three patients scheduled for contrast-enhanced MR imaging of the head (N = 19 with intracranial pathologies, N = 9 with vascular diseases) were included. Sparse CE-MRA at 3 Tesla was conducted using a single dose of contrast agent. Two neuroradiologists independently evaluated the data regarding vascular visibility and diagnostic value of overall 24 parameters and vascular segments on a 5-point ordinal scale (5 = very good, 1 = insufficient vascular visibility). Contrast bolus timing and the resulting arterio-venous overlap were also evaluated. Where available (N = 9), sparse CE-MRA was compared to intracranial time-of-flight MRA. RESULTS: The overall rating across all patients for sparse CE-MRA was 3.50 ± 1.07. A direct influence of the contrast bolus timing on the resulting image quality was observed. Overall mean vascular visibility and image quality across different features was rated good to intermediate (3.56 ± 0.95). The average performance of intracranial time-of-flight MRA was rated 3.84 ± 0.87 across all patients and 3.54 ± 0.62 across all features. CONCLUSION: Sparse CE-MRA provides high-quality 3D MRA with high spatial resolution and whole-head coverage within a short acquisition time. Accurate contrast bolus timing is mandatory. KEY POINTS: • Sparse CE-MRA enables fast vascular imaging with full brain coverage. • Volumes with sub-millimetre resolution can be acquired within 10 seconds. • Readers' ratings are good to intermediate and dependent on contrast bolus timing. • The method provides an excellent overview and allows screening for vascular pathologies.


Subjects
Contrast Media , Image Enhancement/methods , Intracranial Arterial Diseases/diagnostic imaging , Magnetic Resonance Angiography/methods , Adolescent , Adult , Aged , Female , Humans , Intracranial Arterial Diseases/pathology , Male , Middle Aged , Sensitivity and Specificity , Young Adult
19.
Stat Med ; 36(14): 2302-2317, 2017 06 30.
Article in English | MEDLINE | ID: mdl-28295456

ABSTRACT

Firth's logistic regression has become a standard approach for the analysis of binary outcomes with small samples. Whereas it reduces the bias in maximum likelihood estimates of coefficients, bias towards one-half is introduced in the predicted probabilities. The stronger the imbalance of the outcome, the more severe is the bias in the predicted probabilities. We propose two simple modifications of Firth's logistic regression resulting in unbiased predicted probabilities. The first corrects the predicted probabilities by a post hoc adjustment of the intercept. The other is based on an alternative formulation of Firth's penalization as an iterative data augmentation procedure. Our suggested modification consists of introducing an indicator variable that distinguishes between original and pseudo-observations in the augmented data. In a comprehensive simulation study, these approaches are compared with other attempts to improve predictions based on Firth's penalization and with other published penalization strategies intended for routine use. For instance, we consider a recently suggested compromise between maximum likelihood and Firth's logistic regression. Simulation results are scrutinized with regard to prediction and effect estimation. We find that both our suggested methods not only give unbiased predicted probabilities but also improve the accuracy conditional on explanatory variables compared with Firth's penalization. While one method results in effect estimates identical to those of Firth's penalization, the other introduces some bias, but this is compensated by a decrease in the mean squared error. Finally, all methods considered are illustrated and compared for a study on arterial closure devices in minimally invasive cardiac surgery. Copyright © 2017 John Wiley & Sons, Ltd.
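The first modification, a post hoc intercept adjustment, can be sketched as a one-dimensional search for the shift that makes the average predicted probability match the observed event rate (a hedged sketch; the function name and the bisection search are ours, not the paper's exact procedure):

```python
import math

def intercept_correction(linear_predictors, y):
    """Find the intercept shift delta such that the mean of
    sigmoid(eta_i + delta) equals the observed event rate mean(y),
    countering the bias toward one-half in Firth-penalized predictions.
    mean_prob is strictly increasing in delta, so bisection suffices."""
    target = sum(y) / len(y)

    def mean_prob(delta):
        return sum(1.0 / (1.0 + math.exp(-(eta + delta)))
                   for eta in linear_predictors) / len(linear_predictors)

    lo, hi = -20.0, 20.0  # wide bracket for the monotone root search
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if mean_prob(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```

Applying the returned shift to every linear predictor recalibrates the predictions in aggregate while leaving the other coefficients untouched.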


Subjects
Logistic Models , Bias , Biostatistics , Cardiac Surgical Procedures/adverse effects , Cardiac Surgical Procedures/instrumentation , Cardiac Surgical Procedures/statistics & numerical data , Computer Simulation , Humans , Likelihood Functions , Minimally Invasive Surgical Procedures/adverse effects , Minimally Invasive Surgical Procedures/instrumentation , Minimally Invasive Surgical Procedures/statistics & numerical data , Models, Statistical , Probability , Sample Size , Vascular Closure Devices/adverse effects , Vascular Closure Devices/statistics & numerical data
20.
J Biopharm Stat ; 27(2): 257-264, 2017.
Article in English | MEDLINE | ID: mdl-27906608

ABSTRACT

Bioequivalence studies are an essential part of the evaluation of generic drugs. The most common in vivo bioequivalence study design is the two-period two-treatment crossover design. The observed drug concentration-time profile for each subject from each treatment under each sequence can be obtained. AUC (the area under the concentration-time curve) and Cmax (the maximum concentration) are obtained from the observed drug concentration-time profiles for each subject from each treatment under each sequence. However, such a drug concentration-time profile for each subject from each treatment under each sequence cannot possibly be available during the development of generic ophthalmic products since there is only one-time point measured drug concentration of aqueous humor for each eye. Instead, many subjects will be assigned to each of several prespecified sampling times. Then, the mean concentration at each sampling time can be obtained by the simple average of these subjects' observed concentration. One profile of the mean concentration vs. time can be obtained for one product (either the test or the reference product). One AUC value for one product can be calculated from the mean concentration-time profile using trapezoidal rules. This article develops a novel nonparametric method for obtaining the 90% confidence interval for the ratio of AUCT and AUCR (or CT,max/CR,max) in crossover studies by bootstrapping subjects at each time point with replacement or bootstrapping subjects at all sampling time points with replacement. Here T represents the test product, and R represents the reference product. It also develops a novel nonparametric method for estimating the standard errors (SEs) of AUCh and Ch,max in parallel studies by bootstrapping subjects treated by the hth product at each time point with replacement or bootstrapping subjects treated by the hth product at all sampling time points with replacement, h = T, R. 
Then, 90% confidence intervals for AUCT/AUCR and CT,max/CR,max are obtained from the nonparametric bootstrap resamples and are used for the evaluation of bioequivalence with one-time sparse sampling data.
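The bootstrap scheme for the AUC ratio can be sketched as follows, assuming a hypothetical data layout where samples[t] lists the observed subject concentrations at sampling time t (an illustrative sketch of resampling subjects at each time point with replacement, not the authors' code):

```python
import random

def trapezoid_auc(times, mean_conc):
    """AUC of a mean concentration-time profile by the trapezoidal rule."""
    return sum((mean_conc[i] + mean_conc[i + 1]) / 2.0
               * (times[i + 1] - times[i])
               for i in range(len(times) - 1))

def bootstrap_auc_ratio_ci(times, test_samples, ref_samples,
                           n_boot=2000, seed=1):
    """Approximate 90% CI for AUC_T / AUC_R: at each time point, resample
    that time point's subjects with replacement, rebuild the two mean
    profiles, recompute the AUC ratio, and take the empirical 5th and
    95th percentiles over n_boot replicates."""
    rng = random.Random(seed)
    ratios = []
    for _ in range(n_boot):
        mt = [sum(rng.choices(s, k=len(s))) / len(s) for s in test_samples]
        mr = [sum(rng.choices(s, k=len(s))) / len(s) for s in ref_samples]
        ratios.append(trapezoid_auc(times, mt) / trapezoid_auc(times, mr))
    ratios.sort()
    return ratios[int(0.05 * n_boot)], ratios[int(0.95 * n_boot)]
```

Because each subject contributes only one time point, resampling happens within time points rather than over whole concentration-time profiles, which is exactly what the one-time sparse sampling design requires.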


Subjects
Data Interpretation, Statistical , Equivalence Trials as Topic , Therapeutic Equivalency , Area Under Curve , Cross-Over Studies , Dose-Response Relationship, Drug , Drugs, Generic , Humans