ABSTRACT
The in vitro dissolution profile has been shown to correlate with drug absorption and is often used as a metric for assessing in vitro bioequivalence between a test product and the corresponding reference product. Various methods have been developed to assess the similarity between two dissolution profiles. In particular, the similarity factor f2 has been reviewed and discussed extensively in the statistical literature. Although f2 lacks inferential statistical properties, the f2 estimate and its various modified versions remain the most widely used metrics for comparing dissolution profiles. In this paper, we investigate the performance of the naive f2 estimate method, the bootstrap f2 confidence interval method, and the bias-corrected and accelerated (BCa) bootstrap f2 confidence interval method for comparing dissolution profiles. Our studies show that the naive f2 estimate method and the BCa bootstrap f2 confidence interval method are unable to control the type I error rate. The bootstrap f2 confidence interval method can control the type I error rate below a specified level, but at the cost of substantial conservatism in the power of the test. To address the shortcomings of these methods, we recommend a bias-corrected (BC) bootstrap f2 confidence interval method. The type I error rate, power and sensitivity of the different f2 methods are compared through simulation. The recommended BC bootstrap f2 confidence interval method shows better control of the type I error rate than the naive f2 estimate method and the BCa bootstrap f2 confidence interval method, and it provides better power than the bootstrap f2 confidence interval method.
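As a rough sketch of the quantities involved, the following Python snippet computes the f2 similarity factor from mean dissolution profiles and a bias-corrected (BC) percentile bootstrap interval for it. The (units x time points) data layout, the resampling scheme and all settings are illustrative assumptions, not the simulation design of the paper.

```python
import numpy as np
from scipy.stats import norm

def f2(ref_mean, test_mean):
    """Similarity factor f2 from mean dissolution profiles at common time points."""
    d2 = np.mean((np.asarray(ref_mean) - np.asarray(test_mean)) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + d2))

def bc_bootstrap_f2_ci(ref_units, test_units, n_boot=5000, alpha=0.10, seed=0):
    """Bias-corrected (BC) percentile bootstrap CI for f2.

    ref_units, test_units: arrays of shape (n_units, n_time_points),
    one row per dosage unit (e.g., 12 tablets per batch) -- an assumed layout.
    """
    rng = np.random.default_rng(seed)
    ref_units, test_units = np.asarray(ref_units), np.asarray(test_units)
    f2_hat = f2(ref_units.mean(axis=0), test_units.mean(axis=0))

    boot = np.empty(n_boot)
    for b in range(n_boot):
        r_idx = rng.integers(0, ref_units.shape[0], ref_units.shape[0])
        t_idx = rng.integers(0, test_units.shape[0], test_units.shape[0])
        boot[b] = f2(ref_units[r_idx].mean(axis=0), test_units[t_idx].mean(axis=0))

    # Bias-correction constant z0 from the share of bootstrap values below f2_hat.
    z0 = norm.ppf(np.clip(np.mean(boot < f2_hat), 1e-6, 1 - 1e-6))
    lo_p = norm.cdf(2 * z0 + norm.ppf(alpha / 2))
    hi_p = norm.cdf(2 * z0 + norm.ppf(1 - alpha / 2))
    return f2_hat, np.quantile(boot, [lo_p, hi_p])
```

Under the usual criterion, similarity would then be declared when the lower confidence limit exceeds 50.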
Subjects
F Factor, Humans, Solubility, Therapeutic Equivalency, Bias
ABSTRACT
In this paper, a robust analysis of nitrogen dioxide (NO2) concentration measurements taken at the Belisario station (Quito, Ecuador) was performed. The data used for the analysis constitute a set of measurements taken from 1 January 2008 to 31 December 2019. The analysis was carried out in a robust way, defining variables that represent years, months, days and hours, and classifying these variables based on estimates of the central tendency and dispersion of the data. The estimators used here were classical, nonparametric, bootstrap-based, and robust. Additionally, confidence intervals based on these estimators were built and used to categorize the variables under study. The results of this research show that the NO2 concentration at the Belisario station is not harmful to humans. Moreover, this concentration tends to be stable across the years, changes slightly over the days of the week, and varies greatly when analyzed by month and hour of the day. The precision provided by both nonparametric and robust statistical methods served to demonstrate these findings comprehensively. Finally, it can be concluded that the city of Quito is progressing on the right path in terms of improving air quality, because a decreasing tendency in the NO2 concentration across the years was found. In addition, according to the Quito Air Quality Index, most of the observations fall in either the desirable or the acceptable level of air pollution, and the number of observations in the desirable level increases across the years.
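A minimal sketch of the kind of estimator comparison described above - a classical mean/standard deviation versus a robust median/MAD, each wrapped in a nonparametric percentile bootstrap interval - is given below; the NO2 series is simulated for illustration, not the Belisario data.

```python
import numpy as np
from scipy.stats import median_abs_deviation

def percentile_ci(values, stat, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric bootstrap percentile CI for an arbitrary statistic."""
    rng = np.random.default_rng(seed)
    values = np.asarray(values)
    boot = np.array([stat(rng.choice(values, values.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])

# Hypothetical hourly NO2 measurements (ug/m3) for one month.
no2 = np.random.default_rng(1).gamma(shape=4.0, scale=5.0, size=720)

print("mean  ", no2.mean(), percentile_ci(no2, np.mean))
print("median", np.median(no2), percentile_ci(no2, np.median))
print("std   ", no2.std(ddof=1), percentile_ci(no2, lambda x: x.std(ddof=1)))
print("MAD   ", median_abs_deviation(no2, scale="normal"),
      percentile_ci(no2, lambda x: median_abs_deviation(x, scale="normal")))
```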
ABSTRACT
This paper discusses the estimation of the stress-strength reliability parameter R = P(Y < X).
ABSTRACT
Under the assumption of missing at random, eight confidence intervals (CIs) for the difference between two correlated proportions in the presence of incomplete paired binary data are constructed on the basis of the likelihood ratio statistic, the score statistic, the Wald-type statistic, the hybrid method incorporating the Wilson score and Agresti-Coull (AC) intervals, and the bootstrap-resampling method. Extensive simulation studies are conducted to evaluate the performance of the presented CIs in terms of coverage probability and expected interval width. Our empirical results show that the Wilson-score-based hybrid CI and the Wald-type CI with the constrained maximum likelihood estimates perform well for small-to-moderate sample sizes, in the sense that (i) their empirical coverage probabilities are quite close to the prespecified confidence level, (ii) their expected interval widths are shorter, and (iii) their ratios of the mesial non-coverage to non-coverage probabilities lie in the interval [0.4, 0.6]. An example from a neurological study is used to illustrate the proposed methodologies.
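The hybrid construction mentioned above can be illustrated with a simple square-and-add (MOVER-type) interval that combines Wilson score limits for the two proportions; this sketch treats the two samples as independent and complete, so it omits both the correlation correction and the handling of incomplete pairs developed in the paper.

```python
import numpy as np
from scipy.stats import norm

def wilson_ci(x, n, alpha=0.05):
    """Wilson score interval for a single binomial proportion (x successes in n)."""
    z = norm.ppf(1 - alpha / 2)
    p = x / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

def hybrid_diff_ci(x1, n1, x2, n2, alpha=0.05):
    """Square-and-add interval for p1 - p2 built from the Wilson limits."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson_ci(x1, n1, alpha)
    l2, u2 = wilson_ci(x2, n2, alpha)
    d = p1 - p2
    lower = d - np.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = d + np.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper
```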
Subjects
Confidence Intervals, Models, Statistical, Randomized Controlled Trials as Topic/statistics & numerical data, Computer Simulation, Cross-Over Studies, Data Interpretation, Statistical, Humans, Matched-Pair Analysis, Meningitis/complications, Meningitis/drug therapy, Monte Carlo Method, Neurologic Examination/statistics & numerical data
ABSTRACT
Thailand is currently grappling with a severe air pollution problem, especially from small particulate matter (PM), which poses considerable threats to public health. Wind speed is pivotal in spreading these harmful particles through the atmosphere. Given the inherently unpredictable behavior of wind speed, our focus lies in establishing the confidence interval (CI) for the variance of wind speed data. To achieve this, we employ the delta-Birnbaum-Saunders (delta-BirSau) distribution. This statistical model allows wind speed data to be analyzed and offers valuable insights into its variability and potential implications for air quality. The intervals are derived from ten different methods built on the generalized confidence interval (GCI), the bootstrap confidence interval (BCI), the generalized fiducial confidence interval (GFCI), and the normal approximation (NA). Specifically, GCI, BCI, and GFCI are each combined with estimation of the proportion of zeros using the variance-stabilized transformation (VST), Wilson, and Hannig methods. To evaluate the performance of these methods, we conduct a simulation study using Monte Carlo simulations in the R statistical software, assessing the coverage probabilities and average widths of the proposed confidence intervals. The simulation results reveal that GFCI based on the Wilson method is optimal for small sample sizes, GFCI based on the Hannig method excels for medium sample sizes, and GFCI based on the VST method stands out for large sample sizes. To further validate the practical application of these methods, we use daily wind speed data from an industrial area in Prachin Buri and Rayong provinces, Thailand.
Subjects
Wind, Thailand, Models, Statistical, Monte Carlo Method, Air Pollution/analysis, Humans, Particulate Matter/analysis, Confidence Intervals, Computer Simulation
ABSTRACT
Purpose: Propensity score weighting for confounder control and multiple imputation to counter missing data are both widely used methods in epidemiological research. Combining the two is not trivial and requires a number of decisions to produce valid inference. In this tutorial, we outline the assumptions underlying each of the methods, present our considerations in combining the two, discuss the methodological and practical implications of our choices, and briefly point to alternatives. Throughout, we apply the theory to a research project about post-traumatic stress disorder in Syrian refugees.
Patients and Methods: We detail how we used logistic regression-based propensity scores to produce "standardized mortality ratio" weights and Substantive Model Compatible Full Conditional Specification (SMC-FCS) multiple imputation of missing data to obtain the estimate of association. Finally, a percentile confidence interval was produced by bootstrapping.
Results: A simple propensity score model with weight truncation at the 1st and 99th percentiles obtained acceptable balance on all covariates and was chosen as our model. Due to computational issues in the multiple imputation, two levels of one of the substantive-model covariates and two levels of one of the auxiliary covariates were collapsed. This slightly modified propensity score model was the substantive model in the SMC-FCS multiple imputation, and regression models were set up for all partially observed covariates. We set the number of imputations to 10 and the number of iterations to 40. We produced 999 bootstrap estimates to compute the 95% percentile confidence interval.
Conclusion: Combining propensity score weighting and multiple imputation is not a trivial task. We present the considerations necessary to do so, realizing it is demanding in terms of both workload and computational time; however, we do not consider the former a drawback, as it makes some of the underlying assumptions explicit, and the latter is a nuisance that will diminish with faster computers and better implementations.
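A minimal sketch of the weighting step, assuming a hypothetical covariate matrix X and a binary exposure indicator: exposed subjects keep weight 1 and unexposed subjects are reweighted by the propensity odds, with truncation at the 1st and 99th percentiles applied here to the non-unit weights (one plausible reading of the tutorial's truncation rule). The SMC-FCS imputation and the bootstrap loop around the whole pipeline are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def smr_weights(X, exposed, trunc=(1, 99)):
    """'Standardized mortality ratio' weights from a logistic propensity model.

    Exposed subjects receive weight 1; unexposed subjects receive ps / (1 - ps),
    so the unexposed are reweighted to the covariate distribution of the exposed.
    Weights of the unexposed are truncated at the given percentiles.
    """
    exposed = np.asarray(exposed)
    ps = LogisticRegression(max_iter=1000).fit(X, exposed).predict_proba(X)[:, 1]
    w = np.where(exposed == 1, 1.0, ps / (1.0 - ps))
    lo, hi = np.percentile(w[exposed == 0], trunc)
    w[exposed == 0] = np.clip(w[exposed == 0], lo, hi)
    return w
```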
ABSTRACT
We are mainly interested in estimating the stress-strength parameter, say R, when the parent distribution follows the well-known Marshall-Olkin model and the accessible data have the form of progressively Type-II censored samples. In this case, the stress-strength parameter is free of the base distribution employed in the Marshall-Olkin model, so we use the exponential distribution for simplicity. Maximum likelihood methods as well as some Bayesian approaches are used for estimation. The estimators of the latter approach are obtained using Lindley's approximation and Gibbs sampling, since the Bayesian estimator of R cannot be obtained in explicit form. Moreover, confidence intervals of various types are derived for R and then compared via a Monte Carlo simulation. Finally, the survival times of head and neck cancer patients treated with two therapies are analyzed for illustrative purposes.
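For the complete-sample exponential special case referred to above, the maximum likelihood estimator of R = P(Y < X) has the closed form R_hat = mean(X) / (mean(X) + mean(Y)); the sketch below computes it together with a simple percentile bootstrap interval. The progressively Type-II censored likelihood, Lindley's approximation and the Gibbs sampler used in the paper are not reproduced.

```python
import numpy as np

def r_mle(strength, stress):
    """MLE of R = P(Y < X) when X (strength) and Y (stress) are exponential."""
    return np.mean(strength) / (np.mean(strength) + np.mean(stress))

def r_bootstrap_ci(strength, stress, n_boot=4000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for R, resampling each sample independently."""
    rng = np.random.default_rng(seed)
    strength, stress = np.asarray(strength), np.asarray(stress)
    boot = [r_mle(rng.choice(strength, strength.size, replace=True),
                  rng.choice(stress, stress.size, replace=True))
            for _ in range(n_boot)]
    return r_mle(strength, stress), np.quantile(boot, [alpha / 2, 1 - alpha / 2])
```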
ABSTRACT
The bias-corrected bootstrap confidence interval (BCBCI) was once the method of choice for conducting inference on the indirect effect in mediation analysis due to its high power in small samples, but now it is criticized by methodologists for its inflated type I error rates. In its place, the percentile bootstrap confidence interval (PBCI), which does not adjust for bias, is currently the recommended inferential method for indirect effects. This study proposes two alternative bias-corrected bootstrap methods for creating confidence intervals around the indirect effect: one originally used by Stine (1989) with the correlation coefficient, and a novel method that implements a reduced version of the BCBCI's bias correction. Using a Monte Carlo simulation, these methods were compared to the BCBCI, PBCI, and Chen and Fritz (2021)'s 30% Winsorized BCBCI. The results showed that the methods perform on a continuum, where the BCBCI has the best balance (i.e., having closest to an equal proportion of CIs falling above and below the true effect), highest power, and highest type I error rate; the PBCI has the worst balance, lowest power, and lowest type I error rate; and the alternative bias-corrected methods fall between these two methods on all three performance criteria. An extension of the original simulation that compared the bias-corrected methods to the PBCI after controlling for type I error rate inflation suggests that the increased power of these methods might only be due to their higher type I error rates. Thus, if control over the type I error rate is desired, the PBCI is still the recommended method for use with the indirect effect. Future research should examine the performance of these methods in the presence of missing data, confounding variables, and other real-world complications to enhance the generalizability of these results.
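For reference, a compact sketch of the two baseline intervals discussed above applied to the indirect effect a*b of a single-mediator model, with the a and b paths estimated by ordinary least squares. The bias correction here is the standard BC adjustment, not the reduced correction proposed in the paper, and the data layout is assumed.

```python
import numpy as np
from scipy.stats import norm

def indirect_effect(x, m, y):
    """a*b from OLS fits of M ~ X and Y ~ X + M (single-mediator model)."""
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    b = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0][2]
    return a * b

def bootstrap_cis(x, m, y, n_boot=5000, alpha=0.05, seed=0):
    """Percentile (PCI) and bias-corrected (BC) bootstrap CIs for a*b."""
    x, m, y = map(np.asarray, (x, m, y))
    rng = np.random.default_rng(seed)
    est = indirect_effect(x, m, y)
    boot = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, x.size, x.size)
        boot[i] = indirect_effect(x[idx], m[idx], y[idx])
    pci = np.quantile(boot, [alpha / 2, 1 - alpha / 2])          # percentile CI
    z0 = norm.ppf(np.clip(np.mean(boot < est), 1e-6, 1 - 1e-6))  # bias correction
    bc = np.quantile(boot, norm.cdf([2 * z0 + norm.ppf(alpha / 2),
                                     2 * z0 + norm.ppf(1 - alpha / 2)]))
    return est, pci, bc
```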
ABSTRACT
Process capability indices (PCIs) are among the most effective techniques used in industry for determining the quality of products and the performance of manufacturing processes. In this article, we consider the PCI Cpc, which is based on the proportion of conformance and is applicable to normally as well as non-normally distributed processes and to continuous as well as discrete processes. To estimate the PCI Cpc when the process follows the exponentiated exponential distribution, we use five classical methods of estimation. The performances of these classical estimators are compared with respect to their biases and mean squared errors (MSEs) through a simulation study. In addition, confidence intervals for the index Cpc are constructed using five bootstrap confidence interval (BCI) methods. A Monte Carlo simulation study has been carried out to compare the performances of these five BCIs in terms of their average widths and coverage probabilities. Besides, a net sensitivity (NS) analysis for the given PCI Cpc is considered. We use two data sets related to the electronics and food industries and two failure-time data sets to illustrate the performance of the proposed methods of estimation and BCIs. Additionally, we develop the PCI Cpc using the aforementioned methods for the generalized Rayleigh distribution.
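Assuming the Perakis-Xekalaki form of the index, Cpc = (1 - p0)/(1 - p) with p0 = 0.9973 and p the process proportion of conformance (this specific form is an assumption on our part), the sketch below plugs in a nonparametric estimate of p and wraps it in a percentile bootstrap interval. The exponentiated exponential fit, the five classical estimators and the five BCI variants compared in the paper are not reproduced.

```python
import numpy as np

P0 = 0.9973  # conformance proportion of a centred normal process within +/- 3 sigma

def cpc(sample, lsl, usl):
    """Plug-in Cpc = (1 - P0) / (1 - p_hat), with p_hat the observed
    proportion of measurements inside the specification limits."""
    sample = np.asarray(sample)
    p_hat = np.mean((sample >= lsl) & (sample <= usl))
    return (1.0 - P0) / (1.0 - p_hat) if p_hat < 1.0 else np.inf

def cpc_percentile_ci(sample, lsl, usl, n_boot=3000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for Cpc (replicates with p_hat = 1 give inf)."""
    rng = np.random.default_rng(seed)
    sample = np.asarray(sample)
    boot = [cpc(rng.choice(sample, sample.size, replace=True), lsl, usl)
            for _ in range(n_boot)]
    return np.quantile(boot, [alpha / 2, 1 - alpha / 2])
```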
ABSTRACT
In this paper, a multiple step-stress model is designed and analyzed when the data are Type-I censored. The lifetime distributions of the experimental units at each stress level are assumed to follow a two-parameter Weibull distribution. Further, the distributions under the different stress levels are connected through a tampered failure-rate model. In a step-stress experiment, as the stress level increases, the load on the experimental units increases and hence the mean lifetime is expected to shorten. Taking this into account, the aim of this paper is to develop order-restricted inference for the parameters of a multiple step-stress model based on the frequentist approach. An extensive simulation study has been carried out, and two real data sets have been analyzed for illustrative purposes.
ABSTRACT
The Asian economic crisis of 1997 and the 2008 Global Financial Crisis (GFC) had far-reaching impacts on Asian and other global economies. Turmoil in the banking and finance sectors led to downturns in stock markets, resulting in bankruptcies, house repossessions and high unemployment. These crises have been shown to be correlated with a deterioration in mental health and an increase in suicides, and it is important to understand these impacts and how such recessions affect the health of affected populations. With the benefit of hindsight, did lessons learned from the negative effects of the 1997 Asian economic recession influence the aftermath of the 2008 GFC in Asian countries? Utilising a framework based on a simple strata-bootstrap algorithm applied to daily data - where available - we investigate the trend in suicide rates over time in three different populations (Hong Kong, Taiwan and South Korea), and examine whether there were any changes in the pattern of suicide rates in each country subsequent to both the 1997 Asian crisis and the 2008 GFC. We find that each country responded differently to each of the crises, and that the suicide rates of certain age-gender specific groups in each country were more affected.
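A minimal sketch of a strata-bootstrap of the kind referred to above: daily records are resampled with replacement within each stratum (for example, an age-gender group) and an overall rate is recomputed on every replicate. The column names and data layout are hypothetical.

```python
import numpy as np
import pandas as pd

def strata_bootstrap_rate(df, n_boot=2000, seed=0):
    """Percentile bootstrap CI for an overall rate, resampling days within strata.

    df is assumed to have columns: 'stratum' (e.g., age-gender group),
    'deaths' (daily count) and 'population' (person-days at risk).
    """
    rng = np.random.default_rng(seed)
    groups = [g for _, g in df.groupby("stratum")]
    rates = np.empty(n_boot)
    for b in range(n_boot):
        resampled = pd.concat([g.sample(len(g), replace=True,
                                        random_state=int(rng.integers(1 << 31)))
                               for g in groups])
        # Rate per 100,000, pooled over the resampled strata.
        rates[b] = resampled["deaths"].sum() / resampled["population"].sum() * 1e5
    return np.quantile(rates, [0.025, 0.975])
```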
ABSTRACT
Biomechanical processes are often manifested as one-dimensional (1D) trajectories. It has been shown that 1D confidence intervals (CIs) are biased when based on 0D statistical procedures, and the non-parametric 1D bootstrap CI has emerged in the Biomechanics literature as a viable solution. The primary purpose of this paper was to clarify that, for 1D biomechanics datasets, the distinction between 0D and 1D methods is much more important than the distinction between parametric and non-parametric procedures. A secondary purpose was to demonstrate that a parametric equivalent to the 1D bootstrap exists in the form of a random field theory (RFT) correction for multiple comparisons. To emphasize these points we analyzed six datasets consisting of force and kinematic trajectories in one-sample, paired, two-sample and regression designs. Results showed, first, that the 1D bootstrap and other 1D non-parametric CIs were qualitatively identical to RFT CIs, and all were very different from 0D CIs. Second, 1D parametric and 1D non-parametric hypothesis testing results were qualitatively identical for all six datasets. Last, we highlight the limitations of 1D CIs by demonstrating that they are complex, design-dependent, and thus non-generalizable. These results suggest that (i) analyses of 1D data based on 0D models of randomness are generally biased unless one explicitly identifies 0D variables before the experiment, and (ii) parametric and non-parametric 1D hypothesis testing provide an unambiguous framework for analysis when one's hypothesis explicitly or implicitly pertains to whole 1D trajectories.
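To make the 0D-versus-1D distinction concrete, the sketch below builds a bootstrap confidence band for a mean trajectory in two ways: pointwise (a 0D interval applied independently at every node) and simultaneous, calibrated with the bootstrap distribution of the maximum standardized deviation over the whole trajectory. It is a generic illustration of the idea, not the RFT procedure used in the paper.

```python
import numpy as np

def bootstrap_bands(trajectories, n_boot=2000, alpha=0.05, seed=0):
    """Pointwise vs simultaneous bootstrap confidence bands for a mean 1D trajectory.

    trajectories: array of shape (n_subjects, n_nodes), e.g. time-normalized force curves.
    """
    rng = np.random.default_rng(seed)
    y = np.asarray(trajectories)
    n, q = y.shape
    mean, se = y.mean(axis=0), y.std(axis=0, ddof=1) / np.sqrt(n)

    boot_means = np.empty((n_boot, q))
    max_dev = np.empty(n_boot)
    for b in range(n_boot):
        yb = y[rng.integers(0, n, n)]
        mb, sb = yb.mean(axis=0), yb.std(axis=0, ddof=1) / np.sqrt(n)
        boot_means[b] = mb
        max_dev[b] = np.max(np.abs(mb - mean) / sb)   # field-wide maximum deviation

    pointwise = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2], axis=0)
    c = np.quantile(max_dev, 1 - alpha)               # simultaneous critical value
    simultaneous = np.vstack([mean - c * se, mean + c * se])
    return pointwise, simultaneous
```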