ABSTRACT
Double sampling is commonly used to collect the necessary information when an infallible classifier is available to validate a subset of a sample that has already been classified by a fallible classifier. Inference procedures have previously been developed for the partially validated data obtained from such double-sampling processes. In practice, however, an infallible classifier or gold standard may not exist. In this article, we consider the case in which both classifiers are fallible and propose asymptotic and approximate unconditional test procedures based on six test statistics for a population proportion, together with five approximate sample size formulae based on the recommended test procedures under two models. Our results suggest that both the asymptotic and the approximate unconditional procedures based on the score statistic perform satisfactorily for small to large sample sizes and are highly recommended. When the sample size is moderate or large, the asymptotic procedures based on the Wald statistic with the variance estimated under the null hypothesis, the likelihood ratio statistic, and the log- and logit-transformation statistics generally perform well under both models and are hence recommended. When the sample size is small, the approximate unconditional procedures based on the log-transformation statistic under Model I, and on the Wald statistic with the variance estimated under the null hypothesis and the log- and logit-transformation statistics under Model II, are recommended. In general, the sample size formulae based on the Wald statistic with the variance estimated under the null hypothesis, the likelihood ratio statistic, and the score statistic are recommended for practical applications. The applicability of the proposed methods is illustrated with a real-data example.
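To fix intuition for the statistics named above, the sketch below contrasts the ordinary Wald statistic with the score statistic for the plain one-sample binomial case. This is only an illustrative simplification: the article develops these statistics for partially validated double-sampling data with two fallible classifiers, where the Wald statistic with the variance estimated under the null and the score statistic genuinely differ; in the one-sample case shown here the null-variance Wald statistic coincides with the score statistic.

```python
import math

def wald_stat(x, n, p0):
    """Wald statistic for H0: p = p0, with the variance
    evaluated at the MLE p_hat = x / n."""
    p_hat = x / n
    return (p_hat - p0) / math.sqrt(p_hat * (1 - p_hat) / n)

def score_stat(x, n, p0):
    """Score statistic for H0: p = p0, with the variance
    evaluated under the null (here this is identical to the
    'Wald with null variance' form of the one-sample test)."""
    p_hat = x / n
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
```

For example, with x = 30 successes in n = 100 trials and p0 = 0.25, the two statistics differ only through the variance estimate in the denominator, which is the distinction the abstract's recommendations turn on.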
Subject(s)
Statistical Models, Sampling, Algorithms, Humans, Likelihood Functions, Norway, Sample Size
ABSTRACT
Double-sampling schemes, in which one classifier assesses the whole sample and a second classifier assesses a subset of it, have been introduced to reduce classification errors when an infallible or gold-standard classifier is unavailable or impractical. Inference procedures have previously been proposed for situations in which an infallible classifier is available to validate a subset of a sample already classified by a fallible classifier. Here, we consider the case in which both classifiers are fallible, and we propose and evaluate several confidence interval procedures for a proportion under two models, distinguished by their assumptions about the ascertainment of the two classifiers. Simulation results suggest that, under the model with the conditional independence assumption, the modified Wald-based confidence interval, the score-based confidence interval, two Bayesian credible intervals, and the percentile bootstrap confidence interval perform reasonably well even for small binomial proportions and small validated samples; under the model without the conditional independence assumption, the confidence interval derived from the Wald test with nuisance parameters appropriately evaluated, the likelihood ratio-based confidence interval, the score-based confidence interval, and the percentile bootstrap confidence interval perform satisfactorily in terms of coverage. Moreover, confidence intervals based on log- and logit-transformations also perform well under both models when the binomial proportion and the fraction of the sample that is validated are not very small. Two examples illustrate the procedures.
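The percentile bootstrap interval recommended above can be sketched for a plain binomial proportion. This is an illustrative simplification under stated assumptions: the article applies the bootstrap to the double-sampling estimator of the proportion, whereas here the resampled statistic is simply the sample proportion of a 0/1 sequence, and the `proportion` helper is introduced only for this example.

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI: resample the data with replacement
    n_boot times, compute the statistic on each resample, and take
    the empirical alpha/2 and 1 - alpha/2 quantiles."""
    rng = random.Random(seed)
    n = len(data)
    boots = sorted(
        stat([data[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_boot)
    )
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def proportion(xs):
    """Sample proportion of a 0/1 sequence (hypothetical helper)."""
    return sum(xs) / len(xs)
```

A usage example: for a sample of 30 successes and 70 failures, `percentile_bootstrap_ci([1] * 30 + [0] * 70, proportion)` returns an interval bracketing the observed proportion 0.30. The percentile method needs no variance formula, which is why it remains usable when the estimator's variance is awkward to derive, as in the double-sampling setting.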