ABSTRACT
Genome-wide association studies (GWASs) have been performed to identify host genetic factors for a range of phenotypes, including infectious diseases. Population-based common control subjects from biobanks and large consortia are a valuable resource for increasing sample sizes in the identification of associated loci at minimal additional expense. Non-differential misclassification of the outcome has been reported when control subjects are not well characterized, which typically attenuates the true effect size. For infectious diseases, however, comparing affected subjects to population-based common control subjects regardless of pathogen exposure can also introduce selection bias. Through simulated comparisons of pathogen-exposed cases and population-based common control subjects, we demonstrate that failing to account for pathogen exposure can produce biased effect estimates and spurious genome-wide significant signals. Moreover, the observed association can be distorted depending on the strength of the association between a locus and pathogen exposure and on the prevalence of pathogen exposure. We also present a real-data example from the hepatitis C virus (HCV) genetic consortium comparing HCV spontaneous clearance to persistent infection, using both well-characterized control subjects and population-based common control subjects from the UK Biobank. We find biased effect estimates for known HCV clearance-associated loci and potentially spurious HCV clearance associations. These findings suggest that the choice of control subjects is especially important for infectious diseases and other outcomes that are conditional upon environmental exposures.
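As a concrete illustration of the selection mechanism this abstract describes, the following minimal simulation (our sketch, not the authors' code; the allele frequency, exposure prevalence, and effect sizes are all assumptions) shows how a locus that influences pathogen exposure, but not the outcome, picks up a nonzero effect estimate when cases are compared against population-based controls:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
g = rng.binomial(2, 0.3, n)                       # genotype at a locus affecting exposure only

p_exposed = 1 / (1 + np.exp(-(-2.0 + 0.4 * g)))   # locus raises odds of pathogen exposure
exposed = rng.binomial(1, p_exposed)
outcome = exposed * rng.binomial(1, 0.3, n)       # outcome (e.g., clearance) requires exposure

cases = outcome == 1
pop_ctrl = ~cases                                 # population-based common controls
exp_ctrl = (exposed == 1) & ~cases                # well-characterized, pathogen-exposed controls

def allelic_log_or(case_mask, ctrl_mask):
    a = g[case_mask].sum(); b = 2 * case_mask.sum() - a
    c = g[ctrl_mask].sum(); d = 2 * ctrl_mask.sum() - c
    return np.log((a * d) / (b * c))

print("log-OR vs population controls:", round(allelic_log_or(cases, pop_ctrl), 3))  # biased away from 0
print("log-OR vs exposed controls:   ", round(allelic_log_or(cases, exp_ctrl), 3))  # approximately 0
```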
Subjects
Communicable Diseases, Hepatitis C, Humans, Genome-Wide Association Study, Communicable Diseases/genetics, Phenotype, Hepatitis C/genetics, Hepacivirus
ABSTRACT
Human Immunodeficiency Virus (HIV)-positive individuals lost to follow-up from particular clinics may not be lost to care (LTC). After linking Vanderbilt's Comprehensive Care Clinic cohort to Tennessee's statewide HIV surveillance database, the 10-year LTC estimate decreased from 48.4% to 35.0%. Routine linkage to surveillance data by domestic HIV clinics would improve the accuracy of LTC and retention measures.
Subjects
Anti-HIV Agents, HIV Infections, HIV Seropositivity, Humans, HIV Infections/epidemiology, HIV Infections/drug therapy, HIV, Anti-HIV Agents/therapeutic use, HIV Seropositivity/drug therapy, Ambulatory Care Facilities
ABSTRACT
We spend a great deal of time on confounding in our teaching, in our methods development, and in our assessment of study results. This may give the impression that uncontrolled confounding is the biggest problem observational epidemiology faces, when, in fact, other sources of bias, such as selection bias, measurement error, missing data, and misalignment of zero time, may often (especially if they are all present in a single study) lead to a stronger deviation from the truth. Compared to the amount of time we spend teaching how to address confounding in a data analysis, we spend relatively little time teaching methods for simulating confounding (and other sources of bias) to learn their impact and to develop plans to mitigate or quantify the bias. We review a paper by Desai et al. that uses simulation methods to quantify the impact of an unmeasured confounder when it is completely missing or when a proxy of the confounder is measured. We use this article to discuss how simulating sources of bias can help us generate better and more valid study estimates, and we discuss the importance of simulating realistic datasets with plausible bias structures to guide data collection. If an advanced life form existed outside of our current universe and came to Earth with the goal of scouring the published epidemiologic literature to understand the biggest problem epidemiologists face, they would quickly discover that the limitations sections of publications provide all the information they need, and they would most likely conclude that the biggest problem we face is uncontrolled confounding. It seems to be an obsession of ours.
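In the spirit of the simulation methods discussed above, here is a minimal sketch (our illustration, not Desai et al.'s code; all effect sizes are assumed) of how an unmeasured confounder biases an exposure effect estimate, and how much of that bias a noisy proxy removes:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100_000
u = rng.normal(size=n)                            # unmeasured confounder
proxy = u + rng.normal(size=n)                    # noisy proxy of U (reliability 0.5)
x = rng.binomial(1, 1 / (1 + np.exp(-u)))         # exposure depends on U
y = 0.0 * x + 0.5 * u + rng.normal(size=n)        # true exposure effect is zero

for label, covs in [("crude", [x]), ("proxy-adjusted", [x, proxy]), ("U-adjusted", [x, u])]:
    fit = sm.OLS(y, sm.add_constant(np.column_stack(covs))).fit()
    print(f"{label:14s} beta_x = {fit.params[1]: .3f}")
# crude is biased; the proxy removes part of the bias; adjusting for U removes it all
```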
ABSTRACT
Gender is an observed effect modifier of the association between loneliness and memory aging. However, this effect modification may be a result of information bias due to differential under-reporting of loneliness by gender. We applied probabilistic bias analyses to examine whether effect modification of the loneliness-memory decline relationship by gender is retained under three simulation scenarios with varying magnitudes of differential loneliness under-reporting between men and women. Data were from biennial interviews with adults aged 50+ in the US Health and Retirement Study from 1996 to 2016 (5,646 women and 3,386 men). Loneliness status (yes vs. no) was measured from 1996 to 2004 using the CES-D loneliness item, and memory was measured from 2004 to 2016. Simulated sensitivity and specificity of the loneliness measure were informed by a validation study using the UCLA Loneliness Scale as a gold standard. The likelihood of observing effect modification by gender was higher than 90% in all simulations, although it decreased as the gender difference in the magnitude of loneliness under-reporting increased. The gender difference in loneliness under-reporting did not meaningfully affect the observed effect modification by gender in our simulations. Our simulation approach may be a promising way to quantify potential information bias in effect modification analyses.
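The core move in a probabilistic bias analysis like this one can be sketched in a few lines: draw sensitivity and specificity from prior distributions, back-correct the observed counts, and summarize the distribution of bias-adjusted estimates. The sketch below (ours; the 2x2 counts and Beta priors are made up, and the actual study corrected record-level loneliness status within gender strata) shows the mechanics for a single 2x2 table:

```python
import numpy as np

rng = np.random.default_rng(3)
a, b, c, d = 200, 800, 120, 880          # observed exposed/unexposed cases/controls (hypothetical)

ors = []
for _ in range(10_000):
    se = rng.beta(60, 15)                # sensitivity prior (under-reporting means se < 1)
    sp = rng.beta(95, 5)                 # specificity prior
    A = (a - (a + b) * (1 - sp)) / (se + sp - 1)   # expected true exposed cases
    C = (c - (c + d) * (1 - sp)) / (se + sp - 1)   # expected true exposed controls
    B, D = (a + b) - A, (c + d) - C
    if min(A, B, C, D) > 0:              # discard draws implying impossible counts
        ors.append((A * D) / (B * C))

print("median bias-adjusted OR:", round(float(np.median(ors)), 2))
print("95% simulation interval:", np.round(np.percentile(ors, [2.5, 97.5]), 2))
```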
ABSTRACT
The self-controlled case series (SCCS) research design is increasingly used in pharmacoepidemiologic studies of drug-drug interactions (DDIs), with the target of inference being the incidence rate ratio (IRR) associated with concomitant exposure to the object plus precipitant drug versus the object drug alone. While day-level drug exposure can be inferred from dispensing claims, these inferences may be inaccurate, leading to biased IRRs. Grace periods (intervals during which treatment effect is assumed to continue after the days' supply is exhausted) are frequently used by researchers, but the impact of grace period decisions on bias from exposure misclassification remains unclear. Motivated by an SCCS study examining the potential DDI between clopidogrel (object) and warfarin (precipitant), we investigated bias due to precipitant or object exposure misclassification using simulations. We show that misclassified precipitant treatment always biases the estimated IRR toward the null, whereas misclassified object treatment may lead to bias in either direction or to no bias, depending on the scenario. Further, including a grace period for each object dispensing may unintentionally increase the risk of misclassification bias. To minimize such bias, we recommend 1) avoiding grace periods when specifying object drug exposure episodes; and 2) including a washout period following each precipitant-exposed period.
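To make the grace-period decision concrete, here is a minimal sketch (ours; dates and days' supply values are hypothetical) of building day-level exposure from dispensing claims, showing how a grace period reclassifies a between-fills gap as exposed time:

```python
from datetime import date, timedelta

def exposure_days(dispensings, grace_days=0):
    """dispensings: list of (fill_date, days_supply) tuples. Returns the set of covered dates."""
    covered = set()
    for fill, supply in dispensings:
        for k in range(supply + grace_days):
            covered.add(fill + timedelta(days=k))
    return covered

claims = [(date(2024, 1, 1), 30), (date(2024, 2, 15), 30)]   # a 14-day gap between fills
gap_day = date(2024, 2, 5)
print(gap_day in exposure_days(claims, grace_days=0))    # False: the gap counts as unexposed
print(gap_day in exposure_days(claims, grace_days=14))   # True: the grace period absorbs the gap
```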
ABSTRACT
The test-negative design (TND) is a popular method for evaluating vaccine effectiveness (VE). A "classical" TND study includes symptomatic individuals tested for the disease targeted by the vaccine, to estimate VE against symptomatic infection. However, recent applications of the TND have attempted to estimate VE against infection by including all tested individuals, regardless of their symptoms. In this article, we use directed acyclic graphs and simulations to investigate potential biases in TND studies of COVID-19 VE arising from this "alternative" approach, particularly when applied during periods of widespread testing. We show that including asymptomatic individuals can lead to collider stratification bias, uncontrolled confounding by health and healthcare-seeking behaviors (HSBs), and differential outcome misclassification. While our focus is on the COVID-19 setting, the issues discussed here may also be relevant for other infectious diseases. This may be particularly true where there is a high baseline prevalence of infection, a strong correlation between HSBs and vaccination, or different testing practices for vaccinated and unvaccinated individuals, or where the vaccine under study attenuates symptoms of infection and diagnostic accuracy is modified by the presence of symptoms.
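A toy simulation (ours; all probabilities are assumptions) of the collider mechanism: healthcare-seeking behavior drives both vaccination and asymptomatic testing, so restricting to all tested individuals manufactures apparent effectiveness for a vaccine with no true effect on infection:

```python
import numpy as np

rng = np.random.default_rng(11)
n = 1_000_000
h = rng.binomial(1, 0.5, n)                       # healthcare-seeking behavior (HSB)
v = rng.binomial(1, 0.2 + 0.4 * h)                # HSB raises vaccination probability
inf = rng.binomial(1, 0.05, n)                    # vaccine has NO true effect on infection
symptomatic = (inf == 1) & (rng.random(n) < 0.6)
tested = symptomatic | (rng.random(n) < 0.05 + 0.30 * h)   # asymptomatic testing follows HSB

a = np.sum((v == 1) & (inf == 1) & tested); b = np.sum((v == 1) & (inf == 0) & tested)
c = np.sum((v == 0) & (inf == 1) & tested); d = np.sum((v == 0) & (inf == 0) & tested)
print("apparent VE among all tested:", round(1 - (a * d) / (b * c), 3))   # > 0 despite a null vaccine
```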
ABSTRACT
Clinicians, researchers, regulators, and other decision-makers increasingly rely on evidence from real-world data (RWD), including data routinely accumulating in health and administrative databases. RWD studies often rely on algorithms to operationalize variable definitions; an algorithm is a combination of codes or concepts used to identify persons with a specific health condition or characteristic. Establishing the validity of algorithms is a prerequisite for generating valid study findings that can ultimately inform evidence-based health care. This paper aims to systematize the terminology, methods, and practical considerations relevant to the conduct of validation studies of RWD-based algorithms. We discuss measures of algorithm accuracy; gold/reference standards; study size; prioritization of accuracy measures; algorithm portability; and implications for interpretation. Information bias is common in epidemiologic studies, underscoring the importance of transparency in decisions regarding the choice and prioritization of algorithm validity measures. The validity of an algorithm should be judged in the context of a data source, and one size does not fit all. Prioritizing validity measures within a given data source depends on the role of a given variable in the analysis (eligibility criterion, exposure, outcome, or covariate). Validation work should be part of the routine maintenance of RWD sources.
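For reference, the accuracy measures discussed here follow directly from the validation 2x2 table; a small helper (ours, with hypothetical counts) makes the definitions explicit:

```python
def algorithm_accuracy(tp, fp, fn, tn):
    """Accuracy measures for an RWD algorithm validated against a reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # P(algorithm positive | condition present)
        "specificity": tn / (tn + fp),   # P(algorithm negative | condition absent)
        "ppv": tp / (tp + fp),           # P(condition present | algorithm positive)
        "npv": tn / (tn + fn),           # P(condition absent | algorithm negative)
    }

# e.g., a hypothetical validation of 1,000 charts:
print(algorithm_accuracy(tp=90, fp=30, fn=10, tn=870))
```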
ABSTRACT
PURPOSE: Germline variant interpretation often depends on population-matched control cohorts. This is not feasible for population groups that are underrepresented in current population reference databases. METHODS: We classified germline variants with population-matched controls for two ancestrally diverse patient cohorts: 132 early-onset or familial colorectal carcinoma patients from Singapore and 100 early-onset colorectal carcinoma patients from the United States. The effects of using a population-mismatched control cohort were simulated by swapping the control cohorts used for each patient cohort, with or without the popmax computational strategy. RESULTS: Population-matched classifications revealed a combined 62 pathogenic or likely pathogenic (P/LP) variants in 34 genes across both cohorts. Using a population-mismatched control cohort resulted in misclassification of non-P/LP variants as P/LP, driven by the absence of ancestry-specific rare variants in the control cohort. Popmax was more effective in alleviating misclassifications for the Singapore cohort than for the US cohort. CONCLUSION: Underrepresented population groups can suffer from higher rates of false-positive P/LP results. Popmax can partially alleviate these misclassifications, but its efficacy still depends on the degree to which the population groups are represented in the control cohort.
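The popmax strategy referenced here takes the highest allele frequency observed across major reference populations, so a variant is not treated as rare merely because the patient's ancestry group is missing from the database. A minimal sketch (ours; population labels and frequencies are hypothetical):

```python
def popmax_af(af_by_population):
    """Highest allele frequency across reference populations, e.g. {'nfe': 1e-5, 'eas': 4e-3}."""
    return max(af_by_population.values(), default=0.0)

# A variant common in one (underrepresented) group is filtered out as too
# common under popmax, even if it is rare or absent in a mismatched control cohort:
afs = {"nfe": 0.00001, "eas": 0.004, "afr": 0.0002}
print(popmax_af(afs) > 0.001)   # True: excluded from pathogenic candidacy
```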
Subjects
Colorectal Neoplasms, Germ-Line Mutation, Humans, Germ-Line Mutation/genetics, Singapore, Colorectal Neoplasms/genetics, United States, Cohort Studies, Male, Female, Genetic Predisposition to Disease, Genetics, Population/methods, Case-Control Studies, Minority Groups, Databases, Genetic
ABSTRACT
BACKGROUND: The presence of lymph node (LN) metastasis is a known negative prognostic factor in appendix cancer (AC) patients. However, the minimum number of LNs required to adequately determine LN negativity is currently extrapolated from colorectal studies, and data specific to AC are lacking. We aimed to define the lowest number of LNs required to adequately stage AC and to assess its impact on oncologic outcomes. METHODS: Patients with stage II-III AC from the National Cancer Database (NCDB 2004-2019) undergoing surgical resection with complete information about LN examination were included. Multivariable logistic regression assessed the odds of LN-positive (LNP) disease for different numbers of LNs examined. Multivariable Cox regressions were performed by LN status subgroup, adjusted for prognostic factors including grade, histologic subtype, surgical approach, and documented adjuvant systemic chemotherapy. RESULTS: Overall, 3,602 patients were included, of whom 1,026 (28.5%) were LNP. Ten was the minimum number of LNs harvested at which the odds of LNP disease were not decreased relative to the reference category (≥ 20 LNs). Fewer than 10 LNs were examined in 466 (12.9%) patients. Median follow-up from diagnosis was 75.4 months. Failing to evaluate at least ten LNs was an independent negative prognostic factor for overall survival (adjusted hazard ratio 1.39, p < 0.01). CONCLUSIONS: In appendix adenocarcinoma, examining a minimum of ten LNs was necessary to minimize the risk of missing LNP disease and was associated with improved overall survival. To mitigate the risk of misclassification, an adequate number of regional LNs must be assessed to determine LN status.
Subjects
Adenocarcinoma, Appendiceal Neoplasms, Appendix, Humans, Lymph Node Excision, Appendix/pathology, Neoplasm Staging, Lymph Nodes/pathology, Adenocarcinoma/surgery, Prognosis, Appendiceal Neoplasms/pathology, Lymphatic Metastasis/pathology, Retrospective Studies
ABSTRACT
BACKGROUND: Migraine is common in women of reproductive age. Migraine's episodic manifestation and its acute and preventive pharmacological treatment options make it challenging to study drug safety for this condition during pregnancy. To improve such studies, we aimed to develop algorithms to identify and characterize migraine in electronic healthcare registries and to assess the level of care. METHODS: We linked four registries to detect pregnancies from 2009 to 2018 and used three algorithms for migraine identification: i) diagnostic codes, ii) triptans dispensed, and iii) a combination of both. We assessed migraine severity using dispensed drugs as proxies. ICD-10 diagnostic subcodes of migraine (G43) allowed allocation to four subtypes: complicated and/or status migrainosus; with aura; without aura; and other/unspecified. RESULTS: We included 535,089 pregnancies in 367,908 women with an available one-year lookback. The prevalence of identified migraine was 2.9%-4.3% before and 0.8%-1.5% during pregnancy, depending on the algorithm used. Pregnant women with migraine were mostly managed in primary care. CONCLUSIONS: Primary care data combined with drug dispensation records were instrumental for identifying migraine in electronic healthcare registries. Data from secondary care and drug dispensations allow better characterization of migraine. Jointly, these algorithms may contribute to improved perinatal pharmacoepidemiological studies in this population by addressing confounding by maternal migraine indication.
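Schematically, the three identification algorithms reduce to the following (our sketch; the function and field names are hypothetical, while G43 is the ICD-10 migraine code named in the abstract and N02CC is the ATC class for triptans):

```python
def identify_migraine(diagnosis_codes, dispensed_atc_codes, algorithm):
    """Flag a person as having migraine under one of the three algorithms."""
    has_dx = any(code.startswith("G43") for code in diagnosis_codes)           # algorithm i
    has_triptan = any(atc.startswith("N02CC") for atc in dispensed_atc_codes)  # algorithm ii
    if algorithm == "diagnosis":
        return has_dx
    if algorithm == "triptan":
        return has_triptan
    return has_dx or has_triptan                                               # algorithm iii

print(identify_migraine(["G43.1"], [], "combined"))      # True (diagnosis only)
print(identify_migraine([], ["N02CC01"], "diagnosis"))   # False (triptan only, strict algorithm)
```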
Subjects
Migraine Disorders, Pregnancy Complications, Registries, Humans, Female, Pregnancy, Migraine Disorders/epidemiology, Migraine Disorders/diagnosis, Migraine Disorders/drug therapy, Norway/epidemiology, Adult, Pregnancy Complications/epidemiology, Pregnancy Complications/diagnosis, Cohort Studies, Tryptamines/therapeutic use, Algorithms, Young Adult
ABSTRACT
The misclassification of the species Pasteurella caecimuris Lagkouvardos et al. 2016, along with the heterotypic synonymy between P. caecimuris and Rodentibacter heylii Adhikary et al. 2017, has long been recognized. However, no formal assignment of P. caecimuris to its correct taxonomic position has been made, and the nomenclatural consequences have therefore not been implemented. In the present study, the author first re-evaluates the taxonomic relationships of P. caecimuris using genome-based approaches, confirming the need for reclassification to the genus Rodentibacter and presenting evidence of the synonymy between R. heylii and P. caecimuris. The author then proposes the new name Rodentibacter caecimuris comb. nov. and, based on the priority of their specific epithets, treats Rodentibacter heylii as a later heterotypic synonym of Rodentibacter caecimuris.
Subjects
Fatty Acids, Pasteurella, Pasteurellaceae, DNA, Bacterial/genetics, Phylogeny, RNA, Ribosomal, 16S/genetics, Bacterial Typing Techniques, Sequence Analysis, DNA, Base Composition, Fatty Acids/chemistry
ABSTRACT
We consider the problem of combining multiple biomarkers to improve the diagnostic accuracy of detecting a disease when only group-tested data on the disease status are available. There are several challenges in addressing this problem, including unavailable individual disease statuses, differential misclassification depending on group size and the number of diseased individuals in the group, and extensive computation due to the large number of possible combinations of multiple biomarkers. To tackle these issues, we propose a pairwise model-fitting approach to estimating the distribution of the optimal linear combination of biomarkers and its diagnostic accuracy under the assumption of a multivariate normal distribution. The approach is evaluated in simulation studies and applied to data on chlamydia detection and COVID-19 diagnosis.
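Under the multivariate normality assumed here, the optimal linear combination of biomarkers has the classical closed form w = Σ⁻¹(μ₁ − μ₀). The sketch below (ours, on simulated individual-level data with made-up parameters) illustrates that combination and its AUC; the paper's group-testing machinery, which fits the model under differential misclassification without individual statuses, is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(4)
mu0, mu1 = np.array([0.0, 0.0]), np.array([0.8, 0.5])   # biomarker means: non-diseased, diseased
cov = np.array([[1.0, 0.3], [0.3, 1.0]])                # shared biomarker covariance

w = np.linalg.solve(cov, mu1 - mu0)                     # optimal weights: Sigma^{-1}(mu1 - mu0)
s0 = rng.multivariate_normal(mu0, cov, 5_000) @ w       # combined scores, non-diseased
s1 = rng.multivariate_normal(mu1, cov, 5_000) @ w       # combined scores, diseased

auc = (s1[:, None] > s0[None, :]).mean()                # empirical AUC of the combination
print("AUC of the optimal linear combination:", round(float(auc), 3))
```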
ABSTRACT
Research on dynamic treatment regimes has attracted extensive interest. Many methods have been proposed in the literature, which, however, are vulnerable to the presence of misclassification in covariates. In particular, although Q-learning has received considerable attention, its applicability to data with misclassified covariates is unclear. In this article, we investigate how ignoring misclassification in binary covariates can impact the determination of optimal decision rules in randomized treatment settings, and we demonstrate its deleterious effects on Q-learning through empirical studies. We present two correction methods to address misclassification effects on Q-learning. Numerical studies reveal that misclassification in covariates induces non-negligible estimation bias and that the correction methods successfully ameliorate the bias in parameter estimation.
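A one-stage toy example (ours; the outcome model and error rate are assumptions, and none of the article's correction methods is implemented) of the phenomenon: the Q-function's treatment-covariate interaction, which determines the decision rule, attenuates when the binary tailoring covariate is misclassified:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 50_000
x = rng.binomial(1, 0.5, n)                           # true tailoring covariate
a = rng.binomial(1, 0.5, n)                           # randomized treatment
y = 1.0 * a * x - 0.5 * a + rng.normal(size=n)        # optimal rule: treat only if x == 1
x_err = np.where(rng.random(n) < 0.25, 1 - x, x)      # 25% misclassification

for label, xv in [("true X", x), ("misclassified X", x_err)]:
    fit = sm.OLS(y, sm.add_constant(np.column_stack([xv, a, a * xv]))).fit()
    b_a, b_ax = fit.params[2], fit.params[3]
    print(f"{label:16s} Q-rule: treat if {b_a:.2f} + {b_ax:.2f} * x > 0")
# the interaction attenuates (about 1.0 -> 0.5), distorting the estimated decision rule
```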
Subjects
Clinical Decision Rules, Machine Learning, Humans
ABSTRACT
OBJECTIVE: To assess the association between childhood body fatness and epithelial ovarian cancer (EOC), and whether this association differs by type of EOC. METHODS: Using data from a population-based case-control study (497 cases and 902 controls) conducted in Montreal, Canada in 2011-2016, we examined the association between childhood body fatness and EOC, overall and separately for invasive vs. borderline EOC. A figure rating scale was used to measure body fatness at ages 5 and 10. Multivariable logistic regression was used to estimate adjusted odds ratios (aORs) and 95% confidence intervals (CIs). Quantitative bias analyses were conducted to assess the impact of exposure misclassification and non-participation. RESULTS: The aOR (95% CI) of overall EOC for high vs. low body fatness was 1.07 (0.85-1.34) at age 5 and 1.28 (0.98-1.68) at age 10. The associations were stronger for invasive EOC, specifically the endometrioid histological type. For borderline cancers, the aORs were below the null value, with wide confidence intervals. Bias analyses did not reveal a strong influence of non-participation. Non-differential exposure misclassification may have biased aORs toward the null for invasive cancers but did not appear to have an appreciable influence on the aORs for borderline cancers. CONCLUSIONS: Childhood body fatness may be a risk factor for invasive EOC in later adult life. Our study highlights the potential importance of examining early-life factors for a comprehensive understanding of EOC development.
Subjects
Neoplasms, Glandular and Epithelial, Ovarian Neoplasms, Child, Adult, Humans, Female, Carcinoma, Ovarian Epithelial/epidemiology, Carcinoma, Ovarian Epithelial/etiology, Ovarian Neoplasms/epidemiology, Ovarian Neoplasms/etiology, Ovarian Neoplasms/pathology, Case-Control Studies, Neoplasms, Glandular and Epithelial/epidemiology, Neoplasms, Glandular and Epithelial/etiology, Risk Factors
ABSTRACT
BACKGROUND: Misclassification bias (MB) is the deviation of measured from true values due to incorrect case assignment. This study compared MB when cystectomy status was determined using administrative database codes vs. predicted cystectomy probability. METHODS: We identified every primary cystectomy-diversion type at a single hospital from 2009 to 2019. We linked to claims data to measure the true association of cystectomy with 30 patient and hospitalization factors. Associations were also measured when cystectomy status was assigned using billing codes and using the cystectomy probability from a multivariable logistic regression model with covariates from administrative data. MB was the difference between measured and true associations. RESULTS: 500 people underwent cystectomy (0.12% of 428,677 hospitalizations). Sensitivity and positive predictive values for cystectomy codes were 97.1% and 58.6% for incontinent diversions and 100.0% and 48.4% for continent diversions, respectively. The model accurately predicted cystectomy-incontinent diversion (c-statistic [C] 0.999, Integrated Calibration Index [ICI] 0.000) and cystectomy-continent diversion (C 1.000, ICI 0.000) probabilities. MB was significantly lower when model-based predictions were used to impute cystectomy-diversion type status, for both incontinent cystectomy (F = 12.75; p < .0001) and continent cystectomy (F = 11.25; p < .0001). CONCLUSIONS: A model using administrative data accurately returned the probability that cystectomy by diversion type occurred during a hospitalization. Using this model to impute cystectomy status minimized MB. The accuracy of administrative database research can be increased by using probabilistic imputation to determine case status instead of individual codes.
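A toy version of the comparison (ours, with made-up parameters chosen so the billing code's PPV lands near the reported 50-60%): when the prediction model discriminates nearly perfectly, as reported here (C ≈ 0.999), probability-based imputation recovers an association that code-based assignment attenuates:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 200_000
w = rng.normal(size=n)                                  # a strong administrative-data predictor
true = rng.binomial(1, 1 / (1 + np.exp(10 - 6 * w)))    # true case status (roughly 5% prevalence)
code = np.where(true == 1, rng.binomial(1, 0.99, n),    # sensitive billing code...
                rng.binomial(1, 0.04, n))               # ...with many false positives (PPV ~ 0.6)

X = sm.add_constant(np.column_stack([w, code]))
fit = sm.Logit(true[:40_000], X[:40_000]).fit(disp=0)   # model built where truth is known (linkage)
p_hat = fit.predict(X)                                  # predicted case probability, all records

z = rng.normal(size=n) + 0.5 * true                     # a factor truly associated with case status
print("true difference: ", round(float(z[true == 1].mean() - z[true == 0].mean()), 3))
print("via billing code:", round(float(z[code == 1].mean() - z[code == 0].mean()), 3))
print("via probability: ", round(float(np.average(z, weights=p_hat) - np.average(z, weights=1 - p_hat)), 3))
```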
Subjects
Cystectomy, Urinary Bladder Neoplasms, Humans, Hospitalization, Probability, Bias, Databases, Factual, Urinary Bladder Neoplasms/surgery
ABSTRACT
Higher-order evidence is evidence about evidence. Epidemiologic examples of higher-order evidence include settings where the study data constitute the first-order evidence and estimates of misclassification (e.g., sensitivity, specificity) of a binary exposure or outcome collected in the main study comprise the second-order evidence. While sampling variability in higher-order evidence is typically acknowledged, higher-order evidence is often assumed to be free of measurement error (e.g., treated as a gold standard measure). Here we provide two examples, each with multiple scenarios in which second-order evidence is imperfectly measured, and show that this measurement error can either amplify or attenuate standard corrections to first-order evidence. We propose a way to account for such imperfections that requires third-order evidence. Further illustration and exploration of how higher-order evidence impacts the results of epidemiologic studies is warranted.
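The standard correction in question can be made concrete with the Rogan-Gladen estimator, which adjusts an observed prevalence using assumed sensitivity and specificity. The sketch below (ours, with illustrative numbers) shows how error in the second-order evidence, here the assumed sensitivity, either amplifies or attenuates the correction applied to the first-order estimate:

```python
def rogan_gladen(p_obs, se, sp):
    """Corrected prevalence from observed prevalence and assumed sensitivity/specificity."""
    return (p_obs + sp - 1) / (se + sp - 1)

p_obs = 0.20
print(rogan_gladen(p_obs, se=0.90, sp=0.95))   # ~0.176: correction using the true Se/Sp
print(rogan_gladen(p_obs, se=0.96, sp=0.95))   # ~0.165: overstated Se amplifies the correction
print(rogan_gladen(p_obs, se=0.84, sp=0.95))   # ~0.190: understated Se attenuates it
```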
Subjects
Bias, Humans, Sensitivity and Specificity
ABSTRACT
Mortality statistics are critical for determining the burden of disease. Certain causes of death are prone to being misclassified on death certificates. This poses a serious risk for public health and safety, as accurate death certificates form the basis for mortality statistics, which in turn are crucial for research, funding allocation, and health interventions. This study uses generalised estimating equations and regression modelling to investigate which cause-of-death categories suicide and accident deaths are misclassified into. National mortality statistics and autopsy rates from North America and Europe covering the past forty years were analysed to determine the associations between the different causes of death in cross-sectional and longitudinal models. We find that suicides and accident deaths are frequently misclassified as each other. We also find that suicides are frequently misclassified as drug use disorder deaths, in contrast to accident deaths, which are not. Furthermore, suicides do not appear to be misclassified as undetermined or ill-defined deaths. The frequency of misclassification shows that the quality of death certificates should be improved, and autopsies could be used systematically to control the quality of death certificates.
ABSTRACT
OBJECTIVE: Phenotypic misclassification in genetic association analyses can impact the accuracy of polygenic risk score (PRS)-based prediction models. The bias reduction method proposed by Tong et al. (2019) reduces the effects of bias on the estimation of genotype-phenotype association parameters while minimizing variance, by employing chart reviews on a subset of the data to validate phenotypes; however, whether it improves subsequent PRS prediction accuracy remains unclear. Our study aims to fill this gap by assessing the performance of simulated PRS models and estimating the optimal number of chart reviews needed for validation. METHODS: To comprehensively assess the efficacy of the bias reduction method proposed by Tong et al. in enhancing the accuracy of PRS-based prediction models, we simulated each phenotype under three correlation structures (an independent model, a weakly correlated model, and a strongly correlated model) and introduced error-prone phenotypes using two distinct error mechanisms (differential and non-differential phenotyping errors). To facilitate this, we used genotype and phenotype data from 12 case-control datasets in the Alzheimer's Disease Genetics Consortium (ADGC) to produce simulated phenotypes. The evaluation included analyses across various misclassification rates of the original phenotypes as well as validation set sizes. Additionally, we determined the median threshold, that is, the minimal validation size required for a meaningful improvement in the accuracy of PRS-based predictions, across a broad spectrum of settings. RESULTS: This simulation study demonstrated that incorporating chart review does not universally guarantee enhanced performance of PRS-based prediction models. Specifically, in scenarios with minimal misclassification rates and limited validation sizes, PRS models using debiased regression coefficients demonstrated inferior predictive capability compared with models using error-prone phenotypes. Put differently, the effectiveness of the bias reduction method is contingent upon the misclassification rates of the phenotypes and the size of the validation set employed during chart review. Notably, for datasets with higher misclassification rates, the advantages of the bias reduction method become more evident, and a smaller validation set suffices to achieve better performance. CONCLUSION: This study highlights the importance of choosing an appropriate validation set size to balance the effort of chart review against the gain in PRS prediction accuracy. Consequently, our study establishes valuable guidance for validation planning across a diverse array of sensitivity and specificity combinations.
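The mechanism at issue, phenotype misclassification degrading the association coefficients that feed a PRS, can be seen in a small end-to-end simulation (ours; the generative model, error rate, and sample sizes are all assumptions, and no bias reduction step is implemented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 20_000, 100
G = rng.binomial(2, 0.3, size=(n, m)).astype(float)       # allele counts
beta = rng.normal(0, 0.15, m)                             # true per-SNP effects
liability = G @ beta
y = (liability + rng.normal(0, 2.0, n) > np.quantile(liability, 0.7)).astype(float)
flip = rng.random(n) < 0.25                               # 25% non-differential phenotype error
y_err = np.where(flip, 1 - y, y)

train, test = slice(0, 10_000), slice(10_000, None)

def marginal_betas(y_train):
    Gc = G[train] - G[train].mean(axis=0)
    yc = y_train - y_train.mean()
    return Gc.T @ yc / (Gc ** 2).sum(axis=0)              # per-SNP least-squares slopes

for label, y_ in [("true phenotypes      ", y[train]), ("error-prone phenotypes", y_err[train])]:
    prs = G[test] @ marginal_betas(y_)
    r = np.corrcoef(prs, liability[test])[0, 1]           # predictive accuracy vs genetic liability
    print(label, "PRS accuracy r =", round(float(r), 3))  # typically lower with error-prone phenotypes
```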
Subjects
Alzheimer Disease, Phenotype, Humans, Alzheimer Disease/genetics, Genotype, Case-Control Studies, Genetic Predisposition to Disease, Genetic Association Studies/methods, Genome-Wide Association Study/methods, Polymorphism, Single Nucleotide, Computer Simulation, Models, Genetic
ABSTRACT
BACKGROUND: Observational designs can complement evidence from randomized controlled trials, not only in situations where randomization is not feasible but also by evaluating drug effects in the real world, considering a broader spectrum of users and clinical scenarios. However, the use of such real-world scenarios captured in routinely collected clinical or administrative data also comes with specific challenges. Unlike in trials, medication use is not protocol-based. Instead, exposure is determined by a multitude of factors involving patients, providers, healthcare access, and other policies. Accurate measurement of medication exposure relies on a similarly broad set of factors which, if not understood and appropriately addressed, can lead to exposure misclassification and bias. AIM: To describe core considerations for the measurement of medication exposure in routinely collected healthcare data. METHODS: We describe the strengths and weaknesses of the two main types of routinely collected healthcare data (electronic health records and administrative claims) used in pharmacoepidemiologic research. We introduce key elements of those data sources and issues in the curation process that should be considered when developing exposure definitions. We present challenges in exposure measurement, such as the appropriate determination of exposure time windows or the delineation of concomitant medication use versus switching of therapy, and the related implications for bias. RESULTS: We note that true exposure patterns are typically unknown when using routinely collected healthcare data and that an in-depth understanding of healthcare delivery, patient and provider decision-making, data documentation and governance, and pharmacology is needed to ensure unbiased approaches to measuring exposure. CONCLUSIONS: Various assumptions are made with the goal that the chosen exposure definition approximates true exposure. However, the possibility of exposure misclassification remains, and sensitivity analyses that test the impact of such assumptions on the robustness of estimated medication effects are necessary to support causal inferences.
Subjects
Pharmacoepidemiology, Research Design, Humans, Pharmacoepidemiology/methods, Causality, Delivery of Health Care, Bias
ABSTRACT
For reliable clinical decisions in transfusion medicine, assessing the performance of qualitative tests performed in medical laboratories is critical. When false results are reported, they can lead to adverse reactions to blood components. Good performance assessment practices are essential in this setting, yet they remain one of the many unmet high-priority challenges in the area. This paper aims to provide an overview of current trends in the field. The IFCC-IUPAC qualitative vocabulary was reviewed, with particular focus on the evaluation protocols CLSI EP12-A3 and Eurachem AQA, as well as the European Union Regulation for class D in vitro diagnostic medical devices. There is consistency between the current protocols and recognized performance assessment principles, which are mandatory in transfusion service laboratories. We believe that a revised imprecision interval approach and models based on emerging qualitative test types may prove beneficial in the long run. It is also important to emphasize the uncertainty of proportions to mitigate the risk of misclassification.
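On the closing point about the uncertainty of proportions: performance measures for qualitative tests (e.g., percent positive agreement) are proportions, and reporting them with a score-type confidence interval is one standard way to convey that uncertainty. A minimal sketch (ours; the counts are hypothetical):

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score confidence interval for a proportion such as percent agreement."""
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return center - half, center + half

# e.g., 98/100 positive agreement: the point estimate alone (98%) hides how wide this is
print(wilson_interval(98, 100))   # roughly (0.930, 0.995)
```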