1.
Ann Lab Med; 44(5): 385-391, 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-38835211

ABSTRACT

Patient-based real-time QC (PBRTQC) uses patient-derived data to assess assay performance. PBRTQC algorithms have advanced in parallel with developments in computer science and the increasing availability of more powerful computers. The uptake of artificial intelligence in PBRTQC has been rapid, with many stated advantages over conventional approaches; until this review, however, there had been no critical comparison of them. The PBRTQC algorithms based on moving averages, regression-adjusted real-time QC, neural networks and anomaly detection are described and contrasted. As artificial intelligence tools become more available to laboratories, more user-friendly and more computationally efficient, their major disadvantages, such as complexity and the need for high computing resources, are reduced, making them increasingly attractive for PBRTQC applications.
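
As a point of reference for the moving-average family discussed above, the following is a minimal Python sketch of a truncated moving-average PBRTQC scheme; the analyte, truncation range, window size and control limits are illustrative placeholders, not settings from the review.

```python
import numpy as np

def pbrtqc_moving_average(results, window=50, trunc=(2.0, 6.0), limits=(4.0, 4.4)):
    """Flag windows whose moving average of truncated patient results drifts
    outside predefined control limits. Window, truncation range and limits
    are illustrative placeholders, not validated settings."""
    lo, hi = trunc
    kept = [x for x in results if lo <= x <= hi]        # truncation limits outlier influence
    flags = []
    for i in range(window, len(kept) + 1):
        block_mean = np.mean(kept[i - window:i])         # simple (unweighted) moving average
        flags.append(not (limits[0] <= block_mean <= limits[1]))
    return flags

# Example: potassium-like results centred on 4.2 mmol/L, then a +0.3 mmol/L shift
rng = np.random.default_rng(7)
stream = np.concatenate([rng.normal(4.2, 0.4, 1000), rng.normal(4.5, 0.4, 500)])
print(sum(pbrtqc_moving_average(stream)), "windows flagged")
```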


Subjects
Algorithms, Quality Control, Humans, Neural Networks (Computer), Artificial Intelligence, Clinical Laboratories/standards
2.
Am J Clin Pathol; 161(1): 4-8, 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-37769333

ABSTRACT

OBJECTIVES: An increase in analytical imprecision and/or the introduction of bias can affect the interpretation of quantitative laboratory results. In this study, we explored the impact of varying assay imprecision and introduced bias on the classification of patients against fixed thresholds. METHODS: Simple spreadsheets (Microsoft Excel) were constructed to simulate conditions of assay deterioration, expressed as coefficient of variation and bias (in percentages). The impact on patient classification was explored using fixed interpretative limits. Combined matrices of imprecision and bias of 0%, 2%, 4%, 6%, 8%, and 10% (tool 1) and of 0%, 2%, 5%, 10%, 15%, and 20% (tool 2) were simulated. The percentage of patients reclassified after the addition of simulated imprecision and bias was summarized and presented in tables and graphs. RESULTS: The percentage of patients reclassified increased with increasing magnitude of imprecision and bias. The impact of imprecision lessened with increasing bias, such that at high biases, bias became the dominant cause of reclassification. CONCLUSIONS: The spreadsheet tools, available as Supplemental Material, allow laboratories to visualize the impact of additional analytical imprecision and bias on the classification of their patients when applied to locally extracted historical results.
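
A minimal Python sketch of the same idea as the spreadsheet tools (not the tools themselves): add a simulated CV and bias to historical results and count how many cross a fixed decision threshold. The distribution and threshold below are hypothetical.

```python
import numpy as np

def reclassification_rate(results, cv, bias, threshold, rng=None):
    """Fraction of results that cross a fixed decision threshold after adding
    simulated imprecision (CV, as a fraction) and proportional bias (fraction)."""
    if rng is None:
        rng = np.random.default_rng(0)
    results = np.asarray(results, dtype=float)
    degraded = results * (1 + bias) + rng.normal(0, cv * results)
    return np.mean((results >= threshold) != (degraded >= threshold))

# Illustrative example: cholesterol-like values against a 5.0 mmol/L cut-off
rng = np.random.default_rng(1)
historical = rng.normal(5.2, 1.0, 10_000)
for cv in (0.02, 0.05, 0.10):
    for bias in (0.0, 0.05, 0.10):
        rate = reclassification_rate(historical, cv, bias, 5.0)
        print(f"CV {cv:.0%}, bias {bias:+.0%}: {rate:.1%} reclassified")
```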


Subjects
Bias, Patients, Humans, Laboratories, Patients/classification
4.
Adv Clin Chem; 115: 175-203, 2023.
Article in English | MEDLINE | ID: mdl-37673520

ABSTRACT

Delta check is an electronic error detection tool. It compares the difference between sequential results within a patient against a predefined limit; when the limit is exceeded, the delta check rule is considered triggered and the patient's results should be withheld for review and troubleshooting before being released to the clinical team for patient management. Delta check was initially developed to detect wrong-blood-in-tube (sample misidentification) errors and is now applied to detect errors more broadly across the total testing process. Recent advancements in the theoretical understanding of delta check have allowed more precise application of this tool to achieve the desired clinical performance and operational setup. In this chapter, we review pre-implementation considerations, the foundational concepts of delta check, the process of setting key delta check parameters, performance verification, and the troubleshooting of a delta check flag.
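
A minimal sketch of the core delta check comparison described above; the measurand, limit and mode are illustrative, and a production implementation would also consider the time interval between results.

```python
def delta_check(current, previous, limit, mode="relative"):
    """Return True when the change between sequential results for a patient
    exceeds the delta limit (absolute units, or percent when mode='relative').
    Limit and mode are illustrative, not recommended settings."""
    delta = current - previous
    if mode == "relative":
        delta = 100 * delta / previous
    return abs(delta) > limit

# Example: flag a potassium change of more than 20% between consecutive samples
print(delta_check(5.9, 4.1, limit=20))   # True  -> withhold for review
print(delta_check(4.3, 4.1, limit=20))   # False -> release
```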

5.
Crit Rev Clin Lab Sci; 60(7): 502-517, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37194676

ABSTRACT

Quality control practices in the modern laboratory are the result of significant advances over the many years of the profession. Conventional internal quality control has undergone a philosophical shift from a focus solely on the statistical probability of error identification to more recent thinking on the capability of the measurement procedure (e.g., sigma metrics) and, most recently, the risk of harm to the patient (the probability of patient results being affected by an error, or the number of patient results with unacceptable analytical quality). Nonetheless, conventional internal quality control strategies still face significant limitations that cannot be overcome by statistical advances, such as the lack of proven commutability of the control material with patient samples, the frequency of episodic testing, and operational and financial costs. In contrast, patient-based quality control has seen significant developments, including algorithms that improve the detection of specific errors, parameter optimization approaches, systematic validation protocols, and advanced algorithms that require very low numbers of patient results while retaining sensitive error detection. Patient-based quality control will continue to improve with the development of new algorithms that reduce biological noise and improve analytical error detection. It provides continuous and commutable information about the measurement procedure that cannot easily be replicated by conventional internal quality control. Most importantly, patient-based quality control helps laboratories to better appreciate the clinical impact of the results they produce, bringing them closer to their patients. Laboratories are encouraged to implement patient-based quality control processes to overcome the limitations of conventional internal quality control practices. Regulatory changes that recognize the capability of patient-based quality approaches, as well as advances in laboratory informatics, are required for this tool to be adopted more widely.

6.
Ann Lab Med; 43(5): 408-417, 2023 Sep 01.
Article in English | MEDLINE | ID: mdl-37080741

ABSTRACT

Functional reference limits describe key changes in the physiological relationship between a pair of physiologically related components. Statistically, they can be represented by a significant change in the curvature of a mathematical function or curve (e.g., an observed plateau). The point at which the statistical relationship changes significantly is the point of curvature inflection and can be mathematically modeled from the relationship between the interrelated biomarkers. Conceptually, functional reference limits reside between reference intervals, which describe the statistical boundaries of a single biomarker within the reference population, and clinical decision limits, which are often linked to the risk of morbidity or mortality and set as thresholds. Functional reference limits provide important physiological and pathophysiological insights that can aid laboratory result interpretation. Laboratory professionals are in a unique position to harness data from laboratory information systems to derive clinically relevant values. Increasing research on, and reporting of, functional reference limits in the literature will enhance their contribution to laboratory medicine and widen the evidence base underpinning clinical decision limits, which is currently contributed to almost exclusively by clinical trials. Their inclusion in laboratory reports will extend the contribution of laboratory professionals to clinical care beyond the statistical boundaries of a healthy reference population and pave the way for functional reference limits to be considered in shaping clinical decision limits. This review provides an overview of the concepts related to functional reference limits, clinical examples of their use, and the impetus to include them in laboratory reports.
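
As one possible illustration of locating the point at which the relationship between two related biomarkers changes, the sketch below grid-searches the breakpoint of a two-segment linear fit on hypothetical paired data; this is an assumption-laden simplification, not the modelling approach of the review.

```python
import numpy as np

def segmented_breakpoint(x, y, candidates):
    """Grid-search the breakpoint of a two-segment linear fit; the best-fitting
    breakpoint approximates the curvature-inflection point described above."""
    best_bp, best_sse = None, np.inf
    for bp in candidates:
        left, right = x <= bp, x > bp
        if left.sum() < 3 or right.sum() < 3:
            continue
        sse = 0.0
        for mask in (left, right):
            coef = np.polyfit(x[mask], y[mask], 1)
            sse += np.sum((y[mask] - np.polyval(coef, x[mask])) ** 2)
        if sse < best_sse:
            best_bp, best_sse = bp, sse
    return best_bp

# Hypothetical paired biomarker data: y rises with x and then plateaus near x = 30
rng = np.random.default_rng(3)
x = rng.uniform(5, 100, 400)
y = np.minimum(x, 30) + rng.normal(0, 2, 400)
print("estimated functional limit ~", segmented_breakpoint(x, y, np.arange(10, 90, 1.0)))
```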


Subjects
Clinical Laboratory Techniques, Laboratories, Humans, Reference Values, Biomarkers
7.
Pathology; 55(4): 525-530, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36894352

ABSTRACT

For some measurement procedures, the variability between calibrations can be larger than the variation within a calibration, that is, the CVbetween:CVwithin ratio is large. In this study, we examined the false rejection rate and the probability of bias detection of quality control (QC) rules at varying calibration CVbetween:CVwithin ratios. Historical QC data for six representative routine clinical chemistry serum measurement procedures (calcium, creatinine, aspartate aminotransferase, thyrotrophin, prostate specific antigen and gentamicin) were extracted to derive the CVbetween:CVwithin ratios using analysis of variance. Additionally, the false rejection rate and probability of bias detection of three 'Westgard' QC rules (2:2S, 4:1S, 10X) at varying CVbetween:CVwithin ratios (0.1-10), magnitudes of bias, and numbers of QC events per calibration (5-80) were examined through simulation modelling. The CVbetween:CVwithin ratios for the six routine measurement procedures ranged from 1.1 to 34.5. At ratios >3, false rejection rates were generally above 10%, and for QC rules involving a greater number of consecutive results, false rejection rates increased further with increasing ratios, while all rules achieved maximum bias detection. Laboratories should avoid the 2:2S, 4:1S and 10X QC rules when the calibration CVbetween:CVwithin ratio is elevated, particularly for measurement procedures with a higher number of QC events per calibration.
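
A minimal sketch of deriving the CVbetween:CVwithin ratio from QC results grouped by calibration episode using one-way ANOVA variance components; the simulated data and equal group sizes are simplifying assumptions.

```python
import numpy as np

def cv_between_within(groups):
    """One-way ANOVA variance components for QC results grouped by calibration
    episode (one array per calibration; assumes equal group sizes)."""
    n = len(groups[0])
    grand_mean = np.mean(np.concatenate(groups))
    ms_within = np.mean([np.var(g, ddof=1) for g in groups])
    ms_between = n * np.var([np.mean(g) for g in groups], ddof=1)
    var_between = max((ms_between - ms_within) / n, 0.0)
    cv_within = 100 * np.sqrt(ms_within) / grand_mean
    cv_between = 100 * np.sqrt(var_between) / grand_mean
    return cv_between, cv_within

# Simulated QC: 20 calibration episodes, 10 QC results each, with a shift per calibration
rng = np.random.default_rng(5)
groups = [rng.normal(100 + rng.normal(0, 3), 1.5, 10) for _ in range(20)]
cv_b, cv_w = cv_between_within(groups)
print(f"CV_between {cv_b:.1f}%, CV_within {cv_w:.1f}%, ratio {cv_b / cv_w:.1f}")
```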


Assuntos
Laboratórios , Antígeno Prostático Específico , Masculino , Humanos , Calibragem , Controle de Qualidade , Viés
8.
Clin Biochem; 114: 86-94, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36822348

ABSTRACT

OBJECTIVE: This simulation study was undertaken to assess the statistical performance of six commonly used rejection criteria for bias detection. METHODS: The false rejection rate (i.e., rejection in the absence of simulated bias) and the probability of bias detection were assessed for the following criteria: the difference in measurements for each individual sample pair, the mean of the paired differences, t-statistics (paired t-test), slope <0.9 or >1.1, intercept >50% of the lower limit of the measurement range, and coefficient of determination (R²) >0.95. The linear regressions evaluated were ordinary least squares, weighted least squares and Passing-Bablok regression. A bias detection rate of <50% and a false rejection rate of >10% were considered unacceptable for the purposes of this study. RESULTS: Rejection criteria based on the regression slope, intercept and individual paired differences (10%) had high false rejection rates and/or a low probability of bias detection. T-statistics (α = 0.05) performed best in low range ratio (lowest-to-highest concentration in the measurement range) and low imprecision scenarios. The mean difference (10%) criterion performed better in all other range ratio and imprecision scenarios. Combining the mean difference and paired t-test criteria improves the power of bias detection but carries a higher false rejection rate. CONCLUSIONS: This study provides objective evidence on commonly used rejection criteria to guide laboratories on the experimental design and statistical assessment for bias detection during method evaluation or reagent lot verification.
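
A hedged sketch comparing two of the criteria described (mean paired difference >10% and a paired t-test at alpha = 0.05) under a simulated proportional bias; the sample size, CV and bias are illustrative, not the study's full parameter grid.

```python
import numpy as np
from scipy import stats

def detection_rates(n_samples=20, cv=0.05, bias=0.10, n_sim=5000, seed=0):
    """Probability that a 10% mean-difference criterion and a paired t-test
    (alpha = 0.05) flag a simulated proportional bias between two reagent lots."""
    rng = np.random.default_rng(seed)
    hits_mean = hits_t = 0
    for _ in range(n_sim):
        truth = rng.uniform(1, 100, n_samples)               # uniform concentration range
        lot_a = truth * (1 + rng.normal(0, cv, n_samples))
        lot_b = truth * (1 + bias) * (1 + rng.normal(0, cv, n_samples))
        rel_diff = (lot_b - lot_a) / lot_a
        hits_mean += abs(rel_diff.mean()) > 0.10
        hits_t += stats.ttest_rel(lot_b, lot_a).pvalue < 0.05
    return hits_mean / n_sim, hits_t / n_sim

print("with 10% bias:", detection_rates(bias=0.10))   # probability of bias detection
print("with no bias: ", detection_rates(bias=0.00))   # false rejection rate
```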


Subjects
Research Design, Humans, Computer Simulation, Probability, Bias
9.
Ann Lab Med; 43(1): 5-18, 2023 Jan 01.
Article in English | MEDLINE | ID: mdl-36045052

ABSTRACT

Background: Calibration is a critical component for the reliability, accuracy, and precision of mass spectrometry measurements. Optimal practice in the construction, evaluation, and implementation of a new calibration curve is often underappreciated. This systematic review examined how calibration practices are applied to liquid chromatography-tandem mass spectrometry measurement procedures. Methods: The electronic database PubMed was searched from the date of database inception to April 1, 2022. The search terms used were "calibration," "mass spectrometry," and "regression." Twenty-one articles were identified and included in this review, following evaluation of the titles, abstracts, full text, and reference lists of the search results. Results: The use of matrix-matched calibrators and stable isotope-labeled internal standards helps to mitigate the impact of matrix effects. A higher number of calibration standards or replicate measurements improves the mapping of the detector response and hence the accuracy and precision of the regression model. Constructing a calibration curve with each analytical batch recharacterizes the instrument detector but does not reduce the actual variability. The analytical response and measurand concentrations should be considered when constructing a calibration curve, along with subsequent use of quality controls to confirm assay performance. It is important to assess the linearity of the calibration curve by using actual experimental data and appropriate statistics. The heteroscedasticity of the calibration data should be investigated, and appropriate weighting should be applied during regression modeling. Conclusions: This review provides an outline and guidance for optimal calibration practices in clinical mass spectrometry laboratories.
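
A minimal sketch of one of the practices discussed (weighting the calibration fit when variance grows with concentration), using a 1/x² weighted linear fit and back-calculated recovery on hypothetical calibrators; the weighting scheme and data are assumptions for illustration only.

```python
import numpy as np

def weighted_linear_calibration(conc, response, weight_power=2):
    """Weighted linear fit response = slope*conc + intercept with 1/conc**power
    weights; np.polyfit expects 1/sigma-style weights, hence the square root."""
    conc = np.asarray(conc, dtype=float)
    w = 1.0 / conc ** weight_power
    slope, intercept = np.polyfit(conc, response, 1, w=np.sqrt(w))
    return slope, intercept

# Hypothetical calibrators spanning a wide range with proportional (heteroscedastic) noise
rng = np.random.default_rng(11)
conc = np.array([1, 5, 10, 50, 100, 500, 1000], dtype=float)
resp = 2.0 * conc * (1 + rng.normal(0, 0.03, conc.size)) + 0.5
slope, intercept = weighted_linear_calibration(conc, resp)
back_calc = (resp - intercept) / slope
print("back-calculated error (%):", np.round(100 * (back_calc - conc) / conc, 1))
```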


Subjects
Calibration, Liquid Chromatography/methods, Humans, Mass Spectrometry, Reference Standards, Reproducibility of Results
10.
Clin Chim Acta; 539: 87-89, 2023 Jan 15.
Article in English | MEDLINE | ID: mdl-36513171

ABSTRACT

BACKGROUND: There is uncertainty as to whether an increased frequency of calibration may affect the overall analytical variability of a measurement procedure, as reflected in quality control (QC) performance. In this simulation study, we examined the impact of calibration frequency on the variability of laboratory measurements. METHODS: A 5-point calibration curve was modeled with simulated concentrations ranging from 10 to 10,000 mmol/L and signal intensities with CVs of 3% around the mean, under a Gaussian distribution. Three levels of QC (20, 150, 600 mmol/L) interspersed within the analytical measurement range were also simulated. RESULTS: The CVs of the three QC levels remained stable across the different calibration frequencies simulated (5, 10, 15 and 30 QC measurements per recalibration episode). Imprecision was greatest (18%) at the lowest QC concentration of 20 mmol/L when the calibration curve was derived using ordinary least squares regression, reducing to 3.5% and 3.8% at 150 and 600 mmol/L, respectively. The CVs of all three QC concentrations remained constant at 3.4%, close to the predefined CV (3%), when weighted least squares (WLS) regression was used to derive the calibration model. Similar findings were observed with 2-point calibrations using WLS models over narrower concentration ranges (50 and 100 mmol/L, as well as 50 and 500 mmol/L). DISCUSSION: Within the parameters of this simulation study, an increased frequency of calibration events does not adversely impact the overall analytical performance of a measurement procedure under most circumstances.
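
A simplified sketch of one finding reported above: how refitting the calibration by OLS versus weighted least squares propagates into imprecision at a low QC concentration. The calibrator spacing, slope and noise model are illustrative assumptions, and the resulting CVs depend strongly on them.

```python
import numpy as np

def low_qc_cv(fit="wls", n_recal=500, cal_cv=0.03, seed=2):
    """CV of a low QC (20 mmol/L) across repeated recalibrations when a 5-point
    calibration curve is refit by ordinary (OLS) or weighted (WLS) least squares."""
    rng = np.random.default_rng(seed)
    cal_conc = np.array([10.0, 100.0, 1000.0, 5000.0, 10000.0])   # illustrative spacing
    true_slope, qc_true = 2.0, 20.0
    recovered = []
    for _ in range(n_recal):
        signal = true_slope * cal_conc * (1 + rng.normal(0, cal_cv, cal_conc.size))
        w = np.ones_like(cal_conc) if fit == "ols" else 1.0 / cal_conc   # 1/x -> 1/x^2 weighting
        slope, intercept = np.polyfit(cal_conc, signal, 1, w=w)
        qc_signal = true_slope * qc_true * (1 + rng.normal(0, cal_cv))
        recovered.append((qc_signal - intercept) / slope)
    recovered = np.asarray(recovered)
    return 100 * recovered.std(ddof=1) / recovered.mean()

print(f"OLS calibration: CV at 20 mmol/L ~ {low_qc_cv('ols'):.1f}%")
print(f"WLS calibration: CV at 20 mmol/L ~ {low_qc_cv('wls'):.1f}%")
```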


Subjects
Calibration, Humans, Least-Squares Analysis, Computer Simulation, Quality Control, Uncertainty
12.
Clin Chem Lab Med; 61(5): 769-776, 2023 Apr 25.
Article in English | MEDLINE | ID: mdl-36420533

ABSTRACT

Lot-to-lot verification is an integral component of monitoring the long-term stability of a measurement procedure. The practice is challenged by resource requirements as well as uncertainty surrounding the experimental design and statistical analysis that are optimal for individual laboratories, although guidance is becoming increasingly available. Collaborative verification efforts, as well as the application of patient-based monitoring, are likely to further improve the timely identification of differences in performance. Appropriate follow-up actions for a failed lot-to-lot verification are required and must be balanced against potential disruptions to the clinical services provided by the laboratory. Manufacturers need to increase transparency surrounding release criteria and work more closely with laboratory professionals to ensure that acceptable reagent lots are released to end users. A tripartite collaboration between regulatory bodies, manufacturers, and laboratory medicine professional bodies is key to developing a balanced system in which the regulatory, manufacturing, and clinical requirements of laboratory testing are met, minimizing differences between reagent lots and ensuring patient safety. Clinical Chemistry and Laboratory Medicine has served as a fertile platform for advancing the discussion and practice of lot-to-lot verification over the past 60 years and will continue to be an advocate of this important topic for many years to come.


Subjects
Clinical Chemistry, Diagnostic Reagent Kits, Humans, Quality Control, Laboratories
13.
Am J Clin Pathol; 158(4): 480-487, 2022 Oct 06.
Article in English | MEDLINE | ID: mdl-35849102

ABSTRACT

OBJECTIVES: Automated qualitative serology assays often measure quantitative signals that are compared against a manufacturer-defined cutoff for qualitative (positive/negative) interpretation. The current general practice of assessing serology assay performance by overall qualitative concordance may not detect analytical shift or drift that could affect disease classification. METHODS: We describe an approach to defining bias specifications for qualitative serology assays that considers minimum positive predictive values (PPVs) and negative predictive values (NPVs). Desirable minimum PPVs and NPVs for a given disease prevalence are projected as equi-PPV and equi-NPV lines into the receiver operating characteristic curve space of coronavirus disease 2019 serology assays, and the boundaries define the allowable area of performance (AAP). RESULTS: More stringent predictive values produce smaller AAPs. When higher NPVs are required, there is less tolerance for negative bias; conversely, when higher PPVs are required, there is less tolerance for positive bias. As prevalence increases, so too does the allowable positive bias, whereas the allowable negative bias decreases. The bias specification may be asymmetric for the positive and negative directions and should be method specific. CONCLUSIONS: The described approach allows bias specifications to be set in a way that considers clinical requirements for qualitative assays that measure signal intensity (e.g., serology and polymerase chain reaction).
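
A worked sketch of the underlying arithmetic: for a chosen prevalence, the equi-PPV and equi-NPV boundaries can be expressed as the minimum sensitivity required at each false-positive rate. The prevalence and predictive-value floors below are hypothetical.

```python
import numpy as np

def equi_ppv_sensitivity(fpr, ppv, prevalence):
    """Minimum sensitivity at a given false-positive rate (1 - specificity)
    that keeps PPV at the stated floor: an equi-PPV line in ROC space."""
    return ppv * (1 - prevalence) * fpr / (prevalence * (1 - ppv))

def equi_npv_sensitivity(fpr, npv, prevalence):
    """Minimum sensitivity at a given false-positive rate that keeps NPV at
    the stated floor: an equi-NPV line in ROC space."""
    specificity = 1 - fpr
    return 1 - specificity * (1 - prevalence) * (1 - npv) / (npv * prevalence)

# Illustrative boundaries of an allowable area for PPV >= 0.90 and NPV >= 0.99 at 10% prevalence
fpr = np.linspace(0.0, 0.10, 6)
print("FPR:           ", np.round(fpr, 3))
print("equi-PPV bound:", np.round(equi_ppv_sensitivity(fpr, 0.90, 0.10), 3))
print("equi-NPV bound:", np.round(equi_npv_sensitivity(fpr, 0.99, 0.10), 3))
```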


Subjects
COVID-19, Bias, COVID-19/diagnosis, COVID-19 Testing, Humans, Polymerase Chain Reaction, Predictive Value of Tests
14.
Clin Chim Acta; 534: 29-34, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35810798

ABSTRACT

BACKGROUND: We investigated the simulated impact of varying sample size and replicate number on ordinary least squares (OLS) and Deming regression (DR), in both weighted and unweighted forms, when applied to paired measurements in lot-to-lot verification. METHODS: The simulation parameters investigated were: range ratio, analytical coefficient of variation, sample size, number of replicates, alpha (level of significance), and constant and proportional biases. For each simulation scenario, 10,000 iterations were performed and the average probability of bias detection was determined. RESULTS: In general, the weighted forms of regression significantly outperformed the unweighted forms for bias detection. At the low range ratio (1:10), for both weighted OLS and weighted DR, bias detection improved more by increasing the number of replicates than by increasing the number of comparison samples. At the high range ratio (1:1000), for both weighted OLS and weighted DR, increasing the number of replicates above two was only slightly advantageous in the scenarios examined, whereas increasing the number of comparison samples resulted in better detection of smaller biases between reagent lots. CONCLUSIONS: The results of this study allow laboratories to determine a tailored approach to lot-to-lot verification studies, balancing the number of replicates and comparison samples against the analytical performance of the measurement procedures involved.
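
For orientation, a minimal sketch of unweighted Deming regression applied to a simulated lot-to-lot comparison is shown below; the weighted forms examined in the study are not implemented here, and the error-variance ratio, bias and CV are illustrative.

```python
import numpy as np

def deming_fit(x, y, error_ratio=1.0):
    """Unweighted Deming regression (measurement error in both lots);
    error_ratio is the assumed ratio of y- to x-error variance (1 = equal)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    d = syy - error_ratio * sxx
    slope = (d + np.sqrt(d * d + 4 * error_ratio * sxy * sxy)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

# Illustrative comparison: candidate lot reads ~5% high, 3% CV noise, 1:1000 range ratio
rng = np.random.default_rng(9)
truth = rng.uniform(1, 1000, 40)
current_lot = truth * (1 + rng.normal(0, 0.03, truth.size))
candidate_lot = truth * 1.05 * (1 + rng.normal(0, 0.03, truth.size))
slope, intercept = deming_fit(current_lot, candidate_lot)
print(f"slope {slope:.3f}, intercept {intercept:.2f}")
```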


Subjects
Laboratories, Humans, Indicators and Reagents, Least-Squares Analysis, Linear Models, Sample Size
15.
Clin Chem Lab Med; 60(8): 1164-1174, 2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35647783

ABSTRACT

OBJECTIVES: One approach to assessing reference material (RM) commutability and agreement with clinical samples (CS) is to use ordinary least squares or Deming regression with prediction intervals. This approach assumes constant variance, an assumption that may not be fulfilled by the measurement procedures. Flexible regression frameworks that relax this assumption, such as quantile regression or generalized additive models for location, scale, and shape (GAMLSS), have recently been implemented and can model the change in variance with measurand concentration. METHODS: We simulated four imprecision profiles, ranging from simple constant variance to complex mixtures of constant and proportional variance, and examined the effects on commutability assessment outcomes for the four regression frameworks above while varying the number of CS, the data transformation, and the RM location relative to the CS concentrations. Regression framework performance was determined by the proportion of false rejections of commutability, based on prediction intervals or centiles across relative RM concentrations, and was compared with the expected nominal probability coverage. RESULTS: For simple variance profiles (constant or proportional variance), Deming regression, without or with logarithmic transformation respectively, is the most efficient approach. For mixed variance profiles, GAMLSS with smoothing techniques is more appropriate, with consideration given to increasing the number of CS and the relative location of the RM. Where the analytical coefficient of variation profile is U-shaped, even the more flexible regression frameworks may not be entirely suitable. CONCLUSIONS: In commutability assessments, the variance profiles of the measurement procedures and the location of the RM relative to the clinical sample concentrations significantly influence the false rejection rate of commutability.
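
GAMLSS is not readily available in Python, so the sketch below illustrates only the quantile-regression member of the flexible frameworks mentioned: fit the 2.5th and 97.5th centiles of the clinical-sample relationship and check whether a hypothetical reference material result falls inside the band. All data and the commutability criterion are illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated clinical samples measured by two procedures with proportional noise
rng = np.random.default_rng(13)
truth = rng.uniform(1, 100, 200)
df = pd.DataFrame({
    "x": truth * (1 + rng.normal(0, 0.04, truth.size)),
    "y": truth * (1 + rng.normal(0, 0.04, truth.size)),
})

# Quantile regression for the 2.5th and 97.5th centiles of y given x
lower = smf.quantreg("y ~ x", df).fit(q=0.025)
upper = smf.quantreg("y ~ x", df).fit(q=0.975)

# A hypothetical reference material result is taken as commutable if it falls
# between the fitted centiles at its concentration on procedure x
rm_x, rm_y = 50.0, 52.0
lo = lower.predict(pd.DataFrame({"x": [rm_x]})).iloc[0]
hi = upper.predict(pd.DataFrame({"x": [rm_x]})).iloc[0]
print(f"centile band at x = 50: [{lo:.1f}, {hi:.1f}] -> commutable: {lo <= rm_y <= hi}")
```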


Subjects
Reference Standards, Humans
16.
Clin Chem Lab Med; 60(8): 1175-1185, 2022 Jul 26.
Article in English | MEDLINE | ID: mdl-35576605

ABSTRACT

OBJECTIVES: Detection of between-lot reagent bias is clinically important and can be assessed by applying regression-based statistics to several paired measurements obtained with the existing and the new candidate lot. Here, the bias detection capabilities of six regression-based lot-to-lot reagent verification assessments, including an extension of the Bland-Altman with regression approach, are compared. METHODS: Least squares and Deming regression (in both weighted and unweighted forms), confidence ellipses, and Bland-Altman with regression (BA-R) approaches were investigated. The numerical simulation included permutations of the following parameters: result range ratio (upper:lower measurement limit), level of significance (alpha), constant and proportional bias, analytical coefficient of variation (CV), and number of replicates and sample size. The simulated sample concentrations were drawn from a uniformly distributed concentration range. RESULTS: At a low range ratio (1:10, CV 3%), the BA-R performed best, albeit with a higher false rejection rate, closely followed by the weighted regression approaches. At larger range ratios (1:1,000, CV 3%), the BA-R performed poorly and the weighted regression approaches performed best. At higher assay imprecision (CV 10%), all six approaches performed poorly, with bias detection rates <50%. A lower alpha reduced the false rejection rate, while greater sample numbers and more replicates improved bias detection. CONCLUSIONS: When performing reagent lot verification, laboratories need to finely balance the false rejection rate (by selecting an appropriate alpha) against the power of bias detection (an appropriate statistical approach matched to the assay performance characteristics) and operational considerations (the number of clinical samples and replicates, and whether an alternative reagent lot is available).
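
A simplified reading of the BA-R idea in Python: regress relative paired differences on the pair means and inspect the fitted bias, with its confidence band, at the extremes of the measuring range. The acceptance logic, data and range are assumptions for illustration.

```python
import numpy as np
import statsmodels.api as sm

def ba_regression_bias(x, y):
    """Fit relative paired differences against pair means (a Bland-Altman plot
    with a regression line) and return the fit plus the 95% CI of the mean
    relative bias at the extremes of the observed range."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mean = (x + y) / 2
    rel_diff = 100 * (y - x) / mean
    model = sm.OLS(rel_diff, sm.add_constant(mean)).fit()
    grid = sm.add_constant(np.array([mean.min(), mean.max()]))
    ci = model.get_prediction(grid).conf_int(alpha=0.05)
    return model.params, ci

# Illustrative lot comparison: candidate lot ~4% high with 3% CV noise
rng = np.random.default_rng(21)
truth = rng.uniform(1, 100, 40)
lot_a = truth * (1 + rng.normal(0, 0.03, truth.size))
lot_b = truth * 1.04 * (1 + rng.normal(0, 0.03, truth.size))
params, ci = ba_regression_bias(lot_a, lot_b)
print("fitted % bias (95% CI) at range extremes:\n", np.round(ci, 1))
```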


Subjects
Laboratories, Bias, Computer Simulation, Humans, Indicators and Reagents
17.
Ann Lab Med; 42(5): 597-601, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35470278

ABSTRACT

This study describes an objective approach to deriving the clinical performance of autoverification rules to inform laboratory practice when implementing them. Anonymized historical laboratory data for 12 biochemistry measurands were collected and Box-Cox-transformed to approximate a Gaussian distribution. The historical laboratory data were assumed to be error-free. Using probability theory, the clinical specificity of a set of autoverification limits can be derived by calculating percentile values of the overall distribution of a measurand; the 5th and 95th percentile values of the laboratory data were calculated to achieve a 90% clinical specificity. Next, a predefined tolerable total error adopted from the Royal College of Pathologists of Australasia Quality Assurance Program was applied to the extracted data before Box-Cox transformation. Using the standard normal distribution, the clinical sensitivity can be derived from the one-tailed probability of the Z-value falling beyond the autoverification limit (multiplied by two for a two-tailed probability). The clinical sensitivity showed an inverse relationship with between-subject biological variation. With this approach, a laboratory can set and assess the clinical performance of its autoverification rules so that they conform to its desired risk profile.
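
A minimal sketch of the workflow described, on simulated data: Box-Cox transform the historical results, take the 5th and 95th percentiles as autoverification limits, then estimate the one-tailed sensitivity for a positive error equal to an assumed tolerable total error. The distribution and the 15% TEa are illustrative, not values from the study.

```python
import numpy as np
from scipy import stats, special

# Hypothetical historical results for one measurand (assumed error-free)
rng = np.random.default_rng(17)
historical = rng.lognormal(mean=1.0, sigma=0.4, size=20_000)

# Box-Cox transform to approximate a Gaussian; the 5th and 95th percentiles of the
# transformed data back-transform to autoverification limits (~90% clinical specificity)
transformed, lam = stats.boxcox(historical)
lo_t, hi_t = np.percentile(transformed, [5, 95])
lo, hi = special.inv_boxcox(lo_t, lam), special.inv_boxcox(hi_t, lam)
print(f"autoverification limits: {lo:.2f} - {hi:.2f}")

# Clinical sensitivity for a positive error equal to an assumed 15% TEa: transform the
# shifted results with the same lambda and take the one-tailed probability of exceeding
# the upper autoverification limit under a normal approximation
shifted_t = stats.boxcox(historical * 1.15, lmbda=lam)
z = (hi_t - shifted_t.mean()) / transformed.std(ddof=1)
print(f"one-tailed sensitivity for a +15% error: {1 - stats.norm.cdf(z):.1%}")
```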


Subjects
Laboratories, Humans
19.
Clin Biochem; 98: 63-69, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34534518

ABSTRACT

INTRODUCTION: Internal quality control (IQC) is traditionally interpreted against predefined control limits using multi-rules, or 'Westgard rules', which include the commonly used 1:3s and 2:2s rules. Either individually or in combination, these rules have limited sensitivity for the detection of systematic errors. In this proof-of-concept study, we directly compared the performance of three moving average algorithms with Westgard rules for the detection of systematic error. METHODS: In this simulation study, 'error-free' IQC data (control case) were generated. Westgard rules (1:3s and 2:2s) and three moving average algorithms (simple moving average (SMA), weighted moving average (WMA), and exponentially weighted moving average (EWMA), all using ±3SD as control limits) were applied to examine the false positive rates. Systematic errors were then introduced into the baseline IQC data to evaluate the probability of error detection and the average number of episodes for error detection (ANEed). RESULTS: The power function graphs showed that, compared with Westgard rules, all three moving average algorithms had a better probability of error detection and a lower ANEed. False positive rates were comparable between the moving average algorithms and Westgard rules (all <0.5%). The performance of the SMA algorithm was comparable to that of the weighted forms (i.e., WMA and EWMA). CONCLUSION: Application of an SMA algorithm to IQC data improves systematic error detection compared with Westgard rules and can simplify laboratories' IQC strategies.
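
A minimal sketch contrasting the 1:3s/2:2s rules with a simple moving average on simulated z-scored IQC data. Note one assumption: the SMA control limit here is scaled for the standard error of the window mean, which is a simplification rather than the exact limit definition used in the study.

```python
import numpy as np

def westgard_flags(z):
    """Evaluate the 1:3s and 2:2s rules on z-scored IQC results."""
    r13s = np.abs(z) > 3
    r22s = np.zeros_like(r13s)
    r22s[1:] = ((z[1:] > 2) & (z[:-1] > 2)) | ((z[1:] < -2) & (z[:-1] < -2))
    return r13s | r22s

def sma_flags(z, window=10, limit=3.0):
    """Simple moving average of z-scores against +/-(limit / sqrt(window)) control
    limits, i.e. limits scaled to the SD of a mean of `window` independent results."""
    flags = np.zeros(len(z), dtype=bool)
    for i in range(window - 1, len(z)):
        flags[i] = abs(z[i - window + 1:i + 1].mean()) > limit / np.sqrt(window)
    return flags

# Simulated IQC z-scores with a 1 SD systematic shift introduced half-way through
rng = np.random.default_rng(23)
z = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
print("Westgard flags after the shift:", int(westgard_flags(z)[200:].sum()))
print("SMA flags after the shift:     ", int(sma_flags(z)[200:].sum()))
```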


Subjects
Algorithms, Laboratories, Theoretical Models, Programming Languages, Quality Control, Humans
20.
Am J Clin Pathol; 156(6): 1058-1067, 2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34111241

ABSTRACT

OBJECTIVES: We examined the false acceptance rate (FAR) and false rejection rate (FRR) of various precision verification experimental designs. METHODS: Analysis of variance was applied to derive the subcomponents of imprecision (i.e., repeatability, between-run and between-day imprecision) for complex matrix experimental designs (day × run × replicate; day × run). For simple nonmatrix designs (1 day × multiple replicates, or multiple days × 1 replicate), ordinary standard deviations were calculated. The FAR and FRR in these different scenarios were estimated. RESULTS: The FRR increased as more samples were included in the precision experiment. The application of an upper verification limit, which seeks to cap the FRR at 5% across multiple experiments, significantly increased the FAR. The FRR decreased as the observed imprecision increased relative to the claimed imprecision and when a greater number of days, runs, or replicates was included in the verification design. Increasing the number of days, runs, or replicates also reduced the FAR for between-day imprecision and repeatability. CONCLUSIONS: The design of verification experiments should incorporate the local availability of resources and analytical expertise. The largest imprecision component should be targeted with the greater number of measurements. Consideration should be given to both the FAR and the FRR when committing a platform to service.
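
A hedged sketch of a day × replicate verification: estimate repeatability and between-day variance components by ANOVA and compare the observed total SD with a chi-square-based upper verification limit. The degrees of freedom and limit are simplifying approximations, and the data and claimed SD are hypothetical.

```python
import numpy as np
from scipy import stats

def precision_verification(data, claimed_sd, alpha=0.05):
    """Variance components from a day x replicate design (rows = days) and a
    one-sided upper verification limit for the total SD, using an approximate
    chi-square scaling of the claimed SD. A simplification, not a CLSI procedure."""
    k, n = data.shape
    ms_within = np.mean(np.var(data, axis=1, ddof=1))         # repeatability mean square
    ms_between = n * np.var(data.mean(axis=1), ddof=1)        # between-day mean square
    var_between = max((ms_between - ms_within) / n, 0.0)
    total_sd = np.sqrt(ms_within + var_between)
    dof = k * n - 1                                           # approximate df for the total SD
    uvl = claimed_sd * np.sqrt(stats.chi2.ppf(1 - alpha, dof) / dof)
    return total_sd, uvl, total_sd <= uvl

# Illustrative 5-day x 5-replicate verification against a claimed total SD
rng = np.random.default_rng(29)
data = rng.normal(0, 1.0, (5, 5)) + rng.normal(0, 0.5, (5, 1))   # adds a between-day effect
sd, uvl, ok = precision_verification(data, claimed_sd=np.sqrt(1.0**2 + 0.5**2))
print(f"observed total SD {sd:.2f}, upper verification limit {uvl:.2f}, pass: {ok}")
```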


Subjects
Research Design, Humans, Pathology