Results 1 - 20 of 45
1.
Cureus ; 16(3): e57243, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38559530

ABSTRACT

The accuracy of diagnostic results in clinical laboratory testing is paramount for informed healthcare decisions and effective patient care. While the focus has traditionally been on the analytical phase, attention has shifted towards optimizing the preanalytical phase due to its significant contribution to total laboratory errors. This review highlights preanalytical errors, their sources, and control measures to improve the quality of laboratory testing. Blood sample quality is a critical concern, with factors such as hemolysis, lipemia, and icterus leading to erroneous results. Sources of preanalytical errors encompass inappropriate test requests, patient preparation lapses, and errors during sample collection, handling, and transportation. Measures to mitigate these errors include harmonization efforts, education and training programs, automated methods for sample quality assessment, and quality monitoring. Collaboration between laboratory personnel and healthcare professionals is crucial for implementing and sustaining these measures to enhance the accuracy and reliability of diagnostic results, ultimately improving patient care.

2.
Cureus ; 16(2): e53393, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38435196

ABSTRACT

Diverse errors occur in a pathology laboratory, and manual mistakes are the most common. Various advancements aim to replace manual procedures with digitized automation techniques, and guidelines and protocols are available to run a standard pathology laboratory. But even with such attempts to reinforce and strengthen the protocols, the complete elimination of errors is not yet possible. Root cause analysis (RCA) is the best way forward to develop an error-free laboratory. In this review, the importance of RCA, the common errors taking place in laboratories, the methods to carry out RCA, and its effectiveness are discussed in detail. The review also highlights the potential of RCA to provide long-term quality improvement and efficient laboratory management.

3.
Res Vet Sci ; 171: 105203, 2024 May.
Article in English | MEDLINE | ID: mdl-38432158

ABSTRACT

Although haemolysis is the most common source of preanalytical error in clinical laboratories, its influence on cattle biochemistry remains poorly understood. The effect of haemolysis and its clinical relevance were investigated in 70 samples in which haemolysis was artificially induced (by spiking with increasing amounts of haemolysate, yielding 0.0%, 0.2%, 0.5%, 1.0%, 2.5%, 5.0% and 10% haemolysis degree (HD)), focusing on key parameters for bovine metabolic health assessment, including albumin, alkaline phosphatase (ALP), aspartate aminotransferase (AST), blood urea nitrogen (BUN), calcium (Ca), cholesterol, creatinine, creatine kinase (CK), gamma-glutamyl transferase (GGT), globulins, magnesium (Mg), phosphorus (P), total bilirubin (TBIL) and total proteins (TP). Preanalytical haemolysis significantly affected most (8 of 14) of the biochemical parameters analysed, leading to significant increases in concentrations of albumin (starting at 5% HD), cholesterol (at 5% HD) and P (at 10% HD) and to significant decreases in Ca (at 2.5% HD), creatinine (at 5% HD), globulins (at 10% HD), TBIL (at 2.5% HD) and TP (at 10% HD). Comparison of the present and previous data indicated that, for each parameter, the HD required to produce significant bias and the clinical relevance of over- and underestimation are variable and appear to depend on the analytical technique used. Therefore, different laboratories should evaluate the influence of haemolysis in their analytical results and provide advice to clinicians accordingly. Affected parameters should be interpreted together with clinical signs and other analytical data to minimize misinterpretations (false or masked variations). Finally, due to the significant impact on numerous parameters and the limited potential for correction, we recommend rejection of samples with >10% HD.


Subjects
Cattle Diseases, Globulins, Cattle, Animals, Hemolysis, Creatinine, Cholesterol, Calcium, Albumins
4.
Ann Clin Biochem ; : 45632241226916, 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38172080

ABSTRACT

BACKGROUND: Clinical laboratories frequently implement the same tests and internal quality control (QC) rules on identical instruments. It is unclear whether individual QC targets for each analyser or ones that are common to all instruments are preferable. This study modelled how common QC targets influence assay error detection before examining their effect on real-world data. METHODS: The effect of variable bias and imprecision on error detection and false rejection rates when using common or individual QC targets on two instruments was simulated. QC data from tests run on two identical Beckman instruments (6-month period, same QC lot, n > 100 points for each instrument) determined likely real-world consequences. RESULTS: Compared to individual QC targets, common targets had an asymmetrical effect on systematic error detection, with one instrument assay losing detection power more than the other gained. If individual in-control assay standard deviations (SDs) differed, then common targets led to one assay failing QC more frequently. Applied to two analysers (95 QC levels and 45 tests), common targets reduced one instrument's error detection by ≥ 0.4 sigma on 15/45 (33%) of tests. Such targets also meant 14/45 (31%) of assays on one in-control instrument would fail over twice as frequently as the other (median ratio 1.62, IQR 1.20-2.39) using a 2SD rule. CONCLUSIONS: Compared to instrument-specific QC targets, common targets can reduce the probability of detecting changes in individual assay performance and cause one in-control assay to fail QC more frequently than another. Any impact on clinical care requires further investigation.
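The asymmetry the abstract describes can be illustrated with a small calculation. This is a minimal sketch, not the authors' model: it assumes two instruments whose in-control SDs differ, a shared QC target SD taken as their average, and a simple ±2SD rejection rule; all numbers are hypothetical.

```python
from scipy.stats import norm

def reject_prob_2sd(instrument_sd, target_sd, bias=0.0):
    """Probability that a single QC result falls outside mean +/- 2*target_sd
    on an instrument running with the given in-control SD and systematic bias."""
    upper = (2 * target_sd - bias) / instrument_sd
    lower = (-2 * target_sd - bias) / instrument_sd
    return norm.cdf(lower) + norm.sf(upper)

# Two "identical" analysers; instrument B runs at 1.5x the in-control SD of A.
sd_a, sd_b = 1.0, 1.5
common_sd = (sd_a + sd_b) / 2   # one shared QC target SD for both instruments

# False-rejection rates for in-control instruments under the common target:
fr_a = reject_prob_2sd(sd_a, common_sd)  # tighter instrument fails rarely
fr_b = reject_prob_2sd(sd_b, common_sd)  # noisier instrument fails far more often

# Detection of a systematic error equal to 2x each instrument's own SD:
det_a = reject_prob_2sd(sd_a, common_sd, bias=2 * sd_a)
det_b = reject_prob_2sd(sd_b, common_sd, bias=2 * sd_b)
print(fr_a, fr_b, det_a, det_b)
```

With instrument-specific targets, both instruments would detect this shift with probability ~0.5; under the common target the tighter instrument drops to ~0.31 while the noisier one rises only to ~0.63, mirroring the asymmetric loss reported.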

5.
Adv Clin Chem ; 115: 175-203, 2023.
Article in English | MEDLINE | ID: mdl-37673520

ABSTRACT

Delta check is an electronic error detection tool. It compares the difference between sequential results within a patient against a predefined limit; when the limit is exceeded, the delta check rule is considered triggered. The patient's results should then be withheld for review and troubleshooting before release to the clinical team for patient management. Delta check was initially developed as a tool to detect wrong-blood-in-tube (sample misidentification) errors. It is now applied to detect errors more broadly within the total testing process. Recent advancements in the theoretical understanding of delta check have allowed for more precise application of this tool to achieve the desired clinical performance and operational setup. In this chapter, we review the different pre-implementation considerations, the foundational concepts of delta check, the process of setting up key delta check parameters, performance verification, and troubleshooting of a delta check flag.
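The basic comparison described above can be sketched as follows. This is a generic illustration, not the chapter's implementation; the analyte and the 7 mmol/L limit are hypothetical.

```python
def delta_check(previous, current, limit, mode="absolute"):
    """Flag a result pair whose difference exceeds a predefined delta limit.
    mode='absolute' compares |current - previous|;
    mode='relative' compares the percent change from the previous result."""
    if mode == "absolute":
        delta = abs(current - previous)
    else:
        delta = abs(current - previous) / abs(previous) * 100
    return delta > limit  # True -> withhold the result for review

# Sodium (mmol/L) with a hypothetical absolute delta limit of 7 mmol/L:
print(delta_check(140, 122, limit=7))   # large swing -> flagged
print(delta_check(140, 138, limit=7))   # small change -> passes
```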

6.
Vet Clin North Am Small Anim Pract ; 53(1): 1-16, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36270839

ABSTRACT

Quality assurance and the implementation of a quality management system are as important for veterinary in-clinic laboratories as for reference laboratories. Elements of a quality management system include the formulation of a quality plan, establishment of quality goals, a health and safety policy, trained personnel, appropriate and well-maintained facilities and equipment, standard operating procedures, and participation in external quality assurance programs. Quality assurance principles should be applied to the preanalytical, analytical, and postanalytical phases of the in-clinic laboratory cycle to ensure that results are accurate and reliable and are released in a timely manner.


Subjects
Veterinary Hospitals, Laboratories, Animals, Quality Control
7.
Ann Lab Med ; 42(5): 597-601, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-35470278

ABSTRACT

This study describes an objective approach to deriving the clinical performance of autoverification rules to inform laboratory practice when implementing them. Anonymized historical laboratory data for 12 biochemistry measurands were collected and Box-Cox-transformed to approximate a Gaussian distribution. The historical laboratory data were assumed to be error-free. Using probability theory, the clinical specificity of a set of autoverification limits can be derived by calculating the percentile values of the overall distribution of a measurand. The 5th and 95th percentile values of the laboratory data were calculated to achieve a 90% clinical specificity. Next, a predefined tolerable total error adopted from the Royal College of Pathologists of Australasia Quality Assurance Program was applied to the extracted data before subjecting it to Box-Cox transformation. Using a standard normal distribution, the clinical sensitivity can be derived from the probability of the Z-value to the right of the autoverification limit for a one-tailed probability, multiplied by two for a two-tailed probability. The clinical sensitivity showed an inverse relationship with between-subject biological variation. In this way, a laboratory can set and assess the clinical performance of autoverification rules that conform to its desired risk profile.
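The percentile-and-Z-value approach described above can be sketched as below, under stated assumptions: synthetic log-normal data stand in for real error-free results, the 20% tolerable total error is hypothetical, and SciPy's Box-Cox routine is used rather than the authors' exact procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical error-free historical results for one measurand;
# a log-normal shape is common for clinical chemistry data.
results = rng.lognormal(mean=1.5, sigma=0.3, size=10_000)

# Box-Cox transform toward a Gaussian, then take the 5th and 95th
# percentiles as autoverification limits -> ~90% clinical specificity
# (about 90% of error-free results are auto-released).
transformed, lam = stats.boxcox(results)
lo, hi = np.percentile(transformed, [5, 95])

# Clinical sensitivity for results carrying a positive bias equal to a
# hypothetical 20% tolerable total error (TEa), via the Z-value of the
# upper limit within the shifted distribution (one-tailed):
tea = 0.20
shifted = stats.boxcox(results * (1 + tea), lmbda=lam)
z = (hi - shifted.mean()) / shifted.std(ddof=1)
sensitivity = stats.norm.sf(z)
print(round(sensitivity, 3))
```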


Subjects
Laboratories, Humans
8.
Clin Chim Acta ; 523: 26-30, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34480952

ABSTRACT

BACKGROUND: There have been few reports regarding the frequency and types of preanalytical errors in stat laboratories, in particular those occurring in the satellite laboratory setting. The impact of this error type on laboratory performance in this environment is largely unknown. We assessed the performance of a stat laboratory serving a population of predominantly elderly patients with suspected or established diagnoses of cancer using Six Sigma methodology and compared the results to previous work on this subject. METHODS: We performed an observational retrospective study using data from the period 2013-2020. The clinical setting was a satellite laboratory supporting an outpatient medical clinic. The type and frequency of each type of preanalytical error were compiled and were used to derive the quarterly error rate. Overall and quarterly performance were calculated using Six Sigma methodology. RESULTS: During the study period 1314 preanalytical errors were identified from 247,271 laboratory tests (0.5% of total test volume). There was a steady decrease in the error rate over the course of the study period, ranging from 1.4% in 2013 to 0.14% in 2020, despite a 290% increase in quarterly test volume during this period. The most common error types encountered were order error, hemolysis, collection error, and lab accident. CONCLUSION: 1) The overall performance of a satellite laboratory with a stat testing menu is comparable to hospital-based laboratory stat testing. 2) The most frequent error types encountered in satellite laboratory stat testing differ from those found in hospital-based laboratories. 3) There was an overall improvement in laboratory performance based on Six Sigma methodology.
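As a rough illustration of the Six Sigma calculation used in such studies (a conventional formulation, not necessarily the authors' exact method), the sigma metric can be derived from an observed error rate with the customary 1.5-sigma long-term shift:

```python
from scipy.stats import norm

def sigma_metric(errors, opportunities, shift=1.5):
    """Six Sigma process metric from an observed error rate,
    using the conventional 1.5-sigma long-term shift."""
    dpmo = errors / opportunities * 1e6  # defects per million opportunities
    return norm.isf(dpmo / 1e6) + shift

# Figures from the abstract: 1314 preanalytical errors in 247,271 tests.
print(round(sigma_metric(1314, 247_271), 2))
```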


Subjects
Hospital Laboratories, Total Quality Management, Aged, Clinical Laboratory Techniques, Diagnostic Errors, Hemolysis, Humans, Retrospective Studies
9.
Medicina (Kaunas) ; 57(5)2021 May 11.
Article in English | MEDLINE | ID: mdl-34065022

ABSTRACT

Background and Objectives: Risk management is considered an integral part of laboratory medicine to assure laboratory quality and patient safety. However, the concept of risk management is philosophical, so actually performing risk management in a clinical laboratory can be challenging. We therefore aimed to develop a sustainable, practical system for continuous total laboratory risk management. Materials and Methods: This study was composed of two phases: the development phase in 2019 and the application phase in 2020. A concept flow diagram for the computerized risk registry and management tool (RRMT) was designed using the failure mode and effects analysis (FMEA) and the failure reporting, analysis, and corrective action system (FRACAS) methods. The failure stages were divided into six according to the testing sequence. We applied laboratory errors to this system over one year in 2020. The risk priority number (RPN) score was calculated by multiplying the severity of the failure mode, the frequency (or probability) of occurrence, and the detection difficulty. Results: In total, 103 cases were reported to the RRMT during the one-year period. Among them, 32 cases (31.1%) were summarized using the FMEA method, and the remaining 71 cases (68.9%) were evaluated using the FRACAS method. There was no failure in the patient registration phase. Chemistry units accounted for the highest proportion of failures with 18 cases (17.5%), while urine test units accounted for the lowest with two cases (1.9%). Conclusion: We developed and applied a practical computerized risk-management tool based on the FMEA and FRACAS methods for the entire testing process. The RRMT was useful for detecting, evaluating, and reporting failures. This system might serve as a model of a risk management system optimized for clinical laboratories.
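The RPN calculation the abstract describes (severity × occurrence × detection) is straightforward to sketch; the 1-10 scales and the example failure mode and scores below are hypothetical, not taken from the study.

```python
def risk_priority_number(severity, occurrence, detection):
    """RPN = severity x occurrence x detection, each scored on a 1-10 scale
    (a higher detection score means the failure is harder to detect,
    per the usual FMEA convention)."""
    for score in (severity, occurrence, detection):
        if not 1 <= score <= 10:
            raise ValueError("each FMEA score is expected on a 1-10 scale")
    return severity * occurrence * detection

# Hypothetical failure mode: mislabelled chemistry sample.
print(risk_priority_number(severity=8, occurrence=3, detection=4))  # -> 96
```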


Subjects
Patient Safety, Risk Management, Humans, Registries, Risk Assessment
10.
Pract Lab Med ; 23: e00196, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33385053

ABSTRACT

INTRODUCTION: Interpretation of a thromboelastography (TEG) curve involves correlating the patient's clinical profile with the TEG parameters and the tracing, keeping in mind the potential sources of error, and hence requires expertise. Given the paucity of literature in this regard, we aimed to analyse the analytical errors in TEG interpretation. MATERIAL AND METHODS: This retrospective study was conducted in an apex trauma center in North India. Five months of data were reviewed by two laboratory physicians, with differences resolved by consensus. Cases with pre-analytical errors, missing data and TEG runs lasting <10 min were excluded. The analytical errors were classified as: preventable, potentially preventable, non-preventable, and non-preventable but where care could have been improved. RESULTS: Of 440 TEG tracings reviewed, 70 were excluded. An analytical error was present in 60/370 (16.2%) tracings. There were six types of analytical errors, of which tracings of severe hypocoagulable states showing k-time = 0 (33.3%) were the commonest, followed by tracings with spikes at irregular intervals (30%). Of all the analytical errors, 29/60 (48.2%) were preventable and 5/60 (8.3%) were potentially preventable. CONCLUSION: Analytical variables that lead to errors in TEG interpretation were identified in about one-sixth of the cases, and almost half of them were preventable. Awareness of the common errors amongst clinicians and laboratory physicians is critical to prevent treatment delay and safeguard patient safety.

11.
Crit Rev Clin Lab Sci ; 58(1): 49-59, 2021 01.
Article in English | MEDLINE | ID: mdl-32795201

ABSTRACT

Delta checks are a post-analytical verification tool that compares the difference in sequential laboratory results belonging to the same patient against a predefined limit. This unique quality tool highlights a potential error at the individual patient level. A difference in sequential laboratory results that exceeds the predefined limit is considered likely to contain an error requiring further investigation, which can be time- and resource-intensive. This may delay the provision of the result to the healthcare provider or entail recollection of the patient sample. Delta checks have been used primarily to detect sample misidentification (sample mix-up, wrong blood in tube), and recent advancements in laboratory medicine, including the adoption of protocolized procedures, information technology and automation in the total testing process, have significantly reduced the prevalence of such errors. As such, delta check rules need to be selected carefully to balance the clinical risk of these errors against the need to maintain operational efficiency. Historically, delta check rules have been set by professional opinion based on reference change values (biological variation) or the published literature. Delta check rules implemented in this manner may not inform laboratory practitioners of their real-world performance. This review discusses several evidence-based approaches to the optimal setting of delta check rules that directly inform the laboratory practitioner of the error detection capabilities of the selected rules. Subsequent verification of the workflow for the selected delta check rules is also discussed. This review is intended to provide practical assistance to laboratories in setting evidence-based delta check rules that best suit their local operational and clinical needs.


Subjects
Laboratories, Humans, Quality Control, Reference Values
12.
Biochem Med (Zagreb) ; 31(1): 010705, 2021 Feb 15.
Article in English | MEDLINE | ID: mdl-33380892

ABSTRACT

INTRODUCTION: To interpret test results correctly, an understanding of the variations that affect them is essential. The aims of this study were: 1) to evaluate clinicians' knowledge and opinions concerning biological variation (BV), and 2) to investigate whether clinicians use BV in the interpretation of test results. MATERIALS AND METHODS: This study used a questionnaire comprising open-ended and close-ended questions. Questions were based on real-life numerical examples of test result interpretation, knowledge of the main sources of variation in laboratories, and clinicians' opinions on BV. A total of 399 clinicians were interviewed, and the answers were evaluated using a scoring system ranked from A (the clinician has the highest level of knowledge and the ability to use BV data) to D (the clinician has no knowledge about variation in the laboratory). The results were presented as number (N) and percentage (%). RESULTS: Altogether, 60.4% of clinicians had knowledge of pre-analytical and analytical variation, but only 3.5% had knowledge related to BV. The number of clinicians using BV data or the reference change value (RCV) to interpret measurement results was zero, while 79.4% of clinicians accepted that the difference between two measurement results located within the reference interval may be significant. CONCLUSIONS: Clinicians do not use BV data or tools derived from BV, such as the RCV, to interpret test results. It is recommended that BV be included in the medical school curriculum, and clinicians should be encouraged to use BV data for safe and valid interpretation of test results.


Subjects
Clinical Laboratory Techniques, Medical Laboratory Science, Humans, Reference Values, Reproducibility of Results
13.
Clin Biochem ; 80: 42-47, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32247779

ABSTRACT

OBJECTIVES: The performance of delta check rules has been considered to be dependent on the biological variation characteristics of the analyte of interest. The assumed relationships have not been formally studied. The mathematical relationship between biological variation and delta check rules is explored in this study. DESIGN AND METHODS: From the mathematical model for absolute difference delta check, the thresholds for specificity and sensitivity are observed to be normalized differently. For specificity, the threshold is normalized by the within-subject biological variation (expressed as a coefficient of variation, CVi), whereas for sensitivity the threshold is normalized by the between-subject biological variation (expressed as a coefficient of variation, CVg). This highlights the different roles the two biological variations play in affecting the absolute difference distribution for correct and switched patient samples. Analogous to absolute difference delta checks, for relative difference delta checks, the expressions for specificity and sensitivity are scaled by CVi and CVg, respectively. However, the expressions are independent of µg (the average of the population). RESULTS: A comparison between the mathematical model and empirical/historical laboratory data obtained from patients was conducted for both absolute and relative difference delta checks. In general, it was found that the specificity obtained from the historical laboratory data was less than the model-predicted values, while on the other hand, good correspondence was obtained between the experimental sensitivity and predicted sensitivity. CONCLUSIONS: The difference in within-subject biological variation in different patients may contribute to the observed discrepancy in predicted and empirical delta check performance.
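The distinct roles of CVi and CVg can be illustrated with a small Monte Carlo sketch (not the authors' model): correct pairs differ only through within-subject variation, while switched pairs also mix in between-subject variation. The CVs, the population mean, and the use of a simple RCV-style limit that ignores analytical variation are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
mu, cv_i, cv_g = 100.0, 0.05, 0.15  # hypothetical population mean and CVs

# Each patient has a homeostatic set point (between-subject variation);
# each measurement adds within-subject variation around that set point.
setpoints = rng.normal(mu, mu * cv_g, n)
first  = setpoints + rng.normal(0, mu * cv_i, n)
second = setpoints + rng.normal(0, mu * cv_i, n)

# RCV-style absolute delta limit, ignoring analytical variation:
limit = 2.77 * mu * cv_i

# Correct pairs: the delta reflects only within-subject variation.
specificity = np.mean(np.abs(second - first) <= limit)

# Switched samples: the second result belongs to a random other patient,
# so the delta distribution is widened by between-subject variation too.
switched = rng.permutation(second)
sensitivity = np.mean(np.abs(switched - first) > limit)
print(round(specificity, 3), round(sensitivity, 3))
```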


Subjects
Biological Variation, Population, Clinical Chemistry Tests, Quality Control, Humans, Laboratories, Reference Values
14.
Ann Clin Biochem ; 57(3): 215-222, 2020 05.
Article in English | MEDLINE | ID: mdl-31955587

ABSTRACT

OBJECTIVES: The interpretation of delta check rules in a panel of tests should differ from that at the single-analyte level, as the number of hypothesis tests conducted (i.e. the number of delta check rules) is greater and needs to be taken into account. METHODS: De-identified paediatric laboratory results were extracted, and the first two serial results for each patient were used for analysis. Analytes were grouped into four common laboratory test panels consisting of renal, liver, bone and full blood count panels. The sensitivities and specificities of delta check limits as discrete panel tests were assessed by random permutation of the original data-set to simulate a wrong blood in tube situation. RESULTS: Generally, as the number of analytes included in a panel increases, the performance of the delta check rules deteriorates considerably due to the increased number of false positives, i.e. the increased number of hypothesis tests performed. To reduce high false-positive rates, patient results may be rejected from autovalidation only if the number of analytes failing the delta check limits exceeds a certain threshold of the total number of analytes in the panel (N). Our study found that use of the N2 rule for panel results had a specificity >90% and a sensitivity ranging from 25% to 45% across the four common laboratory panels. However, this did not achieve performance close to that of some analytes considered in isolation. CONCLUSIONS: The simple N2 rule reduces the false-positive rate and minimizes unnecessary, resource-intensive investigations of potentially erroneous results.
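A threshold rule of the kind described, flagging the panel only when more than a set number of analytes fail their delta limits, can be sketched as below. This does not reproduce the paper's exact N2 threshold; the analytes, delta limits, and `max_failures` value are hypothetical.

```python
def panel_delta_flag(previous, current, limits, max_failures):
    """Flag a panel only when the count of analytes exceeding their
    absolute delta check limits is greater than max_failures
    (a threshold rule in the spirit of the abstract; threshold hypothetical)."""
    failures = sum(
        abs(curr - prev) > limit
        for prev, curr, limit in zip(previous, current, limits)
    )
    return failures > max_failures

# Hypothetical 4-analyte renal panel with hypothetical absolute delta limits:
prev = [140, 4.0, 100, 60]      # Na, K, Cl, eGFR
curr = [128, 6.2, 100, 59]      # two analytes moved sharply
limits = [7, 1.0, 8, 15]
print(panel_delta_flag(prev, curr, limits, max_failures=1))  # -> True
```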


Subjects
Clinical Chemistry Tests, Data Accuracy, Quality Control, Child, Humans, Hospital Laboratories, Pediatrics, Specimen Handling
15.
ADMET DMPK ; 8(3): 215-250, 2020.
Article in English | MEDLINE | ID: mdl-35300305

ABSTRACT

We describe three machine learning models submitted to the 2019 Solubility Challenge. All are founded on tree-like classifiers, with one model being based on Random Forest and another on the related Extra Trees algorithm. The third model is a consensus predictor combining the former two with a Bagging classifier. We call this consensus classifier Vox Machinarum, and here discuss how it benefits from the Wisdom of Crowds. On the first 2019 Solubility Challenge test set of 100 low-variance intrinsic aqueous solubilities, Extra Trees is our best classifier. On the other, a high-variance set of 32 molecules, we find that Vox Machinarum and Random Forest both perform a little better than Extra Trees, and almost equally to one another. We also compare the gold standard solubilities from the 2019 Solubility Challenge with a set of literature-based solubilities for most of the same compounds.

17.
Clin Chem Lab Med ; 58(3): 384-389, 2020 02 25.
Article in English | MEDLINE | ID: mdl-31734649

ABSTRACT

BACKGROUND: The delta check time interval limit is the maximum time window within which two sequential results of a patient will be evaluated by the delta check rule. The impact of the time interval on delta check performance is not well studied. METHODS: De-identified historical laboratory data were extracted from the laboratory information system and divided into children (≤18 years) and adults (>21 years). The relative and absolute differences of the original pair of results from each patient were compared against the delta check limits associated with 90% specificity. The data were then randomly reshuffled to simulate a switched (misidentified) sample scenario. The data were divided into 1-day, 3-day, 7-day, 14-day, 1-month, 3-month, 6-month and 1-year time interval bins. The true-positive and false-positive rates at the different intervals were examined. RESULTS: Overall, 24 biochemical and 20 haematological tests were analysed. For nearly all the analytes, there was no statistical evidence of any difference in the true- or false-positive rates of the delta check rules at different time intervals when compared to the overall data. The only exceptions were mean corpuscular volume (using both relative- and absolute-difference delta checks) and mean corpuscular haemoglobin (only the absolute-difference delta check) in the children population, where the false-positive rates became significantly lower at the 1-year interval. CONCLUSIONS: This study showed that there is no optimal delta check time interval. This fills an important evidence gap for future guidance development.


Subjects
Data Analysis, Research Design, Clinical Laboratory Techniques, Humans, Time Factors
20.
Int J Health Care Qual Assur ; 32(1): 84-86, 2019 Feb 11.
Article in English | MEDLINE | ID: mdl-30859881

ABSTRACT

PURPOSE: With recent advances in laboratory hematology automation, the emphasis is now on quality assurance processes, as they are indispensable for generating reliable and accurate test results. It is therefore imperative to acquire efficient measures for recognizing laboratory malfunctions and errors to improve patient safety. The paper aims to discuss these issues. DESIGN/METHODOLOGY/APPROACH: The moving average algorithm is a quality control process that monitors analyzer performance from historical records on a continuous basis; it does not require additional expenditure and can serve as additional support to the laboratory quality control program. FINDINGS: The authors describe an important quality assurance tool that can be easily applied in any laboratory setting, especially in cost-constrained areas where running commercial controls throughout every shift may not be a feasible option. ORIGINALITY/VALUE: The authors focus on clinical laboratory quality control measures for providing reliable test results. The moving average appears to be a reasonable and applicable choice for vigilantly monitoring each result.
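A patient-based moving average monitor of the kind described can be sketched as follows; the window size, target mean, control limit, and the haemoglobin values are hypothetical and would in practice be tuned per analyte.

```python
from collections import deque

class MovingAverageMonitor:
    """Patient-based moving average QC: track the mean of the last `window`
    patient results and alarm when it drifts outside hypothetical control
    limits around the in-control target mean."""
    def __init__(self, window, target_mean, allowed_drift):
        self.buffer = deque(maxlen=window)
        self.target = target_mean
        self.allowed = allowed_drift

    def add(self, result):
        self.buffer.append(result)
        if len(self.buffer) < self.buffer.maxlen:
            return None  # not enough results yet to form a full window
        mean = sum(self.buffer) / len(self.buffer)
        return abs(mean - self.target) > self.allowed  # True -> investigate

# Hypothetical haemoglobin (g/dL) stream drifting upward mid-series:
mon = MovingAverageMonitor(window=5, target_mean=14.0, allowed_drift=0.5)
for hgb in [13.8, 14.1, 14.0, 13.9, 14.2, 15.1, 15.3, 15.0, 15.2, 15.4]:
    alarm = mon.add(hgb)
print(alarm)  # final window mean has drifted from the target -> True
```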


Subjects
Algorithms, Clinical Laboratory Services/organization & administration, Medical Errors/prevention & control, Quality Assurance, Health Care/organization & administration, Developing Countries, Female, Humans, Laboratories/standards, Male, Needs Assessment, Pakistan, Patient Safety, Quality Control