Results 1 - 20 of 55
1.
Clin Chem ; 70(2): 444-452, 2024 02 07.
Article in English | MEDLINE | ID: mdl-38084963

ABSTRACT

BACKGROUND: Intravenous (IV) fluid contamination is a common cause of preanalytical error that can delay or misguide treatment decisions, leading to patient harm. Current approaches for detecting contamination rely on delta checks, which require a prior result, or manual technologist intervention, which is inefficient and vulnerable to human error. Supervised machine learning may provide a means to detect contamination, but its implementation is hindered by its reliance on expert-labeled training data. An automated approach that is accurate, reproducible, and practical is needed. METHODS: A total of 25 747 291 basic metabolic panel (BMP) results from 312 721 patients were obtained from the laboratory information system (LIS). A Uniform Manifold Approximation and Projection (UMAP) model was trained and tested using a combination of real patient data and simulated IV fluid contamination. To provide an objective metric for classification, an "enrichment score" was derived and its performance assessed. Our current workflow was compared to UMAP predictions using expert chart review. RESULTS: UMAP embeddings from real patient results demonstrated outliers suspicious for IV fluid contamination when compared with the simulated contamination's embeddings. At a flag rate of 3 per 1000 results, the positive predictive value (PPV) was adjudicated to be 0.78 from 100 consecutive positive predictions. Of these, 58 were previously undetected by our current clinical workflows, with 49 BMPs displaying a total of 56 critical results. CONCLUSIONS: Accurate and automatable detection of IV fluid contamination in BMP results is achievable without curating expertly labeled training data.


Subject(s)
Unsupervised Machine Learning , Humans , Predictive Value of Tests , Workflow
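The simulation strategy above — generating labeled contamination examples by mixing real patient results with known IV fluid compositions — can be sketched as follows. The fluid compositions, analyte names, and linear mixing model are illustrative textbook assumptions, not the authors' published parameters:

```python
# Sketch: simulate IV fluid contamination of a basic metabolic panel (BMP)
# by linear volume mixing. Fluid compositions are approximate textbook
# values (electrolytes in mmol/L, glucose in mg/dL) and are assumptions.
IV_FLUIDS = {
    "normal_saline":    {"Na": 154, "K": 0.0, "Cl": 154, "CO2": 0.0, "glucose": 0},
    "D5W":              {"Na": 0,   "K": 0.0, "Cl": 0,   "CO2": 0.0, "glucose": 5000},
    "lactated_ringers": {"Na": 130, "K": 4.0, "Cl": 109, "CO2": 0.0, "glucose": 0},
}

def simulate_contamination(bmp: dict, fluid: str, fraction: float) -> dict:
    """Mix a real BMP result with an IV fluid at the given volume fraction."""
    if not 0.0 <= fraction <= 1.0:
        raise ValueError("fraction must be in [0, 1]")
    comp = IV_FLUIDS[fluid]
    return {
        analyte: (1 - fraction) * value + fraction * comp.get(analyte, 0.0)
        for analyte, value in bmp.items()
    }

patient = {"Na": 140, "K": 4.2, "Cl": 102, "CO2": 24, "glucose": 95}
contaminated = simulate_contamination(patient, "normal_saline", 0.5)
# Sodium and chloride are pulled toward the saline composition; potassium,
# bicarbonate, and glucose are diluted toward zero.
```

In the paper's pipeline, results simulated this way would be embedded alongside real results by UMAP, with real specimens falling near the simulated-contamination region flagged as suspicious.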
2.
Clin Chem ; 2023 May 06.
Article in English | MEDLINE | ID: mdl-37147848

ABSTRACT

BACKGROUND: Serum free light chain (sFLC) assays are interpreted using a sFLC-ratio-based reference interval (manufacturer's interval) that was defined using a cohort of healthy patients. However, renal impairment elevates the sFLC-ratio, leading to a high false positive rate when using the manufacturer's interval. Prior studies have developed renal-specific reference intervals; however, this approach has not been widely adopted due to practical limitations. Thus, there remains a critical need for a renally robust sFLC interpretation method. METHODS: Retrospective data mining was used to define patient cohorts that reflect the spectrum of renal function seen in clinical practice. Two new reference intervals, one based on the sFLC-ratio and one based on a novel principal component analysis (PCA)-based metric, were developed for the FREELITE assay (Binding Site) on the Roche Cobas c501 instrument (Roche). RESULTS: Compared to the manufacturer's reference interval, both new methods exhibited significantly lower false positive rates and greater robustness to renal function while maintaining equivalent sensitivity for monoclonal gammopathy (MG) diagnosis. While not significantly different, the point estimate for sensitivity was highest for the PCA-based approach. CONCLUSION: Renally robust sFLC interpretation using a single reference interval is possible given a reference cohort that reflects the variation in renal function observed in practice. Further studies are needed to achieve sufficient power and determine if the novel PCA-based metric offers superior sensitivity for MG diagnosis. These new methods offer the practical advantages of not requiring an estimated glomerular filtration rate result or multiple reference intervals, thereby lowering practical barriers to implementation.
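The abstract does not specify the PCA-based metric, but one plausible reading — scoring a specimen by its projection onto the minor principal component of a reference cohort in log(kappa)/log(lambda) space, so that ratio-like deviations from the cohort's main axis stand out — can be sketched with a hand-rolled 2x2 eigen-decomposition. The log-space choice and all names are assumptions:

```python
import math

def pca_flc_score(kappa_lambda_pairs, query):
    """Project a (kappa, lambda) pair onto the minor principal component of a
    reference cohort, in log space. Larger |score| = greater deviation from
    the cohort's main axis of variation. A guess at the paper's
    'PCA-based metric', for illustration only."""
    logs = [(math.log(k), math.log(l)) for k, l in kappa_lambda_pairs]
    n = len(logs)
    mx = sum(x for x, _ in logs) / n
    my = sum(y for _, y in logs) / n
    # 2x2 covariance matrix of the log-transformed cohort
    sxx = sum((x - mx) ** 2 for x, _ in logs) / n
    syy = sum((y - my) ** 2 for _, y in logs) / n
    sxy = sum((x - mx) * (y - my) for x, y in logs) / n
    # Angle of the major principal axis of [[sxx, sxy], [sxy, syy]]
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)
    ux, uy = -math.sin(theta), math.cos(theta)  # unit vector, minor axis
    qx, qy = math.log(query[0]) - mx, math.log(query[1]) - my
    return qx * ux + qy * uy
```

A reference interval would then be set on this score (e.g., a central 95% interval over the reference cohort), rather than on the raw sFLC ratio.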

3.
Ann Diagn Pathol ; 62: 152076, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36495735

ABSTRACT

OBJECTIVE: To evaluate whether peri-pregnancy timing of a PCR+ test for SARS-CoV-2 RNA affects pregnancy outcomes and placental pathology. METHODS: This is a retrospective cohort study conducted in a tertiary center. Pregnancy outcomes and placental pathology were compiled for women who tested positive for SARS-CoV-2 RNA from a nasopharyngeal swab assessed by RT-PCR. The population comprised four groups that were PCR+ preconception (T0) or in the 1st (T1), 2nd (T2), or 3rd (T3) trimester of pregnancy. A fifth, control group (TC) tested PCR- for SARS-CoV-2 before delivery. RESULTS: Seventy-one pregnancies were studied. The T0 group exhibited the lowest gestational ages at delivery, the lowest infant birth weights, and the highest rate of pregnancy loss before 20 weeks. Features of maternal vascular malperfusion and accelerated villous maturation were prominent findings in the histopathology of placentas from women PCR+ for SARS-CoV-2 RNA, especially in the T0 and T1 groups. CONCLUSION: Women at highest risk for pregnancy complications are those who test PCR+ for viral RNA preconception or during the first trimester of pregnancy.


Subject(s)
COVID-19 , Placenta , Pregnancy Complications, Infectious , Female , Humans , Infant , Pregnancy , COVID-19/pathology , Placenta/pathology , Pregnancy Complications, Infectious/diagnosis , Pregnancy Complications, Infectious/epidemiology , Pregnancy Complications, Infectious/pathology , Pregnancy Outcome , Retrospective Studies , RNA, Viral , SARS-CoV-2
4.
Clin Chem ; 68(3): 402-412, 2022 03 04.
Article in English | MEDLINE | ID: mdl-34871351

ABSTRACT

BACKGROUND: As technology enables new and increasingly complex laboratory tests, test utilization presents a growing challenge for healthcare systems. Clinical decision support (CDS) refers to digital tools that present providers with clinically relevant information and recommendations, which have been shown to improve test utilization. Nevertheless, individual CDS applications often fail, and implementation remains challenging. CONTENT: We review common classes of CDS tools grounded in examples from the literature as well as our own institutional experience. In addition, we present a practical framework and specific recommendations for effective CDS implementation. SUMMARY: CDS encompasses a rich set of tools that have the potential to drive significant improvements in laboratory testing, especially with respect to test utilization. Deploying CDS effectively requires thoughtful design and careful maintenance, and structured processes focused on quality improvement and change management play an important role in achieving these goals.


Subject(s)
Decision Support Systems, Clinical , Delivery of Health Care , Humans , Palliative Care
5.
Transfusion ; 62(7): 1365-1376, 2022 07.
Article in English | MEDLINE | ID: mdl-35748490

ABSTRACT

BACKGROUND: Platelet transfusion carries risk of transfusion-transmitted infection (TTI). Pathogen reduction of platelet components (PRPC) is designed to reduce TTI. Pulmonary adverse events (AEs), including transfusion-related acute lung injury and acute respiratory distress syndrome (ARDS), occur with platelet transfusion. STUDY DESIGN: An open-label, sequential cohort study of transfusion-dependent hematology-oncology patients was conducted to compare the pulmonary safety of PRPC with conventional PC (CPC). The primary outcome was the incidence of treatment-emergent assisted mechanical ventilation (TEAMV), assessed by non-inferiority. Secondary outcomes included: time to TEAMV, ARDS, pulmonary AEs, peri-transfusion AEs, hemorrhagic AEs, transfusion reactions (TRs), PC and red blood cell (RBC) use, and mortality. RESULTS: By modified intent-to-treat (mITT), 1068 patients received 5277 PRPC and 1223 patients received 5487 CPC. The cohorts had similar demographics, primary disease, and primary therapy. PRPC were non-inferior to CPC for TEAMV (treatment difference -1.7%, 95% CI: -3.3% to -0.1%; odds ratio = 0.53, 95% CI: 0.30-0.94). The cumulative incidence of TEAMV for PRPC (2.9%) was significantly less than for CPC (4.6%, p = .039). The incidence of ARDS was lower, but not significantly different, for PRPC (1.0% vs. 1.8%, p = .151; odds ratio = 0.57, 95% CI: 0.27-1.18). AEs, pulmonary AEs, and mortality were not different between cohorts. TRs were similar for PRPC and CPC (8.3% vs. 9.7%, p = .256), and allergic TRs were significantly less frequent with PRPC (p = .006). PC and RBC use were not increased with PRPC. DISCUSSION: PRPC demonstrated reduced TEAMV with no excess treatment-related pulmonary morbidity.


Subject(s)
Respiratory Distress Syndrome , Transfusion Reaction , Blood Platelets , Blood Transfusion , Cohort Studies , Humans , Photosensitizing Agents , Platelet Transfusion/adverse effects , Respiratory Distress Syndrome/etiology , Respiratory Distress Syndrome/therapy , Transfusion Reaction/epidemiology , Transfusion Reaction/etiology
6.
J Biomed Inform ; 117: 103756, 2021 05.
Article in English | MEDLINE | ID: mdl-33766781

ABSTRACT

OBJECTIVE: Clinicians order laboratory tests in an effort to reduce diagnostic or therapeutic uncertainty. Information theory provides the opportunity to quantify the degree to which a test result is expected to reduce diagnostic uncertainty. We sought to apply information theory toward the evaluation and optimization of a diagnostic test threshold and to determine if the results would differ from those of conventional methodologies. We used a heparin/PF4 immunoassay (PF4 ELISA) as a case study. MATERIALS AND METHODS: The laboratory database was queried for PF4 ELISA and serotonin release assay (SRA) results during the study period, with the latter serving as the gold standard for the disease heparin-induced thrombocytopenia (HIT). The optimized diagnostic threshold of the PF4 ELISA test was compared using conventional versus information theoretic approaches under idealized (pretest probability = 50%) and realistic (pretest probability = 2.4%) testing conditions. RESULTS: Under ideal testing conditions, both analyses yielded a similar optimized optical density (OD) threshold of OD > 0.79. Under realistic testing conditions, information theory suggested a higher threshold, OD > 1.5 versus OD > 0.6. Increasing the diagnostic threshold improved the global information value, the value of a positive test, and the noise content, with only a minute change in the negative test value. DISCUSSION: Our information theoretic approach suggested that the current FDA-approved cutoff (OD > 0.4) is overly permissive, leading to loss of test value and injection of noise into an already complex diagnostic dilemma. Because our approach is purely statistical and takes as input data that are readily accessible in the clinical laboratory, it offers a scalable and data-driven strategy for optimizing test value that may be widely applicable in the domain of laboratory medicine.
CONCLUSION: Information theory provides more meaningful measures of test value than the widely used accuracy-based metrics.


Subject(s)
Physicians , Thrombocytopenia , Heparin/adverse effects , Humans , Information Theory , Platelet Factor 4
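The information-theoretic framing above — quantifying a test's value as the expected reduction in diagnostic uncertainty — can be illustrated by computing the mutual information between disease state and a binary test result. A threshold enters through the (sensitivity, specificity) pair it induces. This is a generic sketch; the paper's specific decomposition into positive-test value and noise content is not reproduced:

```python
import math

def mutual_information(pretest: float, sensitivity: float, specificity: float) -> float:
    """Expected reduction in diagnostic uncertainty, in bits, from one binary
    test result: I(D;T) = sum_{d,t} p(d,t) * log2(p(d,t) / (p(d) * p(t)))."""
    p = pretest
    # Joint distribution over (disease state, test result)
    joint = {
        ("D+", "T+"): p * sensitivity,
        ("D+", "T-"): p * (1 - sensitivity),
        ("D-", "T+"): (1 - p) * (1 - specificity),
        ("D-", "T-"): (1 - p) * specificity,
    }
    p_t = {"T+": joint[("D+", "T+")] + joint[("D-", "T+")],
           "T-": joint[("D+", "T-")] + joint[("D-", "T-")]}
    p_d = {"D+": p, "D-": 1 - p}
    mi = 0.0
    for (d, t), pj in joint.items():
        if pj > 0:
            mi += pj * math.log2(pj / (p_d[d] * p_t[t]))
    return mi
```

At a 50% pretest probability a perfect test yields exactly 1 bit, while at a realistic 2.4% pretest probability even an excellent test yields far less, consistent with the abstract's finding that the optimal threshold shifts under realistic conditions.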
7.
J Clin Apher ; 35(1): 41-49, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31713919

ABSTRACT

BACKGROUND: Therapeutic plasma exchange (TPE) utilizes an extracorporeal circuit to remove pathologic proteins causing serious illness. When processing a patient's entire blood volume through an extracorporeal circuit, proteins responsible for maintaining hemostatic system homeostasis can reach critically low levels if replacement fluid types and volumes are not carefully titrated, which may increase complications. METHODS: The charts from 27 patients undergoing 46 TPE procedures were reviewed to evaluate the accuracy of our predictive mathematical model, utilizing the following patient information: weight, hematocrit, pre- and post-TPE factor levels (fibrinogen, n = 46, and antithrombin, n = 23), process volume, volumes of fluids (eg, plasma, albumin, and normal saline) administered during TPE, and adverse events during and after TPE. RESULTS: Altogether, 25% of patients experienced minor adverse events that resolved spontaneously or with management. There were no bleeding or thrombotic complications. The mean difference between predicted and measured post-TPE fibrinogen concentrations was -0.29 mg/dL (SD ±23.0, range -59 to 37), while the mean percent difference between measured and predicted fibrinogen concentrations was 0.94% (SD ±10.8, range -22 to 19). The mean difference between predicted and measured post-TPE antithrombin concentrations was 0.89% activity (SD ±10.0, range -23 to 14), while the mean percent difference between predicted and measured antithrombin concentrations was 3.87% (SD ±14.5, range -25 to 38). CONCLUSIONS: Our model reliably predicts post-TPE fibrinogen and antithrombin concentrations, and may help optimize patient management and attenuate complications.


Subject(s)
Antithrombins/blood , Fibrinogen/analysis , Plasma Exchange/methods , Anticoagulants/therapeutic use , Automation , Hematocrit/methods , Hemorrhage/etiology , Hemostasis , Homeostasis , Humans , Models, Theoretical , Plasmapheresis/methods , Risk , Thrombosis
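The abstract does not give the model's equations. A common one-compartment description of a non-regenerating plasma protein during continuous exchange is sketched below as a plausible starting point, not the study's validated model:

```python
import math

def predict_post_tpe(pre_level: float, exchange_volume: float,
                     plasma_volume: float, replacement_level: float = 0.0) -> float:
    """One-compartment, well-mixed model of a non-regenerating plasma protein
    (e.g., fibrinogen) during continuous-flow TPE. Solving
    Vp * dC/dt = q * (R - C) over an exchanged volume Ve gives:
        post = R + (pre - R) * exp(-Ve / Vp)
    where R is the factor level of the replacement fluid. This is a generic
    textbook model, not necessarily the exact model validated in the study."""
    f = math.exp(-exchange_volume / plasma_volume)
    return replacement_level + (pre_level - replacement_level) * f
```

A single plasma-volume exchange with albumin (replacement level 0) leaves roughly exp(-1) ≈ 37% of the starting fibrinogen, which is the kind of prediction the study compared against measured post-TPE levels.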
8.
J Clin Microbiol ; 57(10)2019 10.
Article in English | MEDLINE | ID: mdl-31391227

ABSTRACT

There is limited knowledge on the incidence, diagnostic yield, and cost associated with inappropriate repeat urine cultures. The factors that affect repeat urine culturing practices are not well understood. We conducted a retrospective study of adult inpatients who had ≥1 urine culture performed during their hospitalization between January 2015 and February 2018. We analyzed the proportion of inappropriate repeat urine cultures performed <48 h after the index culture. We defined an inappropriate repeat urine culture to be a repeat urine culture performed following a negative index culture or a repeat urine specimen obtained from the same urinary catheter. Overall, 28,141 urine cultures were performed on 21,306 patients. There were 2,060 (7.3%) urine cultures repeated in <48 h. Of these, 1,120 (54.4%) urine cultures were inappropriate. Predictors for inappropriate repeat urine cultures included collection of the initial urine sample for culture in the emergency department (adjusted odds ratio [aOR], 5.65; 95% confidence interval [CI], 4.70 to 6.78), male gender (aOR, 1.61; 95% CI, 1.42 to 1.84), congestive heart failure (aOR, 1.20; 95% CI, 1.03 to 1.38), and a longer hospital stay (aOR, 1.01 per day; 95% CI, 1.00 to 1.01). A patient with an index urine culture obtained from an indwelling catheter (aOR, 0.65; 95% CI, 0.53 to 0.80) was less likely to have an inappropriate repeat culture. Among 1,120 negative index urine cultures, only 4.7% of repeat cultures were positive for bacteriuria. The estimated laboratory charges for inappropriate repeat urine cultures were $16,800 over the study period. Among inpatients, over half of all urine cultures repeated in <48 h were inappropriate. This offers an opportunity for diagnostic stewardship and optimization of antimicrobial use.


Subject(s)
Hospitalization , Urinalysis/methods , Urinary Tract Infections/diagnosis , Urinary Tract Infections/epidemiology , Aged , Bacteriuria/diagnosis , Bacteriuria/microbiology , Comorbidity , Cross Infection/epidemiology , Female , Humans , Incidence , Male , Middle Aged , Urinary Tract Infections/microbiology
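The study's operational definition of an inappropriate repeat culture — a repeat within 48 hours of a negative index culture, or a repeat specimen from the same urinary catheter — translates directly into code. The record fields below are illustrative, not the study's data model:

```python
from datetime import datetime, timedelta

def is_inappropriate_repeat(index: dict, repeat: dict) -> bool:
    """Classify a repeat urine culture per the study's definition.
    `index` and `repeat` are dicts with 'collected' (datetime),
    'positive' (bool), and optional 'catheter_id' keys (names assumed)."""
    if repeat["collected"] - index["collected"] >= timedelta(hours=48):
        return False  # not a <48 h repeat; outside the scope of this rule
    if not index["positive"]:
        return True   # repeat after a negative index culture
    same_catheter = (index.get("catheter_id") is not None
                     and index.get("catheter_id") == repeat.get("catheter_id"))
    return same_catheter
```

Run over a laboratory information system extract, a rule like this would yield the proportions the study reports (7.3% of cultures repeated in <48 h; 54.4% of those inappropriate).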
9.
Clin Chem ; 65(9): 1125-1131, 2019 09.
Article in English | MEDLINE | ID: mdl-31296551

ABSTRACT

BACKGROUND: Clinical decision support alerts for laboratory testing have poor compliance. Once-per-visit alerts, triggered by reorder of a test within the same admission, are highly specific for unnecessary orders and provide a means to study alert compliance. METHODS: Once-per-visit alerts for 18 laboratory orderables were analyzed over a 60-month period from September 2012 to October 2016 at a 1200-bed academic medical center. To determine correlates of alert compliance, we compared alerts by test and provider characteristics. RESULTS: Overall alert compliance was 54.5%. In multivariate regression, compliance correlated with length of stay at time of alert, provider type, previous alerts in a patient visit, test ordered, total alerts experienced by ordering provider, and previous order status. CONCLUSIONS: A diverse set of provider and test characteristics influences compliance with once-per-visit laboratory alerts. Future alerts should incorporate these characteristics into alert design to minimize alert overrides.


Subject(s)
Clinical Laboratory Techniques/statistics & numerical data , Decision Support Systems, Clinical , Medical Order Entry Systems , Medical Overuse/prevention & control , Academic Medical Centers , Humans , Multivariate Analysis , Regression Analysis , Retrospective Studies
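The once-per-visit trigger studied above is simple to state in code. This sketch tracks orders per admission and fires on any reorder; a production CDS rule would also handle the test-specific exclusions the abstract does not enumerate:

```python
def check_once_per_visit_alert(visit_id: str, test_code: str,
                               prior_orders: set) -> bool:
    """Fire a once-per-visit alert when the same orderable is reordered
    within the same admission. Records the order either way.
    `prior_orders` is a set of (visit_id, test_code) pairs (names assumed)."""
    key = (visit_id, test_code)
    fired = key in prior_orders
    prior_orders.add(key)
    return fired
```

The study's compliance analysis then asks, for each fired alert, whether the provider cancelled the duplicate order (compliant) or overrode the alert.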
11.
Transfusion ; 58 Suppl 1: 569-579, 2018 02.
Article in English | MEDLINE | ID: mdl-29443408

ABSTRACT

Red blood cell exchange is the process of removing red blood cells from a patient and replacing them with donated blood using either automated or manual techniques. Red blood cell exchange is a well-recognized and effective therapy for many red blood cell-related diseases, especially sickle cell disease. However, decisions regarding the best methods for vascular access are not intuitive and must account for the patient's clinical condition, complication risks, and lifestyle, especially in the context of long-term vascular access. In this review, we discuss the recognized indications for red blood cell exchange, considerations for the selection of exchange modality and vascular access, and recommendations for the appropriate care and prevention of risks associated with vascular access.


Subject(s)
Blood Component Removal/methods , Catheterization, Central Venous/methods , Catheterization, Peripheral/methods , Erythrocyte Transfusion/methods , Vascular Access Devices , Blood Component Removal/instrumentation , Catheterization, Central Venous/instrumentation , Catheterization, Peripheral/instrumentation , Erythrocyte Transfusion/instrumentation , Humans
12.
Transfusion ; 57(6): 1480-1484, 2017 06.
Article in English | MEDLINE | ID: mdl-28266038

ABSTRACT

BACKGROUND: Cold agglutinin disease (CAD) is a rare autoimmune hemolytic anemia mediated by autoantibodies that preferentially react at 4°C. Laboratory testing for cold-reactive autoantibodies is laborious and may not be ordered judiciously, particularly in patients with a negative direct antiglobulin test (DAT). We sought to determine whether a negative DAT using anti-human complement (anti-C3) rules out elevated cold agglutinin (CA) titers and the diagnosis of CAD. STUDY DESIGN AND METHODS: We performed a retrospective study of patients with a CA test performed at three major academic medical centers: Barnes-Jewish Hospital (2003-2014), Vanderbilt University Medical Center (2007-2009), and Massachusetts General Hospital (2009-2014). RESULTS: This study included 801 patients, of whom 51% (n = 410) had a DAT within the 7 days before CA testing. A total of 98% of patients with a negative DAT using anti-C3 had a negative CA titer (<64). Only five subjects had a negative DAT using anti-C3 and an elevated CA titer. CONCLUSIONS: Overutilization of CA testing could be reduced by establishing laboratory acceptance criteria based on a positive DAT using anti-C3. Such acceptance criteria would have reduced CA testing by 68% for those with an available DAT result.


Subject(s)
Coombs Test , Anemia, Hemolytic, Autoimmune/diagnosis , Anemia, Hemolytic, Autoimmune/immunology , Autoantibodies/analysis , Autoantibodies/immunology , Cryoglobulins/analysis , Cryoglobulins/immunology , Humans , Retrospective Studies
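The proposed acceptance criterion can be expressed as a small gating function. Allowing orders through when no recent DAT is on file follows the abstract's restriction of its estimate to patients with an available DAT result, and is otherwise an assumption:

```python
from datetime import datetime, timedelta

def accept_cold_agglutinin_order(order_time: datetime, dat_results) -> bool:
    """Laboratory acceptance rule suggested by the study: release a cold
    agglutinin titer only when an anti-C3 DAT within the prior 7 days is
    positive. `dat_results` is a list of (timestamp, anti_c3_positive)
    tuples; the 7-day lookback mirrors the study's comparison window."""
    recent = [positive for t, positive in dat_results
              if timedelta(0) <= order_time - t <= timedelta(days=7)]
    if not recent:
        return True  # no recent DAT on file: do not block the order
    return any(recent)
```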
13.
Pediatr Crit Care Med ; 18(2): 134-142, 2017 02.
Article in English | MEDLINE | ID: mdl-27832023

ABSTRACT

OBJECTIVES: RBC distribution width is reported to be an independent predictor of outcome in adults with a variety of conditions. We sought to determine if RBC distribution width is associated with morbidity or mortality in critically ill children. DESIGN: Retrospective observational study. SETTING: Tertiary PICU. PATIENTS: All admissions to St. Louis Children's Hospital PICU between January 1, 2005, and December 31, 2012. INTERVENTIONS: We collected demographics, laboratory values, hospitalization characteristics, and outcomes. We calculated the relative change in RBC distribution width from admission RBC distribution width to the highest RBC distribution width during the first 7 days of hospitalization. Our primary outcome was ICU mortality or use of extracorporeal membrane oxygenation as a composite. Secondary outcomes were ICU- and ventilator-free days. MEASUREMENTS AND MAIN RESULTS: We identified 3,913 eligible subjects with an estimated mortality (by Pediatric Index of Mortality 2) of 2.94% ± 9.25% and an actual ICU mortality of 2.91%. For the study cohort, admission RBC distribution width was 14.12% ± 1.89% and relative change in RBC distribution width was 2.63% ± 6.23%. On univariate analysis, both admission RBC distribution width and relative change in RBC distribution width correlated with mortality or the use of extracorporeal membrane oxygenation (odds ratio, 1.19 [95% CI, 1.12-1.27] and odds ratio, 1.06 [95% CI, 1.04-1.08], respectively; p < 0.001). After adjusting for confounding variables, including severity of illness, both admission RBC distribution width (odds ratio, 1.13; 95% CI, 1.03-1.24) and relative change in RBC distribution width (odds ratio, 1.04; 95% CI, 1.01-1.07) remained independently associated with ICU mortality or the use of extracorporeal membrane oxygenation. 
Admission RBC distribution width and relative change in RBC distribution width both weakly correlated with fewer ICU- (r = 0.038) and ventilator-free days (r = 0.05) (p < 0.001). CONCLUSIONS: Independent of illness severity in critically ill children, admission RBC distribution width is associated with ICU mortality and morbidity. These data suggest that RBC distribution width may be a biomarker for RBC injury that is of sufficient magnitude to influence critical illness outcome, possibly via oxygen delivery impairment.


Subject(s)
Critical Care , Critical Illness/mortality , Erythrocyte Indices , Erythrocytes/pathology , Hospital Mortality , Severity of Illness Index , Adolescent , Biomarkers/blood , Child , Child, Preschool , Critical Illness/therapy , Erythrocytes/physiology , Extracorporeal Membrane Oxygenation , Female , Humans , Infant , Infant, Newborn , Intensive Care Units, Pediatric , Linear Models , Logistic Models , Male , Prognosis , ROC Curve , Retrospective Studies
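The study's exposure variable, "relative change in RBC distribution width," has a direct computational form from its stated definition (admission RDW to the highest RDW during the first 7 days):

```python
def rdw_relative_change(admission_rdw: float, rdw_first_7_days) -> float:
    """Percent change from the admission RDW to the highest RDW observed in
    the first 7 days of hospitalization, per the study's definition.
    If no further values were measured, the change is zero."""
    peak = max([admission_rdw, *rdw_first_7_days])
    return 100.0 * (peak - admission_rdw) / admission_rdw
```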
15.
J Clin Microbiol ; 53(3): 887-95, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25568435

ABSTRACT

Excessive utilization of laboratory diagnostic testing leads to increased health care costs. We evaluated criteria to reduce unnecessary nucleic acid amplification testing (NAAT) for viral pathogens in cerebrospinal fluid (CSF) samples from adults. This is a single-center split retrospective observational study with a screening cohort from 2008 to 2012 and a validation cohort from 2013. Adults with available results for herpes simplex virus 1/2 (HSV-1/2), varicella-zoster virus (VZV), cytomegalovirus (CMV), or enterovirus (EV) NAAT with CSF samples between 2008 and 2013 were included (n = 10,917). During this study, 1.3% (n = 140) of viral NAAT studies yielded positive results. The acceptance criteria of >10 nucleated cells/µl in the CSF of immunocompetent subjects would have reduced HSV-1/2, VZV, CMV, and EV testing by 63%, 50%, 44%, and 51%, respectively, from 2008 to 2012. When these criteria were applied to the 2013 validation data set, 54% of HSV-1/2, 57% of VZV, 35% of CMV, and 56% of EV tests would have been cancelled. No clinically significant positive tests would have been cancelled in 2013 with this approach. The introduction of a computerized order entry set was associated with increased test requests, suggesting that computerized order sets may contribute to unnecessary testing. Acceptance criteria of >10 nucleated cells/µl in the CSF of immunocompetent adults for viral CSF NAAT assays would increase clinical specificity and preserve sensitivity, resulting in significant cost savings. Implementation of these acceptance criteria led to a 46% reduction in testing during a limited follow-up period.


Subject(s)
Cerebrospinal Fluid/cytology , Cerebrospinal Fluid/virology , Leukocytes, Mononuclear/cytology , Meningitis, Aseptic/diagnosis , Molecular Diagnostic Techniques/methods , Nucleic Acid Amplification Techniques/methods , Adult , Aged , Cohort Studies , Costs and Cost Analysis , Cytomegalovirus/isolation & purification , Enterovirus/isolation & purification , Herpesvirus 3, Human/isolation & purification , Humans , Leukocyte Count , Middle Aged , Molecular Diagnostic Techniques/economics , Nucleic Acid Amplification Techniques/economics , Retrospective Studies , Sensitivity and Specificity , Simplexvirus/isolation & purification
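The acceptance criterion evaluated above reduces to a one-line gate; exempting immunocompromised patients follows the abstract's restriction of the rule to immunocompetent subjects:

```python
def accept_csf_viral_naat(nucleated_cells_per_ul: float,
                          immunocompromised: bool) -> bool:
    """Screening rule from the study: in immunocompetent adults, perform
    CSF viral NAAT only when CSF nucleated cells exceed 10/uL;
    immunocompromised patients are always tested."""
    return immunocompromised or nucleated_cells_per_ul > 10
```

Applied retrospectively, a gate like this is what produced the reported 35% to 63% reductions in HSV-1/2, VZV, CMV, and EV testing without cancelling clinically significant positives.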
18.
Transfusion ; 55(2): 348-56, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25178153

ABSTRACT

BACKGROUND: Transfusion of ABO-incompatible platelets (PLTs) is associated with reduced PLT recovery and a risk of transfusion reactions. However, a policy of transfusing only ABO-identical PLTs may increase wastage due to product outdating. A prospective study attempting to compare the effects of different ABO compatibility strategies could be costly and disruptive to a blood bank's operations. STUDY DESIGN AND METHODS: We designed a "virtual blood bank," a stochastic computer program that models the stocking and release of products to meet demand for PLT transfusion in a simulated hospital population. ABO-nonidentical transfusions (ABOni), outdates, and inventory shortages were recorded and compared under two different transfusion strategies: ABO-First, a strategy that prioritizes transfusion of ABO-identical PLTs, and Age-First, a strategy that minimizes outdating by transfusing products closest to expiration. RESULTS: The ABO-First strategy resulted in fewer ABOni but more outdates than the Age-First strategy. Under conditions that mimic a large hospital blood bank, the ABO-First strategy was more cost-effective overall than the Age-First strategy if avoiding an ABOni is valued at more than $19 to $26. For a small blood bank, the ABO-First strategy was more cost-effective if avoiding an ABOni is valued at more than $104 to $123. CONCLUSION: Based on a virtual blood bank computer simulation, the cost of avoiding an ABOni using the ABO-First strategy varies greatly by size of institution. Individual blood banks must carefully consider these management strategies to determine the most cost-effective solution.


Subject(s)
ABO Blood-Group System , Blood Banking/methods , Blood Grouping and Crossmatching , Computer Simulation , Models, Theoretical , Platelet Transfusion/methods , Blood Group Incompatibility/prevention & control , Humans
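The core difference between the two issuing strategies can be reduced to a toy unit-selection function. The full study models stocking, demand, and expiry stochastically over a simulated hospital population, which this sketch omits:

```python
def issue_platelet(inventory, patient_abo: str, strategy: str):
    """Select a platelet unit for a patient under the two strategies
    compared in the study. `inventory` is a mutable list of
    (abo_type, days_to_expiry) tuples (representation assumed)."""
    if not inventory:
        return None  # inventory shortage
    if strategy == "ABO-First":
        # Prefer ABO-identical units; within the chosen pool, issue the
        # unit closest to expiration to limit outdating.
        identical = [u for u in inventory if u[0] == patient_abo]
        pool = identical or inventory
    elif strategy == "Age-First":
        pool = inventory  # ignore ABO identity; minimize outdates
    else:
        raise ValueError(f"unknown strategy: {strategy}")
    unit = min(pool, key=lambda u: u[1])
    inventory.remove(unit)
    return unit
```

Repeating this selection over simulated demand streams, and counting ABO-nonidentical transfusions and outdated units, reproduces the trade-off the study prices out.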
20.
Clin Chim Acta ; 557: 117862, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38460583

ABSTRACT

BACKGROUND: Analysis of whole blood specimens is rapid and saves blood, but hemolysis may go undetected and compromise the accuracy of potassium measurement. We aimed to define the frequency and magnitude of error in whole blood potassium measurement. METHODS: 34 months of whole blood and plasma potassium data were extracted from patients aged less than 2 years at the time of sample acquisition. Hemolysis was detected using the plasma "H index." The magnitude of potassium bias was estimated from the difference between paired whole blood and plasma measurements separated by less than 2 h. RESULTS: 56,000 of the 105,000 data points were from plasma, and 20% of these had significant hemolysis. Rates of hemolysis (nearing 50%) were greatest in the neonatal nursery. Of 662 proximal whole blood and plasma paired results, 8% had elevated whole blood potassium with a normal plasma value and 4% had a normal whole blood potassium with reduced plasma potassium. The bias between whole blood and plasma potassium ranged from -1.0 to 4.0 mmol/L. CONCLUSIONS: The use of whole blood analysis brings with it significant risk for error in potassium measurement. Better tools to detect hemolysis in these types of specimens are needed.


Subject(s)
Hemolysis , Potassium , Infant, Newborn , Humans , Child , Hematologic Tests , Reference Values
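The pairing step above — matching each whole blood potassium to a plasma potassium from the same patient measured within two hours — can be sketched as follows. The record layout is illustrative:

```python
from datetime import timedelta

def paired_potassium_bias(whole_blood, plasma, max_gap_hours: float = 2.0):
    """Pair each whole blood potassium with the nearest-in-time plasma
    potassium from the same patient within `max_gap_hours`, and return the
    (whole blood - plasma) differences, mirroring the abstract's comparison.
    Inputs are lists of (patient_id, collection_time, potassium) tuples."""
    gap = timedelta(hours=max_gap_hours)
    diffs = []
    for pid, t_wb, k_wb in whole_blood:
        candidates = [(abs(t_wb - t_p), k_p) for p, t_p, k_p in plasma
                      if p == pid and abs(t_wb - t_p) < gap]
        if candidates:
            _, k_nearest = min(candidates)
            diffs.append(k_wb - k_nearest)
    return diffs
```

Summarizing the resulting differences is what yields the reported bias range of -1.0 to 4.0 mmol/L across the 662 proximal pairs.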