ABSTRACT
Monoclonal gammopathy (MG) is a spectrum of diseases ranging from benign, asymptomatic monoclonal gammopathy of undetermined significance to malignant multiple myeloma. Clinical guidelines and laboratory recommendations have been developed to inform best practices in the diagnosis, monitoring, and management of MG. In this review, the pathophysiology of MG, the laboratory testing recommended in clinical practice guidelines, and the laboratory recommendations on MG testing and reporting are examined. The clinical guidelines recommend serum protein electrophoresis, serum immunofixation, and serum free light chain measurement as the initial screen. The laboratory recommendations omit serum immunofixation, as it offers limited additional diagnostic value. The laboratory recommendations also offer guidance on reporting findings beyond the monoclonal protein, which is not required by the clinical guidelines. The clinical guidelines suggest monitoring total IgA concentration by turbidimetry or nephelometry when the monoclonal protein migrates in the non-gamma region, whereas the laboratory recommendations make allowance for involved IgM and IgG as well. Additionally, several external quality assurance (EQA) programs for MG protein electrophoresis and free light chain testing are appraised. The EQA programs show varied assessment criteria for protein electrophoresis reporting and units of measurement. There is also significant disparity in reported monoclonal protein concentrations, with wide inter-method analytical variation noted for both monoclonal protein quantification and serum free light chain measurement; this variation appears smaller when the same method is used. Greater harmonization of laboratory recommendations and reporting formats may improve the clinical interpretation of MG testing.
Subject(s)
Monoclonal Gammopathy of Undetermined Significance, Paraproteinemias, Humans, Monoclonal Gammopathy of Undetermined Significance/diagnosis, Paraproteinemias/diagnosis, Laboratories, Immunoglobulin Light Chains
ABSTRACT
Emerging technology in laboratory medicine can be defined as an analytical method (including biomarkers) or device (software, applications, and algorithms) that, by its stage of development, translation into broad routine clinical practice, or geographical adoption and implementation, has the potential to add value to clinical diagnostics. Paediatric laboratory medicine may itself be considered an emerging area of specialisation, established relatively recently following increased appreciation and understanding of the unique physiology and healthcare needs of children. Through four clinical (neonatal hypoglycaemia, neonatal hyperbilirubinaemia, sickle cell disorder, congenital adrenal hyperplasia) and six technological (microassays, noninvasive testing, alternative matrices, next-generation sequencing, exosome analysis, machine learning) illustrations, key takeaways for the application of emerging technology in each area are summarised. Additionally, nine key considerations when applying emerging technology in the paediatric laboratory medicine setting are discussed.
Subject(s)
Pediatrics, Humans, Pediatrics/methods, Child, Newborn, High-Throughput Nucleotide Sequencing, Congenital Adrenal Hyperplasia/diagnosis, Sickle Cell Anemia/diagnosis, Biomarkers/analysis, Biomarkers/blood, Neonatal Hyperbilirubinemia/diagnosis, Neonatal Hyperbilirubinemia/blood, Machine Learning, Clinical Laboratory Techniques/methods
ABSTRACT
Analytical performance specifications (APS) are used for decisions about the analytical quality required of pathology tests to meet clinical needs. The Milan models, based on clinical outcomes, biological variation, or the state of the art, were developed to provide a framework for setting APS. An approach has been proposed to assign each measurand to one of the models based on a defined clinical use, physiological control, or the absence of quality information about these factors. In this paper we propose that, in addition to such assignment, available information from all models should be considered using a risk-based approach that takes into account the purpose and role of the actual test in a clinical pathway and its impact on medical decisions and clinical outcomes, alongside biological variation and the state of the art. APS already in use, and whether results are used in calculations, may also need to be considered to determine the most appropriate APS for a specific setting.
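As a worked illustration of the biological variation model mentioned above, the widely cited formulas derive desirable imprecision, bias and total allowable error from within-subject (CVi) and between-subject (CVg) variation. The sketch below is illustrative only; the function name and the CVi/CVg values are assumptions, not data from this paper.

```python
# Desirable analytical performance specifications derived from biological
# variation (widely cited formulas); the CVi/CVg values are illustrative only.

def desirable_aps(cv_i: float, cv_g: float) -> dict:
    """Return desirable imprecision, bias and total allowable error (all in %)."""
    cv_a = 0.5 * cv_i                          # desirable imprecision
    bias = 0.25 * (cv_i**2 + cv_g**2) ** 0.5   # desirable bias
    tea = bias + 1.65 * cv_a                   # desirable total allowable error
    return {"CVa_max": cv_a, "bias_max": bias, "TEa": tea}

if __name__ == "__main__":
    # Hypothetical measurand with CVi = 5 %, CVg = 10 %
    print(desirable_aps(cv_i=5.0, cv_g=10.0))
    # e.g. CVa_max = 2.5 %, bias_max ~ 2.8 %, TEa ~ 6.9 %
```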
Subject(s)
Quality Control, Humans, Clinical Laboratory Techniques/standards, Theoretical Models
ABSTRACT
Analytical performance specifications (APS) based on outcomes refer to how 'good' the analytical performance of a test needs to be to do more good than harm to the patient. The analytical performance of a measurement procedure affects its clinical performance. Without first setting clinical performance requirements, it is difficult to define how good the test needs to be analytically to meet medical needs. As testing is linked to health outcomes only indirectly, through clinical decisions on patient management, simulation-based studies are often used to assess the impact of analytical performance on the probability of clinical outcomes, which is then translated into Model 1b APS according to the Milan consensus. This paper discusses the key definitions, concepts and considerations that should assist in finding the most appropriate methods for deriving Model 1b APS. We review the advantages and limitations of published methods and discuss the criteria for transferability of Model 1b APS to different settings. We consider the definition of the clinically acceptable misclassification rate to be central to Model 1b APS. We provide examples and guidance on a more systematic approach to first defining the clinical performance requirements for tests, and we highlight a few ideas to tackle the future challenges associated with providing outcome-based APS for laboratory testing.
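A minimal simulation sketch of the outcome-based (Model 1b) reasoning described above: analytical imprecision is superimposed on simulated 'true' results and the resulting misclassification rate at a fixed decision threshold is compared with a clinically acceptable rate. The population distribution, decision threshold and acceptable rate are illustrative assumptions, not values from this paper.

```python
# Sketch: estimate how analytical imprecision changes the misclassification
# rate at a clinical decision threshold (Milan Model 1b style simulation).
# All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def misclassification_rate(cv_a_pct: float, bias_pct: float,
                           threshold: float = 6.5,   # assumed decision cut-off
                           n: int = 100_000) -> float:
    """Fraction of patients whose classification flips once analytical error is added."""
    true_vals = rng.normal(loc=6.5, scale=0.8, size=n)          # assumed population
    measured = true_vals * (1 + bias_pct / 100) \
               + rng.normal(0, true_vals * cv_a_pct / 100, size=n)
    return np.mean((true_vals >= threshold) != (measured >= threshold))

acceptable = 0.05  # assumed clinically acceptable misclassification rate
for cv_a in (1.0, 2.0, 3.0, 5.0):
    rate = misclassification_rate(cv_a, bias_pct=0.0)
    flag = "OK" if rate <= acceptable else "exceeds limit"
    print(f"CVa={cv_a:.1f}%  misclassification={rate:.3f}  {flag}")
```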
Subject(s)
Clinical Laboratory Techniques, Humans, Clinical Laboratory Techniques/standards
ABSTRACT
In this computer simulation study, we examine four statistical approaches to linearity assessment: two variants of deviation from linearity (individual (IDL) and averaged (ADL)) and two based on the residuals of linear regression (individual and averaged). From the results of the simulation, the following broad suggestions are provided to laboratory practitioners performing linearity assessment. High imprecision can compromise linearity investigations by producing a high false positive rate or a low power of detection; the imprecision of the measurement procedure should therefore be considered when interpreting linearity assessment results, and in the presence of high imprecision the results should be interpreted with caution. The different linearity assessment approaches examined in this study performed well under different analytical scenarios, so a considered and tailored study design should be implemented for optimal outcomes. With the exception of specific scenarios, both the ADL and IDL methods were suboptimal for the assessment of linearity compared with the regression-residual approaches. When imprecision is low (3 %), the averaged residual of linear regression with triplicate measurements and a non-linearity acceptance limit of 5 % produces a false positive rate of <5 % and a power of >70 % for detecting non-linearity across different types and degrees of non-linearity. Departures from linearity are difficult to identify in practice, and enhanced methods of detection need development.
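The sketch below illustrates one plausible implementation of the averaged-residual approach described above: a straight line is fitted to triplicate measurements across dilution levels and the averaged percentage residual at each level is compared with a 5 % acceptance limit. The data, the mild curvature and the 3 % imprecision are simulated assumptions.

```python
# Sketch of an averaged-residual linearity check: fit a straight line to
# triplicate measurements across dilution levels, average the percentage
# residuals per level, and flag any level exceeding a 5 % acceptance limit.
import numpy as np

def linearity_check(levels, replicate_results, limit_pct=5.0):
    x = np.repeat(levels, [len(r) for r in replicate_results])
    y = np.concatenate(replicate_results)
    slope, intercept = np.polyfit(x, y, 1)              # ordinary least squares line
    verdict = []
    for lvl, reps in zip(levels, replicate_results):
        fitted = slope * lvl + intercept
        avg_resid_pct = 100 * np.mean(np.asarray(reps) - fitted) / fitted
        verdict.append((lvl, avg_resid_pct, abs(avg_resid_pct) <= limit_pct))
    return verdict

# Assumed example: five levels, triplicates, ~3 % imprecision, slight curvature
rng = np.random.default_rng(1)
levels = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
truth = levels - 0.02 * levels**2                        # simulated non-linearity
reps = [t * (1 + rng.normal(0, 0.03, 3)) for t in truth]
for lvl, resid, ok in linearity_check(levels, reps):
    print(f"level {lvl:5.1f}: averaged residual {resid:+.2f} %  linear? {ok}")
```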
Subject(s)
Computer Simulation, Linear Models, Humans
ABSTRACT
OBJECTIVES: Interference from isomeric steroids is a potential cause of disparity between mass spectrometry-based 17-hydroxyprogesterone (17OHP) results. We aimed to assess the proficiency of mass spectrometry laboratories in reporting 17OHP in the presence of known isomeric steroids. METHODS: A series of five samples was prepared using a previously demonstrated commutable approach. These samples included a control (spiked to 15.0 nmol/L 17OHP) and four challenge samples further enriched with equimolar concentrations of 17OHP isomers (11α-hydroxyprogesterone, 11β-hydroxyprogesterone, 16α-hydroxyprogesterone or 21-hydroxyprogesterone). The samples were distributed to 38 laboratories that report serum 17OHP results by mass spectrometry in two external quality assurance programs. The result for each challenge sample was compared to the control sample result submitted by the same participant. RESULTS: Twenty-six laboratories (68 % of the distribution) across three continents returned results. Twenty-five laboratories used liquid chromatography-tandem mass spectrometry (LC-MS/MS), and one used gas chromatography-tandem mass spectrometry to measure 17OHP. The all-method median for the control sample was 14.3 nmol/L, with results ranging from 12.4 to 17.6 nmol/L. One laboratory had a result that approached the lower limit of tolerance (-17.7 % relative to its control sample), suggesting that the isomeric steroid caused an irregular result. CONCLUSIONS: Most participating laboratories demonstrated their ability to reliably measure 17OHP in the presence of the four clinically relevant isomeric steroids. The performance of the 12 laboratories (32 %) that did not engage in this activity remains unclear. We recommend that all laboratories offering LC-MS/MS analysis of 17OHP in serum, plasma, or dried bloodspots verify that the isomeric steroids are appropriately separated.
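The comparison described above reduces to a relative difference between each challenge sample and the participant's own control. The sketch below shows how such a result set might be screened; the results and the ±20 % tolerance band are illustrative assumptions, not data from the EQA exercise.

```python
# Sketch: flag challenge-sample results that deviate from a participant's own
# control result by more than a tolerance band. Values and the +/-20 % band are
# illustrative assumptions.

def screen_isomer_challenges(control: float, challenges: dict, tol_pct: float = 20.0):
    for isomer, result in challenges.items():
        dev_pct = 100 * (result - control) / control
        status = "within tolerance" if abs(dev_pct) <= tol_pct else "FLAG"
        print(f"{isomer:>28s}: {result:5.1f} nmol/L  ({dev_pct:+5.1f} %)  {status}")

screen_isomer_challenges(
    control=14.3,
    challenges={
        "11alpha-hydroxyprogesterone": 14.0,
        "11beta-hydroxyprogesterone": 11.8,   # ~ -17.5 %, close to the limit
        "16alpha-hydroxyprogesterone": 14.6,
        "21-hydroxyprogesterone": 14.1,
    },
)
```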
Subject(s)
Hydroxyprogesterones, Tandem Mass Spectrometry, Humans, Liquid Chromatography/methods, Tandem Mass Spectrometry/methods, Sensitivity and Specificity, 17-alpha-Hydroxyprogesterone, Steroids
ABSTRACT
The biuret method is currently recognized as a reference measurement procedure for serum/plasma total protein by the Joint Committee for Traceability in Laboratory Medicine (JCTLM). However, because the reaction involved is highly time-dependent, a fast and simple measurement procedure that ensures identical measurement conditions for calibrator and samples is critical to the precision and trueness of the method. We measured serum/plasma total protein using a Cary 60 spectrophotometer coupled with a fiber-optic probe, which was faster and simpler than the conventional cuvette method. The biuret reagent, an alkaline solution of copper sulfate and potassium sodium tartrate, was added to the samples and the calibrator (NIST SRM 927e), which were incubated for 1 h before measurement. A panel of samples consisting of pooled human serum, single-donor serum, and certified reference materials (CRMs) from three sources was measured for method validation. Sixteen native patient samples were measured using the newly developed biuret method and compared against clinical analyzers. Additionally, the results of three cycles of a local External Quality Assessment (EQA) Programme submitted by participating clinical laboratories were compared against the biuret method. Our biuret method using the fiber-optic probe demonstrated good precision, with within-day relative standard deviations (RSD) of 0.04 to 0.23 % and a between-day RSD of 0.58 %. The deviations between the obtained values and the certified values for all three CRMs ranged from -0.38 to 1.60 %, indicating good trueness. The routine methods on clinical analyzers also agreed well with the developed biuret method for EQA samples and native patient samples. The biuret method using a fiber-optic probe represents a convenient and reliable way of measuring serum total protein. It demonstrated excellent precision and trueness with CRMs and patient samples, making it a simpler candidate reference method for serum total protein measurement.
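For readers unfamiliar with the precision figures quoted above, the sketch below shows how within-day and between-day relative standard deviations (RSD) can be calculated from replicate results. The values are invented for illustration, and the between-day figure here is simply the RSD of the daily means; a full variance-components (ANOVA) analysis would separate the within- and between-day contributions more rigorously.

```python
# Sketch: within-day and between-day relative standard deviation (RSD).
# The replicate concentrations below are invented solely to illustrate the calculation.
import numpy as np

def rsd_pct(values) -> float:
    values = np.asarray(values, dtype=float)
    return 100 * values.std(ddof=1) / values.mean()

# Assumed: five replicate total-protein results (g/L) on each of three days
days = [
    [70.1, 70.2, 70.1, 70.0, 70.2],
    [69.8, 69.9, 70.0, 69.9, 69.8],
    [70.4, 70.3, 70.5, 70.4, 70.3],
]

within_day = [rsd_pct(d) for d in days]
day_means = [np.mean(d) for d in days]
print("within-day RSD (%):", [f"{r:.2f}" for r in within_day])
print(f"between-day RSD of daily means (%): {rsd_pct(day_means):.2f}")
```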
ABSTRACT
Quality control practices in the modern laboratory are the result of significant advances over the many years of the profession. Conventional internal quality control has undergone a philosophical shift from a focus solely on the statistical assessment of the probability of error identification to more recent thinking on the capability of the measurement procedure (e.g. sigma metrics), and most recently the risk of harm to the patient (the probability of patient results being affected by an error, or the number of patient results with unacceptable analytical quality). Nonetheless, conventional internal quality control strategies still face significant limitations, such as the lack of (proven) commutability of the control material with patient samples, the frequency of episodic testing, and the operational and financial costs, that cannot be overcome by statistical advances. In contrast, patient-based quality control has seen significant developments, including algorithms that improve the detection of specific errors, parameter optimization approaches, systematic validation protocols, and advanced algorithms that require very low numbers of patient results while retaining sensitive error detection. Patient-based quality control will continue to improve with the development of new algorithms that reduce biological noise and improve analytical error detection. Patient-based quality control provides continuous and commutable information about the measurement procedure that cannot be easily replicated by conventional internal quality control. Most importantly, its use helps laboratories to improve their appreciation of the clinical impact of the results they produce, bringing them closer to the patients. Laboratories are encouraged to implement patient-based quality control processes to overcome the limitations of conventional internal quality control practices. Regulatory changes to recognize the capability of patient-based quality approaches, as well as advances in laboratory informatics, are required for this tool to be adopted more widely.
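As a simple illustration of the patient-based quality control concept discussed above (not the specific published algorithms), the sketch below applies a truncated moving average of patient results against control limits derived from an assumed error-free baseline period; the truncation limits, block size and simulated shift are illustrative assumptions.

```python
# Sketch of a simple patient-based QC monitor: a truncated moving average of
# consecutive patient results compared against control limits derived from an
# error-free baseline period. Published algorithms (e.g. moving median, EWMA)
# refine this basic idea; all parameters here are assumed.
import numpy as np

def moving_average_qc(results, trunc=(2.0, 8.0), block=20, baseline_blocks=50):
    r = np.asarray(results, dtype=float)
    r = r[(r >= trunc[0]) & (r <= trunc[1])]            # exclude extreme (likely pathological) results
    n_blocks = len(r) // block
    block_means = r[:n_blocks * block].reshape(n_blocks, block).mean(axis=1)
    base = block_means[:baseline_blocks]                # assume the baseline period is error-free
    lo, hi = base.mean() - 3 * base.std(ddof=1), base.mean() + 3 * base.std(ddof=1)
    alarms = np.where((block_means < lo) | (block_means > hi))[0]
    return block_means, (lo, hi), alarms

rng = np.random.default_rng(7)
baseline = rng.normal(5.0, 0.9, 2000)                   # in-control patient results
shifted = rng.normal(5.0, 0.9, 400) + 0.5               # simulated positive analytical shift
means, limits, alarms = moving_average_qc(np.concatenate([baseline, shifted]))
print(f"control limits: {limits[0]:.2f} - {limits[1]:.2f}")
print("first alarmed block:", alarms[0] if alarms.size else "none")
```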
ABSTRACT
BACKGROUND: Social behaviors such as altruism, where one self-sacrifices for collective benefit, critically influence an organism's survival and responses to the environment. Such behaviors are widely exemplified in nature but have been underexplored in cancer cells, which are conventionally seen as selfish, competitive players. This multidisciplinary study explores altruism and its mechanism in breast cancer cells and its contribution to chemoresistance. METHODS: MicroRNA profiling was performed on circulating tumor cells collected from the blood of treated breast cancer patients. Cancer cell lines ectopically expressing the candidate miRNA were used in co-culture experiments and treated with docetaxel. Ecological parameters such as relative survival and relative fitness were assessed using flow cytometry. Functional studies and characterization performed in vitro and in vivo included proliferation assays, iTRAQ-mass spectrometry, RNA sequencing, inhibition by small molecules and antibodies, siRNA knockdown, CRISPR/dCas9 inhibition, and fluorescence imaging of promoter reporter-expressing cells. Mathematical modeling based on evolutionary game theory was performed to simulate the spatial organization of cancer cells. RESULTS: Opposing cancer processes underlie altruism: an oncogenic process involving secretion of IGFBP2 and CCL28 by the altruists to induce survival benefits in neighboring cells under taxane exposure, and a self-sacrificial tumor-suppressive process impeding proliferation of the altruists via cell cycle arrest. Both processes are regulated concurrently in the altruists by miR-125b, via differential NF-κB signaling specifically through IKKβ. Altruistic cells persist in the tumor despite their self-sacrifice, as they can regenerate epigenetically from non-altruists via a KLF2/PCAF-mediated mechanism. The altruists maintain a sparse spatial organization by inhibiting surrounding cells from adopting the altruistic fate via a lateral inhibition mechanism involving a GAB1-PI3K-AKT-miR-125b signaling circuit. CONCLUSIONS: Our data reveal the molecular mechanisms underlying the manifestation, persistence and spatial spread of cancer cell altruism. A minor population behaves altruistically at a cost to itself, producing a collective benefit for the tumor, suggesting that tumors are dynamic social systems governed by the same rules of cooperation as social organisms. Understanding cancer cell altruism may lead to more holistic models of tumor evolution and drug response, as well as therapeutic paradigms that account for social interactions. Cancer cells also constitute tractable experimental models for fields beyond oncology, such as evolutionary ecology and game theory.
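For readers unfamiliar with evolutionary game theory, the toy sketch below (a well-mixed replicator model, not the authors' spatial model) shows why costly altruism decays without a replenishment mechanism, which is consistent with the need for the regeneration and lateral-inhibition mechanisms described above; all parameters are illustrative assumptions.

```python
# Toy, well-mixed replicator-dynamics sketch (NOT the authors' spatial model):
# altruists pay a fitness cost c and confer a shared benefit b on the population.
# Without replenishment the altruist fraction decays; a small conversion rate
# ("regen", standing in for epigenetic regeneration) lets it persist.
def replicator(x0=0.1, b=0.4, c=0.1, regen=0.0, steps=200, dt=0.1):
    x = x0
    for _ in range(steps):
        w_alt = b * x - c        # altruist payoff: shared benefit minus own cost
        w_non = b * x            # non-altruist free-rides on the shared benefit
        w_bar = x * w_alt + (1 - x) * w_non
        dx = x * (w_alt - w_bar) + regen * (1 - x)   # regen = conversion of non-altruists
        x = min(max(x + dt * dx, 0.0), 1.0)
    return x

print("no regeneration, final altruist fraction:", round(replicator(), 3))
print("with regeneration, final altruist fraction:", round(replicator(regen=0.02), 3))
```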
Subject(s)
Breast Neoplasms, MicroRNAs, Humans, Female, Altruism, Phosphatidylinositol 3-Kinases, MicroRNAs/genetics, Breast Neoplasms/genetics
ABSTRACT
An emerging technology (ET) for laboratory medicine can be defined as an analytical method (including biomarkers) or device (software, applications, and algorithms) that, by its stage of development, translation into broad routine clinical practice, or geographical adoption and implementation, has the potential to add value to clinical diagnostics. Based on this laboratory medicine-specific definition, this document examines eight key tools, encompassing clinical, analytical, operational, and financial aspects, used throughout the life cycle of ET implementation. The tools provide a systematic approach, starting with identifying the unmet need or opportunities for improvement (Tool 1), followed by forecasting (Tool 2), technology readiness assessment (Tool 3), health technology assessment (Tool 4), an organizational impact map (Tool 5), change management (Tool 6), a total pathway to method evaluation checklist (Tool 7), and green procurement (Tool 8). Whilst clinical priorities differ between settings, the use of this set of tools will help support the overall quality and sustainability of emerging technology implementation.
Subject(s)
Biomedical Technology, Medical Laboratory Science, Forecasting, Medical Laboratory Science/trends
ABSTRACT
Lot-to-lot verification is an integral component of monitoring the long-term stability of a measurement procedure. The practice is challenged by resource requirements as well as uncertainty surrounding the experimental design and statistical analysis that are optimal for individual laboratories, although guidance is becoming increasingly available. Collaborative verification efforts as well as the application of patient-based monitoring are likely to further improve the timely identification of differences in performance. Appropriate follow-up actions for a failed lot-to-lot verification are required and must balance potential disruptions to the clinical services provided by the laboratory. Manufacturers need to increase transparency surrounding release criteria and work more closely with laboratory professionals to ensure acceptable reagent lots are released to end users. A tripartite collaboration between regulatory bodies, manufacturers, and laboratory medicine professional bodies is key to developing a balanced system in which the regulatory, manufacturing, and clinical requirements of laboratory testing are met, to minimize differences between reagent lots and ensure patient safety. Clinical Chemistry and Laboratory Medicine has served as a fertile platform for advancing the discussion and practice of lot-to-lot verification over the past 60 years and will continue to be an advocate of this important topic for many years to come.
Subject(s)
Clinical Chemistry, Diagnostic Reagent Kits, Humans, Quality Control, Laboratories
ABSTRACT
Method evaluation is one of the critical components of the quality system that ensures the ongoing quality of a clinical laboratory. When implementing new methods or reviewing best practices, the peer-reviewed literature is often searched for guidance. From the outset, Clinical Chemistry and Laboratory Medicine (CCLM) has had a rich history of publishing methods relevant to clinical laboratory medicine. Insight into submissions, drawn from editors' and reviewers' experiences, shows that authors still struggle with method evaluation, particularly with the appropriate requirements for validation in clinical laboratory medicine. Here, we consider through a series of discussion points an overview of the status, challenges, and needs of method evaluation from the perspective of clinical laboratory medicine. We identify six key high-level aspects of clinical laboratory method evaluation that potentially lead to inconsistency: 1. standardisation of terminology; 2. selection of analytical performance specifications; 3. experimental design of method evaluation; 4. sample requirements of method evaluation; 5. statistical assessment and interpretation of method evaluation data; and 6. reporting of method evaluation data. Each of these areas requires considerable work to harmonise the practice of method evaluation in laboratory medicine, including more empirical studies to be incorporated into guidance documents that are relevant to clinical laboratories and are freely and widely available. To further close the loop, educational activities and the fostering of professional collaborations are essential to promote and improve the practice of method evaluation procedures.
Subject(s)
Clinical Laboratory Services, Clinical Laboratories, Humans, Clinical Laboratory Techniques, Laboratories
ABSTRACT
Urine albumin concentration and the albumin-creatinine ratio are important for the screening of early-stage kidney damage. Commutable urine certified reference materials (CRMs) for albumin and creatinine are necessary for the standardization of urine albumin measurement and accurate determination of the albumin-creatinine ratio. Two urine CRMs for albumin and creatinine, with certified values determined using higher-order reference measurement procedures, were evaluated for their commutability on five brands/models of clinical analyzers using different reagent kits: Roche Cobas c702, Roche Cobas c311, Siemens Atellica CH, Beckman Coulter AU5800, and Abbott Architect c16000. The commutability study was conducted by measuring at least 26 authentic patient urine samples and the human urine CRMs using both the reference measurement procedures and the routine methods. Both the linear regression model suggested by the Clinical and Laboratory Standards Institute (CLSI) guidelines and the log-transformed model recommended by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) Commutability Working Group were used to evaluate the commutability of the human urine CRMs. The commutability of the human urine CRMs was found to be generally satisfactory on all five clinical analyzers for both albumin and creatinine, suggesting that they are suitable for routine use by clinical laboratories as quality control materials or for method validation of urine albumin and creatinine measurements.
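The sketch below illustrates the general logic of a regression-based commutability assessment in the spirit of the CLSI approach cited above: routine-method results are regressed on reference-method results for patient samples, and the CRM is judged commutable if it falls within the prediction interval. The data, analyser bias and acceptance logic are illustrative assumptions rather than the study's actual evaluation criteria.

```python
# Sketch of a regression-based commutability check: regress routine-method
# results on reference-method results for patient urines, build a 95 %
# prediction interval, and see whether the CRM falls inside it.
# All data below are invented for illustration.
import numpy as np
from scipy import stats

def commutability_check(ref, routine, crm_ref, crm_routine, alpha=0.05):
    ref, routine = np.asarray(ref, float), np.asarray(routine, float)
    n = len(ref)
    slope, intercept, *_ = stats.linregress(ref, routine)
    resid = routine - (slope * ref + intercept)
    s = np.sqrt(np.sum(resid**2) / (n - 2))
    se_pred = s * np.sqrt(1 + 1/n + (crm_ref - ref.mean())**2 / np.sum((ref - ref.mean())**2))
    t = stats.t.ppf(1 - alpha / 2, n - 2)
    pred = slope * crm_ref + intercept
    lo, hi = pred - t * se_pred, pred + t * se_pred
    return lo <= crm_routine <= hi, (lo, hi)

rng = np.random.default_rng(3)
ref = rng.uniform(5, 300, 30)                       # assumed patient urine albumin, mg/L
routine = 1.03 * ref + rng.normal(0, 0.04 * ref)    # routine analyser with a small bias
ok, (lo, hi) = commutability_check(ref, routine, crm_ref=120.0, crm_routine=125.0)
print(f"CRM commutable on this analyser? {ok}  (prediction interval {lo:.1f}-{hi:.1f})")
```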
Subject(s)
Albumins, Statistical Models, Humans, Creatinine, Reference Standards, Quality Control
ABSTRACT
Reporting a measurement procedure and its analytical performance following method evaluation in a peer-reviewed journal is an important means for clinical laboratory practitioners to share their findings. It also represents an important source of evidence to help others make informed decisions about their practice. At present, there are significant variations in the information reported in laboratory medicine journal publications describing the analytical performance of measurement procedures. These variations also challenge authors, readers, reviewers, and editors in judging the quality of a submitted manuscript. The International Federation of Clinical Chemistry and Laboratory Medicine Working Group on Method Evaluation Protocols (IFCC WG-MEP) developed a checklist and recommends its adoption to enable a consistent approach to reporting method evaluation and analytical performance characteristics of measurement procedures in laboratory medicine journals. It is envisioned that the LEAP checklist will improve the standardisation of journal publications describing method evaluation and analytical performance characteristics, improving the quality of the evidence base relied upon by practitioners.
Subject(s)
Checklist, Clinical Laboratory Services, Humans, Reference Standards, Laboratories, Clinical Laboratories
ABSTRACT
Neonatal jaundice is one of the most common clinical conditions affecting newborns. For most newborns jaundice is harmless; however, a proportion develop severe neonatal jaundice requiring therapeutic intervention, accentuating the need for reliable and accurate screening tools that allow timely recognition across different health settings. The gold standard method for diagnosing jaundice involves a blood test and requires specialized hospital-based laboratory instruments. Despite technological advancements in point-of-care laboratory medicine, access to these specialized devices is limited and sample stability is a concern in geographically remote areas. The lack of suitable testing options leads to delays in the timely diagnosis and treatment of clinically significant jaundice in developed and developing countries alike. There has been an ever-increasing need for a low-cost, simple-to-use screening technology to improve the timely diagnosis and management of neonatal jaundice, and several point-of-care (POC) devices have been developed to address this need. This paper reviews the literature on emerging technologies for the screening and diagnosis of neonatal jaundice. We report on the challenges associated with existing screening tools, followed by an overview of emerging sensors currently in pre-clinical development and emerging POC devices in clinical trials to advance the screening of neonatal jaundice. The benefits offered by emerging POC devices include their ease of use, low cost, and rapid turnaround of results. However, further clinical trials are required to overcome the current limitations of these emerging POC devices before their implementation in clinical settings. Hence, the need for a simple-to-use, low-cost POC jaundice detection technology for newborns remains an unsolved challenge globally.
Subject(s)
Neonatal Jaundice, Humans, Newborn, Neonatal Jaundice/diagnosis, Neonatal Screening, Point-of-Care Systems
ABSTRACT
Lot-to-lot verification is an important laboratory activity performed to monitor the consistency of analytical performance over time. In this opinion paper, the concept, clinical impact, challenges and potential solutions for lot-to-lot verification are examined.
Subject(s)
Laboratories, Diagnostic Reagent Kits, Humans, Quality Control
ABSTRACT
OBJECTIVES: Within-subject biological variation (CVi) is fundamental to laboratory medicine, underpinning the interpretation of serial results, the partitioning of reference intervals, and the setting of analytical performance specifications. Four indirect (data mining) approaches to determining CVi were directly compared. METHODS: Paired serial laboratory results for 5,000 patients were simulated using four parameters: d, the percentage difference in means between the pathological and non-pathological populations; CVi, the within-subject coefficient of variation for non-pathological values; f, the fraction of pathological values; and e, the relative increase in CVi of the pathological distribution. These parameters resulted in a total of 128 permutations. The performance of the expected mean squares (EMS) method, the median method, a result ratio method with Tukey's outlier exclusion, and a modified result ratio method with Tukey's outlier exclusion was compared. RESULTS: Within the 128 permutations examined in this study, the EMS method performed the best, with 101/128 permutations falling within ±0.20 fractional error of the 'true' simulated CVi, followed by the result ratio method with Tukey's exclusion (78/128 permutations). The median method grossly under-estimated the CVi. The modified result ratio method with Tukey's rule performed best overall, with 114/128 permutations within allowable error. CONCLUSIONS: This simulation study demonstrates that, with careful selection of the statistical approach, the influence of outliers from pathological populations can be minimised, and it is possible to recover CVi values close to those of the 'true' underlying non-pathological population. This finding provides further evidence for the use of routine laboratory databases in the derivation of biological variation components.
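The sketch below is a rough, illustrative approximation of the result-ratio class of approaches compared above: log ratios of paired results are screened with Tukey's fences, the within-subject variation is taken from the spread of the retained ratios, and the analytical component is subtracted. The simulated population parameters are assumptions chosen only to show that the 'true' CVi can be recovered.

```python
# Rough sketch of a result-ratio approach to estimating within-subject
# biological variation (CVi) from paired serial results, with Tukey's fences
# used to exclude outlying (likely pathological) pairs. Illustrative only.
import numpy as np

def cvi_from_pairs(x1, x2, cv_analytical=0.0):
    ratio = np.log(np.asarray(x2, float) / np.asarray(x1, float))
    q1, q3 = np.percentile(ratio, [25, 75])
    iqr = q3 - q1
    keep = (ratio >= q1 - 1.5 * iqr) & (ratio <= q3 + 1.5 * iqr)   # Tukey's fences
    cv_pair = ratio[keep].std(ddof=1) / np.sqrt(2) * 100           # CV of a single result
    return max(cv_pair**2 - cv_analytical**2, 0.0) ** 0.5          # remove analytical CV

rng = np.random.default_rng(11)
n, cvi_true, cva = 5000, 8.0, 3.0
homeostatic = rng.normal(100, 15, n)                                # between-subject spread
x1 = homeostatic * (1 + rng.normal(0, cvi_true/100, n)) * (1 + rng.normal(0, cva/100, n))
x2 = homeostatic * (1 + rng.normal(0, cvi_true/100, n)) * (1 + rng.normal(0, cva/100, n))
print(f"recovered CVi: {cvi_from_pairs(x1, x2, cv_analytical=cva):.1f} % (true {cvi_true} %)")
```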
Subject(s)
Data Mining, Research Design, Computer Simulation, Humans, Laboratories, Reference Values
ABSTRACT
OBJECTIVES: Detection of between-lot reagent bias is clinically important and can be assessed by applying regression-based statistics to several paired measurements obtained with the existing and the new candidate lot. Here, the bias detection capability of six regression-based lot-to-lot reagent verification assessments, including an extension of the Bland-Altman with regression approach, is compared. METHODS: Least squares and Deming regression (in both weighted and unweighted forms), confidence ellipses, and Bland-Altman with regression (BA-R) approaches were investigated. The numerical simulation included permutations of the following parameters: result range ratios (upper:lower measurement limits), levels of significance (alpha), constant and proportional biases, analytical coefficients of variation (CV), and numbers of replicates and sample sizes. The simulated sample concentrations were drawn from a uniformly distributed concentration range. RESULTS: At a low range ratio (1:10, CV 3%), the BA-R approach performed best, albeit with a higher false rejection rate, closely followed by the weighted regression approaches. At larger range ratios (1:1,000, CV 3%), the BA-R approach performed poorly and the weighted regression approaches performed best. At higher assay imprecision (CV 10%), all six approaches performed poorly, with bias detection rates <50%. A lower alpha reduced the false rejection rate, while greater sample numbers and replicates improved bias detection. CONCLUSIONS: When performing reagent lot verification, laboratories need to finely balance the false rejection rate (by selecting an appropriate alpha) against the power of bias detection (an appropriate statistical approach matched to the assay performance characteristics) and operational considerations (the number of clinical samples and replicates, and not having an alternate reagent lot).
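The sketch below illustrates one of the simpler regression-based checks compared above: ordinary least squares regression of candidate-lot on existing-lot results, flagging proportional or constant bias when the confidence intervals for the slope and intercept exclude 1 and 0, respectively. The simulated data, sample size and bias are illustrative assumptions.

```python
# Sketch of a regression-based lot-to-lot check: measure paired samples on the
# existing and candidate reagent lot, regress new on old, and flag the new lot
# if the slope/intercept confidence intervals exclude 1 and 0.
import numpy as np
from scipy import stats

def verify_new_lot(old, new, alpha=0.05):
    old, new = np.asarray(old, float), np.asarray(new, float)
    res = stats.linregress(old, new)
    t = stats.t.ppf(1 - alpha / 2, len(old) - 2)
    slope_ci = (res.slope - t * res.stderr, res.slope + t * res.stderr)
    icpt_ci = (res.intercept - t * res.intercept_stderr,
               res.intercept + t * res.intercept_stderr)
    proportional_bias = not (slope_ci[0] <= 1.0 <= slope_ci[1])
    constant_bias = not (icpt_ci[0] <= 0.0 <= icpt_ci[1])
    return proportional_bias, constant_bias

rng = np.random.default_rng(5)
old = rng.uniform(1, 10, 20)                        # 20 paired patient samples, 1:10 range
new = 1.05 * old * (1 + rng.normal(0, 0.03, 20))    # candidate lot with +5 % proportional bias
prop, const = verify_new_lot(old, new)
print(f"proportional bias detected: {prop}, constant bias detected: {const}")
```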
Subject(s)
Laboratories, Bias, Computer Simulation, Humans, Indicators and Reagents
ABSTRACT
OBJECTIVES: Lipemia is the presence of abnormally high lipoprotein concentrations in serum or plasma samples that can interfere with laboratory testing. Little guidance is available from manufacturers or professional bodies on processing lipemic samples to produce clinically acceptable results. This systematic review summarizes the existing literature on the effectiveness of lipid removal techniques in reducing interference in clinical chemistry tests. METHODS: A PubMed search using terms relating to lipid removal from human samples for clinical chemistry tests produced 1,558 studies published between January 2010 and July 2021; 15 articles met the criteria for further analysis. RESULTS: A total of 66 analytes were investigated across the 15 studies, which showed highly heterogeneous study designs. High-speed centrifugation was consistently effective for 13 analytes: albumin, alkaline phosphatase (ALP), alanine aminotransferase (ALT), aspartate aminotransferase (AST), total bilirubin, creatine kinase (CK), creatinine (Jaffe method), gamma-glutamyl transferase (GGT), glucose (hexokinase-based method), lactate dehydrogenase (LDH), phosphate, potassium, and urea. Lipid-clearing agents were uniformly effective for seven analytes: ALT, AST, total bilirubin, CK, creatinine (Jaffe method), lipase, and urea. Mixed results were reported for the remaining analytes. CONCLUSIONS: For some analytes, high-speed centrifugation and/or lipid-clearing agents can be used in place of ultracentrifugation. Harmonized protocols and acceptability criteria are required to allow pooled data analysis and interpretation of different lipemic interference studies.