Results 1 - 20 of 1,567
1.
BMC Prim Care ; 25(1): 270, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39054449

ABSTRACT

BACKGROUND: Clinical laboratory testing, essential for medical diagnostics, represents a significant part of healthcare activity, influencing around 70% of critical clinical decisions. The automation of laboratory equipment has expanded test menus and increased efficiency to meet the growing demand for clinical testing. However, concerns about misutilization remain prevalent. In Belgium, primary care has seen a dramatic increase in lab test usage, but recent utilization data are lacking. METHODS: We conducted a comprehensive retrospective analysis of laboratory test utilization trends within the primary care settings of Belgium over a ten-year period (2012 to 2021), incorporating a dataset of 189 million test records for almost 1.5 million persons. This was the first study to integrate metadata from both the INTEGO and THIN databases, which are derived from the two major electronic medical record (EMR) systems used in primary care in Belgium, providing a comprehensive national perspective. The research yields crucial insights into patient-level patterns and test-level utilization, and offers international perspectives through comparative analysis. RESULTS: We found a subtle annual increase in the average number of laboratory tests per patient (approximately 0.5-1% per year), indicating a deceleration in the growth of laboratory test ordering compared with previous decades. We also observed that the most frequently ordered laboratory tests remained stable and consistent across diverse patient populations and healthcare contexts over the years. CONCLUSIONS: These findings emphasize the need for continued efforts to optimize test utilization, focusing not only on tackling overutilization but also on enhancing the diagnostic relevance of the tests ordered. The most frequently ordered tests should be prioritized in these initiatives to ensure their continued effectiveness in patient care. By consolidating extensive datasets, employing rigorous statistical analysis, and incorporating international perspectives, this study provides a solid foundation for evidence-based strategies aimed at refining laboratory test utilization practices. Such strategies can improve the quality of healthcare delivery while addressing cost-effectiveness concerns.
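The trend analysis described here reduces to a per-year aggregation over patient-level test records. A minimal sketch in Python, assuming a flat record extract with hypothetical patient_id and test_date columns (the INTEGO/THIN schemas are not shown in the abstract):

```python
# Sketch: annual trend in laboratory tests per patient from an EMR extract.
# File name and column names are hypothetical.
import pandas as pd

records = pd.read_csv("lab_test_records.csv", parse_dates=["test_date"])
records["year"] = records["test_date"].dt.year

# Tests per patient per year, then the mean across patients for each year.
per_patient = (
    records.groupby(["year", "patient_id"])
    .size()
    .groupby("year")
    .mean()
    .rename("mean_tests_per_patient")
)

# Year-over-year percentage change, comparable to the ~0.5-1% growth reported.
print(per_patient.pct_change().mul(100).round(2))
```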


Subject(s)
Primary Health Care , Belgium , Humans , Primary Health Care/statistics & numerical data , Primary Health Care/trends , Retrospective Studies , Electronic Health Records/trends , Electronic Health Records/statistics & numerical data , Clinical Laboratory Techniques/trends , Clinical Laboratory Techniques/statistics & numerical data , Female , Male , Middle Aged , Adult , Aged
2.
Saudi Med J ; 45(4): 356-361, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38657991

ABSTRACT

OBJECTIVES: To assess the rate of inappropriate repetition of laboratory testing and estimate its cost for thyroid stimulating hormone (TSH), total cholesterol, vitamin D, and vitamin B12 tests. METHODS: A retrospective cohort study was carried out in the Family Medicine and Polyclinic Department at King Faisal Specialist Hospital and Research Center, Riyadh, Saudi Arabia. Clinical and laboratory data were collected between 2018 and 2021 for the 4 laboratory tests. Inappropriate repetition was defined according to international guidelines, and costs were calculated using hospital prices. RESULTS: A total of 109,929 laboratory tests carried out on 23,280 patients were included in this study. The percentage of inappropriate tests, as per the study criteria, was estimated at 6.1% of all repeated tests. The estimated total cost wasted amounted to 2,364,410 Saudi Riyals. Age exhibited a weak positive correlation with the total number of inappropriate tests (r=0.196, p=0.001). Furthermore, significant differences were observed in the medians of the total number of inappropriate tests across genders and nationalities (p<0.001). CONCLUSION: The study identified high rates of inappropriate repetition of frequently requested laboratory tests. Urgent action is therefore crucial to address this issue.
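Flagging inappropriate repeats of this kind is typically done by comparing the gap between successive orders of the same test against a minimum retest interval. A sketch under that assumption; the interval values and column names are illustrative, not the study's actual criteria:

```python
# Sketch: flag repeat orders that fall inside a minimum retest interval.
# Intervals below are hypothetical placeholders, not guideline values.
import pandas as pd

MIN_RETEST_DAYS = {"TSH": 365, "Total cholesterol": 365,
                   "Vitamin D": 90, "Vitamin B12": 365}

df = pd.read_csv("lab_orders.csv", parse_dates=["order_date"])
df = df.sort_values(["patient_id", "test_name", "order_date"])

# Days since the same patient's previous order of the same test.
df["days_since_prev"] = (
    df.groupby(["patient_id", "test_name"])["order_date"].diff().dt.days
)
df["inappropriate"] = df["days_since_prev"] < df["test_name"].map(MIN_RETEST_DAYS)

# Proportion of all orders flagged (first orders compare as False).
print(df["inappropriate"].mean())
```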


Subject(s)
Tertiary Healthcare , Humans , Retrospective Studies , Female , Saudi Arabia , Male , Middle Aged , Adult , Tertiary Healthcare/statistics & numerical data , Unnecessary Procedures/statistics & numerical data , Unnecessary Procedures/economics , Ambulatory Care/statistics & numerical data , Ambulatory Care/economics , Thyrotropin/blood , Aged , Young Adult , Cholesterol/blood , Vitamin B 12/blood , Vitamin D/blood , Cohort Studies , Clinical Laboratory Techniques/economics , Clinical Laboratory Techniques/statistics & numerical data , Adolescent , Value-Based Health Care
3.
J Appl Lab Med ; 9(4): 776-788, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38642405

ABSTRACT

BACKGROUND: This paper presents a data-driven strategy for establishing the reportable interval in clinical laboratory testing. The reportable interval defines the range of laboratory result values beyond which reporting should be withheld. The lack of clear guidelines and methodology for determining the reportable interval has led to potential errors in reporting and patient risk. METHODS: To address this gap, the study developed an integrated strategy that combines statistical analysis, expert review, and hypothetical outlier calculations. A large data set from an accredited clinical laboratory was utilized, analyzing over 124 million laboratory test records from 916 distinct tests. The Dixon test was applied to identify outliers and establish the highest and lowest non-outlier result values for each test, which were validated by clinical pathology experts. The methodology also included matching the reportable intervals with relevant Logical Observation Identifiers Names and Codes (LOINC) and Unified Code for Units of Measure (UCUM)-valid units for broader applicability. RESULTS: Upon establishing the reportable interval for 135 routine laboratory tests (493 LOINC codes), we applied these to a primary care laboratory data set of 23 million records, demonstrating their efficacy with over 1% of result records identified as implausible. CONCLUSIONS: We developed and tested a data-driven strategy for establishing reportable intervals utilizing large electronic medical record (EMR) data sets. Implementing the established interval in clinical laboratory settings can improve autoverification systems, enhance data reliability, and reduce errors in patient care. Ongoing refinement and reporting of cases exceeding the reportable limits will contribute to continuous improvement in laboratory result management and patient safety.
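The abstract's core step applies the Dixon test to the extremes of each test's result distribution before expert review. A simplified sketch of that idea; the fixed q_crit threshold is a placeholder (real Dixon critical values depend on sample size), and the study additionally required expert validation:

```python
# Simplified sketch of a Dixon-style screen on the extremes of a result
# distribution, yielding a candidate reportable interval.
import numpy as np

def dixon_trim(values, q_crit=0.1):
    """Iteratively drop extreme values whose Dixon gap ratio exceeds q_crit."""
    x = np.sort(np.unique(values)).astype(float)
    while len(x) > 3:
        span = x[-1] - x[0]
        q_low = (x[1] - x[0]) / span     # relative gap at the low end
        q_high = (x[-1] - x[-2]) / span  # relative gap at the high end
        if q_low >= q_high and q_low > q_crit:
            x = x[1:]
        elif q_high > q_crit:
            x = x[:-1]
        else:
            break
    return x[0], x[-1]  # candidate interval, pending expert review

low, high = dixon_trim(np.random.lognormal(1.5, 0.6, 10_000))
print(low, high)
```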


Subject(s)
Electronic Health Records , Humans , Electronic Health Records/statistics & numerical data , Retrospective Studies , Clinical Laboratory Techniques/standards , Clinical Laboratory Techniques/statistics & numerical data , Clinical Laboratory Techniques/methods , Laboratories, Clinical/statistics & numerical data , Diagnostic Tests, Routine/standards , Diagnostic Tests, Routine/statistics & numerical data , Diagnostic Tests, Routine/methods , Logical Observation Identifiers Names and Codes
4.
Mayo Clin Proc ; 96(12): 3030-3041, 2021 12.
Article in English | MEDLINE | ID: mdl-34863394

ABSTRACT

OBJECTIVE: To evaluate the clinical characteristics of patients admitted to the hospital with coronavirus disease 2019 (COVID-19) in the Southern United States, and to develop and validate a mortality risk prediction model. PATIENTS AND METHODS: Southern Louisiana was an early hotspot during the pandemic, which provided a large collection of clinical data on inpatients with COVID-19. We designed a risk stratification model to assess mortality risk for patients admitted to the hospital with COVID-19. Data from 1673 consecutive patients diagnosed with COVID-19 and hospitalized between March 1, 2020, and April 30, 2020, were used to create an 11-factor mortality risk model based on baseline comorbidity, organ injury, and laboratory results. The risk model was validated using a subsequent cohort of 2067 consecutive patients hospitalized between June 1, 2020, and December 31, 2020. RESULTS: The resultant model has an area under the curve of 0.783 (95% CI, 0.76 to 0.81), with an optimal sensitivity of 0.74 and specificity of 0.69 for predicting mortality. Validation in the subsequent cohort of 2067 consecutively hospitalized patients yielded comparable prognostic performance. CONCLUSION: We have developed an easy-to-use, robust model for systematically evaluating patients presenting to acute care settings with COVID-19.
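A model of this shape is commonly fit as a logistic regression over the candidate factors and then checked on the later cohort by AUC. A hedged sketch; the feature names, file names, and outcome column are hypothetical stand-ins, not the study's actual 11 factors:

```python
# Sketch: derive a multi-factor mortality model on one cohort and validate
# on a later one. All names below are illustrative.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

FEATURES = ["age", "creatinine", "crp", "d_dimer"]  # placeholder factors

train = pd.read_csv("cohort_2020_spring.csv")   # derivation cohort
valid = pd.read_csv("cohort_2020_autumn.csv")   # validation cohort

model = LogisticRegression(max_iter=1000)
model.fit(train[FEATURES], train["died"])

for name, df in [("derivation", train), ("validation", valid)]:
    auc = roc_auc_score(df["died"], model.predict_proba(df[FEATURES])[:, 1])
    print(f"{name} AUC = {auc:.3f}")
```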


Subject(s)
COVID-19 , Hospitalization/statistics & numerical data , Proportional Hazards Models , Risk Assessment/methods , COVID-19/mortality , COVID-19/prevention & control , COVID-19/therapy , Clinical Laboratory Techniques/methods , Clinical Laboratory Techniques/statistics & numerical data , Comorbidity , Epidemiological Models , Female , Hospital Mortality , Humans , Louisiana/epidemiology , Male , Middle Aged , Organ Dysfunction Scores , Prognosis , Reproducibility of Results , Risk Factors , Severity of Illness Index
5.
Investig Clin Urol ; 62(6): 672-680, 2021 11.
Article in English | MEDLINE | ID: mdl-34729967

ABSTRACT

PURPOSE: This study aimed to test whether a portable smartphone-based app-assisted semen analysis (SA) system, O'VIEW-M PRO®, is clinically accurate in comparison with laboratory-based conventional semen analyses, including manual microscopic analysis and computer-assisted semen analysis (CASA), for self-evaluation of seminal parameters. MATERIALS AND METHODS: From January to May 2021, a total of 39 semen samples were analyzed for sperm concentration and motility with the new smartphone-based app-assisted semen analyzer, O'VIEW-M PRO®, and the results were compared with those from laboratory-based manual microscopic SA with a Makler counting chamber and from CASA. RESULTS: The correlation coefficients between the O'VIEW-M PRO® and the Makler chamber and laboratory-based CASA measurements were 0.666 and 0.655 for sperm density, and 0.662 and 0.658 for sperm motility, respectively. No particular problems arose with clinical use of the O'VIEW-M PRO®. Device performance was assessed by classifying samples as positive (<15×10⁶ sperm/mL) or negative (≥15×10⁶ sperm/mL) against the sperm concentration criterion, and positive (<40%) or negative (≥40%) against the sperm motility criterion. The smartphone-based app-assisted SA system O'VIEW-M PRO® showed a sensitivity of 92.6%, a specificity of 66.7%, and an overall accuracy of 84.6%. CONCLUSIONS: This study presents a novel smartphone-based app-assisted SA system. The O'VIEW-M PRO® allows users to easily obtain semen parameter information through self-testing at home, which may prompt infertile men to seek treatment and help monitor patients before and after treatment.
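The reported sensitivity, specificity, and accuracy follow from a 2×2 comparison of the app's calls against the laboratory reference. A minimal sketch with made-up labels (1 = below the reference limit, i.e., "positive"):

```python
# Sketch: confusion-matrix metrics for app calls vs. laboratory reference.
# Labels are illustrative, not study data.
import numpy as np

reference = np.array([1, 1, 0, 1, 0, 0, 1, 1])   # lab-based classification
app_call  = np.array([1, 1, 0, 0, 1, 0, 1, 1])   # app-based classification

tp = ((app_call == 1) & (reference == 1)).sum()
tn = ((app_call == 0) & (reference == 0)).sum()
fp = ((app_call == 1) & (reference == 0)).sum()
fn = ((app_call == 0) & (reference == 1)).sum()

print("sensitivity", tp / (tp + fn))
print("specificity", tn / (tn + fp))
print("accuracy", (tp + tn) / len(reference))
```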


Subject(s)
Clinical Laboratory Techniques , Data Collection , Diagnostic Self Evaluation , Infertility, Male/diagnosis , Semen Analysis , Smartphone , Adult , Clinical Laboratory Techniques/methods , Clinical Laboratory Techniques/statistics & numerical data , Data Collection/methods , Data Collection/statistics & numerical data , Humans , Infertility, Male/etiology , Male , Mobile Applications , Reproducibility of Results , Semen Analysis/instrumentation , Semen Analysis/methods , Sperm Count/methods , Sperm Motility
6.
JAMA Netw Open ; 4(10): e2127827, 2021 10 01.
Article in English | MEDLINE | ID: mdl-34596670

ABSTRACT

Importance: Recognition of iron deficiency anemia (IDA) is important to initiate timely evaluation for gastrointestinal tract cancer. Retrospective studies have reported delays in diagnostic evaluation of IDA as a common factor associated with delayed diagnosis of colorectal cancer. Objective: To assess how US primary care physicians (PCPs) approach testing for anemia, interpret iron laboratory studies, and refer patients with IDA for gastrointestinal endoscopy. Design, Setting, and Participants: This survey study, conducted in August 2019, included members of the American College of Physicians Internal Medicine Insiders Panel, a nationally representative group of American College of Physicians membership, who self-identified as PCPs. Participants completed a vignette-based survey to assess practices related to screening for anemia, interpretation of laboratory-based iron studies, and appropriate diagnostic evaluation of IDA. Main Outcomes and Measures: Descriptive statistics based on survey responses were evaluated for frequency of anemia screening, correct interpretation of iron laboratory studies, and proportion of patients with new-onset IDA referred for gastrointestinal tract evaluation. Results: Of 631 PCPs who received an invitation to participate in the survey, 356 (56.4%) responded and 31 (4.9%) were excluded, for an adjusted eligible sample size of 600, yielding 325 completed surveys (response rate, 54.2%). Of the 325 participants who completed surveys, 180 (55.4%) were men; age of participants was not assessed. The mean (SD) duration of clinical experience was 19.8 (11.2) years (range, 1.0-45.0 years). A total of 250 participants (76.9%) screened at least some patients for anemia. Interpretation of iron studies was least accurate in a scenario of a borderline low ferritin level (40 ng/mL) with low transferrin saturation (2%); 86 participants (26.5%) incorrectly responded that this scenario did not indicate IDA, and 239 (73.5%) correctly identified this scenario as IDA. Of 312 participants, 170 (54.5%) recommended bidirectional endoscopy (upper endoscopy and colonoscopy) for new IDA for women aged 65 years; of 305 respondents, 168 (55.1%) recommended bidirectional endoscopy for men aged 65 years. Conclusions and Relevance: In this survey study, US PCPs' self-reported testing practices for anemia suggest overuse of screening laboratory tests, misinterpretation of iron studies, and underuse of bidirectional endoscopy for evaluation of new-onset IDA. Both misinterpretation of iron studies and underuse of bidirectional endoscopy can lead to delayed diagnosis of gastrointestinal tract cancers and warrant additional interventions.


Subject(s)
Anemia, Iron-Deficiency/diagnosis , Clinical Laboratory Techniques/methods , Physicians, Primary Care/standards , Adult , Clinical Laboratory Techniques/statistics & numerical data , Female , Humans , Male , Mass Screening/methods , Mass Screening/statistics & numerical data , Middle Aged , Physicians, Primary Care/statistics & numerical data , Retrospective Studies , Surveys and Questionnaires , United States
7.
BMJ Health Care Inform ; 28(1)2021 Oct.
Article in English | MEDLINE | ID: mdl-34642176

ABSTRACT

BACKGROUND: Despite wide usage across all areas of medicine, it is uncertain how useful standard reference ranges of laboratory values are for critically ill patients. OBJECTIVES: The aim of this study is to assess the distributions of standard laboratory measurements in more than 330 selected intensive care units (ICUs) across the USA, Amsterdam, Beijing and Tarragona; compare differences and similarities across different geographical locations and evaluate how they may be associated with differences in length of stay (LOS) and mortality in the ICU. METHODS: A multi-centre, retrospective, cross-sectional study of data from five databases for adult patients first admitted to an ICU between 2001 and 2019 was conducted. The included databases contained patient-level data regarding demographics, interventions, clinical outcomes and laboratory results. Kernel density estimation functions were applied to the distributions of laboratory tests, and the overlapping coefficient and Cohen standardised mean difference were used to quantify differences in these distributions. RESULTS: The 259 382 patients studied across five databases in four countries showed a high degree of heterogeneity with regard to demographics, case mix, interventions and outcomes. A high level of divergence in the studied laboratory results (creatinine, haemoglobin, lactate, sodium) from the locally used reference ranges was observed, even when stratified by outcome. CONCLUSION: Standardised reference ranges have limited relevance to ICU patients across a range of geographies. The development of context-specific reference ranges, especially as it relates to clinical outcomes like LOS and mortality, may be more useful to clinicians.
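The two distribution-comparison metrics named here, the overlapping coefficient and Cohen's standardized mean difference, are straightforward to compute from kernel density estimates. A sketch on synthetic data standing in for two sites' laboratory results:

```python
# Sketch: overlap of two sites' result distributions via Gaussian KDEs,
# plus Cohen's d. Data below are synthetic placeholders.
import numpy as np
from scipy.stats import gaussian_kde

def overlap_coefficient(a, b, grid_size=512):
    grid = np.linspace(min(a.min(), b.min()), max(a.max(), b.max()), grid_size)
    da, db = gaussian_kde(a)(grid), gaussian_kde(b)(grid)
    return np.trapz(np.minimum(da, db), grid)  # 1.0 = identical distributions

def cohen_d(a, b):
    pooled = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
    return (a.mean() - b.mean()) / pooled

site_a = np.random.normal(1.0, 0.3, 5000)   # e.g., creatinine at site A
site_b = np.random.normal(1.2, 0.4, 5000)   # e.g., creatinine at site B
print(overlap_coefficient(site_a, site_b), cohen_d(site_a, site_b))
```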


Subject(s)
Clinical Laboratory Techniques , Critical Illness , Outcome Assessment, Health Care , Adult , Asia , Clinical Laboratory Techniques/statistics & numerical data , Cross-Sectional Studies , Europe , Humans , North America , Outcome Assessment, Health Care/methods , Reference Values , Retrospective Studies
8.
JAMA Intern Med ; 181(11): 1490-1500, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34570170

ABSTRACT

Importance: Low-value health care remains prevalent in the US despite decades of work to measure and reduce such care. Efforts have been only modestly effective in part because the measurement of low-value care has largely been restricted to the national or regional level, limiting actionability. Objectives: To measure and report low-value care use across and within individual health systems and identify system characteristics associated with higher use using Medicare administrative data. Design, Setting, and Participants: This retrospective cohort study of health system-attributed Medicare beneficiaries was conducted among 556 health systems in the Agency for Healthcare Research and Quality Compendium of US Health Systems and included system-attributed beneficiaries who were older than 65 years, continuously enrolled in Medicare Parts A and B for at least 12 months in 2016 or 2017, and eligible for specific low-value services. Statistical analysis was conducted from January 26 to July 15, 2021. Main Outcomes and Measures: Use of 41 individual low-value services and a composite measure of the 28 most common services among system-attributed beneficiaries, standardized to distance from the mean value. Measures were based on the Milliman MedInsight Health Waste Calculator and published claims-based definitions. Results: Across 556 health systems serving a total of 11 637 763 beneficiaries, the mean (SD) use of each of the 41 low-value services ranged from 0% (0.01%) to 28% (4%) of eligible beneficiaries. The most common low-value services were preoperative laboratory testing (mean [SD] rate, 28% [4%] of eligible beneficiaries), prostate-specific antigen testing in men older than 70 years (mean [SD] rate, 27% [8%]), and use of antipsychotic medications in patients with dementia (mean [SD] rate, 24% [8%]). In multivariable analysis, the health system characteristics associated with higher use of low-value care were smaller proportion of primary care physicians (adjusted composite score, 0.15 [95% CI, 0.04-0.26] for systems with less than the median percentage of primary care physicians vs -0.16 [95% CI, -0.27 to -0.05] for those with more than the median percentage of primary care physicians; P < .001), no major teaching hospital (adjusted composite, 0.10 [95% CI, -0.01 to 0.20] without a teaching hospital vs -0.18 [95% CI, -0.34 to -0.02] with a teaching hospital; P = .01), larger proportion of non-White patients (adjusted composite, 0.15 [95% CI, -0.02 to 0.32] for systems with >20% of non-White beneficiaries vs -0.06 [95% CI, -0.16 to 0.03] for systems with ≤20% of non-White beneficiaries; P = .04), headquartered in the South or West (adjusted composite, 0.28 [95% CI, 0.14-0.43] for the South and 0.22 [95% CI, 0.02-0.42] for the West compared with -0.09 [95% CI, -0.26 to 0.08] for the Northeast and -0.44 [95% CI, -0.60 to -0.28] for the Midwest; P < .001), and serving areas with more health care spending (adjusted composite, 0.23 [95% CI, 0.11-0.35] for areas above the median level of spending vs -0.24 [95% CI, -0.36 to -0.12] for areas below the median level of spending; P < .001). Conclusions and Relevance: The findings of this large cohort study suggest that system-level measurement and reporting of specific low-value services is feasible, enables cross-system comparisons, and reveals a broad range of low-value care use.
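The composite measure "standardized to distance from the mean" is, in effect, a mean of per-service z-scores. A small sketch assuming a hypothetical table of per-system service rates:

```python
# Sketch: composite low-value-care score as the mean of per-service z-scores.
# File and column layout are assumed, not from the study.
import pandas as pd

rates = pd.read_csv("system_service_rates.csv", index_col="system_id")
# rates: one column per low-value service, values = % of eligible beneficiaries

z = (rates - rates.mean()) / rates.std(ddof=0)   # per-service z-scores
composite = z.mean(axis=1).rename("composite_score")
print(composite.sort_values(ascending=False).head())
```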


Subject(s)
Low-Value Care , Patient Acceptance of Health Care/statistics & numerical data , Primary Health Care , Aged , Antipsychotic Agents/therapeutic use , Clinical Laboratory Techniques/methods , Clinical Laboratory Techniques/statistics & numerical data , Dementia/drug therapy , Health Expenditures , Humans , Medical Assistance , Medicare/statistics & numerical data , Preoperative Care/methods , Primary Health Care/economics , Primary Health Care/methods , Prostate-Specific Antigen/analysis , United States
9.
Parasit Vectors ; 14(1): 439, 2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34465379

ABSTRACT

BACKGROUND: Companion animal endoparasites play a substantial role in both veterinary medicine and public health. Updated epidemiological studies are necessary to identify trends in the occurrence and distribution of these parasites and their associated risk factors. This study aimed to retrospectively assess the occurrence of canine endoparasites using fecal flotation test data available through participating academic veterinary parasitology diagnostic laboratories across the United States of America (USA). METHODS: Canine fecal flotation records from ten veterinary diagnostic laboratories located in nine states in the USA, acquired from January 1, 2018, to December 31, 2018, were included. RESULTS: A total of 4692 fecal flotation test results were obtained, the majority from client-owned dogs (3262; 69.52%), followed by research dogs (375; 8.00%) and shelter dogs (122; 2.60%). Samples from 976 (20.80%) dogs were positive for at least one parasite, and co-infections of two or more parasites were found in 3.82% (179/4692) of the samples. The five most commonly detected parasites were: Giardia sp. (8.33%; 391/4692), Ancylostomatidae (5.63%; 264/4692), Cystoisospora spp. (4.35%; 204/4692), Toxocara canis (2.49%; 117/4692), and Trichuris vulpis (2.43%; 114/4692). Various other internal parasites, including gastrointestinal and respiratory nematodes, cestodes, trematodes, and protozoans, were detected in less than 1% of samples. CONCLUSIONS: These data illustrate the importance of parasite prevention, routine fecal screening, and treatment of pet dogs. Additionally, pet owners should be educated about general parasite prevalence, prevention, and anthelmintic treatment regimens to reduce the risks of environmental contamination and zoonotic transmission.


Subject(s)
Clinical Laboratory Techniques/methods , Clinical Laboratory Techniques/veterinary , Dog Diseases/diagnosis , Feces/parasitology , Intestinal Diseases, Parasitic/diagnosis , Parasites/isolation & purification , Animals , Clinical Laboratory Techniques/statistics & numerical data , Dog Diseases/parasitology , Dogs , Female , Intestinal Diseases, Parasitic/epidemiology , Male , Parasites/classification , Parasites/genetics , Retrospective Studies , United States/epidemiology
10.
Sex Transm Infect ; 97(7): 507-513, 2021 11.
Article in English | MEDLINE | ID: mdl-34413201

ABSTRACT

BACKGROUND: Due to rising numbers of STI diagnoses and increasing prevalence of antimicrobial resistance, we explored trends in STI testing frequency and diagnoses, alongside sexual decision making and attitudes concerning condom use and HIV pre-exposure prophylaxis (PrEP), at a large urban UK sexual health clinic. METHODS: We examined 66 528 electronic patient records covering 40 321 attendees between 2016 and 2019, 3977 of whom were men who have sex with men or trans persons who have sex with men (MSM/TPSM). We also explored responses from MSM/TPSM attendees sent an electronic questionnaire between November 2018 and 2019 (n=1975) examining behaviours/attitudes towards PrEP. We measured trends in STI diagnoses and sexual behaviours, including condomless anal intercourse (CAI), using linear and logistic regression analyses. RESULTS: Tests resulting in gonorrhoea, chlamydia or syphilis diagnoses increased among MSM/TPSM from 13.5% to 18.5% between 2016 and 2019 (p<0.001). The average MSM/TPSM STI testing frequency increased from 1.5/person/year to 2.1/person/year (p=0.017). Gay MSM/TPSM had the highest proportion of attendances resulting in diagnoses, increasing from 15.1% to 19.6% between 2016 and 2019 (p<0.001), compared with bisexual/other MSM/TPSM increasing from 6.9% to 14.5% (p<0.001), alongside smaller but significant increases in non-MSM/TPSM from 5.9% to 7.7% (p<0.001). The proportion of MSM/TPSM clinic attendees reporting CAI in the 3 months prior to at least one appointment in a given year increased significantly from 40.6% to 45.5% between 2016 and 2019 (p<0.0001), and the average number of partners from 3.8 to 4.5 (p=0.002). Of 617 eligible questionnaire responses, 339/578 (58.7%) HIV-negative and 29/39 (74.4%) HIV-positive MSM/TPSM indicated they would be more likely to have CAI with someone on PrEP versus not on PrEP. 358/578 (61.9%) HIV-negative respondents said that PrEP use would make them more likely to have CAI with HIV-negative partners. CONCLUSION: Rising numbers of STI diagnoses among MSM/TPSM are not attributable to increased testing alone. Increased CAI and numbers of partners may be attributable to evolving sexual decision making among PrEP users and their partners. Proportionally, bisexual/other MSM/TPSM showed the steepest increase in STI diagnoses.


Subject(s)
Clinical Laboratory Techniques/trends , Homosexuality, Male/statistics & numerical data , Pre-Exposure Prophylaxis , Sexual Behavior/statistics & numerical data , Sexually Transmitted Diseases/diagnosis , Sexually Transmitted Diseases/microbiology , Transgender Persons/statistics & numerical data , Adult , Attitude to Health , Chlamydia Infections/diagnosis , Chlamydia Infections/prevention & control , Clinical Laboratory Techniques/statistics & numerical data , Gonorrhea/diagnosis , Gonorrhea/prevention & control , Humans , Male , Middle Aged , Safe Sex/statistics & numerical data , Sexually Transmitted Diseases/prevention & control , Surveys and Questionnaires , Syphilis/diagnosis , Syphilis/prevention & control , Unsafe Sex/statistics & numerical data , Young Adult
11.
South Med J ; 114(7): 401-403, 2021 07.
Article in English | MEDLINE | ID: mdl-34215891

ABSTRACT

OBJECTIVES: The American Society of Hematology's 4T scoring system is a validated tool to assess a patient's probability of having heparin-induced thrombocytopenia (HIT) before testing is performed. There is no benefit to testing patients with a low probability 4T score for HIT. This study aimed to assess for inappropriate HIT testing at our institution based on 4T scoring. METHODS: We retrospectively reviewed 201 patient charts and calculated 4T scores and testing costs to assess for inappropriate testing and the economic impact of such testing. RESULTS: HIT testing often occurred in the least appropriate patients and resulted in tens of thousands of dollars of waste for unnecessary testing. CONCLUSIONS: Inappropriate testing for HIT is still a prevalent issue despite literature supporting the 4T score for guidance in testing appropriateness.


Subject(s)
Cost-Benefit Analysis/classification , Heparin/adverse effects , Overtreatment/economics , Thrombocytopenia/etiology , Adult , Aged , Anticoagulants/adverse effects , Anticoagulants/therapeutic use , Clinical Laboratory Techniques/economics , Clinical Laboratory Techniques/standards , Clinical Laboratory Techniques/statistics & numerical data , Cost-Benefit Analysis/methods , Female , Heparin/therapeutic use , Humans , Male , Middle Aged , Overtreatment/prevention & control , ROC Curve , Retrospective Studies
12.
J Epidemiol Glob Health ; 11(2): 208-215, 2021 06.
Article in English | MEDLINE | ID: mdl-33969948

ABSTRACT

INTRODUCTION: Influenza infection poses a significant public health threat. The core of disease prevention and control relies on strengthened surveillance activities, particularly in Saudi Arabia, the country that hosts the largest annual mass gathering event worldwide. This study aimed to assess the molecular and seasonal pattern of influenza virus subtypes in western Saudi Arabia to inform policy decisions on influenza vaccines. METHODS: This cross-sectional study was conducted at King Abdulaziz Medical City, western Saudi Arabia. Medical records and the surveillance database of laboratory-confirmed influenza cases were reviewed from October 2015 to 2019. A panel of real-time polymerase chain reactions was performed to detect influenza A and B. Extracted RNA from a subset of positive samples was used to determine influenza A subtypes and influenza B lineages. RESULTS: This study included a total of 1928 patients with laboratory-confirmed influenza infections. Influenza peaked in October each season, with varying predominant strains. Influenza virus subtypes co-circulated, with no reports of co-infection. Influenza A(H3N2) was reported in 42% of the cases, followed by influenza B (30.7%) and influenza A(H1N1)pdm09 (27.3%). Healthcare workers represented 9.4% of the cases. One-third of the cases (30.4%) were admitted to the hospital, with a median admission duration of 4 days. The influenza B viruses were subtyped in 218 cases. The Victoria lineage was predominant (64.1%) in 2015 and 2016; however, Yamagata was predominant in the next two consecutive seasons (94.4% and 85.4%, respectively). CONCLUSION: The burden of influenza B may be underestimated given the observed vaccine mismatch. A quadrivalent influenza vaccine is recommended to reduce the health impact associated with influenza B infections. Molecular surveillance of influenza viruses should be enhanced continuously for a better understanding of influenza activity and assessment of vaccine effectiveness.


Subject(s)
Influenza A Virus, H1N1 Subtype , Influenza A Virus, H3N2 Subtype , Influenza B virus , Influenza, Human , Adolescent , Adult , Child , Child, Preschool , Clinical Laboratory Techniques/statistics & numerical data , Cross-Sectional Studies , Female , Humans , Influenza A Virus, H1N1 Subtype/isolation & purification , Influenza A Virus, H3N2 Subtype/isolation & purification , Influenza B virus/isolation & purification , Influenza, Human/diagnosis , Influenza, Human/epidemiology , Influenza, Human/virology , Male , Middle Aged , Saudi Arabia/epidemiology , Seasons , Tertiary Care Centers , Young Adult
13.
PLoS One ; 16(5): e0250901, 2021.
Article in English | MEDLINE | ID: mdl-34038430

ABSTRACT

BACKGROUND: Despite national guidelines promoting hepatitis C virus (HCV) testing in prisons, there is substantial heterogeneity in the implementation of HCV testing in jails. We sought to better understand barriers and opportunities for HCV testing by interviewing a broad group of stakeholders involved in HCV testing and treatment policies and procedures in Massachusetts jails. METHODS: We conducted semi-structured interviews with people incarcerated in Middlesex County Jail (North Billerica, MA), clinicians working in jail and community settings, corrections administrators, and representatives from public health, government, and industry between November 2018 and April 2019. RESULTS: 51/120 (42%) of the people approached agreed to be interviewed, including 21 incarcerated men (mean age 32 [IQR 25, 39]; 60% non-White). Themes that emerged from these interviews included gaps in knowledge about HCV testing and treatment opportunities in jail, the impact of captivity and transience, and interest in improving linkage to HCV care after release. Many stakeholders discussed stigma around HCV infection as a factor in reluctance to provide HCV testing or treatment in the jail setting. Some stakeholders expressed that stigma often led decision makers to estimate a lower "worth" of incarcerated individuals living with HCV and therefore to decide against paying for HCV testing. CONCLUSION: All stakeholders agreed that HCV in the jail setting is a public health issue that needs to be addressed. Exploring stakeholders' many ideas about how HCV testing and treatment can be approached is the first step in developing feasible and acceptable strategies.


Subject(s)
Hepatitis C/diagnosis , Jails/statistics & numerical data , Prisoners/psychology , Prisoners/statistics & numerical data , Prisons/statistics & numerical data , Adult , Clinical Laboratory Techniques/statistics & numerical data , Female , Hepatitis C/virology , Humans , Male , Massachusetts , Public Health/statistics & numerical data , Social Stigma , Surveys and Questionnaires
14.
Crit Care Med ; 49(10): 1651-1663, 2021 10 01.
Article in English | MEDLINE | ID: mdl-33938716

ABSTRACT

OBJECTIVES: Host gene expression signatures discriminate bacterial and viral infection but have not been translated to a clinical test platform. This study enrolled an independent cohort of patients to describe and validate a first-in-class host response bacterial/viral test. DESIGN: Subjects were recruited from 2006 to 2016. Enrollment blood samples were collected in an RNA preservative and banked for later testing. The reference standard was an expert panel clinical adjudication, which was blinded to gene expression and procalcitonin results. SETTING: Four U.S. emergency departments. PATIENTS: Six-hundred twenty-three subjects with acute respiratory illness or suspected sepsis. INTERVENTIONS: Forty-five-transcript signature measured on the BioFire FilmArray System (BioFire Diagnostics, Salt Lake City, UT) in ~45 minutes. MEASUREMENTS AND MAIN RESULTS: Host response bacterial/viral test performance characteristics were evaluated in 623 participants (mean age 46 yr; 45% male) with bacterial infection, viral infection, coinfection, or noninfectious illness. Performance of the host response bacterial/viral test was compared with procalcitonin. The test provided independent probabilities of bacterial and viral infection in ~45 minutes. In the 213-subject training cohort, the host response bacterial/viral test had an area under the curve for bacterial infection of 0.90 (95% CI, 0.84-0.94) and 0.92 (95% CI, 0.87-0.95) for viral infection. Independent validation in 209 subjects revealed similar performance with an area under the curve of 0.85 (95% CI, 0.78-0.90) for bacterial infection and 0.91 (95% CI, 0.85-0.94) for viral infection. The test had 80.1% (95% CI, 73.7-85.4%) average weighted accuracy for bacterial infection and 86.8% (95% CI, 81.8-90.8%) for viral infection in this validation cohort. This was significantly better than 68.7% (95% CI, 62.4-75.4%) observed for procalcitonin (p < 0.001). An additional cohort of 201 subjects with indeterminate phenotypes (coinfection or microbiology-negative infections) revealed similar performance. CONCLUSIONS: The host response bacterial/viral measured using the BioFire System rapidly and accurately discriminated bacterial and viral infection better than procalcitonin, which can help support more appropriate antibiotic use.


Subject(s)
Bacterial Infections/diagnosis , Clinical Laboratory Techniques/standards , Transcriptome , Virus Diseases/diagnosis , Adult , Bacterial Infections/genetics , Biomarkers/analysis , Biomarkers/blood , Clinical Laboratory Techniques/methods , Clinical Laboratory Techniques/statistics & numerical data , Emergency Service, Hospital/organization & administration , Emergency Service, Hospital/statistics & numerical data , Female , Humans , Male , Middle Aged , Virus Diseases/genetics
15.
J Clin Epidemiol ; 136: 146-156, 2021 08.
Article in English | MEDLINE | ID: mdl-33864930

ABSTRACT

OBJECTIVES: This article provides GRADE guidance on how authors of evidence syntheses and health decision makers, including guideline developers, can rate the certainty across a body of evidence for comparative test accuracy questions. STUDY DESIGN AND SETTING: This guidance extends the previously published GRADE guidance for assessing the certainty of evidence for test accuracy to scenarios in which two or more index tests are compared. Through an iterative brainstorm-discussion-feedback process within the GRADE working group, we developed guidance accompanied by practical examples. RESULTS: Rating the certainty of evidence for comparative test accuracy shares many concepts and ideas with the existing GRADE guidance for test accuracy. Rating comparisons of test accuracy requires additional considerations, such as the selection of appropriate comparative study designs, additional criteria for judging risk of bias, and the consequences of using comparative measures of test accuracy. Distinct approaches to rating certainty are required for comparative test accuracy studies and between-study (indirect) comparisons. CONCLUSION: This GRADE guidance will support transparent assessment of the certainty of a body of comparative test accuracy evidence.


Subject(s)
Biomedical Research/standards , Clinical Laboratory Techniques/standards , Data Accuracy , GRADE Approach/standards , Guidelines as Topic , Publication Bias/statistics & numerical data , Research Design/standards , Clinical Laboratory Techniques/statistics & numerical data , Humans
16.
Am J Trop Med Hyg ; 104(6): 2108-2116, 2021 04 19.
Article in English | MEDLINE | ID: mdl-33872208

ABSTRACT

In 2006, Haiti committed to malaria elimination when transmission was thought to be low, but before robust national parasite prevalence estimates were available. In 2011, the first national population-based survey confirmed that the national malaria parasite prevalence was < 1%. In both 2014 and 2015, Haiti reported approximately 17,000 malaria cases identified passively at health facilities. To detect malaria transmission hotspots for targeting interventions, the National Malaria Control Program (NMCP) piloted an enhanced geographic information surveillance system in three departments with relatively high-, medium-, and low-transmission areas. From October 2014 to September 2015, NMCP staff abstracted health facility records of confirmed malaria cases from 59 health facilities and geo-located patients' households. Household locations were aggregated to 1-km2 grid cells to calculate cumulative incidence rates (CIRs) per 1,000 persons. Spatial clustering of CIRs was tested using Getis-Ord Gi* analysis. Space-time permutation models searched for clusters up to 6 km in distance using a 1-month malaria transmission window. Of the 2,462 confirmed cases identified from health facility records, 58% were geo-located. Getis-Ord Gi* analysis identified 43 1-km2 hotspots in coastal and inland areas that overlapped primarily with 13 space-time clusters (size: 0.26-2.97 km). This pilot demonstrates the feasibility of detecting malaria hotspots in resource-poor settings. More data from multiple years and serological household surveys are needed to assess completeness and hotspot stability. The NMCP can use these pilot methods and results to target foci investigations and malaria interventions more accurately.
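Getis-Ord Gi* hotspot detection of the kind piloted here is available in the PySAL stack. A sketch on synthetic grid cells; the k-nearest-neighbour weights and significance thresholds are illustrative choices, not the NMCP's exact configuration:

```python
# Sketch: Gi* hotspot screen over 1-km2 grid-cell incidence rates.
# Coordinates and incidence values are synthetic.
import numpy as np
from libpysal.weights import KNN
from esda.getisord import G_Local

coords = np.random.uniform(0, 50, size=(500, 2))     # cell centroids (km)
cir = np.random.poisson(3, size=500).astype(float)   # cumulative incidence

w = KNN.from_array(coords, k=8)   # neighbourhood: 8 nearest cells (assumed)
w.transform = "r"                 # row-standardised weights
gi = G_Local(cir, w, star=True)   # star=True includes the focal cell (Gi*)

hotspots = (gi.Zs > 1.96) & (gi.p_sim < 0.05)  # significant high-value cells
print(hotspots.sum(), "candidate hotspot cells")
```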


Subject(s)
Health Facilities , Malaria/epidemiology , Spatial Analysis , Adolescent , Adult , Aged , Aged, 80 and over , Child , Child, Preschool , Clinical Laboratory Techniques/statistics & numerical data , Haiti/epidemiology , Health Facilities/statistics & numerical data , Humans , Incidence , Infant , Malaria/diagnosis , Malaria/transmission , Middle Aged , Pilot Projects , Prevalence , Retrospective Studies , Young Adult
17.
Eur J Clin Microbiol Infect Dis ; 40(9): 1899-1907, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33837879

ABSTRACT

To explore the diagnostic value of galactomannan (GM) detection for non-immunocompromised critically ill patients with influenza-associated aspergillosis (IAA), we conducted this retrospective case-control study, exploring the sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and area under the receiver operating characteristic (ROC) curve (AUC) of serum and bronchoalveolar lavage fluid (BALF) GM tests under four detection strategies, at different detection time points and with different compound modes. In total, 90 patients were evaluated. The AUC values of the second serum GM test and the first and second BALF GM tests were significantly higher (0.839 (95% CI 0.716 to 0.963), P < 0.01; 0.904 (95% CI 0.820 to 0.988), P < 0.01; 0.827 (95% CI 0.694 to 0.961), P = 0.043) than that of the first serum GM test (0.548 (95% CI 0.377 to 0.718)). We found that at least one positive result on two consecutive serum GM tests (0.719 (95% CI 0.588 to 0.849)) performed best, compared with the first positive test (0.419 (95% CI 0.342 to 0.641), P < 0.01) and positive results on two consecutive tests (0.636 (95% CI 0.483 to 0.790), P = 0.014). However, there were no differences among these three detection strategies for BALF GM. The BALF GM test might have better diagnostic value for IAA in the ICU than the serum GM test. A possible cutoff value of 1.0 to 1.3 was set for GM from BALF specimens for IAA. A single serum GM test is not routinely recommended, but at least one positive result on two consecutive tests appeared to be useful.
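The compound decision rules compared here ("first positive", "at least one of two positive", "both positive") can be expressed as boolean combinations of two GM measurements. A sketch on simulated data, using the conventional 0.5 serum GM positivity cutoff as an assumed threshold:

```python
# Sketch: comparing compound positivity rules for two consecutive GM tests.
# Data are simulated; the 0.5 cutoff is an assumed conventional threshold.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 90
truth = rng.integers(0, 2, n)  # 1 = IAA by case adjudication (simulated)
gm1 = np.where(truth == 1, rng.normal(1.2, 0.6, n), rng.normal(0.3, 0.3, n))
gm2 = np.where(truth == 1, rng.normal(1.3, 0.6, n), rng.normal(0.3, 0.3, n))

CUTOFF = 0.5
first_pos = (gm1 >= CUTOFF).astype(int)                       # first test only
either_pos = ((gm1 >= CUTOFF) | (gm2 >= CUTOFF)).astype(int)  # >=1 of 2 positive
both_pos = ((gm1 >= CUTOFF) & (gm2 >= CUTOFF)).astype(int)    # 2 of 2 positive

for name, pred in [("first", first_pos), ("either", either_pos), ("both", both_pos)]:
    print(name, round(roc_auc_score(truth, pred), 3))
```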


Subject(s)
Aspergillosis/diagnosis , Bronchoalveolar Lavage Fluid/chemistry , Clinical Laboratory Techniques/statistics & numerical data , Galactose/analogs & derivatives , Influenza, Human/complications , Invasive Pulmonary Aspergillosis/diagnosis , Mannans/analysis , Adult , Aged , Case-Control Studies , Clinical Laboratory Techniques/methods , Clinical Laboratory Techniques/standards , Critical Illness , Female , Galactose/analysis , Humans , Influenza, Human/microbiology , Male , Middle Aged , Predictive Value of Tests , ROC Curve , Retrospective Studies , Seasons , Sensitivity and Specificity
18.
Sci Rep ; 11(1): 7567, 2021 04 07.
Article in English | MEDLINE | ID: mdl-33828178

ABSTRACT

The use of deep learning and machine learning (ML) in medical science is increasing, particularly with visual, audio, and language data. We aimed to build a new optimized ensemble model by blending a DNN (deep neural network) model with two ML models for disease prediction using laboratory test results. A total of 86 attributes (laboratory tests) were selected from the datasets based on value counts, clinical importance-related features, and missing values. We collected sample datasets on 5145 cases, comprising 326,686 laboratory test results. We investigated a total of 39 specific diseases based on International Classification of Diseases, 10th revision (ICD-10) codes. These datasets were used to construct light gradient boosting machine (LightGBM) and extreme gradient boosting (XGBoost) ML models and a DNN model using TensorFlow. The optimized ensemble model achieved an F1-score of 81% and a prediction accuracy of 92% for the five most common diseases. The deep learning and ML models showed differences in predictive power and disease classification patterns. We used a confusion matrix and analyzed feature importance using the SHAP value method. Our new ML model achieved highly efficient disease prediction through the classification of diseases. This study will be useful in the prediction and diagnosis of diseases.
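A blended ensemble of this kind usually averages the class-probability outputs of the constituent models (soft voting). A minimal sketch with random stand-in data; the layer sizes, epochs, and default hyperparameters are illustrative, not the study's tuned configuration:

```python
# Sketch: soft-voting blend of LightGBM, XGBoost, and a Keras DNN.
# X, y, and all hyperparameters are placeholders.
import numpy as np
import tensorflow as tf
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

X = np.random.rand(1000, 86)            # 86 laboratory-test features
y = np.random.randint(0, 5, 1000)       # 5 disease classes (stand-in)

lgbm = LGBMClassifier().fit(X, y)
xgb = XGBClassifier().fit(X, y)

dnn = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(86,)),
    tf.keras.layers.Dense(5, activation="softmax"),
])
dnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
dnn.fit(X, y, epochs=5, verbose=0)

# Average the three probability matrices, then take the argmax class.
blend = (lgbm.predict_proba(X) + xgb.predict_proba(X) + dnn.predict(X)) / 3
print(blend.argmax(axis=1)[:10])
```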


Subject(s)
Clinical Laboratory Techniques/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Machine Learning , Databases, Factual/statistics & numerical data , Deep Learning , Diagnosis, Computer-Assisted/statistics & numerical data , Disease/classification , Humans , Neural Networks, Computer , ROC Curve
19.
Lupus ; 30(5): 785-794, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33554715

ABSTRACT

BACKGROUND: Age at disease onset may modulate systemic lupus erythematosus (SLE), but its relation to cutaneous/extracutaneous manifestation remains understudied. OBJECTIVE: To compare the cutaneous, systemic features, laboratory characteristics, and disease severity between late- and adult-onset SLE patients. METHODS: Analyses of the cutaneous, systemic involvement, laboratory investigations, SLE disease activity index 2000 (SLEDAI-2K), and disease damage were performed to compare between groups. RESULTS: Of 1006 SLE patients, 740 and 226 had adult- (15-50 years) and late-onset (>50 years), respectively. Among 782 with cutaneous lupus erythematosus (CLE), acute CLE (ACLE) and chronic CLE (CCLE) were more common in the adult- and late-onset SLE, respectively (p = 0.001). Multivariable logistic regression analysis demonstrated that male patients and skin signs, including papulosquamous subacute CLE, discoid lupus erythematosus, and lupus profundus, were associated with late-onset SLE (all p < 0.05). Late-onset SLE had lower lupus-associated autoantibodies, and systemic involvement (all p < 0.05). ACLE, CCLE, mucosal lupus, alopecia, and non-specific lupus were related to higher disease activity in adult-onset SLE (all p < 0.001). There was no difference in the damage index between the two groups. CONCLUSIONS: Late-onset SLE had a distinct disease expression with male predominance, milder disease activity, and lower systemic involvement. Cutaneous manifestations may hold prognostic values for SLE.


Subject(s)
Lupus Erythematosus, Cutaneous/immunology , Lupus Erythematosus, Cutaneous/pathology , Lupus Erythematosus, Discoid/immunology , Lupus Erythematosus, Discoid/pathology , Lupus Erythematosus, Systemic/immunology , Lupus Erythematosus, Systemic/pathology , Acute Disease , Adult , Age of Onset , Aged , Alopecia/diagnosis , Alopecia/etiology , Alopecia/immunology , Autoantibodies/blood , Clinical Laboratory Techniques/statistics & numerical data , Clinical Laboratory Techniques/trends , Cross-Sectional Studies , Female , Humans , Lupus Erythematosus, Cutaneous/diagnosis , Lupus Erythematosus, Discoid/diagnosis , Lupus Erythematosus, Systemic/diagnosis , Male , Middle Aged , Prognosis , Retrospective Studies , Severity of Illness Index , Sex Factors , Thailand/epidemiology
20.
J Clin Lab Anal ; 35(3): e23699, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33458892

ABSTRACT

BACKGROUND: Various errors in the specimen collection procedure have been reported as primary causes of pre-analytical errors. The aim of this study was to monitor and assess the reasons for and frequencies of rejected samples in China. METHODS: A pre-analytical external quality assessment (EQA) scheme involving six quality indicators (QIs) was conducted from 2017 to 2019. The rejection rate was calculated for each QI. Differences in rejection rates over time were assessed by chi-square test. Furthermore, the 25th, 50th, and 75th percentiles of the results from all laboratories each year were calculated as the optimum, desirable, and minimum levels of performance specifications. RESULTS: In total, 423 laboratories submitted data continuously for six EQA rounds. The overall rejection rates were 0.2042%, 0.1709%, 0.1942%, 0.1689%, 0.1593%, and 0.1491%, respectively. The most common error was sample hemolysis (0.0514%-0.0635%), and the least common was sample not received (0.0008%-0.0014%). A significant reduction in percentages was observed for all QIs. For biochemistry and immunology, hemolysis accounted for more than half of the rejection causes, while for hematology, the primary cause shifted from incorrect fill level to sample clotting. The quality specifications improved over time, except at the optimum level. CONCLUSION: The significant reduction in sample rejection error rates we observed suggests that laboratories should pay more attention to standardized specimen collection. We also provide a benchmark for QI performance specifications to help laboratories increase awareness of the critical aspects in need of improvement actions.
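The chi-square comparison of rejection rates across EQA rounds can be reproduced from round-level counts. A sketch using the reported percentages; the per-round sample denominator is assumed, since the abstract does not state it:

```python
# Sketch: chi-square test for a change in rejection rate across EQA rounds.
# N is an assumed per-round denominator used only to back-calculate counts.
from scipy.stats import chi2_contingency

rates = [0.2042, 0.1709, 0.1942, 0.1689, 0.1593, 0.1491]  # % per round
N = 1_000_000  # hypothetical samples per round

# rows: EQA rounds; columns: [rejected, accepted]
table = [[round(r / 100 * N), N - round(r / 100 * N)] for r in rates]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")
```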


Subject(s)
Clinical Laboratory Techniques/standards , Specimen Handling/standards , China , Clinical Laboratory Techniques/statistics & numerical data , Hematologic Tests/standards , Hemolysis , Humans , Immunologic Tests/standards , Laboratories/standards , Laboratories/statistics & numerical data , Quality Control , Specimen Handling/statistics & numerical data