Results 1-20 of 121

1.
Malar J ; 23(1): 200, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38943203

ABSTRACT

BACKGROUND: Microscopic detection of malaria parasites is labour-intensive, time-consuming, and expertise-demanding. Moreover, the slide interpretation is highly dependent on the staining technique and the technician's expertise. Therefore, there is a growing interest in next-generation, fully- or semi-integrated microscopes that can improve slide preparation and examination. This study aimed to evaluate the clinical performance of miLab™ (Noul Inc., Republic of Korea), a fully-integrated automated microscopy device for the detection of malaria parasites in symptomatic patients at point-of-care in Sudan. METHODS: This was a prospective, case-control diagnostic accuracy study conducted in primary health care facilities in rural Khartoum, Sudan in 2020. According to the outcomes of routine on-site microscopy testing, 100 malaria-positive and 90 malaria-negative patients who presented at the health facility and were 5 years of age or older were enrolled consecutively. All consenting patients underwent miLab™ testing and received a negative or suspected result. For the primary analysis, the suspected results were regarded as positive (automated mode). For the secondary analysis, the operator reviewed the suspected results and categorized them as either negative or positive (corrected mode). Nested polymerase chain reaction (PCR) was used as the reference standard, and expert light microscopy as the comparator. RESULTS: Out of the 190 patients, malaria diagnosis was confirmed by PCR in 112 and excluded in 78. The sensitivity of miLab™ was 91.1% (95% confidence interval [CI] 84.2-95.6%) and the specificity was 66.7% (95% CI 55.1-67.7%) in the automated mode. The specificity increased to 96.2% (95% CI 89.6-99.2%) with operator intervention in the corrected mode. Concordance of miLab™ with expert microscopy was substantial (kappa 0.65 [95% CI 0.54-0.76]) in the automated mode, but almost perfect (kappa 0.97 [95% CI 0.95-0.99]) in the corrected mode. A mean difference of 0.359 was found in the Bland-Altman analysis of the agreement between expert microscopy and miLab™ for quantifying parasite counts. CONCLUSION: When used in a clinical context, miLab™ demonstrated high sensitivity but low specificity. Expert intervention was shown to be required to improve the device's specificity in its current version. miLab™ in the corrected mode performed similarly to expert microscopy. Before clinical application, more refinement is needed to ensure full workflow automation and eliminate human intervention. Trial registration: ClinicalTrials.gov NCT04558515.
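To make the headline accuracy figures easy to re-derive, here is a minimal Python sketch that recomputes sensitivity and specificity with Wilson score confidence intervals from approximate 2x2 counts back-calculated from the abstract (112 PCR-positive and 78 PCR-negative patients). The counts are reconstructions and the authors may have used a different interval method, so the intervals will differ slightly from those reported.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (default 95%)."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - half, centre + half

# Approximate 2x2 counts back-calculated from the abstract (PCR as reference):
# 112 PCR-positive patients at 91.1% sensitivity -> ~102 true positives;
# 78 PCR-negative patients at 66.7% specificity  -> ~52 true negatives.
tp, fn = 102, 10
tn, fp = 52, 26

sens_lo, sens_hi = wilson_ci(tp, tp + fn)
spec_lo, spec_hi = wilson_ci(tn, tn + fp)
print(f"Sensitivity {tp / (tp + fn):.1%} (95% CI {sens_lo:.1%}-{sens_hi:.1%})")
print(f"Specificity {tn / (tn + fp):.1%} (95% CI {spec_lo:.1%}-{spec_hi:.1%})")
```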


Subject(s)
Malaria, Microscopy, Point-of-Care Systems, Sensitivity and Specificity, Sudan, Microscopy/methods, Humans, Case-Control Studies, Prospective Studies, Female, Male, Child, Child, Preschool, Adult, Adolescent, Malaria/diagnosis, Young Adult, Middle Aged
2.
Clin Infect Dis ; 77(Suppl 2): S145-S155, 2023 07 25.
Article in English | MEDLINE | ID: mdl-37490745

ABSTRACT

BACKGROUND: Inappropriate antibiotic prescriptions are a known driver of antimicrobial resistance in settings with limited diagnostic capacity. This study aimed to assess the impact of diagnostic algorithms incorporating rapid diagnostic tests on clinical outcomes and antibiotic prescriptions for acute febrile illness cases at outpatient clinics in the Shai-Osudoku and Prampram districts of Ghana, compared with standard-of-care practices. METHODS: This was an open-label, centrally randomized controlled trial in 4 health facilities. Participants aged 6 months to <18 years of both sexes with acute febrile illness were randomized to receive a package of interventions to guide antibiotic prescriptions or standard care. Clinical outcomes were assessed on day 7. RESULTS: In total, 1512 patients were randomized to either the intervention (n = 761) or control (n = 751) group. The majority were children aged <5 years (1154 of 1512, 76.3%) and male (809 of 1512, 53.5%). There was an 11% relative risk reduction in antibiotic prescription in the intervention group (RR, 0.89; 95% CI, .79 to 1.01); 14% in children aged <5 years (RR, 0.86; 95% CI, .75 to .98), 15% in nonmalaria patients (RR, 0.85; 95% CI, .75 to .96), and 16% in patients with respiratory symptoms (RR, 0.84; 95% CI, .73 to .96). Almost all participants had favorable outcomes (759 of 761, 99.7% vs 747 of 751, 99.4%). CONCLUSIONS: In low- and middle-income countries, the combination of point-of-care diagnostics, diagnostic algorithms, and communication training can be used at the primary healthcare level to reduce antibiotic prescriptions among children with acute febrile illness, patients with nonmalarial fevers, and patients with respiratory symptoms. CLINICAL TRIALS REGISTRATION: NCT04081051.


Subject(s)
Anti-Bacterial Agents, Point-of-Care Systems, Child, Female, Humans, Male, Ghana, Anti-Bacterial Agents/therapeutic use, Rapid Diagnostic Tests, Point-of-Care Testing, Prescriptions, Fever/diagnosis, Fever/drug therapy, Ambulatory Care Facilities, Primary Health Care
3.
Clin Infect Dis ; 77(Suppl 2): S206-S210, 2023 07 25.
Article in English | MEDLINE | ID: mdl-37490738

ABSTRACT

In this Viewpoint, the authors explore the determinants of patients' prescription adherence behaviors as part of FIND's Advancing Access to Diagnostic Innovation essential for Universal Health Coverage and AMR Prevention (ADIP) trials (ClinicalTrials.gov identifier: NCT04081051). Research findings from Burkina Faso, Ghana, and Uganda show that basic knowledge and understanding of prescription instructions are essential for adherence and can be improved through better communication. However, there are a range of other factors that influence adherence, some of which can be influenced through tailored communication messages from healthcare workers. These messages may contribute to changes in adherence behavior but may require other reinforcing interventions to be effective. Finally, there are some drivers of nonadherence centered around costs and time pressure that require other forms of intervention.


Subject(s)
Anti-Bacterial Agents, Drug Resistance, Bacterial, Humans, Anti-Bacterial Agents/therapeutic use, Communication, Prescriptions, Qualitative Research
4.
Clin Infect Dis ; 77(Suppl 2): S134-S144, 2023 07 25.
Article in English | MEDLINE | ID: mdl-37490742

ABSTRACT

BACKGROUND: Low- and middle-income countries face significant challenges in differentiating bacterial from viral causes of febrile illnesses, leading to inappropriate use of antibiotics. This trial aimed to evaluate the impact of an intervention package comprising diagnostic tests, a diagnostic algorithm, and a training-and-communication package on antibiotic prescriptions and clinical outcomes. METHODS: Patients aged 6 months to 18 years with fever or a history of fever within the past 7 days with no focus, or a suspected respiratory tract infection, arriving at 2 health facilities were randomized to either the intervention package or standard practice. The primary outcomes were the proportions of patients who recovered at day 7 (D7) and patients prescribed antibiotics at day 0. RESULTS: Of 1718 patients randomized, 1681 (97.8%; intervention: 844; control: 837) completed follow-up: 99.5% recovered at D7 in the intervention arm versus 100% in standard practice (P = .135). Antibiotics were prescribed to 40.6% of patients in the intervention group versus 57.5% in the control arm (relative reduction: 29.3%; 95% CI: 21.8-36.0%; risk difference [RD]: -16.8%; 95% CI: -21.7% to -12.0%; P < .001), which translates to 1 additional antibiotic prescription saved for every 6 (95% CI: 5-8) consultations. This reduction was significant regardless of malaria test results, but was greater in patients without malaria (RD: -46.0%; -54.7% to -37.4%; P < .001), those with a respiratory diagnosis (RD: -38.2%; -43.8% to -32.6%; P < .001), and in children 6-59 months old (RD: -20.4%; -26.0% to -14.9%; P < .001). Except for the period July-September, the reduction was consistent across the other quarters (P < .001). CONCLUSIONS: Implementation of the package can reduce inappropriate antibiotic prescription without compromising clinical outcomes. CLINICAL TRIALS REGISTRATION: clinicaltrials.gov; NCT04081051.
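The headline effect sizes above follow from simple proportions. The sketch below reproduces the arithmetic from the rounded percentages quoted in the abstract (so results differ marginally from the exact values computed on the raw trial counts), including the "one prescription saved per six consultations" figure, which is just the reciprocal of the absolute risk difference.

```python
# Rounded percentages are taken from the abstract; exact trial counts would
# give slightly different third decimals.
p_intervention = 0.406   # antibiotics prescribed, intervention arm
p_control = 0.575        # antibiotics prescribed, control arm

risk_ratio = p_intervention / p_control        # ~0.71
relative_reduction = 1 - risk_ratio            # ~29%, the "29.3%" quoted above
risk_difference = p_intervention - p_control   # ~-16.9 percentage points

# Roughly one prescription is avoided for every 1/|RD| consultations.
consultations_per_prescription_saved = 1 / abs(risk_difference)

print(f"risk ratio           {risk_ratio:.2f}")
print(f"relative reduction   {relative_reduction:.1%}")
print(f"risk difference      {risk_difference:+.1%}")
print(f"consultations per prescription saved  ~{consultations_per_prescription_saved:.0f}")
```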


Subject(s)
Anti-Bacterial Agents, Malaria, Humans, Child, Adolescent, Infant, Child, Preschool, Burkina Faso, Anti-Bacterial Agents/therapeutic use, Prescriptions, Malaria/drug therapy, Health Facilities, Algorithms
5.
Clin Infect Dis ; 77(Suppl 2): S156-S170, 2023 07 25.
Article in English | MEDLINE | ID: mdl-37490746

ABSTRACT

BACKGROUND: Increasing trends of antimicrobial resistance are observed around the world, driven in part by excessive use of antimicrobials. Limited access to diagnostics, particularly in low- and middle-income countries, contributes to diagnostic uncertainty, which may promote unnecessary antibiotic use. We investigated whether introducing a package of diagnostic tools, clinical algorithm, and training-and-communication messages could safely reduce antibiotic prescribing compared with current standard-of-care for febrile patients presenting to outpatient clinics in Uganda. METHODS: This was an open-label, multicenter, 2-arm randomized controlled trial conducted at 3 public health facilities (Aduku, Nagongera, and Kihihi health center IVs) comparing the proportions of antibiotic prescriptions and clinical outcomes for febrile outpatients aged ≥1 year. The intervention arm included a package of point-of-care tests, a diagnostic and treatment algorithm, and training-and-communication messages. Standard-of-care was provided to patients in the control arm. RESULTS: A total of 2400 patients were enrolled, with 49.5% in the intervention arm. Overall, there was no significant difference in antibiotic prescriptions between the study arms (relative risk [RR]: 1.03; 95% CI: .96-1.11). In the intervention arm, patients with positive malaria test results (313/500 [62.6%] vs 170/473 [35.9%]) had a higher RR of being prescribed antibiotics (1.74; 1.52-2.00), while those with negative malaria results (348/688 [50.6%] vs 376/508 [74.0%]) had a lower RR (.68; .63-.75). There was no significant difference in clinical outcomes. CONCLUSIONS: This study found that a diagnostic intervention for management of febrile outpatients did not achieve the desired impact on antibiotic prescribing at 3 diverse and representative health facility sites in Uganda.


Asunto(s)
Manejo de Caso , Malaria , Humanos , Uganda , Pacientes Ambulatorios , Malaria/tratamiento farmacológico , Fiebre/diagnóstico , Fiebre/tratamiento farmacológico , Antibacterianos/uso terapéutico , Instituciones de Atención Ambulatoria , Comunicación , Algoritmos
6.
Malar J ; 22(1): 33, 2023 Jan 27.
Article in English | MEDLINE | ID: mdl-36707822

ABSTRACT

BACKGROUND: Microscopic examination is commonly used for malaria diagnosis in the field. However, the lack of well-trained microscopists in the malaria-endemic areas most affected by the disease is a severe problem. Moreover, the examination process is time-consuming and prone to human error. Automated diagnostic systems based on machine learning offer great potential to overcome these problems. This study aims to evaluate Malaria Screener, a smartphone-based application for malaria diagnosis. METHODS: A total of 190 patients were recruited at two sites in rural areas near Khartoum, Sudan. The Malaria Screener mobile application was deployed to screen Giemsa-stained blood smears. Both expert microscopy and nested PCR were performed for use as reference standards. First, Malaria Screener was evaluated using the two reference standards. Then, during post-study experiments, the evaluation was repeated for a newly developed algorithm, PlasmodiumVF-Net. RESULTS: Malaria Screener reached 74.1% (95% CI 63.5-83.0) accuracy in detecting Plasmodium falciparum malaria using expert microscopy as the reference after a threshold calibration. It reached 71.8% (95% CI 61.0-81.0) accuracy when compared with PCR. The achieved accuracies meet the WHO Level 3 requirement for parasite detection. The processing time for each smear varies from 5 to 15 min, depending on the concentration of white blood cells (WBCs). In the post-study experiment, Malaria Screener reached 91.8% (95% CI 83.8-96.6) accuracy when patient-level results were calculated with a different method. This accuracy meets the WHO Level 1 requirement for parasite detection. In addition, PlasmodiumVF-Net, a newly developed algorithm, reached 83.1% (95% CI 77.0-88.1) accuracy when compared with expert microscopy and 81.0% (95% CI 74.6-86.3) accuracy when compared with PCR, reaching the WHO Level 2 requirement for detecting both Plasmodium falciparum and Plasmodium vivax malaria, without using data from the testing sites for training or calibration. Both Malaria Screener and PlasmodiumVF-Net used thick smears for diagnosis; neither system was assessed for species identification or parasite counting, which are still under development. CONCLUSION: Malaria Screener showed the potential to be deployed in resource-limited areas to facilitate routine malaria screening. It is the first smartphone-based system for malaria diagnosis evaluated at the patient level in a natural field environment. Thus, the results in the field reported here can serve as a reference for future studies.


Subject(s)
Malaria, Falciparum, Malaria, Vivax, Malaria, Mobile Applications, Humans, Smartphone, Malaria/parasitology, Malaria, Falciparum/diagnosis, Malaria, Falciparum/parasitology, Malaria, Vivax/diagnosis, Plasmodium falciparum, Sensitivity and Specificity, Plasmodium vivax
7.
Malar J ; 22(1): 60, 2023 Feb 20.
Article in English | MEDLINE | ID: mdl-36803858

ABSTRACT

BACKGROUND: Rapid diagnostic tests (RDTs) are effective tools to diagnose and inform the treatment of malaria in adults and children. The recent development of a highly sensitive rapid diagnostic test (HS-RDT) for Plasmodium falciparum has prompted questions over whether it could improve the diagnosis of malaria in pregnancy and pregnancy outcomes in malaria-endemic areas. METHODS: This landscape review collates studies addressing the clinical performance of the HS-RDT. Thirteen studies were identified comparing the HS-RDT and conventional RDT (co-RDT) to molecular methods to detect malaria in pregnancy. Using data from five completed studies, the effects of epidemiological and pregnancy-related factors on the sensitivity of the HS-RDT, and comparisons with the co-RDT, were investigated. The studies were conducted in 4 countries over a range of transmission intensities in largely asymptomatic women. RESULTS: Sensitivity of both RDTs varied widely (HS-RDT range 19.6 to 85.7%, co-RDT range 22.8 to 82.8%, compared to molecular testing), yet the HS-RDT detected individuals with similar parasite densities across all the studies, including different geographies and transmission areas [geometric mean parasitaemia around 100 parasites per µL (p/µL)]. HS-RDTs were capable of detecting low-density parasitaemias; in one study, the HS-RDT detected around 30% of infections with parasite densities of 0-2 p/µL, compared with around 15% for the co-RDT. CONCLUSION: The HS-RDT has a slightly higher analytical sensitivity for detecting malaria infections in pregnancy than the co-RDT, but this mostly translates into only a fractional, and not statistically significant, improvement in clinical performance by gravidity, trimester, geography, or transmission intensity. The analysis presented here highlights the need for more and larger studies to evaluate incremental improvements in RDTs. The HS-RDT could be used in any situation where co-RDTs are currently used for P. falciparum diagnosis, if storage conditions can be adhered to.


Subject(s)
Malaria, Falciparum, Malaria, Adult, Pregnancy, Child, Humans, Female, Plasmodium falciparum, Rapid Diagnostic Tests, Sensitivity and Specificity, Malaria, Falciparum/diagnosis, Malaria, Falciparum/epidemiology, Diagnostic Tests, Routine/methods, Protozoan Antigens/analysis
8.
Clin Infect Dis ; 75(1): e368-e379, 2022 08 24.
Article in English | MEDLINE | ID: mdl-35323932

ABSTRACT

BACKGROUND: In locations where few people have received coronavirus disease 2019 (COVID-19) vaccines, health systems remain vulnerable to surges in severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infections. Tools to identify patients suitable for community-based management are urgently needed. METHODS: We prospectively recruited adults presenting to 2 hospitals in India with moderate symptoms of laboratory-confirmed COVID-19 to develop and validate a clinical prediction model to rule out progression to supplemental oxygen requirement. The primary outcome was defined as any of the following: SpO2 < 94%; respiratory rate > 30 BPM; SpO2/FiO2 < 400; or death. We specified a priori that each model would contain three clinical parameters (age, sex, and SpO2) and 1 of 7 shortlisted biochemical biomarkers measurable using commercially available rapid tests (C-reactive protein [CRP], D-dimer, interleukin 6 [IL-6], neutrophil-to-lymphocyte ratio [NLR], procalcitonin [PCT], soluble triggering receptor expressed on myeloid cell-1 [sTREM-1], or soluble urokinase plasminogen activator receptor [suPAR]), to ensure the models would be suitable for resource-limited settings. We evaluated discrimination, calibration, and clinical utility of the models in a held-out temporal external validation cohort. RESULTS: In total, 426 participants were recruited, of whom 89 (21.0%) met the primary outcome; 257 participants comprised the development cohort, and 166 comprised the validation cohort. The 3 models containing NLR, suPAR, or IL-6 demonstrated promising discrimination (c-statistics: 0.72-0.74) and calibration (calibration slopes: 1.01-1.05) in the validation cohort and provided greater utility than a model containing the clinical parameters alone. CONCLUSIONS: We present 3 clinical prediction models that could help clinicians identify patients with moderate COVID-19 suitable for community-based management. The models are readily implementable and of particular relevance for locations with limited resources.
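As a rough illustration of the modelling approach described above, and not the study's code or data, the sketch below fits a logistic model with age, sex, SpO2, and one stand-in biomarker (log NLR) on synthetic data, then reports the two validation metrics named in the abstract: the c-statistic (ROC AUC) and the calibration slope. All coefficients and data are invented; the large-C setting simply approximates an unpenalised fit in scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
age = rng.normal(50, 15, n)
sex = rng.integers(0, 2, n)
spo2 = rng.normal(96, 2, n)
log_nlr = rng.normal(1.2, 0.6, n)                    # stand-in biomarker (log NLR)
lin = -4 + 0.03 * age - 0.3 * (spo2 - 96) + 0.6 * log_nlr
y = rng.binomial(1, 1 / (1 + np.exp(-lin)))          # 1 = progressed to the outcome

X = np.column_stack([age, sex, spo2, log_nlr])
X_dev, X_val, y_dev, y_val = X[:250], X[250:], y[:250], y[250:]

# Large C ~ effectively unpenalised logistic regression, as in a classical model.
model = LogisticRegression(C=1e6, max_iter=5000).fit(X_dev, y_dev)

# Discrimination on the held-out set: the c-statistic is the ROC AUC.
p_val = model.predict_proba(X_val)[:, 1]
print("c-statistic:", round(roc_auc_score(y_val, p_val), 2))

# Calibration slope: refit the outcome on the model's linear predictor;
# a slope close to 1 indicates well-calibrated predicted risks.
lp_val = model.decision_function(X_val).reshape(-1, 1)
slope = LogisticRegression(C=1e6, max_iter=5000).fit(lp_val, y_val).coef_[0][0]
print("calibration slope:", round(slope, 2))
```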


Subject(s)
COVID-19, Adult, COVID-19/diagnosis, Disease Progression, Humans, Interleukin-6, Models, Statistical, Patient Discharge, Patient Safety, Prognosis, Prospective Studies, Receptors, Urokinase Plasminogen Activator, Reproducibility of Results, SARS-CoV-2
9.
Emerg Infect Dis ; 28(4): 860-864, 2022 04.
Article in English | MEDLINE | ID: mdl-35318932

ABSTRACT

We tested animals from wildlife trade sites in Laos for the presence of zoonotic pathogens. Leptospira spp. were the most frequently detected infectious agents, found in 20.1% of animals. Rickettsia typhi and R. felis were also detected. These findings suggest a substantial risk for exposure through handling and consumption of wild animal meat.


Subject(s)
Leptospira, Zoonoses, Animals, Animals, Wild, Humans, Laos/epidemiology, Rickettsia typhi, Zoonoses/epidemiology
10.
PLoS Med ; 19(12): e1004111, 2022 12.
Article in English | MEDLINE | ID: mdl-36472973

ABSTRACT

BACKGROUND: Cardiovascular diseases (CVDs) are the leading cause of mortality globally, accounting for almost a third of all annual deaths worldwide. Low- and middle-income countries (LMICs) are disproportionately affected, accounting for 80% of these deaths. For CVD, hypertension (HTN) is the leading modifiable risk factor. The comparative impact of diagnostic interventions that improve either the accuracy, the reach, or the completion of HTN screening in comparison to the current standard of care has not been estimated. METHODS AND FINDINGS: This microsimulation study estimated the impact on HTN-induced morbidity and mortality in LMICs for four different scenarios: (S1) lower HTN diagnostic accuracy; (S2) improved HTN diagnostic accuracy; (S3) better implementation strategies to reach more persons with existing tools; and, lastly, (S4) the wider use of easy-to-use tools, such as validated, automated digital blood pressure measurement devices to enhance screening completion, in comparison to the current standard of care (S0). Our hypothetical population was parametrized using nationally representative, individual-level HPACC data and the global burden of disease data. The prevalence of HTN in the population was 31%, of which 60% remained undiagnosed. We investigated how the alteration of a yearly blood pressure screening event impacts morbidity and mortality in the population over a period of 10 years. The study showed that while improving test accuracy avoids 0.6% of HTN-induced deaths over 10 years (13,856,507 [9,382,742; 17,395,833]), almost 40 million (39,650,363 [31,34,233, 49,298,921], i.e., 12.7% [9.9, 15.8]) of the HTN-induced deaths could be prevented by increasing coverage and completion of a screening event in the same time frame. Doubling coverage alone would still prevent 3,304,212 ([2,274,664; 4,164,180], 12.1% [8.3, 15.2]) CVD events 10 years after the rollout of the program. Our study is limited by the scarce data available on HTN and CVD from LMICs. We had to pool some parameters across stratification groups, and additional information, such as dietary habits, lifestyle choices, or blood pressure evolution, could not be considered. Nevertheless, the microsimulation enabled us to include substantial heterogeneity and stochasticity across the different income groups and personal CVD risk scores in the model. CONCLUSIONS: While it is important to consider investing in newer diagnostics for blood pressure testing to continuously improve ease of use and accuracy, more emphasis should be placed on screening completion.
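A toy microsimulation, using only the 31% prevalence and 60% undiagnosed figures from the abstract and otherwise assumed coverage, completion, and sensitivity values, illustrates why reach and completion can dominate test accuracy in this kind of model. It is a deliberately simplified sketch, not the study's model, which also simulates downstream CVD events and deaths.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1_000_000  # hypothetical adult population

def newly_diagnosed(coverage, completion, sensitivity):
    """Count previously undiagnosed hypertensives picked up by one screening round."""
    has_htn = rng.random(N) < 0.31                   # 31% prevalence (abstract)
    undiagnosed = has_htn & (rng.random(N) < 0.60)   # 60% of cases undiagnosed (abstract)
    reached = undiagnosed & (rng.random(N) < coverage)
    completed = reached & (rng.random(N) < completion)
    detected = completed & (rng.random(N) < sensitivity)
    return int(detected.sum())

# All scenario parameters below are assumptions for illustration only.
baseline     = newly_diagnosed(coverage=0.40, completion=0.70, sensitivity=0.80)
better_test  = newly_diagnosed(coverage=0.40, completion=0.70, sensitivity=0.95)
better_reach = newly_diagnosed(coverage=0.80, completion=0.90, sensitivity=0.80)

print("baseline:              ", baseline)
print("more accurate test:    ", better_test)
print("wider reach/completion:", better_reach)
```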


Subject(s)
Hypertension, Humans, Hypertension/diagnosis, Hypertension/epidemiology
11.
J Clin Microbiol ; 60(12): e0100022, 2022 12 21.
Article in English | MEDLINE | ID: mdl-36448816

ABSTRACT

Blood and bone marrow cultures are considered the gold standard for the diagnosis of typhoid, but these methods require infrastructure and skilled staff that are not always available in low- and middle-income countries where typhoid is endemic. The objective of the study was to evaluate the sensitivity and specificity of nine commercially available Salmonella Typhi rapid diagnostic tests (RDTs) using blood culture as a reference standard in a multicenter study. This was a prospective and retrospective multicenter diagnostic accuracy study conducted in two geographically distant areas where typhoid is endemic (Pakistan and Kenya; NCT04801602). Nine RDTs were evaluated, including the Widal test. Point estimates for sensitivity and specificity were calculated using the Wilson method. Latent class analyses were performed using R to address the imperfect gold standard. A total of 531 serum samples were evaluated (264 blood culture positive; 267 blood culture negative). The sensitivity of the RDTs varied widely (range, 0 to 78.8%), with the best overall performance shown by Enterocheck WB (72.7% sensitivity, 86.5% specificity). In latent class modeling, CTK IgG was found to have the highest sensitivity (79.1%), while the highest overall accuracy was observed with Enterocheck (73.8% sensitivity, 94.5% specificity). All commercially available Salmonella Typhi RDTs evaluated in the study had sensitivity and specificity values that fell below the levels required for them to be recommended for accurate diagnosis. There were minimal differences in RDT performance between regions of endemicity. These findings highlight the clear need for new and more-accurate Salmonella Typhi tests.


Subject(s)
Typhoid Fever, Humans, Typhoid Fever/diagnosis, Point-of-Care Systems, Kenya, Pakistan, Prospective Studies, Antibodies, Bacterial, Salmonella typhi, Sensitivity and Specificity
12.
BMC Infect Dis ; 22(1): 434, 2022 May 04.
Article in English | MEDLINE | ID: mdl-35509024

ABSTRACT

BACKGROUND: The management of febrile illnesses is challenging in settings where diagnostic laboratory facilities are limited, and there are few published longitudinal data on children presenting with fever in such settings. We have previously conducted the first comprehensive study of infectious aetiologies of febrile children presenting to a tertiary care facility in Ethiopia. We now report on clinicians' prescribing adherence with guidelines and outcomes of management in this cohort. METHODS: We consecutively enrolled febrile children aged 2 months and under 13 years, who were then managed by clinicians based on presentation and available laboratory and radiologic findings on day of enrolment. We prospectively collected outcome data on days 7 and 14, and retrospectively evaluated prescribing adherence with national clinical management guidelines. RESULTS: Of 433 children enrolled, the most common presenting syndromes were pneumonia and acute diarrhoea, diagnosed in 177 (40.9%) and 82 (18.9%), respectively. Antibacterial agents were prescribed to 360 (84.7%) of 425 children, including 36 (34.0%) of 106 children without an initial indication for antibacterials according to guidelines. Antimalarial drugs were prescribed to 47 (11.1%) of 425 children, including 30 (7.3%) of 411 children with negative malaria microscopy. Fever had resolved in 357 (89.7%) of 398 children assessed at day 7, and in-hospital death within 7 days occurred in 9 (5.9%) of 153 admitted patients. Among children with pneumonia, independent predictors of persisting fever or death by 7 days were young age and underweight for age. Antibacterial prescribing in the absence of a guideline-specified indication (overprescribing) was more likely among infants and those without tachypnea, while overprescribing antimalarials was associated with older age, anaemia, absence of cough, and higher fevers. CONCLUSION: Our study underscores the need for improving diagnostic support to properly guide management decisions and enhance adherence by clinicians to treatment guidelines.


Subject(s)
Antimalarials, Fever, Anti-Bacterial Agents/therapeutic use, Antimalarials/therapeutic use, Child, Ethiopia/epidemiology, Fever/drug therapy, Fever/etiology, Hospital Mortality, Humans, Infant, Retrospective Studies, Tertiary Care Centers
13.
BMC Infect Dis ; 22(1): 121, 2022 Feb 04.
Article in English | MEDLINE | ID: mdl-35120441

ABSTRACT

BACKGROUND: A new more highly sensitive rapid diagnostic test (HS-RDT) for Plasmodium falciparum malaria (Alere™/Abbott Malaria Ag P.f RDT [05FK140], now called NxTek™ Eliminate Malaria Ag Pf) was launched in 2017. The test has already been used in many research studies in a wide range of geographies and use cases. METHODS: In this study, we collate all published and available unpublished studies that use the HS-RDT and assess its performance in (i) prevalence surveys, (ii) clinical diagnosis, (iii) screening pregnant women, and (iv) active case detection. Two individual-level data sets from asymptomatic populations are used to fit logistic regression models to estimate the probability of HS-RDT positivity based on histidine-rich protein 2 (HRP2) concentration and parasite density. The performance of the HS-RDT in prevalence surveys is estimated by calculating the sensitivity and positive proportion in comparison to polymerase chain reaction (PCR) and conventional malaria RDTs. RESULTS: We find that across 18 studies, in prevalence surveys, the mean sensitivity of the HS-RDT is estimated to be 56.1% (95% confidence interval [CI] 46.9-65.4%) compared to 44.3% (95% CI 32.6-56.0%) for a conventional RDT (co-RDT) when using nucleic acid amplification techniques as the reference standard. In studies where prevalence was estimated using both the HS-RDT and a co-RDT, we found that prevalence was on average 46% higher using a HS-RDT compared to a co-RDT. For use in clinical diagnosis and screening pregnant women, the HS-RDT was not significantly more sensitive than a co-RDT. CONCLUSIONS: Overall, the evidence presented here suggests that the HS-RDT is more sensitive in asymptomatic populations and could provide a marginal improvement in clinical diagnosis and screening pregnant women. Although the HS-RDT has limited temperature stability and shelf-life claims compared to co-RDTs, there is no evidence to suggest, given this test has the same cost as current RDTs, it would have any negative impacts in terms of malaria misdiagnosis if it were widely used in all four population groups explored here.
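The second analysis step, fitting a logistic model for the probability of a positive HS-RDT as a function of parasite density, can be sketched as follows. The data here are synthetic and only the model form follows the text, so the fitted curve is illustrative rather than a reproduction of the study's estimates.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
log10_density = rng.uniform(-1, 4, 500)      # log10 parasites/uL (synthetic)
p_true = 1 / (1 + np.exp(-1.8 * (log10_density - 1.5)))   # assumed "true" curve
hs_rdt_positive = rng.binomial(1, p_true)

# Logistic regression of test positivity on log parasite density.
model = LogisticRegression().fit(log10_density.reshape(-1, 1), hs_rdt_positive)

# Predicted probability of a positive HS-RDT at 1, 10, 100 and 1000 p/uL.
for d in (0.0, 1.0, 2.0, 3.0):
    p = model.predict_proba([[d]])[0, 1]
    print(f"{10 ** d:>6.0f} p/uL -> P(HS-RDT positive) ~ {p:.2f}")
```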


Subject(s)
Malaria, Falciparum, Malaria, Protozoan Antigens, Cross-Sectional Studies, Diagnostic Tests, Routine, Female, Humans, Malaria/diagnosis, Malaria/epidemiology, Malaria, Falciparum/diagnosis, Malaria, Falciparum/epidemiology, Plasmodium falciparum, Pregnancy, Protozoan Proteins, Sensitivity and Specificity
14.
Cochrane Database Syst Rev ; 7: CD013705, 2022 07 22.
Article in English | MEDLINE | ID: mdl-35866452

ABSTRACT

BACKGROUND: Accurate rapid diagnostic tests for SARS-CoV-2 infection would be a useful tool to help manage the COVID-19 pandemic. Testing strategies that use rapid antigen tests to detect current infection have the potential to increase access to testing, speed detection of infection, and inform clinical and public health management decisions to reduce transmission. This is the second update of this review, which was first published in 2020. OBJECTIVES: To assess the diagnostic accuracy of rapid, point-of-care antigen tests for diagnosis of SARS-CoV-2 infection. We consider accuracy separately in symptomatic and asymptomatic population groups. Sources of heterogeneity investigated included setting and indication for testing, assay format, sample site, viral load, age, timing of test, and study design. SEARCH METHODS: We searched the COVID-19 Open Access Project living evidence database from the University of Bern (which includes daily updates from PubMed and Embase and preprints from medRxiv and bioRxiv) on 08 March 2021. We included independent evaluations from national reference laboratories, FIND and the Diagnostics Global Health website. We did not apply language restrictions. SELECTION CRITERIA: We included studies of people with either suspected SARS-CoV-2 infection, known SARS-CoV-2 infection or known absence of infection, or those who were being screened for infection. We included test accuracy studies of any design that evaluated commercially produced, rapid antigen tests. We included evaluations of single applications of a test (one test result reported per person) and evaluations of serial testing (repeated antigen testing over time). Reference standards for presence or absence of infection were any laboratory-based molecular test (primarily reverse transcription polymerase chain reaction (RT-PCR)) or pre-pandemic respiratory sample. DATA COLLECTION AND ANALYSIS: We used standard screening procedures with three people. Two people independently carried out quality assessment (using the QUADAS-2 tool) and extracted study results. Other study characteristics were extracted by one review author and checked by a second. We present sensitivity and specificity with 95% confidence intervals (CIs) for each test, and pooled data using the bivariate model. We investigated heterogeneity by including indicator variables in the random-effects logistic regression models. We tabulated results by test manufacturer and compliance with manufacturer instructions for use and according to symptom status. MAIN RESULTS: We included 155 study cohorts (described in 166 study reports, with 24 as preprints). The main results relate to 152 evaluations of single test applications including 100,462 unique samples (16,822 with confirmed SARS-CoV-2). Studies were mainly conducted in Europe (101/152, 66%), and evaluated 49 different commercial antigen assays. Only 23 studies compared two or more brands of test. Risk of bias was high because of participant selection (40, 26%); interpretation of the index test (6, 4%); weaknesses in the reference standard for absence of infection (119, 78%); and participant flow and timing 41 (27%). Characteristics of participants (45, 30%) and index test delivery (47, 31%) differed from the way in which and in whom the test was intended to be used. Nearly all studies (91%) used a single RT-PCR result to define presence or absence of infection. The 152 studies of single test applications reported 228 evaluations of antigen tests. 
Estimates of sensitivity varied considerably between studies, with consistently high specificities. Average sensitivity was higher in symptomatic (73.0%, 95% CI 69.3% to 76.4%; 109 evaluations; 50,574 samples, 11,662 cases) compared to asymptomatic participants (54.7%, 95% CI 47.7% to 61.6%; 50 evaluations; 40,956 samples, 2641 cases). Average sensitivity was higher in the first week after symptom onset (80.9%, 95% CI 76.9% to 84.4%; 30 evaluations, 2408 cases) than in the second week of symptoms (53.8%, 95% CI 48.0% to 59.6%; 40 evaluations, 1119 cases). For those who were asymptomatic at the time of testing, sensitivity was higher when an epidemiological exposure to SARS-CoV-2 was suspected (64.3%, 95% CI 54.6% to 73.0%; 16 evaluations; 7677 samples, 703 cases) compared to where COVID-19 testing was reported to be widely available to anyone on presentation for testing (49.6%, 95% CI 42.1% to 57.1%; 26 evaluations; 31,904 samples, 1758 cases). Average specificity was similarly high for symptomatic (99.1%) or asymptomatic (99.7%) participants. We observed a steady decline in summary sensitivities as measures of sample viral load decreased. Sensitivity varied between brands. When tests were used according to manufacturer instructions, average sensitivities by brand ranged from 34.3% to 91.3% in symptomatic participants (20 assays with eligible data) and from 28.6% to 77.8% for asymptomatic participants (12 assays). For symptomatic participants, summary sensitivities for seven assays were 80% or more (meeting acceptable criteria set by the World Health Organization (WHO)). The WHO acceptable performance criterion of 97% specificity was met by 17 of 20 assays when tests were used according to manufacturer instructions, 12 of which demonstrated specificities above 99%. For asymptomatic participants the sensitivities of only two assays approached but did not meet WHO acceptable performance standards in one study each; specificities for asymptomatic participants were in a similar range to those observed for symptomatic people. At 5% prevalence using summary data in symptomatic people during the first week after symptom onset, the positive predictive value (PPV) of 89% means that 1 in 10 positive results will be a false positive, and around 1 in 5 cases will be missed. At 0.5% prevalence using summary data for asymptomatic people, where testing was widely available and where epidemiological exposure to COVID-19 was suspected, resulting PPVs would be 38% to 52%, meaning that between 2 in 5 and 1 in 2 positive results will be false positives, and between 1 in 2 and 1 in 3 cases will be missed. AUTHORS' CONCLUSIONS: Antigen tests vary in sensitivity. In people with signs and symptoms of COVID-19, sensitivities are highest in the first week of illness when viral loads are higher. Assays that meet appropriate performance standards, such as those set by WHO, could replace laboratory-based RT-PCR when immediate decisions about patient care must be made, or where RT-PCR cannot be delivered in a timely manner. However, they are more suitable for use as triage to RT-PCR testing. The variable sensitivity of antigen tests means that people who test negative may still be infected. Many commercially available rapid antigen tests have not been evaluated in independent validation studies. Evidence for testing in asymptomatic cohorts has increased, however sensitivity is lower and there is a paucity of evidence for testing in different settings. 
Questions remain about the use of antigen test-based repeat testing strategies. Further research is needed to evaluate the effectiveness of screening programmes at reducing transmission of infection, whether mass screening or targeted approaches including schools, healthcare setting and traveller screening.
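The predictive-value arithmetic quoted in the results is standard Bayes' rule applied to a prevalence and a summary sensitivity/specificity pair. The sketch below uses the first-week summary estimates from the abstract; the review's quoted PPVs come from pooled model estimates, so this back-of-envelope calculation gives slightly different numbers.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Positive and negative predictive value via Bayes' rule."""
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    tn = specificity * (1 - prevalence)
    return tp / (tp + fp), tn / (tn + fn)

# Symptomatic people in the first week of symptoms, 5% prevalence,
# using the summary sensitivity/specificity quoted in the abstract.
ppv, npv = predictive_values(sensitivity=0.809, specificity=0.991, prevalence=0.05)
print(f"PPV {ppv:.0%} -> about {1 - ppv:.0%} of positive results are false positives")
print(f"NPV {npv:.0%} -> a negative result is wrong in about {1 - npv:.1%} of cases")
```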


Subject(s)
COVID-19, COVID-19/diagnosis, COVID-19 Testing, Humans, Pandemics, Point-of-Care Systems, SARS-CoV-2, Sensitivity and Specificity
15.
Cochrane Database Syst Rev ; 11: CD013652, 2022 11 17.
Article in English | MEDLINE | ID: mdl-36394900

ABSTRACT

BACKGROUND: The diagnostic challenges associated with the COVID-19 pandemic resulted in rapid development of diagnostic test methods for detecting SARS-CoV-2 infection. Serology tests to detect the presence of antibodies to SARS-CoV-2 enable detection of past infection and may detect cases of SARS-CoV-2 infection that were missed by earlier diagnostic tests. Understanding the diagnostic accuracy of serology tests for SARS-CoV-2 infection may enable development of effective diagnostic and management pathways, inform public health management decisions and understanding of SARS-CoV-2 epidemiology. OBJECTIVES: To assess the accuracy of antibody tests, firstly, to determine if a person presenting in the community, or in primary or secondary care has current SARS-CoV-2 infection according to time after onset of infection and, secondly, to determine if a person has previously been infected with SARS-CoV-2. Sources of heterogeneity investigated included: timing of test, test method, SARS-CoV-2 antigen used, test brand, and reference standard for non-SARS-CoV-2 cases. SEARCH METHODS: The COVID-19 Open Access Project living evidence database from the University of Bern (which includes daily updates from PubMed and Embase and preprints from medRxiv and bioRxiv) was searched on 30 September 2020. We included additional publications from the Evidence for Policy and Practice Information and Co-ordinating Centre (EPPI-Centre) 'COVID-19: Living map of the evidence' and the Norwegian Institute of Public Health 'NIPH systematic and living map on COVID-19 evidence'. We did not apply language restrictions. SELECTION CRITERIA: We included test accuracy studies of any design that evaluated commercially produced serology tests, targeting IgG, IgM, IgA alone, or in combination. Studies must have provided data for sensitivity, that could be allocated to a predefined time period after onset of symptoms, or after a positive RT-PCR test. Small studies with fewer than 25 SARS-CoV-2 infection cases were excluded. We included any reference standard to define the presence or absence of SARS-CoV-2 (including reverse transcription polymerase chain reaction tests (RT-PCR), clinical diagnostic criteria, and pre-pandemic samples). DATA COLLECTION AND ANALYSIS: We use standard screening procedures with three reviewers. Quality assessment (using the QUADAS-2 tool) and numeric study results were extracted independently by two people. Other study characteristics were extracted by one reviewer and checked by a second. We present sensitivity and specificity with 95% confidence intervals (CIs) for each test and, for meta-analysis, we fitted univariate random-effects logistic regression models for sensitivity by eligible time period and for specificity by reference standard group. Heterogeneity was investigated by including indicator variables in the random-effects logistic regression models. We tabulated results by test manufacturer and summarised results for tests that were evaluated in 200 or more samples and that met a modification of UK Medicines and Healthcare products Regulatory Agency (MHRA) target performance criteria. MAIN RESULTS: We included 178 separate studies (described in 177 study reports, with 45 as pre-prints) providing 527 test evaluations. The studies included 64,688 samples including 25,724 from people with confirmed SARS-CoV-2; most compared the accuracy of two or more assays (102/178, 57%). 
Participants with confirmed SARS-CoV-2 infection were most commonly hospital inpatients (78/178, 44%), and pre-pandemic samples were used by 45% (81/178) to estimate specificity. Over two-thirds of studies recruited participants based on known SARS-CoV-2 infection status (123/178, 69%). All studies were conducted prior to the introduction of SARS-CoV-2 vaccines and present data for naturally acquired antibody responses. Seventy-nine percent (141/178) of studies reported sensitivity by week after symptom onset and 66% (117/178) for convalescent phase infection. Studies evaluated enzyme-linked immunosorbent assays (ELISA) (165/527; 31%), chemiluminescent assays (CLIA) (167/527; 32%) or lateral flow assays (LFA) (188/527; 36%). Risk of bias was high because of participant selection (172, 97%); application and interpretation of the index test (35, 20%); weaknesses in the reference standard (38, 21%); and issues related to participant flow and timing (148, 82%). We judged that there were high concerns about the applicability of the evidence related to participants in 170 (96%) studies, and about the applicability of the reference standard in 162 (91%) studies. Average sensitivities for current SARS-CoV-2 infection increased by week after onset for all target antibodies. Average sensitivity for the combination of either IgG or IgM was 41.1% in week one (95% CI 38.1 to 44.2; 103 evaluations; 3881 samples, 1593 cases), 74.9% in week two (95% CI 72.4 to 77.3; 96 evaluations, 3948 samples, 2904 cases) and 88.0% by week three after onset of symptoms (95% CI 86.3 to 89.5; 103 evaluations, 2929 samples, 2571 cases). Average sensitivity during the convalescent phase of infection (up to a maximum of 100 days since onset of symptoms, where reported) was 89.8% for IgG (95% CI 88.5 to 90.9; 253 evaluations, 16,846 samples, 14,183 cases), 92.9% for IgG or IgM combined (95% CI 91.0 to 94.4; 108 evaluations, 3571 samples, 3206 cases) and 94.3% for total antibodies (95% CI 92.8 to 95.5; 58 evaluations, 7063 samples, 6652 cases). Average sensitivities for IgM alone followed a similar pattern but were of a lower test accuracy in every time slot. Average specificities were consistently high and precise, particularly for pre-pandemic samples which provide the least biased estimates of specificity (ranging from 98.6% for IgM to 99.8% for total antibodies). Subgroup analyses suggested small differences in sensitivity and specificity by test technology however heterogeneity in study results, timing of sample collection, and smaller sample numbers in some groups made comparisons difficult. For IgG, CLIAs were the most sensitive (convalescent-phase infection) and specific (pre-pandemic samples) compared to both ELISAs and LFAs (P < 0.001 for differences across test methods). The antigen(s) used (whether from the Spike-protein or nucleocapsid) appeared to have some effect on average sensitivity in the first weeks after onset but there was no clear evidence of an effect during convalescent-phase infection. Investigations of test performance by brand showed considerable variation in sensitivity between tests, and in results between studies evaluating the same test. For tests that were evaluated in 200 or more samples, the lower bound of the 95% CI for sensitivity was 90% or more for only a small number of tests (IgG, n = 5; IgG or IgM, n = 1; total antibodies, n = 4). More test brands met the MHRA minimum criteria for specificity of 98% or above (IgG, n = 16; IgG or IgM, n = 5; total antibodies, n = 7). 
Seven assays met the specified criteria for both sensitivity and specificity. In a low-prevalence (2%) setting, where antibody testing is used to diagnose COVID-19 in people with symptoms but who have had a negative PCR test, we would anticipate that 1 (1 to 2) case would be missed and 8 (5 to 15) would be falsely positive in 1000 people undergoing IgG or IgM testing in week three after onset of SARS-CoV-2 infection. In a seroprevalence survey, where prevalence of prior infection is 50%, we would anticipate that 51 (46 to 58) cases would be missed and 6 (5 to 7) would be falsely positive in 1000 people having IgG tests during the convalescent phase (21 to 100 days post-symptom onset or post-positive PCR) of SARS-CoV-2 infection. AUTHORS' CONCLUSIONS: Some antibody tests could be a useful diagnostic tool for those in whom molecular- or antigen-based tests have failed to detect the SARS-CoV-2 virus, including in those with ongoing symptoms of acute infection (from week three onwards) or those presenting with post-acute sequelae of COVID-19. However, antibody tests have an increasing likelihood of detecting an immune response to infection as time since onset of infection progresses and have demonstrated adequate performance for detection of prior infection for sero-epidemiological purposes. The applicability of results for detection of vaccination-induced antibodies is uncertain.


Subject(s)
COVID-19, SARS-CoV-2, Humans, COVID-19/diagnosis, COVID-19/epidemiology, Antibodies, Viral, Immunoglobulin G, COVID-19 Vaccines, Pandemics, Seroepidemiologic Studies, Immunoglobulin M
16.
Epidemiology ; 32(6): 811-819, 2021 11 01.
Article in English | MEDLINE | ID: mdl-34292212

ABSTRACT

BACKGROUND: SARS-CoV-2 antigen-detection rapid diagnostic tests can diagnose COVID-19 rapidly and at low cost, but lower sensitivity compared with reverse-transcriptase polymerase chain reaction (PCR) has limited clinical adoption. METHODS: We compared antigen testing, PCR testing, and clinical judgment alone for diagnosing symptomatic COVID-19 in an outpatient setting (10% COVID-19 prevalence among the patients tested, 3-day PCR turnaround) and a hospital setting (40% prevalence, 24-hour PCR turnaround). We simulated transmission from cases and contacts, and relationships between time, viral burden, transmission, and case detection. We compared diagnostic approaches using a measure of net benefit that incorporated both clinical and public health benefits and harms of the intervention. RESULTS: In the outpatient setting, we estimated that using antigen testing instead of PCR to test 200 individuals could be equivalent to preventing all symptomatic transmission from one person with COVID-19 (one "transmission-equivalent"). In a hospital, net benefit analysis favored PCR and testing 25 patients with PCR instead of antigen testing achieved one transmission-equivalent of benefit. In both settings, antigen testing was preferable to PCR if PCR turnaround time exceeded 2 days. Both tests provided greater net benefit than management based on clinical judgment alone unless intervention carried minimal harm and was provided equally regardless of diagnostic approach. CONCLUSIONS: For diagnosis of symptomatic COVID-19, we estimated that the speed of diagnosis with antigen testing is likely to outweigh its lower accuracy compared with PCR, wherever PCR turnaround time is 2 days or longer. This advantage may be even greater if antigen tests are also less expensive.
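A toy calculation, with every parameter assumed rather than taken from the study's model, conveys the trade-off being weighed: a less sensitive but immediate antigen test versus a more sensitive PCR whose benefit erodes as the turnaround delay grows.

```python
def transmissions_prevented(sensitivity, turnaround_days, prevalence, n_tested):
    """Expected onward transmissions prevented among n_tested presenting patients.

    Assumes (arbitrarily) that ~35% of the preventable transmission is lost per
    day of delay before a result can trigger isolation and contact tracing.
    """
    remaining_benefit = max(0.0, 1 - 0.35 * turnaround_days)
    return prevalence * n_tested * sensitivity * remaining_benefit

outpatient = dict(prevalence=0.10, n_tested=1000)   # the abstract's outpatient setting
print("antigen, same day:", round(transmissions_prevented(0.70, 0, **outpatient), 1))
print("PCR, 1-day delay :", round(transmissions_prevented(0.95, 1, **outpatient), 1))
print("PCR, 3-day delay :", round(transmissions_prevented(0.95, 3, **outpatient), 1))
```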


Subject(s)
COVID-19, Diagnostic Techniques and Procedures, Diagnostic Tests, Routine, Humans, SARS-CoV-2, Sensitivity and Specificity
17.
Cochrane Database Syst Rev ; 3: CD013705, 2021 03 24.
Article in English | MEDLINE | ID: mdl-33760236

ABSTRACT

BACKGROUND: Accurate rapid diagnostic tests for SARS-CoV-2 infection could contribute to clinical and public health strategies to manage the COVID-19 pandemic. Point-of-care antigen and molecular tests to detect current infection could increase access to testing and early confirmation of cases, and expedite clinical and public health management decisions that may reduce transmission. OBJECTIVES: To assess the diagnostic accuracy of point-of-care antigen and molecular-based tests for diagnosis of SARS-CoV-2 infection. We consider accuracy separately in symptomatic and asymptomatic population groups. SEARCH METHODS: Electronic searches of the Cochrane COVID-19 Study Register and the COVID-19 Living Evidence Database from the University of Bern (which includes daily updates from PubMed and Embase and preprints from medRxiv and bioRxiv) were undertaken on 30 Sept 2020. We checked repositories of COVID-19 publications and included independent evaluations from national reference laboratories, the Foundation for Innovative New Diagnostics and the Diagnostics Global Health website to 16 Nov 2020. We did not apply language restrictions. SELECTION CRITERIA: We included studies of people with either suspected SARS-CoV-2 infection, known SARS-CoV-2 infection or known absence of infection, or those who were being screened for infection. We included test accuracy studies of any design that evaluated commercially produced, rapid antigen or molecular tests suitable for a point-of-care setting (minimal equipment, sample preparation, and biosafety requirements, with results within two hours of sample collection). We included all reference standards that define the presence or absence of SARS-CoV-2 (including reverse transcription polymerase chain reaction (RT-PCR) tests and established diagnostic criteria). DATA COLLECTION AND ANALYSIS: Studies were screened independently in duplicate with disagreements resolved by discussion with a third author. Study characteristics were extracted by one author and checked by a second; extraction of study results and assessments of risk of bias and applicability (made using the QUADAS-2 tool) were undertaken independently in duplicate. We present sensitivity and specificity with 95% confidence intervals (CIs) for each test and pooled data using the bivariate model separately for antigen and molecular-based tests. We tabulated results by test manufacturer and compliance with manufacturer instructions for use and according to symptom status. MAIN RESULTS: Seventy-eight study cohorts were included (described in 64 study reports, including 20 pre-prints), reporting results for 24,087 samples (7,415 with confirmed SARS-CoV-2). Studies were mainly from Europe (n = 39) or North America (n = 20), and evaluated 16 antigen and five molecular assays. We considered risk of bias to be high in 29 (50%) studies because of participant selection; in 66 (85%) because of weaknesses in the reference standard for absence of infection; and in 29 (45%) for participant flow and timing. Studies of antigen tests were of a higher methodological quality compared to studies of molecular tests, particularly regarding the risk of bias for participant selection and the index test. Characteristics of participants in 35 (45%) studies differed from those in whom the test was intended to be used and the delivery of the index test in 39 (50%) studies differed from the way in which the test was intended to be used. 
Nearly all studies (97%) defined the presence or absence of SARS-CoV-2 based on a single RT-PCR result, and none included participants meeting case definitions for probable COVID-19. Antigen tests Forty-eight studies reported 58 evaluations of antigen tests. Estimates of sensitivity varied considerably between studies. There were differences between symptomatic (72.0%, 95% CI 63.7% to 79.0%; 37 evaluations; 15530 samples, 4410 cases) and asymptomatic participants (58.1%, 95% CI 40.2% to 74.1%; 12 evaluations; 1581 samples, 295 cases). Average sensitivity was higher in the first week after symptom onset (78.3%, 95% CI 71.1% to 84.1%; 26 evaluations; 5769 samples, 2320 cases) than in the second week of symptoms (51.0%, 95% CI 40.8% to 61.0%; 22 evaluations; 935 samples, 692 cases). Sensitivity was high in those with cycle threshold (Ct) values on PCR ≤25 (94.5%, 95% CI 91.0% to 96.7%; 36 evaluations; 2613 cases) compared to those with Ct values >25 (40.7%, 95% CI 31.8% to 50.3%; 36 evaluations; 2632 cases). Sensitivity varied between brands. Using data from instructions for use (IFU) compliant evaluations in symptomatic participants, summary sensitivities ranged from 34.1% (95% CI 29.7% to 38.8%; Coris Bioconcept) to 88.1% (95% CI 84.2% to 91.1%; SD Biosensor STANDARD Q). Average specificities were high in symptomatic and asymptomatic participants, and for most brands (overall summary specificity 99.6%, 95% CI 99.0% to 99.8%). At 5% prevalence using data for the most sensitive assays in symptomatic people (SD Biosensor STANDARD Q and Abbott Panbio), positive predictive values (PPVs) of 84% to 90% mean that between 1 in 10 and 1 in 6 positive results will be a false positive, and between 1 in 4 and 1 in 8 cases will be missed. At 0.5% prevalence applying the same tests in asymptomatic people would result in PPVs of 11% to 28% meaning that between 7 in 10 and 9 in 10 positive results will be false positives, and between 1 in 2 and 1 in 3 cases will be missed. No studies assessed the accuracy of repeated lateral flow testing or self-testing. Rapid molecular assays Thirty studies reported 33 evaluations of five different rapid molecular tests. Sensitivities varied according to test brand. Most of the data relate to the ID NOW and Xpert Xpress assays. Using data from evaluations following the manufacturer's instructions for use, the average sensitivity of ID NOW was 73.0% (95% CI 66.8% to 78.4%) and average specificity 99.7% (95% CI 98.7% to 99.9%; 4 evaluations; 812 samples, 222 cases). For Xpert Xpress, the average sensitivity was 100% (95% CI 88.1% to 100%) and average specificity 97.2% (95% CI 89.4% to 99.3%; 2 evaluations; 100 samples, 29 cases). Insufficient data were available to investigate the effect of symptom status or time after symptom onset. AUTHORS' CONCLUSIONS: Antigen tests vary in sensitivity. In people with signs and symptoms of COVID-19, sensitivities are highest in the first week of illness when viral loads are higher. The assays shown to meet appropriate criteria, such as WHO's priority target product profiles for COVID-19 diagnostics ('acceptable' sensitivity ≥ 80% and specificity ≥ 97%), can be considered as a replacement for laboratory-based RT-PCR when immediate decisions about patient care must be made, or where RT-PCR cannot be delivered in a timely manner. Positive predictive values suggest that confirmatory testing of those with positive results may be considered in low prevalence settings. 
Due to the variable sensitivity of antigen tests, people who test negative may still be infected. Evidence for testing in asymptomatic cohorts was limited. Test accuracy studies cannot adequately assess the ability of antigen tests to differentiate those who are infectious and require isolation from those who pose no risk, as there is no reference standard for infectiousness. A small number of molecular tests showed high accuracy and may be suitable alternatives to RT-PCR. However, further evaluations of the tests in settings as they are intended to be used are required to fully establish performance in practice. Several important studies in asymptomatic individuals have been reported since the close of our search and will be incorporated at the next update of this review. Comparative studies of antigen tests in their intended use settings and according to test operator (including self-testing) are required.


Subject(s)
Antigens, Viral/analysis, COVID-19 Serological Testing/methods, COVID-19/diagnosis, Molecular Diagnostic Techniques/methods, Point-of-Care Systems, SARS-CoV-2/immunology, Adult, Asymptomatic Infections, Bias, COVID-19 Nucleic Acid Testing, COVID-19 Serological Testing/standards, Child, Cohort Studies, False Negative Reactions, False Positive Reactions, Humans, Molecular Diagnostic Techniques/standards, Predictive Value of Tests, Reference Standards, Sensitivity and Specificity
18.
Ann Intern Med ; 172(11): 726-734, 2020 06 02.
Article in English | MEDLINE | ID: mdl-32282894

ABSTRACT

Diagnostic testing to identify persons infected with severe acute respiratory syndrome-related coronavirus 2 (SARS-CoV-2) is central to controlling the global pandemic of COVID-19 that began in late 2019. In a few countries, the use of diagnostic testing on a massive scale has been a cornerstone of successful containment strategies. In contrast, the United States, hampered by limited testing capacity, has prioritized testing for specific groups of persons. Real-time reverse transcriptase polymerase chain reaction-based assays performed in a laboratory on respiratory specimens are the reference standard for COVID-19 diagnostics. However, point-of-care technologies and serologic immunoassays are rapidly emerging. Although excellent tools exist for the diagnosis of symptomatic patients in well-equipped laboratories, important gaps remain in screening asymptomatic persons in the incubation phase, as well as in the accurate determination of live viral shedding during convalescence to inform decisions to end isolation. Many affluent countries have encountered challenges in test delivery and specimen collection that have inhibited rapid increases in testing capacity. These challenges may be even greater in low-resource settings. Urgent clinical and public health needs currently drive an unprecedented global effort to increase testing capacity for SARS-CoV-2 infection. Here, the authors review the current array of tests for SARS-CoV-2, highlight gaps in current diagnostic capacity, and propose potential solutions.


Subject(s)
Coronavirus Infections/diagnosis, Pneumonia, Viral/diagnosis, Betacoronavirus, Biomarkers/blood, COVID-19, COVID-19 Testing, COVID-19 Vaccines, Clinical Laboratory Techniques, Humans, Pandemics, Point-of-Care Testing, Radiography, Thoracic, Real-Time Polymerase Chain Reaction, SARS-CoV-2, Serologic Tests, Specimen Handling/methods
19.
Ann Intern Med ; 173(6): 450-460, 2020 09 15.
Article in English | MEDLINE | ID: mdl-32496919

ABSTRACT

Accurate serologic tests to detect host antibodies to severe acute respiratory syndrome-related coronavirus 2 (SARS-CoV-2) will be critical for the public health response to the coronavirus disease 2019 pandemic. Many use cases are envisaged, including complementing molecular methods for diagnosis of active disease and estimating immunity for individuals. At the population level, carefully designed seroepidemiologic studies will aid in the characterization of transmission dynamics and refinement of disease burden estimates and will provide insight into the kinetics of humoral immunity. Yet, despite an explosion in the number and availability of serologic assays to test for antibodies against SARS-CoV-2, most have undergone minimal external validation to date. This hinders assay selection and implementation, as well as interpretation of study results. In addition, critical knowledge gaps remain regarding serologic correlates of protection from infection or disease, and the degree to which these assays cross-react with antibodies against related coronaviruses. This article discusses key use cases for SARS-CoV-2 antibody detection tests and their application to serologic studies, reviews currently available assays, highlights key areas of ongoing research, and proposes potential strategies for test implementation.


Subject(s)
Betacoronavirus/immunology, Clinical Laboratory Techniques, Coronavirus Infections/diagnosis, Coronavirus Infections/immunology, Pneumonia, Viral/diagnosis, Pneumonia, Viral/immunology, Serologic Tests/methods, COVID-19, COVID-19 Testing, Humans, Pandemics, SARS-CoV-2, Seroepidemiologic Studies
20.
Clin Infect Dis ; 70(11): 2262-2269, 2020 05 23.
Article in English | MEDLINE | ID: mdl-31313805

ABSTRACT

BACKGROUND: In the absence of proper guidelines and algorithms, available rapid diagnostic tests (RDTs) for common acute undifferentiated febrile illnesses are often used inappropriately. METHODS: Using prevalence data of 5 common febrile illnesses from India and Cambodia, and performance characteristics (sensitivity and specificity) of relevant pathogen-specific RDTs, we used a mathematical model to predict the probability of correct identification of each disease when diagnostic testing occurs either simultaneously or sequentially in various algorithms. We developed a web-based application of the model so as to visualize and compare output diagnostic algorithms when different disease prevalence and test performance characteristics are introduced. RESULTS: Diagnostic algorithms with appropriate sequential testing predicted correct identification of etiology in 74% and 89% of patients in India and Cambodia, respectively, compared with 46% and 49% with simultaneous testing. The optimally performing sequential diagnostic algorithms differed in India and Cambodia due to varying disease prevalence. CONCLUSIONS: Simultaneous testing is not appropriate for the diagnosis of acute undifferentiated febrile illnesses with presently available tests, which should deter the unsupervised use of multiplex diagnostic tests. The implementation of adaptive algorithms can predict better diagnosis and add value to the available RDTs. The web application of the model can serve as a tool to identify the optimal diagnostic algorithm in different epidemiological settings, while taking into account the local epidemiological variables and accuracy of available tests.
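A small Monte Carlo sketch, with made-up prevalences, uniform test accuracies, and a deliberately simplified decision rule rather than the study's model or data, shows why a well-ordered sequential algorithm can outperform simultaneous testing when several imperfect RDTs are in play.

```python
import random

random.seed(0)

diseases = ["malaria", "dengue", "scrub_typhus", "leptospirosis", "typhoid"]
prevalence = {"malaria": 0.30, "dengue": 0.25, "scrub_typhus": 0.20,
              "leptospirosis": 0.15, "typhoid": 0.10}
sens = {d: 0.90 for d in diseases}   # per-disease RDT sensitivity (assumed)
spec = {d: 0.95 for d in diseases}   # per-disease RDT specificity (assumed)

def correct_identification(order, n=100_000):
    correct_seq = correct_sim = 0
    for _ in range(n):
        true = random.choices(diseases, weights=[prevalence[d] for d in diseases])[0]
        result = {d: random.random() < (sens[d] if d == true else 1 - spec[d])
                  for d in diseases}
        # Sequential algorithm: test in the given order, stop at the first positive.
        diagnosis = next((d for d in order if result[d]), None)
        correct_seq += diagnosis == true
        # Simultaneous testing: counted correct only if the true disease's test,
        # and no other, comes back positive.
        positives = [d for d in diseases if result[d]]
        correct_sim += positives == [true]
    return correct_seq / n, correct_sim / n

order = sorted(diseases, key=prevalence.get, reverse=True)
seq, sim = correct_identification(order)
print(f"sequential (most prevalent first): {seq:.1%}   simultaneous: {sim:.1%}")
```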


Subject(s)
Algorithms, Diagnostic Tests, Routine, Cambodia/epidemiology, Humans, India/epidemiology, Sensitivity and Specificity