Results 1 - 20 of 160
1.
J Clin Microbiol ; 62(4): e0164923, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38470024

ABSTRACT

Scale-up of newer innovations that address the limitations of the dried blood spot and the logistics of plasma monitoring is needed. We performed a multi-site, cross-sectional assessment of the plasma separation card (PSC) on blood specimens collected from consenting adults and assenting young and pediatric patients living with HIV at 10 primary healthcare clinics in South Africa. Venous blood for EDTA-plasma samples was collected and analyzed with the standard-of-care assay, while capillary blood collected for the PSC samples was analyzed using the Roche COBAS AmpliPrep/COBAS TaqMan (CAP/CTM) HIV-1 Test at the National Reference laboratories. McNemar tests assessed the differences in concordance between the centrifuged plasma and dried plasma spots. The usability of the PSC for blood spotting, PSC preparation, and pre-analytical work was assessed by collecting seven-point Likert-scale data from healthcare and laboratory workers. We enrolled 538 patients, mostly adults [n = 515, 95.7% (95% CI: 93.7%-97.1%)] and females [n = 322, 64.2% (95% CI: 60.0%-68.1%)]. Overall, 536 paired samples were collected using both PSC- and EDTA-plasma diagnostics, and 502 paired PSC- and EDTA-plasma samples were assessed. Concordance between the paired samples was obtained for 446 samples. Analysis of these 446 paired samples at the 1,000 copies per milliliter threshold yielded an overall sensitivity of 87.5% [95% CI: 73.2%-95.8%] and specificity of 99.3% [95% CI: 97.9%-99.8%]. Laboratory staff reported technical difficulties in most tasks, whereas healthcare workers rated the usability of the PSC favorably. For policymakers to consider PSC scale-up for viral load monitoring, technical challenges around using the PSC at the clinic and laboratory level need to be addressed. IMPORTANCE: Findings from this manuscript emphasize the reliability of the plasma separation card (PSC), a novel diagnostic method that can be implemented in healthcare facilities in resource-constrained settings. Agreement between the PSC and the standard-of-care EDTA plasma for viral load monitoring is high. Since the findings showed that these tests were highly specific, we recommend scale-up of the PSC in South Africa for diagnosis of treatment failure.
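
For readers unfamiliar with how such threshold-based accuracy figures are derived, here is a minimal Python sketch of sensitivity and specificity with Wilson 95% CIs from a paired 2x2 table; the counts are invented for illustration (chosen to echo the reported point estimates) and are not the study's data.

```python
# Hypothetical 2x2 counts vs. the EDTA-plasma reference at 1,000 copies/mL;
# illustrative values only, not the study's data.
from statsmodels.stats.proportion import proportion_confint

tp, fn = 35, 5     # reference >= 1,000 copies/mL (treatment failure)
tn, fp = 400, 3    # reference <  1,000 copies/mL (suppressed)

sens = tp / (tp + fn)
spec = tn / (tn + fp)
sens_lo, sens_hi = proportion_confint(tp, tp + fn, alpha=0.05, method="wilson")
spec_lo, spec_hi = proportion_confint(tn, tn + fp, alpha=0.05, method="wilson")
print(f"sensitivity {sens:.1%} (95% CI {sens_lo:.1%}-{sens_hi:.1%})")
print(f"specificity {spec:.1%} (95% CI {spec_lo:.1%}-{spec_hi:.1%})")
```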


Subject(s)
HIV Infections, HIV-1, Adult, Female, Humans, Child, Sensitivity and Specificity, HIV-1/genetics, Viral Load/methods, South Africa, Cross-Sectional Studies, Edetic Acid, Reproducibility of Results, RNA, Viral
2.
Diabet Med ; : e15349, 2024 May 29.
Article in English | MEDLINE | ID: mdl-38808524

ABSTRACT

AIMS: To examine the impact of current age, age at diagnosis, and duration of diabetes on the incidence rate of complications among people with type 2 diabetes. METHODS: Baseline data from 19,327 individuals with type 2 diabetes in the UK Biobank were analysed. Poisson regression was used to model incidence rates by current age, age at diagnosis, and duration of diabetes for the following outcomes: myocardial infarction (MI), heart failure (HF), stroke, end-stage kidney disease (ESKD), chronic kidney disease (CKD), liver diseases, depression, and anxiety. RESULTS: The mean age at baseline was 60.2 years, and median follow-up was 13.9 years. Diabetes duration was significantly longer among those with younger-onset type 2 diabetes (diagnosed at <40 years) than among those with later-onset type 2 diabetes (diagnosed at ≥40 years): 16.2 versus 5.3 years, respectively. Incidence rates of MI, HF, stroke, and CKD showed strong positive associations with current age and duration of diabetes, whereas incidence rates of ESKD, liver diseases, and anxiety depended mainly on duration of diabetes. Incidence rates of depression varied little by age and duration of diabetes and were highest among those diagnosed at earlier ages. No clear evidence of an effect of age of onset of diabetes on the risk of complications was apparent after accounting for current age and duration of diabetes. CONCLUSIONS: Our study indicates that age at diagnosis of diabetes does not significantly affect the incidence of complications once current age and duration of diabetes are accounted for. Instead, complications are primarily influenced by current age and diabetes duration.
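
As a rough illustration of the modelling approach (not the authors' code), a Poisson regression of event counts with a log person-years offset can be sketched with statsmodels; the data frame below is invented.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented aggregate data: incident events and person-years by age/duration strata.
df = pd.DataFrame({
    "events":       [12, 30, 55, 80],          # e.g., incident MI counts
    "person_years": [4000, 5000, 5200, 4100],
    "current_age":  [50, 58, 66, 74],
    "duration":     [4, 8, 12, 16],            # years since diabetes diagnosis
})

# The log person-years offset turns the count model into an incidence-rate model.
fit = smf.glm("events ~ current_age + duration", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["person_years"])).fit()
print(np.exp(fit.params))   # incidence-rate ratios per unit of age/duration
```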

3.
Virol J ; 21(1): 99, 2024 04 29.
Article in English | MEDLINE | ID: mdl-38685117

ABSTRACT

BACKGROUND: During the COVID-19 pandemic, antigen diagnostic tests were frequently used for screening, triage, and diagnosis. Novel instrument-based antigen tests (iAg tests) hold the promise of outperforming their instrument-free, visually read counterparts. Here, we provide a systematic review and meta-analysis of the clinical accuracy of SARS-CoV-2 iAg tests. METHODS: We systematically searched MEDLINE (via PubMed), Web of Science, medRxiv, and bioRxiv for articles published before November 7th, 2022, evaluating the accuracy of iAg tests for SARS-CoV-2 detection. We performed a random effects meta-analysis to estimate sensitivity and specificity and used the QUADAS-2 tool to assess study quality and risk of bias. Sub-group analysis was conducted based on Ct value range, IFU conformity, age, symptom presence and duration, and variant of concern. RESULTS: We screened the titles and abstracts of 20,431 articles and included 114 publications that fulfilled the inclusion criteria. Additionally, we incorporated three articles sourced from the FIND website, totaling 117 studies encompassing 95,181 individuals, which evaluated the clinical accuracy of 24 commercial COVID-19 iAg tests. The studies varied in risk of bias but showed high applicability. For the 24 iAg tests from 99 studies assessed in the meta-analysis, pooled sensitivity and specificity compared to molecular testing of a paired NP swab sample were 76.7% (95% CI 73.5 to 79.7) and 98.4% (95% CI 98.0 to 98.7), respectively. Higher sensitivity was noted in individuals with high viral load (99.6% [95% CI 96.8 to 100] at Ct-level ≤ 20) and within the first week of symptom onset (84.6% [95% CI 78.2 to 89.3]), but sensitivity did not differ between tests conducted per the manufacturer's instructions and those conducted differently, or between point-of-care and lab-based testing. CONCLUSION: Overall, iAg tests have a high pooled specificity but a moderate pooled sensitivity, according to our analysis. The pooled sensitivity increases with lower Ct-values (a proxy for viral load) and within the first week of symptom onset, enabling reliable identification of most COVID-19 cases and highlighting the importance of context in test selection. The study underscores the need for careful evaluation considering performance variations and operational features of iAg tests.
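
The pooling step can be illustrated with a deliberately simplified univariate DerSimonian-Laird model on logit sensitivities (the review itself used a random effects bivariate approach); the study counts below are invented.

```python
import numpy as np

tp = np.array([80, 45, 120])          # true positives per study (invented)
fn = np.array([20, 15, 40])           # false negatives per study (invented)

p = tp / (tp + fn)                    # study-level sensitivities
y = np.log(p / (1 - p))               # logit transform
v = 1 / tp + 1 / fn                   # approximate within-study variances

w = 1 / v                             # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)    # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_re = 1 / (v + tau2)                 # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
print(f"pooled sensitivity ~ {1 / (1 + np.exp(-pooled)):.1%}, tau2 = {tau2:.3f}")
```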


Subject(s)
Antigens, Viral, COVID-19 Serological Testing, COVID-19, SARS-CoV-2, Sensitivity and Specificity, Humans, COVID-19/diagnosis, COVID-19/virology, SARS-CoV-2/immunology, COVID-19 Serological Testing/methods, Antigens, Viral/immunology, Antigens, Viral/analysis, COVID-19 Testing/methods
4.
Int J Mol Sci ; 25(1)2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38203800

ABSTRACT

Tendinopathy (TP) is a complex clinical syndrome characterized by local inflammation, pain in the affected area, and loss of performance, preceded by tendon injury. The disease develops in three phases: an inflammatory phase, a proliferative phase, and a remodeling phase. There are currently no proven treatments for early reversal of this type of injury. However, the pathways of intermediary metabolism, which are necessary for the proper functioning of the organism, are well characterized, and they can be modified by a number of external factors, such as nutritional supplements. In this study, the modulatory effects of four dietary supplements, maslinic acid (MA), hydroxytyrosol (HT), glycine, and aspartate (AA), on hepatic intermediary metabolism were observed in Wistar rats with induced tendinopathy at different stages of the disease. Induced tendinopathy in rats produced alterations in liver intermediary metabolism. The nutraceutical treatments modified intermediary metabolism in the different phases of tendinopathy: AA treatment decreased carbohydrate metabolism; in lipid metabolism, MA and AA decreased lipogenesis during tendinopathy and increased fatty acid oxidation; and in protein metabolism, MA treatment increased GDH and AST activity, HT decreased ALT activity, and AA caused no alteration. Dietary nutritional supplements could therefore help to regulate intermediary metabolism in TP.


Subject(s)
Musculoskeletal Diseases, Oleanolic Acid/analogs & derivatives, Phenylethyl Alcohol/analogs & derivatives, Tendinopathy, Rats, Animals, Rats, Wistar, Dietary Supplements, Lipid Metabolism, Tendinopathy/etiology, Aspartic Acid
5.
Mol Biol Evol ; 39(3)2022 03 02.
Article in English | MEDLINE | ID: mdl-35143670

ABSTRACT

Bioinformatic research relies on large-scale computational infrastructures which have a nonzero carbon footprint, yet so far no study has quantified the environmental costs of bioinformatic tools and commonly run analyses. In this work, we estimate the carbon footprint of bioinformatics (in kilograms of CO2 equivalent units, kgCO2e) using the freely available Green Algorithms calculator (www.green-algorithms.org, last accessed 2022). We assessed 1) bioinformatic approaches in genome-wide association studies (GWAS), RNA sequencing, genome assembly, metagenomics, phylogenetics, and molecular simulations, as well as 2) computation strategies, such as parallelization, CPU (central processing unit) versus GPU (graphics processing unit), cloud versus local computing infrastructure, and geography. In particular, we found that biobank-scale GWAS emitted substantial kgCO2e and that simple software upgrades could make it greener; for example, upgrading from BOLT-LMM v1 to v2.3 reduced the carbon footprint by 73%. Moreover, switching from the average data center to a more efficient one can reduce the carbon footprint by approximately 34%. Memory over-allocation can also be a substantial contributor to an algorithm's greenhouse gas emissions. The use of faster processors or greater parallelization reduces running time but can increase the carbon footprint. Finally, we provide guidance on how researchers can reduce power consumption and minimize kgCO2e. Overall, this work elucidates the carbon footprint of common analyses in bioinformatics and provides solutions which empower a move toward greener research.
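
The underlying arithmetic is simple enough to sketch: energy = runtime x (core power + memory power) x PUE, and carbon = energy x grid carbon intensity. The constants below are illustrative assumptions in the spirit of the Green Algorithms calculator, not its exact parameters.

```python
# Back-of-envelope carbon estimate for a compute job; all constants are
# illustrative assumptions, not the calculator's exact values.
runtime_h        = 12.0     # wall-clock hours
n_cores          = 16
power_per_core_w = 12.0     # W per CPU core
memory_gb        = 64
power_per_gb_w   = 0.3725   # W per GB of allocated memory
pue              = 1.67     # data-centre power usage effectiveness
carbon_intensity = 0.475    # kgCO2e per kWh (grid-dependent)

energy_kwh = runtime_h * (n_cores * power_per_core_w
                          + memory_gb * power_per_gb_w) * pue / 1000
print(f"{energy_kwh:.2f} kWh -> {energy_kwh * carbon_intensity:.2f} kgCO2e")
```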


Subject(s)
Carbon Footprint, Computational Biology, Algorithms, Genome-Wide Association Study, Software
6.
J Antimicrob Chemother ; 78(5): 1160-1167, 2023 05 03.
Article in English | MEDLINE | ID: mdl-37017009

ABSTRACT

BACKGROUND: Minimal data exist on HIV drug resistance patterns and prevalence among paediatric patients failing ART in resource-limited settings. We assessed levels of HIV drug resistance in children with virological failure. METHODS: This cross-sectional study, performed from March 2017 to March 2019 in South Africa, enrolled HIV-positive children aged ≤19 years receiving ART through public health facilities, with recent evidence suggestive of virological failure (at least one viral load ≥1000 copies/mL), across 45 randomly selected high-volume clinics from all nine provinces. Resistance genotyping was performed using next-generation sequencing technologies. Descriptive analysis taking the survey design into account was used to determine outcomes. RESULTS: Among 899 participants enrolled, the adjusted proportion of HIV drug resistance among children with virological failure was 87.5% (95% CI 83.0%-90.9%). Resistance to NNRTIs was detected in 77.4% (95% CI 72.5%-81.7%) of participants, and resistance to NRTIs in 69.5% (95% CI 62.9%-75.4%) of participants. Overall, resistance to PIs was detected in 7.7% (95% CI 4.4%-13.0%) of children. CONCLUSIONS: HIV drug resistance was highly prevalent in paediatric patients failing ART in South Africa, with 9 in 10 patients harbouring resistance to NNRTIs and/or NRTIs. PI-based regimens are predicted to be highly efficacious in achieving virological suppression amongst patients failing NNRTI-based regimens. Scaling up resistance testing amongst patients would facilitate access to second- and third-line regimens in South Africa.


Subject(s)
Anti-HIV Agents, HIV Infections, Humans, Child, Anti-HIV Agents/therapeutic use, HIV Infections/drug therapy, HIV Infections/epidemiology, South Africa/epidemiology, Cross-Sectional Studies, Drug Resistance, Viral, Viral Load, Treatment Failure
7.
Sex Transm Infect ; 99(6): 420-428, 2023 Aug 17.
Article in English | MEDLINE | ID: mdl-36990696

ABSTRACT

BACKGROUND: Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (GC) resulted in over 200 million new sexually transmitted infections last year. Self-sampling strategies alone or combined with digital innovations (ie, online, mobile or computing technologies supporting self-sampling) could improve screening methods. Evidence on all outcomes has not yet been synthesised, so we conducted a systematic review and meta-analysis to address this limitation. METHODS: We searched three databases (period: 1 January 2000-6 January 2023) for reports on self-sampling for CT/GC testing. Outcomes considered for inclusion were: accuracy, feasibility, patient-centred outcomes and impact (ie, changes in linkage to care, first-time testers, uptake, turnaround time or referrals attributable to self-sampling). We used bivariate regression models to meta-analyse accuracy measures from self-sampled CT/GC tests and obtain pooled sensitivity/specificity estimates. We assessed quality with the Cochrane Risk of Bias Tool-2, Newcastle-Ottawa Scale and Quality Assessment of Diagnostic Accuracy Studies-2 tool. RESULTS: We summarised results from 45 studies reporting self-sampling alone (73.3%; 33 of 45) or combined with digital innovations (26.7%; 12 of 45) conducted in 10 high-income (HICs; n=34) and 8 low/middle-income countries (LMICs; n=11). 95.6% (43 of 45) were observational, while 4.4% (2 of 45) were randomised clinical trials. We noted that pooled sensitivity (n=13) for CT/GC was higher in extragenital self-sampling (>91.6% (86.0%-95.1%)) than in vaginal self-sampling (79.6% (62.1%-90.3%)), while pooled specificity remained high (>99.0% (98.2%-99.5%)). Participants found self-sampling highly acceptable (80.0%-100.0%; n=24), but preference varied (23.1%-83.0%; n=16). Self-sampling reached 51.0%-70.0% (n=3) of first-time testers and resulted in 89.0%-100.0% (n=3) linkages to care. Digital innovations led to 65.0%-92.0% engagement and 43.8%-57.1% kit return rates (n=3). Quality of studies varied. DISCUSSION: Self-sampling had mixed sensitivity, reached first-time testers and was accepted with high linkages to care. We recommend self-sampling for CT/GC in HICs but additional evaluations in LMICs. Digital innovations impacted engagement and may reduce disease burden in hard-to-reach populations. PROSPERO REGISTRATION NUMBER: CRD42021262950.


Subject(s)
Chlamydia Infections, Gonorrhea, Female, Humans, Neisseria gonorrhoeae, Chlamydia trachomatis, Gonorrhea/diagnosis, Chlamydia Infections/diagnosis, Risk Factors
8.
Eur J Neurol ; 30(6): 1785-1790, 2023 06.
Article in English | MEDLINE | ID: mdl-36752029

ABSTRACT

BACKGROUND AND PURPOSE: Differentiating between peripheral and central aetiologies can be challenging in patients with acute vertigo, given substantial symptom overlap. A detailed clinical history and a focused physical eye movement examination such as the HINTS eye examination appear to be the most reliable approach to identifying acute cerebellar/brainstem stroke, outperforming even acute brain imaging. We have observed, however, that isolated vertigo of central cause may be accompanied by acute truncal ataxia in the absence of nystagmus. METHODS: We explored the frequency of ataxia without concurrent nystagmus in a cross-section of patients with acute vertigo who presented to the emergency department at two centres, in Argentina (Group A) and the UK (Group B). Patients underwent detailed clinical neuro-otological assessments (Groups A and B), which included instrumented head impulse testing and oculography (Group B). RESULTS: A total of 71 patients in Group A and 24 patients in Group B were included in this study. We found acute truncal ataxia without nystagmus in 15% (n = 14) of our overall cohort. Underlying diagnoses included stroke syndromes affecting the posterior inferior cerebellar artery, anterior inferior cerebellar artery, and superior cerebellar artery, thalamic stroke, cerebral hemisphere stroke, multiple sclerosis, and a cerebellar tumour. Additional oculomotor deficits did not reliably identify a central cause in these individuals, even with oculography. CONCLUSIONS: We have identified a significant subpopulation of patients with acute vertigo in whom current standard approaches such as the HINTS examination, which focus on oculomotor assessment, may not be applicable, highlighting the need for a formal assessment of gait in this setting.


Subject(s)
Brain Stem Infarctions, Nystagmus, Pathologic, Stroke, Humans, Vertigo/complications, Stroke/complications, Stroke/diagnostic imaging, Cerebellum, Ataxia, Nystagmus, Pathologic/etiology, Nystagmus, Pathologic/diagnosis
9.
Circulation ; 143(21): 2061-2073, 2021 05 25.
Article in English | MEDLINE | ID: mdl-33853383

ABSTRACT

BACKGROUND: Exertional intolerance is a limiting and often crippling symptom in patients with chronic thromboembolic pulmonary hypertension (CTEPH). Traditionally, the pathogenesis has been attributed to central factors, including ventilation/perfusion mismatch, increased pulmonary vascular resistance, and right heart dysfunction and uncoupling. Pulmonary endarterectomy and balloon pulmonary angioplasty provide substantial improvement of functional status and hemodynamics. However, despite normalization of pulmonary hemodynamics, exercise capacity often does not return to age-predicted levels. By systematically evaluating the oxygen pathway, we aimed to elucidate the causes of functional limitations in patients with CTEPH before and after pulmonary vascular intervention. METHODS: Using exercise cardiac magnetic resonance imaging with simultaneous invasive hemodynamic monitoring, we sought to quantify the steps of the O2 transport cascade from the mouth to the mitochondria in patients with CTEPH (n=20) as compared with healthy participants (n=10). Furthermore, we evaluated the effect of pulmonary vascular intervention (pulmonary endarterectomy or balloon angioplasty) on the individual components of the cascade (n=10). RESULTS: Peak Vo2 (oxygen uptake) was significantly reduced in patients with CTEPH relative to controls (56±17 versus 112±20% of predicted; P<0.0001). The difference was attributable to impairments in multiple steps of the O2 cascade, including O2 delivery (product of cardiac output and arterial O2 content), skeletal muscle diffusion capacity, and pulmonary diffusion. The total O2 extracted in the periphery (ie, ΔAVo2 [arteriovenous O2 content difference]) was not different. After pulmonary vascular intervention, peak Vo2 increased significantly (from 12.5±4.0 to 17.8±7.5 mL/[kg·min]; P=0.036) but remained below age-predicted levels (70±11%). The O2 delivery was improved owing to an increase in peak cardiac output and lung diffusion capacity. However, peak exercise ΔAVo2 was unchanged, as was skeletal muscle diffusion capacity. CONCLUSIONS: We demonstrated that patients with CTEPH have significant impairment of all steps in the O2 use cascade, resulting in markedly impaired exercise capacity. Pulmonary vascular intervention increased peak Vo2 by partly correcting O2 delivery but had no effect on abnormalities in peripheral O2 extraction. This suggests that current interventions only partially address patients' limitations and that additional therapies may improve functional capacity.
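
The cascade framing rests on the Fick principle, VO2 = cardiac output x arteriovenous O2 content difference. A toy calculation with invented values:

```python
# Fick principle: VO2 = Q x (CaO2 - CvO2); all values below are invented.
cardiac_output_l_min = 12.0   # peak exercise cardiac output (L blood/min)
ca_o2 = 0.19                  # arterial O2 content (L O2 per L blood)
cv_o2 = 0.07                  # mixed venous O2 content (L O2 per L blood)
weight_kg = 75.0

vo2_l_min = cardiac_output_l_min * (ca_o2 - cv_o2)
print(f"peak VO2 ~ {vo2_l_min * 1000 / weight_kg:.1f} mL/(kg*min)")  # ~19.2 here
```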


Subject(s)
Hypertension, Pulmonary/physiopathology, Oxygen/physiology, Chronic Disease, Female, Healthy Volunteers, Humans, Male, Middle Aged
10.
PLoS Med ; 19(12): e1004111, 2022 12.
Article in English | MEDLINE | ID: mdl-36472973

ABSTRACT

BACKGROUND: Cardiovascular diseases (CVDs) are the leading cause of mortality globally, accounting for almost a third of all annual deaths worldwide. Low- and middle-income countries (LMICs) are disproportionately affected, accounting for 80% of these deaths. For CVD, hypertension (HTN) is the leading modifiable risk factor. The comparative impact of diagnostic interventions that improve either the accuracy, the reach, or the completion of HTN screening, relative to the current standard of care, has not been estimated. METHODS AND FINDINGS: This microsimulation study estimated the impact on HTN-induced morbidity and mortality in LMICs for four different scenarios: (S1) lower HTN diagnostic accuracy; (S2) improved HTN diagnostic accuracy; (S3) better implementation strategies to reach more persons with existing tools; and, lastly, (S4) the wider use of easy-to-use tools, such as validated, automated digital blood pressure measurement devices to enhance screening completion, in comparison to the current standard of care (S0). Our hypothetical population was parametrized using nationally representative, individual-level HPACC data and Global Burden of Disease data. The prevalence of HTN in the population was 31%; 60% of those affected remained undiagnosed. We investigated how altering a yearly blood pressure screening event impacts morbidity and mortality in the population over a period of 10 years. The study showed that while improving test accuracy avoids 0.6% of HTN-induced deaths over 10 years (13,856,507 [9,382,742; 17,395,833]), almost 40 million (39,650,363 [31,34,233, 49,298,921], i.e., 12.7% [9.9, 15.8]) of the HTN-induced deaths could be prevented by increasing coverage and completion of a screening event in the same time frame. Doubling coverage alone would still prevent 3,304,212 ([2,274,664; 4,164,180], 12.1% [8.3, 15.2]) CVD events 10 years after the rollout of the program. Our study is limited by the scarce data available on HTN and CVD from LMICs. We had to pool some parameters across stratification groups, and additional information, such as dietary habits, lifestyle choices, or blood pressure trajectories, could not be considered. Nevertheless, the microsimulation enabled us to include substantial heterogeneity and stochasticity across the different income groups and personal CVD risk scores in the model. CONCLUSIONS: While it is important to consider investing in newer diagnostics for blood pressure testing to continuously improve ease of use and accuracy, more emphasis should be placed on screening completion.
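
To make the microsimulation logic concrete, here is a minimal sketch of a yearly screening event repeated over 10 years; the coverage and sensitivity figures are invented, and only the 31% prevalence is taken from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
hypertensive = rng.random(n) < 0.31       # 31% prevalence, per the abstract
diagnosed = np.zeros(n, dtype=bool)

def yearly_screen(coverage, sensitivity):
    """One screening event: who is reached and correctly detected."""
    reached = rng.random(n) < coverage
    return hypertensive & reached & (rng.random(n) < sensitivity)

for year in range(10):
    diagnosed |= yearly_screen(coverage=0.40, sensitivity=0.85)

undiagnosed = hypertensive & ~diagnosed
print(f"still undiagnosed after 10 years: {undiagnosed.mean():.1%} of population")
```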


Subject(s)
Hypertension, Humans, Hypertension/diagnosis, Hypertension/epidemiology
11.
PLoS Med ; 19(5): e1004011, 2022 05.
Article in English | MEDLINE | ID: mdl-35617375

ABSTRACT

BACKGROUND: Comprehensive information about the accuracy of antigen rapid diagnostic tests (Ag-RDTs) for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) is essential to guide public health decision makers in choosing the best tests and testing policies. In August 2021, we published a systematic review and meta-analysis about the accuracy of Ag-RDTs. We now update this work and analyze the factors influencing test sensitivity in further detail. METHODS AND FINDINGS: We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched preprint and peer-reviewed databases for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 until August 31, 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity with reverse transcription polymerase chain reaction (RT-PCR) testing as a reference. To evaluate factors influencing test sensitivity, we performed 3 different analyses using multivariable mixed-effects meta-regression models. We included 194 studies with 221,878 Ag-RDTs performed. Overall, the pooled estimates of Ag-RDT sensitivity and specificity were 72.0% (95% confidence interval [CI] 69.8 to 74.2) and 98.9% (95% CI 98.6 to 99.1). When manufacturer instructions were followed, sensitivity increased to 76.3% (95% CI 73.7 to 78.7). Sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values (97.9% [95% CI 96.9 to 98.9] and 90.6% [95% CI 88.3 to 93.0] for Ct-values <20 and <25, compared to 54.4% [95% CI 47.3 to 61.5] and 18.7% [95% CI 13.9 to 23.4] for Ct-values ≥25 and ≥30) and was estimated to increase by 2.9 percentage points (95% CI 1.7 to 4.0) for every unit decrease in mean Ct-value when adjusting for testing procedure and patients' symptom status. Concordantly, we found the mean Ct-value to be lower for true positive (22.2 [95% CI 21.5 to 22.8]) compared to false negative (30.4 [95% CI 29.7 to 31.1]) results. Testing in the first week from symptom onset resulted in substantially higher sensitivity (81.9% [95% CI 77.7 to 85.5]) compared to testing after 1 week (51.8%, 95% CI 41.5 to 61.9). Similarly, sensitivity was higher in symptomatic (76.2% [95% CI 73.3 to 78.9]) compared to asymptomatic (56.8% [95% CI 50.9 to 62.4]) persons. However, both effects were mainly driven by the Ct-value of the sample. With regard to sample type, highest sensitivity was found for nasopharyngeal (NP) and combined NP/oropharyngeal samples (70.8% [95% CI 68.3 to 73.2]), as well as in anterior nasal/mid-turbinate samples (77.3% [95% CI 73.0 to 81.0]). Our analysis was limited by the included studies' heterogeneity in viral load assessment and sample origination. CONCLUSIONS: Ag-RDTs detect most of the individuals infected with SARS-CoV-2, and almost all (>90%) when high viral loads are present. With viral load, as estimated by Ct-value, being the most influential factor in their sensitivity, they are especially useful to detect persons with high viral load who are most likely to transmit the virus. To further quantify the effects of other factors influencing test sensitivity, standardization of clinical accuracy studies and access to patient-level Ct-values and duration of symptoms are needed.
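
The Ct-value effect can be illustrated, in a much simpler form than the mixed-effects models used above, as a weighted regression of study-level sensitivity on mean Ct; the study points below are invented.

```python
import numpy as np
import statsmodels.api as sm

mean_ct = np.array([18.0, 21.0, 24.0, 27.0, 30.0])   # invented study means
sens    = np.array([0.97, 0.90, 0.78, 0.60, 0.40])   # invented sensitivities
n_obs   = np.array([200, 350, 300, 250, 150])        # study sizes as weights

# Weighted least squares: slope is the change in sensitivity per unit Ct.
fit = sm.WLS(sens, sm.add_constant(mean_ct), weights=n_obs).fit()
print(f"~{abs(fit.params[1]) * 100:.1f} percentage points per unit Ct")
```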


Subject(s)
COVID-19, SARS-CoV-2, COVID-19/diagnosis, COVID-19 Testing, Humans, Point-of-Care Systems, Sensitivity and Specificity
12.
PLoS Med ; 19(8): e1004076, 2022 08.
Article in English | MEDLINE | ID: mdl-35994520

ABSTRACT

BACKGROUND: Accurate routine HIV viral load testing is essential for assessing the efficacy of antiretroviral treatment (ART) regimens and the emergence of drug resistance. While the use of plasma specimens is the standard for viral load testing, its use is restricted by the limited ambient-temperature stability of viral load biomarkers in whole blood and plasma during storage and transportation and the limited cold chain available between many health care facilities in resource-limited settings. Alternative specimen types and technologies, such as dried blood spots, may address these issues and increase access to viral load testing; however, their technical performance is unclear. To address this, we conducted a meta-analysis comparing viral load results from paired dried blood spot and plasma specimens analyzed with commonly used viral load testing technologies. METHODS AND FINDINGS: Standard databases, conferences, and gray literature were searched in 2013 and 2018. Nearly all of the studies identified (60) were conducted between 2007 and 2018. Data from 40 of the 60 studies were included in the meta-analysis, accounting for a total of 10,871 paired dried blood spot:plasma data points. We used random effects models to determine the bias, accuracy, precision, and misclassification for each viral load technology and to account for between-study variation. Dried blood spot specimens produced consistently higher mean viral loads across all technologies when compared to plasma specimens. However, when used to identify treatment failure, each technology compared best to plasma at a threshold of 1,000 copies/mL, the current World Health Organization-recommended treatment failure threshold. Some heterogeneity existed between technologies; however, 5 technologies had a sensitivity greater than 95%. Furthermore, 5 technologies had a specificity greater than 85%, yet 2 technologies had a specificity less than 60%, at the 1,000 copies/mL treatment failure threshold. The study's main limitation was the limited direct applicability of the findings, as nearly all studies to date used dried blood spot samples prepared in laboratories using precision pipetting, which resulted in consistent input volumes. CONCLUSIONS: This analysis provides evidence to support the implementation and scale-up of dried blood spot specimens for viral load testing, using the same 1,000 copies/mL treatment failure threshold as is used with plasma specimens. This may support improved access to viral load testing in resource-limited settings lacking the infrastructure and cold chain storage required for testing with plasma specimens.
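
The bias component of such a paired comparison is essentially a Bland-Altman analysis of log10 viral loads, plus misclassification at the 1,000 copies/mL (log10 = 3) threshold; the sketch below uses invented pairs.

```python
import numpy as np

plasma = np.array([3.2, 4.5, 2.1, 5.0, 3.8])   # log10 copies/mL (invented)
dbs    = np.array([3.5, 4.7, 2.6, 5.1, 4.1])   # paired DBS results (invented)

diff = dbs - plasma
bias = diff.mean()
sd = diff.std(ddof=1)
print(f"bias {bias:+.2f} log10 copies/mL, "
      f"limits of agreement {bias - 1.96*sd:+.2f} to {bias + 1.96*sd:+.2f}")

# Misclassification at the 1,000 copies/mL treatment-failure threshold:
discordant = np.sum((plasma >= 3) != (dbs >= 3))
print(f"discordant classifications: {discordant} of {len(plasma)}")
```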


Subject(s)
HIV Infections, HIV-1, Dried Blood Spot Testing/methods, HIV Infections/diagnosis, HIV Infections/drug therapy, HIV-1/genetics, Humans, RNA, Viral, Sensitivity and Specificity, Viral Load/methods
13.
Int J Mol Sci ; 23(9)2022 Apr 26.
Article in English | MEDLINE | ID: mdl-35563148

ABSTRACT

The prediction of how a ligand binds to its target is an essential step for Structure-Based Drug Design (SBDD) methods. Molecular docking is a standard tool to predict the binding mode of a ligand to its macromolecular receptor and to quantify their mutual complementarity, with multiple applications in drug design. However, docking programs do not always find correct solutions, either because correct poses are not sampled or because of inaccuracies in the scoring functions. Quantifying docking performance in real scenarios is essential to understanding its limitations, managing expectations and guiding future developments. Here, we present a fully automated pipeline for pose prediction validated by participating in the Continuous Evaluation of Ligand Pose Prediction (CELPP) Challenge. Acknowledging the intrinsic limitations of the docking method, we devised a strategy to automatically mine and exploit pre-existing data, defining, whenever possible, empirical restraints to guide the docking process. We show that the pipeline is able to generate predictions for most of the proposed targets and to obtain poses with low RMSD values when compared to the crystal structure. All things considered, our pipeline highlights some major challenges in the automatic prediction of protein-ligand complexes, which will be addressed in future versions of the pipeline.
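
The pose-quality criterion referred to above is heavy-atom RMSD against the crystal pose; a minimal sketch, with invented coordinates and a fixed atom correspondence (no symmetry correction), looks like this:

```python
import numpy as np

# Invented ligand heavy-atom coordinates (angstroms), same atom ordering.
predicted = np.array([[1.0, 2.0, 0.5], [2.1, 2.9, 1.6], [3.2, 4.1, 2.4]])
crystal   = np.array([[1.2, 2.1, 0.4], [2.0, 3.0, 1.5], [3.0, 4.0, 2.5]])

rmsd = np.sqrt(np.mean(np.sum((predicted - crystal) ** 2, axis=1)))
print(f"RMSD = {rmsd:.2f} A")   # poses under ~2 A are conventionally "correct"
```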


Subject(s)
Drug Design, Binding Sites, Crystallography, X-Ray, Ligands, Molecular Docking Simulation, Protein Binding, Protein Conformation
14.
Neurologia ; 2022 Aug 03.
Article in Spanish | MEDLINE | ID: mdl-35936979

ABSTRACT

INTRODUCTION: Patients with post-COVID-19 syndrome may present with cognitive and emotional symptomatology. This study aims to analyse the results of an outpatient neuropsychological intervention program for post-COVID-19 syndrome. METHOD: In June 2020, Institut Guttmann started an outpatient post-COVID-19 neurorehabilitation program, including respiratory therapy, physiotherapy, and neuropsychological rehabilitation. Before and after the program, the cognitive-emotional state of all participants was assessed. Six months after treatment, a follow-up assessment was administered, which included collecting information on various aspects of daily life. RESULTS: The sample analysed consisted of 123 patients (mean age: 51 years, SD: 12.41). Seventy-four per cent (n=91) had cognitive impairment and underwent cognitive treatment (experimental group); the remaining 26% (n=32) constituted the control group. After the intervention, the experimental group improved in working memory, verbal memory (learning, recall and recognition), verbal fluency and anxious-depressive symptomatology. The control group showed changes in immediate memory, verbal memory (learning and recognition) and depressive symptomatology, although the effect size in the latter two was smaller than in the experimental group. Six months after treatment, 44.9% of the patients were unable to perform their pre-COVID-19 work activity, and 81.2% reported difficulties in their activities of daily living. CONCLUSIONS: Neuropsychological rehabilitation is an effective tool to treat the cognitive-emotional deficits present in post-COVID-19 syndrome. However, months after the end of treatment, not all patients recover their pre-COVID-19 functional level.

15.
PLoS Med ; 18(3): e1003479, 2021 03.
Article in English | MEDLINE | ID: mdl-33789340

ABSTRACT

BACKGROUND: Despite widespread availability of HIV treatment, patient outcomes differ across facilities. We propose and evaluate an approach to measure quality of HIV care at health facilities in South Africa's national HIV program using routine laboratory data. METHODS AND FINDINGS: Data were extracted from South Africa's National Health Laboratory Service (NHLS) Corporate Data Warehouse. All CD4 counts, viral loads (VLs), and other laboratory tests used in HIV monitoring were linked, creating a validated patient identifier. We constructed longitudinal HIV care cascades for all patients in the national HIV program, excluding data from the Western Cape and very small facilities. We then estimated for each facility in each year (2011 to 2015) the following cascade measures identified a priori as reflecting quality of HIV care: median CD4 count among new patients; retention 12 months after presentation; 12-month retention among patients established in care; viral suppression; CD4 recovery; monitoring after an elevated VL. We used factor analysis to identify an underlying measure of quality of care, and we assessed the persistence of this quality measure over time. We then assessed spatiotemporal variation and facility and population predictors in a multivariable regression context. We analyzed data on 3,265 facilities with a median (IQR) annual size of 441 (189 to 988) lab-monitored HIV patients. Retention 12 months after presentation increased from 42% to 47% during the study period, and viral suppression increased from 66% to 79%, although there was substantial variability across facilities. We identified an underlying measure of quality of HIV care that correlated with all cascade measures except median CD4 count at presentation. Averaging across the 5 years of data, this quality score attained a reliability of 0.84. Quality was higher for clinics (versus hospitals), in rural (versus urban) areas, and for larger facilities. Quality was lower in high-poverty areas but was not independently associated with percent Black. Quality increased by 0.49 (95% CI 0.46 to 0.53) standard deviations from 2011 to 2015, and there was evidence of geospatial autocorrelation (p < 0.001). The study's limitations include an inability to fully adjust for underlying patient risk, reliance on laboratory data which do not capture all relevant domains of quality, potential for errors in record linkage, and the omission of Western Cape. CONCLUSIONS: We observed persistent differences in HIV care and treatment outcomes across South African facilities. Targeting low-performing facilities for additional support could reduce overall burden of disease.
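
The factor-analysis step, extracting one latent quality score from several correlated facility-level cascade measures, can be sketched as follows; the facility data are randomly generated stand-ins, not the NHLS data.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
facilities = pd.DataFrame({          # invented facility-level cascade measures
    "retention_12m":     rng.uniform(0.30, 0.70, 200),
    "viral_suppression": rng.uniform(0.50, 0.90, 200),
    "cd4_recovery":      rng.uniform(0.40, 0.80, 200),
    "vl_followup":       rng.uniform(0.20, 0.60, 200),
})

z = StandardScaler().fit_transform(facilities)
fa = FactorAnalysis(n_components=1, random_state=0).fit(z)
quality = fa.transform(z)[:, 0]      # one latent quality score per facility
print("loadings:", dict(zip(facilities.columns, fa.components_[0].round(2))))
```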


Subject(s)
HIV Infections/drug therapy, Health Facilities/statistics & numerical data, Adult, Aged, CD4 Lymphocyte Count/statistics & numerical data, Cohort Studies, Delivery of Health Care/organization & administration, Humans, Middle Aged, Reproducibility of Results, South Africa, Treatment Outcome, Viral Load/statistics & numerical data, Young Adult
16.
PLoS Med ; 18(8): e1003735, 2021 08.
Article in English | MEDLINE | ID: mdl-34383750

ABSTRACT

BACKGROUND: SARS-CoV-2 antigen rapid diagnostic tests (Ag-RDTs) are increasingly being integrated into testing strategies around the world. Studies of the Ag-RDTs have shown variable performance. In this systematic review and meta-analysis, we assessed the clinical accuracy (sensitivity and specificity) of commercially available Ag-RDTs. METHODS AND FINDINGS: We registered the review on PROSPERO (registration number: CRD42020225140). We systematically searched multiple databases (PubMed, Web of Science Core Collection, medRxiv, bioRxiv, and FIND) for publications evaluating the accuracy of Ag-RDTs for SARS-CoV-2 up until 30 April 2021. Descriptive analyses of all studies were performed, and when more than 4 studies were available, a random-effects meta-analysis was used to estimate pooled sensitivity and specificity in comparison to reverse transcription polymerase chain reaction (RT-PCR) testing. We assessed heterogeneity by subgroup analyses, and rated study quality and risk of bias using the QUADAS-2 assessment tool. From a total of 14,254 articles, we included 133 analytical and clinical studies resulting in 214 clinical accuracy datasets with 112,323 samples. Across all meta-analyzed samples, the pooled Ag-RDT sensitivity and specificity were 71.2% (95% CI 68.2% to 74.0%) and 98.9% (95% CI 98.6% to 99.1%), respectively. Sensitivity increased to 76.3% (95% CI 73.1% to 79.2%) if analysis was restricted to studies that followed the Ag-RDT manufacturers' instructions. LumiraDx showed the highest sensitivity, with 88.2% (95% CI 59.0% to 97.5%). Of instrument-free Ag-RDTs, Standard Q nasal performed best, with 80.2% sensitivity (95% CI 70.3% to 87.4%). Across all Ag-RDTs, sensitivity was markedly better on samples with lower RT-PCR cycle threshold (Ct) values, i.e., <20 (96.5%, 95% CI 92.6% to 98.4%) and <25 (95.8%, 95% CI 92.3% to 97.8%), in comparison to those with Ct ≥ 25 (50.7%, 95% CI 35.6% to 65.8%) and ≥30 (20.9%, 95% CI 12.5% to 32.8%). Testing in the first week from symptom onset resulted in substantially higher sensitivity (83.8%, 95% CI 76.3% to 89.2%) compared to testing after 1 week (61.5%, 95% CI 52.2% to 70.0%). The best Ag-RDT sensitivity was found with anterior nasal sampling (75.5%, 95% CI 70.4% to 79.9%), in comparison to other sample types (e.g., nasopharyngeal, 71.6%, 95% CI 68.1% to 74.9%), although CIs were overlapping. Concerns of bias were raised across all datasets, and financial support from the manufacturer was reported in 24.1% of datasets. Our analysis was limited by the included studies' heterogeneity in design and reporting. CONCLUSIONS: In this study we found that Ag-RDTs detect the vast majority of SARS-CoV-2-infected persons within the first week of symptom onset and those with high viral load. Thus, they can have high utility for diagnostic purposes in the early phase of disease, making them a valuable tool to fight the spread of SARS-CoV-2. Standardization in conduct and reporting of clinical accuracy studies would improve comparability and use of data.


Subject(s)
COVID-19 Serological Testing/methods, Age Factors, Antigens, Viral/analysis, COVID-19/diagnosis, COVID-19/etiology, COVID-19 Serological Testing/standards, Carrier State/diagnosis, Carrier State/virology, Humans, Nasopharynx/virology, Reagent Kits, Diagnostic, Reference Standards, SARS-CoV-2/immunology, Sensitivity and Specificity, Viral Load
17.
J Clin Microbiol ; 59(3)2021 02 18.
Article in English | MEDLINE | ID: mdl-33268535

ABSTRACT

Failure to rapidly identify drug-resistant tuberculosis (TB) increases the risk of patient mismanagement, the amplification of drug resistance, and ongoing transmission. We generated comparative analytical data for four automated assays for the detection of TB and multidrug-resistant TB (MDR-TB): Abbott RealTime MTB and MTB RIF/INH (Abbott), Hain Lifescience FluoroType MTBDR (Hain), BD Max MDR-TB (BD), and Roche cobas MTB and MTB-RIF/INH (Roche). We included Xpert MTB/RIF (Xpert) and GenoType MTBDRplus as comparators for TB and drug resistance detection, respectively. We assessed analytical sensitivity for the detection of the Mycobacterium tuberculosis complex using inactivated strains (M. tuberculosis H37Rv and M. bovis) spiked into TB-negative sputa and computed the 95% limits of detection (LOD95). We assessed the accuracy of rifampicin and isoniazid resistance detection using well-characterized M. tuberculosis strains with high-confidence mutations accounting for >85% of first-line resistance mechanisms globally. For H37Rv and M. bovis, we measured LOD95 values of 3,781 and 2,926 (Xpert), 322 and 2,182 (Abbott), 826 and 4,301 (BD), 10,398 and 23,139 (Hain), and 2,416 and 2,136 (Roche) genomes/mL, respectively. Assays targeting multicopy genes or targets (Abbott, BD, and Roche) showed increased analytical sensitivity compared to Xpert. Quantification of the panel by quantitative real-time PCR prevents the determination of absolute values, and results reported here can be interpreted for comparison purposes only. All assays showed accuracy comparable to that of GenoType MTBDRplus for the detection of rifampicin and isoniazid resistance. The data from this analytical study suggest that the assays may have clinical performances similar to those of WHO-recommended molecular TB and MDR-TB assays.
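
An LOD95 of this kind is typically estimated by probit regression of hit rate on log10 concentration across a dilution series; the sketch below uses an invented dilution panel, not the study's measurements.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

log10_conc = np.array([2.0, 2.5, 3.0, 3.5, 4.0])    # genomes/mL (invented)
detected   = np.array([4, 10, 17, 20, 20])          # positives of 20 replicates
replicates = np.full(5, 20)

X = sm.add_constant(log10_conc)
endog = np.column_stack([detected, replicates - detected])
fit = sm.GLM(endog, X,
             family=sm.families.Binomial(sm.families.links.Probit())).fit()

# Solve probit(0.95) = b0 + b1 * log10(LOD95) for the concentration.
lod95 = 10 ** ((norm.ppf(0.95) - fit.params[0]) / fit.params[1])
print(f"LOD95 ~ {lod95:,.0f} genomes/mL")
```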


Subject(s)
Mycobacterium tuberculosis, Tuberculosis, Multidrug-Resistant, Humans, Isoniazid/pharmacology, Microbial Sensitivity Tests, Mycobacterium tuberculosis/genetics, Rifampin/pharmacology, Sensitivity and Specificity, Tuberculosis, Multidrug-Resistant/diagnosis
18.
Cerebellum ; 20(5): 673-677, 2021 Oct.
Article in English | MEDLINE | ID: mdl-31396823

ABSTRACT

In clinical practice, the head impulse test paradigm (HIMP) and the suppression head impulse paradigm (SHIMP) use high-frequency head movements so that the visual system is temporarily suppressed. The two tests could also be useful tools for vestibular assessment at low frequencies: VVOR (visually enhanced vestibulo-ocular reflex) and VORS (vestibulo-ocular reflex suppression). The aim of this study was to analyze the eye movements typically found during VVOR and VORS testing in patients with unilateral and bilateral vestibular hypofunction. Twenty patients with unilateral vestibular hypofunction, three patients with bilateral vestibular hypofunction, and ten patients with normal vestibular function (control group) were analyzed through VVOR and VORS testing with an Otometrics ICS Impulse system. During the VVOR test, patients with unilateral vestibular hypofunction exhibited corrective saccades in the direction of the nystagmus fast phase toward the healthy side when the head was rotated toward the affected side, while patients with bilateral vestibular hypofunction exhibited corrective saccades opposite to the direction of head movement to each side. During the VORS test, patients with unilateral vestibular hypofunction seemed to exhibit larger corrective saccades to the healthy side when the head was moved to this side, while patients with bilateral vestibular hypofunction did not exhibit corrective saccades during head movements to either side. Our data suggest that the VVOR and VORS tests yield the same diagnostic information as the HIMP and SHIMP tests in unilateral and bilateral vestibular hypofunction and can contribute to the diagnosis of a peripheral vestibular loss as well as identification of the affected side.


Subject(s)
Reflex, Vestibulo-Ocular, Saccades, Cerebellum, Head Impulse Test, Humans, Rotation
19.
Cerebellum ; 20(1): 4-8, 2021 Feb.
Article in English | MEDLINE | ID: mdl-32794025

ABSTRACT

Virtual practice has brought major advances in the way that we care for patients in the modern era. The culture of virtual practice, consulting, and telemedicine, which had started several years ago, took an accelerated leap as humankind was challenged by the novel coronavirus pandemic (COVID-19). The social distancing measures and lockdowns imposed in many countries left medical care providers with limited options for evaluating ambulatory patients, pushing a rapid transition to assessments via virtual platforms. In this novel arena of medical practice, which may form new norms beyond the current pandemic crisis, we found it critical to define guidelines on the recommended practice in neurotology, including remote methods for examining vestibular and eye movement function. The proposed remote examination methods aim to reliably diagnose acute and subacute diseases of the inner ear, brainstem, and cerebellum. A key aim was to triage patients into those requiring urgent emergency room assessment versus those suited to non-urgent but expedited outpatient management. Physicians with expertise in managing patients with vestibular disorders were invited to participate in the taskforce. The focus was on two topics: (1) an adequate eye movement and vestibular examination strategy using virtual platforms and (2) a decision pathway providing guidance about which patients should seek urgent medical care and which should have non-urgent but expedited outpatient management.


Subject(s)
COVID-19, Neurologic Examination/methods, Telemedicine/methods, Triage/methods, Vestibular Diseases/diagnosis, Consensus, Humans, SARS-CoV-2
20.
MMWR Morb Mortal Wkly Rep ; 70(21): 775-778, 2021 May 28.
Article in English | MEDLINE | ID: mdl-34043612

ABSTRACT

One component of the Joint United Nations Programme on HIV/AIDS (UNAIDS) goal to end the HIV/AIDS epidemic by 2030 is that 95% of all persons receiving antiretroviral therapy (ART) achieve viral suppression. Thus, testing all HIV-positive persons for viral load (number of copies of viral RNA per mL) is a global health priority (1). CDC and other U.S. government agencies, as part of the U.S. President's Emergency Plan for AIDS Relief (PEPFAR), together with other stakeholders, have provided technical assistance and supported the cost for multiple countries in sub-Saharan Africa to expand viral load testing as the preferred strategy for monitoring clinical response to ART. The individual and population-level benefits of ART are well understood (2). Persons receiving ART who achieve and sustain an undetectable viral load do not transmit HIV to their sex partners, thereby disrupting onward transmission (2,3). Viral load testing is a cost-effective and sustainable programmatic approach for monitoring treatment success, allowing reduced frequency of health care visits for patients who are virally suppressed (4). Viral load monitoring enables early and accurate detection of treatment failure before immunologic decline. This report describes progress on the scale-up of viral load testing in eight sub-Saharan African countries from 2013 to 2018 and examines the trajectory of improvement that has paralleled government commitments, sustained technical assistance, and financial resources from international donors. Viral load testing in low- and middle-income countries enables monitoring of viral load suppression at the individual and population level, which is necessary to achieve global epidemic control. Although there has been substantial achievement in improving viral load coverage for all patients receiving ART, continued engagement is needed to reach global targets.


Subject(s)
Anti-HIV Agents/therapeutic use, HIV Infections/virology, Population Surveillance, Viral Load, Africa South of the Sahara/epidemiology, HIV Infections/drug therapy, HIV Infections/epidemiology, Humans