Results 1 - 5 of 5
1.
Preprint in English | medRxiv | ID: ppmedrxiv-22276907

ABSTRACT

Introduction: Throughout the SARS-CoV-2 pandemic, resources for various aspects of patient care have been limited, necessitating risk stratification. The need for good risk-stratification tools has been heightened by the availability of new Covid-19 therapeutics that are effective at preventing severe disease among high-risk patients if given promptly after SARS-CoV-2 infection. We describe the development of two points-based models for predicting the risk of deterioration to severe disease from an Omicron-variant SARS-CoV-2 infection. Methods: We developed two logistic-regression-based models for predicting the risk of severe Covid-19 within a 21-day follow-up period among Clalit Health Services members aged 18 and older with confirmed SARS-CoV-2 infection from December 25, 2021 to March 16, 2022. In the first model, intended for use by healthcare providers, the model coefficients were linearly transformed into integer risk points. In the second model, a simplified version designed for self-assessment by the general public, the risk points were further scaled down to smaller numbers with less variability across risk factors. Results: 613,513 individuals met the inclusion criteria, of whom 1,763 (0.287%) developed the outcome. The AUROC estimates for both models were 0.95, although the full model demonstrated more granular risk-stratification capability (77 vs. 27 potential thresholds on the test set). Both models proved effective in identifying small subsets of the population enriched with individuals who went on to deteriorate. For example, prioritizing the top 1%, 5% or 10% of individuals in the population for interventions with the full model covers 36%, 68% or 83% (respectively) of the individuals who actually deteriorate. Risk point count increased with age, number of chronic conditions and previous hospitalizations, and decreased with recent vaccination and infection. Discussion: The models presented, one more expressive and one more accessible, are transparent and explainable models applicable to the general population that can be used to prioritize Covid-19-related resources, including therapeutics.
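
A minimal sketch of the coefficient-to-points idea described above, assuming hypothetical logistic-regression coefficients and an arbitrary scaling constant (the actual risk factors and values are in the preprint, not here):

    # Hypothetical fitted log-odds coefficients for a few risk factors
    # (illustrative values only; not the published model).
    coefficients = {
        "age_75_plus": 2.1,
        "chronic_kidney_disease": 0.9,
        "recent_hospitalization": 1.3,
        "recently_vaccinated": -1.5,
        "recent_prior_infection": -1.1,
    }

    SCALE = 5.0  # points per unit of log-odds; an arbitrary choice for readability

    # Linear transformation of each coefficient into an integer number of risk points
    points = {factor: round(coef * SCALE) for factor, coef in coefficients.items()}

    def risk_points(patient_factors):
        """Sum the integer points for the risk factors a patient has."""
        return sum(points[f] for f in patient_factors if f in points)

    print(points)
    print(risk_points({"age_75_plus", "recent_hospitalization", "recently_vaccinated"}))

The simplified public-facing model would apply the same summation with point values scaled down to a narrower range.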

2.
Preprint in English | medRxiv | ID: ppmedrxiv-21262465

ABSTRACT

Background: Methodologically rigorous studies on Covid-19 vaccine effectiveness (VE) in preventing SARS-CoV-2 infection are critically needed to inform national and global policy on Covid-19 vaccine use. In Israel, healthcare personnel (HCP) were initially prioritized for Covid-19 vaccination, creating an ideal setting to evaluate real-world VE in a closely monitored population. Methods: We conducted a prospective study among HCP in 6 hospitals to estimate the effectiveness of the BNT162b2 mRNA Covid-19 vaccine in preventing SARS-CoV-2 infection. Participants filled out weekly symptom questionnaires, provided weekly nasal specimens, and gave three serology samples: at enrollment, 30 days, and 90 days. We estimated VE against PCR-confirmed SARS-CoV-2 infection using a Cox proportional hazards model and against a combined PCR/serology endpoint using Fisher's exact test. Findings: Of the 1,567 HCP enrolled between December 27, 2020 and February 15, 2021, 1,250 previously uninfected participants were included in the primary analysis; 998 (79.8%) were vaccinated with their first dose prior to or at enrollment, all with the Pfizer BNT162b2 mRNA vaccine. There were four PCR-positive events among vaccinated participants and nine among unvaccinated participants. Adjusted two-dose VE against any PCR-confirmed infection was 94.5% (95% CI: 82.6%-98.2%); adjusted two-dose VE against a combined endpoint of PCR and seroconversion over a 60-day follow-up period was 94.5% (95% CI: 63.0%-99.0%). Five PCR-positive samples from study participants were sequenced; all were the Alpha variant. Interpretation: Our prospective VE study of HCP in Israel with rigorous weekly surveillance found very high VE for two doses of the Pfizer BNT162b2 mRNA vaccine against SARS-CoV-2 during a period of predominant Alpha variant circulation. Funding: Clalit Health Services
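
As a rough illustration of the VE calculation described above (VE = 1 minus the hazard ratio from a Cox proportional hazards model), the sketch below uses the lifelines package on a tiny synthetic dataset; the column names and values are placeholders, not the study's data:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Tiny synthetic follow-up dataset: days at risk, PCR-confirmed infection,
    # and vaccination status (all values are made up for illustration).
    df = pd.DataFrame({
        "days_at_risk": [60, 75, 40, 80, 55, 70, 65, 30],
        "pcr_positive": [0, 1, 1, 0, 1, 0, 0, 0],
        "vaccinated":   [1, 1, 0, 1, 0, 1, 1, 0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="days_at_risk", event_col="pcr_positive")

    hazard_ratio = float(cph.hazard_ratios_["vaccinated"])
    ve = (1.0 - hazard_ratio) * 100.0
    print(f"Estimated VE: {ve:.1f}%")

The adjusted estimates in the study additionally account for covariates, which would enter the same model as extra columns.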

3.
Preprint in English | medRxiv | ID: ppmedrxiv-20248148

ABSTRACT

HLA haplotypes have been found to be associated with increased risk of viral infection or disease severity in various diseases, including SARS. Several genetic variants are associated with Covid-19 severity. However, no clear association between HLA and Covid-19 incidence or severity has been reported. We conducted a large-scale HLA analysis of Israeli individuals who tested positive for SARS-CoV-2 infection by PCR. Overall, 72,912 individuals with known HLA haplotypes were included in the study, of whom 6,413 (8.8%) were found to have SARS-CoV-2 by PCR. A total of 20,937 subjects were of Ashkenazi origin (at least 2 of 4 grandparents). One hundred eighty-one patients (2.8% of those infected) were hospitalized due to the disease. None of the 66 most common HLA alleles (within the five HLA loci A, B, C, DQB1, DRB1) was found to be associated with SARS-CoV-2 infection or hospitalization. Similarly, no association was detected in the Ashkenazi Jewish subset. Moreover, no association was found between heterozygosity in any of the HLA loci and either infection or hospitalization. We conclude that HLA haplotypes are not a major risk or protective factor for SARS-CoV-2 infection or severity in the Israeli population.
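
The abstract does not name the statistical test used; one common way to check a single allele-versus-infection association of this kind is Fisher's exact test on a 2x2 contingency table, sketched below with made-up counts:

    from scipy.stats import fisher_exact

    # Rows: carriers / non-carriers of a given HLA allele;
    # columns: SARS-CoV-2 PCR-positive / PCR-negative (counts are illustrative).
    table = [[310, 3200],
             [6103, 63299]]

    odds_ratio, p_value = fisher_exact(table)
    print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")

Repeating such a test across the 66 common alleles would also require correction for multiple comparisons.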

4.
Preprint in English | medRxiv | ID: ppmedrxiv-20108571

ABSTRACT

Background: As many countries consider and employ various lockdown exit strategies, health authorities seek tools to provide differential, targeted advice on social distancing based on personal risk for severe COVID-19. However, striking a balance between a scientifically precise multivariable risk prediction model and a model that can easily be used by the general public remains a challenge. A list of risk criteria, as defined by the CDC for example, provides a simple solution, but may be too inclusive, classifying a substantial portion of the population as high risk. Score-based risk classification tools may provide a good compromise between accuracy and simplicity. Objective: To create a score-based risk classification tool for severe COVID-19. Methods: The outcome was defined as a composite of being labeled severe during hospitalization or dying due to COVID-19. The risk classification tool was developed using retrospective data from all COVID-19 patients diagnosed up to April 1, 2020 in a large healthcare organization ("training set"). The developed tool combines 10 risk factors using simple summation, and defines three risk levels according to the patient's age and number of accumulated risk points: basic risk, high risk and very-high risk (the last two levels are also considered together as the elevated-risk group). The tool's performance in accurately identifying individuals at risk was evaluated using a "temporal test set" of COVID-19 patients diagnosed between April 2 and April 22, 2020, later than those used for model development. The tool's performance was also compared to that of the CDC's criteria. The healthcare organization's general population was used to evaluate the proportion of patients that would be classified into each of the model's risk levels and as elevated risk by the CDC criteria. Results: A total of 2,421, 2,624 and 4,631,168 individuals were included in the training, test, and general population cohorts, respectively. The outcome rate in the training and test sets was 5%. Overall, 18% of the general population would be classified as elevated risk by the model, with a resulting sensitivity of 92%, compared to 35% that would be defined as elevated risk by the CDC criteria, with a resulting sensitivity of 96%. Within the model's elevated-risk group, the high and very-high risk groups comprised 15% and 3% of the general population, with incidence rates (PPV) of 15% and 33%, respectively. Discussion: A simple-to-communicate, score-based risk classification tool classifies as elevated risk about half of the population that is considered elevated risk by the CDC criteria, with only a 4% reduction in sensitivity. The model's ability to further divide the elevated-risk population into two markedly different subgroups allows more refined recommendations to the general public and limits the restrictions of social distancing to a smaller and more manageable subset of the population. This model was adopted by the Israeli Ministry of Health as its risk classification tool for prioritizing COVID-19 lab tests and for targeting its instructions on risk management during the lockdown exit strategy.
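
A minimal sketch of how such a summation-based tool can be applied and its sensitivity checked; the risk factors, point values, and age/point thresholds below are placeholders rather than the published tool:

    # Placeholder point values for a few risk factors (not the published ones).
    RISK_POINTS = {
        "diabetes": 1,
        "hypertension": 1,
        "copd": 2,
        "chronic_kidney_disease": 2,
        "recent_hospitalization": 3,
    }

    def risk_level(age, factors):
        """Map age and accumulated risk points to basic / high / very-high risk."""
        score = sum(RISK_POINTS.get(f, 0) for f in factors)
        if age >= 75 or score >= 5:
            return "very-high"
        if age >= 65 or score >= 2:
            return "high"
        return "basic"

    # (patient, developed severe COVID-19) pairs -- synthetic examples
    patients = [
        ({"age": 80, "factors": ["diabetes"]}, True),
        ({"age": 45, "factors": []}, False),
        ({"age": 67, "factors": ["copd", "hypertension"]}, True),
        ({"age": 30, "factors": ["diabetes"]}, False),
    ]

    elevated = [(p, y) for p, y in patients
                if risk_level(p["age"], p["factors"]) != "basic"]
    sensitivity = sum(y for _, y in elevated) / sum(y for _, y in patients)
    print(f"Sensitivity of the elevated-risk group: {sensitivity:.0%}")

The same loop over the general population would give the proportion assigned to each level, which is how the 18%/15%/3% figures above are defined.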

5.
Preprint in English | medRxiv | ID: ppmedrxiv-20076976

ABSTRACT

With the global coronavirus disease 2019 (COVID-19) pandemic, there is an urgent need for risk stratification tools to support prevention and treatment decisions. The Centers for Disease Control and Prevention (CDC) listed several criteria that define high-risk individuals, but multivariable prediction models may allow for a more accurate and granular risk evaluation. In the early days of the pandemic, when the individual-level data required for training prediction models were not available, a large healthcare organization developed a prediction model to support its COVID-19 policy using a hybrid strategy. The model was built on a baseline predictor that ranks patients according to their risk for severe respiratory infection or sepsis (trained using over one million patient records) and was then post-processed to calibrate the predictions to reported COVID-19 case fatality rates. Since its deployment in mid-March, this predictor has been integrated into many decision processes in the organization that involve allocating limited resources. With the accumulation of enough COVID-19 patients, the predictor was validated for its accuracy in predicting COVID-19 mortality among all COVID-19 cases in the organization (3,176 cases, 3.1% death rate). The predictor was found to have good discrimination, with an area under the receiver operating characteristic curve of 0.942. Calibration was also good, with a marked improvement over the calibration of the baseline model when evaluated on the COVID-19 mortality outcome. While the CDC criteria identify 41% of the population as high risk with a resulting sensitivity of 97%, a 5% absolute risk cutoff with the model flags only 14% as high risk while still achieving a sensitivity of 90%. To summarize, we found that even in the midst of a pandemic, shrouded in epidemiologic "fog of war" and with no individual-level data, it was possible to provide a useful predictor with good discrimination and calibration.
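
The sketch below illustrates the general post-processing idea (rescaling a baseline risk score toward a reported case fatality rate, then evaluating discrimination and a 5% absolute-risk cutoff) on synthetic data; the rescaling rule and all numbers are assumptions, not the organization's actual procedure:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    baseline_score = rng.uniform(size=10_000)                 # baseline severe-infection risk score
    died = rng.uniform(size=10_000) < baseline_score * 0.06   # synthetic mortality outcome (~3%)

    reported_cfr = 0.031
    calibrated = baseline_score * (reported_cfr / baseline_score.mean())  # crude mean recalibration

    print("AUROC:", round(roc_auc_score(died, calibrated), 3))

    high_risk = calibrated >= 0.05                            # 5% absolute-risk cutoff
    sensitivity = died[high_risk].sum() / died.sum()
    print("Flagged as high risk:", round(high_risk.mean(), 3),
          "| Sensitivity:", round(sensitivity, 3))

Because the rescaling is monotone, discrimination (AUROC) is unchanged by the calibration step; only the absolute risk levels, and hence the meaning of a 5% cutoff, are affected.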
