ABSTRACT
OBJECTIVES: This study aimed to evaluate the cost-effectiveness of anti-vascular endothelial growth factor drugs (anti-VEGFs) compared with panretinal photocoagulation (PRP) for treating proliferative diabetic retinopathy (PDR) in the United Kingdom. METHODS: A discrete event simulation model was developed, informed by individual participant data meta-analysis. The model captures treatment effects on best corrected visual acuity in both eyes, and the occurrence of diabetic macular edema and vitreous hemorrhage. The model also estimates the value of undertaking further research to resolve decision uncertainty. RESULTS: Anti-VEGFs are unlikely to generate clinically meaningful benefits over PRP. The model predicted anti-VEGFs to be more costly than and similarly effective to PRP, generating 0.029 fewer quality-adjusted life-years at an additional cost of £3688, with a net health benefit of -0.214 at a £20 000 willingness-to-pay threshold. Scenario analysis results suggest that only under very select conditions may anti-VEGFs offer potential for cost-effective treatment of PDR. The consequences of loss to follow-up were an important driver of model outcomes. CONCLUSIONS: Anti-VEGFs are unlikely to be a cost-effective treatment for early PDR compared with PRP. Anti-VEGFs are generally associated with higher costs and similar health outcomes across various scenarios. Although anti-VEGFs were associated with lower diabetic macular edema rates, the number of cases avoided is insufficient to offset the additional treatment costs. Key uncertainties relate to the long-term comparative effectiveness of anti-VEGFs, particularly considering the real-world rates and consequences of treatment nonadherence. Further research on long-term visual acuity and rates of vision-threatening complications may be beneficial in resolving uncertainties.
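The reported net health benefit can be checked directly from the incremental figures in the abstract. A minimal sketch (the function name is ours; net health benefit = incremental QALYs minus incremental cost converted to QALYs at the willingness-to-pay threshold):

```python
def net_health_benefit(delta_qalys: float, delta_cost: float, threshold: float) -> float:
    """Net health benefit in QALYs: incremental QALYs minus the
    incremental cost divided by the willingness-to-pay threshold."""
    return delta_qalys - delta_cost / threshold

# Values from the abstract: -0.029 QALYs, +£3688, £20 000 per QALY threshold.
nhb = net_health_benefit(-0.029, 3688, 20_000)
print(round(nhb, 3))  # -0.213, matching the reported -0.214 up to rounding of the published inputs
```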
Subjects
Angiogenesis Inhibitors, Diabetic Retinopathy, Quality-Adjusted Life Years, Vascular Endothelial Growth Factor A, Female, Humans, Male, Middle Aged, Angiogenesis Inhibitors/economics, Angiogenesis Inhibitors/therapeutic use, Cost-Effectiveness Analysis, Diabetic Retinopathy/drug therapy, Diabetic Retinopathy/economics, Diabetic Retinopathy/therapy, Diabetic Retinopathy/surgery, Laser Coagulation/economics, Laser Coagulation/methods, Light Coagulation/economics, Light Coagulation/methods, Macular Edema/drug therapy, Macular Edema/economics, Macular Edema/therapy, Models, Economic, Treatment Outcome, United Kingdom, Vascular Endothelial Growth Factor A/antagonists & inhibitors, Visual Acuity
ABSTRACT
Human activities are increasingly impacting our oceans, and the focus tends to be on their environmental impacts rather than on the consequences for animal welfare. Global shipping density has quadrupled since 1992. Unsurprisingly, increased levels of vessel collisions with cetaceans have followed this global expansion of shipping. This paper is the first to attempt to consider the severity of ship-strike for individual whale welfare. The methodology of the 'Welfare Assessment Tool for Wild Cetaceans' (WATWC) was used, which is itself based upon the Five Domains model. Expert opinion was sought on six hypothetical but realistic case studies involving humpback whales (Megaptera novaeangliae) struck by ships. Twenty-nine experts in the cetacean and welfare sector took part. They were split into two groups; Group 1 first assessed a case we judged to be the least severe, and Group 2 first assessed the most severe. Both groups then additionally assessed the same four further cases. This was to investigate whether the severity of the first case influenced judgements regarding subsequent cases (i.e. expert judgements were relative) or not (i.e. judgements were absolute). No significant difference between the two groups of assessors was found; therefore, the hypothesis of relative scoring was rejected. Experts judged that whales may suffer some level (>1) of overall (Domain 5) harm for the rest of their lives following a ship-strike incident. Health, closely followed by Behaviour, was found to be the welfare aspect most affected by ship-strikes. Overall, the WATWC shows robust potential to aid decision-making on wild cetacean welfare.
ABSTRACT
BACKGROUND: Pulse oximeters are a standard non-invasive tool to measure blood oxygen levels, and are used in multiple healthcare settings. It is important to understand the factors affecting their accuracy to be able to use them optimally and safely. This analysis aimed to explore the association of the measurement error of pulse oximeters with systolic BP, diastolic BP and heart rate (HR) within ranges of values commonly observed in clinical practice. METHODS: The study design was a retrospective observational study of all patients admitted to a large teaching hospital with suspected or confirmed COVID-19 infection from February 2020 to December 2021. Data on systolic and diastolic BPs and HR levels were available from the same time period as the pulse oximetry measurements. RESULTS: Data were available for 3420 patients with 5927 observations of blood oxygen saturations as measured by pulse oximetry and ABG sampling within 30 min. The measurement error, defined as the difference between paired pulse oximetry and arterial oxygen saturation measurements, was inversely associated with systolic BP, increasing by 0.02% with each mm Hg decrease in systolic BP (95% CI 0.00% to 0.03%) over a range of 80-180 mm Hg. Inverse associations were also observed between the oximetry error and both diastolic BP (+0.03% per unit decrease; 95% CI 0.00% to 0.05%) and HR (+0.04% per unit decrease; 95% CI 0.02% to 0.06%). CONCLUSIONS: Care needs to be taken in interpreting pulse oximetry measurements in patients with lower systolic and diastolic BPs, and HRs, as oxygen saturation is overestimated as BP and HR decrease. Confirmation of the oxygen saturation with an ABG may be appropriate in some clinical scenarios.
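The association is reported as a regression-style slope: the percentage-point change in oximetry error per unit change in BP or HR. As an illustration only, such a slope can be computed with ordinary least squares; the study's actual model and covariate adjustment are not described in the abstract, and the data below are hypothetical:

```python
def ols_slope(x: list[float], y: list[float]) -> float:
    """Slope of the ordinary-least-squares line y = a + b*x,
    i.e. b = cov(x, y) / var(x), computed from centred sums."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    num = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    den = sum((xi - mean_x) ** 2 for xi in x)
    return num / den

# Hypothetical paired readings: systolic BP (mm Hg) vs oximetry error (%).
bp = [80, 100, 120, 140, 160, 180]
err = [2.0, 1.6, 1.2, 0.8, 0.4, 0.0]
print(round(ols_slope(bp, err), 6))  # -0.02: error falls 0.02% per mm Hg rise in systolic BP
```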
Subjects
COVID-19, Humans, Blood Pressure, Oximetry, Oxygen, Heart Rate
ABSTRACT
We compared the performance of prognostic tools for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) using parameters fitted either at the time of hospital admission or across all time points of an admission. This cohort study used clinical data to model the dynamic change in prognosis of SARS-CoV-2 at a single hospital center in the United Kingdom, including all patients admitted from February 1, 2020, to December 31, 2020, and then followed up for 60 days for intensive care unit (ICU) admission, death, or discharge from the hospital. We incorporated clinical observations and blood tests into 2 time-varying Cox proportional hazards models predicting daily 24- to 48-hour risk of admission to the ICU for those eligible for escalation of care or death for those ineligible for escalation. In developing the model, 491 patients were eligible for ICU escalation and 769 were ineligible for escalation. Our model had good discrimination of daily risk of ICU admission in the validation cohort (n = 1,141; C statistic = 0.91, 95% confidence interval: 0.89, 0.94) and our score performed better than other scores (National Early Warning Score 2, International Severe Acute Respiratory and Emerging Infection Comprehensive Clinical Characterisation Collaboration score) calculated using only parameters measured on admission, but it overestimated the risk of escalation (calibration slope = 0.7). A bespoke daily SARS-CoV-2 escalation risk prediction score can predict the need for clinical escalation better than a generic early warning score or a single estimation of risk calculated at admission.
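Discrimination here is summarised with a C statistic. For a time-varying Cox model the concordance is computed over time-dependent risk sets, but the underlying idea can be illustrated with the plain binary-outcome version; this is a sketch, not the study's implementation, and the scores below are hypothetical:

```python
from itertools import combinations

def c_statistic(risk: list[float], event: list[int]) -> float:
    """Fraction of event/non-event pairs in which the patient who had the
    event received the higher predicted risk; ties count as half."""
    concordant = 0.0
    comparable = 0
    for i, j in combinations(range(len(event)), 2):
        if event[i] == event[j]:
            continue  # only pairs with discordant outcomes are comparable
        comparable += 1
        case, control = (i, j) if event[i] else (j, i)
        if risk[case] > risk[control]:
            concordant += 1.0
        elif risk[case] == risk[control]:
            concordant += 0.5
    return concordant / comparable

# Hypothetical scores: the model ranks every ICU admission above every discharge.
print(c_statistic([0.9, 0.7, 0.3, 0.1], [1, 1, 0, 0]))  # 1.0 (perfect discrimination)
```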
Subjects
COVID-19, SARS-CoV-2, Humans, Cohort Studies, Intensive Care Units, Hospitalization, Retrospective Studies
ABSTRACT
Background Radiographic severity may help predict patient deterioration and outcomes from COVID-19 pneumonia. Purpose To assess the reliability and reproducibility of three chest radiograph reporting systems (radiographic assessment of lung edema [RALE], Brixia, and percentage opacification) in patients with proven SARS-CoV-2 infection and examine the ability of these scores to predict adverse outcomes both alone and in conjunction with two clinical scoring systems, National Early Warning Score 2 (NEWS2) and International Severe Acute Respiratory and Emerging Infection Consortium: Coronavirus Clinical Characterization Consortium (ISARIC-4C) mortality. Materials and Methods This retrospective cohort study used routinely collected clinical data of patients with polymerase chain reaction-positive SARS-CoV-2 infection admitted to a single center from February 2020 through July 2020. Initial chest radiographs were scored for RALE, Brixia, and percentage opacification by one of three radiologists. Intra- and interreader agreement were assessed with intraclass correlation coefficients. The rate of admission to the intensive care unit (ICU) or death up to 60 days after scored chest radiograph was estimated. NEWS2 and ISARIC-4C mortality at hospital admission were calculated. Daily risk for admission to ICU or death was modeled with Cox proportional hazards models that incorporated the chest radiograph scores adjusted for NEWS2 or ISARIC-4C mortality. Results Admission chest radiographs of 50 patients (mean age, 74 years ± 16 [standard deviation]; 28 men) were scored by all three radiologists, with good interreader reliability for all scores, as follows: intraclass correlation coefficients were 0.87 for RALE (95% CI: 0.80, 0.92), 0.86 for Brixia (95% CI: 0.76, 0.92), and 0.72 for percentage opacification (95% CI: 0.48, 0.85). Of 751 patients with a chest radiograph, those with greater than 75% opacification had a median time to ICU admission or death of just 1-2 days. 
Among 628 patients for whom data were available (median age, 76 years [interquartile range, 61-84 years]; 344 men), opacification of 51%-75% increased risk for ICU admission or death by twofold (hazard ratio, 2.2; 95% CI: 1.6, 2.8), and opacification greater than 75% increased ICU risk by fourfold (hazard ratio, 4.0; 95% CI: 3.4, 4.7) compared with opacification of 0%-25%, when adjusted for NEWS2 score. Conclusion Brixia, radiographic assessment of lung edema, and percentage opacification scores all reliably helped predict adverse outcomes in SARS-CoV-2 infection. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Little in this issue.
Subjects
COVID-19/diagnostic imaging, Lung/diagnostic imaging, Radiography/methods, Adult, Aged, Aged, 80 and over, Cohort Studies, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Prognosis, Reproducibility of Results, Retrospective Studies, Risk Factors, SARS-CoV-2, Severity of Illness Index
ABSTRACT
A key goal of conservation is to protect biodiversity by supporting the long-term persistence of viable, natural populations of wild species. Conservation practice has long been guided by genetic, ecological and demographic indicators of risk. Emerging evidence of animal culture across diverse taxa and its role as a driver of evolutionary diversification, population structure and demographic processes may be essential for augmenting these conventional conservation approaches and decision-making. Animal culture was the focus of a ground-breaking resolution under the Convention on the Conservation of Migratory Species of Wild Animals (CMS), an international treaty operating under the UN Environment Programme. Here, we synthesize existing evidence to demonstrate how social learning and animal culture interact with processes important to conservation management. Specifically, we explore how social learning might influence population viability and be an important resource in response to anthropogenic change, and provide examples of how it can result in phenotypically distinct units with different, socially learnt behavioural strategies. While identifying culture and social learning can be challenging, indirect identification and parsimonious inferences may be informative. Finally, we identify relevant methodologies and provide a framework for viewing behavioural data through a cultural lens which might provide new insights for conservation management.
Subjects
Biodiversity, Conservation of Natural Resources, Animals, Animals, Wild, Biological Evolution, Learning
ABSTRACT
BACKGROUND: Efforts to safely reduce length of stay for emergency department patients with symptoms suggestive of acute coronary syndrome (ACS) have had mixed success. Few system-wide efforts affecting multiple hospital emergency departments have ever been evaluated. We evaluated the effectiveness of a nationwide implementation of clinical pathways for potential ACS in disparate hospitals. METHODS: This was a multicenter pragmatic stepped-wedge before-and-after trial in 7 New Zealand acute care hospitals with 31 332 patients investigated for suspected ACS with serial troponin measurements. The implementation was a clinical pathway for the assessment of patients with suspected ACS that included a clinical pathway document in paper or electronic format, structured risk stratification, specified time points for electrocardiographic and serial troponin testing within 3 hours of arrival, and directions for combining risk stratification and electrocardiographic and troponin testing in an accelerated diagnostic protocol. Implementation was monitored for >4 months and compared with usual care over the preceding 6 months. The main outcome measure was the odds of discharge within 6 hours of presentation. RESULTS: There were 11 529 participants in the preimplementation phase (range, 284-3465) and 19 803 in the postimplementation phase (range, 395-5039). Overall, the mean 6-hour discharge rate increased from 8.3% (range, 2.7%-37.7%) to 18.4% (6.8%-43.8%). The odds of being discharged within 6 hours increased after clinical pathway implementation. The odds ratio was 2.4 (95% confidence interval, 2.3-2.6). In patients without ACS, the median length of hospital stay decreased by 2.9 hours (95% confidence interval, 2.4-3.4). For patients discharged within 6 hours, there was no change in 30-day major adverse cardiac event rates (0.52% versus 0.44%; P=0.96). In these patients, no adverse event occurred when clinical pathways were correctly followed.
CONCLUSIONS: Implementation of clinical pathways for suspected ACS reduced the length of stay and increased the proportions of patients safely discharged within 6 hours. CLINICAL TRIAL REGISTRATION: URL: https://www.anzctr.org.au/ (Australian and New Zealand Clinical Trials Registry). Unique identifier: ACTRN12617000381381.
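As a sanity check on the headline result, the crude (unadjusted) odds ratio can be recovered from the pooled discharge rates. The abstract's 2.4 comes from the stepped-wedge model, so a small difference from the crude figure is expected:

```python
def odds(p: float) -> float:
    """Convert a proportion to odds."""
    return p / (1 - p)

def odds_ratio(p_after: float, p_before: float) -> float:
    """Ratio of the odds of discharge after implementation to the odds before."""
    return odds(p_after) / odds(p_before)

# Mean 6-hour discharge rates from the abstract: 8.3% before, 18.4% after.
print(round(odds_ratio(0.184, 0.083), 2))  # 2.49 -- close to the adjusted 2.4
```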
Subjects
Acute Coronary Syndrome/diagnosis, Cardiology Service, Hospital/standards, Critical Pathways/standards, Emergency Service, Hospital/standards, Hospitalization, Quality Improvement/standards, Quality Indicators, Health Care/standards, Acute Coronary Syndrome/blood, Acute Coronary Syndrome/epidemiology, Acute Coronary Syndrome/therapy, Aged, Aged, 80 and over, Biomarkers/blood, Clinical Decision-Making, Electrocardiography, Female, Humans, Length of Stay, Male, Middle Aged, New Zealand/epidemiology, Predictive Value of Tests, Prevalence, Prognosis, Risk Assessment, Risk Factors, Time Factors, Troponin/blood
ABSTRACT
BACKGROUND: High-throughput non-invasive prenatal testing (NIPT) for fetal Rhesus D (RhD) status could avoid unnecessary treatment with anti-D immunoglobulin for RhD-negative women found to be carrying an RhD-negative fetus. We aimed to assess the diagnostic accuracy of high-throughput NIPT for fetal RhD status in RhD-negative women not known to be sensitized to the RhD antigen, by performing a systematic review and meta-analysis. METHODS: Prospective cohort studies of high-throughput NIPT used to determine fetal RhD status were included. The eligible population were pregnant women who were RhD negative and not known to be sensitized to RhD antigen. The index test was high-throughput NIPT of cell-free fetal DNA in maternal plasma, used to determine fetal RhD status. The reference standard considered was serologic cord blood testing at birth. Databases including MEDLINE, EMBASE, and Science Citation Index were searched up to February 2016. Two reviewers independently screened titles and abstracts and assessed full texts identified as potentially relevant. Risk of bias was assessed using QUADAS-2. The bivariate and hierarchical summary receiver-operating characteristic (HSROC) models were fitted to calculate summary estimates of sensitivity, specificity, false positive and false negative rates, and the associated 95% confidence intervals (CIs). RESULTS: A total of 3921 records were identified through electronic searches. Eight studies were included in the systematic review. Six studies were judged to be at low risk of bias. The HSROC models demonstrated high diagnostic performance of high-throughput NIPT testing for women tested at or after 11 weeks gestation. In the primary analysis for diagnostic accuracy, women with an inconclusive test result were treated as having tested positive.
The false negative rate (incorrectly classed as RhD negative) was 0.34% (95% CI 0.15 to 0.76) and the false positive rate (incorrectly classed as RhD positive) was 3.86% (95% CI 2.54 to 5.82). There was limited evidence for non-white women and multiple pregnancies. CONCLUSIONS: High-throughput NIPT is sufficiently accurate to detect fetal RhD status in RhD-negative women and would considerably reduce unnecessary treatment with routine anti-D immunoglobulin. The applicability of these findings to non-white women and women with multiple pregnancies is uncertain.
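The reported error rates are straightforward functions of the 2×2 classification table. A sketch with hypothetical counts, chosen only to illustrate the definitions rather than to reproduce the pooled data:

```python
def error_rates(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """False negative rate = missed positives / all true positives;
    false positive rate = misclassified negatives / all true negatives.
    Both returned as percentages."""
    fnr = 100 * fn / (tp + fn)
    fpr = 100 * fp / (fp + tn)
    return fnr, fpr

# Hypothetical counts: 1000 truly RhD-positive and 1000 truly RhD-negative fetuses.
fnr, fpr = error_rates(tp=996, fp=39, fn=4, tn=961)
print(fnr, fpr)  # 0.4 3.9 -- the same order as the pooled 0.34% and 3.86%
```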
Subjects
Fetus, High-Throughput Nucleotide Sequencing/methods, Prenatal Diagnosis/methods, Rh-Hr Blood-Group System/analysis, Female, Humans, Pregnancy, Prenatal Care, Prospective Studies
ABSTRACT
BACKGROUND: We assessed the effect of a pre-discharge medication checklist on discharge prescription rates of guideline recommended medications following myocardial infarction. In addition, we assessed what proportion of the residual prescribing gap following implementation of the checklist was due to the presence of contraindications. METHODS: We examined baseline prescription rates of guideline recommended medications in 100 patients discharged from our institution following acute myocardial infarction. We then introduced a pre-discharge checklist and reassessed discharge medications and reasons for non-prescription of guideline recommended medications in 447 patients with acute myocardial infarction. RESULTS: We demonstrated a significant gap in the prescription of guideline recommended secondary prevention medications at the time of discharge in our pre-intervention cohort. Introduction of a pre-discharge checklist resulted in a significant improvement in the prescription rates of all guideline recommended secondary prevention medications, with aspirin increasing from 90% to 97% (p=0.004), adenosine diphosphate (ADP) receptor antagonist from 84% to 96% (p=0.0001), beta-blocker from 79% to 87% (p=0.03), statin from 88% to 96% (p=0.002) and angiotensin converting enzyme (ACE) inhibitor from 58% to 70% (p=0.03). The residual gap in prescribing was largely explained by the presence of contraindications or absence of an indication in the case of ACE-inhibitors. Once these were taken into account there was a residual gap of 0-4% which represents genuine non-adherence to the guidelines. CONCLUSIONS: Introduction of a pre-discharge checklist led to significant improvement in prescription rates of all five guideline recommended secondary prevention medications. The residual gap in medication prescription following introduction of the checklist was largely due to the presence of contraindications rather than non-adherence.
Subjects
Acute Coronary Syndrome/prevention & control, Cardiovascular Agents/therapeutic use, Drug Prescriptions/standards, Guideline Adherence, Myocardial Infarction/complications, Secondary Prevention/standards, Acute Coronary Syndrome/etiology, Female, Follow-Up Studies, Humans, Male, Middle Aged, Patient Discharge, Prospective Studies
ABSTRACT
Objective: In this study, we were interested in determining whether we could alter a pain response in a chronic pain patient population by exposing participants to different videos prior to inducing acute pain. Design: This observational case series study required participants to report their pain level during the cold pressor task after viewing an instruction video. Setting: Recruitment and testing took place in a tertiary care multidisciplinary pain center. Subjects: Forty adults with chronic pain participated in the study and completed the cold pressor test. Methods: Prior to testing, questionnaires measuring pain, empathy, and catastrophic thinking were completed and participants were randomized to view an instructional video where an actress either demonstrated pain behavior or a stoic response during the cold pressor test. Results: Participants with higher levels of catastrophizing reported higher pain levels during the cold pressor test. Personal Distress Empathy measures of participants who viewed the pain catastrophizing video were significantly correlated with their final pain reports. Following the cold pressor task, participants' pain reports for their primary chronic pain sites were significantly reduced. Conclusions: These results support previous findings that people with chronic pain show the tendency toward increased acute pain experience if levels of catastrophizing and Personal Distress Empathy measures are higher. Participants reported attenuated chronic pain following induced pain, also in line with previous research suggesting a central endogenous inhibitory effect. Our findings shed light on the role of emotional and social components affecting the experience of pain in individuals with chronic pain.
Subjects
Acute Pain/psychology, Adaptation, Psychological/physiology, Catastrophization/psychology, Chronic Pain/psychology, Empathy, Social Learning, Visual Perception, Acute Pain/diagnosis, Catastrophization/diagnosis, Chronic Pain/diagnosis, Emotions, Female, Humans, Male, Middle Aged
ABSTRACT
BACKGROUND/AIM: We studied clinical outcomes and discontinuation rates in a 'real-world' population presenting with myocardial infarction treated with ticagrelor or clopidogrel. METHODS: Between January 2012 and May 2015, 992 patients with acute myocardial infarction undergoing invasive management and adequately pre-treated with dual antiplatelet therapy were prospectively enrolled. Platelet aggregation was measured using the Multiplate analyser. Baseline characteristics, in-hospital outcomes and 1-year outcomes were collected. RESULTS: Patients treated with ticagrelor were younger and less likely to be diabetic, have a previous myocardial infarction or present with a ST-elevation myocardial infarction (all P < 0.05). Those treated with ticagrelor also had lower CRUSADE (Can Rapid risk stratification of Unstable angina patients Suppress ADverse outcomes with Early implementation of the ACC/AHA guidelines; 20 ± 9.4 vs 23 ± 10.1, P < 0.0001) and GRACE (119 ± 28 vs 126 ± 32, P = 0.002) scores. High platelet reactivity was greatly reduced with ticagrelor compared to clopidogrel (16.1% vs 37.0%, respectively; P < 0.0001). Non-coronary artery bypass grafting-related thrombolysis in myocardial infarction major and minor bleeding occurred at similar rates in those treated with ticagrelor and clopidogrel. Rates of drug discontinuation in those treated with ticagrelor and clopidogrel were similar in hospital (20.2% vs 16.2%, P = 0.18) and between discharge and 1 year (29.9% vs 27.9%, P = 0.63). However, discontinuation due to dyspnoea (3.3% vs 0%, P < 0.0001) and discontinuation due to any possible drug-related adverse event (9.3% vs 2.2%, P = 0.0001) were more common in those treated with ticagrelor compared to clopidogrel. CONCLUSION: Ticagrelor is paradoxically being used in lower-risk patients rather than those most likely to benefit. Ticagrelor was associated with similar rates of bleeding but higher discontinuation rates due to adverse effects compared to clopidogrel.
Subjects
Acute Coronary Syndrome/drug therapy, Adenosine/analogs & derivatives, Platelet Aggregation Inhibitors/administration & dosage, Purinergic P2Y Receptor Antagonists/administration & dosage, Ticlopidine/analogs & derivatives, Withholding Treatment, Acute Coronary Syndrome/diagnosis, Adenosine/administration & dosage, Adenosine/adverse effects, Aged, Clopidogrel, Cohort Studies, Female, Hemorrhage/chemically induced, Hemorrhage/diagnosis, Humans, Male, Middle Aged, Percutaneous Coronary Intervention/adverse effects, Percutaneous Coronary Intervention/trends, Platelet Aggregation Inhibitors/adverse effects, Prospective Studies, Purinergic P2Y Receptor Antagonists/adverse effects, Ticagrelor, Ticlopidine/administration & dosage, Ticlopidine/adverse effects, Treatment Outcome, Withholding Treatment/trends
ABSTRACT
Cancer-associated thromboembolism is a substantial problem in clinical practice. An increase in the level of fibrinopeptide A (a substance associated with hypercoagulable states) has been observed in humans exposed to fluorouracil. Anti-EGFR monoclonal antibodies cetuximab and panitumumab, which are now widely used in patients with metastatic colorectal cancer, could prolong the exposure of endothelial structures resulting from fluorouracil or other co-administered agents, thus favouring several factors leading to thromboembolism. We performed a systematic review and meta-analysis of randomised, controlled trials assessing whether cancer patients receiving anti-EGFR monoclonal antibodies cetuximab and panitumumab are at increased risk of thromboembolic events. We searched electronic databases (Medline, Embase, Web of Science, Central) and reference lists. Phase II/III randomised, controlled trials comparing standard anti-cancer regimens with or without anti-EGFR monoclonal antibodies and reporting serious venous thromboembolic events were included in the analysis. Seventeen studies (12,870 patients) were considered for quantitative analysis. The relative risk (RR) for venous thromboembolism (18 comparisons) was 1.46 (95% CI 1.26 to 1.69); the RR of pulmonary embolism, on the basis of eight studies providing nine comparisons, was 1.55 (1.20 to 2.00). Cancer patients receiving anti-EGFR monoclonal antibodies-containing regimens are approximately 1.5 times more likely to experience venous or pulmonary embolism, compared to those treated with the same regimens without anti-EGFR monoclonal antibodies. Clinicians should consider a patient's baseline thromboembolic risk when selecting regimens that include cetuximab or panitumumab. Potential non-reporting of these important adverse events remains a concern. The PROSPERO registration number is CRD42014009165.
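Pooled relative risks of this kind are typically obtained by inverse-variance weighting of trial-level log relative risks. A minimal fixed-effect sketch (the review may have used a different estimator; the standard error is recovered from each trial's 95% CI, and the trial values below are hypothetical):

```python
import math

def pool_relative_risks(trials: list[tuple[float, float, float]]) -> tuple[float, float, float]:
    """Fixed-effect inverse-variance pooling of (RR, 95% CI lower, 95% CI upper)
    tuples. Works on the log scale; returns the pooled RR with its 95% CI."""
    weights, weighted_logs = [], []
    for rr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE of log RR from CI width
        w = 1.0 / se ** 2
        weights.append(w)
        weighted_logs.append(w * math.log(rr))
    log_pooled = sum(weighted_logs) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - 1.96 * se_pooled),
            math.exp(log_pooled + 1.96 * se_pooled))

# Hypothetical trial-level estimates, for illustration only.
rr, lo, hi = pool_relative_risks([(1.4, 1.1, 1.8), (1.6, 1.2, 2.1), (1.3, 0.9, 1.9)])
```

A pooled CI narrower than any single trial's CI is the expected behaviour: the weights add, so the pooled standard error shrinks.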