Results 1 - 20 of 131
1.
Value Health ; 27(3): 301-312, 2024 03.
Article in English | MEDLINE | ID: mdl-38154593

ABSTRACT

OBJECTIVES: Celiac disease (CD) is thought to affect around 1% of people in the United Kingdom, but only approximately 30% of those affected are diagnosed. The aim of this work was to assess the cost-effectiveness of strategies for identifying adults and children with CD in terms of who to test and which tests to use. METHODS: A decision tree and Markov model were used to describe testing strategies and model long-term consequences of CD. The analysis compared a selection of pre-test probabilities of CD above which patients should be screened, as well as the use of different serological tests, with or without genetic testing. Value of information analysis was used to prioritize parameters for future research. RESULTS: Using serological testing alone in adults, immunoglobulin A (IgA) tissue transglutaminase (tTG) at a 1% pre-test probability (equivalent to population screening) was most cost-effective. When serological testing was combined with genetic testing, human leukocyte antigen combined with IgA tTG at a 5% pre-test probability was most cost-effective. In children, the most cost-effective strategy was a 10% pre-test probability with human leukocyte antigen plus IgA tTG. Value of information analysis highlighted the probability of late diagnosis of CD and the accuracy of serological tests as important parameters. The analysis also suggested prioritizing research in adult women over adult men or children. CONCLUSIONS: For adults, these cost-effectiveness results suggest that the UK National Screening Committee criteria for population-based screening for CD should be explored. Substantial uncertainty in the results indicates a high value in conducting further research.
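As a rough illustration of the modelling approach described above (a decision tree feeding a Markov model, with strategies compared on incremental cost per QALY), the sketch below runs a minimal two-strategy Markov cohort model. All states, transition probabilities, costs and utilities are invented for illustration; they are not the parameters or structure used in the paper.

```python
import numpy as np

# Minimal two-strategy Markov cohort sketch (illustrative parameters only).
# States: undiagnosed CD, diagnosed CD, dead.
def run_markov(p_diagnose_per_year, n_cycles=40, discount=0.035):
    p_die = 0.01  # hypothetical background annual mortality
    # Annual transition matrix: rows are current state, columns are next state.
    T = np.array([
        [1 - p_diagnose_per_year - p_die, p_diagnose_per_year, p_die],
        [0.0,                             1 - p_die,           p_die],
        [0.0,                             0.0,                 1.0],
    ])
    cost = np.array([250.0, 400.0, 0.0])  # annual cost per state (hypothetical)
    qaly = np.array([0.75, 0.85, 0.0])    # annual utility per state (hypothetical)

    dist = np.array([1.0, 0.0, 0.0])      # whole cohort starts undiagnosed
    total_cost = total_qaly = 0.0
    for cycle in range(n_cycles):
        df = 1.0 / (1.0 + discount) ** cycle   # discount factor for this cycle
        total_cost += df * dist @ cost
        total_qaly += df * dist @ qaly
        dist = dist @ T                        # advance the cohort one year
    return total_cost, total_qaly

# Strategy A: no active case finding; strategy B: testing that raises the
# yearly probability of diagnosis (both probabilities are hypothetical).
cost_a, qaly_a = run_markov(p_diagnose_per_year=0.02)
cost_b, qaly_b = run_markov(p_diagnose_per_year=0.10)
cost_b += 50.0  # one-off testing cost added to strategy B (hypothetical)

icer = (cost_b - cost_a) / (qaly_b - qaly_a)
print(f"Incremental cost-effectiveness ratio: {icer:.0f} per QALY gained")
```

In a full analysis each parameter would be drawn from a distribution so that probabilistic sensitivity analysis and value of information calculations, as used in the paper, can be layered on top.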


Subject(s)
Celiac Disease , Child , Male , Adult , Humans , Female , Celiac Disease/diagnosis , Cost-Benefit Analysis , Transglutaminases , Immunoglobulin A , HLA Antigens
2.
Radiology ; 307(3): e221437, 2023 05.
Article in English | MEDLINE | ID: mdl-36916896

ABSTRACT

Systematic reviews of diagnostic accuracy studies can provide the best available evidence to inform decisions regarding the use of a diagnostic test. In this guide, the authors provide a practical approach for clinicians to appraise diagnostic accuracy systematic reviews and apply their results to patient care. The first step is to identify an appropriate systematic review with a research question matching the clinical scenario. The user should then evaluate the rigor of the review methods to judge the review's credibility (Did the review use clearly defined eligibility criteria, a comprehensive search strategy, structured data collection, risk of bias and applicability appraisal, and appropriate meta-analysis methods?). If the review is credible, the next step is to decide whether the diagnostic performance is adequate for clinical use (Do sensitivity and specificity estimates exceed the threshold that makes them useful in clinical practice? Are these estimates sufficiently precise? Is variability in the estimates of diagnostic accuracy across studies explained?). Diagnostic accuracy systematic reviews that are judged to be credible and provide diagnostic accuracy estimates with sufficient certainty and relevance are the most useful to inform patient care. This review discusses comparative, noncomparative, and emerging approaches to systematic reviews of diagnostic accuracy using a clinical scenario and examples based on recent publications.
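The appraisal question "Are these estimates sufficiently precise?" can be made concrete with a small sketch: given pooled 2x2 counts (invented here for illustration), compute sensitivity and specificity with Wilson score confidence intervals and judge whether the interval width is acceptable for the clinical scenario.

```python
from statistics import NormalDist

def wilson_ci(successes, n, confidence=0.95):
    """Wilson score interval for a binomial proportion."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * ((p * (1 - p) / n + z**2 / (4 * n**2)) ** 0.5) / denom
    return centre - half, centre + half

# Hypothetical pooled 2x2 counts: tp/fn among diseased, fp/tn among non-diseased.
tp, fn, fp, tn = 90, 10, 30, 170

sens, sens_ci = tp / (tp + fn), wilson_ci(tp, tp + fn)
spec, spec_ci = tn / (tn + fp), wilson_ci(tn, tn + fp)
print(f"Sensitivity {sens:.2f} (95% CI {sens_ci[0]:.2f}-{sens_ci[1]:.2f})")
print(f"Specificity {spec:.2f} (95% CI {spec_ci[0]:.2f}-{spec_ci[1]:.2f})")
```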


Subject(s)
Diagnosis , Meta-Analysis as Topic , Systematic Reviews as Topic , Humans , Sensitivity and Specificity
3.
Ann Intern Med ; 175(7): 1010-1018, 2022 07.
Article in English | MEDLINE | ID: mdl-35696685

ABSTRACT

Whereas diagnostic tests help detect the cause of signs and symptoms, prognostic tests assist in evaluating the probable course of the disease and future outcome. Studies to evaluate prognostic tests are longitudinal, which introduces sources of bias different from those for diagnostic accuracy studies. At present, systematic reviews of prognostic tests often use the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2) tool to assess risk of bias and applicability of included studies because no equivalent instrument exists for prognostic accuracy studies. QUAPAS (Quality Assessment of Prognostic Accuracy Studies) is an adaptation of QUADAS-2 for prognostic accuracy studies. Questions likely to identify bias were evaluated in parallel and collated from QUIPS (Quality in Prognosis Studies) and PROBAST (Prediction Model Risk of Bias Assessment Tool) and paired to the corresponding question (or domain) in QUADAS-2. A steering group conducted and reviewed 3 rounds of modifications before arriving at the final set of domains and signaling questions. QUAPAS follows the same steps as QUADAS-2: Specify the review question, tailor the tool, draw a flow diagram, judge risk of bias, and identify applicability concerns. Risk of bias is judged across the following 5 domains: participants, index test, outcome, flow and timing, and analysis. Signaling questions assist the final judgment for each domain. Applicability concerns are assessed for the first 4 domains. The authors used QUAPAS in parallel with QUADAS-2 and QUIPS in a systematic review of prognostic accuracy studies. QUAPAS improved the assessment of the flow and timing domain and flagged a study at risk of bias in the new analysis domain. Judgment of risk of bias in the analysis domain was challenging because of sparse reporting of statistical methods.


Subject(s)
Prognosis , Bias , Humans , Sensitivity and Specificity
4.
J Pediatr Gastroenterol Nutr ; 75(3): 369-386, 2022 09 01.
Article in English | MEDLINE | ID: mdl-35758521

ABSTRACT

OBJECTIVES: To gather the current evidence and to offer recommendations for follow-up and management of children and adolescents with celiac disease. METHODS: The Special Interest Group on Celiac Diseases of the European Society of Paediatric Gastroenterology Hepatology and Nutrition formulated ten questions considered to be essential for follow-up care. A literature search (January 2010-March 2020) was performed in PubMed or Medline. Relevant publications were identified and potentially eligible studies were assessed. Statements and recommendations were developed and discussed by all coauthors. Recommendations were voted upon: joint agreement was defined as at least 85%. RESULTS: Publications (n = 2775) were identified and 164 were included. Using evidence or expert opinion, 37 recommendations were formulated on: the need to perform follow-up, its frequency and what should be assessed, how to assess adherence to the gluten-free diet, when to expect catch-up growth, how to treat anemia, how to approach persistent high serum levels of antibodies against tissue transglutaminase, the indication to perform biopsies, assessment of quality of life, management of children with unclear diagnosis for whom a gluten challenge is indicated, children with associated type 1 diabetes or IgA deficiency, cases of potential celiac disease, which professionals should perform follow-up, how to improve communication with patients and their parents/caregivers, and transition from pediatric to adult health care. CONCLUSIONS: We offer recommendations to improve follow-up of children and adolescents with celiac disease and highlight gaps that should be investigated to further improve management.


Subject(s)
Celiac Disease , Adolescent , Celiac Disease/diagnosis , Celiac Disease/therapy , Child , Diet, Gluten-Free , Follow-Up Studies , Glutens , Humans , Quality of Life
5.
Health Expect ; 25(5): 2453-2461, 2022 10.
Article in English | MEDLINE | ID: mdl-35854666

ABSTRACT

OBJECTIVE: Blood tests are commonly used in primary care as a tool to aid diagnosis, and to offer reassurance and validation for patients. If doctors and patients do not have a shared understanding of the reasons for testing and the meaning of results, these aims may not be fulfilled. Shared decision-making is widely advocated, yet most research focusses on treatment decisions rather than diagnostic decisions. The aim of this study was to explore communication and decision-making around diagnostic blood tests in primary care. METHODS: Qualitative interviews were undertaken with patients and clinicians in UK primary care. Patients were interviewed at the time of blood testing, with a follow-up interview after they received test results. Interviews with clinicians who requested the tests provided paired data to compare clinicians' and patients' expectations, experiences and understandings of tests. Interviews were analysed thematically using inductive and deductive coding. RESULTS: A total of 80 interviews with 28 patients and 19 doctors were completed. We identified a mismatch in expectations and understanding of tests, which led to downstream consequences including frustration, anxiety and uncertainty for patients. There was no evidence of shared decision-making in consultations preceding the decision to test. Doctors adopted a paternalistic approach, believing that they were protecting patients from anxiety. CONCLUSION: Patients were not able to develop informed preferences and did not perceive that choice was possible in decisions about testing, because they did not have sufficient information and a shared understanding of tests. A lack of shared understanding at the point of decision-making led to downstream consequences when test results did not fulfil patients' expectations. Although shared decision-making is recommended as best practice, it does not reflect the reality of doctors' and patients' accounts of testing; a broader model of shared understanding seems to be more relevant to the complexity of primary care diagnosis. PATIENT OR PUBLIC CONTRIBUTION: A patient and public involvement group comprising five participants with lived experience of blood testing in primary care met regularly during the study. They contributed to the development of the research objectives, the planning of recruitment methods, and the review of patient information leaflets and topic guides, and discussed emerging themes at an early stage of the analysis.


Subject(s)
Communication , Decision Making , Humans , Qualitative Research , Primary Health Care , Hematologic Tests , Patient Participation
6.
Ann Intern Med ; 174(11): 1592-1599, 2021 11.
Article in English | MEDLINE | ID: mdl-34698503

ABSTRACT

Comparative diagnostic test accuracy studies assess and compare the accuracy of 2 or more tests in the same study. Although these studies have the potential to yield reliable evidence regarding comparative accuracy, shortcomings in the design, conduct, and analysis may bias their results. The currently recommended quality assessment tool for diagnostic test accuracy studies, QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies-2), is not designed for the assessment of test comparisons. The QUADAS-C (Quality Assessment of Diagnostic Accuracy Studies-Comparative) tool was developed as an extension of QUADAS-2 to assess the risk of bias in comparative diagnostic test accuracy studies. Through a 4-round Delphi study involving 24 international experts in test evaluation and a face-to-face consensus meeting, an initial version of the tool was developed that was revised and finalized following a pilot study among potential users. The QUADAS-C tool retains the same 4-domain structure of QUADAS-2 (Patient Selection, Index Test, Reference Standard, and Flow and Timing) and adds questions to each QUADAS-2 domain. A risk-of-bias judgment for comparative accuracy requires a risk-of-bias judgment for the accuracy of each test (resulting from QUADAS-2) and additional criteria specific to test comparisons. Examples of such additional criteria include whether participants either received all index tests or were randomly assigned to index tests, and whether index tests were interpreted with blinding to the results of other index tests. The QUADAS-C tool will be useful for systematic reviews of diagnostic test accuracy addressing comparative questions. Furthermore, researchers may use this tool to identify and avoid risk of bias when designing a comparative diagnostic test accuracy study.


Subject(s)
Bias , Diagnosis , Quality Assurance, Health Care , Review Literature as Topic , Surveys and Questionnaires , Evidence-Based Medicine , Humans
7.
Rev Panam Salud Publica ; 46: e112, 2022.
Article in Portuguese | MEDLINE | ID: mdl-36601438

ABSTRACT

The Preferred Reporting Items for Systematic reviews and Meta-Analyses (PRISMA) statement, published in 2009, was designed to help systematic reviewers transparently report why the review was done, what the authors did, and what they found. Over the past decade, advances in systematic review methodology and terminology have necessitated an update to the guideline. The PRISMA 2020 statement replaces the 2009 statement and includes new reporting guidance that reflects advances in methods to identify, select, appraise, and synthesise studies. The structure and presentation of the items have been modified to facilitate implementation. In this article, we present the PRISMA 2020 27-item checklist, an expanded checklist that details reporting recommendations for each item, the PRISMA 2020 abstract checklist, and the revised flow diagrams for original and updated reviews.


9.
BMC Med ; 18(1): 346, 2020 11 04.
Article in English | MEDLINE | ID: mdl-33143712

ABSTRACT

BACKGROUND: Tests for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viral ribonucleic acid (RNA) using reverse transcription polymerase chain reaction (RT-PCR) are pivotal to detecting current coronavirus disease (COVID-19) and to determining the duration of detectable virus, which indicates potential for infectivity. METHODS: We conducted an individual participant data (IPD) systematic review of longitudinal studies of RT-PCR test results in people with symptomatic SARS-CoV-2 infection. We searched PubMed, LitCOVID, medRxiv, and COVID-19 Living Evidence databases. We assessed risk of bias using a QUADAS-2 adaptation. Outcomes were the percentage of positive test results by time and the duration of detectable virus, by anatomical sampling site. RESULTS: Of 5078 studies screened, we included 32 studies with 1023 SARS-CoV-2-infected participants and 1619 test results, from -6 to 66 days post-symptom onset and hospitalisation. The highest percentage of virus detection was from nasopharyngeal sampling between 0 and 4 days post-symptom onset at 89% (95% confidence interval (CI) 83 to 93), dropping to 54% (95% CI 47 to 61) after 10 to 14 days. On average, duration of detectable virus was longer with lower respiratory tract (LRT) sampling than with upper respiratory tract (URT) sampling. Duration of faecal and respiratory tract virus detection varied greatly within individual participants. In some participants, virus was still detectable at 46 days post-symptom onset. CONCLUSIONS: RT-PCR misses detection of people with SARS-CoV-2 infection; early sampling minimises false negative diagnoses. Beyond 10 days post-symptom onset, lower respiratory tract or faecal sampling may be the preferred sites. The included studies are open to substantial risk of bias, so the positivity rates are probably overestimated.
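A minimal sketch of how per-time-window positivity can be summarised from individual participant data; the handful of rows below are fabricated and the day bins are chosen only to mirror the windows quoted in the abstract.

```python
import pandas as pd

# Hypothetical IPD: one row per RT-PCR result, with days since symptom onset
# and a binary positive/negative outcome.
ipd = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 3, 3, 3],
    "days_post_onset": [2, 9, 15, 1, 12, 4, 11, 21],
    "positive": [1, 1, 0, 1, 0, 1, 1, 0],
})

# Bin each result into a time window, then compute percentage positive per window.
bins = [-7, 4, 9, 14, 21, 66]
labels = ["-6 to 4", "5 to 9", "10 to 14", "15 to 21", "22 to 66"]
ipd["window"] = pd.cut(ipd["days_post_onset"], bins=bins, labels=labels)

summary = ipd.groupby("window", observed=False)["positive"].agg(["size", "mean"])
summary["percent_positive"] = 100 * summary["mean"]
print(summary[["size", "percent_positive"]])
```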


Subject(s)
Betacoronavirus/isolation & purification , Coronavirus Infections/diagnosis , Pneumonia, Viral/diagnosis , Reverse Transcriptase Polymerase Chain Reaction/methods , Reverse Transcriptase Polymerase Chain Reaction/standards , Betacoronavirus/genetics , COVID-19 , COVID-19 Testing , Clinical Laboratory Techniques , Coronavirus Infections/genetics , Humans , Longitudinal Studies , Pandemics , Pneumonia, Viral/genetics , SARS-CoV-2
10.
Fam Pract ; 37(6): 845-853, 2020 11 28.
Article in English | MEDLINE | ID: mdl-32820328

ABSTRACT

BACKGROUND: Studies have shown unwarranted variation in test ordering among GP practices and regions, which may lead to patient harm and increased health care costs. There is currently no robust evidence base to inform guidelines on monitoring long-term conditions. OBJECTIVES: To map the extent and nature of research that provides evidence on the use of laboratory tests to monitor long-term conditions in primary care, and to identify gaps in existing research. METHODS: We performed a scoping review, a relatively new approach for mapping research evidence across broad topics, using data abstraction forms and charting data according to a scoping framework. We searched CINAHL, EMBASE and MEDLINE to April 2019. We included studies that aimed to optimize the use of laboratory tests and determine costs, patient harm or variation related to testing in a primary care population with long-term conditions. RESULTS: Ninety-four studies were included. Forty percent aimed to describe variation in test ordering and 36% to investigate test performance. Renal function tests (35%), HbA1c (23%) and lipids (17%) were the most studied laboratory tests. Most studies applied a cohort design using routinely collected health care data (49%). We found gaps in research on strategies to optimize test use to improve patient outcomes, optimal testing intervals and patient harms caused by over-testing. CONCLUSIONS: Future research needs to address these gaps in evidence. High-level evidence is missing, i.e. randomized controlled trials comparing one monitoring strategy to another, or quasi-experimental designs such as interrupted time series analysis if trials are not feasible.


Subject(s)
Clinical Laboratory Techniques/standards , Health Care Costs , Primary Health Care , Humans , Interrupted Time Series Analysis
11.
BMC Nephrol ; 21(1): 493, 2020 11 18.
Article in English | MEDLINE | ID: mdl-33208126

ABSTRACT

BACKGROUND: People with chronic kidney disease (CKD) have high levels of co-morbidity and polypharmacy, placing them at increased risk of prescribing-related harm. Tools for assessing prescribing safety in the general population using prescribing safety indicators (PSIs) have been established. However, people with CKD pose different prescribing challenges to people without kidney disease. Therefore, PSIs designed for use in the general population may not include all PSIs relevant to a CKD population. The aim of this study was to systematically collate a library of PSIs relevant to people with CKD. METHODS: A systematic literature search identified papers reporting PSIs. CKD-specific PSIs were extracted and categorised by Anatomical Therapeutic Chemical (ATC) classification codes. Duplicate PSIs were removed to create a final list of CKD-specific PSIs. RESULTS: A total of 9852 papers were identified by the systematic literature search, of which 511 proceeded to full-text screening and 196 were identified as reporting PSIs. Following categorisation by ATC code and duplicate removal, 841 unique PSIs formed the final set of CKD-specific PSIs. The five ATC drug classes containing the largest proportion of CKD-specific PSIs were: Cardiovascular system (26%); Nervous system (13.4%); Blood and blood forming organs (12.4%); Alimentary tract and metabolism (12%); and Anti-infectives for systemic use (11.3%). CONCLUSION: CKD-specific PSIs could be used alone or alongside general PSIs to assess the safety and quality of prescribing within a CKD population.
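The de-duplication and ATC grouping step lends itself to a short sketch. The indicators and ATC codes below are illustrative examples only, not entries from the actual library of 841 PSIs.

```python
from collections import Counter

# Hypothetical prescribing safety indicators (PSIs), each tagged with an ATC code.
psis = [
    ("C03DA01", "Check potassium within 1 week of starting spironolactone in CKD"),
    ("C03DA01", "Check potassium within 1 week of starting spironolactone in CKD"),  # duplicate
    ("N02AA05", "Avoid long-term high-dose opioids without dose review in CKD"),
    ("B01AF01", "Reduce rivaroxaban dose when creatinine clearance is 15-49 ml/min"),
    ("J01GB03", "Monitor gentamicin levels and renal function during treatment"),
]

# Remove duplicate indicators while preserving order, then group by the
# top-level ATC anatomical class (first letter of the code).
unique_psis = list(dict.fromkeys(psis))
by_atc_class = Counter(code[0] for code, _ in unique_psis)

print(f"{len(unique_psis)} unique PSIs")
for atc_class, count in by_atc_class.most_common():
    print(f"  ATC class {atc_class}: {count} ({100 * count / len(unique_psis):.0f}%)")
```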


Subject(s)
Contraindications, Drug , Inappropriate Prescribing/prevention & control , Polypharmacy , Renal Insufficiency, Chronic , Humans , Multiple Chronic Conditions/drug therapy , Renal Insufficiency, Chronic/drug therapy
12.
BMC Fam Pract ; 21(1): 257, 2020 12 05.
Article in English | MEDLINE | ID: mdl-33278890

ABSTRACT

BACKGROUND: We have shown previously that current recommendations in UK guidelines for monitoring long-term conditions are largely based on expert opinion. Due to a lack of robust evidence on optimal monitoring strategies and testing intervals, the guidelines are unclear and incomplete. This uncertainty may underlie the variation in testing that has been observed across the UK between GP practices and regions. METHODS: Our objective was to audit the current testing practices of UK GPs; in particular, their perspectives on laboratory tests for monitoring long-term conditions, the associated workload, and how confident GPs are in ordering and interpreting these tests. We designed an online survey consisting of multiple-choice and open-ended questions that was promoted on social media and in newsletters targeting GPs practicing in the UK. The survey was live in October and November 2019. The results were analysed using a mixed-methods approach. RESULTS: The survey was completed by 550 GPs, of whom 69% had more than 10 years of experience. The majority spent more than 30 min per day on testing (78%), but only half of the respondents felt confident in dealing with abnormal results (53%). There was a high level of disagreement over whether liver function tests and full blood counts should be done 'routinely', 'sometimes', or 'never' in patients with a certain long-term condition. The free-text comments revealed three common themes: (1) pressures that promote over-testing, i.e. guidelines or protocols, workload from secondary care, fear of missing something, and patient expectations; (2) negative consequences of over-testing, i.e. increased workload and patient harm; and (3) uncertainties due to lack of evidence and unclear guidelines. CONCLUSION: These results confirm the variation that has been observed in test-ordering data. The results also show that most GPs spend a significant part of their day ordering and interpreting monitoring tests. The lack of confidence in knowing how to act on abnormal test results underlines the urgent need for robust evidence on optimal testing and the development of clear and unambiguous testing recommendations. Uncertainties surrounding optimal testing have resulted in over-use of tests, which leads to wasted resources, increased GP workload and potential patient harm.


Subject(s)
Diagnostic Tests, Routine , Workload , Attitude of Health Personnel , Humans , Surveys and Questionnaires
13.
Ann Intern Med ; 170(1): 51-58, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30596875

ABSTRACT

Clinical prediction models combine multiple predictors to estimate risk for the presence of a particular condition (diagnostic models) or the occurrence of a certain event in the future (prognostic models). PROBAST (Prediction model Risk Of Bias ASsessment Tool), a tool for assessing the risk of bias (ROB) and applicability of diagnostic and prognostic prediction model studies, was developed by a steering group that considered existing ROB tools and reporting guidelines. The tool was informed by a Delphi procedure involving 38 experts and was refined through piloting. PROBAST is organized into the following 4 domains: participants, predictors, outcome, and analysis. These domains contain a total of 20 signaling questions to facilitate structured judgment of ROB, which was defined to occur when shortcomings in study design, conduct, or analysis lead to systematically distorted estimates of model predictive performance. PROBAST enables a focused and transparent approach to assessing the ROB and applicability of studies that develop, validate, or update prediction models for individualized predictions. Although PROBAST was designed for systematic reviews, it can be used more generally in critical appraisal of prediction model studies. Potential users include organizations supporting decision making, researchers and clinicians who are interested in evidence-based medicine or involved in guideline development, journal editors, and manuscript reviewers.


Subject(s)
Bias , Decision Support Techniques , Models, Statistical , Research Design/standards , Delphi Technique , Diagnosis , Humans , Prognosis , Systematic Reviews as Topic
14.
Ann Intern Med ; 170(1): W1-W33, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30596876

ABSTRACT

Prediction models in health care use predictors to estimate for an individual the probability that a condition or disease is already present (diagnostic model) or will occur in the future (prognostic model). Publications on prediction models have become more common in recent years, and competing prediction models frequently exist for the same outcome or target population. Health care providers, guideline developers, and policymakers are often unsure which model to use or recommend, and in which persons or settings. Hence, systematic reviews of these studies are increasingly demanded, required, and performed. A key part of a systematic review of prediction models is examination of risk of bias and applicability to the intended population and setting. To help reviewers with this process, the authors developed PROBAST (Prediction model Risk Of Bias ASsessment Tool) for studies developing, validating, or updating (for example, extending) prediction models, both diagnostic and prognostic. PROBAST was developed through a consensus process involving a group of experts in the field. It includes 20 signaling questions across 4 domains (participants, predictors, outcome, and analysis). This explanation and elaboration document describes the rationale for including each domain and signaling question and guides researchers, reviewers, readers, and guideline developers in how to use them to assess risk of bias and applicability concerns. All concepts are illustrated with published examples across different topics. The latest version of the PROBAST checklist, accompanying documents, and filled-in examples can be downloaded from www.probast.org.


Subject(s)
Bias , Decision Support Techniques , Models, Statistical , Research Design/standards , Diagnosis , Humans , Prognosis , Systematic Reviews as Topic
15.
Br J Cancer ; 120(11): 1045-1051, 2019 05.
Article in English | MEDLINE | ID: mdl-31015558

ABSTRACT

BACKGROUND: Early identification of cancer in primary care is important and challenging. This study examined the diagnostic utility of inflammatory markers (C-reactive protein, erythrocyte sedimentation rate and plasma viscosity) for cancer diagnosis in primary care. METHODS: Cohort study of 160,000 patients with inflammatory marker testing in 2014, plus 40,000 untested matched controls, using the Clinical Practice Research Datalink (CPRD) with Cancer Registry linkage. The primary outcome was one-year cancer incidence. RESULTS: Primary care patients with a raised inflammatory marker have a one-year cancer incidence of 3.53% (95% CI 3.37-3.70), compared to 1.50% (1.43-1.58) in those with normal inflammatory markers, and 0.97% (0.87-1.07) in untested controls. Cancer risk is greater with higher inflammatory marker levels, with older age and in men; risk rises further when a repeat test is abnormal but falls if it normalises. Men over 50 and women over 60 with raised inflammatory markers have a cancer risk which exceeds the 3% NICE threshold for urgent investigation. Sensitivities for cancer were 46.1% for CRP, 43.6% for ESR and 49.7% for PV. CONCLUSION: Cancer should be considered in patients with raised inflammatory markers. However, inflammatory markers have poor sensitivity for cancer and are therefore not useful as a 'rule-out' test.
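Why a sensitivity below 50% rules out a 'rule-out' role can be shown with a short Bayesian calculation. The sensitivity is taken from the abstract (rounded); the specificity and pre-test probability are assumptions for illustration only.

```python
# A normal result barely lowers the post-test probability when sensitivity is ~46%.
sensitivity = 0.46   # CRP sensitivity for cancer, rounded from the abstract
specificity = 0.80   # assumed for illustration; not reported in the abstract
prevalence = 0.02    # roughly between the tested (3.5%/1.5%) and untested (1.0%) groups

p_neg_given_cancer = 1 - sensitivity
p_neg_given_no_cancer = specificity
p_neg = prevalence * p_neg_given_cancer + (1 - prevalence) * p_neg_given_no_cancer

# Post-test probability of cancer after a normal inflammatory marker result.
p_cancer_given_neg = prevalence * p_neg_given_cancer / p_neg
print(f"Pre-test probability: {prevalence:.1%}")
print(f"Post-test probability after a negative test: {p_cancer_given_neg:.1%}")
```

Under these assumed figures the probability of cancer falls only from about 2.0% to about 1.4%, which is why a normal inflammatory marker cannot safely rule cancer out.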


Subject(s)
Blood Sedimentation , Blood Viscosity , C-Reactive Protein/analysis , Electronic Health Records , Neoplasms/diagnosis , Primary Health Care , Adult , Age Factors , Aged , Biomarkers , Female , Humans , Male , Middle Aged , Neoplasms/epidemiology , Prospective Studies
16.
J Med Internet Res ; 21(9): e14231, 2019 09 25.
Article in English | MEDLINE | ID: mdl-31573906

ABSTRACT

BACKGROUND: Reducing childhood morbidity and mortality is challenging, particularly in countries with a shortage of qualified health care workers. Lack of trainers makes it difficult to provide the necessary continuing education in pediatrics for postregistration health professionals. Digital education, teaching and learning by means of digital technologies, has the potential to deliver medical education to a large audience while limiting the number of trainers needed. OBJECTIVE: The goal of the research was to evaluate whether digital education can replace traditional learning to improve postregistration health professionals' knowledge, skills, attitudes, and satisfaction and foster behavior change in the field of pediatrics. METHODS: We completed a systematic review of the literature by following the Cochrane methodology. We searched 7 major electronic databases for articles published from January 1990 to August 2017. No language restrictions were applied. We independently selected studies, extracted data, and assessed risk of bias, and pairs of authors compared information. We contacted authors of studies for additional information if necessary. All pooled analyses were based on random effects models. We included individually or cluster randomized controlled trials that compared digital education with traditional learning, no intervention, or other forms of digital education. We assessed the quality of evidence using the Grading of Recommendations, Assessment, Development, and Evaluations (GRADE) criteria. RESULTS: Twenty studies (1382 participants) were included. Participants included pediatricians, physicians, nurses, and midwives. The digital education technologies assessed included high-fidelity mannequins (6 studies), computer-based education (12 studies), mobile learning (1 study), and virtual reality (1 study). Most studies reported that digital education was either as effective as or more effective than the control intervention for outcomes including skill, knowledge, attitude, and satisfaction. High-fidelity mannequins were associated with higher postintervention skill scores compared with low-fidelity mannequins (standardized mean difference 0.62; 95% CI 0.17-1.06; moderate effect size, low-quality evidence). One study reported change in physicians' practice behavior and found similar effects between offline plus online digital education and no intervention. The only study that assessed impact on patient outcome found no difference between intervention and control groups. None of the included studies reported adverse or untoward effects or economic outcomes of the digital education interventions. The risk of bias was mainly unclear or high. The quality of evidence was low due to study inconsistencies, limitations, or imprecision across the studies. CONCLUSIONS: Digital education for postregistration health professions education in pediatrics is at least as effective as traditional learning and more effective than no learning. High-fidelity mannequins were found to be more effective at improving skills than traditional learning with low-fidelity mannequins. Computer-based offline/online digital education was better than no intervention for knowledge and skill outcomes and as good as traditional face-to-face learning. This review highlights evidence gaps calling for more methodologically rigorous randomized controlled trials on the topic. TRIAL REGISTRATION: PROSPERO CRD42017057793; https://tinyurl.com/y5q9q5o6.
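The pooled analyses above are based on random-effects models; the sketch below shows a minimal DerSimonian-Laird pooling of standardised mean differences. The study-level estimates and standard errors are invented and do not reproduce the review's data.

```python
import numpy as np

# Invented study-level SMDs and standard errors for illustration.
smd = np.array([0.40, 0.85, 0.30, 0.95])
se = np.array([0.25, 0.30, 0.20, 0.35])

# Fixed-effect (inverse-variance) pooling, used to estimate heterogeneity.
w_fixed = 1 / se**2
pooled_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)

# Between-study variance (tau^2) via the DerSimonian-Laird moment estimator.
q = np.sum(w_fixed * (smd - pooled_fixed) ** 2)
df = len(smd) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooling with the extra between-study variance added in.
w_random = 1 / (se**2 + tau2)
pooled = np.sum(w_random * smd) / np.sum(w_random)
pooled_se = np.sqrt(1 / np.sum(w_random))
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"Pooled SMD {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f}), tau^2 = {tau2:.3f}")
```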


Subject(s)
Education, Continuing/methods , Education, Distance/methods , Health Personnel/education , Pediatrics/education , Bias , Computer-Assisted Instruction , Education, Medical, Continuing/methods , Education, Nursing, Continuing/methods , Humans , Learning , Manikins , Midwifery/education , Mobile Applications , Virtual Reality
17.
J Med Internet Res ; 21(3): e13000, 2019 03 04.
Article in English | MEDLINE | ID: mdl-30829576

ABSTRACT

BACKGROUND: Tobacco smoking, one of the leading causes of preventable death and disease, is associated with 7 million deaths every year. This is estimated to rise to more than 8 million deaths per year by 2030, with 80% occurring in low- and middle-income countries. Digital education, teaching, and learning using digital technologies have the potential to increase educational opportunities, supplement teaching activities, and decrease distance barriers in health professions education. OBJECTIVE: The primary objective of this systematic review was to evaluate the effectiveness of digital education compared with various controls in improving learners' knowledge, skills, attitudes, and satisfaction to deliver smoking cessation therapy. The secondary objectives were to assess patient-related outcomes, change in health professionals' practice or behavior, self-efficacy or self-rated competence of health professionals in delivering smoking cessation therapy, and cost-effectiveness of the interventions. METHODS: We searched 7 electronic databases and 2 trial registers for randomized controlled trials published between January 1990 and August 2017. We used gold standard Cochrane methods to select and extract data and appraise eligible studies. RESULTS: A total of 11 studies (number of participants, n=2684) were included in the review. All studies found that digital education was at least as effective as traditional or usual learning. There was some suggestion that blended education results in similar or greater improvements in knowledge (standardized mean difference, SMD=0.19, 95% CI -0.35 to 0.72), skill (SMD=0.58, 95% CI 0.08-1.08), and satisfaction (SMD=0.62, 95% CI 0.12-1.12) compared with digital education or usual learning alone. There was also some evidence for improved attitude (SMD=0.45, 95% CI 0.18-0.72) following digital education compared with usual learning. Only 1 study reported patient outcomes and the setup cost of blended education but did not compare outcomes among groups. There were insufficient data to investigate what components of the digital education interventions were associated with the greatest improvements in learning outcomes. CONCLUSIONS: The evidence suggests that digital education is at least as effective as usual learning in improving health professionals' knowledge and skill for delivering smoking cessation therapy. However, limitations in the evidence base mean that these conclusions should be interpreted with some caution. TRIAL REGISTRATION: PROSPERO CRD42016046815; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=46815.


Subject(s)
Education, Distance/methods , Health Education/methods , Health Personnel/education , Smoking Cessation/methods , Humans
18.
J Med Internet Res ; 21(2): e12913, 2019 02 14.
Article in English | MEDLINE | ID: mdl-30762583

ABSTRACT

Synthesizing evidence from randomized controlled trials of digital health education poses some challenges. These include a lack of clear categorization of digital health education in the literature; constantly evolving concepts, pedagogies, or theories; and a multitude of methods, features, technologies, or delivery settings. The Digital Health Education Collaboration was established to evaluate the evidence on digital education in health professions; inform policymakers, educators, and students; and ultimately, change the way in which these professionals learn and are taught. The aim of this paper is to present the overarching methodology that we use to synthesize evidence across our digital health education reviews and to discuss challenges related to the process. For our research, we followed Cochrane recommendations for the conduct of systematic reviews; all reviews are reported according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidance. This included assembling experts in various digital health education fields; identifying gaps in the evidence base; formulating focused research questions, aims, and outcome measures; choosing appropriate search terms and databases; defining inclusion and exclusion criteria; running the searches jointly with librarians and information specialists; managing abstracts; retrieving full-text versions of papers; extracting and storing large datasets; critically appraising the quality of studies; analyzing data; discussing findings; drawing meaningful conclusions; and drafting research papers. The approach used for synthesizing evidence from digital health education trials is commonly regarded as the most rigorous benchmark for conducting systematic reviews. Although we acknowledge the presence of certain biases ingrained in the process, we have clearly highlighted and minimized those biases by strictly adhering to scientific rigor, methodological integrity, and standard operating procedures. This paper will be a valuable asset for researchers and methodologists undertaking systematic reviews in digital health education.


Subject(s)
Health Education/methods , Health Occupations/education , Humans , Learning
19.
Emerg Med J ; 36(5): 287-292, 2019 May.
Article in English | MEDLINE | ID: mdl-30842204

ABSTRACT

BACKGROUND: Early warning scores (EWS) were developed in acute hospital settings to improve recognition and response to patient deterioration. In 2012, the UK Royal College of Physicians developed the National Early Warning Score (NEWS) to standardise EWS across the NHS. Its use was also recommended outside acute hospital settings; however, there is limited information about NEWS in these settings. From March 2015, NEWS was implemented across the healthcare system in the West of England, with the aim that NEWS would be calculated for all patients prior to referral into acute care. AIM: To describe the distribution and use of NEWS in out-of-hospital settings for patients with acute illness or long-term conditions, following system wide implementation. METHOD: Anonymised data were obtained from 115 030 emergency department (ED) attendances, 1 137 734 ambulance electronic records, 31 063 community attendances and 15 160 general practitioner (GP) referrals into secondary care, in the West of England. Descriptive statistics are presented. RESULTS: Most attendance records had NEWS=0-2: 80% in ED, 67% of ambulance attendances and 72% in the community. In contrast, only 8%, 18% and 11% of attendances had NEWS ≥5 (the trigger for escalation of care in-hospital), respectively. Referrals by a GP had higher NEWS on average (46% NEWS=0-2 and 30% NEWS ≥5). By April 2016, the use of NEWS was reasonably stable in ED, ambulance and community populations, and still increasing for GP referrals. CONCLUSIONS: NEWS ≥5 occurred in less than 20% of ED, ambulance and community populations studied and 30% of GP referrals. This suggests that in most out-of-hospital settings studied, high scores are reasonably uncommon.


Subject(s)
Clinical Deterioration , Geographic Mapping , Research Design/statistics & numerical data , Adult , Aged , Aged, 80 and over , Delivery of Health Care/methods , Delivery of Health Care/trends , Emergency Medical Services , Emergency Service, Hospital/statistics & numerical data , England , Female , Hospital Mortality , Humans , Length of Stay , Male , Middle Aged
20.
Support Care Cancer ; 26(5): 1635-1644, 2018 May.
Article in English | MEDLINE | ID: mdl-29209836

ABSTRACT

PURPOSE: We conducted a systematic review and individual patient data (IPD) meta-analysis to examine the utility of cystatin C for evaluation of glomerular function in children with cancer. METHODS: Eligible studies evaluated the accuracy of cystatin C for detecting poor renal function in children undergoing chemotherapy. Study quality was assessed using QUADAS-2. Authors of four studies shared IPD. We calculated the correlation between log cystatin C and GFR stratified by study and measure of cystatin C. We dichotomized the reference standard at a GFR of 80 ml/min/1.73 m² and stratified cystatin C at 1 mg/l, to calculate sensitivity and specificity in each study and according to age group (0-4, 5-12, and ≥ 13 years). In sensitivity analyses, we investigated different GFR and cystatin C cut points. We used logistic regression to estimate the association of impaired renal function with log cystatin C and quantified diagnostic accuracy using the area under the ROC curve (AUC). RESULTS: Six studies, which used different test and reference standard thresholds, suggested that cystatin C has the potential to monitor renal function in children undergoing chemotherapy for malignancy. IPD data (504 samples, 209 children) showed that cystatin C has poor sensitivity (63%) and moderate specificity (89%), although use of a GFR cut point of < 60 ml/min/1.73 m² (data only available from two of the studies) estimated sensitivity to be 92% and specificity 81.3%. The AUC for the combined data set was 0.890 (95% CI 0.826, 0.951). Diagnostic accuracy appeared to decrease with age. CONCLUSIONS: Cystatin C has better diagnostic accuracy than creatinine as a test for glomerular dysfunction in young people undergoing treatment for cancer. Diagnostic accuracy is not sufficient for it to replace current reference standards for predicting clinically relevant impairments that may alter dosing of important nephrotoxic agents.
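The dichotomised analysis and the AUC reported above can be illustrated with simulated data: dichotomise the reference standard at a GFR of 80 ml/min/1.73 m², dichotomise cystatin C at 1 mg/l, and compute sensitivity, specificity and the area under the ROC curve. The simulated values and the linear cystatin C-GFR relationship are assumptions, not the review's IPD.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Simulated data for illustration only.
rng = np.random.default_rng(0)
n = 200
gfr = rng.normal(95, 25, n)                                # measured GFR, ml/min/1.73 m^2
cystatin_c = 1.3 - 0.006 * gfr + rng.normal(0, 0.15, n)    # mg/l, inversely related to GFR

impaired = (gfr < 80).astype(int)   # reference standard dichotomised at GFR 80
test_positive = cystatin_c >= 1.0   # index test dichotomised at 1 mg/l

tp = np.sum(test_positive & (impaired == 1))
fn = np.sum(~test_positive & (impaired == 1))
tn = np.sum(~test_positive & (impaired == 0))
fp = np.sum(test_positive & (impaired == 0))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(impaired, cystatin_c)  # higher cystatin C scores the impaired group
print(f"Sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, AUC {auc:.2f}")
```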


Subject(s)
Cystatin C/metabolism , Neoplasms/complications , Renal Insufficiency/diagnosis , Adolescent , Child , Female , Humans , Male , Middle Aged , Neoplasms/drug therapy , Qualitative Research , Renal Insufficiency/etiology , Renal Insufficiency/pathology