Results 1 - 20 of 38

1.
Med Teach ; : 1-15, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38627020

ABSTRACT

PURPOSE: Management reasoning is a distinct subset of clinical reasoning. We sought to explore features to be considered when designing assessments of management reasoning. METHODS: This is a hybrid empirical research study, narrative review, and expert perspective. In 2021, we reviewed and discussed 10 videos of simulated (staged) physician-patient encounters, actively seeking actions that offered insights into assessment of management reasoning. We analyzed our own observations in conjunction with literature on clinical reasoning assessment, using a constant comparative qualitative approach. RESULTS: Distinguishing features of management reasoning that will influence its assessment include management scripts, shared decision-making, process knowledge, illness-specific knowledge, and tailoring of the encounter and management plan. Performance domains that merit special consideration include communication, integration of patient preferences, adherence to the management script, and prognostication. Additional facets of encounter variation include the clinical problem, clinical and nonclinical patient characteristics (including preferences, values, and resources), team/system characteristics, and encounter features. We cataloged several relevant assessment approaches including written/computer-based, simulation-based, and workplace-based modalities, and a variety of novel response formats. CONCLUSIONS: Assessment of management reasoning could be improved with attention to the performance domains, facets of variation, and variety of approaches herein identified.

2.
Acad Med ; 97(10): 1554-1563, 2022 10 01.
Article in English | MEDLINE | ID: mdl-35830262

ABSTRACT

PURPOSE: An essential yet oft-neglected step in cost evaluations is the selection of resources (ingredients) to include in cost estimates. The ingredients that most influence the cost of physician continuous professional development (CPD) are unknown, as are the relative costs of instructional modalities. This study's purpose was to estimate the costs of cost ingredients and instructional modalities in physician CPD. METHOD: The authors conducted a systematic review in April 2020, searching MEDLINE, Embase, PsycInfo, and the Cochrane Library for comparative cost evaluations of CPD for practicing physicians. Two reviewers, working independently, screened articles for inclusion and extracted information on costs (converted to 2021 U.S. dollars) for each intervention overall, each ingredient, and each modality. RESULTS: Of 3,338 eligible studies, 62 were included, enumerating costs for 86 discrete training interventions or instructional modalities. The most frequently reported ingredients were faculty time (25 of 86 interventions), materials (24), administrator/staff time (23), and travel (20). Ingredient costs varied widely, ranging from a per-physician median of $4 for postage (10 interventions) to $525 for learner time (13); equipment (9) and faculty time were also relatively expensive (median > $170). Among instructional modalities (≤ 11 interventions per modality), audit and feedback performed by physician learners, computer-based modules, computer-based virtual patients, in-person lectures, and experiences with real patients were relatively expensive (median > $1,000 per physician). Mailed paper materials, video clips, and audit and feedback performed by others were relatively inexpensive (median ≤ $62 per physician). Details regarding ingredient selection (10 of 62 studies), quantitation (10), and pricing (26) were reported infrequently. CONCLUSIONS: Some ingredients, including time, are more important (i.e., contribute more to total costs) than others and should be prioritized in cost evaluations. Data on the relative costs of instructional modalities are insightful but limited. The methods and reporting of cost valuations merit improvement.


Subjects
Physicians, Cost-Benefit Analysis, Costs and Cost Analysis, Faculty, Humans
3.
J Surg Educ ; 79(5): 1270-1281, 2022.
Article in English | MEDLINE | ID: mdl-35688704

ABSTRACT

OBJECTIVES: Well-developed mental representations of a task are fundamental to proficient performance. 'Video Commentary' (VC) is a novel assessment intended to measure mental representations of surgical tasks that would reflect an important aspect of task proficiency. Whether examinees' actual response processes align with this intent remains unknown. As part of ongoing validation of the assessment, we sought to understand examinees' response processes in VC. DESIGN: Grounded theory qualitative study. In 2019, residents were interviewed about their understanding of and approach to VC. Using grounded theory, we created a theoretical model explaining relationships among factors that influence residents' response processes and performance. Residents' perceived purpose of VC was also explored using Likert-type questions. SETTING: Academic surgical residency program. PARTICIPANTS: Forty-eight surgical residents (PGY-1 to PGY-5). RESULTS: Analysis of narrative comments indicated that residents' perceived purposes of VC generally align with the educator's intent. Resident response processes are influenced by test characteristics, residents' perception and understanding of VC, and residents' personal characteristics. Four strategies seem to guide how residents respond, namely a focus on speed, points, logic, and relevance. Quantitative results indicated residents believe VC scores reflect their ability to speak quickly, ability to think quickly, and knowledge of anatomy (mean = 5.0, 4.5, and 4.4 respectively [1 = strongly disagree, 6 = strongly agree]). PGY-1 and PGY-2 residents tend to focus on naming facts whereas PGY-4 and PGY-5 residents focus on providing comprehensive descriptions. CONCLUSIONS: Residents generally have an accurate understanding of the purpose of VC. However, their use of different approaches could represent a threat to validity. The response strategies of speed, points, logic, and relevance may inform other clinical skills assessments.


Subjects
General Surgery, Internship and Residency, Clinical Competence, Educational Measurement/methods, General Surgery/education, Humans, Longitudinal Studies, Qualitative Research
4.
Perspect Med Educ ; 11(3): 156-164, 2022 06.
Article in English | MEDLINE | ID: mdl-35357652

ABSTRACT

INTRODUCTION: We sought to evaluate the reporting and methodological quality of cost evaluations of physician continuing professional development (CPD). METHODS: We conducted a systematic review, searching MEDLINE, Embase, PsycInfo, and the Cochrane Database for studies comparing the cost of physician CPD (last update 23 April 2020). Two reviewers, working independently, screened all articles for inclusion. Two reviewers extracted information on reporting quality using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS), and on methodological quality using the Medical Education Research Study Quality Instrument (MERSQI) and a published reference case. RESULTS: Of 3338 potentially eligible studies, 62 were included. Operational definitions of methodological and reporting quality elements were iteratively revised. Articles reported mean (SD) 43% (20%) of CHEERS elements for the Title/Abstract, 56% (34%) for Introduction, 66% (19%) for Methods, 61% (17%) for Results, and 66% (30%) for Discussion, with overall reporting index 292 (83) (maximum 500). Valuation methods were reported infrequently (resource selection 10 of 62 [16%], resource quantitation 10 [16%], pricing 26 [42%]), as were descriptions/discussion of the physicians trained (42 [68%]), training setting (42 [68%]), training intervention (40 [65%]), sensitivity analyses of uncertainty (9 [15%]), and generalizability (30 [48%]). MERSQI scores ranged from 6.0 to 16.0 (mean 11.2 [2.4]). Changes over time in reporting index (initial 241 [105], final 321 [52]) and MERSQI scores (initial 9.8 [2.7], final 11.9 [1.9]) were not statistically significant (p ≥ 0.08). DISCUSSION: Methods and reporting of HPE cost evaluations fall short of current standards. Gaps exist in the valuation, analysis, and contextualization of cost outcomes.


Subjects
Physicians, Research Design, Cost-Benefit Analysis, Data Collection, Delivery of Health Care, Humans
5.
JAMA Netw Open ; 5(1): e2144973, 2022 01 04.
Article in English | MEDLINE | ID: mdl-35080604

ABSTRACT

Importance: The economic impact of continuous professional development (CPD) education is incompletely understood. Objective: To systematically identify and synthesize published research examining the costs associated with physician CPD for drug prescribing. Evidence Review: MEDLINE, Embase, PsycInfo, and the Cochrane Database were searched from inception to April 23, 2020, for comparative studies that evaluated the cost of CPD focused on drug prescribing. Two reviewers independently screened all articles for inclusion and reviewed all included articles to extract data on participants, educational interventions, study designs, and outcomes (costs and effectiveness). Results were synthesized for educational costs, health care costs, and cost-effectiveness. Findings: Of 3338 articles screened, 38 were included in this analysis. These studies included at least 15,659 health care professionals and 1,963,197 patients. Twelve studies reported on educational costs, ranging from $281 to $183,554 (median, $15,664). When economic outcomes were evaluated, 31 of 33 studies (94%) comparing CPD with no intervention found that CPD was associated with reduced health care costs (drug costs), ranging from $4,731 to $6,912,000 (median, $79,373). Four studies found reduced drug costs for 1-on-1 outreach compared with other CPD approaches. Regarding cost-effectiveness, among 5 studies that compared CPD with no intervention, the incremental cost-effectiveness ratio for a 10% improvement in prescribing ranged from $15,390 to $437,027 to train all program participants. Four comparisons of alternative CPD approaches found that 1-on-1 educational outreach was more effective but more expensive than group education or mailed materials (incremental cost-effectiveness ratio, $18-$4,105 per physician trained). Conclusions and Relevance: In this systematic review, CPD for drug prescribing was associated with reduced health care (drug) costs. The educational costs and cost-effectiveness of CPD varied widely. Several CPD instructional approaches (including educational outreach) were more effective but more costly than comparators.
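The incremental cost-effectiveness ratios reported here divide the difference in cost between two strategies by the difference in effect. A minimal sketch of that arithmetic, using entirely hypothetical per-physician costs and prescribing effects rather than figures from any included study:

```python
# Minimal sketch of an incremental cost-effectiveness ratio (ICER) calculation.
# All figures below are hypothetical, for illustration only.

def icer(cost_new: float, cost_comparator: float,
         effect_new: float, effect_comparator: float) -> float:
    """Return incremental cost per unit of incremental effect."""
    delta_cost = cost_new - cost_comparator
    delta_effect = effect_new - effect_comparator
    if delta_effect == 0:
        raise ValueError("No difference in effect; ICER is undefined.")
    return delta_cost / delta_effect

# Hypothetical comparison: 1-on-1 educational outreach vs. mailed materials,
# with effect measured as the proportion of appropriate prescriptions.
outreach_cost_per_physician = 400.0   # hypothetical
mailed_cost_per_physician = 60.0      # hypothetical
outreach_effect = 0.62                # hypothetical
mailed_effect = 0.54                  # hypothetical

ratio = icer(outreach_cost_per_physician, mailed_cost_per_physician,
             outreach_effect, mailed_effect)
# Express as cost for a 10-percentage-point improvement in prescribing.
print(f"ICER: ${ratio * 0.10:,.0f} per physician for a 10-point improvement")
```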


Subjects
Drug Prescriptions/economics, Continuing Medical Education/economics, Pharmacy Education/economics, Cost-Benefit Analysis, Drug Costs, Health Care Costs, Humans
6.
Acad Med ; 97(1): 152-161, 2022 01 01.
Article in English | MEDLINE | ID: mdl-34432716

ABSTRACT

PURPOSE: Nearly all health care professionals engage in continuous professional development (CPD), yet little is known about the cost and cost-effectiveness of physician CPD. Clarification of key concepts, comprehensive identification of published work, and determination of research gaps would facilitate application of existing evidence and planning for future investigations. The authors sought to systematically map study themes, methods, and outcomes in peer-reviewed literature on the cost and value of physician CPD. METHOD: The authors conducted a scoping review, systematically searching MEDLINE, Embase, PsycInfo, and Cochrane Library databases for comparative economic evaluations of CPD for practicing physicians through April 2020. Two reviewers, working independently, screened all articles for inclusion. Three reviewers iteratively reviewed all included articles to inductively identify key features including participants, educational interventions, study designs, cost ingredients, and cost analyses. Two reviewers then independently reexamined all included articles to code these features. RESULTS: Of 3,338 potentially eligible studies, 111 were included. Physician specialties included internal, family, or general medicine (80 studies [72%]), surgery (14 studies [13%]), and medicine subspecialties (7 studies [6%]). Topics most often addressed general medicine (45 studies [41%]) or appropriate drug use (37 studies [33%]). Eighty-seven studies (78%) compared CPD with no intervention. Sixty-three studies (57%) reported the cost of training, and 79 (71%) evaluated the economic impact (money saved/lost following CPD). Training cost ingredients (median 3 itemized per study) and economic impact ingredients (median 1 per study) were infrequently and incompletely identified, quantified, or priced. Twenty-seven studies (24%) reported cost-impact expressions such as cost-effectiveness ratio or net value. Nineteen studies (17%) reported sensitivity analyses. CONCLUSIONS: Studies evaluating the costs and economic impact of physician CPD are few. Gaps exist in identification, quantification, pricing, and analysis of cost outcomes. The authors propose a comprehensive framework for appraising ingredients and a preliminary reference case for economic evaluations.


Subjects
Physicians, Cost-Benefit Analysis, Humans
7.
Med Teach ; 43(9): 984-998, 2021 09.
Article in English | MEDLINE | ID: mdl-33280483

ABSTRACT

Growing demand for accountability, transparency, and efficiency in health professions education is expected to drive increased demand for, and use of, cost and value analyses. In this AMEE Guide, we introduce key concepts, methods, and literature that will enable novices in economics to conduct simple cost and value analyses, hold informed discussions with economic specialists, and undertake further learning on more advanced economic topics. The practical structure for conducting analyses provided in this guide will enable researchers to produce robust results that are meaningful and useful for improving educational practice. Key steps include defining the economic research question, identifying an appropriate economic study design, carefully identifying cost ingredients, quantifying and pricing the ingredients consumed, and conducting sensitivity analyses to explore uncertainties in the results.
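The guide's key steps (identify the ingredients consumed, quantify and price them, then probe uncertainty) can be illustrated with a small sketch. The ingredient names, quantities, and unit prices below are hypothetical placeholders, not values from the guide:

```python
# Minimal sketch of the ingredients method for costing an educational
# intervention, followed by a one-way sensitivity analysis.
# All ingredient names, quantities, and prices are hypothetical.

ingredients = {
    # name: (quantity consumed, unit price in USD)
    "faculty time (hours)":    (20, 150.0),
    "learner time (hours)":    (100, 80.0),
    "materials (per learner)": (50, 12.0),
    "room rental (sessions)":  (5, 200.0),
}

def total_cost(items: dict[str, tuple[float, float]]) -> float:
    """Sum quantity x unit price over all ingredients."""
    return sum(qty * price for qty, price in items.values())

base_case = total_cost(ingredients)
print(f"Base-case cost: ${base_case:,.0f}")

# One-way sensitivity analysis: vary each unit price by +/-25% while holding
# the others at base case, to see which ingredient drives the total.
for name, (qty, price) in ingredients.items():
    low = dict(ingredients); low[name] = (qty, price * 0.75)
    high = dict(ingredients); high[name] = (qty, price * 1.25)
    print(f"{name}: ${total_cost(low):,.0f} - ${total_cost(high):,.0f}")
```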


Subjects
Research Design, Research Personnel, Health Occupations, Humans
8.
Med Educ ; 53(12): 1196-1208, 2019 12.
Article in English | MEDLINE | ID: mdl-31402515

ABSTRACT

CONTEXT: High-quality research into education costs can inform better decision making. Improvements to cost research can be guided by information about the research questions, methods and reporting of studies evaluating costs in health professions education (HPE). Our objective was to appraise the overall state of the field and evaluate temporal trends in the methods and reporting quality of cost evaluations in HPE research. METHODS: We searched the MEDLINE, CINAHL (Cumulative Index to Nursing and Allied Health Literature), EMBASE, Business Source Complete and ERIC (Education Resources Information Centre) databases on 31 July 2017. To evaluate trends over time, we sampled research reports at 5-year intervals (2001, 2006, 2011 and 2016). All original research studies in HPE that reported a cost outcome were included. The Medical Education Research Study Quality Instrument (MERSQI) and the BMJ economic checklist were used to appraise methodological and reporting quality, respectively. Trends in quality over time were analysed. RESULTS: A total of 78 studies were included, of which 16 were published in 2001, 15 in 2006, 20 in 2011 and 27 in 2016. The region most commonly represented was the USA (n = 43). The profession most commonly referred to was that of the physician (n = 46). The mean ± standard deviation (SD) MERSQI score was 10.9 ± 2.6 out of 18, with no significant change over time (p = 0.55). The mean ± SD BMJ score was 13.5 ± 7.1 out of 35, with no significant change over time (p = 0.39). A total of 49 (63%) studies stated a cost-related research question, 23 (29%) stated the type of cost evaluation used, and 31 (40%) described the method of estimating resource quantities and unit costs. A total of 16 studies compared two or more interventions and reported both cost and learning outcomes. CONCLUSIONS: The absolute number of cost evaluations in HPE is increasing. However, there are shortcomings in the quality of methodology and reporting, and these are not improving over time.


Subjects
Checklist, Cost-Benefit Analysis, Health Occupations, Quality of Health Care, Research Design, Medical Education, Health Occupations/education, Health Occupations/trends, Humans
9.
Surgery ; 163(4): 944-949, 2018 04.
Article in English | MEDLINE | ID: mdl-29452702

ABSTRACT

Simulation has become an integral part of physician education, and abundant evidence confirms that simulation-based education improves learners' skills and behaviors and is associated with improved patient outcomes. The resources required to implement simulation-based education, however, have led some stakeholders to question the overall value proposition of simulation-based education. This paper summarizes the information from a special panel on this topic and defines research priorities for the field. Future work should focus on both outcomes and costs, with robust measurement of resource investments, provider performance (in both simulation and real settings), patient outcomes, and impact on the health care organization. Increased attention to training practicing clinicians and health care teams is also essential. Clarifying the value proposition of simulation-based education will require a major national effort with funding from multiple sponsors and active engagement of a variety of stakeholders.


Subjects
Medical Education/methods, General Surgery/education, Simulation Training, Clinical Competence, Medical Education/economics, Medical Education/standards, General Surgery/economics, Humans, Outcome Assessment (Health Care), Research, Simulation Training/economics, Simulation Training/standards, United States
10.
Acad Med ; 93(2): 314-323, 2018 02.
Article in English | MEDLINE | ID: mdl-28640032

ABSTRACT

PURPOSE: To characterize reporting of P values, confidence intervals (CIs), and statistical power in health professions education research (HPER) through manual and computerized analysis of published research reports. METHOD: The authors searched PubMed, Embase, and CINAHL in May 2016, for comparative research studies. For manual analysis of abstracts and main texts, they randomly sampled 250 HPER reports published in 1985, 1995, 2005, and 2015, and 100 biomedical research reports published in 1985 and 2015. Automated computerized analysis of abstracts included all HPER reports published 1970-2015. RESULTS: In the 2015 HPER sample, P values were reported in 69/100 abstracts and 94 main texts. CIs were reported in 6 abstracts and 22 main texts. Most P values (≥77%) were ≤.05. Across all years, 60/164 two-group HPER studies had ≥80% power to detect a between-group difference of 0.5 standard deviations. From 1985 to 2015, the proportion of HPER abstracts reporting a CI did not change significantly (odds ratio [OR] 2.87; 95% CI 1.04, 7.88) whereas that of main texts reporting a CI increased (OR 1.96; 95% CI 1.39, 2.78). Comparison with biomedical studies revealed similar reporting of P values, but more frequent use of CIs in biomedicine. Automated analysis of 56,440 HPER abstracts found 14,867 (26.3%) reporting a P value, 3,024 (5.4%) reporting a CI, and increased reporting of P values and CIs from 1970 to 2015. CONCLUSIONS: P values are ubiquitous in HPER, CIs are rarely reported, and most studies are underpowered. Most reported P values would be considered statistically significant.
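The power threshold used above rests on a standard two-sample calculation: with a two-sided alpha of 0.05 and 80% power, detecting a between-group difference of 0.5 standard deviations requires roughly 63-64 participants per group. A sketch of that calculation using the usual normal-approximation formula (not the authors' exact method):

```python
# Approximate per-group sample size for a two-sample comparison of means,
# using n = 2 * (z_{alpha/2} + z_beta)^2 / d^2.
# Generic illustration of the calculation; not the authors' code.
from statistics import NormalDist

def n_per_group(effect_size_d: float, alpha: float = 0.05, power: float = 0.80) -> float:
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)           # desired power
    return 2 * (z_alpha + z_beta) ** 2 / effect_size_d ** 2

print(n_per_group(0.5))  # ~62.8, conventionally rounded up to 63-64 per group
```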


Subjects
Professional Education, Health Occupations/education, Research Report, Statistics as Topic, Confidence Intervals, Humans
11.
Med Educ ; 51(7): 680-682, 2017 07.
Article in English | MEDLINE | ID: mdl-28722187
12.
Acad Med ; 91(10): 1359-1369, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27049538

ABSTRACT

Quantitative scores fail to capture all important features of learner performance. This awareness has led to increased use of qualitative data when assessing health professionals. Yet the use of qualitative assessments is hampered by incomplete understanding of their role in forming judgments, and lack of consensus in how to appraise the rigor of judgments therein derived. The authors articulate the role of qualitative assessment as part of a comprehensive program of assessment, and translate the concept of validity to apply to judgments arising from qualitative assessments. They first identify standards for rigor in qualitative research, and then use two contemporary assessment validity frameworks to reorganize these standards for application to qualitative assessment. Standards for rigor in qualitative research include responsiveness, reflexivity, purposive sampling, thick description, triangulation, transparency, and transferability. These standards can be reframed using Messick's five sources of validity evidence (content, response process, internal structure, relationships with other variables, and consequences) and Kane's four inferences in validation (scoring, generalization, extrapolation, and implications). Evidence can be collected and evaluated for each evidence source or inference. The authors illustrate this approach using published research on learning portfolios. The authors advocate a "methods-neutral" approach to assessment, in which a clearly stated purpose determines the nature of and approach to data collection and analysis. Increased use of qualitative assessments will necessitate more rigorous judgments of the defensibility (validity) of inferences and decisions. Evidence should be strategically sought to inform a coherent validity argument.

13.
Surg Endosc ; 30(2): 512-520, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26091982

ABSTRACT

BACKGROUND: The Fundamentals of Laparoscopic Surgery (FLS) program uses five simulation stations (peg transfer, precision cutting, loop ligation, and suturing with extracorporeal and intracorporeal knot tying) to teach and assess laparoscopic surgery skills. We sought to summarize evidence regarding the validity of scores from the FLS assessment. METHODS: We systematically searched for studies evaluating the FLS as an assessment tool (last search update February 26, 2013). We classified validity evidence using the currently standard validity framework (content, response process, internal structure, relations with other variables, and consequences). RESULTS: From a pool of 11,628 studies, we identified 23 studies reporting validity evidence for FLS scores. Studies involved residents (n = 19), practicing physicians (n = 17), and medical students (n = 8), in specialties of general (n = 17), gynecologic (n = 4), urologic (n = 1), and veterinary (n = 1) surgery. Evidence was most common in the form of relations with other variables (n = 22, most often expert-novice differences). Only three studies reported internal structure evidence (inter-rater or inter-station reliability), two studies reported content evidence (i.e., derivation of assessment elements), and three studies reported consequences evidence (definition of pass/fail thresholds). Evidence nearly always supported the validity of FLS total scores. However, the loop ligation task lacks discriminatory ability. CONCLUSION: Validity evidence confirms expected relations with other variables and acceptable inter-rater reliability, but other validity evidence is sparse. Given the high-stakes use of this assessment (required for board eligibility), we suggest that more validity evidence is required, especially to support its content (selection of tasks and scoring rubric) and the consequences (favorable and unfavorable impact) of assessment.


Subjects
Clinical Competence, Laparoscopy/education, Simulation Training/methods, Humans, Reproducibility of Results, United States
15.
Adv Health Sci Educ Theory Pract ; 20(5): 1149-75, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25702196

ABSTRACT

In order to construct and evaluate the validity argument for the Objective Structured Assessment of Technical Skills (OSATS), based on Kane's framework, we conducted a systematic review. We searched MEDLINE, EMBASE, CINAHL, PsycINFO, ERIC, Web of Science, Scopus, and selected reference lists through February 2013. Working in duplicate, we selected original research articles in any language evaluating the OSATS as an assessment tool for any health professional. We iteratively and collaboratively extracted validity evidence from included articles to construct and evaluate the validity argument for varied uses of the OSATS. Twenty-nine articles met the inclusion criteria, all focussed on surgical technical skills assessment. We identified three intended uses for the OSATS, namely formative feedback, high-stakes assessment and program evaluation. Following Kane's framework, four inferences in the validity argument were examined (scoring, generalization, extrapolation, decision). For formative feedback and high-stakes assessment, there was reasonable evidence for scoring and extrapolation. However, for high-stakes assessment there was a dearth of evidence for generalization aside from inter-rater reliability data and an absence of evidence linking multi-station OSATS scores to performance in real clinical settings. For program evaluation, the OSATS validity argument was supported by reasonable generalization and extrapolation evidence. There was a complete lack of evidence regarding implications and decisions based on OSATS scores. In general, validity evidence supported the use of the OSATS for formative feedback. Research to provide support for decisions based on OSATS scores is required if the OSATS is to be used for higher-stakes decisions and program evaluation.


Subjects
Clinical Competence, Educational Measurement/standards, Health Personnel/education, Humans, Observer Variation, Reproducibility of Results
16.
Med Educ ; 49(2): 161-73, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25626747

ABSTRACT

CONTEXT: The relative advantages and disadvantages of checklists and global rating scales (GRSs) have long been debated. To compare the merits of these scale types, we conducted a systematic review of the validity evidence for checklists and GRSs in the context of simulation-based assessment of health professionals. METHODS: We conducted a systematic review of multiple databases including MEDLINE, EMBASE and Scopus to February 2013. We selected studies that used both a GRS and checklist in the simulation-based assessment of health professionals. Reviewers working in duplicate evaluated five domains of validity evidence, including correlation between scales and reliability. We collected information about raters, instrument characteristics, assessment context, and task. We pooled reliability and correlation coefficients using random-effects meta-analysis. RESULTS: We found 45 studies that used a checklist and GRS in simulation-based assessment. All studies included physicians or physicians in training; one study also included nurse anaesthetists. Topics of assessment included open and laparoscopic surgery (n = 22), endoscopy (n = 8), resuscitation (n = 7) and anaesthesiology (n = 4). The pooled GRS-checklist correlation was 0.76 (95% confidence interval [CI] 0.69-0.81, n = 16 studies). Inter-rater reliability was similar between scales (GRS 0.78, 95% CI 0.71-0.83, n = 23; checklist 0.81, 95% CI 0.75-0.85, n = 21), whereas GRS inter-item reliabilities (0.92, 95% CI 0.84-0.95, n = 6) and inter-station reliabilities (0.80, 95% CI 0.73-0.85, n = 10) were higher than those for checklists (0.66, 95% CI 0-0.84, n = 4 and 0.69, 95% CI 0.56-0.77, n = 10, respectively). Content evidence for GRSs usually referenced previously reported instruments (n = 33), whereas content evidence for checklists usually described expert consensus (n = 26). Checklists and GRSs usually had similar evidence for relations to other variables. CONCLUSIONS: Checklist inter-rater reliability and trainee discrimination were more favourable than suggested in earlier work, but each task requires a separate checklist. Compared with the checklist, the GRS has higher average inter-item and inter-station reliability, can be used across multiple tasks, and may better capture nuanced elements of expertise.
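The pooled coefficients above come from random-effects meta-analysis. One common approach for pooling correlations, sketched below with made-up study data (this is not the authors' analysis), is to convert each correlation to Fisher's z, combine the estimates with a DerSimonian-Laird random-effects model, and back-transform the result:

```python
# Sketch: random-effects (DerSimonian-Laird) pooling of correlation coefficients
# via Fisher's z transform. Study correlations and sample sizes are made up.
import math

studies = [(0.72, 40), (0.81, 25), (0.68, 60), (0.79, 33)]  # (r, n), hypothetical

# Fisher z transform of each correlation; variance of z is 1/(n - 3)
z = [0.5 * math.log((1 + r) / (1 - r)) for r, _ in studies]
v = [1.0 / (n - 3) for _, n in studies]
w = [1.0 / vi for vi in v]  # fixed-effect (inverse-variance) weights

# DerSimonian-Laird estimate of between-study variance tau^2
z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights, pooled z, and back-transform to a correlation
w_re = [1.0 / (vi + tau2) for vi in v]
z_pooled = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
r_pooled = math.tanh(z_pooled)
print(f"Pooled correlation: {r_pooled:.2f}")
```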


Subjects
Checklist, Computer Simulation, Reproducibility of Results, Checklist/standards, Clinical Competence, Health Status Indicators, Humans
18.
Med Teach ; 36(11): 965-72, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25072533

ABSTRACT

BACKGROUND: The benefits of online learning come at a price. How can we optimize the overall value? AIMS: Critically appraise the value of online learning. METHODS: Narrative review. RESULTS: Several prevalent myths overinflate the value of online learning. These include that online learning is cheap and easy (it is usually more expensive), that it is more efficient (efficiency depends on the instructional design, not the modality), that it will transform education (fundamental learning principles have not changed), and that the Net Generation expects it (there is no evidence of pent-up demand). However, online learning does add real value by enhancing flexibility, control and analytics. Costs may also go down if disruptive innovations (e.g. low-cost, low-tech, but instructionally sound "good enough" online learning) supplant technically superior but more expensive online learning products. Cost-lowering strategies include focusing on core principles of learning rather than technologies, using easy-to-learn authoring tools, repurposing content (organizing and sequencing existing resources rather than creating new content) and using course templates. CONCLUSIONS: Online learning represents just one tool in an educator's toolbox, as does the MRI for clinicians. We need to use the right tool(s) for the right learner at the right dose, time and route.


Subjects
Computer-Assisted Instruction/economics, Computer-Assisted Instruction/methods, Medical Education/economics, Medical Education/methods, Internet, Computer Simulation, Organizational Efficiency, Humans, Learning, Magnetic Resonance Imaging
19.
Adv Health Sci Educ Theory Pract ; 19(2): 233-50, 2014 May.
Article in English | MEDLINE | ID: mdl-23636643

ABSTRACT

Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types of data that contribute to each evidence source. We sought to enumerate the validity evidence sources and supporting data elements for assessments using technology-enhanced simulation. We conducted a systematic literature search including MEDLINE, ERIC, and Scopus through May 2011. We included original research that evaluated the validity of simulation-based assessment scores using two or more evidence sources. Working in duplicate, we abstracted information on the prevalence of each evidence source and the underlying data elements. Among 217 eligible studies only six (3 %) referenced the five-source framework, and 51 (24 %) made no reference to any validity framework. The most common evidence sources and data elements were: relations with other variables (94 % of studies; reported most often as variation in simulator scores across training levels), internal structure (76 %; supported by reliability data or item analysis), and content (63 %; reported as expert panels or modification of existing instruments). Evidence of response process and consequences were each present in <10 % of studies. We conclude that relations with training level appear to be overrepresented in this field, while evidence of consequences and response process are infrequently reported. Validation science will be improved as educators use established frameworks to collect and interpret evidence from the full spectrum of possible sources and elements.


Subjects
Medical Education/methods, Educational Measurement/methods, User-Computer Interface, Medical Education/standards, Educational Measurement/standards, Educational Measurement/statistics & numerical data, Humans, Prevalence, Reproducibility of Results
20.
Surgery ; 153(2): 160-76, 2013 Feb.
Article in English | MEDLINE | ID: mdl-22884087

ABSTRACT

BACKGROUND: The costs involved with technology-enhanced simulation remain unknown. Appraising the value of simulation-based medical education (SBME) requires complete accounting and reporting of cost. We sought to summarize the quantity and quality of studies that contain an economic analysis of SBME for the training of health professions learners. METHODS: We performed a systematic search of MEDLINE, EMBASE, CINAHL, ERIC, PsycINFO, Scopus, key journals, and previous review bibliographies through May 2011. Articles reporting original research in any language evaluating the cost of simulation, in comparison with nonsimulation instruction or another simulation intervention, for training practicing and student physicians, nurses, and other health professionals were selected. Reviewers working in duplicate evaluated study quality and abstracted information on learners, instructional design, cost elements, and outcomes. RESULTS: From a pool of 10,903 articles we identified 967 comparative studies. Of these, 59 studies (6.1%) reported any cost elements and 15 (1.6%) provided information on cost compared with another instructional approach. We identified 11 cost components reported, most often the cost of the simulator (n = 42 studies; 71%) and training materials (n = 21; 36%). Ten potential cost components were never reported. The median number of cost components reported per study was 2 (range, 1-9). Only 12 studies (20%) reported cost in the Results section; most reported it in the Discussion (n = 34; 58%). CONCLUSION: Cost reporting in SBME research is infrequent and incomplete. We propose a comprehensive model for accounting and reporting costs in SBME.
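The proposed accounting model rests on itemizing cost components rather than reporting a single total. A minimal sketch of such an itemized record follows; only the simulator and training materials are named in the abstract, so the remaining components and all dollar values are illustrative:

```python
# Sketch of itemized cost accounting for a simulation-based education (SBME)
# intervention. Only "simulator" and "training materials" are named in the
# abstract; the other components and all dollar values are illustrative.
from dataclasses import dataclass

@dataclass
class CostItem:
    component: str
    quantity: float
    unit_price: float  # USD

    @property
    def subtotal(self) -> float:
        return self.quantity * self.unit_price

budget = [
    CostItem("simulator (amortized per course)", 1, 5000.0),
    CostItem("training materials (per learner)", 30, 15.0),
    CostItem("faculty time (hours)", 12, 150.0),
    CostItem("technician time (hours)", 8, 40.0),
]

for item in budget:
    print(f"{item.component:<40} ${item.subtotal:>10,.2f}")
print(f"{'TOTAL':<40} ${sum(i.subtotal for i in budget):>10,.2f}")
```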


Subjects
Computer Simulation/economics, Medical Education/economics, Research/economics, Cost-Benefit Analysis, Humans, Economic Models, Teaching/economics