Results 1 - 20 of 44
1.
Acad Med; 99(2): 192-197, 2024 Feb 01.
Article in English | MEDLINE | ID: mdl-37934828

ABSTRACT

PURPOSE: In late 2022 and early 2023, reports that ChatGPT could pass the United States Medical Licensing Examination (USMLE) generated considerable excitement, and media coverage suggested that ChatGPT has credible medical knowledge. This report analyzes the extent to which an artificial intelligence (AI) agent's performance on these sample items can generalize to performance on an actual USMLE examination; an illustration is given using ChatGPT. METHOD: As with earlier investigations, analyses were based on publicly available USMLE sample items. Each item was submitted to ChatGPT (version 3.5) 3 times to evaluate stability. Responses were scored following rules that match operational practice, and a preliminary analysis explored the characteristics of items that ChatGPT answered correctly. The study was conducted between February and March 2023. RESULTS: For the full sample of items, ChatGPT scored above 60% correct except for one replication for Step 3. Response success varied across replications for 76 items (20%). There was a modest correspondence with item difficulty: ChatGPT was more likely to respond correctly to items that examinees found easier. ChatGPT performed significantly worse (P < .001) on items relating to practice-based learning. CONCLUSIONS: Achieving 60% accuracy is only an approximate indicator of meeting the passing standard, and statistical adjustments are required for a direct comparison. Hence, this assessment can only suggest consistency with the passing standards for Steps 1 and 2 Clinical Knowledge, with further limitations in extrapolating this inference to Step 3. These limitations stem from variation in item difficulty and from the exclusion of the simulation component of Step 3 from the evaluation, limitations that would apply to any AI system evaluated on the Step 3 sample items. It is also important to note that responses from large language models vary notably across repeated inquiries, underscoring the need for expert validation to ensure their utility as a learning tool.
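For readers who want to reproduce this kind of analysis, a minimal sketch of the replication-stability tabulation the abstract describes follows; the table layout and column names (item_id, replication, correct) are our assumptions, not the paper's.

```python
# Minimal sketch of the replication-stability tabulation described above.
# Assumes a hypothetical results table with one row per (item, replication):
# columns item_id, replication (1-3), and correct (0/1).
import pandas as pd

results = pd.DataFrame({
    "item_id":     [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "replication": [1, 2, 3, 1, 2, 3, 1, 2, 3],
    "correct":     [1, 1, 1, 1, 0, 1, 0, 0, 0],
})

# Percent correct for each replication of the full item set.
pct_by_replication = results.groupby("replication")["correct"].mean() * 100

# Items whose success varied across replications (answered correctly in
# some replications but not others), as in the 76 items (20%) reported.
per_item = results.groupby("item_id")["correct"].agg(["min", "max"])
unstable_items = per_item[per_item["min"] != per_item["max"]].index.tolist()

print(pct_by_replication)
print("Items with unstable responses:", unstable_items)
```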


Subject(s)
Artificial Intelligence, Knowledge, Humans, Computer Simulation, Language, Learning
2.
Article in English | MEDLINE | ID: mdl-37665413

ABSTRACT

Recent advances in automated scoring technology have made it practical to replace multiple-choice questions (MCQs) with short-answer questions (SAQs) in large-scale, high-stakes assessments. However, most previous research comparing these formats has used small examinee samples testing under low-stakes conditions. Additionally, previous studies have not reported the time required to respond to the two item types. This study compares the difficulty, discrimination, and time requirements of the two formats when examinees responded as part of a large-scale, high-stakes assessment. Seventy-one MCQs were converted to SAQs. These matched items were randomly assigned to examinees completing a high-stakes assessment of internal medicine; no examinee saw the same item in both formats. Items administered in the SAQ format were generally more difficult than items in the MCQ format. The discrimination index for SAQs was modestly higher than that for MCQs, and response times were substantially longer for SAQs. These results support the interchangeability of MCQs and SAQs. When it is important that the examinee generate the response rather than select it, SAQs may be preferred. The results relating to difficulty and discrimination reported in this paper are consistent with those of previous studies. The results on the relative time requirements suggest that, with a fixed testing time, fewer SAQs can be administered; this limitation may more than offset the higher discrimination that has been reported for SAQs. We additionally examine the extent to which the increased difficulty may directly impact the discrimination of SAQs.
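The difficulty and discrimination indices compared here are classical item statistics; below is a brief sketch of how such indices are typically computed. The data are illustrative, not the study's.

```python
# Sketch of classical item statistics: difficulty (proportion correct) and
# discrimination (corrected item-total point-biserial correlation).
# scores is an examinee-by-item 0/1 matrix of illustrative random data.
import numpy as np

rng = np.random.default_rng(0)
scores = (rng.random((500, 20)) > 0.35).astype(float)  # 500 examinees, 20 items

def item_stats(scores: np.ndarray):
    difficulty = scores.mean(axis=0)  # p-value: higher = easier item
    total = scores.sum(axis=1)
    discrimination = np.array([
        # Correlate each item with the total score excluding that item.
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])
    return difficulty, discrimination

p, r = item_stats(scores)
print("mean difficulty:", p.mean(), "mean discrimination:", r.mean())
```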

3.
Adv Health Sci Educ Theory Pract; 27(5): 1401-1422, 2022 Dec.
Article in English | MEDLINE | ID: mdl-35511357

ABSTRACT

Understanding the response process used by test takers when responding to multiple-choice questions (MCQs) is particularly important in evaluating the validity of score interpretations. Previous authors have recommended eye-tracking technology as a useful approach for collecting data on the processes test takers use to respond to test questions. This study proposes a new method for evaluating alternative score interpretations by combining eye-tracking data with machine learning. We collected eye-tracking data from 26 students responding to clinical MCQs and provided 119 eye-tracking features as input to a machine-learning model that classifies correct and incorrect responses. The predictive power of various combinations of features within the model was evaluated to understand how different feature interactions contribute to the predictions. The emerging eye-movement patterns indicate that incorrect responses are associated with working from the options to the stem. By contrast, correct responses are associated with working from the stem to the options, spending more time reading the problem carefully, and selecting a response option more decisively. The results suggest that the behaviours associated with correct responses align with the real-world model used for score interpretation, while those associated with incorrect responses do not. To the best of our knowledge, this is the first study to perform data-driven, machine-learning experiments with eye-tracking data for the purpose of evaluating the validity of score interpretations.
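A hedged sketch of the modeling setup this abstract describes: a classifier trained on eye-tracking features to separate correct from incorrect responses, with feature importances used to probe which behaviours drive the prediction. The feature matrix, model choice, and feature names are illustrative assumptions.

```python
# Sketch: classify correct vs. incorrect responses from eye-tracking
# features, then rank features by importance. Data are random placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((260, 119))        # 119 eye-tracking features per response
y = rng.integers(0, 2, 260)       # 1 = correct response, 0 = incorrect

clf = RandomForestClassifier(n_estimators=200, random_state=0)
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print("cross-validated accuracy:", acc.mean())

# Fit on all data to rank features, e.g., options-to-stem transition counts
# vs. stem reading time (feature semantics assumed, not from the paper).
clf.fit(X, y)
top = np.argsort(clf.feature_importances_)[::-1][:10]
print("top feature indices:", top)
```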


Subject(s)
Eye Movements, Eye-Tracking Technology, Humans, Machine Learning, Students
4.
Eval Health Prof; 45(4): 327-340, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34753326

ABSTRACT

One of the most challenging aspects of writing multiple-choice test questions is identifying plausible incorrect response options, i.e., distractors. To help with this task, a procedure is introduced that can mine existing item banks for potential distractors by considering the similarities between a new item's stem and answer and the stems and response options of items in the bank. This approach uses natural language processing to measure similarity and requires a substantial pool of items for constructing the generating model. The procedure is demonstrated with data from the United States Medical Licensing Examination (USMLE®). For about half the items in the study, at least one of the top three system-produced candidates matched a human-produced distractor exactly; for about one quarter of the items, two of the top three candidates matched human-produced distractors. In a follow-up study, a sample of system-produced candidates was shown to 10 experienced item writers. Overall, participants judged that about 81% of the candidates were on topic and that 56% would help human item writers with the task of writing distractors.
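A minimal sketch of the bank-mining idea follows, using TF-IDF cosine similarity as a stand-in for the paper's (unspecified) similarity model; all items shown are invented.

```python
# Sketch: embed the new item's stem and every bank item's stem, then harvest
# candidate distractors from the most similar bank items. TF-IDF is a
# stand-in similarity model; items and options are invented examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

bank = [
    {"stem": "A 50-year-old man with crushing chest pain ...",
     "options": ["Pulmonary embolism", "Pericarditis", "Aortic dissection"]},
    {"stem": "A 62-year-old woman with sudden dyspnea ...",
     "options": ["Pneumothorax", "Pulmonary embolism", "Asthma"]},
]
new_item = "A 58-year-old man with acute chest pain radiating to the back ..."

vec = TfidfVectorizer().fit([new_item] + [b["stem"] for b in bank])
sims = cosine_similarity(vec.transform([new_item]),
                         vec.transform([b["stem"] for b in bank]))[0]

# Pool options from the most similar bank items as candidate distractors.
ranked = sorted(zip(sims, bank), key=lambda t: t[0], reverse=True)
candidates = [opt for sim, item in ranked[:3] for opt in item["options"]]
print(candidates)
```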


Subject(s)
Educational Measurement, Natural Language Processing, Humans, United States, Educational Measurement/methods
5.
BMC Med Educ; 19(1): 389, 2019 Oct 23.
Article in English | MEDLINE | ID: mdl-31647012

ABSTRACT

BACKGROUND: Examinees often believe that changing answers will lower their scores; however, empirical studies suggest that allowing examinees to change responses may improve their performance in classroom assessments. To date, no studies have examined answer changes during large-scale professional credentialing or licensing examinations. METHODS: In this study, we expand the research on answer changes by analyzing responses from 27,830 examinees who completed the Step 2 Clinical Knowledge (CK) examination between August 2015 and August 2016. RESULTS: Although 68% of examinees changed at least one item, the overall average number of changes was small. Among the examinees who changed answers, approximately 45% increased their scores and approximately 28% decreased their scores. On average, examinees spent the least time on changes from wrong to right, and they were more likely to change answers from wrong to right than from right to wrong. CONCLUSIONS: Consistent with previous studies, these findings support the beneficial effects of answer changing in high-stakes medical examinations and suggest that examinees who are overly cautious about changing answers may put themselves at a disadvantage.
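A small sketch of the answer-change tabulation reported here, assuming a hypothetical response-change log; the column names are ours.

```python
# Sketch: classify each revised response as wrong-to-right, right-to-wrong,
# or no net change, then compare time spent and relative frequencies.
import pandas as pd

changes = pd.DataFrame({
    "examinee":      [1, 1, 2, 3, 3],
    "first_correct": [0, 1, 0, 0, 1],
    "final_correct": [1, 0, 1, 0, 1],
    "seconds_spent": [12, 40, 15, 30, 22],
})

def classify(row):
    if row.first_correct == 0 and row.final_correct == 1:
        return "wrong_to_right"
    if row.first_correct == 1 and row.final_correct == 0:
        return "right_to_wrong"
    return "no_net_change"

changes["type"] = changes.apply(classify, axis=1)
print(changes.groupby("type")["seconds_spent"].mean())
print(changes["type"].value_counts(normalize=True))
```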


Subject(s)
Clinical Competence/standards, Educational Measurement/statistics & numerical data, Licensure, Medical/standards, Students, Medical/statistics & numerical data, Health Knowledge, Attitudes, Practice, Humans, Licensure, Medical/trends, Task Performance and Analysis
6.
Med Teach; 40(11): 1143-1150, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29688108

ABSTRACT

BACKGROUND: Increased recognition of the importance of competency-based education and assessment has led to the need for practical and reliable methods of assessing relevant skills in the workplace. METHODS: A novel milestone-based workplace assessment system was implemented in 15 pediatrics residency programs. The system provided (1) web-based multisource feedback (MSF) and structured clinical observation (SCO) instruments that could be completed on any computer or mobile device, and (2) monthly feedback reports that included competency-level scores and recommendations for improvement. RESULTS: For the final instruments, an average of 5.0 MSF and 3.7 SCO assessment instruments were completed for each of 292 interns; instruments required an average of 4-8 minutes to complete. Generalizability coefficients above 0.80 were attainable with six MSF observations. Users indicated that the new system added value to their existing assessment program; the need to complete local assessments in addition to the new assessments was identified as a burden of the overall process. CONCLUSIONS: Outcomes, including high participation rates and higher reliability than has traditionally been found with workplace-based assessment, provide evidence for the validity of scores resulting from this novel competency-based assessment system. The development of this assessment model is generalizable to other specialties.
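The finding that generalizability coefficients above 0.80 were attainable with six MSF observations follows the standard projection for the mean of n observations; here is a sketch with illustrative variance components (the actual components are not given in the abstract).

```python
# Sketch of the generalizability projection: with a person variance
# component and a residual component, the G coefficient for the mean of
# n observations is var_p / (var_p + var_res / n). Components illustrative.
var_p, var_res = 0.40, 0.60

def g_coefficient(n_obs: int) -> float:
    return var_p / (var_p + var_res / n_obs)

for n in range(1, 9):
    print(n, round(g_coefficient(n), 3))
# With these components, six observations gives 0.40 / (0.40 + 0.10) = 0.80.
```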


Subject(s)
Competency-Based Education/standards, Educational Measurement/methods, Formative Feedback, Internship and Residency/organization & administration, Workplace/standards, Clinical Competence/standards, Clinical Decision-Making, Educational Measurement/standards, Humans, Internet, Internship and Residency/standards, Pediatrics/education, Reproducibility of Results
7.
Acad Med; 92(12): 1780-1785, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28562454

ABSTRACT

PURPOSE: Physicians must pass the United States Medical Licensing Examination (USMLE) to obtain an unrestricted license to practice allopathic medicine in the United States. Little is known, however, about how well USMLE performance relates to physician behavior in practice, particularly conduct inconsistent with safe, effective patient care. The authors examined the extent to which USMLE scores relate to the odds of receiving a disciplinary action from a U.S. state medical board. METHOD: Controlling for multiple factors, the authors used non-nested multilevel logistic regression analyses to estimate the relationships between scores and receiving an action. The sample included 164,725 physicians who graduated from U.S. MD-granting medical schools between 1994 and 2006. RESULTS: Physicians had a mean Step 1 score of 214 (standard deviation [SD] = 21) and a mean Step 2 Clinical Knowledge (CK) score of 213 (SD = 23). Of the physicians, 2,205 (1.3%) received at least one action. Physicians with higher Step 2 CK scores had lower odds of receiving an action: a 1-SD increase in Step 2 CK score corresponded to roughly 25% lower odds of disciplinary action (odds ratio = 0.75; 95% CI = 0.70-0.80). After accounting for Step 2 CK scores, Step 1 scores were unrelated to the odds of receiving an action. CONCLUSIONS: USMLE Step 2 CK scores provide useful information about the odds that a physician will receive an official sanction for problematic practice behavior. These results provide validity evidence supporting the current interpretation and use of Step 2 CK scores.
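The odds-ratio finding can be made concrete with a little arithmetic; the multiplicative compounding below is standard logistic-regression algebra, not an additional result from the paper.

```python
# Worked illustration of the reported effect: an odds ratio of 0.75 per
# 1-SD increase in Step 2 CK score compounds multiplicatively, so a 2-SD
# increase corresponds to odds of about 0.75 ** 2 = 0.56 of the baseline.
odds_ratio_per_sd = 0.75

for delta_sd in (1, 2, 3):
    or_total = odds_ratio_per_sd ** delta_sd
    print(f"{delta_sd} SD higher -> odds x {or_total:.2f} "
          f"({(1 - or_total) * 100:.0f}% lower odds of an action)")
```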


Subject(s)
Clinical Competence, Educational Measurement, Licensure, Medical, Educational Measurement/methods, Humans, Reproducibility of Results, United States
8.
Acad Med; 91(1): 133-139, 2016 Jan.
Article in English | MEDLINE | ID: mdl-26397703

ABSTRACT

PURPOSE: To add to the small body of validity research addressing whether scores from performance assessments of clinical skills are related to performance in supervised patient settings, the authors examined relationships between United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) data gathering and data interpretation scores and subsequent performance in history taking and physical examination in internal medicine residency training. METHOD: The sample included 6,306 examinees from 238 internal medicine residency programs who completed Step 2 CS for the first time in 2005 and whose performance ratings from their first year of residency training were available. Hierarchical linear modeling techniques were used to examine the relationships among Step 2 CS data gathering and data interpretation scores and history-taking and physical examination ratings. RESULTS: Step 2 CS data interpretation scores were positively related to both history-taking and physical examination ratings. Step 2 CS data gathering scores were not related to either history-taking or physical examination ratings after other USMLE scores were taken into account. CONCLUSIONS: Step 2 CS data interpretation scores provide useful information for predicting subsequent performance in history taking and physical examination in supervised practice and thus provide validity evidence for their intended use as an indication of readiness to enter supervised practice. The results show that there is less evidence to support the usefulness of Step 2 CS data gathering scores. This study provides important information for practitioners interested in Step 2 CS specifically or in performance assessments of medical students' clinical skills more generally.


Subject(s)
Clinical Competence, Educational Measurement, Internal Medicine/education, Internship and Residency, Medical History Taking, Physical Examination, Canada, Humans, Licensure, Medical, Linear Models, United States
9.
Acad Med; 88(5): 693-698, 2013 May.
Article in English | MEDLINE | ID: mdl-23524927

ABSTRACT

PURPOSE: This study extends available evidence about the relationship between scores on the Step 2 Clinical Skills (CS) component of the United States Medical Licensing Examination and subsequent performance in residency. It focuses on the relationship between Step 2 CS communication and interpersonal skills scores and communication skills ratings that residency directors assign to residents in their first postgraduate year of internal medicine training. It represents the first large-scale evaluation of the extent to which Step 2 CS communication and interpersonal skills scores can be extrapolated to examinee performance in supervised practice. METHOD: Hierarchical linear modeling techniques were used to examine the relationships among examinee characteristics, residency program characteristics, and residency-director-provided ratings. The sample comprised 6,306 examinees from 238 internal medicine residency programs who completed Step 2 CS for the first time in 2005 and received ratings during their first year of internal medicine residency training. RESULTS: Although the relationship is modest, Step 2 CS communication and interpersonal skills scores predict communication skills ratings for first-year internal medicine residents after accounting for other factors. CONCLUSIONS: The results of this study make a reasonable case that Step 2 CS communication and interpersonal skills scores provide useful information for predicting the level of communication skill that examinees will display in their first year of internal medicine residency training. This finding demonstrates some level of extrapolation from the testing context to behavior in supervised practice, thus providing validity-related evidence for using Step 2 CS communication and interpersonal skills scores in high-stakes decisions.
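A hedged sketch of a two-level model in the spirit of the hierarchical linear modeling used in this and the preceding study: residents nested in residency programs, with a random program intercept. Variable names and data are invented.

```python
# Sketch: random intercept for residency program, fixed slope for the
# Step 2 CS score predicting a residency-director rating. Data invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "rating":   [6.1, 5.4, 7.0, 6.6, 5.9, 6.8, 7.2, 5.5, 6.0, 6.4, 5.8, 6.9],
    "cs_score": [210, 195, 230, 220, 200, 225, 235, 190, 205, 215, 198, 228],
    "program":  ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
})

model = smf.mixedlm("rating ~ cs_score", df, groups=df["program"])
fit = model.fit()
print(fit.summary())
```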


Subject(s)
Clinical Competence/statistics & numerical data, Communication, Internal Medicine/education, Internship and Residency, Interpersonal Relations, Licensure, Medical, Analysis of Variance, Female, Humans, Linear Models, Male, United States
10.
Adv Health Sci Educ Theory Pract; 17(2): 165-181, 2012 May.
Article in English | MEDLINE | ID: mdl-20094911

ABSTRACT

During the last decade, interest in assessing professionalism in medical education has increased exponentially and has led to the development of many new assessment tools. Efforts to validate the scores produced by tools designed to assess professionalism have lagged well behind the development of these tools. This paper provides a structured framework for collecting evidence to support the validity of assessments of professionalism. The paper begins with a short history of the concept of validity in the context of psychological assessment. It then describes Michael Kane's approach to validity as a structured argument. The majority of the paper then focuses on how Kane's framework can be applied to assessments of professionalism. Examples are provided from the literature, and recommendations for future investigation are made in areas where the literature is deficient.


Subject(s)
Education, Medical/methods, Mental Disorders/diagnosis, Professional Competence, Professional Role, Psychological Tests, Reproducibility of Results, Humans
12.
Acad Med; 85(10 Suppl): S93-S97, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20881714

ABSTRACT

PURPOSE: This research examined the credibility of the cut scores used to make pass/fail decisions on United States Medical Licensing Examination (USMLE) Step 1, Step 2 Clinical Knowledge, and Step 3. METHOD: Approximately 15,000 members of nine constituency groups were asked their opinions about (1) current initial and ultimate fail rates and (2) the highest acceptable, lowest acceptable, and optimal initial and ultimate fail rates. RESULTS: Initial fail rates were generally viewed as appropriate; more variability was associated with ultimate fail rates. Actual fail rates for each examination across recent years fell within the range that respondents considered acceptable. CONCLUSIONS: Results provide important evidence to support the appropriateness of the cut scores used to make classification decisions for USMLE examinations. This evidence is viewed as part of the overall validity argument for decisions based on USMLE scores.


Subject(s)
Clinical Medicine/education, Educational Measurement/statistics & numerical data, Licensure, Medical, Education, Medical, Undergraduate, Educational Status, Humans, Surveys and Questionnaires, United States
13.
Adv Health Sci Educ Theory Pract; 15(5): 717-733, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20509047

ABSTRACT

In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for low reliability, in particular low reliability resulting from task specificity. Because reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring the stratification in the reliability analysis underestimates "parallel forms" reliability and overestimates the person-by-task component. This research explores the effect of representing, or failing to represent, the stratification in the estimation of reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that proper specification of the analytic design is essential to obtaining accurate information about both the generalizability of the assessment and the standard error of measurement. Illustrative D studies further show the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.
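The core distinction can be written out explicitly; the notation below is ours, for a person x (task : stratum) design, and it is a textbook generalizability-theory result rather than the paper's own derivation.

```latex
% Person x (task : stratum) design: n_s strata with n_t tasks each.
% Treating strata as randomly sampled:
E\rho^2_{\text{random}}
  = \frac{\sigma^2_p}
         {\sigma^2_p + \sigma^2_{ps}/n_s + \sigma^2_{pt:s}/(n_s n_t)}
% Treating strata as fixed (the table-of-specifications case), the
% person-by-stratum interaction joins the universe-score variance:
E\rho^2_{\text{fixed}}
  = \frac{\sigma^2_p + \sigma^2_{ps}/n_s}
         {\sigma^2_p + \sigma^2_{ps}/n_s + \sigma^2_{pt:s}/(n_s n_t)}
% Ignoring a fixed stratification therefore understates reliability and
% overstates the person-by-task error component, as the abstract reports.
```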


Subject(s)
Clinical Competence/statistics & numerical data, Data Interpretation, Statistical, Multivariate Analysis, Analysis of Variance, Computer Simulation, Humans, Reproducibility of Results, Statistics as Topic, United States
14.
Adv Health Sci Educ Theory Pract; 15(4): 587-600, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20127509

ABSTRACT

The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.
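A minimal sketch of the rating-adjustment step, assuming a simple fixed-rater OLS model; the covariates actually used in the study are not specified in the abstract.

```python
# Sketch: regress standardized-patient ratings on rater indicators by
# ordinary least squares, then remove estimated rater effects so adjusted
# scores share a common rater baseline. Names and data are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "rating": [6.5, 5.0, 7.5, 6.0, 8.0, 5.5],
    "rater":  ["r1", "r1", "r2", "r2", "r3", "r3"],
})

# Fixed effects for raters; the intercept absorbs the reference rater.
fit = smf.ols("rating ~ C(rater)", data=df).fit()

# Adjusted score = grand mean + residual (examinee effect net of rater).
df["adjusted"] = df["rating"].mean() + fit.resid
print(df)
```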


Subject(s)
Communication, Data Interpretation, Statistical, Physician-Patient Relations, Physicians/psychology, Analysis of Variance, Education, Medical, Educational Measurement, Educational Status, Humans, Least-Squares Analysis, Linear Models, Regression Analysis
15.
Acad Med; 84(10 Suppl): S79-S82, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19907393

ABSTRACT

BACKGROUND: The 2000 Institute of Medicine report on patient safety brought renewed attention to the issue of preventable medical errors, and specialty boards and the National Board of Medical Examiners were subsequently encouraged to play a role in setting expectations around safety education. This paper examines potentially dangerous actions taken by examinees during the portion of the United States Medical Licensing Examination Step 3 that is particularly well suited to evaluating lapses in physician decision making, the Computer-based Case Simulation (CCS). METHOD: Descriptive statistics and a general linear modeling approach were used to analyze dangerous actions ordered by 25,283 examinees who completed CCS for the first time between November 2006 and January 2008. RESULTS: More than 20% of examinees ordered at least one dangerous action with the potential to cause significant patient harm, and the propensity to order dangerous actions appeared to vary across clinical cases. CONCLUSIONS: The CCS format may provide a means of collecting important information about the patient-care situations in which examinees are more likely to commit dangerous actions and about examinees' propensity to order dangerous tests and treatments.


Subject(s)
Clinical Competence, Computer-Assisted Instruction, Educational Measurement, Licensure, Medical, Medical Errors, Risk Assessment, United States
16.
Acad Med; 84(10 Suppl): S83-S85, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19907394

ABSTRACT

BACKGROUND: Previous research has shown that ratings of English proficiency on the United States Medical Licensing Examination Clinical Skills Examination are highly reliable. However, the score distributions for native and nonnative speakers of English are sufficiently different to suggest that reliability should be investigated separately for each group. METHOD: Generalizability theory was used to obtain reliability indices separately for native and nonnative speakers of English (N = 29,084). Conditional standard errors of measurement were also obtained for both groups to evaluate measurement precision for each group at specific score levels. RESULTS: Overall indices of reliability (phi) exceeded 0.90 for both native and nonnative speakers, and both groups were measured with nearly equal precision throughout the score distribution. However, measurement precision decreased at lower levels of proficiency for all examinees. CONCLUSIONS: The results of this and future studies may be helpful in understanding and minimizing sources of measurement error at particular regions of the score distribution.


Subject(s)
Clinical Competence, Educational Measurement, Language, Licensure, Medical, Clinical Competence/statistics & numerical data, Educational Measurement/statistics & numerical data, United States
17.
Acad Med; 84(10 Suppl): S86-S89, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19907395

ABSTRACT

BACKGROUND: In clinical skills assessment, closely related skills are often combined to form a composite score; for example, history-taking and physical examination scores are typically combined. Interestingly, there is relatively little research to support this practice. METHOD: Multivariate generalizability theory was employed to examine the relationship between history-taking and physical examination scores from the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills examination. These two proficiencies are currently combined into a single data-gathering score. RESULTS: The physical examination score is less generalizable than the history-taking score, and there is only a modest to moderate relationship between the two proficiencies. CONCLUSIONS: The decision to combine physical examination and history-taking proficiencies into one composite score, as well as the weighting of the components, should be driven by the intended use of the score. The choice of weights makes a substantial difference in the precision of the resulting composite.
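One way to see why the weighting matters, under the simplifying assumption of uncorrelated error across the two components (the notation is ours, not the paper's):

```latex
% Weights w_h (history taking) and w_p (physical examination);
% universe-score variances \sigma^2_\tau, error variances \sigma^2_\delta.
\sigma^2_\tau(C) = w_h^2\,\sigma^2_\tau(h) + w_p^2\,\sigma^2_\tau(p)
                 + 2\,w_h w_p\,\sigma_\tau(h, p)
\sigma^2_\delta(C) = w_h^2\,\sigma^2_\delta(h) + w_p^2\,\sigma^2_\delta(p)
% Composite generalizability:
E\rho^2(C) = \frac{\sigma^2_\tau(C)}{\sigma^2_\tau(C) + \sigma^2_\delta(C)}
% Shifting weight toward the less generalizable physical-examination score
% raises \sigma^2_\delta(C) relative to \sigma^2_\tau(C), lowering precision.
```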


Subject(s)
Clinical Competence, Educational Measurement, Licensure, Medical, Medical History Taking, Physical Examination, Multivariate Analysis, United States
18.
Acad Med; 84(10 Suppl): S97-S100, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19907399

ABSTRACT

BACKGROUND: Documentation is a subcomponent of the Step 2 Clinical Skills Examination Integrated Clinical Encounter (ICE) component wherein licensed physicians rate examinees on their ability to communicate the findings of the patient encounter, diagnostic impression, and initial patient work-up. The main purpose of this research was to examine the impact of modifications to the scoring rubric and rater training protocol on the psychometric characteristics of the documentation scores. METHOD: Following the modifications, the variance structure of the ICE components was modeled using multivariate generalizability theory. RESULTS: The results confirmed the expectations that true score variance for the documentation subcomponent would increase after adopting a modified training protocol and increased rubric specificity. CONCLUSIONS: In general, results support the commonsense assumption that providing raters with detailed rubrics and comprehensive training will indeed improve measurement outcomes. Although the steps taken here were in the right direction, there remains room for improvement. Efforts are currently under way to further improve both the scoring rubrics and rater training.


Subject(s)
Clinical Competence, Educational Measurement/methods, Educational Measurement/standards, Licensure, Medical, United States
19.
Med Teach; 31(4): 348-361, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19404894

ABSTRACT

Medical professionalism is increasingly recognized as a core competence of medical trainees and practitioners. Although the general and specific domains of professionalism are thoroughly characterized, procedures for assessing them are not well-developed. This article outlines an approach to designing and implementing an assessment program for medical professionalism that begins and ends with asking and answering a series of critical questions about the purpose and nature of the program. The process of exposing an assessment program to a series of interrogatives that comprise an integrated and iterative framework for thinking about the assessment process should lead to continued improvement in the quality and defensibility of that program.


Subject(s)
Evaluation Studies as Topic, Physician's Role, Professional Competence/standards, Humans
20.
Simul Healthc; 4(1): 30-34, 2009.
Article in English | MEDLINE | ID: mdl-19212248

ABSTRACT

To obtain a full and unrestricted license to practice medicine in the United States, students and graduates of MD-granting US medical schools, and of medical schools located outside the United States, must take and pass the United States Medical Licensing Examination (USMLE). The USMLE began as a series of paper-and-pencil examinations in the early 1990s and converted to computer delivery in 1999. With this change to the computerized format came the opportunity to introduce computer-simulated patients, which had been under development at the National Board of Medical Examiners for a number of years. This testing format, called a computer-based case simulation, requires the examinee to manage a simulated patient in simulated time. The examinee can select options for history taking and physical examination; diagnostic studies and treatment are ordered via free-text entry; and the examinee controls the advance of simulated time and the location of the patient in the health care setting. Although the inclusion of this format has brought a number of practical, psychometric, and security challenges, its addition has allowed a significant expansion in ways to assess examinees on their diagnostic decision making, their therapeutic intervention skills, and their ability to develop and implement a reasonable patient management plan.


Subject(s)
Clinical Competence, Computer Simulation, Licensure, Medical, Humans, United States