Results 1 - 20 of 3,448
1.
J Nurses Prof Dev ; 40(4): 184-189, 2024.
Article in English | MEDLINE | ID: mdl-38949971

ABSTRACT

Assessment of initial nursing competency is essential to safe nursing practice, yet it often focuses on psychomotor skill acquisition. A multistate health system created a competency strategy based on a comprehensive conceptualization of competency using the American Nurses Association scope and standards of nursing practice. This approach allows for the broad application of a standard competency assessment tool across diverse nursing specialties and provides a framework for nursing professional development practitioners to implement in their organizations.


Subject(s)
Clinical Competence , Nurse's Role , Humans , Clinical Competence/standards , Staff Development/methods , United States , Educational Measurement/methods , Educational Measurement/standards
2.
Dyslexia ; 30(3): e1777, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38952195

ABSTRACT

This article aims to assist practitioners in understanding dyslexia and other reading difficulties and in assessing students' learning needs. We describe the essential components of language and literacy, universal screening, diagnostic assessments, curriculum-based measurement and eligibility determination. We then introduce four diagnostic assessments as examples, including norm-referenced assessments (i.e. the Comprehensive Test of Phonological Processing, Second Edition and the Woodcock-Johnson IV Tests of Achievement) and criterion-referenced assessments (i.e. the Gallistel-Ellis Test of Coding Skills and the Dynamic Indicators of Basic Early Literacy Skills). Finally, we use a made-up case as a concrete example to illustrate how multiple diagnostic assessments are recorded and how the results can be used to inform intervention and eligibility for special education services.


Subject(s)
Dyslexia , Humans , Dyslexia/diagnosis , Child , Reading , Educational Measurement/standards , Language Tests/standards , Students , Literacy , Education, Special
3.
Article in English | MEDLINE | ID: mdl-38977033

ABSTRACT

PURPOSE: This study aimed to compare and evaluate the efficiency and accuracy of computerized adaptive testing (CAT) under two stopping rules (SEM 0.3 and 0.25) using both real and simulated data from medical examinations in Korea. METHODS: This study employed post-hoc simulation and real data analysis to explore the optimal stopping rule for CAT in medical examinations. The real data were obtained from the responses of 3rd-year medical students during examinations in 2020 at Hallym University College of Medicine. Simulated data were generated in R using parameters estimated from a real item bank. Outcome variables included the number of examinees passing or failing under SEM values of 0.25 and 0.30, the number of items administered, and the correlation between ability estimates. The consistency of the real CAT results was evaluated by examining pass/fail agreement based on a cut score of 0.0. The efficiency of all CAT designs was assessed by comparing the average number of items administered under both stopping rules. RESULTS: Both SEM 0.25 and SEM 0.30 provided a good balance between accuracy and efficiency in CAT. The real data showed minimal differences in pass/fail outcomes between the 2 SEM conditions, with a high correlation (r = 0.99) between ability estimates. The simulation results confirmed these findings, indicating similar average numbers of items administered for real and simulated data. CONCLUSION: The findings suggest that both SEM 0.25 and 0.30 are effective termination criteria in the context of the Rasch model, balancing accuracy and efficiency in CAT.
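The SEM-based stopping rule evaluated above can be illustrated with a small simulation: administer items one at a time, re-estimate ability after each response, and stop once the standard error of measurement falls to the chosen threshold (0.30 or 0.25). The sketch below assumes a dichotomous Rasch model with EAP ability estimation and a simple maximum-information item-selection heuristic; the item bank, function names and thresholds are illustrative and not the study's actual R implementation.

import numpy as np

def rasch_prob(theta, b):
    # Probability of a correct response under the Rasch model
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def eap_estimate(responses, difficulties, grid=np.linspace(-4, 4, 81)):
    # EAP ability estimate and posterior SD (used here as the SEM)
    prior = np.exp(-0.5 * grid**2)                # standard normal prior
    likelihood = np.ones_like(grid)
    for x, b in zip(responses, difficulties):
        p = rasch_prob(grid, b)
        likelihood *= p**x * (1 - p)**(1 - x)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    theta = np.sum(grid * posterior)
    sem = np.sqrt(np.sum((grid - theta)**2 * posterior))
    return theta, sem

def run_cat(true_theta, bank, sem_stop=0.30, max_items=100, rng=None):
    # Administer items until SEM <= sem_stop, the bank is exhausted, or max_items is reached
    rng = rng or np.random.default_rng()
    remaining = list(range(len(bank)))
    responses, used = [], []
    theta, sem = 0.0, float("inf")
    while remaining and len(responses) < max_items and sem > sem_stop:
        j = min(remaining, key=lambda i: abs(bank[i] - theta))   # most informative unused item
        remaining.remove(j)
        x = int(rng.random() < rasch_prob(true_theta, bank[j]))  # simulated response
        responses.append(x)
        used.append(bank[j])
        theta, sem = eap_estimate(responses, used)
    return theta, sem, len(responses)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bank = rng.normal(0, 1, 300)                                 # simulated item difficulties
    for rule in (0.30, 0.25):
        theta, sem, n = run_cat(true_theta=0.5, bank=bank, sem_stop=rule, rng=rng)
        print(f"SEM <= {rule}: theta = {theta:.2f}, SEM = {sem:.2f}, items used = {n}")

Repeating this for many simulated examinees and comparing the number of items used and the pass/fail decisions against a cut score of 0.0 mirrors the efficiency and consistency comparison reported in the study.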


Subject(s)
Educational Measurement , Psychometrics , Students, Medical , Humans , Educational Measurement/methods , Educational Measurement/standards , Republic of Korea , Psychometrics/methods , Computer Simulation , Data Analysis , Education, Medical, Undergraduate/methods , Male , Female
5.
J Med Internet Res ; 26: e60807, 2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39052324

ABSTRACT

BACKGROUND: Over the past 2 years, researchers have used various medical licensing examinations to test whether ChatGPT (OpenAI) possesses accurate medical knowledge. The performance of each version of ChatGPT on medical licensing examinations in different settings has varied markedly, and a comprehensive understanding of this variability is still lacking. OBJECTIVE: In this study, we reviewed all studies on ChatGPT performance in medical licensing examinations up to March 2024. This review aims to contribute to the evolving discourse on artificial intelligence (AI) in medical education by providing a comprehensive analysis of ChatGPT's performance in various settings. The insights gained from this systematic review will guide educators, policymakers, and technical experts to use AI in medical education effectively and judiciously. METHODS: We searched the literature published between January 1, 2022, and March 29, 2024, using query strings in Web of Science, PubMed, and Scopus. Two authors screened the literature according to the inclusion and exclusion criteria, extracted data, and independently assessed study quality using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. We conducted both qualitative and quantitative analyses. RESULTS: A total of 45 studies on the performance of different versions of ChatGPT in medical licensing examinations were included. GPT-4 achieved an overall accuracy rate of 81% (95% CI 78-84; P<.01), significantly surpassing the 58% (95% CI 53-63; P<.01) accuracy rate of GPT-3.5. GPT-4 passed the medical examinations in 26 of 29 cases and outperformed the average scores of medical students in 13 of 17 cases. Translating the examination questions into English improved GPT-3.5's performance but did not affect GPT-4's. GPT-3.5 showed no difference in performance between examinations from English-speaking and non-English-speaking countries (P=.72), but GPT-4 performed significantly better on examinations from English-speaking countries (P=.02). Any type of prompt significantly improved GPT-3.5's (P=.03) and GPT-4's (P<.01) performance. GPT-3.5 performed better on short-text questions than on long-text questions. Question difficulty affected the performance of both GPT-3.5 and GPT-4. In image-based multiple-choice questions (MCQs), ChatGPT's accuracy rate ranged from 13.1% to 100%. ChatGPT performed significantly worse on open-ended questions than on MCQs. CONCLUSIONS: GPT-4 demonstrates considerable potential for future use in medical education. However, due to its insufficient accuracy, inconsistent performance, and the challenges posed by differing medical policies and knowledge across countries, GPT-4 is not yet suitable for use in medical education. TRIAL REGISTRATION: PROSPERO CRD42024506687; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=506687.
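The pooled accuracy estimates quoted above (81% for GPT-4, 58% for GPT-3.5, each with a 95% CI) are the kind of figure produced by meta-analytically pooling per-study proportions. A minimal sketch of one common approach, random-effects pooling on the logit scale with a DerSimonian-Laird estimate of between-study variance, is shown below; the per-study counts are invented and the method is a generic choice rather than the authors' exact procedure.

import numpy as np

def pool_proportions_dl(successes, totals):
    # Random-effects (DerSimonian-Laird) pooling of proportions on the logit scale
    successes = np.asarray(successes, float)
    totals = np.asarray(totals, float)
    p = (successes + 0.5) / (totals + 1.0)                        # continuity-corrected proportions
    y = np.log(p / (1 - p))                                       # logit transform
    v = 1.0 / (successes + 0.5) + 1.0 / (totals - successes + 0.5)
    w = 1.0 / v
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                            # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)                       # between-study variance
    w_star = 1.0 / (v + tau2)
    y_re = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    to_prop = lambda z: 1.0 / (1.0 + np.exp(-z))                  # back-transform to a proportion
    return to_prop(y_re), to_prop(y_re - 1.96 * se), to_prop(y_re + 1.96 * se)

# hypothetical per-study counts of correctly answered licensing-exam items
est, lo, hi = pool_proportions_dl(successes=[150, 230, 95], totals=[180, 300, 120])
print(f"pooled accuracy = {est:.1%} (95% CI {lo:.1%}-{hi:.1%})")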


Subject(s)
Educational Measurement , Licensure, Medical , Humans , Licensure, Medical/standards , Licensure, Medical/statistics & numerical data , Educational Measurement/methods , Educational Measurement/standards , Educational Measurement/statistics & numerical data , Clinical Competence/statistics & numerical data , Clinical Competence/standards , Artificial Intelligence , Education, Medical/standards
6.
J Grad Med Educ ; 16(3): 328-332, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38882433

ABSTRACT

Background Standardized Letters of Evaluation (SLOEs) are an important part of resident selection in many specialties. Often written by a group, such letters may ask writers to rate applicants in different domains. Prior studies have noted inflated ratings; however, the degree to which individual institutions are "doves" (higher rating) or "hawks" (lower rating) is unclear. Objective To characterize institutional SLOE rating distributions to inform readers and developers regarding potential threats to validity from disparate rating practices. Methods Data from emergency medicine (EM) SLOEs between 2016 and 2021 were obtained from a national database. SLOEs from institutions with at least 10 letters per year in all years were included. Ratings on one element of the SLOE, the "global assessment of performance" item (Top 10%, Top Third, Middle Third, and Lower Third), were analyzed numerically and stratified by predefined criteria for grading patterns (Extreme Dove, Dove, Neutral, Hawk, Extreme Hawk) and adherence to established guidelines (Very High, High, Neutral, Low, Very Low). Results Of 40,286 SLOEs, 20,407 met inclusion criteria. Between 35% and 50% of institutions displayed Neutral grading patterns across study years, with most other institutional patterns rated as Dove or Extreme Dove. Adherence to guidelines was mixed, and fewer than half of institutions had Very High or High adherence each year. Most institutions underutilized the Lower Third rating. Conclusions Despite explicit guidelines for the distribution of global assessment ratings in the EM SLOE, there is high variability in institutional rating practices.


Subject(s)
Emergency Medicine , Internship and Residency , Humans , Correspondence as Topic , Personnel Selection/standards , Educational Measurement/methods , Educational Measurement/standards , Clinical Competence/standards
9.
Med Educ Online ; 29(1): 2370617, 2024 Dec 31.
Article in English | MEDLINE | ID: mdl-38934534

ABSTRACT

While the objective structured clinical examination (OSCE) is a worldwide recognized and effective method for assessing the clinical skills of undergraduate medical students, the latest Ottawa conference on the assessment of competences raised vigorous debates regarding the future and innovations of the OSCE. This study aimed to provide a comprehensive view of global research activity on the OSCE over the past decades and to identify clues for its improvement. We performed a bibliometric and scientometric analysis of OSCE papers published up to March 2024, including a description of overall scientific productivity as well as an unsupervised analysis of the main topics and international scientific collaborations. A total of 3,224 items were identified from the Scopus database. There was a sudden spike in publications, especially related to virtual/remote OSCEs, from 2020 to 2024. We identified leading journals and countries in terms of numbers of publications and citations. A term co-occurrence network identified three main clusters corresponding to different research topics in the OSCE: two connected clusters related to OSCE performance and reliability, and a third cluster on students' experience, mental health (anxiety) and perception, with few connections to the previous two. Finally, the United States, the United Kingdom, and Canada were identified as leading countries in terms of scientific publications and collaborations within an international scientific network involving other European countries (the Netherlands, Belgium, Italy) as well as Saudi Arabia and Australia; the analysis also revealed a lack of substantial collaboration with Asian countries. Various avenues for improving OSCE research were identified: i) developing remote OSCEs, with comparative studies between live and remote formats and international recommendations for sharing remote OSCEs between universities and countries; ii) fostering international collaborative studies with the support of key collaborating countries; iii) investigating the relationships between student performance and anxiety.
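The term co-occurrence network and cluster structure described above can be approximated with a short script: count how often indexed terms appear together in the same paper, build a weighted graph, and run a community detection algorithm. The sketch below uses networkx with invented keyword lists; the actual analysis was performed on Scopus records with dedicated bibliometric tooling, so this only illustrates the underlying idea.

from collections import Counter
from itertools import combinations
from networkx.algorithms.community import greedy_modularity_communities
import networkx as nx

# toy keyword lists standing in for indexed OSCE papers
papers = [
    ["osce", "reliability", "checklist"],
    ["osce", "performance", "reliability"],
    ["osce", "anxiety", "student experience"],
    ["osce", "performance", "standard setting"],
    ["anxiety", "student experience", "perception"],
]

cooccurrence = Counter()
for terms in papers:
    for a, b in combinations(sorted(set(terms)), 2):   # each unordered term pair once per paper
        cooccurrence[(a, b)] += 1

graph = nx.Graph()
for (a, b), weight in cooccurrence.items():
    graph.add_edge(a, b, weight=weight)

# modularity-based communities play the role of the thematic clusters in the abstract
for i, cluster in enumerate(greedy_modularity_communities(graph, weight="weight"), start=1):
    print(f"cluster {i}: {sorted(cluster)}")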


Subject(s)
Bibliometrics , Clinical Competence , Education, Medical, Undergraduate , Educational Measurement , Humans , Educational Measurement/methods , Educational Measurement/standards , Education, Medical, Undergraduate/standards , Reproducibility of Results , Students, Medical/psychology , Students, Medical/statistics & numerical data , Biomedical Research/standards
10.
Nurse Educ Pract ; 78: 104012, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38851040

ABSTRACT

AIMS: The study aimed to develop and psychometrically evaluate a measurement scale for identifying and assessing the hidden curriculum in undergraduate nursing education. BACKGROUND: The hidden curriculum is a general term for educational information that exists outside the formal teaching program and mainly affects students' knowledge, emotions, behaviors, beliefs, values and professional ethics. However, a specific instrument to comprehensively define and assess the hidden curriculum in nursing education has not yet been developed in China. DESIGN: A descriptive and explorative study design was used. METHODS: We developed the initial scale through a literature review, focus group discussion, Delphi expert consultation and a pre-survey. From February to April 2023, data were collected from a convenience sample of 512 nursing students enrolled at five medical universities in China to conduct exploratory factor analysis and confirmatory factor analysis for validity testing. In addition, reliability analysis was conducted by calculating Cronbach's alpha coefficients, split-half reliability and test-retest reliability. The nursing students' responses were evaluated using a five-point Likert scale. RESULTS: The Hidden Curriculum Assessment Scale in Nursing Education (HCAS-NE) was formulated, consisting of 4 dimensions and 35 items. Exploratory factor analysis extracted four factors, with a cumulative variance contribution rate of 66.863%, and confirmatory factor analysis indicated that the fit indices of the scale's structural model met the criteria for an ideal level. The Cronbach's α coefficient of the scale was 0.965, the Guttman split-half coefficient was 0.853 and the test-retest reliability was 0.967. CONCLUSION: This study demonstrated that the Hidden Curriculum Assessment Scale in Nursing Education (HCAS-NE) has ideal reliability and validity, providing a valid and reliable tool for identifying and assessing the hidden curriculum in nursing education.
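The internal-consistency statistics reported above (Cronbach's α and a Guttman split-half coefficient) are straightforward to compute from an item-score matrix. Below is a minimal sketch using simulated five-point Likert responses; the data and functions are illustrative assumptions only, not the HCAS-NE data or the authors' code.

import numpy as np

def cronbach_alpha(items):
    # Cronbach's alpha for an (n_respondents x n_items) score matrix
    items = np.asarray(items, float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def guttman_split_half(items):
    # Guttman split-half coefficient for an odd-even split of the items
    items = np.asarray(items, float)
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    total = items.sum(axis=1)
    return 2 * (1 - (odd.var(ddof=1) + even.var(ddof=1)) / total.var(ddof=1))

rng = np.random.default_rng(1)
trait = rng.normal(0, 1, 500)                           # latent trait for 500 simulated students
# 35 Likert-type items loading on the common trait plus noise, clipped to the 1-5 range
scores = np.clip(np.round(3 + trait[:, None] + rng.normal(0, 1, (500, 35))), 1, 5)

print(f"Cronbach's alpha   = {cronbach_alpha(scores):.3f}")
print(f"Guttman split-half = {guttman_split_half(scores):.3f}")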


Subject(s)
Curriculum , Education, Nursing, Baccalaureate , Psychometrics , Students, Nursing , Humans , Psychometrics/instrumentation , Reproducibility of Results , Students, Nursing/psychology , Surveys and Questionnaires , China , Female , Male , Delphi Technique , Focus Groups , Adult , Educational Measurement/methods , Educational Measurement/standards
11.
Nurse Educ Pract ; 78: 104021, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38917560

ABSTRACT

AIM: This paper reflects on the experience of one Scottish university in conducting a face-to-face Objective Structured Clinical Examination (OSCE) for large cohorts of student nurses. It outlines the challenges experienced and the learning gained. Borton's model of reflection frames this work due to its simplicity, ease of application and cyclical nature. BACKGROUND: The theoretical framework for the OSCE is critical thinking, enabling students to apply those skills authentically. OSCEs are designed to transfer classroom knowledge to clinical practice and offer an authentic work-based assessment. DESIGN: Validity and robustness are key considerations in any assessment, and in an OSCE the number of stations that students encounter is important and debated. We initially used a case-study-based OSCE approach over four stations and, following reflection, changed to one long station with four phases. RESULTS: In OSCE examinations, interrater reliability is a necessity and students expect equity of approach. We identified that, despite clear marking criteria, marks were polarised, with students achieving high or low marks and little middle ground. Review of examination papers highlighted that although students' overall performance was good, some had failed in at least one station, suggesting that a four-station approach may skew results. On reflection, we hypothesised that using a single-station, case-study-based, phased approach enabled the examiner to build up a more holistic picture of student knowledge and skills. It also gave the student the opportunity to develop a rapport with the examiner and standardised patient, thereby putting them more at ease. We argue that this approach is holistic, authentic and student-centred. CONCLUSIONS: Our experience highlights that a single-station, four-phase OSCE is preferable, enabling students to integrate all aspects of the assessment and providing a holistic view of clinical skills and knowledge.


Subject(s)
Clinical Competence , Educational Measurement , Students, Nursing , Humans , Scotland , Educational Measurement/methods , Educational Measurement/standards , Students, Nursing/psychology , Clinical Competence/standards , Education, Nursing, Baccalaureate , Reproducibility of Results , Schools, Nursing , Thinking
13.
BMJ Open Qual ; 13(Suppl 2)2024 May 07.
Article in English | MEDLINE | ID: mdl-38719519

ABSTRACT

INTRODUCTION: Safe practice in medicine and dentistry has been a global priority area in which large knowledge gaps are present. Patient safety strategies aim at preventing unintended damage to patients that can be caused by healthcare practitioners. One of the components of patient safety is safe clinical practice. Patient safety efforts will help ensure safe dental practice through early detection and limiting of non-preventable errors. A valid and reliable instrument is required to assess the knowledge of dental students regarding patient safety. OBJECTIVE: To determine the psychometric properties of a written test to assess safe dental practice in undergraduate dental students. MATERIAL AND METHODS: A test comprising 42 one-best-answer multiple-choice questions was administered to 52 final-year students of a private dental college. Items were developed according to National Board of Medical Examiners item-writing guidelines. The content of the test was determined in consultation with dental experts (professors or associate professors), who rated each item for language clarity (A: clear; B: ambiguous) and relevance (1: essential; 2: useful but not necessary; 3: not essential). Ethical approval was obtained from the concerned dental college. Statistical analysis was performed in SPSS V.25, including descriptive analysis, item analysis and Cronbach's alpha. RESULT: The test scores had a reliability (Cronbach's alpha) of 0.722 before and 0.855 after removing 15 items. CONCLUSION: A reliable and valid test was developed that will help assess the knowledge of dental students regarding safe dental practice. This can guide medical educationists in developing or improving a patient safety curriculum to ensure safe dental practice.
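The item analysis mentioned in the methods (which led to 15 items being dropped and alpha rising from 0.722 to 0.855) typically rests on per-item difficulty and discrimination indices. The sketch below computes these for a simulated 52-examinee, 42-item response matrix and flags weakly discriminating items; the data, flagging threshold and function names are illustrative assumptions, not the study's SPSS output.

import numpy as np

def item_analysis(responses):
    # Classical item analysis for a 0/1 response matrix (examinees x items)
    responses = np.asarray(responses, float)
    difficulty = responses.mean(axis=0)                            # proportion answering correctly
    results = []
    for j in range(responses.shape[1]):
        rest = responses.sum(axis=1) - responses[:, j]             # total score excluding item j
        discrimination = np.corrcoef(responses[:, j], rest)[0, 1]  # corrected item-total correlation
        results.append((j, difficulty[j], discrimination))
    return results

rng = np.random.default_rng(2)
ability = rng.normal(0, 1, 52)                                     # 52 examinees, as in the study
difficulty_params = rng.normal(0, 1, 42)                           # 42 items, as in the study
p_correct = 1 / (1 + np.exp(-(ability[:, None] - difficulty_params[None, :])))
responses = (rng.random((52, 42)) < p_correct).astype(int)

for item, diff, disc in item_analysis(responses):
    if disc < 0.20:                                                # a common rule-of-thumb threshold
        print(f"item {item:2d}: difficulty = {diff:.2f}, discrimination = {disc:.2f}  <- review")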


Subject(s)
Educational Measurement , Patient Safety , Psychometrics , Humans , Psychometrics/instrumentation , Psychometrics/methods , Patient Safety/standards , Patient Safety/statistics & numerical data , Surveys and Questionnaires , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Educational Measurement/standards , Reproducibility of Results , Students, Dental/statistics & numerical data , Students, Dental/psychology , Education, Dental/methods , Education, Dental/standards , Male , Female , Clinical Competence/statistics & numerical data , Clinical Competence/standards
14.
Am J Pharm Educ ; 88(7): 100723, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38821189

ABSTRACT

From 2021 to 2023, 7978 graduates of pharmacy programs failed the North American Pharmacist Licensure Examination on the first attempt. Presently, the Accreditation Council for Pharmacy Education monitors programs whose pass rate falls ≥ 2 SDs below the national mean pass rate. In 2023, this criterion would lead to monitoring 7 programs that produced 140 of the 2472 total failures (5.7%). In our view, this is neither equitable nor demonstrative of sufficient accountability. Analysis of failure counts among the 144 programs reported by the National Association of Boards of Pharmacy demonstrates a distribution curve highly skewed to the right. Evaluation of average failure counts across all programs suggests that schools with absolute failure counts ≥ 2 SDs above the average number of failures should also be identified for monitoring, in addition to those falling ≥ 2 SDs below the national mean pass rate. Based on the 2023 data, this additional criterion corresponds to ≥ 35 failures per program. This threshold would prompt monitoring of 18 programs accounting for 36.5% of the total failures. Of the 7 programs that would be monitored under the current Accreditation Council for Pharmacy Education criteria, only 1 would be captured by the ≥ 35 failure method of selection; the remaining 6 contribute 85 total failures to the pool. Thus, if both criteria were applied, ie, ≥ 35 failures and a pass rate ≥ 2 SDs below the national mean, a total of 24 programs (16.6% of the 144 programs) would be monitored, contributing 987 of the total failures (39.9%).
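The two selection rules discussed above can be expressed as a short screening function: flag a program if its pass rate falls at least 2 SDs below the national mean, or if its failure count is at least 2 SDs above the mean failure count (the criterion that works out to ≥ 35 failures per program in the 2023 data). The program names and numbers below are invented for illustration.

import numpy as np

def flag_programs(pass_rates, failure_counts, names):
    # Apply both monitoring criteria and return the flagged programs with the reason(s)
    pass_rates = np.asarray(pass_rates, float)
    failure_counts = np.asarray(failure_counts, float)
    low_pass = pass_rates <= pass_rates.mean() - 2 * pass_rates.std(ddof=1)
    high_fail = failure_counts >= failure_counts.mean() + 2 * failure_counts.std(ddof=1)
    return [(n, bool(lp), bool(hf))
            for n, lp, hf in zip(names, low_pass, high_fail) if lp or hf]

names = [f"program_{i}" for i in range(1, 21)]
rates = [0.95, 0.93, 0.92, 0.92, 0.91, 0.90, 0.90, 0.89, 0.88, 0.88,
         0.87, 0.86, 0.86, 0.85, 0.84, 0.83, 0.82, 0.80, 0.78, 0.45]   # hypothetical pass rates
fails = [3, 4, 5, 5, 6, 6, 7, 8, 8, 9,
         10, 10, 12, 12, 14, 15, 18, 120, 20, 25]                      # hypothetical failure counts

for name, by_rate, by_count in flag_programs(rates, fails, names):
    reasons = [r for r, hit in (("low pass rate", by_rate), ("high failure count", by_count)) if hit]
    print(f"{name}: flagged for {', '.join(reasons)}")

With these toy numbers, one large program is caught only by the failure-count criterion and one small program only by the pass-rate criterion, illustrating the complementarity the commentary argues for.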


Subject(s)
Accreditation , Education, Pharmacy , Educational Measurement , Licensure, Pharmacy , Pharmacists , Humans , Education, Pharmacy/standards , Educational Measurement/standards , Educational Measurement/methods , Educational Measurement/statistics & numerical data , Accreditation/standards , Pharmacists/standards , Pharmacists/statistics & numerical data , Schools, Pharmacy/standards , Schools, Pharmacy/statistics & numerical data , United States , North America , Students, Pharmacy
16.
BMC Anesthesiol ; 24(1): 188, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802780

ABSTRACT

BACKGROUND: Ethiopia made a national licensing examination (NLE) for associate clinician anesthetists a requirement for entry into the practice workforce. However, there is limited empirical evidence on whether the NLE scores of associate clinicians predict the quality of health care they provide in low-income countries. This study aimed to assess the association between anesthetists' NLE scores and three selected quality of patient care indicators. METHODS: A multicenter longitudinal observational study was conducted between January 8 and February 7, 2023, to collect quality of care (QoC) data on surgical patients attended by anesthetists (n = 56) who had taken the Ethiopian anesthetist NLE since 2019. The three QoC indicators were standards for safe anesthesia practice, critical incidents, and patient satisfaction. The medical records of 991 patients were reviewed to determine the standards for safe anesthesia practice and critical incidents. A total of 400 patients responded to the patient satisfaction survey. Multivariable regressions were employed to determine whether the anesthetist NLE score predicted the QoC indicators. RESULTS: The mean percentage of safe anesthesia practice standards met was 69.14%, and the mean satisfaction score was 85.22%. There were 1,120 critical incidents among 911 patients, with three out of five experiencing at least one. After controlling for patient, anesthetist, facility, and clinical care-related confounding variables, the NLE score predicted the occurrence of critical incidents: for every 1-percentage-point increase in the total NLE score, the odds of developing one or more critical incidents decreased by 18% (aOR = 0.82; 95% CI = 0.70-0.96; p = 0.016). No statistically significant associations existed between the other two QoC indicators and NLE scores. CONCLUSION: The NLE score had an inverse relationship with the occurrence of critical incidents, supporting the validity of the examination in assessing graduates' ability to provide safe and effective care. The lack of an association with the other two QoC indicators requires further investigation. Our findings may help improve education quality and the impact of NLEs in Ethiopia and beyond.
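The adjusted odds ratio reported above can be read as exp(beta) from a multivariable logistic model of "any critical incident" on the NLE percentage score plus confounders. The short sketch below only illustrates how such a coefficient translates into odds changes for larger score differences; extrapolating beyond a 1-point change assumes the log-odds relationship stays linear, and every number other than the published aOR is illustrative.

import numpy as np

aor_per_1_point = 0.82               # adjusted odds ratio per 1-percentage-point NLE increase (reported)
beta = np.log(aor_per_1_point)       # corresponding logistic regression coefficient

def odds_multiplier(delta_score_points):
    # Multiplicative change in the odds of >=1 critical incident for a given score difference
    return np.exp(beta * delta_score_points)

for delta in (1, 5, 10):
    print(f"+{delta} NLE points -> odds of a critical incident x {odds_multiplier(delta):.2f}")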


Subject(s)
Anesthetists , Patient Satisfaction , Quality of Health Care , Humans , Ethiopia , Longitudinal Studies , Male , Female , Adult , Quality of Health Care/standards , Anesthetists/standards , Middle Aged , Anesthesiology/standards , Clinical Competence/standards , Educational Measurement/methods , Educational Measurement/standards
17.
JAMA ; 332(4): 300-309, 2024 07 23.
Article in English | MEDLINE | ID: mdl-38709542

ABSTRACT

Importance: Despite its importance to medical education and competency assessment for internal medicine trainees, evidence about the relationship between physicians' milestone residency ratings or the American Board of Internal Medicine's initial certification examination and their hospitalized patients' outcomes is sparse. Objective: To examine the association between physicians' milestone ratings and certification examination scores and hospital outcomes for their patients. Design, Setting, and Participants: Retrospective cohort analyses of 6898 hospitalists completing training in 2016 to 2018 and caring for Medicare fee-for-service beneficiaries during hospitalizations in 2017 to 2019 at US hospitals. Main Outcomes and Measures: Primary outcome measures included 7-day mortality and readmission rates. Thirty-day mortality and readmission rates, length of stay, and subspecialist consultation frequency were also assessed. Analyses accounted for hospital fixed effects and adjusted for patient characteristics, physician years of experience, and year. Exposures: Certification examination score quartile and milestone ratings, including an overall core competency rating measure equaling the mean of the end-of-residency milestone subcompetency ratings, categorized as low, medium, or high, and a knowledge core competency measure categorized similarly. Results: Among 455,120 hospitalizations, median patient age was 79 years (IQR, 73-86 years), 56.5% of patients were female, 1.9% were Asian, 9.8% were Black, 4.6% were Hispanic, and 81.9% were White. The 7-day mortality and readmission rates were 3.5% (95% CI, 3.4%-3.6%) and 5.6% (95% CI, 5.5%-5.6%), respectively, and were 8.8% (95% CI, 8.7%-8.9%) and 16.6% (95% CI, 16.5%-16.7%) for mortality and readmission at 30 days. Mean length of stay and number of specialty consultations were 3.6 days (95% CI, 3.6-3.6 days) and 1.01 (95% CI, 1.00-1.03), respectively. A high vs low overall or knowledge milestone core competency rating was associated with none of the outcome measures assessed. For example, a high vs low overall core competency rating was associated with a nonsignificant 2.7% increase in 7-day mortality rates (95% CI, -5.2% to 10.6%; P = .51). In contrast, top vs bottom examination score quartile was associated with a significant 8.0% reduction in 7-day mortality rates (95% CI, -13.0% to -3.1%; P = .002) and a 9.3% reduction in 7-day readmission rates (95% CI, -13.0% to -5.7%; P < .001). For 30-day mortality, this association was -3.5% (95% CI, -6.7% to -0.4%; P = .03). Top vs bottom examination score quartile was associated with 2.4% more consultations (95% CI, 0.8%-3.9%; P < .003) but was not associated with length of stay or 30-day readmission rates. Conclusions and Relevance: Among newly trained hospitalists, certification examination score, but not residency milestone ratings, was associated with improved outcomes among hospitalized Medicare beneficiaries.
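The analytic design summarized above (outcomes regressed on examination score quartile and milestone ratings, with hospital fixed effects and patient-level adjustment) can be sketched roughly as follows using statsmodels. The data frame, variable names and covariate set are invented stand-ins; the published analysis is considerably more elaborate.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "mort7": rng.integers(0, 2, n),                            # toy 7-day mortality indicator
    "exam_quartile": rng.choice(["Q1", "Q2", "Q3", "Q4"], n),  # certification exam score quartile
    "milestone": rng.choice(["low", "medium", "high"], n),     # overall core competency rating
    "age": rng.normal(79, 8, n),
    "female": rng.integers(0, 2, n),
    "hospital": rng.choice([f"H{i}" for i in range(50)], n),   # hospital identifier
})

# Logistic regression with hospital dummies as fixed effects and patient-level covariates
model = smf.logit(
    "mort7 ~ C(exam_quartile, Treatment('Q1')) + C(milestone, Treatment('low'))"
    " + age + female + C(hospital)",
    data=df,
).fit(disp=False)

# Coefficients for exam quartile (vs Q1) and milestone rating (vs low)
print(model.params.filter(like="exam_quartile"))
print(model.params.filter(like="milestone"))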


Subject(s)
Certification , Clinical Competence , Hospital Mortality , Internal Medicine , Internship and Residency , Patient Readmission , Humans , Internal Medicine/education , Internal Medicine/standards , Patient Readmission/statistics & numerical data , Retrospective Studies , Internship and Residency/standards , Certification/standards , United States , Female , Male , Length of Stay/statistics & numerical data , Medicare , Aged , Hospitalists/standards , Educational Measurement/standards
18.
Ophthalmologie ; 121(7): 554-564, 2024 Jul.
Article in German | MEDLINE | ID: mdl-38801461

ABSTRACT

PURPOSE: In recent years artificial intelligence (AI), as a new segment of computer science, has become increasingly important in medicine. The aim of this project was to investigate whether the current version of ChatGPT (ChatGPT 4.0) is able to answer open questions that could be asked in the context of a German board examination in ophthalmology. METHODS: After excluding image-based questions, 10 questions from 15 different chapters/topics were selected from the textbook 1000 questions in ophthalmology (1000 Fragen Augenheilkunde, 2nd edition, 2014). ChatGPT was instructed by means of a prompt to assume the role of a board-certified ophthalmologist and to concentrate on the essentials when answering. A human expert with considerable expertise in the respective topic evaluated the answers regarding their correctness, relevance and internal coherence. Additionally, the overall performance was rated using school grades, and it was assessed whether the answers would have been sufficient to pass the ophthalmology board examination. RESULTS: ChatGPT would have passed the board examination in 12 of 15 topics. The overall performance, however, was limited, with only 53.3% completely correct answers. While the correctness of the results varied widely between topics (uveitis and lens/cataract 100%; optics and refraction 20%), the answers generally had a high thematic fit (70%) and internal coherence (71%). CONCLUSION: The fact that ChatGPT 4.0 would have passed the specialist examination in 12 of 15 topics is remarkable given that this AI was not specifically trained for medical questions; however, there is considerable performance variability between topics, with some serious shortcomings that currently rule out its safe use in clinical practice.


Subject(s)
Educational Measurement , Ophthalmology , Specialty Boards , Ophthalmology/education , Educational Measurement/methods , Educational Measurement/standards , Germany , Humans , Clinical Competence/standards , Certification , Artificial Intelligence
19.
Am J Pharm Educ ; 88(5): 100701, 2024 May.
Article in English | MEDLINE | ID: mdl-38641172

ABSTRACT

As first-time pass rates on the North American Pharmacist Licensure Examination (NAPLEX) continue to decrease, pharmacy educators are left questioning the dynamics causing the decline and how to respond. Both institutional and student factors influence first-time NAPLEX pass rates. Pharmacy schools established before 2000, those housed within an academic medical center, and public rather than private schools have been associated with tendencies toward higher first-time NAPLEX pass rates. However, these factors alone do not sufficiently explain the issues surrounding first-time pass rates. Changes to the NAPLEX blueprint may also have influenced first-time pass rates. The number of existing pharmacy schools, combined with decreasing numbers of applicants and the influence of the COVID-19 pandemic, should also be considered as potential causes of decreased first-time pass rates. In this commentary, factors associated with first-time NAPLEX pass rates are discussed, along with some possible responses for the Academy to consider.


Subject(s)
COVID-19 , Education, Pharmacy , Educational Measurement , Licensure, Pharmacy , Schools, Pharmacy , Humans , Educational Measurement/standards , Schools, Pharmacy/standards , COVID-19/epidemiology , Students, Pharmacy , Pharmacists , United States