Results 1 - 9 of 9
1.
Article in English | MEDLINE | ID: mdl-38388855

ABSTRACT

The entrustment framework redirects assessment from considering only trainees' competence to decision-making about their readiness to perform clinical tasks independently. Since trainees and supervisors both contribute to entrustment decisions, we examined the cognitive and affective factors that underlie their negotiation of trust, and whether trainee demographic characteristics may bias them. Using a document analysis approach, we adapted large language models (LLMs) to examine feedback dialogs (N = 24,187, each with an associated entrustment rating) between medical student trainees and their clinical supervisors. We compared how trainees and supervisors differentially documented feedback dialogs about similar tasks by identifying qualitative themes and quantitatively assessing their correlation with entrustment ratings. Supervisors' themes predominantly reflected skills related to patient presentations, while trainees' themes were broader, including clinical performance and personal qualities. To examine affect, we trained an LLM to measure feedback sentiment. On average, trainees used more negative language (5.3% lower probability of positive sentiment, p < 0.05) than supervisors, while documenting higher entrustment ratings (+0.08 on a 1-4 scale, p < 0.05). We also found biases tied to demographic characteristics: trainees' documentation reflected more positive sentiment for male trainees (+1.3%, p < 0.05) and for trainees underrepresented in medicine (UIM) (+1.3%, p < 0.05). Entrustment ratings did not appear to reflect these biases, whether documented by the trainee or the supervisor. As such, bias appeared to influence the emotive language trainees used to document entrustment more than the degree of entrustment they experienced. Mitigating these biases is nonetheless important because they may affect trainees' assimilation into their roles and formation of trusting relationships.
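The sentiment comparison described in this abstract can be illustrated with a deliberately simple stand-in: a lexicon-based logistic score in place of the study's trained LLM. The lexicon, note texts, and resulting probabilities below are invented for illustration and are not the study's data or model.

```python
import math
import re

# Invented toy lexicon; the study instead trained a large language model.
POSITIVE = {"excellent", "strong", "clear", "improved", "confident"}
NEGATIVE = {"struggled", "unclear", "missed", "incomplete", "hesitant"}

def positive_probability(text: str) -> float:
    """Map the balance of positive vs. negative words onto (0, 1) with a logistic."""
    words = re.findall(r"[a-z']+", text.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return 1.0 / (1.0 + math.exp(-score))

# Invented example notes documented by each party about similar encounters.
supervisor_notes = ["Excellent, clear presentation.", "Strong exam and improved plan."]
trainee_notes = ["Struggled with an unclear assessment.", "Missed labs; hesitant plan."]

sup_mean = sum(map(positive_probability, supervisor_notes)) / len(supervisor_notes)
trn_mean = sum(map(positive_probability, trainee_notes)) / len(trainee_notes)
gap = sup_mean - trn_mean  # positive gap: trainees documented more negative language
```

Averaging a per-document positive-sentiment probability over each group, as sketched here, is what makes a group-level difference like the reported 5.3% interpretable on a common scale.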

2.
Acad Med ; 99(1): 22-27, 2024 Jan 01.
Article in English | MEDLINE | ID: mdl-37651677

ABSTRACT

ChatGPT has ushered in a new era of artificial intelligence (AI) that already has significant consequences for many industries, including health care and education. Generative AI tools, such as ChatGPT, refer to AI that is designed to create or generate new content, such as text, images, or music, from their trained parameters. With free access online and an easy-to-use conversational interface, ChatGPT quickly accumulated more than 100 million users within the first few months of its launch. Recent headlines in the popular press have ignited concerns relevant to medical education over the possible implications of cheating and plagiarism in assessments as well as excitement over new opportunities for learning, assessment, and research. In this Scholarly Perspective, the authors offer insights and recommendations about generative AI for medical educators based on literature review, including the AI literacy framework. The authors provide a definition of generative AI, introduce an AI literacy framework and competencies, and offer considerations for potential impacts and opportunities to optimize integration of generative AI for admissions, learning, assessment, and medical education research to help medical educators navigate and start planning for this new environment. As generative AI tools continue to expand, educators need to increase their AI literacy through education and vigilance around new advances in the technology and serve as stewards of AI literacy to foster social responsibility and ethical awareness around the use of AI.


Subject(s)
Artificial Intelligence , Education, Medical , Humans , Educational Status , Learning , Literacy
3.
Med Teach ; 45(6): 565-573, 2023 06.
Article in English | MEDLINE | ID: mdl-36862064

ABSTRACT

The use of Artificial Intelligence (AI) in medical education has the potential to facilitate complicated tasks and improve efficiency. For example, AI could help automate assessment of written responses, or provide feedback on medical image interpretations with excellent reliability. While applications of AI in learning, instruction, and assessment are growing, further exploration is still required. Few conceptual or methodological guides exist for medical educators wishing to evaluate or engage in AI research. In this guide, we aim to: 1) describe practical considerations involved in reading and conducting studies in medical education using AI, 2) define basic terminology, and 3) identify which medical education problems and data are ideally suited to AI.


Subject(s)
Artificial Intelligence , Education, Medical , Humans , Reproducibility of Results
4.
Med Educ ; 57(5): 384-387, 2023 05.
Article in English | MEDLINE | ID: mdl-36739578
5.
Teach Learn Med ; 35(5): 565-576, 2023.
Article in English | MEDLINE | ID: mdl-36001491

ABSTRACT

Problem: Recognition of the importance of clinical learning environments (CLEs) in health professions education has led to calls to evaluate and improve the quality of such learning environments. As CLEs sit at the crossroads of education and healthcare delivery, leadership from both entities should share the responsibility and accountability for this work. Current data collection about the experience and outcomes for learners, faculty, staff, and patients tends to occur in fragmented and siloed ways, and available tools to assess clinical learning environments are limited in scope. In addition, from an organizational perspective, oversight of education and patient care often falls to separate entities, not infrequently with a sense of competing interests. Intervention: We aimed to design and pilot a holistic approach to assessment and review of CLEs and establish whether such a formative assessment process could be used to engage stakeholders from education, departmental, and health systems leadership in improvement of CLEs. Utilizing concepts of implementation science, we planned and executed a holistic assessment process for CLEs, monitored the impact of the assessment, and reflected on the process. We focused the assessment on four pillars characterizing exemplary learning environments: 1) Environment is inclusive, promotes diversity and collaboration; 2) Focus on continuous quality improvement; 3) Alignment between work and learning; and 4) Integration of education and healthcare mission. Context: At our institution, medical trainees rotate through several different health systems, but clinical and educational leadership converge at the departmental level. We therefore focused this proof-of-concept project on two large clinical departments at our institution, centering on medical learners from undergraduate and graduate medical education.
For each department, a small team of champions helped create an assessment grid based on the four pillars and identified existing quantitative evaluation data sources. Champions subsequently collected qualitative data through observations, focus groups, and interviews to fill any gaps in available quantitative data. Impact: The project teams shared reports summarizing findings and recommendations with departmental, clinical, and educational leadership. Subsequent meetings with these stakeholders led to actionable plans for improvement as well as sustained structures for collaborative work between the different stakeholder groups. Lessons Learned: This project demonstrated the feasibility and effectiveness of collating, analyzing, and sharing data from various sources in engaging different stakeholder groups to initiate actionable improvement plans. Collating quantitative data from existing resources was a powerful way to demonstrate common issues in CLEs, and qualitative data provided further detail to inform improvement initiatives. Other institutions can adapt this approach to guide assessment and quality improvement of CLEs. As a next step, we are creating a comprehensive learning environment scorecard to allow for comparison of clinical learning environment quality across institutions and over time.


Subject(s)
Delivery of Health Care , Students , Humans , Pilot Projects , Faculty , Leadership
6.
Med Educ ; 56(3): 303-311, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34773415

ABSTRACT

CONTEXT: Clinical supervisors make judgements about how much to trust learners with critical activities in patient care. Such decisions mediate trainees' opportunities for learning and competency development and thus are a critical component of education. As educators apply entrustment frameworks to assessment, it is important to determine how narrative feedback reflecting entrustment may also address learners' educational needs. METHODS: In this study, we used artificial intelligence (AI) and natural language processing (NLP) to identify characteristics of feedback tied to supervisors' entrustment decisions during direct observation encounters of clerkship medical students (3328 unique observations). Supervisors conducted observations of students and collaborated with them to complete an entrustment-based assessment in which they documented narrative feedback and assigned an entrustment rating. We trained a deep neural network (DNN) to predict entrustment levels from the narrative data and developed an explainable AI protocol to uncover the latent thematic features the DNN used to make its prediction. RESULTS: We found that entrustment levels were associated with level of detail (specific steps for performing clinical tasks), feedback type (constructive versus reinforcing) and task type (procedural versus cognitive). In justifying both high and low levels of entrustment, supervisors detailed concrete steps that trainees performed (or did not yet perform) competently. CONCLUSIONS: Framing our results in the factors previously identified as influencing entrustment, we find a focus on performance details related to trainees' clinical competency as opposed to nonspecific feedback on trainee qualities. The entrustment framework reflected in feedback appeared to guide specific goal-setting, combined with details necessary to reach those goals. 
Our NLP methodology can also serve as a starting point for future work on entrustment and feedback as similar assessment datasets accumulate.
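The prediction task in this study can be sketched in miniature. The code below is not the study's deep neural network; it is a hand-built linear scorer whose features loosely mirror the themes the study surfaced (level of detail, reinforcing vs. constructive feedback), with cue words and weights invented for illustration.

```python
# Crude substring cues standing in for learned narrative features (invented).
FEATURES = {
    "detail": ("first", "then", "step", "next"),            # concrete task steps
    "reinforcing": ("independently", "well", "correctly"),  # reinforcing feedback
    "constructive": ("needs", "should", "missed"),          # constructive feedback
}
WEIGHTS = {"detail": 0.5, "reinforcing": 0.8, "constructive": -0.8}

def predict_entrustment(feedback: str) -> float:
    """Score narrative feedback onto a clipped 1-4 entrustment scale."""
    text = feedback.lower()
    score = 2.5  # midpoint prior on the 1-4 scale
    for name, cues in FEATURES.items():
        score += WEIGHTS[name] * sum(cue in text for cue in cues)
    return max(1.0, min(4.0, score))
```

A learned model replaces the hand-chosen cues and weights with representations fit to thousands of observations, but the mapping from narrative features to a bounded entrustment rating is the same shape.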


Subject(s)
Internship and Residency , Students, Medical , Artificial Intelligence , Clinical Competence , Competency-Based Education , Feedback , Humans , Learning , Students, Medical/psychology
7.
Perspect Med Educ ; 10(6): 327-333, 2021 12.
Article in English | MEDLINE | ID: mdl-34297348

ABSTRACT

INTRODUCTION: Trust between supervisors and trainees mediates trainee participation and learning. A resident (postgraduate) trainee's understanding of their supervisor's trust can affect their perceptions of their patient care responsibilities, opportunities for learning, and overall growth as a physician. While the supervisor perspective of trust has been well studied, less is known about how resident trainees recognize supervisor trust and how it affects them. METHODS: In this qualitative study, 21 pediatric residents were interviewed at a single institution. Questions addressed their experiences during their first post-graduate year (PGY-1) on inpatient wards. Each interviewee was asked to describe three different patient care scenarios in which they perceived optimal, under-, and over-trust from their resident supervisor. Data were analyzed using thematic analysis. RESULTS: Residents recognized and interpreted their supervisor's trust through four factors: supervisor, task, relationship, and context. Optimal trust was associated with supervision balancing supervisor availability and resident independence, tasks affording participation in decision-making, trusting relationships with supervisors, and a workplace fostering appropriate autonomy and team inclusivity. The effects of supervisor trust on residents fell into three themes: learning experiences, attitudes and self-confidence, and identities and roles. Optimal trust supported learning via tailored guidance, confidence and lessened vulnerability, and a sense of patient ownership and team belonging. DISCUSSION: Understanding how trainees recognize supervisor trust can enhance interventions for improving the dialogue of trust between supervisors and trainees. It is important for supervisors to be cognizant of their trainees' interpretations of trust because it affects how trainees understand their patient care roles, perceive autonomy, and approach learning.


Subject(s)
Internship and Residency , Trust , Attitude of Health Personnel , Child , Clinical Competence , Humans , Patient Care
8.
Psychometrika ; 85(3): 815-836, 2020 09.
Article in English | MEDLINE | ID: mdl-32856271

ABSTRACT

We propose a dyadic Item Response Theory (dIRT) model for measuring interactions of pairs of individuals when the responses to items represent the actions (or behaviors, perceptions, etc.) of each individual (actor) made within the context of a dyad formed with another individual (partner). Examples of its use include the assessment of collaborative problem solving or the evaluation of intra-team dynamics. The dIRT model generalizes both Item Response Theory models for measurement and the Social Relations Model for dyadic data. The responses of an actor when paired with a partner are modeled as a function of not only the actor's inclination to act and the partner's tendency to elicit that action, but also the unique relationship of the pair, represented by two directional, possibly correlated, interaction latent variables. Generalizations are discussed, such as accommodating triads or larger groups. Estimation is performed using Markov-chain Monte Carlo implemented in Stan, making it straightforward to extend the dIRT model in various ways. Specifically, we show how the basic dIRT model can be extended to accommodate latent regressions, multilevel settings with cluster-level random effects, as well as joint modeling of dyadic data and a distal outcome. A simulation study demonstrates that estimation performs well. We apply our proposed approach to speed-dating data and find new evidence of pairwise interactions between participants, describing a mutual attraction that is inadequately characterized by individual properties alone.
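The data-generating idea of the dIRT model can be sketched directly from the abstract's description: an actor's log-odds of producing a response to an item, when paired with a partner, combine the actor's inclination, the partner's tendency to elicit the action, a directional pair-specific interaction, and the item's difficulty. This is my reading of the model structure, not the authors' Stan code, and all parameter values below are invented.

```python
import math
import random

random.seed(0)
n_people, n_items = 6, 4
theta = {p: random.gauss(0, 1) for p in range(n_people)}  # actor inclination
beta = {p: random.gauss(0, 1) for p in range(n_people)}   # partner elicitation
delta = [random.gauss(0, 1) for _ in range(n_items)]      # item difficulty

def response_prob(actor: int, partner: int, item: int, gamma: float = 0.0) -> float:
    """P(actor endorses item | paired with partner); gamma is the directional
    actor-to-partner interaction latent variable."""
    logit = theta[actor] + beta[partner] + gamma - delta[item]
    return 1.0 / (1.0 + math.exp(-logit))

def simulate(actor: int, partner: int, item: int, gamma: float = 0.0) -> int:
    """Draw a binary response from the model."""
    return int(random.random() < response_prob(actor, partner, item, gamma))
```

In the full model the two directional gammas within a dyad are themselves latent and possibly correlated, which is what lets the model capture mutual attraction beyond what individual actor and partner effects explain.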


Subject(s)
Markov Chains , Problem Solving , Computer Simulation , Humans , Monte Carlo Method , Psychometrics
9.
J Mol Biol ; 392(5): 1303-14, 2009 Oct 09.
Article in English | MEDLINE | ID: mdl-19576901

ABSTRACT

Models of protein energetics that neglect interactions between amino acids that are not adjacent in the native state, such as the Go model, encode or underlie many influential ideas on protein folding. Implicit in this simplification is a crucial assumption that has never been critically evaluated in a broad context: Detailed mechanisms of protein folding are not biased by nonnative contacts, typically argued to be a consequence of sequence design and/or topology. Here we present, using computer simulations of a well-studied lattice heteropolymer model, the first systematic test of this oft-assumed correspondence over the statistically significant range of hundreds of thousands of amino acid sequences that fold to the same native structure. Contrary to previous conjectures, we find a multiplicity of folding mechanisms, suggesting that Go-like models cannot be justified by considerations of topology alone. Instead, we find that the crucial factor in discriminating among topological pathways is the heterogeneity of native contact energies: The order in which native contacts accumulate is profoundly insensitive to omission of nonnative interactions, provided that native contact heterogeneity is retained. This robustness holds over a surprisingly wide range of folding rates for our designed sequences. Mirroring predictions based on the principle of minimum frustration, fast-folding sequences match their Go-like counterparts in both topological mechanism and transit times. Less optimized sequences dwell much longer in the unfolded state and/or off-pathway intermediates than do Go-like models. For dynamics that bridge unfolded and folded states, however, even slow folders exhibit topological mechanisms and transit times nearly identical to those of their Go-like counterparts.
Our results do not imply a direct correspondence between folding trajectories of Go-like models and those of real proteins, but they do help to clarify key topological and energetic assumptions that are commonly used to justify such caricatures.
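The energetic contrast the study probes can be written down compactly: a Go-like energy counts only native contacts, while the full heteropolymer energy also charges for nonnative contacts. The contact sets and energy values below are invented toy numbers, not the paper's lattice model, and serve only to make the distinction concrete.

```python
# Native contact pairs of a toy chain, with heterogeneous contact energies.
NATIVE = {(0, 3), (1, 4), (2, 5)}
CONTACT_E = {(0, 3): -2.0, (1, 4): -1.0,   # native contacts (heterogeneous)
             (2, 5): -1.5, (0, 5): -0.3}   # (0, 5) is a nonnative contact

def go_energy(contacts: set) -> float:
    """Go-like model: nonnative contacts are simply ignored."""
    return sum(CONTACT_E[c] for c in contacts if c in NATIVE)

def full_energy(contacts: set) -> float:
    """Full model: every formed contact contributes its energy."""
    return sum(CONTACT_E.get(c, 0.0) for c in contacts)

# A conformation with one native and one nonnative contact formed.
conformation = {(0, 3), (0, 5)}
```

The study's finding, in these terms, is that the order in which the NATIVE contacts accumulate along folding trajectories is largely unchanged by dropping the nonnative terms, provided the heterogeneity among the native energies is kept.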


Subject(s)
Cytoskeletal Proteins/metabolism , Protein Folding , Computer Simulation , Models, Chemical , Protein Binding