1.
Perspect Med Educ ; 12(1): 584-593, 2023.
Article in English | MEDLINE | ID: mdl-38144672

ABSTRACT

Introduction: Competency-based education requires high-quality feedback to guide students' acquisition of competencies. Sound assessment and feedback systems, such as ePortfolios, are needed to facilitate seeking and giving feedback during clinical placements. However, it is unclear whether the written feedback comments in ePortfolios are of high quality and aligned with the current competency focus. This study therefore investigates the quality of written feedback comments in ePortfolios of healthcare students, and how these comments align with the CanMEDS roles. Methods: A qualitative textual analysis was conducted. 2,349 written feedback comments retrieved from the ePortfolios of 149 healthcare students (specialist medicine, general practice, occupational therapy, speech therapy and midwifery) were analysed retrospectively using deductive content analysis. Two structured categorisation matrices guided the analysis: one based on four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and one on the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional). Results: Only a minority of the feedback comments (n = 352; 14.9%) met all four quality criteria and could be considered of high quality. Most comments were of moderate quality, meeting only two or three criteria. Among the CanMEDS roles, the Medical Expert role was represented most frequently in the feedback comments, whereas the Leader and Health Advocate roles appeared least often. Discussion: The results highlight that providing high-quality feedback is challenging. To address this challenge, individual and continuous feedback training is recommended.
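
The categorisation step described in this abstract is easy to picture in code. Below is a minimal Python sketch, assuming each comment has already been manually coded against the four quality criteria; the tier thresholds mirror the abstract (high = all four criteria, moderate = two or three), but all names are illustrative and this is not the authors' analysis script.

    # Minimal sketch of the quality categorisation, assuming comments are
    # already coded against the four criteria. Illustrative names only.
    CRITERIA = ("performance", "judgment", "elaboration", "improvement")

    def quality_tier(coded_comment: dict) -> str:
        """Map a coded comment to a quality tier by counting criteria met."""
        met = sum(bool(coded_comment.get(c)) for c in CRITERIA)
        if met == 4:
            return "high"       # all four criteria present
        if met >= 2:
            return "moderate"   # two or three criteria present
        return "low"

    # Example: a comment that judges and elaborates but omits improvement advice.
    example = {"performance": True, "judgment": True,
               "elaboration": True, "improvement": False}
    print(quality_tier(example))  # -> "moderate"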


Subject(s)
Clinical Competence , Medicine , Humans , Feedback , Retrospective Studies , Competency-Based Education/methods
2.
Perspect Med Educ ; 12(1): 540-549, 2023.
Article in English | MEDLINE | ID: mdl-38144670

ABSTRACT

Introduction: Manually analysing the quality of large amounts of written feedback comments is time-consuming and demands extensive resources and human effort. This study therefore explored whether a state-of-the-art large language model (LLM) could be fine-tuned to identify the presence of four literature-derived feedback quality criteria (performance, judgment, elaboration and improvement) and the seven CanMEDS roles (Medical Expert, Communicator, Collaborator, Leader, Health Advocate, Scholar and Professional) in written feedback comments. Methods: A set of 2,349 labelled feedback comments from five healthcare educational programs in Flanders, Belgium (specialist medicine, general practice, midwifery, speech therapy and occupational therapy) was split into 12,452 sentences to create two datasets for the machine learning analysis. The Dutch BERT models BERTje and RobBERT were used to train four multiclass-multilabel classification models: two to identify the four feedback quality criteria and two to identify the seven CanMEDS roles. Results: The classification models trained with BERTje and RobBERT to predict the presence of the four feedback quality criteria attained macro average F1-scores of 0.73 and 0.76, respectively. The models predicting the presence of the CanMEDS roles attained F1-scores of 0.71 with BERTje and 0.72 with RobBERT. Discussion: The results show that a state-of-the-art LLM can identify the presence of the four feedback quality criteria and the CanMEDS roles in written feedback comments. This implies that the quality analysis of written feedback comments can be automated with an LLM, saving time and resources.
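
The fine-tuning set-up described here maps onto the standard Hugging Face multilabel pattern. The sketch below uses the public BERTje checkpoint (GroNLP/bert-base-dutch-cased); the label order, the example sentence and its coding are assumptions for illustration, and training details (data splits, hyperparameters) are omitted.

    # Sketch of the multilabel set-up with BERTje via Hugging Face transformers.
    # Label order and the example coding are assumptions, not from the paper.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    LABELS = ["performance", "judgment", "elaboration", "improvement"]

    tokenizer = AutoTokenizer.from_pretrained("GroNLP/bert-base-dutch-cased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "GroNLP/bert-base-dutch-cased",
        num_labels=len(LABELS),
        problem_type="multi_label_classification",  # BCE loss, sigmoid per label
    )

    # One labelled sentence; multilabel targets are float multi-hot vectors.
    enc = tokenizer("Goede anamnese, maar werk aan je verslaglegging.",
                    return_tensors="pt", truncation=True)
    targets = torch.tensor([[1.0, 1.0, 0.0, 1.0]])  # illustrative coding
    loss = model(**enc, labels=targets).loss  # minimised during fine-tuning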


Subject(s)
Clinical Competence , Physicians , Humans , Feedback , Natural Language Processing , Family Practice
3.
Lang Resour Eval ; 57(1): 189-221, 2023.
Article in English | MEDLINE | ID: mdl-36415480

ABSTRACT

News organizations increasingly tailor their news offering to the reader through personalized recommendation algorithms. However, automated recommendation algorithms reflect a commercial logic based on calculated relevance to the user rather than aiming at a well-informed citizenry. In this paper, we introduce the EventDNA corpus, a dataset of 1,773 Dutch-language news articles annotated with information on entities, news events and IPTC Media Topic codes, with the ultimate goal of outlining a recommendation algorithm that uses news event diversity, rather than previous reading behaviour, as the key driver for personalized news recommendation. We describe the EventDNA annotation guidelines, which are inspired by the well-known ERE framework, and conclude that it is not practical to apply a fixed event typology such as the one used in ERE to an unrestricted data context. The corpus and related source code are made available at https://github.com/NewsDNA-LT3/.github.
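
The abstract only outlines the diversity-driven recommender, so the following is a toy Python formulation of the idea rather than the paper's algorithm: greedily select articles whose annotated event types are not yet covered. The article structure (an "events" list of event-type strings) is an assumption for illustration.

    # Toy diversity-driven selection: prefer articles adding unseen event types.
    # My own formulation in the spirit of the abstract, not the paper's method.
    from collections import Counter

    def recommend(candidates: list, k: int = 5) -> list:
        """Pick k articles, greedily maximising coverage of new event types."""
        seen = Counter()
        picked, pool = [], list(candidates)
        for _ in range(min(k, len(pool))):
            best = max(pool, key=lambda a: sum(1 for e in a["events"] if seen[e] == 0))
            picked.append(best)
            pool.remove(best)
            seen.update(best["events"])
        return picked

    articles = [
        {"id": 1, "events": ["election", "protest"]},
        {"id": 2, "events": ["election"]},
        {"id": 3, "events": ["disaster"]},
    ]
    print([a["id"] for a in recommend(articles, k=2)])  # -> [1, 3]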
