Results 1 - 4 of 4
1.
Eur J Neurol ; 28(8): 2523-2532, 2021 08.
Article in English | MEDLINE | ID: mdl-33369806

ABSTRACT

BACKGROUND AND PURPOSE: Due to the COVID-19 pandemic, scientific congresses are increasingly being organized as virtual congresses (VCs). In May 2020, the European Academy of Neurology (EAN) held a VC, free of charge. In the absence of systematic studies on this topic, the aim of this study was to evaluate the attendance and perceived quality of the 2020 EAN VC compared with the 2019 EAN face-to-face congress (FFC). METHODS: Demographic data of participants, obtained from the online registration, were analyzed. The two congresses were compared using a survey with questions on the perception of speakers' performance, the quality of networking and other aspects. RESULTS: Of 43,596 registered participants, 20,694 actively attended the VC. Compared with 2019, the number of participants tripled (6916 in 2019) and the cumulative number of participants attending the sessions was five times higher (169,334 in 2020 vs. 33,024 in 2019). Of the active participants, 55% were from outside Europe, 42% were board-certified neurologists (FFC 80%) and 21% were students (FFC 0.6%). The content of the congress was rated 'above expectation' by 56% of the attendees (FFC 41%). Of the respondents who had attended earlier EAN congresses, 73% preferred the FFC and 17% the VC. CONCLUSION: The VC fulfilled the main mission of organizing high-quality EAN congresses despite the restrictions of the impersonal format. The geographical distribution of the participants confirms the expected higher inclusivity of a VC. The large participation of students and neurologists in training opens new educational potential for the EAN.


Subject(s)
COVID-19 , Neurology , Europe , Humans , Pandemics , SARS-CoV-2
2.
Med Teach ; 29(6): e170-4, 2007 Sep.
Article in English | MEDLINE | ID: mdl-17922355

ABSTRACT

BACKGROUND: Examinations drive learning to a great extent; the examination program is in effect the 'hidden curriculum' for students. One option for improving teaching and learning is therefore the strategic use of exams. AIMS: This report on the strategic use of an innovative assessment tool in the clinical problem-solving domain presents the design, format, content, student results and evaluation of one year of instructive case-based exams for 6th-year medical students. METHOD: Using a hybrid of the OSCE, PMP and KFE formats, we developed a case-based stationary exam. Students were treated as advanced beginners in their medical careers and required to apply their clinical knowledge to the cases through guided inquiry. Case discussions and question-and-answer sessions followed the exams. Six exams were held in 2000-2001 and 382 students participated in the study. One or two problems were used for each exam; the mean duration was 27 minutes for 7-11 stations, and 17-19 observers contributed to each exam. The exams were evaluated through questionnaire-based feedback from the students and oral feedback from the staff members. RESULTS: The exams were well received and rated 'fair' by the students; the format was found highly 'relevant for learning', and the content was 'instructive' and 'not difficult'. The total non-satisfactory performance rate was 2.36%. Students asked to take a similar test weekly. Although the exams were labor intensive, staff members appreciated the collaborative working process. CONCLUSIONS: Instructive case-based exams followed by case discussions appear to be a motivating teaching tool with high potential in the clinical problem-solving domain for 6th-year students.


Subject(s)
Anxiety/diagnosis , Anxiety/therapy , Clinical Competence , Decision Making , Depression/diagnosis , Depression/therapy , Education, Medical, Continuing , Patient Simulation , Teaching/methods , Adult , Chi-Square Distribution , Educational Measurement , Female , Humans , Male , Middle Aged , Videotape Recording
3.
Turk J Gastroenterol ; 27(2): 129-35, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27015618

ABSTRACT

BACKGROUND/AIMS: The Questionnaire on Pediatric Gastrointestinal Symptoms, Rome III version (QPGS-RIII), originally developed in English, has been adapted to different languages in order to widen its use. The aim of this study was to evaluate the validity and reliability of the Turkish adaptation of the QPGS-RIII parent-report form for children and self-report form for children and adolescents. MATERIALS AND METHODS: The study group comprised 7-18-year-old children and adolescents (n=690) who presented to the Ege University School of Medicine, Department of Child Health and Diseases outpatient clinic. The validity and reliability of the Turkish version of the QPGS-RIII were established. RESULTS: Confirmatory factor analysis (CFA) yielded a 10-factor model with satisfactory construct validity and acceptable goodness-of-fit indices. Standardized coefficients determined with CFA in the Turkish version of the instrument ranged between 0.15 and 0.87 in the 7-9-year-old children and between 0.13 and 0.98 in the 10-18-year-old children and adolescents. The t-values of all the factor loadings were significant. In addition, the test-retest reliability coefficients were above 0.70, except for the abdominal migraine factor. CONCLUSION: The findings on validity and reliability indicated that the Turkish version of the instrument can adequately assess functional gastrointestinal disorders (FGIDs) in Turkish children and adolescents. The Turkish version is therefore recommended for use in epidemiologic studies and clinical trials conducted in Turkish-speaking populations.
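The test-retest analysis reported above is conventionally a correlation between two administrations of the same scale. A minimal sketch, with invented scores purely for illustration (the study's data are not reproduced here), of a test-retest coefficient computed as a Pearson correlation:

```python
# Hypothetical sketch: test-retest reliability as the Pearson correlation
# between scores from two administrations of the same questionnaire scale.
# All data below are invented for illustration only.

def pearson_r(x, y):
    """Pearson correlation coefficient between paired score lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Invented scale scores for 8 respondents at first administration and retest.
test = [12, 8, 15, 10, 14, 9, 11, 13]
retest = [11, 9, 14, 10, 15, 8, 12, 13]
r = pearson_r(test, retest)
# A coefficient above 0.70 would meet the threshold the abstract reports.
```

With these invented numbers the coefficient is about 0.93, i.e. it would pass the 0.70 threshold mentioned in the abstract.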


Subject(s)
Gastrointestinal Diseases/diagnosis , Surveys and Questionnaires/standards , Translations , Adolescent , Child , Factor Analysis, Statistical , Female , Humans , Male , Reproducibility of Results , Turkey
4.
Patient Educ Couns ; 87(3): 293-9, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22169634

ABSTRACT

OBJECTIVE: The aim of our study was to present the structure, process and results of the objective structured video exam and the One-Station standardized patient exam used to assess second-year medical students' communication skills. METHODS: Scores of 1137 students between 2007 and 2010 were analyzed. Means and standard deviations were calculated for scores and ratings. Internal consistency was assessed using Cronbach's alpha coefficient. Reliability and generalizability were analyzed using multivariate generalizability theory. RESULTS: Students' total and item scores on the objective structured video exam (60.5-68.8) were lower than on the One-Station standardized patient exam (90.4-96.6). The internal consistency of both exams was moderate. Generalizability analysis and D-study results showed that both the objective structured video exam and the One-Station standardized patient exam need improvement. CONCLUSION: Both exams could be improved by measures such as increasing the number of video cases or stations and further standardizing the raters. PRACTICE IMPLICATIONS: This study might encourage medical teachers to assess the validity and reliability of written and performance exams on the basis of generalizability theory, and to identify feasible actions for improving assessment procedures by conducting a D-study.
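The internal-consistency measure named in the abstract, Cronbach's alpha, is computed as alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch with invented item ratings (not the study's data):

```python
# Hypothetical sketch of Cronbach's alpha, the internal-consistency
# coefficient mentioned in the abstract. The item responses below are
# invented for illustration only.

def cronbach_alpha(items):
    """items: one list of scores per item, aligned across respondents."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var_sum = sum(variance(item) for item in items)
    return (k / (k - 1)) * (1 - item_var_sum / variance(totals))

# Invented ratings from 5 respondents on a 4-item communication checklist.
items = [
    [3, 4, 3, 5, 4],
    [2, 4, 3, 5, 3],
    [3, 5, 2, 4, 4],
    [3, 4, 3, 5, 5],
]
alpha = cronbach_alpha(items)
```

Values near 0.9 are usually read as high internal consistency; the abstract describes both exams as only "moderate" on this measure.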


Subject(s)
Clinical Competence , Communication , Education, Medical, Undergraduate/standards , Educational Measurement/methods , Students, Medical/psychology , Videotape Recording , Adult , Female , Humans , Male , Multivariate Analysis , Patient Simulation , Reproducibility of Results , Turkey , Writing , Young Adult