1.
J Am Coll Radiol ; 14(2): 274-281, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27927589

ABSTRACT

PURPOSE: The Radiology Process Model (RPM) was previously described in terms of its conceptual basis and proposed survey items. The current study describes the first pilot application of the RPM in the field and the results of an initial psychometric analysis. METHODS: We administered an Institutional Review Board-approved pilot RPM survey to 100 patients undergoing outpatient interventional radiology procedures. The 24 survey items had 4 or 5 levels of severity. Using patient feedback, we assessed missing data, items that patients found confusing, patient suggestions for additional items, and item clarity. Factor analysis was performed and internal consistency was measured. Construct validity was assessed by correlating patients' responses to the items, treated as a summated scale, with a visual analog scale (VAS) they completed to rate their interventional radiology experience. RESULTS: The VAS and the RPM summated scale were strongly correlated (r = 0.7). Factor analysis identified four factors: interactions with the facility and doctors/staff, time-sensitive aspects, pain, and anxiety. The items showed high internal consistency (Cronbach's alpha: 0.86) as a group and approximately 0.7 to 0.9 within factors. Analysis showed that two items could be deleted (cost, and communication between radiologist and referrers). Revision of two items and the potential addition of others are discussed. CONCLUSIONS: The RPM shows initial evidence of psychometric validity and internal consistency reliability. Minor changes are anticipated before wider use.
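A minimal sketch of the two analyses named in the abstract: Cronbach's alpha across the item set and the Pearson correlation between the summated scale and a VAS. The item matrix and VAS values below are simulated placeholders, not the study's data.

```python
# Sketch only: Cronbach's alpha and summated-scale/VAS correlation on fake data.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_items = 100, 24
# Simulated ordinal responses (1-5 severity levels) for illustration only.
items = rng.integers(1, 6, size=(n_patients, n_items)).astype(float)
vas = items.sum(axis=1) + rng.normal(0, 5, n_patients)  # fake VAS, loosely related

def cronbach_alpha(x):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_var = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

summated = items.sum(axis=1)              # RPM summated scale
r = np.corrcoef(summated, vas)[0, 1]      # construct-validity correlation
print(f"alpha = {cronbach_alpha(items):.2f}, r(RPM, VAS) = {r:.2f}")
```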


Subject(s)
Outcome Assessment, Health Care/methods; Pain/psychology; Patient Satisfaction/statistics & numerical data; Patient-Centered Care/statistics & numerical data; Quality of Life/psychology; Radiography, Interventional/psychology; Radiography, Interventional/statistics & numerical data; Adult; Aged; Aged, 80 and over; Boston/epidemiology; Computer Simulation; Female; Health Care Surveys/methods; Humans; Male; Middle Aged; Models, Organizational; Pain/diagnosis; Pain/epidemiology; Pilot Projects; Process Assessment, Health Care/methods; Psychometrics/methods; Radiology/organization & administration
3.
Adv Health Sci Educ Theory Pract ; 15(5): 717-33, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20509047

ABSTRACT

In recent years, demand for performance assessments has continued to grow. However, performance assessments are notorious for low reliability, in particular low reliability resulting from task specificity. Because reliability analyses typically treat the performance tasks as randomly sampled from an infinite universe of tasks, these estimates of reliability may not be accurate. For tests built according to a table of specifications, tasks are sampled from different strata (content domains, skill areas, etc.). If these strata remain fixed in the test construction process, ignoring the stratification in the reliability analysis underestimates "parallel forms" reliability and overestimates the person-by-task variance component. This research explores the effect of representing the stratification appropriately, versus misrepresenting it, when estimating reliability and the standard error of measurement. Both multivariate and univariate generalizability studies are reported. Results indicate that proper specification of the analytic design is essential for obtaining correct information about both the generalizability of the assessment and the standard error of measurement. Illustrative D studies further show the effect under a variety of situations and test designs. Additional benefits of multivariate generalizability theory in test design and evaluation are also discussed.
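For orientation, here is a hedged sketch of the basic univariate building block behind such analyses: a persons x tasks (p x t) crossed G study that estimates variance components from mean squares and computes the relative G coefficient E(rho^2) = var_p / (var_p + var_pt,e / n_t). The data are simulated; the strata structure discussed in the abstract is not modeled here.

```python
# Sketch only: univariate p x t generalizability study on simulated scores.
import numpy as np

rng = np.random.default_rng(1)
n_p, n_t = 200, 8                        # persons, tasks
person = rng.normal(0, 1.0, (n_p, 1))    # person effects
task = rng.normal(0, 0.5, (1, n_t))      # task difficulty effects
scores = person + task + rng.normal(0, 1.2, (n_p, n_t))  # residual (p x t + error)

grand = scores.mean()
p_mean = scores.mean(axis=1, keepdims=True)
t_mean = scores.mean(axis=0, keepdims=True)

ms_p = n_t * ((p_mean - grand) ** 2).sum() / (n_p - 1)
ms_t = n_p * ((t_mean - grand) ** 2).sum() / (n_t - 1)
ms_pt = ((scores - p_mean - t_mean + grand) ** 2).sum() / ((n_p - 1) * (n_t - 1))

var_pt_e = ms_pt                  # confounded p x t interaction + error
var_p = (ms_p - ms_pt) / n_t      # person (universe-score) variance
var_t = (ms_t - ms_pt) / n_p      # task variance

g_rel = var_p / (var_p + var_pt_e / n_t)   # relative G coefficient for n_t tasks
print(f"var_p={var_p:.3f}, var_t={var_t:.3f}, var_pt,e={var_pt_e:.3f}, G={g_rel:.3f}")
```

If the tasks were in fact drawn within fixed strata, the stratification would need to enter the design (for example, as separate variables in a multivariate G study); lumping everything into the p x t component, as above, is exactly the misspecification the abstract warns against.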


Subject(s)
Clinical Competence/statistics & numerical data; Data Interpretation, Statistical; Multivariate Analysis; Analysis of Variance; Computer Simulation; Humans; Reproducibility of Results; Statistics as Topic; United States
4.
Acad Med ; 84(10 Suppl): S97-100, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19907399

ABSTRACT

BACKGROUND: Documentation is a subcomponent of the Step 2 Clinical Skills Examination Integrated Clinical Encounter (ICE) component, in which licensed physicians rate examinees on their ability to communicate the findings of the patient encounter, their diagnostic impression, and the initial patient work-up. The main purpose of this research was to examine the impact of modifications to the scoring rubric and rater training protocol on the psychometric characteristics of the documentation scores. METHOD: Following the modifications, the variance structure of the ICE components was modeled using multivariate generalizability theory. RESULTS: The results confirmed the expectation that true-score variance for the documentation subcomponent would increase after adopting the modified training protocol and a more specific rubric. CONCLUSIONS: In general, the results support the commonsense assumption that providing raters with detailed rubrics and comprehensive training improves measurement outcomes. Although the steps taken here were in the right direction, there remains room for improvement. Efforts are currently under way to further improve both the scoring rubrics and rater training.
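The link between larger true-score variance and better measurement can be made concrete with a small D-study projection. The variance components below are made up for illustration and are not the examination's; the sketch only shows that, with error components held fixed, a larger person (true-score) variance yields a higher generalizability coefficient at any number of raters.

```python
# Sketch only: relative G coefficient for an examinee x rater design,
# projected for different numbers of raters and two assumed levels of
# true-score (person) variance.
def g_coefficient(var_person, var_pr_error, n_raters):
    # E(rho^2) = var_p / (var_p + var_pr,e / n_r)
    return var_person / (var_person + var_pr_error / n_raters)

var_error = 0.40  # assumed examinee x rater residual variance (hypothetical)
for label, var_p in [("lower true-score variance", 0.10),
                     ("higher true-score variance", 0.20)]:
    gs = [f"{g_coefficient(var_p, var_error, n):.2f}" for n in (1, 2, 4)]
    print(label, "G at 1/2/4 raters:", gs)
```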


Subject(s)
Clinical Competence; Educational Measurement/methods; Educational Measurement/standards; Licensure, Medical; United States
5.
Teach Learn Med ; 17(3): 202-9, 2005.
Article in English | MEDLINE | ID: mdl-16042515

ABSTRACT

BACKGROUND: Objective structured teaching exercises (OSTEs) are relatively new in medical education, and few studies have reported their reliability and validity. PURPOSE: To systematically examine the impact of OSTE design decisions, including the number of cases, the choice of raters, and the type of scoring system used. METHODS: We examined the impact of the number of cases and raters using generalizability theory. We also compared scores from standardized students (SSs), faculty raters (FRs), and trained graduate student raters (TRs), and examined the relation between behavior checklist ratings and global perception scores. RESULTS: Generalizability (g) coefficients for checklist scores were higher for SSs than for TRs. The g estimates based on SSs' global scores were higher than those for FRs. SSs' checklist scores were higher than TRs' checklist scores, and SSs' global evaluations were higher than FRs' and TRs' global scores. TRs' global perceptions correlated more highly with checklist scores than did SSs'. CONCLUSIONS: SSs provide more generalizable checklist scores than TRs. Generalizability estimates for global scores from SSs and FRs were comparable. SSs are lenient raters compared with TRs and FRs.
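A hedged sketch of the kind of D-study comparison the abstract describes: projecting checklist-score generalizability for SS versus TR ratings as the number of OSTE cases grows. The variance components are assumptions for illustration, not the study's estimates.

```python
# Sketch only: relative G coefficients for a teacher x case design,
# for two rater types, at increasing numbers of OSTE cases.
def g_coefficient(var_person, var_case_error, n_cases):
    # var_person = teacher variance; var_case_error = teacher x case residual
    return var_person / (var_person + var_case_error / n_cases)

components = {"SS checklist": (0.30, 0.50),   # hypothetical (var_person, var_residual)
              "TR checklist": (0.20, 0.60)}
for rater, (vp, ve) in components.items():
    gs = {n: round(g_coefficient(vp, ve, n), 2) for n in (2, 4, 8)}
    print(rater, gs)
```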


Subject(s)
Education, Medical/standards; Educational Measurement/standards; Faculty; Students, Medical; Teaching/methods; Humans; Reproducibility of Results; Research Design; Teaching/standards