ABSTRACT
AIM: This study investigated the effects of gender on repeated, maximal-intensity intermittent sprint exercise following variable day-to-day recovery periods. METHODS: Sixteen volunteers (8 men, 8 women) performed four trials of high-intensity intermittent sprint exercise, each consisting of three bouts of eight 30 m sprints (24 sprints in total). Following completion of the baseline trial, in a repeated-measures design, participants were assigned, in counterbalanced order, recovery periods of 24, 48, and 72 h, whereupon they repeated an identical exercise trial. RESULTS: A series of 4 (trial) x 3 (bout) repeated-measures ANOVAs revealed that men produced significantly (P < 0.01) faster times throughout all bouts and trials of repeated sprint exercise. Additionally, women exhibited significantly lower (P < 0.05) blood lactate concentrations and a significantly smaller (P < 0.05) decrement in performance, indicating greater resistance to fatigue during repeated exercise sessions. There were no significant differences (P > 0.05) between genders for heart rate or rating of perceived exertion during or following trials, and no significant differences in overall sprint performance within either gender among trials. CONCLUSION: These results indicate that men, while able to produce higher absolute power outputs (i.e., lower sprint times), demonstrate higher decrement scores within a trial than women, suggesting women may recover faster and fatigue less. Gender differences affecting recovery within a trial were diminished between trials (i.e., day-to-day recovery) of maximal intermittent sprint work, as evidenced by the stability of performance between trials following the various recovery durations.
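The within-trial fatigue decrement referred to above is commonly quantified in the repeated-sprint literature as a percentage decrement score, comparing actual total sprint time against the ideal total (the best sprint repeated on every repetition). A minimal sketch, using entirely hypothetical sprint times:

```python
def sprint_decrement(times):
    """Percentage decrement score: how far total sprint time exceeds
    the ideal total (best time repeated on every sprint)."""
    ideal = min(times) * len(times)
    return (sum(times) / ideal - 1) * 100

# Hypothetical times (s) for one bout of eight 30 m sprints
bout = [4.50, 4.55, 4.60, 4.68, 4.72, 4.80, 4.85, 4.90]
print(round(sprint_decrement(bout), 2))  # 4.44 (% decrement)
```

A lower score indicates greater resistance to fatigue, which is the sense in which women showed the smaller decrement here.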
Subject(s)
Muscle Fatigue/physiology , Running/physiology , Analysis of Variance , Female , Humans , Lactates/blood , Male , Recovery of Function , Sex Factors , Young Adult
ABSTRACT
The study investigated five factors that can affect the equating of scores from two tests onto a common score scale. The five factors were: (a) item distribution type (i.e., normal versus uniform); (b) standard deviation of item difficulty (i.e., .68, .95, .99); (c) number of items, or test length (i.e., 50, 100, 200); (d) number of common items (i.e., 10, 20, 30); and (e) sample size (i.e., 100, 300, 500). The SIMTEST and BIGSTEPS programs were used for the simulation and equating, respectively, of 4,860 item data sets. Results from the five-way fixed-effects factorial analysis of variance indicated three statistically significant two-way interaction effects. Given Type I error rate considerations, only the simple effects for the interaction between number of common items and test length were interpreted. The eta-squared values for number of common items and test length were small, indicating the effects had little practical importance. The Rasch approach to equating is robust with as few as 10 common items and a test length of 100 items.
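The five-factor design above can be enumerated to check the reported count of data sets. A minimal sketch; the figure of 30 replications per cell is an inference from the reported 4,860 data sets, not stated explicitly in the abstract:

```python
from itertools import product

# The five simulation factors and their levels, as listed above
factors = {
    "distribution": ["normal", "uniform"],
    "difficulty_sd": [0.68, 0.95, 0.99],
    "test_length": [50, 100, 200],
    "common_items": [10, 20, 30],
    "sample_size": [100, 300, 500],
}

cells = list(product(*factors.values()))
print(len(cells))       # 162 design cells (2 x 3 x 3 x 3 x 3)
print(len(cells) * 30)  # 4860 data sets at 30 replications per cell
```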
Subject(s)
Personality Inventory/statistics & numerical data , Computer Simulation , Humans , Mathematical Computing , Models, Statistical , Psychometrics , Software
Subject(s)
Chest Pain/etiology , Computer-Assisted Instruction , Diagnosis, Differential , Education, Medical, Undergraduate , Analysis of Variance , Artificial Intelligence , Cognition , Decision Support Techniques , Feedback , Humans , Monte Carlo Method , Pilot Projects , Task Performance and Analysis
ABSTRACT
Many-facet Rasch analysis provides the bases for making fair and meaningful decisions from individual ratings by judges on tasks. The typical measurement design employed in a many-facet Rasch analysis has judges crossed with other facets or conditions of measurement. A nested design does not permit facets to be compared. However, a mixed design can be used to achieve a common vertical ruler when the frame of reference permits commensurate measures to be linked. Examples of crossed, nested, and mixed designs are presented to illustrate how a many-facet Rasch analysis can be modified to meet the connectivity requirement for comparing facet measures.
Subject(s)
Psychometrics/methods , Research Design , Task Performance and Analysis , Humans , Models, Statistical , Software
ABSTRACT
To test the assumption that mathematically talented students show little mathematics anxiety, students participating in an early-entrance-to-college program for talented students were asked to complete the Mathematics Anxiety Rating Scale. Results indicated that these talented students were less math anxious than most unselected college students, although they were more math anxious than a group of college students majoring in physics. Females in the study tended to be more math anxious than males (d = -.32), although this difference was not significant. For the group as a whole, no relationship was found between mathematics anxiety and grades, or between mathematics anxiety and Scholastic Aptitude Test - Mathematics scores. However, when those relationships were examined for males alone, higher verbal scores and higher grades were associated with lower levels of mathematics anxiety; these relationships were not evident for females.
ABSTRACT
The different chi-square statistics reported in a many-faceted Rasch model analysis are presented and interpreted. In addition, other chi-square summary values are computed and presented for the interpretation of facets. The chi-square values are useful for determining: (1) the significance of a facet in the Rasch model; (2) the significant contribution of facet main and interaction effects; (3) differences among facet elements; and (4) the specific facet interaction adjustments to the subjects' calibrated logit ability measures.
Subject(s)
Chi-Square Distribution , Task Performance and Analysis , Humans , Probability , Psychometrics , Software
ABSTRACT
The purpose of this study was to compare the results and interpretation of data from a performance examination when four methods of analysis are used: 1) traditional summary statistics, 2) inter-judge correlations, 3) generalizability theory, and 4) the multi-facet Rasch model. Results indicated that similar sources of variance were identified by each method; however, the multi-facet Rasch model was the only method that linearized the scores and accounted for differences in the particular examination challenged by a candidate before ability estimates were calculated.
Subject(s)
Task Performance and Analysis , Data Interpretation, Statistical , Humans , Observer Variation , Psychometrics , Software
ABSTRACT
A Monte Carlo study was conducted using simulated dichotomous data to determine the effects of guessing on Rasch item fit statistics (the weighted total, unweighted total, and unweighted between fit statistics) and the Logit Residual Index (LRI). The data were simulated using 100 items, 100 persons, three levels of guessing (0%, 25%, and 50%), and two item difficulty distributions (normal and uniform). No significant differences were found between the mean Rasch item fit statistics for either distribution type as the probability of guessing the correct answer increased. The mean item scores differed significantly with uniformly distributed item difficulties, but not with normally distributed item difficulties. The LRI was more sensitive to large positive item misfit values associated with the unweighted total fit statistic than to similar values associated with the weighted total or unweighted between fit statistics. The greatest (negative) change in LRI values was observed when the unweighted total fit statistic had large positive values greater than 2.4. The LRI statistic was most useful in identifying the linear trend in the residuals for each item, thereby indicating differences in ability groups, i.e., differential item functioning.
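The guessing manipulation described above can be sketched by adding a fixed pseudo-guessing floor to a Rasch model when generating dichotomous data. This is an illustrative reconstruction, not the study's actual simulation procedure; person abilities and item difficulties are assumed standard normal:

```python
import math
import random

def simulate(n_persons=100, n_items=100, guess=0.25, seed=0):
    """Dichotomous responses from a Rasch model with a fixed guessing
    floor: P(correct) = c + (1 - c) / (1 + exp(-(theta - b)))."""
    rng = random.Random(seed)            # person/item parameters
    thetas = [rng.gauss(0, 1) for _ in range(n_persons)]
    bs = [rng.gauss(0, 1) for _ in range(n_items)]
    draw = random.Random(seed + 1)       # response draws
    return [[1 if draw.random() < guess + (1 - guess) /
             (1 + math.exp(-(t - b))) else 0
             for b in bs] for t in thetas]

data = simulate(guess=0.25)
print(len(data), len(data[0]))  # 100 100
```

Holding the seed fixed while raising the guessing level leaves the person and item parameters unchanged, so any shift in item scores is attributable to guessing alone, mirroring the study's design.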
Subject(s)
Bias , Models, Statistical , Research Design/statistics & numerical data , Humans , Monte Carlo Method
ABSTRACT
As organizations implement work teams, their assessment practices will ultimately reflect compensation strategies that move away from individual assessment. This involves the use not only of multiple raters but also of multiple criteria. Team assessment using multiple raters and multiple criteria is therefore necessary; however, it can produce differences in ratings due to the leniency or severity of the individual raters. This study analyzed the ratings of individual members on 31 different teams across 12 different criteria of team performance. Using the many-facet Rasch model, statistical differences among the teams and the 12 criteria were calculated.
Subject(s)
Employee Performance Appraisal/methods , Psychometrics/methods , Employee Performance Appraisal/statistics & numerical data , Humans , Logistic Models , Models, Statistical , Observer Variation
ABSTRACT
Throughout the mid-to-late 1970s, considerable research was conducted on the properties of Rasch fit mean squares. This work culminated in a variety of transformations to convert the mean squares into approximate t-statistics, motivated primarily by the influence of sample size on the magnitude of the mean squares and by the desire for a single critical value that could be applied generally to most cases. In the late 1980s and early 1990s the trend appears to have reversed, with numerous researchers using the untransformed fit mean squares to test fit to the Rasch measurement models; the principal motivation cited is the influence of sample size on the sensitivity of the t-converted mean squares. The purpose of this paper is to present the historical development of these fit indices and the various transformations, and to examine the impact of sample size on both the fit mean squares and their t-transformations. Because sample size has little influence on the person fit mean squares, owing to the relatively short test lengths involved (100 items or fewer), this paper focuses on the item fit mean squares, where it is common to find the statistics used with sample sizes ranging from 30 to 10,000.
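The cube-root (Wilson-Hilferty) conversion is the most common of the t-transformations discussed here. The sketch below illustrates the sample-size sensitivity the paper examines, under the simplifying assumption that a fit mean square with roughly n degrees of freedom has model standard deviation q = sqrt(2/n); exact values depend on the response variances:

```python
import math

def wilson_hilferty_t(ms, n):
    """Approximate t (standardized) fit statistic for a mean square
    with ~n degrees of freedom, via the cube-root transformation.
    Assumes the model SD of the mean square is q = sqrt(2/n)."""
    q = math.sqrt(2 / n)
    return (ms ** (1 / 3) - 1) * (3 / q) + q / 3

# The same modest misfit (MS = 1.2) judged at two sample sizes:
print(round(wilson_hilferty_t(1.2, 30), 2))    # 0.81: not significant
print(round(wilson_hilferty_t(1.2, 3000), 2))  # 7.29: highly significant
```

With 30 persons the mean square of 1.2 looks acceptable, while with 3,000 it is flagged emphatically: precisely the sample-size sensitivity that motivated the return to untransformed mean squares.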