1.
Ergonomics ; 66(8): 1132-1141, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36227226

ABSTRACT

Observer, manual single-frame video, and automated computer vision measures of the Hand Activity Level (HAL) were compared. HAL can be measured three ways: (1) observer rating (HALO), (2) calculated from single-frame multimedia video task analysis measuring frequency (F) and duty cycle (D) (HALF), or (3) automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences among the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings was the most consistent: more than two thirds (68%) of all cases were within ±1 point on the HAL scale, with a linear regression through the mean coefficient of 1.03 (R² = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis.

Practitioner summary: The ACGIH Hand Activity Level (HAL) was obtained for 419 industrial tasks using three methods: observation, calculation from single-frame video analysis, and computer vision. The computer vision methodology produced results comparable to single-frame video analysis.
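The between-method comparison above can be sketched in a few lines. This is illustrative only: the function names are hypothetical, and the exact regression form used in the study is assumed to be a no-intercept fit.

```python
def hal_agreement(hal_c, hal_f, tol=1.0):
    """Compare computer-vision HAL ratings (hal_c) against single-frame
    video HAL ratings (hal_f) for the same tasks.

    Returns:
      within -- fraction of tasks where |HALC - HALF| <= tol (the ±1-point
                agreement reported above uses tol=1.0)
      slope  -- slope of a no-intercept regression hal_c ≈ slope * hal_f
                (an assumed form of the regression coefficient reported)
    """
    n = len(hal_c)
    within = sum(abs(c - f) <= tol for c, f in zip(hal_c, hal_f)) / n
    slope = sum(c * f for c, f in zip(hal_c, hal_f)) / sum(f * f for f in hal_f)
    return within, slope
```

With identical rating lists the agreement fraction is 1.0 and the slope is 1.0; values near those bounds indicate the two methods track each other closely.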


Subject(s)
Hand , Task Performance and Analysis , Humans , Prospective Studies , Upper Extremity , Computers , Video Recording/methods
2.
Hum Factors ; 59(5): 844-860, 2017 08.
Article in English | MEDLINE | ID: mdl-28704631

ABSTRACT

Objective: This research considers how driver movements in video clips of naturalistic driving are related to observer subjective ratings of distraction and engagement behaviors.

Background: Naturalistic driving video provides a unique window into driver behavior unmatched by crash data, roadside observations, or driving simulator experiments. However, manually coding many thousands of hours of video is impractical. An objective method is needed to identify driver behaviors suggestive of distracted or disengaged driving for automated computer vision analysis to access this rich source of data.

Method: Visual analog scales ranging from 0 to 10 were created, and observers rated their perception of driver distraction and engagement behaviors from selected naturalistic driving videos. Driver kinematics time series were extracted from frame-by-frame coding of driver motions, including head rotation, head flexion/extension, and hands on/off the steering wheel.

Results: The ratings were consistent among participants. A statistical model predicting average ratings from the kinematic features accounted for 54% of distraction rating variance and 50% of engagement rating variance.

Conclusion: Rated distraction behavior was positively related to the magnitude of head rotation and the fraction of time the hands were off the wheel. Rated engagement behavior was positively related to the variation of head rotation and negatively related to the fraction of time the hands were off the wheel.

Application: If automated computer vision can code simple kinematic features, such as driver head and hand movements, then large-volume naturalistic driving videos could be automatically analyzed to identify instances when drivers were distracted or disengaged.
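A model of the kind described in the Results section, predicting mean observer ratings from per-clip kinematic features, can be sketched as an ordinary least-squares fit. The feature layout and data here are hypothetical stand-ins, not the study's dataset; the variance-accounted-for figure corresponds to the R² value.

```python
import numpy as np

def fit_rating_model(X, y):
    """Ordinary least squares: y ≈ intercept + X @ beta.

    X -- (n_clips, n_features) kinematic features, e.g. head-rotation
         magnitude and fraction of time hands off the wheel (illustrative)
    y -- (n_clips,) mean observer ratings on the 0-10 visual analog scale

    Returns (coefficients including intercept, R^2 variance accounted for).
    """
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1.0 - resid.var() / y.var()             # fraction of rating variance explained
    return beta, r2
```

On the study's data such a model accounted for roughly half the rating variance; on exactly linear toy data it recovers the coefficients with R² near 1.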


Subject(s)
Attention/physiology , Automobile Driving , Motor Activity/physiology , Psychometrics/methods , Psychomotor Performance/physiology , Adult , Biomechanical Phenomena , Humans
3.
Ergonomics ; 60(12): 1730-1738, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28640656

ABSTRACT

Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average difference between manual frame-by-frame DC and computer vision DC was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence that deviations in DC have on HAL, found that HAL remained unaffected when the DC error was less than 5%, and that a DC error under 10% changed HAL by less than 0.5, which is negligible. Automated computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates.

Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
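The duty cycle compared above is the standard ratio of exertion time to total cycle time. A minimal sketch, assuming per-frame exertion labels as produced by either classifier (function names are illustrative):

```python
def duty_cycle(exertion_frames):
    """Duty cycle (%) = exertion time / total cycle time * 100.

    exertion_frames -- per-frame boolean sequence at a fixed frame rate
    (True = hand exerting), e.g. the output of the DT or FVT classifier.
    """
    return 100.0 * sum(exertion_frames) / len(exertion_frames)

def mean_dc_difference(auto_dcs, manual_dcs):
    """Average signed DC difference (automated minus manual frame-by-frame),
    the comparison statistic reported above, in percentage points."""
    n = len(auto_dcs)
    return sum(a - m for a, m in zip(auto_dcs, manual_dcs)) / n
```

A signed average near zero, as in the FVT result, indicates the automated estimates are unbiased relative to the manual ground truth.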


Subject(s)
Algorithms , Hand/physiology , Physical Exertion , Time and Motion Studies , Computers , Humans , Video Recording
4.
Ergonomics ; 59(11): 1514-1525, 2016 Nov.
Article in English | MEDLINE | ID: mdl-26848051

ABSTRACT

A marker-less 2D video algorithm measured hand kinematics (location, velocity and acceleration) in a paced repetitive laboratory task across varying hand activity levels (HAL). The decision tree (DT) algorithm identified the trajectory of the hand using spatiotemporal relationships during the exertion and rest states. The feature vector training (FVT) method used a k-nearest-neighbourhood classifier trained on a set of representative samples or on the first cycle. The average duty cycle (DC) error using the DT algorithm was 2.7%. The FVT algorithm had an average error of 3.3% when trained on the first cycle of each repetitive task, and 2.8% when trained on several representative repetitive cycles. The HAL error was 0.1 for both algorithms, which was considered negligible. Elemental times, stratified by task and subject, were not statistically different from ground truth (p < 0.05). Both algorithms performed well for automatically measuring elapsed time, DC and HAL.

Practitioner Summary: A completely automated approach for measuring elapsed time and DC was developed using marker-less video tracking and the tracked kinematic record. Such an approach is automatic, repeatable, objective and unobtrusive, and is suitable for evaluating repetitive exertions, muscle fatigue and manual tasks.
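The feature-vector-training idea, labelling each frame exertion or rest with a k-nearest-neighbour classifier trained on one labelled cycle, can be sketched as follows. The feature choice (here a single kinematic value per frame) and function names are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def knn_classify(train_X, train_y, X, k=3):
    """Label frames as rest (0) or exertion (1) by majority vote among the
    k nearest training frames, where train_X holds feature vectors (e.g.
    hand speed/acceleration) from a labelled first cycle."""
    labels = []
    for x in X:
        d = np.linalg.norm(train_X - x, axis=1)   # distance to each training frame
        nearest = train_y[np.argsort(d)[:k]]      # labels of the k nearest frames
        labels.append(int(nearest.sum() * 2 > k)) # majority vote
    return np.array(labels)
```

Once every frame is labelled, the duty cycle follows directly as the fraction of exertion-labelled frames, and elapsed exertion/rest times come from runs of consecutive labels.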


Subject(s)
Algorithms , Hand/physiology , Image Processing, Computer-Assisted , Task Performance and Analysis , Video Recording , Acceleration , Biomechanical Phenomena , Female , Humans , Male , Movement , Muscle Fatigue
5.
Ergonomics ; 58(2): 184-94, 2015.
Article in English | MEDLINE | ID: mdl-25343278

ABSTRACT

An equation was developed for estimating hand activity level (HAL) directly from tracked root mean square (RMS) hand speed (S) and duty cycle (D). HAL can be estimated from motion/exertion frequency (F) and D by table lookup, equation or marker-less video tracking; since automatically estimating F is sometimes complex, HAL may be more readily assessed using S. Hands in 33 videos originally used for HAL rating were tracked to estimate S, scaled relative to hand breadth (HB), and single-frame analysis was used to measure D. Since HBs were unknown, a Monte Carlo method was employed to iteratively estimate the regression coefficients from US Army anthropometry survey data. The resulting equation, HAL = 10e^z/(1 + e^z) with z = -15.87 + 0.02D + 2.25 ln S, had R² = 0.97 and a residual range of ±0.5 HAL. The S equation fit the Latko et al. (1997) data better than the F equation and predicted independently observed HAL values (Harris 2011) more accurately (MSE = 0.16 vs. 1.28).
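The logistic HAL equation above is straightforward to evaluate. A minimal sketch, assuming S is already scaled to the hand-breadth-relative units the equation was fit in and D is in percent (the function name is illustrative):

```python
import math

def hal_from_speed(S, D):
    """Estimate Hand Activity Level from RMS hand speed S (hand-breadth-
    scaled units) and duty cycle D (%), using the reported equation:
    HAL = 10 e^z / (1 + e^z), z = -15.87 + 0.02 D + 2.25 ln S."""
    z = -15.87 + 0.02 * D + 2.25 * math.log(S)
    return 10.0 * math.exp(z) / (1.0 + math.exp(z))
```

The logistic form keeps the estimate bounded on the 0-10 HAL scale and increases monotonically with both hand speed and duty cycle, matching the behaviour of the rating scale.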


Subject(s)
Hand/physiology , Physical Exertion , Task Performance and Analysis , Work/physiology , Anthropometry/methods , Biomechanical Phenomena , Humans , Military Personnel , Movement , Occupational Health , Regression Analysis , Threshold Limit Values , United States