Results 1 - 20 of 63
1.
Ergonomics ; 66(8): 1132-1141, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36227226

ABSTRACT

Observer, manual single-frame video, and automated computer vision measures of the Hand Activity Level (HAL) were compared. HAL can be measured three ways: (1) rated by an observer (HALO), (2) calculated from single-frame multimedia video task analysis of frequency (F) and duty cycle (D) (HALF), or (3) estimated by automated computer vision (HALC). This study analysed videos collected from three prospective cohort studies to ascertain HALO, HALF, and HALC for 419 industrial videos. Although the differences among the three methods were relatively small on average (<1), they were statistically significant (p < .001). Agreement between the HALC and HALF ratings was the most consistent: more than two thirds (68%) of all cases were within ±1 point on the HAL scale, and a linear regression through the mean yielded a coefficient of 1.03 (R² = 0.89). The results suggest that the computer vision methodology yields results comparable to single-frame video analysis. Practitioner summary: The ACGIH Hand Activity Level (HAL) was obtained for 419 industrial tasks using three methods: observer rating, single-frame video analysis, and computer vision. The computer vision methodology produced results comparable to single-frame video analysis.
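
A minimal sketch of the agreement analysis described above, assuming paired HALF and HALC ratings are available as arrays; the synthetic data and the interpretation of "regression through the mean" as a zero-intercept fit on mean-centered data are assumptions, not the study's code:

```python
import numpy as np

# Hypothetical paired ratings on the 0-10 HAL scale (synthetic, for illustration).
rng = np.random.default_rng(0)
hal_f = rng.uniform(1, 9, size=419)            # single-frame video analysis
hal_c = hal_f + rng.normal(0, 0.6, size=419)   # computer vision estimate

# Fraction of cases where the two methods agree within +/-1 HAL point.
within_one = np.mean(np.abs(hal_c - hal_f) <= 1.0)

# Regression through the mean: zero-intercept slope on mean-centered data.
x = hal_f - hal_f.mean()
y = hal_c - hal_c.mean()
slope = np.sum(x * y) / np.sum(x * x)
r2 = 1 - np.sum((y - slope * x) ** 2) / np.sum(y ** 2)

print(f"within +/-1 HAL: {within_one:.0%}, slope = {slope:.2f}, R^2 = {r2:.2f}")
```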


Subjects
Hand, Task Performance and Analysis, Humans, Prospective Studies, Upper Extremity, Computers, Video Recording/methods
2.
Hum Factors ; 64(2): 265-268, 2022 03.
Article in English | MEDLINE | ID: mdl-35025608

ABSTRACT

Scientific publications today operate at a time when trust in science depends on effective vetting of data, identification of questionable practices, and scrutiny of research. The Editor-in-Chief has an invaluable opportunity to influence the direction and reputation of our field, but also the responsibility to confront contemporary trends that threaten the publication of quality research. The editor maintains strict scientific standards for the journal through the exercise of good judgment and a steadfast commitment to upholding the highest ethical principles. Opportunities exist to create and implement new initiatives for improving the peer review process and elevating the journal's stature. The journal must address these challenges and communicate effectively with a public that seeks a reliable source of information.


Subjects
Internet, Humans
3.
Hum Factors ; : 187208221093829, 2022 May 12.
Article in English | MEDLINE | ID: mdl-35548929

ABSTRACT

OBJECTIVE: The effect of camera viewpoint was studied when performing visually obstructed psychomotor targeting tasks. BACKGROUND: Previous research in laparoscopy and robotic teleoperation found that complex perceptual-motor adaptations associated with misaligned viewpoints corresponded to degraded performance in manipulation. Because optimal camera positioning is often unavailable in restricted environments, alternative viewpoints that might mitigate performance effects are not obvious. METHODS: A virtual keyboard-controlled targeting task was remotely distributed to workers on Amazon Mechanical Turk. The experiment was performed by 192 subjects for a static viewpoint with independent parameters of target direction, Fitts' law index of difficulty, viewpoint azimuthal angle (AA), and viewpoint polar angle (PA). A dynamic viewpoint experiment was also performed by 112 subjects in which the viewpoint AA changed after every trial. RESULTS: AA and target direction had significant effects on performance for the static viewpoint experiment. Movement time and travel distance increased as AA increased until a discrete improvement in performance at 180°. Increasing AA from 225° to 315° linearly decreased movement time and distance. The dynamic viewpoint experiment showed significant main effects of current AA and magnitude of transition. Orthogonal-direction and no-change viewpoint transitions least affected performance. CONCLUSIONS: Viewpoint selection should aim to minimize associated rotations within the manipulation plane when performing targeting tasks, whether implementing a static or dynamic viewing solution. Because PA rotations had negligible performance effects, PA adjustments may extend the space of viable viewpoints. APPLICATIONS: These results can inform viewpoint selection for visual feedback during psychomotor tasks.
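
For context, the Fitts' law index of difficulty used as an independent parameter can be computed from target distance and width. A small sketch; the Shannon formulation is assumed here and may differ from the exact formulation used in the study:

```python
import math

def index_of_difficulty(amplitude: float, width: float) -> float:
    """Shannon formulation of Fitts' index of difficulty, in bits."""
    return math.log2(amplitude / width + 1)

# Example: a target 240 px away with a 30 px width.
print(f"ID = {index_of_difficulty(240, 30):.2f} bits")  # ~3.17 bits
```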

4.
Hum Factors ; : 187208221077722, 2022 Mar 28.
Article in English | MEDLINE | ID: mdl-35345922

ABSTRACT

OBJECTIVE: Trade-offs between productivity, physical workload (PWL), and mental workload (MWL) were studied when integrating collaborative robots (cobots) into existing manual work by optimizing the allocation of tasks. BACKGROUND: As cobots are more widely introduced in the workplace and their capabilities greatly improve, there is a need to consider how they can best help their human partners. METHODS: A theoretical data-driven analysis was conducted using the O*NET Content Model to evaluate 16 selected jobs for associated work context, skills, and constraints. Associated work activities were ranked by potential for substitution by a cobot. PWL and MWL were estimated using O*NET database variables corresponding to the Strain Index and NASA-TLX. An algorithm was developed to optimize the assignment of work activities to cobots and human workers according to their most suited abilities. RESULTS: After cobots were reassigned tasks, human workload decreased for some jobs and increased for others, where residual human capacity was used to perform the job activities designated most important, increasing productivity. The human workload for the remaining jobs was unchanged. CONCLUSIONS: The changes in human workload from the introduction of cobots may not always benefit the human worker unless trade-offs are considered. APPLICATION: The framework of this study may be applied to existing jobs to identify the relationship between productivity and worker tolerances when integrating cobots into specific tasks.
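
A toy sketch of the kind of allocation step described: greedily assigning the activities most substitutable by a cobot until its capacity is exhausted. The scoring fields, threshold, and capacity model are illustrative assumptions, not the study's algorithm:

```python
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    substitutability: float  # assumed 0-1 score of how well a cobot can perform it
    workload: float          # PWL/MWL contribution if performed by the human

def allocate(activities: list[Activity], cobot_capacity: float):
    """Assign activities to the cobot in order of substitutability until capacity runs out."""
    cobot, human = [], []
    remaining = cobot_capacity
    for act in sorted(activities, key=lambda a: a.substitutability, reverse=True):
        if act.substitutability > 0.5 and act.workload <= remaining:
            cobot.append(act.name)
            remaining -= act.workload
        else:
            human.append(act.name)
    return cobot, human

tasks = [Activity("load parts", 0.9, 3.0), Activity("inspect welds", 0.3, 2.0),
         Activity("fasten bolts", 0.7, 2.5)]
print(allocate(tasks, cobot_capacity=4.0))
```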

5.
Hum Factors ; 64(3): 482-498, 2022 05.
Article in English | MEDLINE | ID: mdl-32972247

ABSTRACT

OBJECTIVE: A computer vision method was developed for estimating the trunk flexion angle, angular speed, and angular acceleration by extracting simple features from the moving image during lifting. BACKGROUND: Trunk kinematics is an important risk factor for lower back pain, but is often difficult for practitioners to measure when conducting lifting risk assessments. METHODS: Mannequins representing a wide range of hand locations for different lifting postures were systematically generated using the University of Michigan 3DSSPP software. A bounding box was drawn tightly around each mannequin, and regression models estimated trunk angles. The estimates were validated against human posture data for 216 lifts collected using a laboratory-grade motion capture system and synchronized video recordings. Trunk kinematics, based on bounding box dimensions drawn around the subjects in the video recordings of the lifts, were modeled for consecutive video frames. RESULTS: The mean absolute difference between predicted and motion-capture-measured trunk angles was 14.7°, and there was a significant linear relationship between predicted and measured trunk angles (R² = .80, p < .001). The training error for the kinematics model was 2.3°. CONCLUSION: Using simple computer vision-extracted features, the bounding box method indirectly estimated trunk angle and associated kinematics, albeit with limited precision. APPLICATION: This computer vision method may be implemented on handheld devices such as smartphones to facilitate automatic lifting risk assessments in the workplace.
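
A minimal sketch of the bounding-box idea: measure the tight box around a segmented silhouette and feed its stature-normalized dimensions to a fitted regression. The silhouette mask, feature choice, and coefficients below are placeholders, not the published model:

```python
import numpy as np

def bounding_box_features(mask: np.ndarray, stature_px: float):
    """Height and width of the tight box around a binary silhouette, normalized by stature."""
    rows, cols = np.nonzero(mask)
    height = (rows.max() - rows.min() + 1) / stature_px
    width = (cols.max() - cols.min() + 1) / stature_px
    return height, width

def trunk_angle_estimate(height: float, width: float) -> float:
    # Placeholder linear model; real coefficients would come from fitting
    # against motion-capture ground truth, as in the study.
    b0, b1, b2 = 176.0, -200.0, 40.0
    return b0 + b1 * height + b2 * width

mask = np.zeros((100, 60), dtype=bool)
mask[20:90, 10:45] = True              # synthetic "silhouette"
h, w = bounding_box_features(mask, stature_px=80)
print(f"estimated trunk flexion: {trunk_angle_estimate(h, w):.1f} deg")
```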


Subjects
Lifting, Torso, Biomechanical Phenomena, Computers, Humans, Posture
6.
Hum Factors ; 63(7): 1169-1181, 2021 11.
Article in English | MEDLINE | ID: mdl-32286884

ABSTRACT

OBJECTIVE: Surgeon tremor was measured during vitreoretinal microscopic surgeries under different hand support conditions. BACKGROUND: While the ophthalmic surgeon's forearm is supported by a standard symmetric wrist rest when operating on the patient's eye on the same side as the dominant hand (SSD), the surgeon's hand is placed directly on the patient's forehead when operating on the side contralateral to the dominant hand (CSD). It was hypothesized that more tremor is associated with CSD surgeries than SSD surgeries and that, using an experimental asymmetric wrist rest in which the contralateral wrist bar gradually rises and curves toward the patient's operative eye, there is no difference in tremor between CSD and SSD surgeries. METHODS: Seventy-six microscope videos, recorded from three surgeons performing macular membrane peeling operations, were analyzed using marker-less motion tracking, and movement data (instrument path length and acceleration) were recorded. Tremor acceleration frequency and magnitude were measured using spectral analysis. Following 47 surgeries using a conventional symmetric wrist support, surgeons incorporated the experimental asymmetric wrist rest into their surgical routine. RESULTS: Average tremor acceleration magnitude was 0.11 mm/s² (22%) greater (p = .05) for CSD surgeries (0.62 mm/s², SD = 0.08) than SSD surgeries (0.51 mm/s², SD = 0.09) with the symmetric wrist rest, while no significant (p > .05) difference was observed with the experimental asymmetric wrist rest (0.57 mm/s², SD = 0.13 for SSD and 0.58 mm/s², SD = 0.11 for CSD surgeries). CONCLUSION: The asymmetric wrist support reduced the difference in tremor acceleration between CSD and SSD surgeries.
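
A sketch of measuring tremor magnitude from an acceleration trace via spectral analysis. The sampling rate, synthetic signal, and the 6-12 Hz band treated as physiological tremor are assumptions, not the study's parameters:

```python
import numpy as np

fs = 60.0                                   # assumed sampling rate, Hz
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / fs)
accel = 0.5 * np.sin(2 * np.pi * 8 * t) + 0.2 * rng.normal(size=t.size)  # synthetic trace, mm/s^2

# One-sided amplitude spectrum of the acceleration signal.
spectrum = np.abs(np.fft.rfft(accel)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

band = (freqs >= 6) & (freqs <= 12)         # assumed tremor band
peak_freq = freqs[band][np.argmax(spectrum[band])]
magnitude = spectrum[band].max()
print(f"tremor peak: {peak_freq:.1f} Hz, magnitude: {magnitude:.2f} mm/s^2")
```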


Subjects
Tremor, Vitreoretinal Surgery, Hand, Humans, Wrist, Wrist Joint
7.
J Surg Res ; 254: 255-260, 2020 10.
Article in English | MEDLINE | ID: mdl-32480069

ABSTRACT

BACKGROUND: Although historically low, the proportion of female urology residents has exceeded 25% in recent years. Self-assessment is a widely used tool for tracking progress in medical education; however, the validity of its results and gender differences may influence interpretation. Simulation of surgical skills is increasingly common in modern residency training and standardizes certain objective tasks and skills. The objective of this study was to identify gender differences in the self-assessment of surgeons and trainees using simulated surgical skills. METHODS: Medical students, residents, and attending and retired surgeons completed simple interrupted suturing. Performance was self-rated using previously tested visual analog motion scales. Tasks were video recorded and rated by blinded expert surgeons using identical motion scales. Computer vision motion tracking software was used to objectively analyze the kinematics of the surgical tasks. RESULTS: The proportions of female (n = 17) and male (n = 20) participants did not differ significantly by level of training (P = 0.76). Five expert surgeons evaluated 84 video segments of simple interrupted suturing tasks (mean 3.0 segments per task per participant). Self-assessment correlated well overall with expert rating for motion economy (Pearson correlation coefficient 0.61, P < 0.001) and motion fluidity (0.55, P = 0.002). Women underrated their performance, based on the mean individual difference between self-assessment and expert assessment scores (Δ SAS-EAS), for both economy of motion (mean ± SEM -1.1 ± 0.38, P = 0.01) and fluidity of motion (-1.3 ± 0.39, P < 0.01). On the same measures, men tended to rate themselves in accordance with experts (-0.16 ± 0.36, P = 0.63; -0.09 ± 0.41, P = 0.82, respectively). Δ SAS-EAS did not differ significantly on any rating scale across levels of training, and expert ratings did not differ significantly by gender for any domain. CONCLUSIONS: Female surgeons and trainees underrate some technical skills on self-assessment when compared with expert ratings, whereas male surgeon and trainee self-ratings and expert ratings were similar. Further work is needed to determine whether these differences are accentuated across increasingly difficult tasks.


Subjects
Gender Identity, Self-Assessment, Students, Medical/psychology, Urologists/psychology, Clinical Competence, Female, Humans, Male, Suture Techniques
8.
Ann Surg ; 269(3): 574-581, 2019 03.
Article in English | MEDLINE | ID: mdl-28885509

ABSTRACT

OBJECTIVE: Computer vision was used to predict expert performance ratings from surgeon hand motions for tying and suturing tasks. SUMMARY BACKGROUND DATA: Existing methods, including the objective structured assessment of technical skills (OSATS), have proven reliable but do not readily discriminate at the task level. Computer vision may be used for evaluating distinct task performance throughout an operation. METHODS: Open surgeries were videoed and surgeon hands were tracked without using sensors or markers. An expert panel of 3 attending surgeons rated tying and suturing video clips on continuous scales from 0 to 10 along 3 task measures adapted from the broader OSATS: motion economy, fluidity of motion, and tissue handling. Empirical models were developed to predict the expert consensus ratings from the hand kinematic data records. RESULTS: Predicted versus panel ratings for suturing had slopes from 0.73 to 1 and intercepts from 0.36 to 1.54 (average R² = 0.81). Predicted versus panel ratings for tying had slopes from 0.39 to 0.88 and intercepts from 0.79 to 4.36 (average R² = 0.57). The mean squared error between predicted and expert ratings was consistently less than the mean squared difference between individual expert ratings and the eventual consensus ratings. CONCLUSIONS: The computer algorithm consistently predicted the panel ratings of individual tasks, and was more objective and reliable than individual assessment by surgical experts.
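
A minimal sketch of the modeling step: fit an empirical model from hand-kinematic features to the panel's consensus ratings and compare its error against an individual rater. The features, synthetic data, and linear form are assumptions; the paper's models are not specified here:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
# Hypothetical kinematic features per clip: path length, mean speed, idle fraction.
X = rng.uniform(0, 1, size=(60, 3))
consensus = 2 + 5 * X[:, 1] - 3 * X[:, 2] + rng.normal(0, 0.4, 60)  # synthetic panel ratings
individual = consensus + rng.normal(0, 1.0, 60)                     # one expert's ratings

model = LinearRegression().fit(X, consensus)
pred = model.predict(X)

mse_model = np.mean((pred - consensus) ** 2)
mse_expert = np.mean((individual - consensus) ** 2)
print(f"model MSE {mse_model:.2f} vs individual-expert MSE {mse_expert:.2f}")
```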


Subjects
Artificial Intelligence, Clinical Competence, Suture Techniques, Task Performance and Analysis, Algorithms, Biomechanical Phenomena, Female, Hand/physiology, Humans, Male, Models, Theoretical, Observer Variation, Reproducibility of Results, Video Recording
9.
Hum Factors ; 61(8): 1326-1339, 2019 12.
Article in English | MEDLINE | ID: mdl-31013463

ABSTRACT

OBJECTIVE: This study explores how common machine learning techniques can predict surgical maneuvers from a continuous video record of surgical benchtop simulations. BACKGROUND: Automatic computer vision recognition of surgical maneuvers (suturing, tying, and transition) could expedite video review and objective assessment of surgeries. METHOD: We recorded hand movements of 37 clinicians performing simple and running subcuticular suturing benchtop simulations, and applied three machine learning techniques (decision trees, random forests, and hidden Markov models) to classify surgical maneuvers every 2 s (60 frames) of video. RESULTS: Random forest predictions of surgical video correctly classified 74% of all video segments into suturing, tying, and transition states for a randomly selected test set. Hidden Markov model adjustments improved the random forest predictions to 79% for simple interrupted suturing on a subset of randomly selected participants. CONCLUSION: Random forest predictions aided by hidden Markov modeling provided the best prediction of surgical maneuvers. Training models across all users improved prediction accuracy by 10% compared with a random selection of participants. APPLICATION: Marker-less video hand tracking can predict surgical maneuvers from a continuous video record with accuracy similar to robot-assisted surgical platforms, and may enable more efficient video review of surgical procedures for training and coaching.
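
A sketch of the two-stage idea: a random forest classifies each 2 s window, then a hidden-Markov-style smoothing pass (Viterbi decoding) discourages implausible maneuver switches. The features, synthetic labels, and "sticky" transition matrix are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

STATES = ["suturing", "tying", "transition"]
rng = np.random.default_rng(2)

# Hypothetical per-window features (e.g., hand-speed statistics) and labels.
X = rng.uniform(0, 1, size=(300, 4))
y = rng.integers(0, 3, size=300)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
probs = rf.predict_proba(X[:50])                    # per-window class probabilities

# Viterbi decoding with an assumed "sticky" transition matrix.
trans = np.full((3, 3), 0.05) + np.eye(3) * 0.85    # rows sum to 1
log_t = np.log(trans)
log_p = np.log(probs + 1e-12)
delta = log_p[0].copy()
back = np.zeros((len(probs), 3), dtype=int)
for i in range(1, len(probs)):
    scores = delta[:, None] + log_t                 # scores[prev, cur]
    back[i] = scores.argmax(axis=0)
    delta = scores.max(axis=0) + log_p[i]
path = [int(delta.argmax())]
for i in range(len(probs) - 1, 0, -1):
    path.append(back[i][path[-1]])
smoothed = [STATES[s] for s in reversed(path)]
print(smoothed[:10])
```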


Subjects
Hand, Image Interpretation, Computer-Assisted, Machine Learning, Motor Skills, Pattern Recognition, Automated, Surgical Procedures, Operative, Humans, Video Recording
10.
Hum Factors ; 61(1): 64-77, 2019 02.
Article in English | MEDLINE | ID: mdl-30091947

ABSTRACT

OBJECTIVE: A method for automatically classifying lifting postures from simple features in video recordings was developed and tested. We explored whether an "elastic" rectangular bounding box, drawn tightly around the subject, can be used for classifying standing, stooping, and squatting at the lift origin and destination. BACKGROUND: Current marker-less video tracking methods depend on a priori skeletal human models, which are prone to error from poor illumination, obstructions, and difficulty placing cameras in the field. Robust computer vision algorithms based on spatiotemporal features were previously applied for evaluating repetitive motion tasks, exertion frequency, and duty cycle. METHODS: Mannequin poses were systematically generated using the Michigan 3DSSPP software for a wide range of hand locations and lifting postures. The stature-normalized height and width of a bounding box were measured in the sagittal plane and when rotated horizontally by 30°. After randomly ordering the data, a classification and regression tree algorithm was trained to classify the lifting postures. RESULTS: The resulting tree had four levels and four splits, misclassifying 0.36% of training-set cases. The algorithm was tested using 30 video clips of industrial lifting tasks, misclassifying 3.33% of test-set cases. The sensitivity and specificity, respectively, were 100.0% and 100.0% for squatting, 90.0% and 100.0% for stooping, and 100.0% and 95.0% for standing. CONCLUSIONS: The tree classification algorithm is capable of classifying lifting postures based only on the dimensions of bounding boxes. APPLICATIONS: It is anticipated that this practical algorithm can be implemented on handheld devices such as a smartphone, making it readily accessible to practitioners.
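
A minimal sketch of training a classification tree on bounding-box dimensions; the synthetic clusters below stand in for the 3DSSPP-generated mannequin poses and are not the study's data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
n = 300
# Stature-normalized bounding-box height and width (synthetic, per posture class).
height = np.concatenate([rng.normal(0.95, 0.03, n),   # stand
                         rng.normal(0.70, 0.05, n),   # stoop
                         rng.normal(0.55, 0.05, n)])  # squat
width = np.concatenate([rng.normal(0.35, 0.05, n),
                        rng.normal(0.60, 0.06, n),
                        rng.normal(0.50, 0.06, n)])
X = np.column_stack([height, width])
y = np.repeat(["stand", "stoop", "squat"], n)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
print(tree.predict([[0.93, 0.36], [0.56, 0.52]]))   # e.g. ['stand' 'squat']
```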


Subjects
Lifting, Posture/physiology, Task Performance and Analysis, Algorithms, Biomechanical Phenomena, Decision Trees, Humans, Manikins, Reproducibility of Results, Video Recording
11.
Ergonomics ; 62(8): 1043-1054, 2019 Aug.
Article in English | MEDLINE | ID: mdl-31092146

ABSTRACT

A widely used risk prediction tool, the revised NIOSH lifting equation (RNLE), provides the recommended weight limit (RWL), but is limited by analyst subjectivity, experience, and resources. This paper describes a robust, non-intrusive, straightforward approach to automatically extract the spatial and temporal factors necessary for the RNLE using a single video camera in the sagittal plane. The participant's silhouette is segmented using motion information, and the novel use of a ghosting effect provides accurate detection of lifting instances and prediction of hand and feet locations. Laboratory tests using 6 participants, each performing 36 lifts, showed that a nominal 640 pixel × 480 pixel 2D video, in comparison to 3D motion capture, provided RWL estimations within 0.2 kg (SD = 1.0 kg). The linear regression between the video and 3D tracking RWL was R² = 0.96 (slope = 1.0, intercept = 0.2 kg). Since low-definition video was used in order to synchronise with motion capture, better performance is anticipated using high-definition video. Practitioner's summary: An algorithm for automatically calculating the revised NIOSH lifting equation using a single video camera was evaluated in comparison to laboratory 3D motion capture. The results indicate that this method has suitable accuracy for practical use and may be particularly useful when multiple lifts are evaluated. Abbreviations: 2D: two-dimensional; 3D: three-dimensional; ACGIH: American Conference of Governmental Industrial Hygienists; AM: asymmetric multiplier; BOL: beginning of lift; CM: coupling multiplier; DM: distance multiplier; EOL: end of lift; FIRWL: frequency independent recommended weight limit; FM: frequency multiplier; H: horizontal distance; HM: horizontal multiplier; IMU: inertial measurement unit; ISO: International Organization for Standardization; LC: load constant; NIOSH: National Institute for Occupational Safety and Health; RGB: red, green, blue; RGB-D: red, green, blue - depth; RNLE: revised NIOSH lifting equation; RWL: recommended weight limit; SD: standard deviation; TLV: threshold limit value; VM: vertical multiplier; V: vertical distance.
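
For reference, the RNLE combines a load constant with six multipliers. A sketch using the standard metric formulation; the frequency and coupling multipliers are table lookups in the full method and are passed in directly here:

```python
def rnle_rwl(H, V, D, A, FM=1.0, CM=1.0):
    """Recommended weight limit (kg) from the revised NIOSH lifting equation.

    H: horizontal distance (cm), V: vertical height (cm),
    D: vertical travel distance (cm), A: asymmetry angle (deg).
    FM and CM come from the NIOSH frequency and coupling tables.
    """
    LC = 23.0                       # load constant, kg
    HM = min(1.0, 25.0 / H)         # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75)  # vertical multiplier
    DM = min(1.0, 0.82 + 4.5 / D)   # distance multiplier
    AM = 1.0 - 0.0032 * A           # asymmetric multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Example lift: hands 30 cm out, origin at 50 cm, 40 cm of travel, 20 deg twist.
print(f"RWL = {rnle_rwl(H=30, V=50, D=40, A=20):.1f} kg")  # ~15.5 kg
```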


Subjects
Ergonomics/methods, Lifting, Monitoring, Physiologic/methods, Occupational Health, Video Recording/methods, Adult, Female, Humans, Linear Models, Male, National Institute for Occupational Safety and Health, U.S., Risk Assessment, United States
12.
Hum Factors ; 59(5): 844-860, 2017 08.
Article in English | MEDLINE | ID: mdl-28704631

ABSTRACT

Objective: This research considers how driver movements in video clips of naturalistic driving are related to observer subjective ratings of distraction and engagement behaviors. Background: Naturalistic driving video provides a unique window into driver behavior unmatched by crash data, roadside observations, or driving simulator experiments. However, manually coding many thousands of hours of video is impractical. An objective method is needed to identify driver behaviors suggestive of distracted or disengaged driving for automated computer vision analysis to access this rich source of data. Method: Visual analog scales ranging from 0 to 10 were created, and observers rated their perception of driver distraction and engagement behaviors from selected naturalistic driving videos. Driver kinematics time series were extracted from frame-by-frame coding of driver motions, including head rotation, head flexion/extension, and hands on/off the steering wheel. Results: The ratings were consistent among participants. A statistical model predicting average ratings from the kinematic features accounted for 54% of distraction rating variance and 50% of engagement rating variance. Conclusion: Rated distraction behavior was positively related to the magnitude of head rotation and the fraction of time the hands were off the wheel. Rated engagement behavior was positively related to the variation of head rotation and negatively related to the fraction of time the hands were off the wheel. Application: If automated computer vision can code simple kinematic features, such as driver head and hand movements, then large-volume naturalistic driving videos could be automatically analyzed to identify instances when drivers were distracted or disengaged.
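
A sketch of deriving the kinematic features named above from frame-by-frame coding; the data layout (a per-frame record of head rotation and a hands-on-wheel flag) is an assumption, not the study's format:

```python
import numpy as np

rng = np.random.default_rng(4)
head_rot = rng.normal(0, 8, 900)            # coded head rotation per frame, degrees
hands_on = rng.random(900) > 0.1            # True when hands are on the wheel

features = {
    "head_rot_magnitude": np.mean(np.abs(head_rot)),  # larger -> rated more distracted
    "head_rot_variation": np.std(head_rot),           # larger -> rated more engaged
    "hands_off_fraction": 1 - np.mean(hands_on),      # related to both rating scales
}
print(features)
```

A linear model over features like these accounted for roughly half of the rating variance in the study.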


Subjects
Attention/physiology, Automobile Driving, Motor Activity/physiology, Psychometrics/methods, Psychomotor Performance/physiology, Adult, Biomechanical Phenomena, Humans
13.
Ergonomics ; 60(12): 1730-1738, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28640656

ABSTRACT

Two computer vision algorithms were developed to automatically estimate exertion time, duty cycle (DC) and hand activity level (HAL) from videos of workers performing 50 industrial tasks. The average DC difference between manual frame-by-frame analysis and the computer vision DC was -5.8% for the Decision Tree (DT) algorithm and 1.4% for the Feature Vector Training (FVT) algorithm. The average HAL difference was 0.5 for the DT algorithm and 0.3 for the FVT algorithm. A sensitivity analysis, conducted to examine the influence of DC deviations on HAL, found that HAL remained unaffected when DC error was less than 5%, and that a DC error of less than 10% affects HAL by less than 0.5, which is negligible. Automatic computer vision HAL estimates were therefore comparable to manual frame-by-frame estimates. Practitioner Summary: Computer vision was used to automatically estimate exertion time, duty cycle and hand activity level from videos of workers performing industrial tasks.
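A sketch of how exertion time, duty cycle, and exertion count follow from a per-frame exertion classification; the thresholded synthetic signal stands in for either algorithm's output and is not the published method:

```python
import numpy as np

fps = 30.0
speed = np.abs(np.sin(np.linspace(0, 20 * np.pi, 3000)))   # synthetic hand-speed trace
exerting = speed > 0.5                                     # assumed per-frame exertion state

exertion_time = np.sum(exerting) / fps                     # seconds spent exerting
duty_cycle = 100.0 * np.mean(exerting)                     # percent of total time
n_exertions = np.sum(np.diff(exerting.astype(int)) == 1)   # rising edges = discrete exertions

print(f"exertion time: {exertion_time:.1f} s, DC: {duty_cycle:.1f}%, count: {n_exertions}")
```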


Subjects
Algorithms, Hand/physiology, Physical Exertion, Time and Motion Studies, Computers, Humans, Video Recording
14.
Hum Factors ; 58(3): 427-40, 2016 May.
Article in English | MEDLINE | ID: mdl-26546381

ABSTRACT

OBJECTIVE: This study investigates using marker-less video tracking to evaluate hands-on clinical skills during simulated clinical breast examinations (CBEs). BACKGROUND: There are currently no standardized and widely accepted CBE screening techniques. METHODS: Experienced physicians attending a national conference conducted simulated CBEs presenting different pathologies with distinct tumorous lesions. Single-hand exam motion was recorded and analyzed using marker-less video tracking. Four kinematic measures were developed to describe temporal (time pressing and time searching) and spatial (area covered and distance explored) patterns. RESULTS: Mean time pressing, area covered, and distance explored varied across the simulated lesions. Exams were objectively categorized as sporadic, localized, thorough, or efficient for both temporal and spatial categories based on spatiotemporal characteristics. The majority of trials were temporally or spatially thorough (78% and 91%), exhibiting proportionally greater time pressing and time searching (temporally thorough) and greater area probed with greater distance explored (spatially thorough). More efficient exams exhibited proportionally more time pressing with less time searching (temporally efficient) and greater area probed with less distance explored (spatially efficient). Just two (5.9%) of the trials exhibited both high temporal and spatial efficiency. CONCLUSIONS: Marker-less video tracking was used to discriminate different examination techniques and measure when an exam changes from general searching to specific probing. The majority of participants exhibited more thorough than efficient patterns. APPLICATION: Marker-less video kinematic tracking may be useful for quantifying clinical skills for training and assessment.
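
A sketch of two spatial measures from a tracked 2D hand trajectory: distance explored as cumulative path length, and area covered via a convex hull. Treating the covered region as the hull of visited points is an assumption, not necessarily the paper's definition:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(6)
traj = np.cumsum(rng.normal(0, 2.0, size=(500, 2)), axis=0)  # synthetic hand path, mm

distance_explored = np.sum(np.linalg.norm(np.diff(traj, axis=0), axis=1))
area_covered = ConvexHull(traj).volume   # for 2D points, .volume is the enclosed area

print(f"distance explored: {distance_explored:.0f} mm, area covered: {area_covered:.0f} mm^2")
```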


Subjects
Breast/diagnostic imaging, Image Processing, Computer-Assisted/methods, Physical Examination/methods, Video Recording/methods, Algorithms, Computer Simulation, Female, Humans, Models, Theoretical
15.
Ergonomics ; 59(11): 1514-1525, 2016 Nov.
Article in English | MEDLINE | ID: mdl-26848051

ABSTRACT

A marker-less 2D video algorithm measured hand kinematics (location, velocity and acceleration) in a paced repetitive laboratory task for varying hand activity levels (HAL). The decision tree (DT) algorithm identified the trajectory of the hand using spatiotemporal relationships during the exertion and rest states. The feature vector training (FVT) method utilised a k-nearest neighbour classifier, trained using a set of representative samples or the first cycle. The average duty cycle (DC) error using the DT algorithm was 2.7%. The FVT algorithm had an average error of 3.3% when trained using the first-cycle sample of each repetitive task, and 2.8% when trained using several representative repetitive cycles. Error for HAL was 0.1 for both algorithms, which was considered negligible. Elemental times, stratified by task and subject, were not statistically different from ground truth at the p < 0.05 level. Both algorithms performed well for automatically measuring elapsed time, DC and HAL. Practitioner Summary: A completely automated approach for measuring elapsed time and DC was developed using marker-less video tracking and the tracked kinematic record. Such an approach is automatic, repeatable, objective and unobtrusive, and is suitable for evaluating repetitive exertions, muscle fatigue and manual tasks.
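
A sketch of the FVT idea: train a k-nearest-neighbour classifier on feature vectors from one labeled cycle, then label the remaining frames as exertion or rest. The feature choice, window length, and synthetic signal are assumptions:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)

def frame_features(speed: np.ndarray, win: int = 5) -> np.ndarray:
    """Per-frame feature vectors: windowed mean and std of hand speed."""
    pad = np.pad(speed, win // 2, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(pad, win)
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

speed = np.abs(np.sin(np.linspace(0, 8 * np.pi, 800))) + rng.normal(0, 0.05, 800)
X = frame_features(speed)
truth = (speed > 0.5).astype(int)        # 1 = exertion, 0 = rest (synthetic ground truth)

first_cycle = slice(0, 200)              # train on the first labeled cycle only
knn = KNeighborsClassifier(n_neighbors=5).fit(X[first_cycle], truth[first_cycle])
pred = knn.predict(X)
print(f"agreement with ground truth: {np.mean(pred == truth):.1%}")
```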


Subjects
Algorithms, Hand/physiology, Image Processing, Computer-Assisted, Task Performance and Analysis, Video Recording, Acceleration, Biomechanical Phenomena, Female, Humans, Male, Movement, Muscle Fatigue
16.
Ergonomics ; 58(2): 173-83, 2015.
Article in English | MEDLINE | ID: mdl-25343340

ABSTRACT

A new equation for predicting the hand activity level (HAL) used in the American Conference of Governmental Industrial Hygienists threshold limit value® (TLV®) was based on exertion frequency (F) and percentage duty cycle (D). The TLV® includes a table for estimating HAL from F and D originating from data in Latko et al. (Latko WA, Armstrong TJ, Foulke JA, Herrin GD, Rabourn RA, Ulin SS. Development and evaluation of an observational method for assessing repetition in hand tasks. American Industrial Hygiene Association Journal, 58(4):278-285, 1997) and post hoc adjustments that include extrapolations outside of the data range. Multimedia video task analysis determined D for two additional jobs from Latko's study not in the original data-set, and a new nonlinear regression equation was developed to better fit the data and create a more accurate table. The equation, HAL = 6.56 ln D [F^1.31 / (1 + 3.18 F^1.31)], generally matches the TLV® HAL lookup table and is a substantial improvement over the linear model, particularly for jobs with F > 1.25 Hz and D > 60%. The equation more closely fits the data and applies the TLV® using a continuous function.
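
A sketch evaluating the equation above over a few F-D combinations, assuming D is entered as a percentage (consistent with "percentage duty cycle"):

```python
import math

def hal(F: float, D: float) -> float:
    """HAL from exertion frequency F (Hz) and duty cycle D (%)."""
    f_term = F**1.31
    return 6.56 * math.log(D) * f_term / (1 + 3.18 * f_term)

# A few lookup-table-style cells.
for F, D in [(0.5, 20), (1.25, 60), (2.0, 80)]:
    print(f"F = {F} Hz, D = {D}% -> HAL = {hal(F, D):.1f}")
```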


Subjects
Hand/physiology, Physical Exertion, Task Performance and Analysis, Work/physiology, Biomechanical Phenomena, Humans, Movement, Occupational Health, Regression Analysis, Threshold Limit Values
17.
Ergonomics ; 58(12): 2057-66, 2015.
Article in English | MEDLINE | ID: mdl-25978764

ABSTRACT

Marker-less 2D video tracking was studied as a practical means to measure upper limb kinematics for ergonomics evaluations. Hand activity level (HAL) can be estimated from speed and duty cycle. Accuracy was measured using a cross-correlation template-matching algorithm for tracking a region of interest on the upper extremities. Ten participants performed a paced load transfer task while varying HAL (2, 4, and 5) and load (2.2 N, 8.9 N and 17.8 N). Speed and acceleration measured from 2D video were compared against ground truth measurements using 3D infrared motion capture. The median absolute difference between 2D video and 3D motion capture was 86.5 mm/s for speed and 591 mm/s² for acceleration, and less than 93 mm/s for speed and 656 mm/s² for acceleration when camera pan and tilt were within ±30 degrees. Single-camera 2D video had sufficient accuracy (<100 mm/s) for evaluating HAL. Practitioner Summary: This study demonstrated that 2D video tracking had sufficient accuracy to measure HAL for ascertaining the American Conference of Governmental Industrial Hygienists Threshold Limit Value® for repetitive motion when the camera is located within ±30 degrees off the plane of motion, when compared against 3D motion capture for a simulated repetitive motion task.
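
A sketch of the comparison: differentiate a tracked position record to get speed, then summarize agreement with the reference system by the median absolute difference. The smoothing-free differentiation and synthetic tracks are simplifying assumptions:

```python
import numpy as np

fps = 30.0
rng = np.random.default_rng(8)
truth_pos = np.cumsum(rng.normal(0, 3.0, size=(300, 2)), axis=0)   # mocap ground truth, mm
video_pos = truth_pos + rng.normal(0, 1.0, size=truth_pos.shape)   # noisy 2D-video track

def speed(pos: np.ndarray) -> np.ndarray:
    """Frame-to-frame speed (mm/s) from a position record."""
    return np.linalg.norm(np.diff(pos, axis=0), axis=1) * fps

mad = np.median(np.abs(speed(video_pos) - speed(truth_pos)))
print(f"median absolute speed difference: {mad:.1f} mm/s")
```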


Subjects
Acceleration, Algorithms, Movement, Occupational Exposure/analysis, Upper Extremity/physiology, Video Recording/methods, Adolescent, Adult, Biomechanical Phenomena, Ergonomics, Female, Humans, Male, Musculoskeletal Diseases, Occupational Diseases, Young Adult
18.
Ergonomics ; 58(2): 184-94, 2015.
Article in English | MEDLINE | ID: mdl-25343278

ABSTRACT

An equation was developed for estimating hand activity level (HAL) directly from tracked root mean square (RMS) hand speed (S) and duty cycle (D). Table lookup, an equation, or marker-less video tracking can estimate HAL from motion/exertion frequency (F) and D. Since automatically estimating F is sometimes complex, HAL may be more readily assessed using S. Hands from 33 videos originally used for the HAL rating were tracked to estimate S, scaled relative to hand breadth (HB), and single-frame analysis was used to measure D. Since HBs were unknown, a Monte Carlo method was employed to iteratively estimate the regression coefficients from US Army anthropometry survey data. The equation, HAL = 10[e^(-15.87 + 0.02D + 2.25 ln S) / (1 + e^(-15.87 + 0.02D + 2.25 ln S))], R² = 0.97, had a residual range of ±0.5 HAL. The S equation fit the Latko et al. (1997) data better and predicted independently observed HAL values (Harris 2011) more accurately (MSE = 0.16) than the F equation (MSE = 1.28).
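
A sketch of the logistic form above. The inputs are purely illustrative; realistic magnitudes of S depend on the paper's hand-breadth scaling, and D is percentage duty cycle:

```python
import math

def hal_from_speed_duty(S: float, D: float) -> float:
    """HAL from RMS hand speed S (hand-breadth scaled) and duty cycle D (%)."""
    z = -15.87 + 0.02 * D + 2.25 * math.log(S)
    return 10 * math.exp(z) / (1 + math.exp(z))

# Illustrative values only, not calibrated measurements.
print(f"HAL = {hal_from_speed_duty(S=500, D=50):.1f}")
```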


Subjects
Hand/physiology, Physical Exertion, Task Performance and Analysis, Work/physiology, Anthropometry/methods, Biomechanical Phenomena, Humans, Military Personnel, Movement, Occupational Health, Regression Analysis, Threshold Limit Values, United States
19.
Ergonomics ; 55(5): 526-37, 2012.
Article in English | MEDLINE | ID: mdl-22506483

ABSTRACT

It is not well understood how people perceive the difficulty of performing brain-computer interface (BCI) tasks, which specific aspects of mental workload contribute the most, and whether there is a difference in perceived workload between participants who are able-bodied and disabled. This study evaluated mental workload using the NASA Task Load Index (TLX), a multi-dimensional rating procedure with six subscales: Mental Demands, Physical Demands, Temporal Demands, Performance, Effort, and Frustration. Able-bodied and motor-disabled participants completed the survey after performing EEG-based BCI Fitts' law target acquisition and phrase spelling tasks. The NASA-TLX scores were similar for able-bodied and disabled participants. For example, overall workload scores (range 0-100) for 1D horizontal tasks were 48.5 (SD = 17.7) and 46.6 (SD = 10.3), respectively. The TLX can be used to inform the design of BCIs that will have greater usability by evaluating subjective workload between BCI tasks, participant groups, and control modalities. PRACTITIONER SUMMARY: Mental workload of brain-computer interfaces (BCI) can be evaluated with the NASA Task Load Index (TLX). The TLX is an effective tool for comparing subjective workload between BCI tasks, participant groups (able-bodied and disabled), and control modalities. The data can inform the design of BCIs that will have greater usability.
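
A sketch of computing an overall NASA-TLX workload score. The raw TLX (unweighted mean of the six subscales, each rated 0-100) is shown, a common simplification of the original pairwise-weighted procedure; the example ratings are invented:

```python
SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings: dict[str, float]) -> float:
    """Unweighted ('raw') TLX: mean of the six subscale ratings (0-100)."""
    return sum(ratings[s] for s in SUBSCALES) / len(SUBSCALES)

example = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 35, "effort": 60, "frustration": 45}
print(f"overall workload: {raw_tlx(example):.1f}")  # -> 47.5
```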


Subjects
Brain/physiology, Communication Aids for Disabled, Education, User-Computer Interface, Workload/psychology, Adult, Aged, Child, Electroencephalography, Female, Humans, Male, Mental Fatigue, Middle Aged, Neuromuscular Diseases, Young Adult
20.
IEEE Trans Hum Mach Syst ; 51(6): 734-739, 2021 Dec.
Article in English | MEDLINE | ID: mdl-35677387

ABSTRACT

A robust computer vision-based approach is developed to estimate the load asymmetry angle defined in the revised NIOSH lifting equation (RNLE). The angle of asymmetry enables the computation of a recommended weight limit for repetitive lifting operations in a workplace to prevent lower back injuries. The open-source package OpenPose is applied to estimate the 2D locations of the worker's skeletal joints from two synchronous videos. Combining these joint location estimates, a computer vision correspondence and depth estimation method is developed to estimate the 3D coordinates of skeletal joints during lifting. The angle of asymmetry is then deduced from a subset of these 3D positions. Error analysis reveals unreliable angle estimates due to occlusions of the upper limbs. A robust angle estimation method that mitigates this challenge is developed: unreliable angle estimates are flagged based on the average confidence level of the 2D joint estimates provided by OpenPose. An optimal threshold is derived that balances the percentage variance reduction of the estimation error against the percentage of angle estimates flagged. When tested with 360 lifting instances in a NIOSH-provided dataset, the method reduced the standard deviation of the angle estimation error from 10.13° to 4.99°. To realize this error variance reduction, 34% of estimated angles are flagged and require further validation.
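
A sketch of the flagging rule described: compute the mean OpenPose keypoint confidence over the joints used and flag the lift when it falls below a tuned threshold. The joint subset, the BODY_25 index choices, and the threshold value are assumptions, not the paper's derived optimum:

```python
import numpy as np

def flag_unreliable(keypoints: np.ndarray, joint_idx: list, threshold: float = 0.4) -> bool:
    """keypoints: (n_joints, 3) array of OpenPose (x, y, confidence) estimates.

    Returns True when the mean confidence over the joints used for the
    asymmetry angle falls below the threshold, so the estimate needs review.
    """
    return float(np.mean(keypoints[joint_idx, 2])) < threshold

rng = np.random.default_rng(9)
kp = np.column_stack([rng.uniform(0, 640, 25), rng.uniform(0, 480, 25),
                      rng.uniform(0.2, 0.9, 25)])        # synthetic 25-joint estimate
shoulders_wrists_hips = [2, 5, 4, 7, 9, 12]              # assumed OpenPose BODY_25 indices
print(flag_unreliable(kp, shoulders_wrists_hips))
```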
