Results 1 - 20 of 1,860
2.
Front Psychol ; 15: 1465841, 2024.
Article in English | MEDLINE | ID: mdl-39220393

ABSTRACT

[This corrects the article DOI: 10.3389/fpsyg.2023.1139373.].

3.
Appl Ergon ; 122: 104373, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39232339

ABSTRACT

The metro is susceptible to disruption risks and requires a system response capability to build resilience for managing disruptions. Achieving such a resilient response state requires readiness on both the technology side, e.g., utilizing digital technologies (DTs) to monitor system components, and the human factors side, e.g., fostering positive human coping capabilities; however, these two sides are usually considered independently, without sufficient integration. This paper aims to develop and empirically test a model in which monitoring-enabled DTs, employees' reactions, and their positive capabilities are considered simultaneously in terms of their interplay and impact on system response capability. The results showed that while DTs for monitoring physical components enhanced perceived management commitment and fostered collective efficacy, DTs for monitoring human components increased psychological strain and inhibited improvisation capability, creating a "double-edged sword" effect on system response capability. Additionally, explicit management commitment buffered the adverse effect of DT-induced psychological strain on individual improvisation.

4.
J Safety Res ; 90: 254-271, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39251284

ABSTRACT

INTRODUCTION: Industry 4.0 has brought new paradigms to businesses based on high levels of automation, interconnectivity, and the use of technologies. This new context has an impact on the work environment and on workers. Nevertheless, these impacts are still inconclusive and controversial, requiring new investigative perspectives. This study aimed to investigate the requirements imposed, the risk factors identified, and the adverse effects on workers caused by the characteristics of I4.0. METHOD: The methodology was based on a systematic literature review following the PRISMA protocol, in which 30 articles were found eligible. A descriptive and bibliometric analysis of these studies was performed. RESULTS: The results identified the main topics that emerged with implications for workers' Occupational Health and Safety (OHS) and divided them into categories. The requirements relate mainly to cognitive, organizational, and technological demands. The most significant risk factors were psychosocial, but organizational, technological, and occupational factors were also identified. The adverse effects cited were categorized as psychological, cognitive, physical, and organizational; stress was the most cited effect. An explanatory theoretical model was proposed to represent the pathway of causal relations from the requirements and risk factors to the effects caused by I4.0. CONCLUSIONS AND PRACTICAL APPLICATIONS: This review shows how complex the relationships are between the elements of Industry 4.0 (requirements, risk factors, and effects) and human factors. It also suggests a pathway for how these relationships occur, bridging the gap left by the limited number of studies connecting these topics. These results can help organizational managers understand the impacts of I4.0 on workers' safety and health.


Subject(s)
Occupational Health , Humans , Industry , Risk Factors , Workplace , Safety Management
5.
BMC Public Health ; 24(1): 2458, 2024 Sep 10.
Article in English | MEDLINE | ID: mdl-39256672

ABSTRACT

BACKGROUND: While Human Factors (HF) methods have been applied to the design of decision support systems (DSS) to aid clinical decision-making, the role of HF in improving decision support for population health outcomes is less well understood. We sought to comprehensively understand how HF methods have been used in designing digital population health DSS. MATERIALS AND METHODS: We searched English-language documents published in health sciences and engineering databases (Medline, Embase, PsycINFO, Scopus, Compendex, Inspec, IEEE Xplore) between January 1990 and September 2023 describing the development, validation, or application of HF principles to decision support tools in population health. RESULTS: We identified 21,581 unique records and included 153 studies for data extraction and synthesis. We included research articles that had a target end user in population health and that used HF methods. HF methods were applied throughout the design lifecycle. Users were engaged early in the lifecycle, during the needs assessment, requirements gathering, and design and prototyping phases, with qualitative methods such as interviews. In later stages, during user testing and evaluation and post-deployment evaluation, quantitative methods were used more frequently. However, only three studies used an experimental framework or conducted A/B testing. CONCLUSIONS: While HF methods have been applied in a variety of contexts in the design of data-driven DSSs for population health, few studies have used them to their full potential. We offer recommendations for how HF can be leveraged throughout the design lifecycle. Most crucially, system designers should engage with users early on and throughout the design process. Our findings can help stakeholders further empower public health systems.


Subject(s)
Ergonomics , Population Health , Humans , Decision Support Systems, Clinical , Software Design
6.
Accid Anal Prev ; 207: 107758, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39222546

ABSTRACT

The shared control authority between drivers and the steering system may lead to human-machine conflicts, threatening both the traffic safety and the driving experience of collaborative driving systems. Previous evaluation methods relied on subjective judgment and a single set of evaluation criteria, making it challenging to obtain a comprehensive and objective assessment. Therefore, we propose a novel two-phase method that integrates eye-tracking data, electromyography signals, and vehicle dynamics features to evaluate human-machine conflicts. In the first phase, driving simulation experiments are used to analyze the correlations between subjective driving experience and objective indices, and strongly correlated indices are screened as effective criteria. In the second phase, the indices are integrated through sparse principal component analysis (SPCA) to formulate a comprehensive objective measure. Subjective driving experience collected from post-drive questionnaires was used to examine its effectiveness. The results show that the error between the two sets of data is less than 7%, demonstrating the effectiveness of the proposed method. This study provides a low-cost, high-efficiency method for evaluating human-machine conflicts, contributing to the development of safer and more harmonious human-machine collaborative driving.


Subject(s)
Automobile Driving , Electromyography , Man-Machine Systems , Humans , Automobile Driving/psychology , Male , Female , Adult , Principal Component Analysis , Eye-Tracking Technology , Computer Simulation , Young Adult , Surveys and Questionnaires
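The second-phase integration described in the abstract above can be sketched with scikit-learn's sparse PCA. This is a minimal illustration, not the study's pipeline: the index names and data below are invented stand-ins for the screened eye-tracking, EMG, and vehicle-dynamics indices.

```python
# Sketch: fusing standardized driving indices into one composite measure
# via sparse PCA. The three "indices" here are random placeholders for
# illustration only (e.g., fixation duration, EMG amplitude, torque variance).
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import SparsePCA

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))  # 40 simulated trials x 3 hypothetical indices

X_std = StandardScaler().fit_transform(X)          # standardize each index
spca = SparsePCA(n_components=1, alpha=1.0, random_state=0)
composite = spca.fit_transform(X_std)              # one conflict score per trial

print(composite.shape)        # (40, 1): composite measure per trial
print(spca.components_.shape) # (1, 3): sparse loadings over the indices
```

The sparsity penalty (`alpha`) drives weak loadings to zero, which is why SPCA is attractive when only a subset of the screened indices should contribute to the composite score.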
7.
Ergonomics ; : 1-13, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39154216

ABSTRACT

This study proposes a generic approach for creating human factors-based assessment tools to enhance operational system quality by reducing errors. The approach was driven by experiences and lessons learned in creating the warehouse error prevention (WEP) tool and other system engineering tools. The generic approach consists of 1) identifying tool objectives, 2) identifying system failure modes, 3) specifying design-related quality risk factors for each failure mode, 4) designing the tool, 5) conducting user evaluations, and 6) validating the tool. The WEP tool exemplifies this approach and identifies human factors related to design flaws associated with quality risk factors in warehouse operations. The WEP tool can be used at the initial stage of design or later for process improvement and training. While this process can be adapted for various contexts, further study is necessary to support the teams in creating tools to identify design-related human factors contributing to quality issues.


This paper describes a generic approach to creating human factors-based quality assessment tools. The approach is illustrated with the Warehouse Error Prevention (WEP) tool, which is designed to help users identify HF-related quality risk factors in warehouse system designs (available for free: Setayesh et al. 2022b).

8.
Front Robot AI ; 11: 1375490, 2024.
Article in English | MEDLINE | ID: mdl-39104806

ABSTRACT

Safety-critical domains often employ autonomous agents that follow a sequential decision-making setup, whereby the agent follows a policy to dictate the appropriate action at each step. AI practitioners often employ reinforcement learning algorithms to allow an agent to find the best policy. However, sequential systems often lack clear and immediate signs of wrong actions, with consequences visible only in hindsight, making it difficult for humans to understand system failure. In reinforcement learning, this is referred to as the credit assignment problem. To collaborate effectively with an autonomous system, particularly in a safety-critical setting, explanations should enable a user to better understand the agent's policy and predict system behavior, so that users are cognizant of potential failures and these failures can be diagnosed and mitigated. However, humans are diverse and have innate biases or preferences that may enhance or impair the utility of a policy explanation for a sequential agent. Therefore, in this paper, we designed and conducted a human-subjects experiment to identify the factors that influence the perceived usability and objective usefulness of policy explanations for reinforcement learning agents in a sequential setting. Our study had two factors: the modality of the policy explanation shown to the user (Tree, Text, Modified Text, and Programs) and the "first impression" of the agent, i.e., whether the user saw the agent succeed or fail in the introductory calibration video. Our findings characterize a preference-performance tradeoff: participants perceived language-based policy explanations to be significantly more usable, but they were better able to objectively predict the agent's behavior when provided an explanation in the form of a decision tree.
Our results demonstrate that user-specific factors, such as computer science experience (p < 0.05), and situational factors, such as watching the agent crash (p < 0.05), can significantly impact the perception and usefulness of the explanation. This research provides key insights for alleviating prevalent issues regarding inappropriate compliance and reliance, which are especially detrimental in safety-critical settings, and offers a path forward for XAI developers in future work on policy explanations.

9.
Resusc Plus ; 19: 100721, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39108281

ABSTRACT

Background: A new cardiopulmonary resuscitation technique, chest compressions with sustained inflation (CC + SI), might be an alternative to both the neonatal [3:1 compressions to ventilations (3:1 C:V)] and paediatric [chest compressions with asynchronous ventilation (CCaV)] approaches. The human factors associated with this technique are unknown. We aimed to compare the physical, cognitive, and team-based human factors of CC + SI with those of standard CPR (3:1 C:V or CCaV). Methods: Randomized crossover simulation study including 40 participants on 20 two-person teams. Workload [National Aeronautics and Space Administration Task Load Index (NASA-TLX)], crisis resource management (CRM) skills [Ottawa Global Rating Scale (OGRS)], and debrief analyses were compared. Results: There was no difference in paired NASA-TLX scores for any dimension between CC + SI and standard CPR, adjusting for CPR order. There was no difference in CRM scores for CC + SI compared to standard CPR. Participants were less familiar with CC + SI, although many found it simpler to perform, better for transitions and switching roles, and better for communication. Conclusions: CC + SI was no more physically or cognitively demanding than standard CPR (NASA-TLX and participant debrief), and team performance was no different between the two techniques (OGRS score).
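The workload comparison above rests on NASA-TLX scoring. A common simplified variant, Raw TLX, averages the six dimension ratings without the pairwise weighting step; the sketch below shows that variant with invented ratings, and is not the study's scoring procedure.

```python
# Sketch: Raw TLX workload score = unweighted mean of the six NASA-TLX
# dimensions, each rated 0-100. The ratings below are invented examples.
TLX_DIMENSIONS = ["mental", "physical", "temporal",
                  "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """ratings: dict mapping dimension name -> 0-100 rating."""
    return sum(ratings[d] for d in TLX_DIMENSIONS) / len(TLX_DIMENSIONS)

workload = raw_tlx({"mental": 60, "physical": 45, "temporal": 70,
                    "performance": 30, "effort": 55, "frustration": 40})
print(workload)  # mean of the six ratings, on a 0-100 scale
```

The full NASA-TLX additionally weights each dimension by the number of times a participant picks it in 15 pairwise comparisons; studies comparing conditions dimension-by-dimension, as here, often report the unweighted subscale scores directly.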

10.
BMJ Qual Saf ; 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39179376

ABSTRACT

OBJECTIVE: To develop and evaluate measures of patient work system factors in medication management that may be modifiable for improvement during the care transition from hospital to home among older adults. DESIGN, SETTINGS AND PARTICIPANTS: Measures were developed and evaluated in a multisite prospective observational study of older adults (≥65 years) discharged home from medical units of two US hospitals from August 2018 to July 2019. MAIN MEASURES: Patient work system factors for managing medications were assessed during hospital stays using six capacity indicators, four task indicators and three medication management practice indicators. Main outcomes were assessed at participants' homes approximately a week after discharge for (1) Medication discrepancies between the medications taken at home and those listed in the medical record, and (2) Patient experiences with new medication regimens. RESULTS: 274 of the 376 recruited participants completed home assessment (72.8%). Among capacity indicators, most older adults (80.6%) managed medications during transition without a caregiver, 41.2% expressed low self-efficacy in managing medications and 18.3% were not able to complete basic medication administration tasks. Among task indicators, more than half (57.7%) had more than 10 discharge medications and most (94.7%) had medication regimen changes. Having more than 10 discharge medications, more than two medication regimen changes and low self-efficacy in medication management increased the risk of feeling overwhelmed (OR 2.63, 95% CI 1.08 to 6.38, OR 3.16, 95% CI 1.29 to 7.74 and OR 2.56, 95% CI 1.25 to 5.26, respectively). 
Low transportation independence, not having a home caregiver, low medication administration skills and more than 10 discharge medications increased the risk of medication discrepancies (incidence rate ratio 1.39, 95% CI 1.01 to 1.91, incidence rate ratio 1.73, 95% CI 1.13 to 2.66, incidence rate ratio 1.99, 95% CI 1.37 to 2.89 and incidence rate ratio 1.91, 95% CI 1.24 to 2.93, respectively). CONCLUSIONS: Patient work system factors could be assessed before discharge with indicators for increased risk of poor patient experience and medication discrepancies during older adults' care transition from hospital to home.
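The abstract above reports odds ratios with 95% confidence intervals. As a generic illustration of how such an estimate is computed from a 2x2 table, the sketch below uses the standard Wald interval on the log odds ratio; the counts are made up and do not come from the study.

```python
# Sketch: odds ratio with a 95% Wald CI from a 2x2 table.
# a/b = exposed with/without outcome; c/d = unexposed with/without outcome.
# The counts passed in below are invented for illustration.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 40, 20, 70)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An interval whose lower bound exceeds 1, as in the paper's reported ORs, indicates a statistically significant increase in the odds of the outcome at the 5% level.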

11.
Heliyon ; 10(12): e32675, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39183871

ABSTRACT

Research facilities such as spallation sources and synchrotrons generate radiation for use in atomic-level or molecular-scale experiments. These facilities can be viewed as complex safety-critical systems. An important aspect of the safety management of such systems is the short safety education and training programme the users are required to undergo in order to gain facility access. As research on the topic is limited, this study aimed to increase the knowledge about current education design and practice using the perspectives of safety science and pedagogy. Study objectives were to identify preconditions that impact the safety education design, to describe current design and practice of the safety education, and to identify weaknesses and possibilities for improvement. Site visits with a total of 20 interviews were performed at three research facilities. The results show the need for sufficient resources to maintain learning activities for users, provide pedagogical continuing education for educators, and maintain safety culture-enhancing activities to meet the challenges of having large numbers of short-term facility users. Increased focus should be placed on safety-related competence needs and the mapping of these to match the competence of individual users. New thinking and innovation can benefit the design and provision of such education activities, based on both socio-technical system and system safety perspectives.

12.
Expert Opin Drug Deliv ; : 1-16, 2024 Aug 29.
Article in English | MEDLINE | ID: mdl-39210626

ABSTRACT

BACKGROUND: The administration of repository corticotropin injection (Acthar Gel) via a single-dose prefilled injector (SelfJect) is intended to provide a simple, ergonomic alternative to traditional injection. Iterative human factors (HF) studies were conducted to identify potential use deviations and ensure appropriate device use. RESEARCH DESIGN AND METHODS: This article presents seven formative studies, a validation study (with prior pilot validation studies), and a supplemental validation study with participants including lay users, patients, caregivers, and healthcare providers. Participant interactions with SelfJect and the user interface were assessed. Use deviations, user preferences, and participants' ability to successfully complete tasks were evaluated to generate modifications to the device and user interface. RESULTS: In the validation study, 91% of participants successfully administered their first injection. Use errors were rare with simulated-use (6.9%) and knowledge-based (1.6%) testing. Use deviations were commonly attributed to experimental artifact or information oversight, and device warming had the most use errors (49% of participants), even with extensive testing and adjustments to the user interface. CONCLUSIONS: SelfJect was able to be used in a safe and effective manner by the intended users. Iterative HF studies informed the mitigation of use-related risks to reduce the occurrence of use deviations during simulated use.

13.
J Forensic Sci ; 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39185731

ABSTRACT

This study examined how variations in signature complexity affected the ability of forensic document examiners (FDEs) and laypeople to determine whether signatures are authentic or simulated (forged), as well as whether they are disguised. Forty-five FDEs from nine countries evaluated nine different signature comparisons in this online study. Receiver operating characteristic (ROC) analyses revealed that FDEs performed in excess of chance levels, but performance varied as a function of signature complexity: sensitivity (the true-positive rate) did not differ much between complexity levels (65% vs. 79% vs. 79% for low vs. medium vs. high complexity), but specificity (the true-negative rate) was highest (95%) for medium-complexity signatures and lowest (73%) for low-complexity signatures. The specificity for high-complexity signatures (83%) fell between these values. The sensitivity for disguised comparisons was only 11% and did not vary across complexity levels. One hundred and one novices also completed the study. A comparison of the areas under the ROC curve (AUCs) revealed that FDEs outperformed novices for medium- and high-complexity signatures but not for low-complexity signatures. Novices also struggled to detect disguised signatures. While these findings elucidate the role of signature complexity in lay and expert evaluations, the error rates observed here may differ from those in forensic practice due to differences in the experimental stimuli and the circumstances under which they were evaluated. This investigation of the role of signature complexity in the evaluation process was not intended to estimate error rates in forensic practice.
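The AUC comparison described above can be illustrated in a few lines with scikit-learn. The confidence scores and labels below are invented; this only shows the mechanics of the metric, not the study's data.

```python
# Sketch: ROC AUC from examiner confidence ratings.
# y_true: 1 = simulated (forged) signature, 0 = genuine.
# scores: hypothetical examiner confidence that the signature is simulated.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
scores = np.array([0.1, 0.3, 0.2, 0.6, 0.7, 0.8, 0.4, 0.9])

auc = roc_auc_score(y_true, scores)
print(auc)  # 0.5 = chance-level discrimination, 1.0 = perfect
```

The AUC summarizes sensitivity and specificity across all decision thresholds, which is why it is a natural single number for comparing FDEs against novices.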

14.
JMIR Hum Factors ; 11: e56605, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39150762

ABSTRACT

BACKGROUND: Malaria impacts nearly 250 million individuals annually. Uganda in particular has one of the highest burdens, with 13 million cases and nearly 20,000 deaths. Controlling the spread of malaria relies on vector surveillance, a system in which collected mosquitos are analyzed for vector species density in rural areas so that interventions can be planned accordingly. However, this relies on trained entomologists, known as vector control officers (VCOs), who identify species via microscopy. The global shortage of entomologists and this time-intensive process cause significant reporting delays. VectorCam is a low-cost artificial intelligence-based tool that identifies a mosquito's species, sex, and abdomen status from a picture and sends these results electronically from surveillance sites to decision makers, thereby deskilling the process to village health teams (VHTs). OBJECTIVE: This study evaluates the usability of the VectorCam system among VHTs by assessing its efficiency, effectiveness, and satisfaction. METHODS: The VectorCam system comprises imaging hardware and a phone app designed to identify mosquito species. Two users are needed: (1) an imager, who captures images of mosquitos using the app, and (2) a loader, who loads and unloads mosquitos from the hardware. Critical success tasks for both roles were identified, which VCOs used to train and certify VHTs. In the first testing phase (phase 1), a VCO and a VHT were paired, each assuming the role of imager or loader before swapping. In phase 2, two VHTs were paired, mimicking real use. The time taken to image each mosquito, critical errors, and System Usability Scale (SUS) scores were recorded for each participant. RESULTS: Overall, 14 male and 6 female VHT members aged 20 to 70 years were recruited, of whom 12 (60%) had experience using smartphones.
The average throughput values for phases 1 and 2 for the imager were 70 (SD 30.3) seconds and 56.1 (SD 22.9) seconds per mosquito, respectively, indicating a decrease in the length of time for imaging a tray of mosquitos. The loader's average throughput values for phases 1 and 2 were 50.0 and 55.7 seconds per mosquito, respectively, indicating a slight increase in time. In terms of effectiveness, the imager had 8% (6/80) critical errors and the loader had 13% (10/80) critical errors in phase 1. In phase 2, the imager (for VHT pairs) had 14% (11/80) critical errors and the loader (for VHT pairs) had 12% (19/160) critical errors. The average SUS score of the system was 70.25, indicating positive usability. A Kruskal-Wallis analysis demonstrated no significant difference in SUS (H value) scores between genders or users with and without smartphone use experience. CONCLUSIONS: VectorCam is a usable system for deskilling the in-field identification of mosquito specimens in rural Uganda. Upcoming design updates will address the concerns of users and observers.


Subject(s)
Malaria , Mosquito Vectors , Animals , Malaria/epidemiology , Humans , Uganda , Culicidae/classification , Mobile Applications , Female , Mosquito Control/instrumentation , Mosquito Control/methods , Male
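The SUS score of 70.25 reported above comes from the standard System Usability Scale scoring rule (Brooke, 1996). The sketch below applies that rule to one respondent's invented ratings; it is not the study's data.

```python
# Sketch: standard SUS scoring for one respondent.
# responses: ten item ratings, each 1-5. Odd-numbered items are positively
# worded (contribution = rating - 1); even-numbered items are negatively
# worded (contribution = 5 - rating). The 0-40 raw sum is scaled to 0-100.
def sus_score(responses):
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

score = sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3])  # invented ratings
print(score)
```

Scores around 68 are conventionally treated as average usability, which is why the study interprets its mean of 70.25 as positive.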
15.
Children (Basel) ; 11(8)2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39201956

ABSTRACT

BACKGROUND: Eye-tracking technology can be used to study human factors during teamwork. OBJECTIVES: This work aimed to compare the visual attention (VA) of a team member acting as both team leader and airway manager with that of a team member performing the focused task of managing the airway in the presence of a dedicated team leader. It also aimed to report differences in team performance, behavioural skills, and workload between the two groups using validated tools. METHODS: We conducted a simulation-based, pilot randomised controlled study. The participants were volunteer paediatric trainees, nurse practitioners, and neonatal nurses. Three teams of four members were formed. Each team participated in two identical neonatal resuscitation simulation scenarios in random order, once with and once without a dedicated team leader. Using a commercially available eye-tracking device, we analysed VA directed at (1) the manikin, (2) a colleague, and (3) the monitor. Only the trainee acting as airway operator wore eye-tracking glasses in both simulations. RESULTS: In total, 6 simulation scenarios and 24 individual role allocations were analysed. Participants without a dedicated team leader had a greater number of total fixations on the manikin and monitors, though this difference was not significant. There were no significant differences in team performance, behavioural skills, or individual workload. Physical demand was reported as significantly higher by participants in the group without a team leader. During debriefing, all teams expressed a preference for having a dedicated team leader. CONCLUSION: In our pilot study using low-cost technology, we could not demonstrate a difference in VA associated with the presence of a team leader.

18.
Clin Simul Nurs ; 94, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39183981

ABSTRACT

Background: There is a need to understand the clinical decision-making and work practices within ostomy nursing care to support expanding nursing training. Objective: To develop and evaluate a new metric-based simulation for assessing ostomy nursing care using a human factors approach. Sample: This pilot study involved eleven stakeholders in the needs assessment, six nurse participants performing simulated ostomy care, and three independent observers assessing procedure reliability. Method: We conducted a needs assessment of ostomy nursing care and training, developed an enhanced metric-based simulation for ostomy appliance change procedures, and statistically evaluated its reliability for measuring the simulated tasks. Results: The enhanced metric-based simulation captured different tasks within four task categories: product selection; stoma and peristomal skin care; baseplate sizing and adhesion; and infection control strategies. The video review procedure was reliable for assessing continuous (average ICC≥0.96) and categorical (average κ>0.96) variables. Conclusion: The new metric-based simulation was suitable for characterizing a broad range of clinical decision-making and work practices in ostomy nursing care.
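The reliability figures above (ICC for continuous variables, kappa for categorical ones) can be illustrated with a minimal Cohen's kappa computation in scikit-learn. The observer labels below are invented and stand in for the study's video-review ratings.

```python
# Sketch: inter-observer agreement on categorical task ratings using
# Cohen's kappa, which corrects raw agreement for chance agreement.
# The two observers' labels below are invented examples.
from sklearn.metrics import cohen_kappa_score

observer_a = ["correct", "correct", "error", "correct", "error", "correct"]
observer_b = ["correct", "correct", "error", "correct", "correct", "correct"]

kappa = cohen_kappa_score(observer_a, observer_b)
print(kappa)  # 1.0 = perfect agreement beyond chance; 0.0 = chance level
```

A kappa above 0.96, as reported in the study, indicates near-perfect agreement between observers; the toy data here disagrees on one of six ratings and so yields a much lower value.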

19.
Article in English | MEDLINE | ID: mdl-39184954

ABSTRACT

This study focuses on understanding the influence of cognitive biases in the intra-operative decision-making process within cardiac surgery teams, recognizing the complexity and high-stakes nature of such environments. We aimed to investigate the perceived prevalence and impact of cognitive biases among cardiac surgery teams, and how these biases may affect intraoperative decisions and patient safety and outcomes. A mixed-methods approach was utilized, combining quantitative ratings across 32 different cognitive biases (0 to 100 visual analogue scale), regarding their "likelihood of occurring" and "potential for patient harm" during the intraoperative phase of cardiac surgery. Based on these ratings, we collected qualitative insights on the most-rated cognitive biases from semi-structured interviews with surgeons, anaesthesiologists, and perfusionists who work in a cardiac operating room. A total of 16 participants, including cardiac surgery researchers and clinicians, took part in the study. We found a significant presence of cognitive biases, particularly confirmation bias and overconfidence, which influenced decision-making processes and had the potential for patient harm. Of 32 cognitive biases, 6 were rated above the 75th percentile for both criteria (potential for patient harm, likelihood of occurring). Our preliminary findings provide a first step toward a deeper understanding of the complex cognitive mechanisms that underlie clinical reasoning and decision-making in the operating room. Future studies should further explore this topic, especially the relationship between the occurrence of intraoperative cognitive biases and postoperative surgical outcomes. Additionally, the impact of metacognition strategies (e.g. debiasing training) on reducing the impact of cognitive bias and improving intraoperative performance should also be investigated.

20.
Workplace Health Saf ; : 21650799241271099, 2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39193841