Results 1 - 5 of 5
1.
Hum Factors ; : 187208241283321, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39293023

ABSTRACT

OBJECTIVE: This study examines the extent to which cybersecurity attacks on autonomous vehicles (AVs) affect human trust dynamics and driver behavior. BACKGROUND: Human trust is critical for the adoption and continued use of AVs. A pressing concern in this context is the persistent threat of cyberattacks, which jeopardize the secure operation of AVs and, consequently, human trust. METHOD: A driving simulator experiment was conducted with 40 participants who were randomly assigned to one of two groups: (1) Experience and Feedback and (2) Experience-Only. All participants experienced three drives: Baseline, Attack, and Post-Attack. The Attack Drive prevented participants from properly operating the vehicle on multiple occasions. Only the "Experience and Feedback" group received a security update in the Post-Attack Drive, describing the mitigation of the vehicle's vulnerability. Trust and foot positions were recorded for each drive. RESULTS: Findings suggest that attacks on AVs significantly degrade human trust, which remains degraded even after an error-free drive. Providing an update about the mitigation of the vulnerability did not significantly affect trust repair. CONCLUSION: Trust toward AVs should be analyzed as an emergent and dynamic construct that requires autonomous systems capable of calibrating trust after malicious attacks through appropriate experience and interaction design. APPLICATION: The results of this study can be applied when building driver- and situation-adaptive AI systems within AVs.
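
As a rough illustration of how trust dynamics across the three drives could be modeled, the following sketch fits a mixed-effects model with a random intercept per participant and a drive-by-group interaction. It is not the authors' analysis; the column names, simulated ratings, and effect pattern are assumptions made for the example.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
drives = ["baseline", "attack", "post_attack"]
rows = []
for pid in range(40):  # 40 participants, as in the study
    group = "feedback" if pid < 20 else "experience_only"
    base = rng.normal(6.0, 0.5)
    # Assumed pattern: trust drops after the attack and only partially recovers.
    means = {"baseline": base, "attack": base - 2.5, "post_attack": base - 2.0}
    for d in drives:
        rows.append({"participant": pid, "group": group, "drive": d,
                     "trust": rng.normal(means[d], 0.4)})
data = pd.DataFrame(rows)

# Random intercept per participant; the drive x group interaction tests whether
# the security update changed trust repair in the Post-Attack drive.
model = smf.mixedlm("trust ~ C(drive) * C(group)", data, groups=data["participant"])
print(model.fit().summary())

The interaction term for the Post-Attack drive in the feedback group is the coefficient that would carry evidence of trust repair from the security update.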

2.
Hum Factors ; 60(5): 626-639, 2018 08.
Article in English | MEDLINE | ID: mdl-29613819

ABSTRACT

OBJECTIVE: Incident correlation is a vital step in the cybersecurity threat detection process. This article presents research on the effect of group-level information-pooling bias on collaborative incident correlation analysis in a synthetic task environment. BACKGROUND: Past research has shown that uneven information distribution biases people toward sharing information already known to most team members and prevents them from sharing unique information available only to them. The effect of such biases on security team collaboration is largely unknown. METHOD: Thirty 3-person teams performed two threat detection missions involving information sharing and correlating security incidents. Incidents were predistributed to each person in the team based on the hidden profile paradigm. Participant teams, randomly assigned to three experimental groups, used different collaboration aids during Mission 2. RESULTS: Communication analysis revealed that participant teams were 3 times more likely to discuss security incidents commonly known to the majority. Unaided team collaboration was inefficient at finding associations between security incidents uniquely available to individual team members. Visualizations that augment perceptual processing and recognition memory were found to mitigate the bias. CONCLUSION: The data suggest that (a) security analyst teams, when conducting collaborative correlation analysis, can be inefficient at pooling unique information from their peers; (b) employing off-the-shelf collaboration tools in cybersecurity defense environments is inadequate; and (c) collaborative security visualization tools designed around the cognitive limitations of security analysts are necessary. APPLICATION: Potential applications of this research include the development of team training procedures and collaboration tools for security analysts.
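
The hidden profile paradigm mentioned in the METHOD section can be made concrete with a small sketch that predistributes incidents so a few are common to every analyst while the rest are unique to a single team member; the incident labels, team size, and common/unique split are hypothetical.

import random

def distribute_incidents(incidents, team_size=3, n_common=4):
    # Return a mapping from analyst index to the set of incidents they receive.
    random.shuffle(incidents)
    common, unique = incidents[:n_common], incidents[n_common:]
    handouts = {member: set(common) for member in range(team_size)}
    for i, incident in enumerate(unique):
        handouts[i % team_size].add(incident)  # each unique incident goes to one analyst
    return handouts

incidents = [f"incident-{i:02d}" for i in range(10)]
for analyst, items in distribute_incidents(incidents).items():
    print(f"analyst {analyst}: {sorted(items)}")

Correctly correlating the attack then requires pooling the unique items, which is exactly where the information-pooling bias shows up.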


Subject(s)
Computer Security, Cooperative Behavior, Group Processes, Adult, Humans, Young Adult
3.
Appl Ergon ; 108: 103908, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36403509

ABSTRACT

Many cyberattacks begin with a malicious email message, known as spear phishing, targeted at unsuspecting victims. Although security technologies have improved significantly in recent years, spear phishing continues to succeed because of the bespoke nature of such attacks. Crafting such emails requires attackers to research their victims carefully and collect personal information about them and their acquaintances. Despite the widespread nature of spear-phishing attacks, little is understood about the human factors behind them, particularly the role of attack personalization in end-user vulnerability. To study spear-phishing attacks in the laboratory, we developed a simulation environment called SpearSim that simulates the tasks involved in the generation and reception of spear-phishing messages. Using SpearSim, we conducted a laboratory experiment with human subjects to study the effect of information availability and information exploitation on end-user vulnerability. The results of the experiment show that end-users in the high information-availability condition were 2.97 times more vulnerable to spear-phishing attacks than those in the low information-availability condition. We found that access to more personal information about targets can result in attacks involving contextually meaningful impersonation and narratives. We discuss the implications of this research for the design of anti-phishing training solutions.
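
For readers wondering how a figure such as "2.97 times more vulnerable" might be derived, one common route is an odds ratio from a logistic regression of whether each end-user responded to the spear-phishing message on the information-availability condition. The sketch below is illustrative only; the counts and column names are invented and do not reproduce the study's data.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical outcomes: 1 = responded to the spear-phishing email, 0 = did not.
data = pd.DataFrame({
    "condition": ["high"] * 30 + ["low"] * 30,
    "responded": np.r_[np.repeat([1, 0], [18, 12]), np.repeat([1, 0], [8, 22])],
})

model = smf.logit("responded ~ C(condition, Treatment('low'))", data).fit()
odds_ratio = np.exp(model.params.iloc[1])  # odds of responding, high vs. low availability
print(f"odds ratio (high vs. low information availability): {odds_ratio:.2f}")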


Subject(s)
Electronic Mail, Persuasive Communication, Humans, Deception, Computer Simulation
4.
Front Psychol ; 9: 135, 2018.
Article in English | MEDLINE | ID: mdl-29515478

ABSTRACT

The success of phishing attacks depends on effective exploitation of human weaknesses. This research explores a largely ignored but crucial aspect of phishing: adversarial behavior. We aim to understand the behaviors and strategies that adversaries use, and how these may determine end-user responses to phishing emails. We accomplish this through a novel two-phase experimental paradigm. In the adversarial phase, 105 participants played the role of a phishing adversary and were incentivized to produce multiple phishing emails that would evade detection and persuade end-users to respond. In the end-user phase, 340 participants performed an email management task in which they examined and classified phishing emails generated by participants in phase one, along with benign emails. Participants in the adversary role self-reported the strategies they employed in each email they created and responded to a test of individual creativity. Data from both phases of the study were combined and analyzed to measure the effect of adversarial behaviors on end-user responses to phishing emails. We found that participants who persistently used specific attack strategies (e.g., sending notifications, using an authoritative tone, or expressing shared interest) in all their attempts were more successful overall than those who explored different strategies in each attempt. We also found that strategies largely determined whether an end-user was more likely to respond to an email immediately or delete it. Individual creativity was not a reliable predictor of adversarial performance, but it did predict an adversary's ability to evade detection. In summary, the phishing example provided initially, the strategies used, and the participants' persistence with some of those strategies led to higher performance in persuading end-users to respond to phishing emails. These insights may be used to inform tools and training procedures for detecting phishing strategies in emails.
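
The link between strategy persistence and success can be illustrated with a small tabulation: flag each adversary as persistent if every email they wrote used the same self-reported strategy, then compare mean end-user response rates. The data frame below is invented for the example and does not come from the study.

import pandas as pd

# One row per phishing email created in the adversarial phase (hypothetical values).
emails = pd.DataFrame({
    "adversary": ["a1", "a1", "a1", "a2", "a2", "a2"],
    "strategy": ["authority", "authority", "authority",
                 "notification", "shared_interest", "urgency"],
    "response_rate": [0.42, 0.38, 0.45, 0.21, 0.18, 0.25],
})

# An adversary is "persistent" if all of their emails used a single strategy.
persistence = emails.groupby("adversary")["strategy"].nunique().eq(1).rename("persistent")
success = emails.groupby("adversary")["response_rate"].mean().rename("mean_response")
print(pd.concat([persistence, success], axis=1))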

5.
Front Psychol ; 9: 2133, 2018.
Article in English | MEDLINE | ID: mdl-30510527

ABSTRACT

A critical requirement for developing a cyber-capable workforce is understanding how to challenge, assess, and rapidly develop human cyber skill-sets in realistic cyber operational environments. Fortunately, cyber team competitions use simulated operational environments with scoring criteria of task performance that objectively define overall team effectiveness, providing the means and context for observation and analysis of cyber teaming. Such competitions allow researchers to address the key determinants that make a cyber defense team more or less effective in responding to and mitigating cyber attacks. For this purpose, we analyzed data collected at the 12th annual Mid-Atlantic Collegiate Cyber Defense Competition (MACCDC, http://www.maccdc.org), where eight teams were evaluated along four independent scoring dimensions: maintaining services, incident response, scenario injects, and thwarting adversarial activities. Teamwork and leadership behaviors were assessed with the 13-point OAT (Observational Assessment of Teamwork) instrument completed by embedded observers, and team composition and work processes were assessed with a cyber teamwork survey completed by all participants. The competition scores were used as an outcome measure in our analysis to extract key features of team process, structure, leadership, and skill-sets in relation to effective cyber defense. We used Bayesian regression to relate scored performance during the competition to team skill composition, team experience level, and an observational construct of team collaboration. Our results indicate that effective collaboration, experience, and functional role-specialization within teams are important factors that determine success in the competition and are important observational predictors of the timely detection and effective mitigation of ongoing cyber attacks. These results support theories of team maturation and the development of functional team cognition applied to mastering cybersecurity.
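
The Bayesian regression described above can be sketched as a simple linear model with weakly informative priors; the predictors, priors, and simulated scores below are assumptions made for illustration and are not the authors' model.

import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_teams = 8
# Columns: skill composition, experience level, observed collaboration (standardized).
X = rng.normal(size=(n_teams, 3))
score = 50 + X @ np.array([4.0, 6.0, 8.0]) + rng.normal(0, 3, n_teams)

with pm.Model() as model:
    alpha = pm.Normal("alpha", mu=50, sigma=20)
    beta = pm.Normal("beta", mu=0, sigma=10, shape=3)
    sigma = pm.HalfNormal("sigma", sigma=10)
    mu = alpha + pm.math.dot(X, beta)
    pm.Normal("score", mu=mu, sigma=sigma, observed=score)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=1)

# Posterior means for the three coefficients indicate each predictor's contribution.
print(idata.posterior["beta"].mean(dim=("chain", "draw")).values)

With only eight teams the posteriors stay wide, which is one attraction of a Bayesian treatment here: uncertainty about each predictor's contribution is reported directly.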
