Results 1 - 8 of 8
1.
J Am Geriatr Soc; 72(4): 1242-1251, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38243756

ABSTRACT

BACKGROUND: Kinematic driving data studies are a novel methodology relevant to health care, but prior studies vary considerably in their methods, populations, and findings, suggesting a need for critical appraisal and methodological guidelines. METHODS: We assessed kinematic driving studies of adults with chronic conditions for feasibility, study characteristics, and key findings, to generate recommendations for future study designs and to identify promising applications of kinematic driving data. PRISMA guidelines were used to guide the review, and searches included PubMed, CINAHL, and Compendex. Of 379 abstracts/titles screened, 49 full-text articles were reviewed, and 29 articles met the inclusion criterion of analyzing trip-level kinematic driving data from adult drivers with chronic conditions. RESULTS: The predominant chronic conditions studied were Alzheimer's disease and related dementias, obstructive sleep apnea, and diabetes mellitus. Study objectives included feasibility testing of kinematic driving data collection in the context of chronic conditions, comparisons of simulated with real-world kinematic driving behavior, assessments of driving behavior effects associated with chronic conditions, and prognostication or disease classification drawn from kinematic driving data. Across the studies there was no consensus on devices, measures, or sampling parameters; however, studies showed evidence that driving behavior could reliably differentiate between adults with chronic conditions and healthy controls. CONCLUSIONS: Vehicle sensors can provide driver-specific measures relevant to clinical assessment and intervention. Using kinematic driving data to assess and address the driving behavior of individuals with multiple chronic conditions is positioned to amplify a functional outcome measure that matters to patients.
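The trip-level kinematic analyses this review describes typically reduce raw sensor streams to driver-specific event counts. A minimal sketch of one such measure, hard-braking events derived from per-second speed samples, is shown below; the -3 m/s² deceleration cutoff and the toy trip are illustrative assumptions, not values taken from the reviewed studies:

```python
import numpy as np

def hard_braking_events(speed_mps, dt=1.0, threshold=-3.0):
    """Count hard-braking events in one trip.

    speed_mps: per-second vehicle speed samples (m/s)
    threshold: deceleration cutoff (m/s^2); -3 m/s^2 is an
               illustrative value, not one from the reviewed studies.
    """
    accel = np.diff(speed_mps) / dt   # finite-difference acceleration
    events = accel < threshold        # samples exceeding the cutoff
    # Count onsets only, so one long braking manoeuvre is one event
    onsets = np.flatnonzero(events & ~np.r_[False, events[:-1]])
    return len(onsets)

# A toy trip that cruises, brakes hard once, then slows gently
trip = np.array([20.0, 20.0, 16.0, 12.0, 11.0, 10.5, 10.0, 9.5])
print(hard_braking_events(trip))  # -> 1
```

Aggregating such counts per trip or per driver yields the kind of driver-specific functional measure the review's conclusion points to.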


Subject(s)
Alzheimer Disease , Outcome Assessment, Health Care , Humans , Biomechanical Phenomena , Research , Chronic Disease
2.
Hum Factors; 62(6): 1019-1035, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31237788

ABSTRACT

OBJECTIVE: The objective of this study was to analyze a set of driver performance and physiological data using advanced machine learning approaches, including feature generation, to determine the best-performing algorithms for detecting driver distraction and predicting the source of distraction. BACKGROUND: Distracted driving is a causal factor in many vehicle crashes, often resulting in injuries and deaths. As mobile devices and in-vehicle information systems become more prevalent, the ability to detect and mitigate driver distraction becomes more important. METHOD: This study trained 21 algorithms to identify when drivers were distracted by secondary cognitive and texting tasks. The algorithms took physiological and driving behavior measures as input, processed with a comprehensive feature generation package, Time Series Feature Extraction based on Scalable Hypothesis tests. RESULTS: Results showed that a Random Forest algorithm, trained using only driving behavior measures and excluding driver physiological data, was the highest-performing algorithm for accurately classifying driver distraction. The most important input measures identified were lane offset, speed, and steering, whereas the most important feature types were standard deviation, quantiles, and nonlinear transforms. CONCLUSION: This work suggests that distraction detection algorithms may be improved by considering ensemble machine learning algorithms that are trained with driving behavior measures and nonstandard features. In addition, the study presents several new indicators of distraction derived from speed and steering measures. APPLICATION: Future development of distraction mitigation systems should focus on driver behavior-based algorithms that use complex feature generation techniques.
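The pipeline the abstract describes, windowed features from lane offset, speed, and steering fed to a Random Forest, can be sketched roughly as follows. The hand-rolled features below only echo the feature types the study found most important (standard deviation, quantiles, a simple nonlinear transform), and the synthetic driving windows are an assumption for illustration, not the study's data or its generated feature set:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(lane_offset, speed, steering):
    """Illustrative per-window summary features (not the study's exact set)."""
    feats = []
    for sig in (lane_offset, speed, steering):
        feats += [np.std(sig),
                  np.quantile(sig, 0.1),
                  np.quantile(sig, 0.9),
                  np.mean(np.abs(np.diff(sig)))]  # nonlinear: mean absolute change
    return feats

# Synthetic windows: distraction modeled as extra lane/steering variability
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1):          # 0 = attentive, 1 = distracted (synthetic labels)
    for _ in range(50):
        noise = 0.1 + 0.2 * label
        lane = rng.normal(0.0, noise, 100)
        speed = rng.normal(25.0, 0.5 + label, 100)
        steer = rng.normal(0.0, noise, 100)
        X.append(window_features(lane, speed, steer))
        y.append(label)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic windows
```

In the actual study the comprehensive feature package (the tsfresh library implements the named method) generates hundreds of such features automatically; this sketch shows only the shape of the classification step.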


Subject(s)
Automobile Driving , Distracted Driving , Text Messaging , Accidents, Traffic , Humans , Machine Learning
3.
Hum Factors; 61(1): 105-118, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30059239

ABSTRACT

OBJECTIVE: This study investigated the impact of in-vehicle interface characteristics on drivers' multitasking performance measures related to visual attention management, given the distraction potential of in-vehicle touchscreens. BACKGROUND: Compared with physical controls that provide drivers with naturalistic nonvisual cues, in-vehicle touchscreen interaction relies on vision to a greater extent, leading to more time with eyes off the road and concerns for safety. Little is known from existing research about the extent to which synthetic feedback from in-vehicle touchscreens supports the visual attention of multitasking drivers, even as automakers increasingly incorporate nondriving functions into in-vehicle touchscreens. METHOD: Twenty-nine participants drove an instrumented vehicle on a closed course and acknowledged visual probes obscured on the roadside, while performing a manual data entry task with input interfaces mounted on the center console. The interfaces differed by interface type, key feedback modality, and key size; the configuration of interface characteristics was the within-subject variable. The collected data include performance measures concerning visual detection and touchscreen interaction, in addition to perceived workload. RESULTS: The addition of nonvisual feedback to touchscreen interaction significantly improved the accuracy and promptness of visual detection. No significant difference was found between different sizes of touchscreen keys when synthetic nonvisual feedback was available. Given multisensory feedback, no measure showed a difference between touchscreen conditions and a physical keypad. CONCLUSION: The provision of synthetic nonvisual feedback to touchscreen interaction can support visual attention and enhance multitasking performance in driving. APPLICATION: This study can inform in-vehicle interface designers and policy makers concerned with distracted driving and safety.


Subject(s)
Attention , Automobile Driving , Feedback, Sensory , Man-Machine Systems , Multitasking Behavior , Adult , Female , Humans , Male , Safety , Task Performance and Analysis , Young Adult
4.
Hum Factors; 59(4): 671-688, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28186420

ABSTRACT

OBJECTIVE: This study evaluated the individual and combined effects of voice (vs. manual) input and head-up (vs. head-down) display in a driving and device interaction task. BACKGROUND: Advances in wearable technology offer new possibilities for in-vehicle interaction but also present new challenges for managing driver attention and regulating device usage in vehicles. This research investigated how driving performance is affected by the interface characteristics of devices used for concurrent secondary tasks. A positive impact on driving performance was expected when devices included voice-to-text functionality (reducing demand for visual and manual resources) and a head-up display (HUD), which supports greater visibility of the driving environment. METHOD: Driver behavior and performance were compared in a texting-while-driving task set during a driving simulation. The texting task was completed with and without voice-to-text using a smartphone and with voice-to-text using Google Glass's HUD. RESULTS: Driving task performance degraded with the addition of the secondary texting task. However, voice-to-text input supported relatively better performance in both the driving and texting tasks compared with manual entry. HUD functionality further improved driving performance compared with conditions using a smartphone and often was not significantly worse than performance without the texting task. CONCLUSION: This study suggests that despite the performance costs of texting-while-driving, voice input methods improve performance over manual entry, and head-up displays may further extend those performance benefits. APPLICATION: This study can inform designers and potential users of wearable technologies as well as policymakers tasked with regulating the use of these technologies while driving.


Subject(s)
Automobile Driving , Microcomputers , Multitasking Behavior/physiology , Safety , User-Computer Interface , Adult , Attention/physiology , Computer Simulation , Eyeglasses , Female , Humans , Male , Task Performance and Analysis , Young Adult
6.
Hum Factors; 53(6): 600-11, 2011 Dec.
Article in English | MEDLINE | ID: mdl-22235523

ABSTRACT

OBJECTIVE: A novel vibrotactile display type was investigated to determine its potential benefits for supporting the attention and task management of anesthesiologists. BACKGROUND: Recent research has shown that physiological monitoring and multitasking performance can benefit from displaying patient data via alarm-like tactile notifications and via continuously informing auditory displays (e.g., sonifications). The current study investigated a novel combination of these two approaches: continuously informing tactile displays. METHOD: A tactile alarm and two continuously informing tactile display designs were evaluated in an anesthesia induction simulation with anesthesiologists as participants. Several performance measures were collected for two tasks: physiological monitoring and anesthesia induction. A multitask performance score equivalently weighted components from each task, normalized across experimental scenarios. Subjective rankings of the displays were also collected. RESULTS: Compared with the baseline (visual and auditory only) display configuration, each tactile display significantly improved performance on several objective measures, including the multitask performance score. The continuously informing display that encoded the severity of patient health into the salience of its signals supported significantly better performance than the other two tactile displays. In contrast with the objective results, participants subjectively ranked the tactile alarm display highest. CONCLUSION: Continuously informing tactile displays with alarm-like properties (e.g., salience mapping) can better support anesthesiologists' physiological monitoring and multitasking performance under the high task demands of anesthesia induction. Adaptive display mechanisms may improve user acceptance.
APPLICATION: This study can inform display design to support the multitasking performance of anesthesiologists in the clinical setting and of other supervisory control operators in work domains characterized by high demands for visual and auditory resources.
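The equally weighted, scenario-normalized multitask score described in the method can be sketched as below. This is one plausible reading of that one-sentence description (z-scoring each measure across scenarios, then averaging with equal weights), not the paper's exact formula, and the numbers are invented:

```python
import numpy as np

def multitask_score(components):
    """Equal-weight composite of per-task performance measures.

    components: 2-D array, rows = experimental scenarios,
                columns = individual task measures.
    Each measure is z-scored across scenarios so no single task
    dominates the composite. Assumes every measure is oriented so
    that higher is better (invert latency-like measures first).
    """
    comps = np.asarray(components, dtype=float)
    z = (comps - comps.mean(axis=0)) / comps.std(axis=0)
    return z.mean(axis=1)   # one composite score per scenario

# Invented example: two accuracy-like measures over three scenarios
scores = multitask_score([[0.9, 1.0],   # scenario A
                          [0.7, 0.4],   # scenario B
                          [0.8, 0.7]])  # scenario C
print(scores)
```

Because each column is z-scored, the composite scores sum to zero across scenarios, which makes display configurations directly comparable on a common scale.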


Subject(s)
Anesthesiology , Attention , Data Display , Acoustic Stimulation , Adult , Computer Simulation , Female , Humans , Male , Task Performance and Analysis , Touch
7.
IEEE Trans Haptics; 3(3): 199-210, 2010.
Article in English | MEDLINE | ID: mdl-27788074

ABSTRACT

The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (by approximately 5 to 10 percent) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments that already rely heavily on spatial processing, such as car cockpits.

8.
Hum Factors; 50(1): 17-26, 2008 Feb.
Article in English | MEDLINE | ID: mdl-18354968

ABSTRACT

OBJECTIVES: This study sought to determine whether performance effects of cross-modal spatial links that were observed in earlier laboratory studies scale to more complex environments and need to be considered in multimodal interface design. It also revisits the unresolved issue of cross-modal cuing asymmetries. BACKGROUND: Previous laboratory studies employing simple cues, tasks, and/or targets have demonstrated that the efficiency of processing visual, auditory, and tactile stimuli is affected by the modality, lateralization, and timing of surrounding cues. Very few studies have investigated these cross-modal constraints in the context of more complex environments to determine whether they scale and how complexity affects the nature of cross-modal cuing asymmetries. METHOD: A microworld simulation of battlefield operations with a complex task set and meaningful visual, auditory, and tactile stimuli was used to investigate cuing effects for all cross-modal pairings. RESULTS: Significant asymmetric performance effects of cross-modal spatial links were observed. Auditory cues shortened response latencies for collocated visual targets, but visual cues did not do the same for collocated auditory targets. Responses to contralateral (rather than ipsilateral) targets were faster for tactually cued auditory targets and for each visual-tactile cue-target combination, suggesting an inhibition-of-return effect. CONCLUSIONS: The spatial relationships between multimodal cues and targets significantly affect target response times in complex environments. The performance effects of cross-modal links and the observed cross-modal cuing asymmetries need to be examined in more detail and considered in future interface design. APPLICATION: The findings from this study have implications for the design of multimodal and adaptive interfaces and for supporting attention management in complex, data-rich domains.


Subject(s)
Auditory Perception , Psychomotor Performance , Touch , Visual Perception , Cues , Humans , Military Personnel , Reaction Time , United States , User-Computer Interface