Results 1 - 20 of 28
1.
Hum Factors; 63(5): 833-853, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33030381

ABSTRACT

OBJECTIVE: We propose and demonstrate a theory-driven, quantitative, individual-level estimate of the degree to which cognitive processes are degraded or enhanced when multiple tasks are completed simultaneously. BACKGROUND: To evaluate multitasking, we used a performance-based cognitive model to predict efficient performance. The model controls for single-task performance at the individual level and does not depend on parametric assumptions, such as normality, which do not apply to many performance evaluations. METHODS: Twenty participants attempted to maintain their isolated task performance in combination across three dual-task scenarios and one triple-task scenario. We utilized a computational model of multiple resource theory to form hypotheses for how performance in each environment would compare, relative to the other multitask contexts. We assessed whether and to what extent multitask performance diverged from the model of efficient multitasking in each combination of tasks across multiple sessions. RESULTS: Across the two sessions, we found variable individual task performances but consistent patterns of multitask efficiency, such that deficits were evident in all task combinations. All participants exhibited decrements in performing the triple-task condition. CONCLUSIONS: We demonstrate a modeling framework that characterizes multitasking efficiency with a single score. Because it controls for single-task differences and makes no parametric assumptions, the measure enables researchers and system designers to directly compare efficiency across various individuals and complex situations. APPLICATION: Multitask efficiency scores offer practical implications for the design of adaptive automation and training regimes. Furthermore, a system may be tailored for individuals or suggest task combinations that support productivity and minimize performance costs.


Subjects
Task Performance and Analysis, Humans
2.
Behav Res Methods; 51(3): 1179-1186, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29845553

ABSTRACT

A key question in the field of scene perception is what information people use when making decisions about images of scenes. A significant body of evidence has indicated the importance of global properties of a scene image. Ideally, well-controlled, real-world images would be used to examine the influence of these properties on perception. Unfortunately, real-world images are generally complex and impractical to control. In the current research, we elicited ratings of naturalness and openness from a large number of subjects using Amazon Mechanical Turk. Subjects were asked to indicate which of a randomly chosen pair of scene images was more representative of a global property. A score and rank for each image were then estimated from those comparisons using the Bradley-Terry-Luce model. These ranked images offer the opportunity to exercise control over global scene properties in a stimulus set drawn from complex real-world images. This will allow a deeper exploration of the relationship between global scene properties and behavioral and neural responses.
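The ranking step described above can be sketched in pure Python. This is a minimal illustration, not the authors' code: it fits Bradley-Terry-Luce strengths from (winner, loser) pairs with the standard minorization-maximization update, and the data layout is an assumption for the example.

```python
def fit_btl(n_items, comparisons, iters=200):
    """Estimate Bradley-Terry-Luce strengths from pairwise choices.

    `comparisons` is a list of (winner, loser) index pairs, e.g. the image
    judged more natural or more open on each trial. Uses the classic MM
    update: p_i <- wins_i / sum over opponents of n_ij / (p_i + p_j).
    """
    wins = [0] * n_items
    pair_counts = {}  # unordered pair (i, j), i < j -> times compared
    for w, l in comparisons:
        wins[w] += 1
        key = (min(w, l), max(w, l))
        pair_counts[key] = pair_counts.get(key, 0) + 1

    p = [1.0] * n_items
    for _ in range(iters):
        new_p = []
        for i in range(n_items):
            denom = sum(n / (p[a] + p[b])
                        for (a, b), n in pair_counts.items() if i in (a, b))
            new_p.append(wins[i] / denom if denom > 0 else p[i])
        s = sum(new_p)
        p = [v * n_items / s for v in new_p]  # fix the scale (identifiability)
    return p
```

Sorting images by the fitted strengths yields the rank order used to build controlled stimulus sets.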


Subjects
Visual Perception/physiology, Pattern Recognition, Visual/physiology
3.
Behav Res Methods; 50(5): 2074-2096, 2018 Oct.
Article in English | MEDLINE | ID: mdl-29076106

ABSTRACT

The first stage of analyzing eye-tracking data is commonly to code the data into sequences of fixations and saccades. This process is usually automated using simple, predetermined rules for classifying ranges of the time series into events, such as "if the dispersion of gaze samples is lower than a particular threshold, then code as a fixation; otherwise code as a saccade." More recent approaches incorporate additional eye-movement categories in automated parsing algorithms by using time-varying, data-driven thresholds. We describe an alternative approach using the beta-process vector auto-regressive hidden Markov model (BP-AR-HMM). The BP-AR-HMM offers two main advantages over existing frameworks. First, it provides a statistical model for eye-movement classification rather than a single estimate. Second, the BP-AR-HMM uses a latent process to model the number and nature of the types of eye movements and hence is not constrained to predetermined categories. We applied the BP-AR-HMM both to high-sampling-rate gaze data from Andersson et al. (Behavior Research Methods 49(2), 1-22, 2016) and to low-sampling-rate data from the DIEM project (Mital et al., Cognitive Computation 3(1), 5-24, 2011). Driven by the data properties, the BP-AR-HMM identified over five categories of movements, some of which clearly mapped onto fixations and saccades, while others potentially captured post-saccadic oscillations, smooth pursuit, and various recording errors. The BP-AR-HMM serves as an effective algorithm for data-driven event parsing, either alone or as an initial step in exploring the characteristics of gaze data sets.
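The simple dispersion rule quoted at the start of the abstract (the predetermined baseline the BP-AR-HMM is meant to improve on) can be sketched as follows. This is an illustrative I-DT-style classifier, not the paper's model; the threshold and window values are arbitrary.

```python
def classify_idt(xs, ys, max_dispersion=1.0, min_samples=5):
    """Label gaze samples 'fix'/'sac' with a dispersion-threshold rule.

    A window of at least `min_samples` whose dispersion (x range plus
    y range) stays below `max_dispersion` is coded as a fixation; samples
    not absorbed into any such window are coded as saccades.
    """
    n = len(xs)
    labels = ['sac'] * n
    i = 0
    while i < n:
        j = i + min_samples
        if j > n:
            break
        disp = (max(xs[i:j]) - min(xs[i:j])) + (max(ys[i:j]) - min(ys[i:j]))
        if disp <= max_dispersion:
            # grow the window while dispersion stays under threshold
            while j < n:
                wx, wy = xs[i:j + 1], ys[i:j + 1]
                if (max(wx) - min(wx)) + (max(wy) - min(wy)) > max_dispersion:
                    break
                j += 1
            for k in range(i, j):
                labels[k] = 'fix'
            i = j
        else:
            i += 1
    return labels
```

The BP-AR-HMM replaces these fixed rules with a latent process over an open-ended set of movement categories.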


Subjects
Algorithms, Data Collection, Eye Movements, Markov Chains, Data Visualization, Humans
4.
Behav Brain Sci; 40: e145, 2017 Jan.
Article in English | MEDLINE | ID: mdl-29342608

ABSTRACT

Much of the evidence for theories in visual search (including Hulleman & Olivers' [H&O's]) comes from inferences made using changes in mean RT as a function of the number of items in a display. We have known for more than 40 years that these inferences are based on flawed reasoning and obscured by model mimicry. Here we describe a method that avoids these problems.


Subjects
Attention, Reaction Time
5.
Behav Res Methods; 49(4): 1261-1277, 2017 Aug.
Article in English | MEDLINE | ID: mdl-27503304

ABSTRACT

The extent to which distracting information influences decisions can be informative about the nature of the underlying cognitive and perceptual processes. In a recent paper, a response-time-based measure, termed resilience, was introduced for quantifying the degree of interference (or facilitation) from distracting information. Although the measure is statistical, the analysis was limited to qualitative comparisons between different model predictions. In this paper, we demonstrate how statistical procedures from workload capacity analysis can be applied to the new resilience functions. In particular, we present an approach to null-hypothesis testing of resilience functions and a method based on functional principal components analysis for analyzing differences in the functional form of the resilience functions across participants and conditions.


Subjects
Behavioral Research/methods, Principal Component Analysis, Resilience, Psychological, Humans, Male, Reaction Time
6.
Behav Res Methods; 46(2): 307-30, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24019062

ABSTRACT

Systems factorial technology (SFT) comprises a set of powerful nonparametric models and measures, together with a theory-driven experiment methodology termed the double factorial paradigm (DFP), for assessing the cognitive information-processing mechanisms supporting the processing of multiple sources of information in a given task (Townsend and Nozawa, Journal of Mathematical Psychology 39:321-360, 1995). We provide an overview of the model-based measures of SFT, together with a tutorial on designing a DFP experiment to take advantage of all SFT measures in a single experiment. Illustrative examples are given to highlight the breadth of applicability of these techniques across psychology. We further introduce and demonstrate a new package for performing SFT analyses using R for statistical computing.


Subjects
Cognition/physiology, Computer Simulation, Models, Psychological, Statistics, Nonparametric, Systems Analysis, Attention/physiology, Factor Analysis, Statistical, Humans, Stochastic Processes, Task Performance and Analysis, Visual Perception/physiology, Workload/psychology
7.
Behav Res Methods; 45(4): 1048-57, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23475829

ABSTRACT

Workload capacity, an important concept in many areas of psychology, describes processing efficiency across changes in workload. The capacity coefficient is a function across time that provides a useful measure of this construct. Until now, most analyses of the capacity coefficient have focused on the magnitude of this function, and often only in terms of a qualitative comparison (greater than or less than one). This work explains how a functional extension of principal components analysis can capture the time-extended information of these functional data, using a small number of scalar values chosen to emphasize the variance between participants and conditions. This approach provides many possibilities for a more fine-grained study of differences in workload capacity across tasks and individuals.
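The capacity coefficient in its OR-task form can be estimated from raw response times as sketched below. This is a minimal stdlib illustration using Nelson-Aalen cumulative-hazard estimates, not the paper's functional-PCA pipeline, and the sample data in the usage are hypothetical.

```python
def cumulative_hazard(rts, t):
    """Nelson-Aalen estimate of the cumulative hazard H(t) from RT samples."""
    h = 0.0
    at_risk = len(rts)
    for rt in sorted(rts):
        if rt > t:
            break
        h += 1.0 / at_risk
        at_risk -= 1
    return h

def capacity_or(rts_redundant, rts_a, rts_b, t):
    """OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)).

    C(t) = 1 is the unlimited-capacity, independent, parallel baseline;
    C(t) < 1 indicates limited capacity and C(t) > 1 super capacity.
    """
    denom = cumulative_hazard(rts_a, t) + cumulative_hazard(rts_b, t)
    if denom <= 0:
        return float('nan')
    return cumulative_hazard(rts_redundant, t) / denom
```

Evaluating C(t) on a grid of t values produces exactly the kind of function-valued data that the functional extension of principal components analysis described above summarizes with a few scalar scores.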


Subjects
Models, Psychological, Models, Statistical, Principal Component Analysis, Workload/psychology, Humans, Reaction Time, Work Capacity Evaluation
8.
Top Cogn Sci; 2023 Jul 13.
Article in English | MEDLINE | ID: mdl-37439275

ABSTRACT

In the modern world, many important tasks have become too complex for a single unaided individual to manage. Teams conduct some safety-critical tasks to improve task performance and minimize the risk of error. These teams have traditionally consisted of human operators, yet artificial intelligence and machine systems are nowadays incorporated into team environments to improve performance and capacity. We used a computerized task modeled after a classic arcade game to investigate the performance of human-machine and human-human teams. We manipulated the group condition between team members: they were instructed to collaborate, to compete, or to work separately. We evaluated players' performance in the main task (gameplay) and, in post hoc analyses, participant behavioral patterns to inform group strategies. We compared game performance between team types (human-human vs. human-machine) and group conditions (competitive, collaborative, independent). Adapting workload capacity analysis to human-machine teams, we found that both team types and all group conditions suffered a performance efficiency cost. However, we observed a reduced cost in collaborative over competitive teams within human-human pairings, but this effect was diminished when playing with a machine partner. The implications of workload capacity analysis as a powerful tool for human-machine team performance measurement are discussed.

9.
Acta Psychol (Amst); 238: 103986, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37454588

ABSTRACT

The Word Superiority Effect (WSE) refers to the phenomenon where a single letter is recognized more accurately when presented within a word, compared to when it is presented alone or in a random string. However, previous research has produced conflicting findings regarding whether this effect also occurs in the processing of Chinese characters. The current study employed the capacity coefficient, a measure derived from the Systems Factorial Technology framework, to investigate processing efficiency and test for the superiority effect in Chinese characters and English words. We hypothesized that WSE would result in more efficient processing of characters/words compared to their individual components, as reflected by super capacity processing. However, contrary to our predictions, results from both the "same" (Experiment 1) and "different" (Experiment 2) judgment tasks revealed that native Chinese speakers exhibited limited processing capacity (inefficiency) for both English words and Chinese characters. In addition, results supported an English WSE with participants integrating English words and pseudowords more efficiently than nonwords, and decomposing nonwords more efficiently than words and pseudowords. In contrast, no superiority effect was observed for Chinese characters. To conclude, the current work suggests that the superiority effect only applies to English processing efficiency with specific context rules and does not extend to Chinese characters.


Subjects
Pattern Recognition, Visual, Word Processing, Humans, Reading, Visual Perception
10.
Heliyon; 9(9): e19736, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37809370

ABSTRACT

Previous research has presented conflicting evidence regarding whether Chinese characters are processed holistically. In past work, we applied Systems Factorial Technology (SFT) and discovered that native Chinese speakers exhibited limited capacity when processing characters and words. To pinpoint the source of this limitation, our current research delved further into the mental architecture involved in processing Chinese characters and English words, taking into consideration information from each component. In our current study, participants were directed to make same/different judgments on characters/words presented sequentially. Our results indicated that participants utilized a parallel self-terminating strategy when both or neither of the left/right components differed (Experiment 1). Faced with decisional uncertainty about whether the left or the right component would also differ, most participants processed with a parallel exhaustive architecture, while a few exhibited a coactive architecture (Experiment 2). Taken together, our work provides evidence that in word/character perception there is weak holistic processing (parallel self-terminating processing) when partial information is sufficient for the decision, and robust holistic processing (coactive or parallel exhaustive processing) under decisional uncertainty. Our findings underscore the significant role that task and presentation context play in visual word processing.

11.
Front Neuroergon; 3: 1007673, 2022.
Article in English | MEDLINE | ID: mdl-38235464

ABSTRACT

Introduction: A well-designed brain-computer interface (BCI) can make accurate and reliable predictions of a user's state through the passive assessment of their brain activity; in turn, BCI can inform an adaptive system (such as artificial intelligence, or AI) to intelligently and optimally aid the user to maximize human-machine team (HMT) performance. Various groupings of spectro-temporal neural features have been shown to predict the same underlying cognitive state (e.g., workload) but vary in their accuracy to generalize across contexts, experimental manipulations, and beyond a single session. In our work we address an outstanding challenge in neuroergonomic research: we quantify whether (and how) identified neural features and a chosen modeling approach will generalize to various manipulations defined by the same underlying psychological construct, (multi)task cognitive workload. Methods: To do this, we train and test 20 different support vector machine (SVM) models, each given a subset of neural features as recommended from previous research or matching the capabilities of commercial devices. We compute each model's accuracy to predict which (monitoring, communications, tracking) and how many (one, two, or three) task(s) were completed simultaneously. Additionally, we investigate machine learning model accuracy to predict task(s) within- vs. between-sessions, all at the individual level. Results: Our results indicate gamma activity across all recording locations consistently outperformed all other subsets from the full model. Our work demonstrates that modelers must consider multiple types of manipulations which may each influence a common underlying psychological construct. Discussion: We offer a novel and practical modeling solution for system designers to predict task through brain activity and suggest next steps in expanding our framework to further contribute to research and development in the neuroergonomics community. Further, we quantified the cost in model accuracy should one choose to deploy our BCI approach using mobile EEG systems with fewer electrodes, a practical recommendation from our work.

12.
Atten Percept Psychophys; 82(7): 3340-3356, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32557004

ABSTRACT

Despite the increasing focus on target prevalence in visual search research, few papers have thoroughly examined the effect of how target prevalence is communicated. Findings in the judgment and decision-making literature have demonstrated that people behave differently depending on whether probabilistic information is made explicit or learned through experience; hence, there is potential for a similar difference when communicating prevalence in visual search. Our current research examined how visual search changes depending on whether target prevalence information was explicitly given to observers or learned through experience, with additional manipulations of target reward and salience. We found that when target prevalence was low, learning prevalence from experience resulted in more target-present responses and longer search times before quitting compared to when observers were explicitly informed of the target probability. The discrepancy narrowed with increased prevalence and reversed in the high-prevalence condition. Eye-tracking results indicated that search with experience consistently resulted in longer fixation durations, with the largest difference in low-prevalence conditions. Longer search times were primarily due to observers re-visiting more items. Our work highlights the importance of considering how prevalence information is communicated in future visual search studies.


Subjects
Judgment, Learning, Humans, Prevalence, Probability, Visual Perception
13.
Atten Percept Psychophys; 82(2): 426-456, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32133598

ABSTRACT

The mechanisms guiding visual attention are of great interest within cognitive and perceptual psychology. Many researchers have proposed models of these mechanisms, which serve to both formalize their theories and to guide further empirical investigations. The assumption that a number of basic features are processed in parallel early in the attentional process is common among most models of visual attention and visual search. To date, much of the evidence for parallel processing has been limited to set-size manipulations. Unfortunately, set-size manipulations have been shown to be insufficient evidence for parallel processing. We applied Systems Factorial Technology, a general nonparametric framework, to test this assumption, specifically whether color and shape are processed in parallel or in serial, in three experiments representative of feature search, conjunctive search, and odd-one-out search, respectively. Our results provide strong evidence that color and shape information guides search through parallel processes. Furthermore, we found evidence for facilitation between color and shape when the target was known in advance but performance consistent with unlimited capacity, independent parallel processing in odd-one-out search. These results confirm core assumptions about color and shape feature processing instantiated in most models of visual search and provide more detailed clues about the manner in which color and shape information is combined to guide search.


Subjects
Color Perception/physiology, Form Perception/physiology, Photic Stimulation/methods, Reaction Time/physiology, Adult, Attention/physiology, Humans, Male, Pattern Recognition, Visual/physiology, Random Allocation, Young Adult
14.
Top Cogn Sci; 11(1): 261-276, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30592180

ABSTRACT

The time-based resource-sharing (TBRS) model envisions working memory as a rapidly switching, serial, attentional refreshing mechanism. Executive attention trades its time between rebuilding decaying memory traces and processing extraneous activity. To thoroughly investigate the implications of the TBRS theory, we integrated TBRS within the ACT-R cognitive architecture, which allowed us to test the TBRS model against both participant accuracy and response time data in a dual task environment. In the current work, we extend the model to include articulatory rehearsal, which has been argued in the literature to be a separate mechanism from attentional refreshing. Additionally, we use the model to predict performance under a larger range of cognitive load (CL) than typically administered to human subjects. Our simulations support the hypothesis that working memory capacity is a linear function of CL and suggest that this effect is less pronounced when articulatory rehearsal is available.
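The decay-and-refresh tradeoff at the heart of TBRS can be illustrated with a toy simulation. This is emphatically not the ACT-R integration described above: the exponential decay, round-robin refreshing, and every parameter value here are simplifying assumptions chosen only to show why span falls as cognitive load rises.

```python
import math

def simulate_span(cognitive_load, n_items=6, cycle=1.0, decay=0.5,
                  refresh_gain=2.0, threshold=0.2, n_cycles=10):
    """Toy decay-and-refresh simulation in the spirit of TBRS.

    Each cycle, all traces decay for the fraction of time occupied by
    processing (the cognitive load); the remaining free time refreshes
    one trace, visited round robin. Returns the number of traces still
    above a retrieval threshold, a crude stand-in for memory span.
    """
    acts = [1.0] * n_items
    nxt = 0  # round-robin refreshing pointer
    for _ in range(n_cycles):
        busy = cycle * cognitive_load
        free = cycle - busy
        acts = [a * math.exp(-decay * busy) for a in acts]
        acts[nxt] = min(1.0, acts[nxt] + refresh_gain * free)
        nxt = (nxt + 1) % n_items
    return sum(1 for a in acts if a >= threshold)
```

Sweeping `cognitive_load` from low to high makes the simulated span fall, the qualitative pattern the full model tests quantitatively against human data.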


Subjects
Attention/physiology, Executive Function/physiology, Memory, Short-Term/physiology, Models, Theoretical, Pattern Recognition, Visual/physiology, Humans
15.
Vision Res; 148: 49-58, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29678536

ABSTRACT

Ideal observer analysis is a fundamental tool used widely in vision science for analyzing the efficiency with which a cognitive or perceptual system uses available information. The performance of an ideal observer provides a formal measure of the amount of information in a given experiment. The ratio of human to ideal performance is then used to compute efficiency, a construct that can be directly compared across experimental conditions while controlling for the differences due to the stimuli and/or task specific demands. In previous research using ideal observer analysis, the effects of varying experimental conditions on efficiency have been tested using ANOVAs and pairwise comparisons. In this work, we present a model that combines Bayesian estimates of psychometric functions with hierarchical logistic regression for inference about both unadjusted human performance metrics and efficiencies. Our approach improves upon the existing methods by constraining the statistical analysis using a standard model connecting stimulus intensity to human observer accuracy and by accounting for variability in the estimates of human and ideal observer performance scores. This allows for both individual and group level inferences.
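The human-to-ideal comparison can be made concrete with a short sketch. This uses one common definition of efficiency, the squared ratio of human to ideal sensitivity (d'), computed from hit and false-alarm rates; it is an illustration of the construct, not the paper's Bayesian hierarchical model, and the rates in the usage are hypothetical.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Signal-detection sensitivity: z(hit rate) minus z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

def efficiency(human_hit, human_fa, ideal_hit, ideal_fa):
    """Observer efficiency as the squared human-to-ideal sensitivity ratio.

    1.0 means the human extracts all the information the stimulus offers;
    values below 1.0 quantify the shortfall, independent of how hard the
    stimulus itself made the task.
    """
    return (d_prime(human_hit, human_fa) / d_prime(ideal_hit, ideal_fa)) ** 2
```

Because efficiency divides out stimulus-driven difficulty, it can be compared across conditions that differ in raw accuracy, which is exactly what the hierarchical model above does while also propagating estimation uncertainty.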


Subjects
Bayes Theorem, Logistic Models, Sensory Thresholds/physiology, Visual Perception, Humans, Linear Models, Psychometrics/methods
16.
Psychol Methods; 22(2): 288-303, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28594226

ABSTRACT

The question of cognitive architecture (how cognitive processes are temporally organized) has arisen in many areas of psychology. This question has proved difficult to answer, with many proposed solutions turning out to be spurious. Systems factorial technology (Townsend & Nozawa, 1995) provided the first rigorous empirical and analytical method of identifying cognitive architecture, using the survivor interaction contrast (SIC) to determine when people are using multiple sources of information in parallel or in series. Although the SIC is based on rigorous nonparametric mathematical modeling of response time distributions, for many years inference about cognitive architecture has relied solely on visual assessment. Houpt and Townsend (2012) recently introduced null hypothesis significance tests, and here we develop both parametric and nonparametric (encompassing prior) Bayesian inference. We show that the Bayesian approaches can have considerable advantages.
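Computing the SIC itself is straightforward; it is the inference over the resulting noisy function that the abstract's Bayesian methods address. A minimal empirical estimate, with hypothetical RT data, can be sketched as:

```python
def survivor(rts, t):
    """Empirical survivor function S(t) = P(T > t)."""
    return sum(1 for rt in rts if rt > t) / len(rts)

def sic(rt_ll, rt_lh, rt_hl, rt_hh, t):
    """Survivor interaction contrast at time t (Townsend & Nozawa, 1995):

        SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t),

    where L/H index low/high salience of the two manipulated factors.
    Parallel exhaustive processing predicts SIC(t) <= 0 everywhere,
    parallel first-terminating predicts SIC(t) >= 0 everywhere, and
    serial exhaustive predicts an S-shape (negative early, positive late).
    """
    return (survivor(rt_ll, t) - survivor(rt_lh, t)
            - survivor(rt_hl, t) + survivor(rt_hh, t))
```

In practice the contrast is evaluated over a grid of t values, and the shape of the resulting curve (flat, all-positive, all-negative, or S-shaped) is what diagnoses the architecture.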


Subjects
Bayes Theorem, Cognition, Models, Theoretical, Humans, Reaction Time
17.
Cogn Res Princ Implic; 1(1): 31, 2016.
Article in English | MEDLINE | ID: mdl-28180181

ABSTRACT

Multi-spectral imagery can enhance decision-making by supplying multiple complementary sources of information. However, overloading an observer with information can deter decision-making. Hence, it is critical to assess multi-spectral image displays using human performance. Accuracy and response times (RTs) are fundamental for assessment, although without sophisticated empirical designs, they offer little information about why performance is better or worse. Systems factorial technology (SFT) is a framework for study design and analysis that examines observers' processing mechanisms, not just overall performance. In the current work, we use SFT to compare a display presenting two sensor images side by side against a display presenting a single composite image. In our first experiment, the SFT results indicated that both display approaches suffered from limited workload capacity, and more so for the composite imagery. In the second experiment, we examined the change in observer performance over the course of multiple days of practice. Participants' accuracy and RTs improved with training, but their capacity limitations were unaffected. Using SFT, we found that the capacity limitation was not due to an inefficient serial examination of the imagery by the participants. There are two clear implications of these results: observers are less efficient with multi-spectral images than single images, and the side-by-side display of source images is a viable alternative to composite imagery. SFT was necessary for these conclusions because it provided an appropriate mechanism for comparing single-source images to multi-spectral images and because it ruled out serial processing as the source of the capacity limitation.

18.
Vision Res; 126: 19-33, 2016 Sep.
Article in English | MEDLINE | ID: mdl-25986994

ABSTRACT

While there is widespread agreement among vision researchers on the importance of some local aspects of visual stimuli, such as hue and intensity, there is no general consensus on a full set of basic sources of information used in perceptual tasks or how they are processed. Gestalt theories place particular value on emergent features, which are based on the higher-order relationships among elements of a stimulus rather than local properties. Thus, arbitrating between different accounts of features is an important step in arbitrating between local and Gestalt theories of perception in general. In this paper, we present the capacity coefficient from Systems Factorial Technology (SFT) as a quantitative approach for formalizing and rigorously testing predictions made by local and Gestalt theories of features. As a simple, easily controlled domain for testing this approach, we focus on the local feature of location and the emergent features of Orientation and Proximity in a pair of dots. We introduce a redundant-target change detection task to compare our capacity measure on (1) trials where the configuration of the dots changed along with their location against (2) trials where the amount of local location change was exactly the same, but there was no change in the configuration. Our results, in conjunction with our modeling tools, favor the Gestalt account of emergent features. We conclude by suggesting several candidate information-processing models that incorporate emergent features, which follow from our approach.


Subjects
Gestalt Theory, Pattern Recognition, Visual/physiology, Visual Perception/physiology, Analysis of Variance, Attention/physiology, Humans, Models, Psychological, Perceptual Masking, Photic Stimulation, Reaction Time
19.
Front Psychol; 6: 594, 2015.
Article in English | MEDLINE | ID: mdl-26074828

ABSTRACT

Working memory capacity (WMC) is typically measured by the amount of task-relevant information an individual can keep in mind while resisting distraction or interference from task-irrelevant information. The current research investigated the extent to which differences in WMC were associated with performance on a novel redundant memory probes (RMP) task that systematically varied the amount of to-be-remembered (targets) and to-be-ignored (distractor) information. The RMP task was designed to both facilitate and inhibit working memory search processes, as evidenced by differences in accuracy, response time, and Linear Ballistic Accumulator (LBA) model estimates of information processing efficiency. Participants (N = 170) completed standard intelligence tests and dual-span WMC tasks, along with the RMP task. As expected, accuracy, response-time, and LBA model results indicated memory search and retrieval processes were facilitated under redundant-target conditions, but also inhibited under mixed target/distractor and redundant-distractor conditions. Repeated measures analyses also indicated that, while individuals classified as high (n = 85) and low (n = 85) WMC did not differ in the magnitude of redundancy effects, groups did differ in the efficiency of memory search and retrieval processes overall. Results suggest that redundant information reliably facilitates and inhibits the efficiency or speed of working memory search, and these effects are independent of more general limits and individual differences in the capacity or space of working memory.

20.
J Exp Psychol Hum Percept Perform; 41(4): 1007-20, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25938253

ABSTRACT

When engaged in a visual search for two targets, participants are slower and less accurate in their responses, relative to their performance when searching for a single target. Previous work on this "dual-target cost" has primarily focused on the breakdown of attentional guidance when looking for two items. Here, we investigated how object identification processes are affected by dual-target search. Our goal was to chart the speed at which distractors could be rejected, to assess whether dual-target search impairs object identification. To do so, we examined the capacity coefficient, which measures the speed at which decisions can be made and provides a baseline of parallel performance against which to compare. We found that participants could search at or above this baseline, suggesting that dual-target search does not impair object identification abilities. We also found substantial differences in performance when participants were asked to search for simple versus complex images. Somewhat paradoxically, participants were able to reject complex images more rapidly than simple images. We suggest that this reflects the greater number of features that can be used to identify complex images, a finding that has important consequences for understanding object identification in visual search more generally.


Subjects
Attention/physiology, Pattern Recognition, Visual/physiology, Psychomotor Performance/physiology, Adult, Humans, Young Adult