ABSTRACT
Extensive work has investigated the neural processing of single faces, including the role of shape and surface properties. However, much less is known about the neural basis of face ensemble perception (e.g., simultaneously viewing several faces in a crowd). Importantly, the contributions of shape and surface properties have not been elucidated in face ensemble processing. Furthermore, how single central faces are processed within the context of an ensemble remains unclear. Here, we probe the neural dynamics of ensemble representation using pattern analyses applied to electrophysiology data in healthy adults (seven males, nine females). Our investigation relies on a unique set of stimuli, depicting different facial identities, which vary parametrically and independently along their shape and surface properties. These stimuli were organized into ensemble displays consisting of six surround faces arranged in a circle around one central face. Overall, our results indicate that both shape and surface properties play a significant role in face ensemble encoding, with the latter demonstrating a more pronounced contribution. Importantly, we find that the neural processing of the center face precedes that of the surround faces in an ensemble. Further, the temporal profile of center face decoding is similar to that of single faces, while those of single faces and face ensembles diverge extensively from each other. Thus, our work capitalizes on a new center-surround paradigm to elucidate the neural dynamics of ensemble processing and the information that underpins it. Critically, our results serve to bridge the study of single and ensemble face perception.
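As a rough illustration of the kind of time-resolved pattern analysis described above, the sketch below runs a classifier at each time point of epoched EEG data to trace a decoding time course. The array shapes, identity labels, and use of scikit-learn are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of time-resolved pairwise decoding of facial identity
# from epoched EEG data. Shapes, labels, and the scikit-learn classifier
# are illustrative assumptions, not the pipeline used in the study.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 300    # hypothetical epoch dimensions
eeg = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)           # two facial identities (pairwise decoding)

accuracy = np.zeros(n_times)
for t in range(n_times):
    X = eeg[:, :, t]                            # channel pattern at one time point
    accuracy[t] = cross_val_score(LinearSVC(), X, labels, cv=5).mean()

# `accuracy` now traces a decoding time course; above-chance intervals
# index when identity information is present in the neural signal.
```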
Subjects
Facial Recognition, Adult, Male, Female, Humans, Facial Recognition/physiology
ABSTRACT
Recognising intent in collaborative human-robot tasks can improve team performance and human perception of robots. Intent can differ from the observed outcome in the presence of mistakes, which are likely in physically dynamic tasks. We created a dataset of 1227 throws of a ball at a target from 10 participants and observed that 47% of throws were mistakes, with 16% completely missing the target. Our research leverages facial images capturing the person's reaction to the outcome of a throw to predict whether the resulting throw was a mistake, and then to determine the actual intent of the throw. The approach we propose for outcome prediction performs 38% better on front-on videos than the two-stream architecture previously used for this task. In addition, we propose a 1D-CNN model which is used in conjunction with priors learned from the frequency of mistakes to provide an end-to-end pipeline for outcome and intent recognition in this throwing task.
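A minimal sketch of the kind of 1D-CNN pipeline described above is given below: a convolutional model over a per-frame facial-feature sequence whose output is fused with a prior derived from observed mistake frequency. The layer sizes, input dimensions, prior-fusion step, and PyTorch implementation are illustrative assumptions rather than the authors' model.

```python
# Illustrative sketch of a 1D-CNN over a per-frame facial-feature sequence,
# combined with a prior on mistake frequency. Dimensions, layers, and the
# prior-fusion step are assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class OutcomeCNN(nn.Module):
    def __init__(self, n_features=128, n_frames=60, n_outcomes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_outcomes)

    def forward(self, x):                        # x: (batch, n_features, n_frames)
        return self.head(self.conv(x).squeeze(-1))

model = OutcomeCNN()
frames = torch.randn(4, 128, 60)                 # hypothetical facial-feature sequences
log_prior = torch.log(torch.tensor([0.53, 0.47]))   # e.g., observed success/mistake rates
posterior = torch.softmax(model(frames) + log_prior, dim=-1)  # fuse prediction with prior
```

The prior values here simply mirror the 47% mistake rate reported for the dataset; treating the network logits as log-likelihoods and adding the log prior before the softmax is one simple way such frequency priors could be combined with the model's per-throw prediction.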