ABSTRACT
This research report introduces a learning system designed to detect the object that humans are gazing at, using solely visual feedback. By incorporating face detection, human attention prediction, and online object detection, the system enables the robot to perceive and interpret human gaze accurately, thereby facilitating the establishment of joint attention with human partners. Additionally, a novel dataset collected with the humanoid robot iCub is introduced, comprising more than 22,000 images from ten participants gazing at different annotated objects. This dataset serves as a benchmark for human gaze estimation in table-top human-robot interaction (HRI) contexts. In this work, we use it to assess the proposed pipeline's performance and examine each component's effectiveness. Furthermore, the developed system is deployed on the iCub and showcases its functionality. The results demonstrate the potential of the proposed approach as a first step to enhancing social awareness and responsiveness in social robotics. This advancement can enhance assistance and support in collaborative scenarios, promoting more efficient human-robot collaborations.
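The pipeline described above chains face detection, human attention (gaze) prediction, and object detection to identify the attended object. Below is a toy sketch of the final step only: selecting the detected object whose bounding box contains the peak of a predicted attention heatmap. The selection rule, data shapes, and labels are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def attended_object(heatmap, boxes):
    """Return the label of the object whose bounding box contains the peak of
    the attention heatmap (hypothetical selection rule, not the paper's)."""
    y, x = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    for label, (x0, y0, x1, y1) in boxes.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return label
    return "none"

# Toy inputs standing in for the attention-prediction and object-detection outputs.
heatmap = np.zeros((480, 640))
heatmap[200, 300] = 1.0                                   # predicted gaze peak at (x=300, y=200)
boxes = {"mug": (250, 150, 350, 250), "book": (400, 100, 550, 300)}
print(attended_object(heatmap, boxes))                    # -> "mug"
```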
ABSTRACT
A widely known result from gaze-perception research is the overestimation effect, whereby gaze endpoints are seen farther to the side than they actually are. While horizontal gaze directions have been studied repeatedly, there is scarce research on other directions after early reports that vertical gaze is perceived accurately. It is argued that if participants base their judgment on the movements of the iris-pupil complex in relation to eye size, vertical gaze should be overestimated similarly to horizontal gaze. This is what was found in the reported experiment. However, horizontal gaze was actually overestimated more than diagonal and vertical gaze. The small difference in overestimation between the axes may be explained by the horizontal-vertical illusion, which entails that horizontal extents are seen as shorter than vertical extents.
ABSTRACT
In this article, we report a rare case of nine syndrome, which is characterized by the clinical signs of one-and-a-half syndrome, ipsilateral facial palsy, and contralateral hemiparesis, hemihypesthesia, or ataxia. A 44-year-old male presented with sudden onset of double vision for 3 days. Examination revealed left horizontal gaze palsy, internuclear ophthalmoplegia, and left lower motor neuron facial nerve palsy with right hemiplegia and hemihypesthesia. Magnetic resonance imaging of the brain showed evidence of acute infarction at the left paramedian pons. Magnetic resonance angiography revealed a beaded, small-caliber basilar artery suggestive of intracranial vasculopathy in the posterior circulation. The patient was treated with antiplatelet and lipid-lowering agents. His right hemiparesis improved, but the ocular motility deficits and left facial paresis persisted. A literature review of 14 reported cases of nine syndrome is discussed, describing the biographical background, clinical picture, etiology, neuroimaging, treatment, and recovery status of each.
ABSTRACT
Previous research has demonstrated that social cues (e.g., eye gaze, the walking direction of biological motion) can automatically guide people's focus of attention, a well-known phenomenon called social attention. The current research shows that voluntarily generated social cues via visual mental imagery, without being physically presented, can produce robust attentional orienting similar to the classic social attentional orienting effect. Combining a visual imagery task with a dot-probe task, we found that imagining a non-predictive gaze cue could orient attention towards the gazed-at hemifield. This attentional effect persisted even when the imagined gaze cue was counter-predictive of the target hemifield, and it generalized to biological motion cues. Moreover, the effect could not simply be attributed to low-level motion signals embedded in the gaze cues. More importantly, an eye-tracking experiment that carefully monitored potential eye movements demonstrated the imagery-induced attentional orienting effect for social cues but not for non-social cues (i.e., arrows), suggesting that the effect is specific to visual imagery of social cues. These findings accentuate the demarcation between social and non-social attentional orienting, and they may take a preliminary step towards conceptualizing voluntary visual imagery as a form of internally directed attention.
ABSTRACT
Ocular strabismus, a condition that is common today, is an established risk factor for amblyopia and blinding visual loss. Despite the availability of new optometry tools that provide eye-tracking data, accuracy and reliability remain persistent issues in diagnosing strabismus. This study addresses these two concerns with a proposed novel approach that applies convolutional neural networks (CNNs) to eye-tracking datasets collected from subjects. The presented work aims to improve diagnostic accuracy in ophthalmology by integrating the proposed algorithms into an automatic strabismus detection system. For this purpose, the proposed FedCNN model combines a CNN with eXtreme Gradient Boosting (XGBoost) and uses gaze deviation (GaDe) images to capture dynamic eye movements. This design aims to make the feature extraction as accurate as possible in order to enhance diagnostic precision. The model reaches an accuracy of 95.2%, attributed largely to the connection layers of the CNN used to select features for the strabismus recognition task. The presented method has the potential to change the approach to diagnosing eye diseases for roughly half of patients.
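The abstract describes a hybrid architecture in which a CNN extracts features from gaze-deviation images and XGBoost performs the final classification. A minimal sketch of that pattern is shown below; the backbone layout, image size, and synthetic data are illustrative assumptions, not the authors' FedCNN implementation.

```python
import numpy as np
import torch
import torch.nn as nn
from xgboost import XGBClassifier

# Tiny CNN backbone producing one feature vector per gaze-deviation image.
backbone = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
)

# Synthetic stand-ins for GaDe images (1 channel, 64x64) and binary labels.
rng = np.random.default_rng(0)
images = torch.tensor(rng.normal(size=(40, 1, 64, 64)), dtype=torch.float32)
labels = rng.integers(0, 2, size=40)

with torch.no_grad():
    features = backbone(images).numpy()          # CNN feature extraction

clf = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
clf.fit(features, labels)                        # gradient-boosted classifier on CNN features
print("training accuracy:", clf.score(features, labels))
```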
Subjects
Algorithms, Retinal Detachment, Strabismus, Humans, Strabismus/diagnosis, Retinal Detachment/diagnosis, Eye Movements/physiology, Neural Networks (Computer), Eye-Tracking Technology, Reproducibility of Results, Male, Adult, Female
ABSTRACT
Vision has previously been correlated with performance in acrobatic sports, highlighting visuomotor adaptations associated with expertise. However, the visuomotor strategies athletes use while executing twisting somersaults remain poorly understood, even though this knowledge might be helpful for skill development. Thus, the present study sought to identify differences in gaze behavior between elite and sub-elite trampolinists during the execution of four acrobatic manoeuvres of increasing difficulty. Seventeen inertial measurement units and a wearable eye-tracker were used to record the body and gaze kinematics of 17 trampolinists (8 elites, 9 sub-elites). Six typical metrics were analyzed using a mixed analysis of variance (ANOVA) with Expertise as the inter-subject factor and Acrobatics as the intra-subject factor. To complement this analysis, advanced temporal eye-tracking metrics are reported, such as the dwell time on areas of interest, the scan path on the trampoline bed, the temporal evolution of the gaze orientation endpoint (SPGO), and the time spent executing specific neck and eye strategies. A significant main effect of Expertise was evidenced in only one of the typical metrics, with elite athletes exhibiting a higher number of fixations than sub-elites (p = 0.033). Significant main effects of Acrobatics were observed on all metrics (p < 0.05), revealing that gaze strategies are task-dependent in trampolining. The recordings of eye and neck movements performed in this study confirmed the use of "spotting" at the beginning and end of the acrobatics. They also revealed a unique sport-specific visual strategy that we termed self-motion detection, which consists of not moving the eyes during fast head rotations and was used by trampolinists mainly during the twisting phase. This study proposes a detailed exploration of trampolinists' gaze behavior in highly realistic settings and a temporal description of their visuomotor strategies, enhancing our understanding of perception-action interactions during the execution of twisting somersaults.
ABSTRACT
Objective: To explore the predictive value of the HINTS bedside examination and the e-NIHSS scale for posterior circulation ischemia presenting with vestibular symptoms. Methods: A total of 136 patients with acute vestibular syndrome (AVS) hospitalized in our hospital from April 2021 to April 2023 were selected as study subjects. According to the etiology of AVS, patients with central AVS, namely posterior circulation ischemia (PCI), formed the case group (68 cases), and patients with peripheral AVS formed the control group (68 cases). Data were collected and the bedside head impulse-nystagmus-test of skew (HINTS) examination was performed; two physicians independently scored the NIHSS and e-NIHSS scales for PCI patients with vestibular symptoms, recorded the results once they were consistent, and cranial MRI was completed. Results: The head impulse test was positive in 3 PCI patients with vestibular symptoms (4.41%) and in 60 patients with peripheral AVS (88.24%); the nystagmus test was positive in 64 cases (94.12%) in the PCI group and 21 cases (30.88%) in the peripheral group; the test of skew was positive in 55 cases (80.88%) in the PCI group and 8 cases (11.76%) in the peripheral group. Compared against the final diagnosis, the sensitivity was 97.0%, the specificity was 95.7%, and the accuracy was 0.963; the result passed the Kappa consistency test (Kappa = 0.926, P < 0.01). Within the PCI group, the NIHSS score of the brainstem subgroup was 1.51 ± 0.59 and the e-NIHSS score was 4.05 ± 1.71 (P < 0.05); the NIHSS score of the cerebellar subgroup was 1.42 ± 0.62 and the e-NIHSS score was 3.86 ± 1.59 (P < 0.05); the NIHSS score of the thalamic subgroup was 1.31 ± 0.73 and the e-NIHSS score was 3.56 ± 1.27 (P < 0.05); the NIHSS score of the subgroup with no identified lesion was 1.11 ± 0.43 and the e-NIHSS score was 3.06 ± 1.20 (P < 0.01). The difference between the e-NIHSS and NIHSS scores was statistically significant in each subgroup. Conclusion: The HINTS examination is highly consistent with the gold-standard final diagnosis. The e-NIHSS scale has a higher detection rate than the NIHSS scale for patients with posterior circulation ischemia presenting mainly with vestibular symptoms.
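For reference, the reported agreement statistics can be reproduced from a 2x2 confusion matrix. The counts below (66, 2, 3, 65) are a hypothetical matrix chosen to be consistent with the abstract's figures (to rounding); they are not taken from the study's data.

```python
# Hypothetical HINTS-vs-final-diagnosis confusion matrix consistent with the
# reported sensitivity, specificity, accuracy, and Kappa (not the study's data).
tp, fn, fp, tn = 66, 2, 3, 65
n = tp + fn + fp + tn

sensitivity = tp / (tp + fn)            # ~0.970
specificity = tn / (tn + fp)            # ~0.956
accuracy = (tp + tn) / n                # ~0.963

# Cohen's kappa: chance-corrected agreement between HINTS and the MRI diagnosis.
p_o = accuracy
p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
kappa = (p_o - p_e) / (1 - p_e)         # ~0.926
print(f"sens={sensitivity:.3f} spec={specificity:.3f} acc={accuracy:.3f} kappa={kappa:.3f}")
```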
Subjects
Ischemic Stroke, Humans, Ischemic Stroke/diagnosis, Female, Male, Middle Aged, Vestibular Diseases/diagnosis, Aged, Head Impulse Test/methods
ABSTRACT
BACKGROUND: Altered gaze in social settings is a hallmark of social anxiety; however, little research directly examines gaze in anxiety-provoking contexts among youth with anxiety disorders, limiting mechanistic insight into pediatric anxiety. The present study leveraged mobile eye-tracking technology to examine gaze behavior during a naturalistic stressor in a clinical developmental sample. METHODS: Sixty-one youth (ages 8-17 years; 28 with anxiety disorders, 33 non-anxious controls) completed a naturalistic social stress task (public speaking in front of a videotaped classroom audience) while wearing eye-tracking glasses. Gaze behavior and state anxiety were quantified in each group during two task conditions: while giving a speech and while passively viewing the audience. RESULTS: Anxiety-related differences emerged in state anxiety and gaze behavior. First, a significant interaction between diagnostic group and task condition on state anxiety indicated that while anxiety increased among non-anxious controls following the speech, youth with anxiety disorders reported persistently elevated anxiety across all assessments. Second, a significant interaction emerged between social anxiety symptom severity and task condition on gaze time on the audience. While youth overall showed low dwell time on the audience during speech delivery, individuals with greater social anxiety showed longer gaze on the audience during the passive viewing condition. This pattern was specific to dimensional analyses of social anxiety symptom severity. LIMITATIONS: The current study was not sufficiently powered to examine age-related differences. CONCLUSIONS: These findings highlight anxiety-related differences in gaze behavior in youth, providing new mechanistic insight into pediatric anxiety using mobile eye-tracking.
ABSTRACT
Introduction: Although it is well established that humans spontaneously attend to where others are looking, it remains debated whether this gaze following behavior occurs because gaze communicates directional information (i.e., where an agent is looking) or because gaze communicates an agent's inferred mental content (i.e., what an agent perceives), both of which rely on the processes involved in the general Theory of Mind ability. Methods: To address this question, in two Experiments we used a novel task to measure how spatially dissociated and spatially combined effects of an agent's gaze direction and perceived mental content influence target performance. We also contrasted performance for social directional cues and nonsocial arrows. Results: Our data revealed that performance was compromised when cue direction and mental content dissociated relative to when they combined. Performance for dissociated components was especially prominent when a social avatar served as a cue relative to a comparison arrow. Discussion: Together, these data show that a typical gaze signal communicates information about both where an agent is attending and what they are attending to.
ABSTRACT
This study aimed to characterize visual assessment skills during observation-based gait analysis. Participants (N=40) included 20 physiotherapists (PTs) with >10 years of clinical experience and 20 physiotherapy students. Both groups watched a video of the gait of a subject with Guillain-Barré syndrome before and after being provided with information regarding other movements. Gaze was measured using an EMR-8 eye mark recorder, and the results were compared between the two groups. The average gaze duration was longer for students than for PTs (F(1,79) = 53.3; p < 0.01), whereas PTs gazed more often than the students (F(1,79) = 87.6; p < 0.01). Furthermore, the PTs moved their eyes vertically more often than the students (F(1,151) = 9.1; p < 0.01). These findings suggest that the ability to discriminate the relative physical relationships of body locations through frequent, rapid vertical gaze shifts may serve as an index of visual assessment skill in observation-based gait analysis.
ABSTRACT
This study aimed to explore the effects of a four-week intensive eye-tracking intervention on children with dyskinetic cerebral palsy (DCP), focusing on goal attainment, communication competencies, stress levels, subjective workload, and caregivers' perception of psychosocial impact. A multiple case study design with non-concurrent, staggered multiple baselines was employed, involving three children aged 7, 12, and 13 years. The study included a randomized baseline period of two or three weeks, an intensive eye-tracking intervention, and a six-month follow-up. Two individual eye-tracking goals were identified and assessed using the Goal Attainment Scale, while communication competencies were evaluated with the Augmentative and Alternative Communication Profile: A Continuum of Learning. Stress levels were monitored through Heart Rate Variability measured by the Bittium Faros 360° ECG Holter during eye-tracking tasks. Subjective workload and psychosocial impact were assessed using pictograms and the Psychosocial Impact of Assistive Devices Scale, respectively. Descriptive statistics were applied for analysis. All participants attained and retained their eye-tracking goals, regardless of their initial functional profiles or prior experience with eye-tracking technology. Post-intervention improvements in communication competencies were maintained at the six-month follow-up. Variations in stress levels, subjective workload, and psychosocial impact were observed among participants across different phases of the study, aiding the interpretation of the results. The study concludes that a structured, tailored, four-week intensive eye-tracking intervention can yield successful results in children with DCP, irrespective of their baseline communication abilities or functional profile. Recommendations for future research, including more robust methodologies and reliable computerized tests, are provided.
A four-week intensive and structured eye-tracking intervention in children with dyskinetic cerebral palsy appears feasible and may lead to the acquisition and retention of meaningful eye-tracking goals, regardless of functional profile or prior eye-tracking experience.
Objective measurement of baseline communication competencies could assist clinical practice in identifying areas of difficulty, thereby facilitating a tailored goal-setting and goal-attainment approach.
Cognition, effort, and motivation may influence intervention outcomes and should be strongly considered in future studies with more robust methodology.
Reliable computerized versions of pen-and-paper assessments with eye-tracking as a response modality are needed to enable a better understanding of skill-acquisition processes in this population.
ABSTRACT
Walking, a basic physical movement of the human body, is a resource for observers in forming interpersonal impressions. We have previously investigated the expression and perception of the attractiveness of female gaits. In this paper, drawing on our previous research, additional analysis, and a review of previous studies, we seek to deepen our understanding of the function of gait attractiveness. First, we review previous research on gait as nonverbal information. Then, we show that fashion models' gaits reflect sociocultural genderlessness, while non-models' gaits express reproduction-related biological attractiveness. Next, we discuss the functions of gait attractiveness based on statistical models that link gait parameters and attractiveness scores. Finally, we focus on observers' perception of attractiveness, constructing a model of the visual information processing underlying gait attractiveness. Overall, our results suggest that there are not only biological but also sociocultural criteria for gait attractiveness, and that men and women place greater importance on the former and latter criteria, respectively, when assessing female gait attractiveness. This paper constitutes a step forward in neuroaesthetics toward understanding the beauty of the human body and the generation of biological motion.
ABSTRACT
Objective: This study aims to clarify the gaze characteristics of older drivers while driving, using a gaze analysis device. Methods: The participants included 16 older and 12 middle-aged drivers who drove cars daily. After cognitive and attentional function tests were conducted, eye gaze while watching driving videos was measured using an eye tracker. Ten driving videos were prepared, and a total of 34 hazard areas were analyzed. Results: The gaze measurement parameters were statistically compared between the two groups. In the older group, the gaze analysis indicated that, while viewing the driving videos, visual search was expanded toward areas close to the car. In addition, in several hazard areas, fewer older drivers gazed at the hazard, total gazing time was shorter, the timing of gazing was delayed, and the number of visits decreased. Conclusions: Older drivers show increased eye movement; however, it is characterized by gazing at unimportant areas, indicating an inefficient scanning pattern. Although these results do not indicate an obvious decline in driving ability among older drivers, a decline in hazard perception may become apparent in some situations. Some results may be underpowered and require revalidation in larger studies.
ABSTRACT
This study explores the intersection of personality, attention and task performance in traditional 2D and immersive virtual reality (VR) environments. A visual search task was developed that required participants to find anomalous images embedded in normal background images in 3D space. Experiments were conducted with 30 subjects who performed the task in 2D and VR environments while their eye movements were tracked. Following an exploratory correlation analysis, we applied machine learning techniques to investigate the predictive power of gaze features on human data derived from different data collection methods. Our proposed methodology consists of a pipeline of steps for extracting fixation and saccade features from raw gaze data and training machine learning models to classify the Big Five personality traits and attention-related processing speed/accuracy levels computed from the Group Bourdon test. The models achieved above-chance predictive performance in both 2D and VR settings despite visually complex 3D stimuli. We also explored further relationships between task performance, personality traits and attention characteristics.
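The pipeline outlined above (raw gaze data, then fixation/saccade features, then a classifier) can be sketched as follows. The velocity-threshold event detection, the three-feature summary, the simulated recordings, and the random forest are simplifying assumptions for illustration, not the study's actual extraction or models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_features(x, y, t, vel_thresh=100.0):
    """Crude velocity-threshold (I-VT-style) summary of one gaze trace:
    saccade sample fraction, fixation count, and mean sample speed."""
    speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    is_saccade = speed > vel_thresh
    # Count fixations as contiguous runs of below-threshold samples.
    starts = np.flatnonzero(np.diff(is_saccade.astype(int)) == -1)
    n_fixations = len(starts) + (0 if is_saccade[0] else 1)
    return np.array([is_saccade.mean(), n_fixations, speed.mean()])

# Simulated recordings (10 s at 100 Hz) and binary attention labels for 30 "subjects".
rng = np.random.default_rng(1)
t = np.arange(0, 10, 0.01)
X = np.array([
    gaze_features(np.cumsum(rng.normal(0, 0.5, t.size)),
                  np.cumsum(rng.normal(0, 0.5, t.size)), t)
    for _ in range(30)
])
y = rng.integers(0, 2, size=30)
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```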
ABSTRACT
BACKGROUND AND PURPOSE: Patients with postural tachycardia syndrome report position-dependent visual symptoms. Despite their impact on daily life, these symptoms have remained largely unexplored in research. The aim of this study was to investigate the nature of visual symptoms in postural tachycardia syndrome and possible underlying pathophysiological mechanisms. METHODS: Fifteen patients with postural tachycardia syndrome and 15 healthy controls were included in the study. Through a comprehensive array of measurements, including haemodynamics, subjective symptom assessments, eye movement tracking and pupil diameter analysis, participants were assessed during free image exploration in both supine and 60° head-up tilt positions. RESULTS: During head-up tilt, patients showed a decreased number and duration of fixations, as well as a decreased number, peak velocity and amplitude of saccades compared to the supine position and the control group. This reduction in visual exploration occurred primarily in the peripheral field of view and coincided with the occurrence of subjective visual symptoms. No significant differences in the saccade main sequence were observed between the two groups in either body position. CONCLUSIONS: Patients with postural tachycardia syndrome have a reduced exploration of the peripheral field of view when in an upright body position, potentially leading to tunnel vision. Since the normality of the saccade main sequence in patients combined with the focus on the centre of the field of view and the lower saccade amplitudes points to an intact brainstem function, the decrease in peripheral visual exploration may be attributed to a position-dependent dysfunction of the frontal eye field.
ABSTRACT
Humans appear to be endowed with the ability to readily share attention with interactive partners through the utilization of social direction cues, such as eye gaze and biological motion (BM). Here, we investigated the specialized brain mechanism underlying this fundamental social attention ability by incorporating different types of social (i.e., BM, gaze) and non-social (arrow) cues and combining functional magnetic resonance imaging (fMRI) with a modified central cueing paradigm. Using multi-voxel pattern analysis (MVPA), we found that although gaze- and BM-mediated attentional orienting could be decoded from neural activity in a wide range of brain areas, only the right anterior and posterior superior temporal sulcus (aSTS and pSTS) could specifically decode attentional orienting triggered by social but not non-social cues. Critically, cross-category MVPA further revealed that social attention could be decoded across BM and gaze cues in the right STS and the right superior temporal gyrus (STG). However, these regions could not decode attentional orienting across social and non-social cues. These findings together provide evidence for the existence of a specialized social attention module in the human brain, with the right STS/STG being the critical neural site dedicated to social attention.
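As a rough illustration of the cross-category MVPA logic described above, the sketch below trains a linear classifier on attention direction using trials from one cue type and tests it on trials from the other. The voxel patterns are simulated with numpy; the analysis choices (linear SVM, 200 "voxels", the effect size) are assumptions, not the study's fMRI pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 200
shared_pattern = rng.normal(size=n_voxels)      # simulated "attention direction" code

def simulate(labels):
    """Noisy voxel patterns carrying the shared left/right attention signal."""
    return rng.normal(size=(len(labels), n_voxels)) + 0.3 * np.outer(2 * labels - 1, shared_pattern)

bm_labels = rng.integers(0, 2, n_trials)        # biological-motion cue trials (0=left, 1=right)
gaze_labels = rng.integers(0, 2, n_trials)      # gaze cue trials
bm_patterns, gaze_patterns = simulate(bm_labels), simulate(gaze_labels)

# Train on one cue category, test on the other (cross-category decoding).
clf = LinearSVC(dual=False).fit(bm_patterns, bm_labels)
print("cross-cue decoding accuracy:", clf.score(gaze_patterns, gaze_labels))
```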
Subjects
Attention, Cues (Psychology), Ocular Fixation, Magnetic Resonance Imaging, Motion Perception, Humans, Attention/physiology, Male, Female, Ocular Fixation/physiology, Young Adult, Adult, Motion Perception/physiology, Brain Mapping/methods, Social Perception, Brain/physiology, Brain/diagnostic imaging, Temporal Lobe/physiology, Temporal Lobe/diagnostic imaging
ABSTRACT
Infants explore the world around them based on their intrinsically motivated curiosity. However, the cognitive mechanisms underlying such curiosity-driven exploratory behaviour remain largely unknown. Here, infants could freely explore two novel categories, triggering a new exemplar from a category by fixating on either of the two associated areas on a computer screen. This gaze-contingent design enabled us to distinguish between exploration - switching from one category to another - and exploitation - consecutively triggering exemplars from the same category. Data from 10 to 12-month-old infants (N = 68) indicated that moment-to-moment sampling choices were non-random but guided by the infants' exploration history. Self-generated sequences grouped into three clusters of brief yet explorative, longer exploitative, and overall more balanced sampling patterns. Bayesian hierarchical binomial regression models indicated that across sequence patterns, infants' longer trigger time, shorter looking time, and more gaze-shifting were associated with trial-by-trial decisions to disengage from exploiting one category and making an exploratory switch, especially after consecutively viewed stimuli of high similarity. These findings offer novel insights into infants' curiosity-driven exploration and pave the way for future investigations, also regarding individual differences.
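A minimal sketch of a Bayesian hierarchical binomial (logistic) regression of trial-by-trial switch decisions, in the spirit of the analysis mentioned above, is given below using PyMC. The simulated data, the single "similarity" predictor, and the priors are illustrative assumptions, not the authors' model specification.

```python
import numpy as np
import pymc as pm

# Simulated trial-by-trial data: per-infant sequences of switch (1) vs. stay (0)
# decisions and a single standardized "stimulus similarity" predictor.
rng = np.random.default_rng(0)
n_infants, n_trials = 20, 30
infant = np.repeat(np.arange(n_infants), n_trials)
similarity = rng.normal(size=n_infants * n_trials)
switch = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.5 * similarity)))

with pm.Model() as model:
    mu_a = pm.Normal("mu_a", 0.0, 1.0)                   # population-level intercept
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    a = pm.Normal("a", mu_a, sigma_a, shape=n_infants)   # per-infant intercepts
    b = pm.Normal("b", 0.0, 1.0)                         # effect of stimulus similarity
    pm.Bernoulli("obs", logit_p=a[infant] + b * similarity, observed=switch)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)
```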
ABSTRACT
Attention in social interactions is directed by social cues such as the face or eye region of an interaction partner. Several factors that influence these attentional biases have been identified in the past. However, most findings are based on paradigms with static stimuli and no interaction potential. Therefore, the current study investigated the influence of one of these factors, namely facial affect in natural social interactions using an evaluated eye-tracking setup. In a sample of 35 female participants, we examined how individuals' gaze behavior responds to changes in the facial affect of an interaction partner trained in affect modulation. Our goal was to analyze the effects on attention to facial features and to investigate their temporal dynamics in a natural social interaction. The study results, obtained from both aggregated and dynamic analyses, indicate that facial affect has only subtle influences on gaze behavior during social interactions. In a sample with high measurement precision, these findings highlight the difficulties of capturing the subtleties of social attention in more naturalistic settings. The methodology used in this study serves as a foundation for future research on social attention differences in more ecologically valid scenarios.
ABSTRACT
In recent years, the field of neuroscience has increasingly recognized the importance of studying animal behaviors in naturalistic environments to gain deeper insights into ethologically relevant behavioral processes and neural mechanisms. The common marmoset (Callithrix jacchus), due to its small size, prosocial nature, and genetic proximity to humans, has emerged as a pivotal model toward this effort. However, traditional research methodologies often fail to fully capture the nuances of marmoset social interactions and cooperative behaviors. To address this critical gap, we developed the Marmoset Apparatus for Automated Pulling (MarmoAAP), a novel behavioral apparatus designed for studying cooperative behaviors in common marmosets. MarmoAAP addresses the limitations of traditional behavioral research methods by enabling high-throughput, detailed behavior outputs that can be integrated with video and audio recordings, allowing for more nuanced and comprehensive analyses even in a naturalistic setting. We also highlight the flexibility of MarmoAAP in task parameter manipulation which accommodates a wide range of behaviors and individual animal capabilities. Furthermore, MarmoAAP provides a platform to perform investigations of neural activity underlying naturalistic social behaviors. MarmoAAP is a versatile and robust tool for advancing our understanding of primate behavior and related cognitive processes. This new apparatus bridges the gap between ethologically relevant animal behavior studies and neural investigations, paving the way for future research in cognitive and social neuroscience using marmosets as a model organism.
Cooperation is one of the most important and advanced forms of social behaviour, yet studying it in laboratory settings can be particularly challenging. This is partly because animal species typically used in research do not cooperate in a way similar to humans. More recently, marmosets have gained recognition as an important model for studying collaboration, as these small primates naturally exhibit cooperative behaviours. However, traditional research methods have struggled to capture these dynamics in a reliable and detailed way. A lack of approaches that allow researchers to methodically prompt naturalistic behaviours in freely moving animals under various controlled circumstances has hampered efforts to study the factors that influence cooperation. This limitation has also hindered investigations into the brain processes that underpin this unique social trait. To address this gap, Meisner et al. developed MarmoAAP, an apparatus that allows two marmosets in adjacent, transparent enclosures to observe each other and coordinate their actions so they can simultaneously pull levers and both receive a reward. This tool is compatible with advanced tracking technologies to monitor behaviour and brain activity. Testing revealed that the marmosets exhibited cooperative behaviour much more consistently and in greater numbers with MarmoAAP than in previous experiments using traditional, non-automated methods, making the apparatus an effective tool for studying this complex social behaviour. In addition to studying cooperation, MarmoAAP offers a standardised platform for testing the effects of drugs in marmosets, which could help develop new treatments for further testing in humans. Importantly, performance on the task could be precisely quantified using the detailed metrics provided by the apparatus. This is crucial for better understanding the factors that influence cooperative ability, and how these behaviours can be enhanced or disrupted. Neuroscientists could also use this combination of adaptable design and high-resolution data gathering to better understand brain activity in a wide range of complex primate behaviours.
Subjects
Animal Behavior, Callithrix, Cooperative Behavior, Animals, Callithrix/physiology, Animal Behavior/physiology, Social Behavior, Male, Female
ABSTRACT
The eyes play a special role in human communication. Previous psychological studies have reported reflexive attention orienting in response to another individual's eyes during live interactions. Although robots are expected to collaborate with humans in various social situations, it remains unclear whether robot eyes can trigger attention orienting similarly to human eyes, specifically on the basis of mental state attribution. We investigated this issue in a series of experiments using a live gaze-cueing paradigm with an android. In Experiment 1, the non-predictive cue was the eyes and head of an android placed in front of human participants. Light-emitting diodes in the periphery served as target signals. The reaction times (RTs) required to localize validly cued targets were faster than those for invalidly cued targets for both types of cues. In Experiment 2, the gaze direction of the android's eyes changed before the peripheral target lights appeared, with or without barriers that made the targets non-visible, such that the android did not attend to them. The RTs were faster for validly cued targets only when there were no barriers. In Experiment 3, the targets were changed from lights to sounds, which the android could attend to even in the presence of barriers. The RTs to the target sounds were faster with valid cues, irrespective of the presence of barriers. These results suggest that android eyes may automatically induce attention orienting in humans based on mental state attribution.