ABSTRACT
Rapid eye movement (REM) sleep is believed to have a binary temporal structure with "phasic" and "tonic" microstates, characterized by motor activity and quiescence, respectively. However, we observed in mice that the frequency of theta activity (a marker of rodent REM) fluctuates in a nonbinary fashion, with the extremes of that fluctuation correlating with phasic-type and tonic-type facial motricity. Phasic and tonic REM may therefore represent ends of a continuum. These cycles of brain physiology and facial movement occurred at infraslow frequencies (0.01 to 0.06 Hz) and affected cross-frequency coupling and neuronal activity in the neocortex, suggesting an impact on network function. We then analyzed human data and observed that humans also demonstrate nonbinary phasic/tonic microstates, with continuous 0.01 to 0.04-Hz respiratory rate cycles matching the incidence of eye movements. These fundamental properties of REM can inform our understanding of sleep health.
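As an illustration of how such an infraslow rhythm can be detected, the sketch below applies Welch's method to a synthetic, slowly varying signal standing in for the instantaneous theta frequency; the sampling rate, duration, and the 0.03-Hz cycle are assumptions for the example, not values from the study.

```python
# Sketch of detecting an infraslow (0.01-0.06 Hz) rhythm in a slowly varying
# signal, such as the instantaneous theta frequency during REM, with Welch's
# method. The signal here is synthetic; all parameters are illustrative.
import numpy as np
from scipy.signal import welch

fs = 1.0                                    # one theta-frequency estimate per second
t = np.arange(0, 3600, 1 / fs)              # one hour of REM (synthetic)
theta_freq = 7.0 + 0.5 * np.sin(2 * np.pi * 0.03 * t)  # assumed 0.03-Hz cycle
theta_freq += np.random.default_rng(7).normal(0, 0.2, t.size)

f, pxx = welch(theta_freq, fs=fs, nperseg=1024)
band = (f >= 0.01) & (f <= 0.06)            # the infraslow range reported above
print(f"peak infraslow frequency: {f[band][np.argmax(pxx[band])]:.3f} Hz")
```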
Subjects
Neocortex, REM Sleep, Humans, Animals, Mice, REM Sleep/physiology, Sleep/physiology, Eye Movements, Neocortex/physiology
ABSTRACT
Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been to manipulate their presence (e.g., by blurring the area of a speaker's lips) or to examine how informative different mouth patterns are for the corresponding phonemes (or visemes; e.g., /b/ is visually more salient than /g/). However, moving beyond the informativeness of single phonemes is challenging due to coarticulation and language variation (to name just a few factors). Here, we present mouth and facial informativeness (MaFI) for words, i.e., how visually informative words are based on their corresponding mouth and facial movements. MaFI was quantified for 2,276 English words, varying in length, frequency, and age of acquisition, using the phonological distance between a word and participants' speechreading guesses. The results showed that the MaFI norms capture well the dynamic nature of mouth and facial movements per word: words containing phonemes with roundness and frontness features, as well as visemes characterized by lower-lip tuck, lip rounding, and lip closure, are visually more informative. We also showed that the more of these features a word contains, the more informative it is based on mouth and facial movements. Finally, we demonstrated that the MaFI norms generalize across different varieties of English. The norms are freely accessible via the Open Science Framework (https://osf.io/mna8j/) and can benefit any language researcher using audiovisual stimuli (e.g., to control for the effect of speech-linked mouth and facial movements).
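A minimal sketch of phoneme-level phonological distance, the quantity underlying the MaFI norms, is shown below; the normalization and the averaging over guesses are illustrative assumptions, and the function names and phoneme labels are hypothetical, not the authors' exact scoring pipeline.

```python
# Minimal sketch of a phonological-distance score for speechreading guesses.
# Assumes words are already transcribed into phoneme lists; the normalization
# and the aggregation here are illustrative, not the authors' exact method.

def levenshtein(a, b):
    """Edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        curr = [i]
        for j, pb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (pa != pb)))   # substitution
        prev = curr
    return prev[-1]

def mafi_score(target, guesses):
    """Mean normalized phonological distance; lower = more visually informative."""
    dists = [levenshtein(target, g) / max(len(target), len(g)) for g in guesses]
    return sum(dists) / len(dists)

# Example: target /b ih g/ ("big") against two speechreading guesses.
print(mafi_score(["b", "ih", "g"], [["b", "ih", "g"], ["p", "ih", "k"]]))
```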
ABSTRACT
PURPOSE: To evaluate optimal stimulation parameters with regard to discomfort and tolerability for transcutaneous electrostimulation of facial muscles in healthy participants and patients with postparetic facial synkinesis. METHODS: Two prospective studies were performed. First, single-pulse monophasic stimulation with rectangular pulses was compared to stimulation with triangular pulses in 48 healthy controls. Second, 30 healthy controls were compared to 30 patients with postparetic facial synkinesis using rectangular pulses. Motor twitch threshold, tolerability threshold, and discomfort were assessed with a numeric rating scale at both thresholds. RESULTS: Discomfort at the motor threshold was significantly lower for rectangular than for triangular pulses. Average motor and tolerability thresholds were higher for patients than for healthy participants. Discomfort at the motor threshold was significantly lower for healthy controls than for patients. No major side effects were observed. CONCLUSIONS: Surface electrostimulation for selective, functional, and tolerable facial muscle contractions in patients with postparetic facial synkinesis is feasible.
Subjects
Electric Stimulation Therapy, Facial Paralysis, Synkinesis, Adult, Facial Muscles, Facial Paralysis/therapy, Humans, Prospective Studies, Synkinesis/etiology, Synkinesis/therapy
ABSTRACT
During seizures, a myriad of clinical manifestations may occur. The analysis of these signs, known as seizure semiology, gives clues to the underlying cerebral networks involved. When patients with drug-resistant epilepsy are monitored to assess their suitability for epilepsy surgery, semiology is a vital component of the presurgical evaluation. Specific patterns of facial movements, head motions, limb posturing and articulations, and hand and finger automatisms may be useful in distinguishing between mesial temporal lobe epilepsy (MTLE) and extratemporal lobe epilepsy (ETLE). However, this analysis is time-consuming and depends on clinical experience and training. Given this limitation, automated analysis of semiological patterns, i.e., the detection, quantification, and recognition of body movement patterns, has the potential to increase the diagnostic precision of localization. While a few single-modality quantitative approaches are available to assess seizure semiology, automated quantification of patients' behavior across multiple modalities has seen limited advances in the literature. This is largely due to the complicated variables commonly encountered in the clinical setting, such as analyzing subtle physical movements when the patient is covered or room lighting is inadequate. Semiology encompasses the stepwise/temporal progression of signs reflecting the integration of connected neuronal networks; thus, single signs in isolation are far less informative. Taking this into account, we describe here a novel modular, hierarchical, multimodal system that aims to detect and quantify semiologic signs recorded in 2D monitoring videos. Our approach jointly learns semiologic features from facial, body, and hand motions using computer vision and deep learning architectures. A dataset of 161 seizures arising from the temporal (n = 90) and extratemporal (n = 71) brain regions, collected at an Australian quaternary referral epilepsy unit, was used to quantitatively classify these types of epilepsy according to the semiology detected. A leave-one-subject-out (LOSO) cross-validation of semiological patterns from the face, body, and hands reached classification accuracies ranging between 12% and 83.4%, 41.2% and 80.1%, and 32.8% and 69.3%, respectively. The proposed hierarchical multimodal system is a potential stepping-stone towards a fully automated semiology analysis system to support the assessment of epilepsy.
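The evaluation scheme can be sketched as follows; the random features, subject assignments, and RandomForestClassifier stand-in are placeholders, since the paper's actual models are deep networks over video, but the leave-one-subject-out logic is the same.

```python
# Sketch of leave-one-subject-out (LOSO) evaluation, as used to validate the
# semiology classifier. Features and the random-forest stand-in are
# illustrative; the paper's models are deep networks over video.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.random((161, 64))               # one feature vector per seizure (placeholder)
y = np.r_[np.zeros(90), np.ones(71)]    # 0 = temporal, 1 = extratemporal
groups = rng.integers(0, 40, 161)       # subject ID for each seizure (placeholder)

accs = []
for train, test in LeaveOneGroupOut().split(X, y, groups):
    clf = RandomForestClassifier(n_estimators=200).fit(X[train], y[train])
    accs.append(clf.score(X[test], y[test]))   # held-out subject's seizures only
print(f"LOSO accuracy: {np.mean(accs):.3f}")
```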
Subjects
Automatism/physiopathology, Deep Learning, Temporal Lobe Epilepsy/diagnosis, Epilepsy/diagnosis, Face/physiopathology, Hand/physiopathology, Movement/physiology, Neurophysiological Monitoring/methods, Seizures/diagnosis, Biomechanical Phenomena, Datasets as Topic, Humans
ABSTRACT
A rich pattern of connectivity is present in non-human primates between the dorsal premotor cortex (PMCd) and the motor cortex (M1). By analogy, similar connections are hypothesized in humans between the PMCd and the ipsilateral hand-related M1. However, the technical difficulty of applying transcranial magnetic stimulation (TMS) with a dual-coil paradigm to two cortical regions in such close spatial proximity renders their in vivo demonstration difficult. The present work aims to assess in humans the existence of short-latency influences of the left PMCd on the ipsilateral corticofacial system by means of TMS. A dual-coil TMS paradigm was used with 16 participants. Test TMS pulses were applied to the left orofacial M1, and conditioning TMS pulses were applied to three distinct points of the ipsilateral PMCd along the caudal part of the superior frontal sulcus. The inter-stimulus interval (ISI) between condTMS and testTMS varied in 2-ms steps between 2 and 8 ms. Motor evoked potentials (MEPs) in the active orbicularis oris muscle were recorded. CondTMS exerted a robust effect on the corticofacial system only when applied to one specific portion of the PMCd and only at one specific ISI (6 ms). The effect consisted of a systematic suppression of facial MEPs compared with those obtained by testTMS alone. No other effects were found. We provide evidence for a specific short-latency inhibitory effect of the PMCd on the ipsilateral M1, likely reflecting direct corticocortical connectivity in humans. We also describe a novel paradigm to test ipsilateral PMCd-M1 interactions in humans.
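The standard analysis for such a dual-coil protocol, expressing each conditioned MEP relative to the test-alone baseline per ISI, can be sketched as follows; the amplitudes below are simulated placeholders, not data from the study.

```python
# Sketch of a standard dual-coil analysis: express each conditioned MEP as a
# ratio of the mean test-alone MEP, per ISI, and test for suppression.
# All amplitudes are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
test_meps = rng.normal(1.0, 0.2, 20)            # mV, testTMS-alone trials
isis = [2, 4, 6, 8]                             # inter-stimulus intervals (ms)
cond = {isi: rng.normal(0.7 if isi == 6 else 1.0, 0.2, 20) for isi in isis}

baseline = test_meps.mean()
for isi in isis:
    ratios = cond[isi] / baseline               # ratio < 1 indicates suppression
    t, p = stats.ttest_1samp(ratios, 1.0)       # differs from test-alone level?
    print(f"ISI {isi} ms: mean ratio {ratios.mean():.2f}, p = {p:.3f}")
```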
Subjects
Functional Laterality/physiology, Motor Cortex/physiology, Skeletal Muscle/physiology, Neural Pathways/physiology, Prefrontal Cortex/physiology, Transcranial Magnetic Stimulation, Adult, Analysis of Variance, Brain Mapping, Electromyography, Motor Evoked Potentials, Female, Humans, Magnetic Resonance Imaging, Male, Reaction Time, Young Adult
ABSTRACT
The Center for Epidemiologic Studies Depression Scale (CES-D) performs well in screening for depression in primary care. However, its large number of items makes shorter alternatives desirable. With the popularity of social media platforms, facial movement can be recorded ecologically. Given that nonverbal behaviors, including facial movement, are associated with a depressive state, this study aims to establish an automatic depression recognition model that can be easily used in primary healthcare. We integrated facial activities and gaze behaviors to establish a machine learning algorithm (kernel ridge regression, KRR). We compared different algorithms and different features to achieve the best model. The results showed that the predictive performance of combined facial and gaze features was higher than that of facial features alone. Of all the models we tried, the ridge model with a periodic kernel showed the best performance, with an R-squared (R²) value of 0.43 and a Pearson correlation coefficient (r) of 0.69 (p < 0.001). Finally, we identified the most relevant variables (e.g., gaze directions and facial action units).
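A minimal sketch of kernel ridge regression with a periodic kernel follows; the feature matrices, hyperparameters, and the precomputed-Gram route are assumptions for illustration, not the study's exact configuration.

```python
# Sketch of kernel ridge regression with a periodic kernel, the
# best-performing model in this study. Feature matrices, scores, and
# hyperparameters are placeholders; precomputing the Gram matrix is one
# straightforward way to combine scikit-learn's KernelRidge with a
# periodic (ExpSineSquared) kernel.
import numpy as np
from scipy.stats import pearsonr
from sklearn.kernel_ridge import KernelRidge
from sklearn.gaussian_process.kernels import ExpSineSquared
from sklearn.metrics import r2_score

rng = np.random.default_rng(2)
X_train, X_test = rng.random((80, 30)), rng.random((20, 30))  # facial + gaze features
y_train, y_test = rng.random(80), rng.random(20)              # depression scores

kernel = ExpSineSquared(length_scale=1.0, periodicity=3.0)    # periodic kernel
model = KernelRidge(alpha=1.0, kernel="precomputed")
model.fit(kernel(X_train), y_train)
y_pred = model.predict(kernel(X_test, X_train))

print(f"R2 = {r2_score(y_test, y_pred):.2f}, r = {pearsonr(y_test, y_pred)[0]:.2f}")
```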
ABSTRACT
Patients who have lost limb control, such as those with upper-limb amputation or high paraplegia, are often unable to care for themselves. Establishing a natural, stable, and comfortable human-computer interface (HCI) for controlling rehabilitation robots and other controllable equipment would address many of their difficulties. In this study, a complete limb-free face-computer interface (FCI) framework based on facial electromyography (fEMG), covering offline analysis and online control of mechanical equipment, was proposed. Six facial movements involving the eyebrows, eyes, and mouth were used in this FCI. In the offline stage, 12 models, eight types of features, and three feature-combination methods for model input were studied and compared in detail. In the online stage, four sessions were designed in which subjects controlled a robotic arm to complete a water-drinking task in three ways (by touch screen, and by fEMG with and without audio feedback) to verify and compare the performance of the proposed FCI framework. Three features and one model, with an average offline recognition accuracy of 95.3% (maximum 98.8%, minimum 91.4%), were selected for the online scenarios. The condition with audio feedback performed better than that without. All subjects completed the drinking task within a few minutes using the FCI. The average and smallest time differences between the touch screen and fEMG with audio feedback were only 1.24 and 0.37 min, respectively.
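The offline feature-extraction stage can be illustrated with generic time-domain EMG features; the windows, labels, and SVC classifier below are placeholders, and these four features are common examples rather than the three the study ultimately selected.

```python
# Sketch of time-domain fEMG features commonly used for facial-movement
# classification. The signals and labels are simulated placeholders; the
# classifier is one of many the study could have compared.
import numpy as np
from sklearn.svm import SVC

def emg_features(window):
    """Mean absolute value, RMS, waveform length, zero crossings."""
    mav = np.mean(np.abs(window))
    rms = np.sqrt(np.mean(window ** 2))
    wl = np.sum(np.abs(np.diff(window)))
    zc = np.sum(window[:-1] * window[1:] < 0)   # sign changes = zero crossings
    return np.array([mav, rms, wl, zc])

rng = np.random.default_rng(3)
windows = rng.normal(size=(120, 256))           # 120 windows of raw fEMG samples
labels = rng.integers(0, 6, 120)                # six facial movements

X = np.vstack([emg_features(w) for w in windows])
clf = SVC().fit(X, labels)
print(f"training accuracy: {clf.score(X, labels):.2f}")
```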
ABSTRACT
Using computer-vision and image-processing techniques, we aim to identify specific visual cues induced by facial movements made during monosyllabic speech production. The method is named ADFAC: Automatic Detection of Facial Articulatory Cues. Four facial points of interest were detected automatically to represent head, eyebrow, and lip movements: the nose tip (a proxy for head movement), the medial point of the left eyebrow, and the midpoints of the upper and lower lips. The detected points were then automatically tracked in the subsequent video frames. Critical features such as distance, velocity, and acceleration, describing local facial movements with respect to each speaker's resting face, were extracted from the positional profiles of each tracked point. A variant of random forest is proposed to determine which facial features are significant in classifying speech sound categories. The method takes both video and audio as input and extracts features from any video with a plain or simple background. The method is implemented in MATLAB and the scripts are available on GitHub for easy access.
• Uses computer-vision and image-processing techniques to automatically detect and track keypoints on the face during speech production in videos, allowing more natural articulation than previous sensor-based approaches.
• Measures multi-dimensional and dynamic facial movements by extracting time-related, distance-related, and kinematics-related features in speech production.
• Adopts a random forest classification approach to determine and rank the significance of facial features for accurate speech sound categorization.
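A sketch of the distance/velocity/acceleration features such a pipeline derives from tracked points, and of ranking them with a random forest, is shown below; the simulated tracking output and feature names are assumptions, since the actual implementation is the MATLAB code on GitHub.

```python
# Sketch of ADFAC-style kinematic features from a tracked facial point, ranked
# with a random forest. Tracking output is simulated; the real method detects
# and tracks points in video.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kinematic_features(xy, rest_xy, fps=30.0):
    """Distance from rest plus speed and acceleration of one tracked point."""
    dist = np.linalg.norm(xy - rest_xy, axis=1)   # distance to resting face
    vel = np.gradient(dist, 1.0 / fps)            # velocity profile
    acc = np.gradient(vel, 1.0 / fps)             # acceleration profile
    return np.array([dist.max(), np.abs(vel).max(), np.abs(acc).max()])

rng = np.random.default_rng(4)
trials = rng.random((200, 45, 2))                 # 200 clips x 45 frames x (x, y)
rest = np.array([0.5, 0.5])                       # resting-face position
X = np.vstack([kinematic_features(t, rest) for t in trials])
y = rng.integers(0, 4, 200)                       # speech-sound categories

forest = RandomForestClassifier(n_estimators=300).fit(X, y)
for name, imp in zip(["max_dist", "max_vel", "max_acc"], forest.feature_importances_):
    print(f"{name}: {imp:.2f}")                   # feature significance ranking
```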
ABSTRACT
Background: Many methods have been proposed to automatically identify the presence of mental illness, but most have focused on a single specific disorder. In some non-professional scenarios, it would be more helpful to understand an individual's mental health status from all perspectives. Methods: We recruited 100 participants. Their multi-dimensional psychological symptoms were evaluated using the Symptom Checklist 90 (SCL-90), and their facial movements under neutral stimulation were recorded using Microsoft Kinect. We extracted the time-series characteristics of the facial key points as the input, and the subscale scores of the SCL-90 as the output, to build facial prediction models. Finally, convergent validity, discriminant validity, criterion validity, and split-half reliability were assessed using a multitrait-multimethod matrix and correlation coefficients. Results: The correlation coefficients between predicted values and actual scores ranged from 0.26 to 0.42 (P < 0.01), indicating good criterion validity. All models except depression had high convergent validity but low discriminant validity. Results also indicated good split-half reliability for each model [from 0.516 (hostility) to 0.817 (interpersonal sensitivity)] (P < 0.001). Conclusion: The validity and reliability of facial prediction models were confirmed for the measurement of mental health based on the SCL-90. Our research demonstrates that fine-grained aspects of mental health can be identified from the face and provides a feasible evaluation method for multi-dimensional prediction models.
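Split-half reliability with the Spearman-Brown correction can be sketched as follows; the simulated item scores and the odd/even split are illustrative assumptions, not the study's data.

```python
# Sketch of split-half reliability with the Spearman-Brown correction, one of
# the psychometric checks applied to the facial prediction models. Scores are
# simulated; the split and correction shown are the textbook procedure.
import numpy as np

rng = np.random.default_rng(5)
# 100 subjects x 20 items, with a shared subject factor so halves correlate.
items = rng.normal(size=(100, 20)) + rng.normal(size=(100, 1))

odd_half = items[:, 0::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
r_half = np.corrcoef(odd_half, even_half)[0, 1]
spearman_brown = 2 * r_half / (1 + r_half)   # reliability of the full-length scale
print(f"split-half reliability: {spearman_brown:.3f}")
```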
ABSTRACT
OBJECTIVES/HYPOTHESIS: Using surface electrostimulation, we aimed to perform facial nerve mapping (FNM) in healthy subjects and patients with postparetic facial synkinesis (PPFS) to define functional facial target regions that can be stimulated selectively. STUDY DESIGN: Single-center prospective cohort study. METHODS: FNM was performed bilaterally in 20 healthy subjects and 20 patients with PPFS. Single-pulse surface FNM started at the main trunk of the facial nerve and followed the peripheral branches in a distal direction. Stimulation started at 0.1 mA and increased in 0.1-mA increments. The procedure was simultaneously video-recorded and evaluated offline. RESULTS: A total of 1,873 spots were stimulated, and 1,875 facial movements were evaluated. The stimulation threshold was higher on the PPFS side (average = 9.8 ± 1.0 mA) compared to the contralateral side (4.1 ± 0.8 mA) for all stimulation sites, or compared to healthy subjects (4.1 ± 0.5 mA; all P < .01). In healthy subjects, selective electrostimulation ± one unintended coactivation was possible at all sites in >80% of cases, with the exception of pulling up the corner of the mouth (65%-75%). On the PPFS side, stimulation was possible for lip-puckering movements in 60%/75% (selective stimulation ± one coactivation, respectively), blinking in 55%/80%, pulling up the corner of the mouth in 50%/85%, brow raising in 5%/85%, and raising the chin in 0%/35% of patients, respectively. CONCLUSIONS: FNM for surgical planning and selective electrostimulation of functional facial regions is possible even in patients with PPFS. FNM may be a tool for patient-specific evaluation and placement of electrodes to stimulate the correct nerve branches in future bionic devices (e.g., for a bionic eye blink). LEVEL OF EVIDENCE: 2b Laryngoscope, 130:E320-E326, 2020.
Subjects
Electric Stimulation Therapy/methods, Facial Muscles/innervation, Facial Nerve/physiopathology, Facial Paralysis/therapy, Synkinesis/therapy, Facial Muscles/physiopathology, Facial Paralysis/physiopathology, Follow-Up Studies, Humans, Prospective Studies, Video Recording
ABSTRACT
Functional (psychogenic) movement disorders (FMDs) may present with a broad spectrum of phenomenology, including stereotypic movements. We aimed to characterize the phenomenology of functional stereotypies and compare these features with those observed in 65 patients with tardive dyskinesia (TD). From a cohort of 184 patients with FMDs, we identified 19 (10.3%) with functional stereotypies (FS). There were 15 women and 4 men, with a mean age at onset of 38.6 ± 17.4 years. Among the patients with FS, 9 (47%) had orolingual dyskinesia/stereotypy, 9 (47%) limb stereotypies, 6 (32%) trunk stereotypies, and 2 (11%) respiratory dyskinesia as part of an orofacial-laryngeal-trunk stereotypy. These patients showed signs commonly seen in FMDs, such as sudden onset (84%), prominent distractibility (58%), and periods of unexplained improvement (84%), that were not reported in patients with TD. Besides a much lower frequency of exposure to potentially offending drugs, patients with FS differed from those with classic TD by a younger age at onset, lack of self-biting, uncommon chewing movements, more frequent lingual movements without mouth dyskinesia, and associated functional tremor and abnormal speech. Lack of self-biting showed the highest sensitivity (1.0), and abnormal speech showed the highest specificity (0.9), for the diagnosis of functional orolingual dyskinesia. FS represent part of the clinical spectrum of FMDs. Clinical and demographic features are helpful in distinguishing patients with FS from those with TD.
Subjects
Psychophysiologic Disorders/diagnosis, Psychophysiologic Disorders/physiopathology, Stereotypic Movement Disorder/diagnosis, Stereotypic Movement Disorder/physiopathology, Tardive Dyskinesia/diagnosis, Tardive Dyskinesia/physiopathology, Adult, Age of Onset, Differential Diagnosis, Face, Female, Humans, Male, Middle Aged, Retrospective Studies, Sensitivity and Specificity, Stereotypic Movement Disorder/etiology
ABSTRACT
BACKGROUND: Functional magnetic resonance imaging (fMRI) can map the cortical areas activated during movement, but little is known about the precise localization of facial and tongue movements. OBJECTIVE: To investigate the representation of facial and tongue movements with task fMRI. METHODS: Twenty right-handed healthy subjects underwent a block-design task fMRI examination. Task movements included lip pursing, cheek bulging, grinning, and vertical tongue excursion. Statistical parametric mapping (SPM8) was applied to analyze the data. RESULTS: A one-sample t-test was used to identify the areas commonly activated by facial and tongue movements, and paired t-tests were used to test for areas of over- or underactivation in tongue movement compared with each group of facial movements. CONCLUSIONS: The common areas across facial and tongue movements suggested similar motor circuits of activation for both. Preferential activation for tongue movement was situated more laterally and inferiorly in the sensorimotor area relative to facial movements. Preferential activation for tongue movement relative to lip pursing was found in the left superior parietal lobe, and preferential activation in the bilateral cuneus was detected for grinning compared with tongue movement.
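The group-level statistics described here can be sketched with simulated per-subject contrast estimates; the ROI values and effect sizes below are placeholders, not results from the study.

```python
# Sketch of the group-level statistics described above: a one-sample t-test on
# subject-level contrast estimates for common activation, and a paired t-test
# comparing tongue vs. face conditions. Values are simulated stand-ins for
# per-subject beta estimates from SPM8.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
betas_face = rng.normal(0.4, 0.3, 20)      # 20 subjects, one ROI (placeholder)
betas_tongue = rng.normal(0.6, 0.3, 20)

t_common, p_common = stats.ttest_1samp((betas_face + betas_tongue) / 2, 0.0)
t_diff, p_diff = stats.ttest_rel(betas_tongue, betas_face)
print(f"common activation: t = {t_common:.2f}, p = {p_common:.4f}")
print(f"tongue vs. face:   t = {t_diff:.2f}, p = {p_diff:.4f}")
```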
Subjects
Brain Mapping/methods, Brain Waves, Facial Expression, Facial Muscles/physiology, Magnetic Resonance Imaging, Motor Activity, Motor Cortex/physiology, Tongue/physiology, Adult, Facial Muscles/innervation, Female, Humans, Male, Middle Aged, Predictive Value of Tests, Tongue/innervation
ABSTRACT
OBJECTIVES/HYPOTHESIS: To examine, by intraoperative electric stimulation, which peripheral facial nerve (FN) branches are functionally connected to which facial muscle functions. STUDY DESIGN: Single-center prospective clinical study. METHODS: Seven patients whose peripheral FN branching was exposed during parotidectomy under FN monitoring received systematic electrostimulation of each branch, starting at 0.1 mA and increasing stepwise to 2 mA at a frequency of 3 Hz. The electrostimulation and the facial and neck movements were video-recorded simultaneously and evaluated independently by two investigators. RESULTS: A uniform functional allocation of specific peripheral FN branches to a specific mimic movement was not possible. Stimulation across the whole spectrum of branches of the temporofacial division could lead to eye closure (orbicularis oculi muscle function). Stimulation across the spectrum of nerve branches of the cervicofacial division could lead to reactions in the midface (nasal and zygomatic muscles) as well as around the mouth (orbicularis oris and depressor anguli oris muscle function). The frontal and eye regions were exclusively supplied by the temporofacial division. The mouth region and the neck were exclusively supplied by the cervicofacial division. The nasal and zygomatic regions were mainly supplied by the temporofacial division, but some patients also had nerve branches of the cervicofacial division functionally supplying these regions. CONCLUSIONS: FN branches distal to the temporofacial and cervicofacial division are not necessarily covered by common facial nerve monitoring. Future bionic devices will need patient-specific evaluation to stimulate the correct peripheral nerve branches to trigger distinct muscle functions. LEVEL OF EVIDENCE: 4 Laryngoscope, 127:1288-1295, 2017.
Subjects
Electric Stimulation/methods, Facial Muscles/innervation, Facial Nerve/physiology, Cheek/innervation, Eyelids/innervation, Face/innervation, Facial Muscles/surgery, Facial Nerve/surgery, Female, Humans, Male, Masticatory Muscles/innervation, Middle Aged, Mouth/innervation, Orbit/innervation, Parotid Gland/surgery, Prospective Studies
ABSTRACT
Whereas the somatotopy of finger movements has been extensively studied with neuroimaging, the neural foundations of facial movements remain elusive. We therefore systematically studied the neuronal correlates of voluntary facial movements using the Facial Action Coding System (FACS; Ekman et al., 2002). The facial movements performed in the MRI scanner were defined as Action Units (AUs) and were controlled by a certified FACS coder. The main goal of the study was to investigate the detailed somatotopy of the facial primary motor area (facial M1). Eighteen participants were asked to produce four facial movements in the fMRI scanner: AU1+2 (brow raiser), AU4 (brow lowerer), AU12 (lip corner puller), and AU24 (lip presser), each in alternation with a resting phase. The facial movement task induced generally high activation in brain motor areas (e.g., M1, premotor cortex, supplementary motor area, putamen), as well as in the thalamus, insula, and visual cortex. BOLD activations revealed overlapping representations for the four facial movements. However, within the activated facial M1 areas, we found distinct activity peaks in the left and right hemispheres, supporting a rough upper-to-lower-face somatotopic organization within the right facial M1 and a somatotopic organization within its upper-face part. In both hemispheres, the lower-face representations showed an inverse somatotopic order. In contrast to the right hemisphere, in the left hemisphere the representation of AU4 was located more laterally and anteriorly than the other facial movements. Our findings support the notion of a partial somatotopic order within the M1 face area, confirming the "like attracts like" principle (Donoghue et al., 1992): AUs that are often used together, or are similar, are located close to each other in the motor cortex.
ABSTRACT
In this review, we introduce three of our studies that focused on facial movements. In the first study, we examined the temporal characteristics of neural responses elicited by viewing mouth movements and assessed differences between responses to mouth-opening and mouth-closing movements and an eye-aversion condition. Our results showed that the occipitotemporal area, the human MT/V5 homologue, was active in the perception of both mouth and eye motions. Viewing mouth and eye movements did not elicit significantly different activity in the occipitotemporal area, indicating that the perception of movements of facial parts may be processed in the same manner, distinct from motion in general. In the second study, we investigated whether early activity in the occipitotemporal region evoked by eye movements is influenced by the facial contour and/or features such as the mouth. Our results revealed specific information processing for eye movements in the occipitotemporal region; this activity was significantly influenced by whether the movements appeared within the facial contour and/or features, in other words, whether the eyes moved, even when the movement itself was identical. In the third study, we examined the effects of inverting the facial contour (hair and chin) and features (eyes, nose, and mouth) on processing for static and dynamic face perception. Our results showed the following: (1) in static face perception, activity in the right fusiform area was affected more by the inversion of features, whereas activity in the left fusiform area was affected more by disruption of the spatial relationship between the contour and features; and (2) in dynamic face perception, activity in the right occipitotemporal area was affected by inversion of the facial contour.
ABSTRACT
A left visual field (LVF) bias has been consistently reported in adults' eye movement patterns when looking at face stimuli, reflecting hemispheric lateralization of face processing and eye movements. However, the emergence of the LVF attentional bias in infancy is less clear. The present study investigated the emergence and development of the LVF attentional bias in infants from 3 to 9 months of age with moving face stimuli. We specifically examined the role of the naturalness of facial movements in infants' LVF attentional bias by comparing eye movement patterns for naturally and artificially moving faces. Results showed that 3- to 5-month-olds exhibited the LVF attentional bias only in the lower half of naturally moving faces, but not in artificially moving faces. Six- to 9-month-olds showed the LVF attentional bias in both the lower and upper face halves, again only for naturally moving, not artificially moving, faces. These results suggest that the LVF attentional bias for face processing may emerge around 3 months of age and is driven by natural facial movements. The LVF attentional bias reflects the role of natural face experience in real-life situations, which may drive the development of hemispheric lateralization of face processing in infancy.
Subjects
Bias, Face, Functional Laterality/physiology, Movement/physiology, Visual Pattern Recognition/physiology, Visual Fields/physiology, Eye Movements, Female, Humans, Infant, Male, Photic Stimulation
ABSTRACT
Both the science and the everyday practice of detecting a lie rest on the same assumption: hidden cognitive states that the liar would like to remain hidden nevertheless influence observable behavior. This assumption is well supported. The insights of professional interrogators, anecdotal evidence, and body-language textbooks have built up a sizeable catalog of nonverbal cues claimed to distinguish deceptive from truthful behavior. Typically, these cues are discrete, individual behaviors (a hand touching a mouth, the rise of a brow) that distinguish lies from truths solely in terms of their frequency or duration. Research to date has failed to establish any of these nonverbal cues as a reliable marker of deception. Here we argue that this may be because simple tallies of behavior miss the rich but subtle organization of behavior as it unfolds over time. Research in cognitive science from a dynamical systems perspective has shown that behavior is structured across multiple timescales, with more or less regularity and structure. Using tools that are sensitive to these dynamics, we analyzed body motion data from an experiment that put participants in a realistic situation of choosing, or not, to lie to an experimenter. Our analyses indicate that when participants are deceptive, continuous fluctuations of movement in the upper face, and to some extent in the arms, show dynamical properties of lower stability but greater complexity. For the upper face, these distinctions are present despite no apparent differences in the overall amount of movement between deception and truth. We suggest that these distinctive dynamical signatures of motion reflect both the cognitive demands inherent to deception and the need to respond adaptively in a social context.
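One common complexity measure from the dynamical-systems toolbox, sample entropy, can illustrate the kind of analysis involved; the abstract does not specify the paper's exact measures, so the implementation and signals below are assumptions for demonstration.

```python
# Sketch of sample entropy, a standard complexity measure for movement time
# series. It captures the kind of "greater complexity" that can appear without
# any change in overall movement amount. Signals here are synthetic.
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn: negative log of the conditional probability that sequences
    matching for m points also match for m + 1 points (within tolerance r)."""
    x = np.asarray(x, dtype=float)
    r = r_frac * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return (np.sum(d <= r) - len(templates)) / 2   # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b)

rng = np.random.default_rng(8)
regular = np.sin(np.linspace(0, 20 * np.pi, 500))      # stable, predictable motion
irregular = regular + rng.normal(0, 0.3, 500)          # same amplitude, more complex
print(sample_entropy(regular), sample_entropy(irregular))  # irregular > regular
```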