Results 1 - 20 of 3,664
1.
CBE Life Sci Educ ; 23(2): ar16, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38620007

ABSTRACT

Interpreting three-dimensional models of biological macromolecules is a key skill in biochemistry, closely tied to students' visuospatial abilities. As students interact with these models and explain biochemical concepts, they often use gesture to complement verbal descriptions. Here, we utilize an embodied cognition-based approach to characterize undergraduate students' gesture production as they described and interpreted an augmented reality (AR) model of potassium channel structure and function. Our analysis uncovered two emergent patterns of gesture production employed by students, as well as common sets of gestures linked across categories of biochemistry content. Additionally, we present three cases that highlight changes in gesture production following interaction with a 3D AR visualization. Together, these observations highlight the importance of attending to gesture in learner-centered pedagogies in undergraduate biochemistry education.


Subjects
Gestures, Students, Humans, Biochemistry/education
2.
Proc Biol Sci ; 291(2020): 20240250, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38565151

ABSTRACT

Communication needs to be complex enough to be functional while minimizing learning and production costs. Recent work suggests that the vocalizations and gestures of some songbirds, cetaceans and great apes may conform to linguistic laws that reflect this trade-off between efficiency and complexity. In studies of non-human communication, though, clustering signals into types cannot be done a priori, and decisions about the appropriate grain of analysis may affect statistical signals in the data. The aim of this study was to assess the evidence for language-like efficiency and structure in house finch (Haemorhous mexicanus) song across three levels of granularity in syllable clustering. The results show strong evidence for Zipf's rank-frequency law, Zipf's law of abbreviation and Menzerath's law. Additional analyses show that house finch songs have small-world structure, thought to reflect systematic structure in syntax, and the mutual information decay of sequences is consistent with a combination of Markovian and hierarchical processes. These statistical patterns are robust across three levels of granularity in syllable clustering, pointing to a limited form of scale invariance. In sum, it appears that house finch song has been shaped by pressure for efficiency, possibly to offset the costs of female preferences for complexity.
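The laws tested above are easy to state computationally. Below is a minimal sketch of checking Zipf's rank-frequency law and Zipf's law of abbreviation on invented syllable data; the sequences and per-type durations are toy values, not from the study, and the ranking comparison is only a crude stand-in for the correlation tests real analyses use.

```python
from collections import Counter

# Invented toy data: each song is a sequence of syllable-type labels,
# with an assumed mean duration (seconds) per type.
songs = [
    ["a", "b", "a", "c", "a", "b"],
    ["a", "c", "a", "d", "b", "a"],
    ["b", "a", "a", "e", "c", "a"],
]
durations = {"a": 0.08, "b": 0.12, "c": 0.15, "d": 0.22, "e": 0.30}

freq = Counter(s for song in songs for s in song)

# Zipf's rank-frequency law: counts fall off monotonically with rank.
counts = [n for _, n in freq.most_common()]
assert all(x >= y for x, y in zip(counts, counts[1:]))

# Zipf's law of abbreviation: more frequent types tend to be shorter.
# In this toy data the frequency ranking and the shortest-first
# duration ranking coincide exactly.
by_freq = [t for t, _ in freq.most_common()]
by_duration = sorted(durations, key=durations.get)
print(by_freq == by_duration)  # True
```

A real analysis would replace the ordering check with a Spearman correlation between type frequency and duration, computed at each level of syllable-clustering granularity.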


Subjects
Finches, Animals, Female, Language, Linguistics, Learning, Gestures, Cetacea, Animal Vocalization
3.
J Neural Eng ; 21(2)2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38565124

ABSTRACT

Objective. Recent studies have shown that integrating inertial measurement unit (IMU) signals with surface electromyographic (sEMG) signals can greatly improve hand gesture recognition (HGR) performance in applications such as prosthetic control and rehabilitation training. However, current deep learning models for multimodal HGR encounter difficulties in invasive modal fusion, complex feature extraction from heterogeneous signals, and limited inter-subject model generalization. To address these challenges, this study aims to develop an end-to-end, inter-subject transferable model that utilizes non-invasively fused sEMG and acceleration (ACC) data. Approach. The proposed non-invasive modal fusion-transformer (NIMFT) model utilizes 1D-convolutional neural network-based patch embedding for local information extraction and employs a multi-head cross-attention (MCA) mechanism to non-invasively integrate sEMG and ACC signals, stabilizing the variability induced by sEMG. The proposed architecture undergoes detailed ablation studies after hyperparameter tuning. Transfer learning is employed by fine-tuning a pre-trained model on a new subject, and a comparative analysis is performed between the fine-tuned and subject-specific models. Additionally, the performance of NIMFT is compared to state-of-the-art fusion models. Main results. The NIMFT model achieved recognition accuracies of 93.91%, 91.02%, and 95.56% on the three action sets in the Ninapro DB2 dataset. The proposed embedding method and MCA outperformed the traditional invasive modal fusion transformer by 2.01% (embedding) and 1.23% (fusion), respectively. Compared to subject-specific models, the fine-tuned model exhibited the highest average accuracy improvement of 2.26%, achieving a final accuracy of 96.13%. Moreover, the NIMFT model demonstrated superiority in accuracy, recall, precision, and F1-score over the latest modal fusion models of similar scale. Significance. NIMFT is a novel end-to-end HGR model that utilizes a non-invasive MCA mechanism to integrate long-range intermodal information effectively. Compared to recent modal fusion models, it demonstrates superior performance in inter-subject experiments and offers higher training efficiency and accuracy through transfer learning than subject-specific approaches.
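The fusion idea behind cross-attention can be sketched in a few lines. This is a single-head toy version, not the paper's multi-head model: token counts, dimensions, and the identity (unlearned) projections are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys_values):
    """Queries from one modality attend to tokens of the other.
    Learned Q/K/V projections are omitted for brevity."""
    d_k = queries.shape[-1]
    scores = queries @ keys_values.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)       # each row sums to 1
    return weights @ keys_values, weights

rng = np.random.default_rng(0)
semg_tokens = rng.normal(size=(16, 8))   # 16 hypothetical sEMG patch embeddings
acc_tokens = rng.normal(size=(4, 8))     # 4 hypothetical ACC patch embeddings
fused, weights = cross_attention(semg_tokens, acc_tokens)
print(fused.shape)  # (16, 8)
```

Each fused sEMG token is a convex combination of the ACC tokens, which is one way to see how the steadier accelerometer stream can stabilize sEMG variability.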


Subjects
Gestures, Recognition (Psychology), Mental Recall, Electric Power Supplies, Neural Networks (Computer), Electromyography
4.
PLoS One ; 19(4): e0298699, 2024.
Article in English | MEDLINE | ID: mdl-38574042

ABSTRACT

Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity of capturing fine-grained details. In response, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of VGG16 integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. The Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting, and resilience against noisy and incomplete data. Additionally, model performance is further optimized through hyperparameter optimization, using Optuna in conjunction with hill climbing to efficiently explore the hyperparameter space and discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
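To illustrate what an attention module contributes, here is a parameterless spatial-attention gate: each location of a feature map is re-weighted by a softmax over its mean channel response. LAVRF's actual modules are learned end-to-end inside the network; this unlearned version is only a sketch of the mechanism.

```python
import numpy as np

def spatial_attention(feature_map):
    """Re-weight a (channels, height, width) feature map so that
    locations with stronger mean activation dominate."""
    c, h, w = feature_map.shape
    saliency = feature_map.mean(axis=0).ravel()      # (h*w,)
    e = np.exp(saliency - saliency.max())
    weights = (e / e.sum()).reshape(h, w)            # sums to 1
    return feature_map * weights[None, :, :], weights

rng = np.random.default_rng(42)
fmap = rng.normal(size=(8, 4, 4))    # toy stand-in for a VGG16 feature map
gated, attn = spatial_attention(fmap)
print(gated.shape)  # (8, 4, 4)
```

In the full pipeline, the gated feature maps would be flattened and passed to the Random Forest classifier rather than to VGG16's fully connected head.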


Subjects
Random Forest Algorithm, Sign Language, Humans, Pattern Recognition, Automated/methods, Gestures, Upper Extremity
5.
Sci Rep ; 14(1): 7906, 2024 04 04.
Article in English | MEDLINE | ID: mdl-38575710

ABSTRACT

This paper delves into the specialized domain of human action recognition, focusing on the identification of Indian classical dance poses, specifically in Bharatanatyam. Within the dance context, a "Karana" embodies a synchronized and harmonious movement encompassing body, hands, and feet, as defined by the Natyashastra. The essence of Karana lies in the amalgamation of nritta hasta (hand movements), sthaana (body postures), and chaari (leg movements). The Natyashastra codifies 108 karanas, showcased in the intricate stone carvings adorning the Nataraj temples of Chidambaram, where Lord Shiva's association with these movements is depicted. Automating pose identification in Bharatanatyam is challenging due to the vast array of variations, encompassing hand and body postures, mudras (hand gestures), facial expressions, and head gestures. To simplify this intricate task, this research employs image processing and automation techniques. The proposed methodology comprises four stages: acquisition and pre-processing of images involving skeletonization and data augmentation; feature extraction from images; classification of dance poses using a deep learning convolutional neural network model (InceptionResNetV2); and visualization of 3D models through mesh creation from point clouds. The use of advanced technologies, such as the MediaPipe library for body keypoint detection and deep learning networks, streamlines the identification process. Data augmentation, a pivotal step, expands small datasets, enhancing the model's accuracy. The convolutional neural network model proved effective in accurately recognizing intricate dance movements, paving the way for streamlined analysis and interpretation. This innovative approach not only simplifies the identification of Bharatanatyam poses but also sets a precedent for enhancing accessibility and efficiency for practitioners and researchers in Indian classical dance.
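Once 2D body keypoints have been extracted (the paper uses the MediaPipe library for this), simple geometry already yields pose descriptors such as joint angles. The landmark choices below are illustrative, not the paper's feature set.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c,
    e.g. the elbow angle from shoulder, elbow and wrist points."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos_t = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Sanity checks: a right angle and a fully extended limb.
print(joint_angle((1, 0), (0, 0), (0, 1)))   # 90.0
print(joint_angle((1, 0), (0, 0), (-1, 0)))  # 180.0
```

A vector of such angles per frame is one common, scale-invariant input representation for a pose classifier like the one described above.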


Subjects
Augmented Reality, Humans, Neural Networks (Computer), Image Processing, Computer-Assisted/methods, Head, Gestures
6.
Cogn Sci ; 48(3): e13428, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38528790

ABSTRACT

Public speakers like politicians carefully craft their words to maximize the clarity, impact, and persuasiveness of their messages. However, these messages can be shaped by more than words. Gestures play an important role in how spoken arguments are perceived, conceptualized, and remembered by audiences. Studies of political speech have explored the ways spoken arguments are used to persuade audiences and cue applause. Studies of politicians' gestures have explored the ways politicians illustrate different concepts with their hands, but have not focused on gesture's potential as a tool of persuasion. Our paper combines these traditions to ask first, how politicians gesture when using spoken rhetorical devices aimed at persuading audiences, and second, whether these gestures influence the ways their arguments are perceived. Study 1 examined two rhetorical devices-contrasts and lists-used by three politicians during U.S. presidential debates and asked whether the gestures produced during contrasts and lists differ. Gestures produced during contrasts were more likely to involve changes in hand location, and gestures produced during lists were more likely to involve changes in trajectory. Study 2 used footage from the same debates in an experiment to ask whether gesture influenced the way people perceived the politicians' arguments. When participants had access to gestural information, they perceived contrasted items as more different from one another and listed items as more similar to one another than they did when they only had access to speech. This was true even when participants had access to only gesture (in muted videos). We conclude that gesture is effective at communicating concepts of similarity and difference and that politicians (and likely other speakers) take advantage of gesture's persuasive potential.


Subjects
Gestures, Speech, Humans, Language, Language Development, Hand
7.
Sensors (Basel) ; 24(5)2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38474890

ABSTRACT

RF-based gesture recognition systems outperform computer vision-based systems in terms of user privacy. The integration of Wi-Fi sensing and deep learning has opened new application areas for intelligent multimedia technology. Although promising, existing systems have two main limitations: (1) they only work well in a fixed domain; and (2) when working in a new domain, they require the recollection of a large amount of data. These limitations lead either to subpar cross-domain performance or to a huge amount of human effort, impeding widespread adoption in practical scenarios. We propose Wi-AM, a privacy-preserving gesture recognition framework, to address these limitations. Wi-AM can accurately recognize gestures in a new domain with only one sample. To remove irrelevant disturbances induced by interfering domain factors, we design a multi-domain adversarial scheme to reduce the differences in data distribution between domains and extract the maximum amount of transferable gesture-related features. Moreover, to quickly adapt to an unseen domain with only a few samples, Wi-AM adopts a meta-learning framework to fine-tune the trained model for a new domain in a one-sample-per-gesture manner while achieving accurate cross-domain performance. Extensive experiments on a real-world dataset demonstrate that Wi-AM can recognize gestures in an unseen domain with an average accuracy of 82.13% and 86.76% for one and three data samples, respectively.
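The "one sample per gesture" adaptation step can be pictured as prototype matching: assuming a domain-invariant feature extractor (Wi-AM trains one adversarially; here the embeddings are simply random vectors), the single labelled example per gesture in the new domain becomes a prototype, and queries are assigned to the nearest one. Gesture names and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# One hypothetical embedding per gesture from the new domain.
prototypes = {g: rng.normal(size=8) for g in ["push", "pull", "swipe"]}

def one_shot_classify(x, prototypes):
    """Nearest-prototype (Euclidean) classification."""
    return min(prototypes, key=lambda g: np.linalg.norm(x - prototypes[g]))

# A query near the "swipe" prototype should be labelled "swipe".
query = prototypes["swipe"] + 0.05 * rng.normal(size=8)
print(one_shot_classify(query, prototypes))  # swipe
```

The meta-learning stage in the paper goes further, fine-tuning the extractor itself, but nearest-prototype matching is the simplest way to see why one sample per class can suffice once features transfer across domains.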


Subjects
Gestures, Pattern Recognition, Automated, Humans, Recognition (Psychology), Information Technology, Intelligence, Algorithms
8.
Sensors (Basel) ; 24(6)2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38544014

ABSTRACT

This study investigates the characteristics of a novel origami-based, elastomeric actuator and a soft gripper, which are controlled by hand gestures that are recognized through machine learning algorithms. The lightweight paper-elastomer structure employed in this research exhibits distinct actuation features in four key areas: (1) It requires approximately 20% less pressure for the same bending amplitude compared to pneumatic network actuators (Pneu-Net) of equivalent weight, and even less pressure compared to other actuators with non-linear bending behavior; (2) The control of the device is examined by validating the relationship between pressure and the bending angle, as well as the interaction force and pressure at a fixed bending angle; (3) A soft robotic gripper comprising three actuators is designed. Enveloping and pinch grasping experiments are conducted on various shapes, which demonstrate the gripper's potential in handling a wide range of objects for numerous applications; and (4) A gesture recognition algorithm is developed to control the gripper using electromyogram (EMG) signals from the user's muscles.
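The pressure-to-bending-angle relationship validated in point (2) lends itself to a simple calibration workflow: fit the measured pairs, then invert the fit to find the pressure command for a target angle. All numbers below are made up for illustration; the paper's actual calibration data and fit form may differ.

```python
import numpy as np

# Hypothetical calibration pairs: supply pressure (kPa) vs bending angle (deg).
pressure_kpa = np.array([0, 10, 20, 30, 40, 50], float)
bend_deg = np.array([0, 14, 31, 49, 70, 93], float)   # mildly non-linear

# Quadratic least-squares fit of angle as a function of pressure.
predict = np.poly1d(np.polyfit(pressure_kpa, bend_deg, 2))

# Invert numerically: which pressure produces a 60-degree bend?
grid = np.linspace(0, 50, 5001)
p_cmd = grid[np.argmin(np.abs(predict(grid) - 60.0))]
print(round(float(p_cmd), 1))
```

For a real actuator one would also account for hysteresis and the load-dependent interaction force mentioned in point (2), which a single static fit does not capture.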


Subjects
Algorithms, Elastomers, Electromyography, Gestures, Machine Learning
9.
Sensors (Basel) ; 24(6)2024 Mar 20.
Article in English | MEDLINE | ID: mdl-38544240

ABSTRACT

Radio frequency (RF) technology has been applied to enable advanced behavioral sensing in human-computer interaction, owing to its device-free sensing capability and wide availability on Internet of Things devices. Enabling finger gesture-based identification with high accuracy can be challenging due to low RF signal resolution and user heterogeneity. In this paper, we propose MeshID, a novel RF-based user identification scheme that enables identification through finger gestures with high accuracy. MeshID significantly improves sensing sensitivity to RF signal interference, and hence is able to extract subtle individual biometrics through velocity distribution profiling (VDP) features from less-distinct finger motions such as drawing digits in the air. We design an efficient few-shot model retraining framework based on a first component reverse module, achieving high model robustness and performance in complex environments. We conduct comprehensive real-world experiments, and the results show that MeshID achieves a user identification accuracy of 95.17% on average in three indoor environments. The results indicate that MeshID outperforms the state-of-the-art in identification performance at a lower cost.


Subjects
Algorithms, Gestures, Humans, Pattern Recognition, Automated/methods, Fingers, Motion (Physics)
10.
Med Eng Phys ; 125: 104131, 2024 03.
Article in English | MEDLINE | ID: mdl-38508805

ABSTRACT

Variations in muscular contraction are known to significantly impact the quality of the generated EMG signal and the output decision of a proposed classifier. This is an issue when the classifier is implemented in a prosthetic hand design. Therefore, this study aims to develop a deep learning classifier to improve the classification of hand motion gestures and to investigate the effect of force variations on classification accuracy in amputees. The resulting deep learning architecture, based on a DNN (deep neural network), could recognize the six gestures and was robust against different force levels (18 combinations). Additionally, this study recommends the channels that contribute most to the classifier's accuracy. Selected time-domain features were used by the classifier to recognize 18 combinations of EMG signal patterns (six gestures at three force levels). The average accuracy of the proposed method (DNN) was 92.0 ± 6.1%. Several other classifiers were used for comparison: support vector machine (SVM), decision tree (DT), K-nearest neighbors (KNN), and linear discriminant analysis (LDA). The mean accuracy of the proposed method exceeded these conventional classifiers by 17.86%. The findings suggest that the proposed method can be applied to developing prosthetic hands for amputees that recognize multi-force gestures.
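The abstract mentions selected time-domain features without listing them; the four classic sEMG descriptors are the usual candidates, sketched here on a synthetic signal: mean absolute value (MAV), root mean square (RMS), waveform length (WL), and zero-crossing count (ZC).

```python
import numpy as np

def td_features(x, zc_thresh=0.01):
    """Classic EMG time-domain features for one analysis window."""
    x = np.asarray(x, float)
    mav = np.mean(np.abs(x))                 # mean absolute value
    rms = np.sqrt(np.mean(x ** 2))           # root mean square
    wl = np.sum(np.abs(np.diff(x)))          # waveform length
    sign_change = x[:-1] * x[1:] < 0
    big_enough = np.abs(x[:-1] - x[1:]) > zc_thresh   # noise gate
    zc = int(np.sum(sign_change & big_enough))        # zero crossings
    return mav, rms, wl, zc

# A 5 Hz tone sampled at 200 Hz for 1 s stands in for a window of sEMG.
t = np.linspace(0, 1, 200, endpoint=False)
mav, rms, wl, zc = td_features(np.sin(2 * np.pi * 5 * t))
print(round(mav, 3), round(rms, 3), zc)
```

In a multi-channel setup these features would be computed per channel and concatenated into the classifier's input vector.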


Subjects
Amputees, Deep Learning, Humans, Electromyography, Gestures, Neural Networks (Computer), Algorithms
11.
Math Biosci Eng ; 21(3): 3594-3617, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38549297

ABSTRACT

A Multiscale-Motion Embedding Pseudo-3D (MME-P3D) gesture recognition algorithm has been proposed to tackle the issues of excessive parameters and high computational complexity encountered by existing gesture recognition algorithms deployed in mobile and embedded devices. The algorithm initially takes into account the characteristics of gesture motion information, integrating a channel attention (CA) mechanism into the pseudo-3D (P3D) module, thereby constructing a P3D-C feature extraction network that can efficiently extract spatio-temporal feature information while reducing the complexity of the algorithmic model. To further enhance the understanding and learning of the global gesture movement's dynamic information, a Multiscale Motion Embedding (MME) mechanism is subsequently designed. The experimental findings reveal that the MME-P3D model achieves recognition accuracies reaching up to 91.12% and 83.06% on the self-constructed conference gesture dataset and the publicly available ChaLearn 2013 dataset, respectively. In comparison with the conventional 3D convolutional neural network, the MME-P3D model demonstrates a significant advantage in parameter count and computational requirements, which are reduced by as much as 82% and 83%, respectively. This effectively addresses the limitations of the original algorithms, making them more suitable for deployment on embedded and mobile devices and providing a more effective means for the practical application of hand gesture recognition technology.


Subjects
Endrin/analogs & derivatives, Gestures, Pattern Recognition, Automated, Algorithms, Neural Networks (Computer)
12.
Curr Biol ; 34(6): R231-R232, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38531311

ABSTRACT

Gestures are ubiquitous in human communication, involving movements of body parts produced for a variety of purposes, such as pointing out objects (deictic gestures) or conveying messages (symbolic gestures) [1]. While displays of body parts have been described in many animals [2], their functional similarity to human gestures has primarily been explored in great apes [3,4], with little research attention given to other animal groups. To date, only a few studies have provided evidence for deictic gestures in birds and fish [5-7], but it is unclear whether non-primate animals can employ symbolic gestures, such as waving to mean 'goodbye', which are, in humans, more cognitively demanding than deictic gestures [1]. Here, we report that the Japanese tit (Parus minor), a socially monogamous bird, uses wing-fluttering to prompt its mated partner to enter the nest first, and that wing-fluttering functions as a symbolic gesture conveying a specific message ('after you'). Our findings encourage further research on animal gestures, which may help in understanding the evolution of complex communication, including language.


Subjects
Birds, Gestures, Animals, Animal Communication
13.
Article in English | MEDLINE | ID: mdl-38427549

ABSTRACT

We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wristband configuration. sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time. After an initial model calibration, participants were presented with one of three types of feedback during a human-learning stage: veridical feedback, in which predicted probabilities from the gesture classification algorithm were displayed without alteration; modified feedback, in which we applied a hidden augmentation of error to these probabilities; and no feedback. User performance was then evaluated in a series of minigames, in which subjects were required to use eight gestures to manipulate their game avatar to complete a task. Experimental results indicated that relative to the baseline, the modified feedback condition led to significantly improved accuracy. Class separation also improved, though this trend was not significant. These findings suggest that real-time feedback in a gamified user interface with manipulation of feedback may enable intuitive, rapid, and accurate task acquisition for sEMG-based gesture recognition applications.


Subjects
Algorithms, Gestures, Humans, Electromyography/methods, Feedback
14.
Article in English | MEDLINE | ID: mdl-38427548

ABSTRACT

The poor generalization performance and heavy training burden of gesture classification models are two main barriers hindering the commercialization of sEMG-based human-machine interaction (HMI) systems. To overcome these challenges, eight unsupervised transfer learning (TL) algorithms built on convolutional neural networks (CNNs) were explored and compared on a dataset consisting of 10 gestures from 35 subjects. The highest classification accuracy, obtained by CORrelation ALignment (CORAL), reaches more than 90%, which is 10% higher than the methods without TL. In addition, the proposed model outperforms four common traditional classifiers (KNN, LDA, SVM, and Random Forest) using minimal calibration data (two repeated trials per gesture). The results also demonstrate that the model has great transfer robustness and flexibility in cross-gesture and cross-day scenarios: an accuracy of 87.94% was achieved using calibration gestures different from those used in model training, and an accuracy of 84.26% using calibration data collected on a different day. As these outcomes confirm, the proposed CNN TL method provides a practical solution for freeing new users from the complicated acquisition paradigm of the calibration process before using sEMG-based HMI systems.
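The core of CORAL, the best performer among the eight compared algorithms, has a compact closed form: whiten the source features, then re-color them with the target covariance so the second-order statistics of the two domains match (with a mean shift added here as well). The paper embeds this idea in a CNN; the sketch below is the classic linear version on toy Gaussian data, not the paper's implementation.

```python
import numpy as np

def mat_power(m, p):
    """Symmetric matrix power via eigendecomposition."""
    w, v = np.linalg.eigh(m)
    return (v * w ** p) @ v.T

def coral(source, target, eps=1e-5):
    """Align source features to the target domain's covariance and mean."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False) + eps * np.eye(d)
    ct = np.cov(target, rowvar=False) + eps * np.eye(d)
    whitened = (source - source.mean(axis=0)) @ mat_power(cs, -0.5)
    return whitened @ mat_power(ct, 0.5) + target.mean(axis=0)

rng = np.random.default_rng(0)
src = rng.normal(size=(500, 4)) @ rng.normal(size=(4, 4))         # source domain
tgt = rng.normal(size=(600, 4)) @ rng.normal(size=(4, 4)) + 2.0   # shifted target
aligned = coral(src, tgt)
# After alignment, the source covariance closely matches the target's.
print(np.allclose(np.cov(aligned, rowvar=False),
                  np.cov(tgt, rowvar=False), atol=1e-2))  # True
```

Because the transform needs no target labels, it fits the calibration-free motivation above: a new user's unlabeled sEMG suffices to estimate the target statistics.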


Subjects
Gestures, Neural Networks (Computer), Humans, Calibration, Electromyography/methods, Algorithms, Machine Learning
15.
Cogn Sci ; 48(3): e13425, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38500335

ABSTRACT

Temporal perspectives allow us to place ourselves and temporal events on a timeline, making it easier to conceptualize time. This study investigates how we take different temporal perspectives in our temporal gestures. We asked participants (n = 36) to retell temporal scenarios written in the Moving-Ego, Moving-Time, and Time-Reference-Point perspectives in spontaneous and encouraged gesture conditions. Participants took temporal perspectives mostly in similar ways regardless of the gesture condition. Perspective comparisons showed that temporal gestures of our participants resonated better with the Ego- (i.e., Moving-Ego and Moving-Time) versus Time-Reference-Point distinction instead of the classical Moving-Ego versus Moving-Time contrast. Specifically, participants mostly produced more Moving-Ego and Time-Reference-Point gestures for the corresponding scenarios and speech; however, the Moving-Time perspective was not adopted more than the others in any condition. Similarly, the Moving-Time gestures did not favor an axis over the others, whereas Moving-Ego gestures were mostly sagittal and Time-Reference-Point gestures were mostly lateral. These findings suggest that we incorporate temporal perspectives into our temporal gestures to a considerable extent; however, the classical Moving-Ego and Moving-Time classification may not hold for temporal gestures.


Subjects
Gestures, Time Perception, Humans, Speech, Time
16.
Infant Behav Dev ; 74: 101927, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38428279

ABSTRACT

Infants actively initiate social interactions aiming to elicit different types of responses from other people. This study aimed to document a variety of communicative interactions initiated by 18-month-old Turkish infants from diverse SES (N = 43) with their caregivers in their natural home settings. The infant-initiated interactions such as use of deictic gestures (e.g., pointing, holdouts), action demonstrations, vocalizations, and non-specific play actions were coded from video recordings and classified into two categories as need-based and non-need-based. Need-based interactions were further classified as a) biological (e.g., feeding); b) socio-emotional (e.g., cuddling), and non-need-based interactions (i.e., communicative intentions) were coded as a) expressive, b) requestive; c) information/help-seeking; d) information-giving. Infant-initiated non-need-based (88%) interactions were more prevalent compared to need-based interactions (12%). Among the non-need-based interactions, 50% aimed at expressing or sharing attention or emotion, 26% aimed at requesting an object or an action, and 12% aimed at seeking information or help. Infant-initiated information-giving events were rare. We further investigated the effects of familial SES and infant sex, finding no effect of either on the number of infant-initiated interactions. These findings suggest that at 18 months, infants actively communicate with their social partners to fulfil their need-based and non-need-based motivations using a wide range of verbal and nonverbal behaviors, regardless of their sex and socio-economic background. This study thoroughly characterizes a wide and detailed range of infant-initiated spontaneous communicative bids in hard-to-access contexts (infants' daily lives at home) and with a traditionally underrepresented non-WEIRD population.


Subjects
Gestures, Infant Behavior, Infant, Humans, Infant Behavior/physiology, Intention, Emotions, Attention/physiology
17.
Artif Intell Med ; 149: 102777, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462279

ABSTRACT

Accurate finger gesture recognition with surface electromyography (sEMG) is an essential and long-standing challenge in muscle-computer interfaces, and many high-performance deep learning models have been developed to predict gestures. For these models, problem-specific tuning of the network architecture is essential for improving performance, yet it requires substantial knowledge of network architecture design and a commitment of time and effort. This process thus imposes a major obstacle to the widespread and flexible application of modern deep learning. To address this issue, we present an auto-learning search framework (ALSF) to generate integrated block-wise neural networks (IBWNN) for sEMG-based gesture recognition. An IBWNN contains several feature extraction blocks and dimensionality reduction layers, and each feature extraction block integrates two sub-blocks (a multi-branch convolutional block and a triplet attention block). Meanwhile, ALSF generates optimal models for gesture recognition through reinforcement learning. The results show that the generated models yield state-of-the-art results compared to modern popular networks on the open dataset Ninapro DB5. Moreover, compared to other networks, the generated models have fewer parameters and can be deployed in practical applications with less resource consumption.


Subjects
Gestures, Neural Networks (Computer), Electromyography/methods, Recognition (Psychology), Attention, Algorithms
18.
Anim Cogn ; 27(1): 18, 2024 Mar 02.
Article in English | MEDLINE | ID: mdl-38429467

ABSTRACT

Gestures play a central role in the communication systems of several animal families, including primates. In this study, we provide a first assessment of the gestural system of a Platyrrhine species, Geoffroy's spider monkey (Ateles geoffroyi). We observed a wild group of 52 spider monkeys and assessed the distribution of visual and tactile gestures in the group, the size of individual repertoires, and the intentionality and effectiveness of individuals' gestural production. Our results showed that younger spider monkeys were more likely than older ones to use tactile gestures. In contrast, we found no inter-individual differences in the probability of producing visual gestures. Repertoire size did not vary with age, but the probability of accounting for recipients' attentional state was higher for older monkeys than for younger ones, especially for gestures in the visual modality. Using vocalizations right before the gesture increased the probability of gesturing towards attentive recipients and of receiving a response, although age had no effect on the probability of gestures being responded to. Overall, our study provides first evidence of gestural production in a Platyrrhine species and confirms this taxon as a valid candidate for research on animal communication.


Subjects
Ateles geoffroyi, Atelinae, Humans, Animals, Gestures, Animal Communication, Individuality
19.
J Epidemiol Popul Health ; 72(2): 202194, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38523401

ABSTRACT

BACKGROUND: The COVID-19 pandemic led many countries to drastically limit social activities. The objective of this study is to describe the factors associated with compliance with protective measures and social distancing in the general adult population in France, between March and December 2020 (first and second waves of the epidemic), before vaccination began at the end of December 2020. METHOD: The data come from the CoviPrev repeated cross-sectional descriptive survey, conducted between March 2020 and December 2022 in metropolitan France. The data collected from March to December 2020 (19 survey waves), from a panel representative of the general population, were used. Three periods were defined: the first epidemic wave (March-April), the inter-wave period (May-June) and the second epidemic wave (November-December). A compliance score was constructed to measure systematic compliance with the five main measures. The association between systematic compliance and different variables (sociodemographic, mental health, level of health literacy, perceived severity of COVID-19, confidence in government, perceived effectiveness of the measures) was described using bivariate and multivariate logistic regression models, using the statistical software R. RESULTS: Systematic compliance with the preventive measures changed over time. Regardless of the period, being a woman, being over 50, perceiving COVID-19 as severe, having a high level of health literacy or anxiety were positively associated with compliance. Having a child under 16 years of age and perceiving the measures as effective were positively associated with compliance with the protective measures during the epidemic waves; conversely, having a high level of depression, living alone, not working were negatively associated in the first epidemic wave. 
Finally, during the inter-wave period, living in an area heavily affected during the first wave and having a high level of education were positively and negatively associated with systematic compliance with the preventive measures, respectively. CONCLUSION: The factors associated with compliance with the protective measures and social distancing evolved during the epidemic. Monitoring this evolution, in order to adapt communication and awareness strategies, is essential in the context of pandemic response.


Subjects
COVID-19, SARS-CoV-2, Adult, Female, Child, Humans, Pandemics/prevention & control, Physical Distancing, Cross-Sectional Studies, Gestures, France
20.
Res Dev Disabil ; 148: 104711, 2024 May.
Article in English | MEDLINE | ID: mdl-38520885

ABSTRACT

BACKGROUND: Studies on late talkers (LTs) have highlighted their heterogeneity and the relevance of describing different communicative profiles. AIMS: To examine lexical skills and gesture use in expressive (E-LTs) vs. receptive-expressive (R/E-LTs) late talkers through a structured task. METHODS AND PROCEDURES: Forty-six screened 30-month-old LTs were classified as E-LTs (n = 35) or R/E-LTs (n = 11) according to their receptive skills. Lexical skills and gesture use were assessed with a Picture Naming Game by coding answer accuracy (correct, incorrect, no response), modality of expression (spoken, spoken-gestural, gestural), type of gesture (deictic, representational), and the semantic relationship of spoken-gestural answers (complementary, equivalent, supplementary). OUTCOMES AND RESULTS: R/E-LTs showed lower scores than E-LTs for noun and predicate comprehension, with fewer correct answers, and for production, with fewer correct and incorrect answers and more no-responses. R/E-LTs also exhibited lower scores in spoken answers, representational gestures, and equivalent spoken-gestural answers for noun production, and in all spoken and gestural answers for predicate production. CONCLUSIONS AND IMPLICATIONS: The findings highlight more impaired receptive and expressive lexical skills and lower gesture use in R/E-LTs compared to E-LTs, underlining the relevance of assessing both lexical and gestural skills through a structured task, besides parental questionnaires and developmental scales, to describe LTs' communicative profiles.


Assuntos
Gestos , Transtornos do Desenvolvimento da Linguagem , Humanos , Compreensão/fisiologia , Pais , Testes de Linguagem , Vocabulário