Showing: 20 | 50 | 100
Results 1 - 20 of 3,225
1.
Sensors (Basel) ; 22(11)2022 Jun 01.
Article in English | MEDLINE | ID: mdl-35684843

ABSTRACT

Manual wheelchair dance is an artistic, recreational, and sport activity for people with disabilities that is becoming increasingly popular. It has been reported that a significant part of the dance is dedicated to propulsion. Furthermore, wheelchair dance professionals such as Gladys Foggea highlight the need to monitor the quantity and timing of propulsions for assessment and learning. This study addresses these needs by proposing a wearable system, called WISP, based on inertial sensors capable of detecting and characterizing propulsion gestures. In our initial configuration, three inertial sensors were placed on the hands and the back. Two machine learning classifiers were used for online bilateral recognition of basic propulsion gestures (forward, backward, and dance). A conditional block was then implemented to reconstruct eight specific propulsion gestures. The online paradigm, based on a sliding-window method, is intended for real-time assessment applications. We therefore evaluated the accuracy of the classifiers in two configurations: "three-sensor" and "two-sensor". Results showed that the "two-sensor" configuration recognized the propulsion gestures with an accuracy of 90.28%. Finally, the system can quantify propulsions and measure their timing in a manual wheelchair dance choreography, showing its possible applications in the teaching of dance.
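The sliding-window segmentation underlying the online paradigm described above can be sketched in a few lines. This is a generic illustration with hypothetical window parameters, not the WISP implementation:

```python
import numpy as np

def sliding_windows(signal, window_len, step):
    """Segment a multi-channel inertial signal into overlapping windows.

    signal: (n_samples, n_channels) array; window_len and step in samples.
    Returns an array of shape (n_windows, window_len, n_channels), ready to
    be fed window-by-window to an online gesture classifier.
    """
    n_samples = signal.shape[0]
    starts = range(0, n_samples - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

# Hypothetical stream: 3 IMU channels sampled for 1000 steps,
# 200-sample windows with 50% overlap.
x = np.random.randn(1000, 3)
w = sliding_windows(x, window_len=200, step=100)
print(w.shape)  # (9, 200, 3)
```

Each window would then be classified independently, which is what makes the scheme usable in real time.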


Subjects
Disabled Persons , Wearable Electronic Devices , Wheelchairs , Biomechanical Phenomena , Gestures , Hand , Humans
2.
Sensors (Basel) ; 22(11)2022 Jun 02.
Article in English | MEDLINE | ID: mdl-35684880

ABSTRACT

There have been several studies of hand gesture recognition for human-machine interfaces. Early solutions were mostly vision-based and usually raised privacy problems that make them unusable in some scenarios. To address these privacy issues, more and more non-vision-based hand gesture recognition techniques have been proposed. This paper proposes a dynamic hand gesture system based on 60 GHz FMCW radar that can be used for contactless device control. We receive the radar signals of hand gestures and transform them into human-understandable domains such as range, velocity, and angle. With these signatures, the system can be customized for different scenarios. We propose an end-to-end trained deep learning model (a neural network with long short-term memory) that extracts features from the transformed radar signals and classifies them into hand gesture labels. In our training data collection, a camera is used only to support labeling the hand gesture data. The accuracy of our model reaches 98%.
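The transformation of raw FMCW returns into range and velocity signatures mentioned above is conventionally a pair of FFTs over fast and slow time. The following is a generic sketch on synthetic data, not the authors' pipeline:

```python
import numpy as np

def range_doppler_map(chirps):
    """Compute a range-Doppler map from one frame of FMCW chirps.

    chirps: (n_chirps, n_samples_per_chirp) complex beat signal.
    The fast-time FFT resolves range; the slow-time FFT resolves velocity.
    """
    range_fft = np.fft.fft(chirps, axis=1)                       # range bins
    rd = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)  # velocity bins
    return np.abs(rd)

# Hypothetical frame: 64 chirps of 128 samples containing a single moving target
# (one beat frequency plus a chirp-to-chirp Doppler phase rotation).
n_chirps, n_samples = 64, 128
t = np.arange(n_samples)
c = np.arange(n_chirps)[:, None]
frame = np.exp(2j * np.pi * (0.1 * t + 0.05 * c))
rd = range_doppler_map(frame)
print(rd.shape)  # (64, 128)
```

A gesture then appears as a time series of such maps, which is what a CNN + LSTM stack can classify.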


Subjects
Gestures , Recognition (Psychology) , Humans , Long-Term Memory , Doppler Ultrasonography , Upper Extremity
3.
Stud Health Technol Inform ; 290: 1034-1035, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673192

ABSTRACT

Providing urgent and emergency care to migrant children is often hampered or delayed. One reason is the language barrier that arises when children and their caregivers do not speak any of the languages commonly spoken in Switzerland, which include German, French, Italian, and English. Through a participatory design process, we want to develop a novel image-based digital communication aid tailored to the needs of migrant patients and nurses within Swiss paediatric clinics.


Subjects
Communication Barriers , Emergency Medical Services , Child , Communication , Gestures , Humans , Language , Nonverbal Communication
4.
Sci Rep ; 12(1): 6950, 2022 06 09.
Article in English | MEDLINE | ID: mdl-35680934

ABSTRACT

The dog (Canis familiaris) was the first domesticated animal, and hundreds of breeds exist today. During domestication, dogs experienced strong selection for temperament, behaviour, and cognitive ability. However, the genetic basis of these abilities is not well understood. We focused on ancient dog breeds to investigate breed-related differences in social cognitive abilities. In a problem-solving task, ancient breeds showed a lower tendency to look back at humans than other European breeds. In a two-way object choice task, they showed no differences in correct response rate or ability to read human communicative gestures. We examined gene polymorphisms in oxytocin, the oxytocin receptor, the melanocortin 2 receptor, and a Williams-Beuren syndrome-related gene (WBSCR17) as candidate genes of dog domestication. The single-nucleotide polymorphisms on the melanocortin 2 receptor were related to both tasks, while other polymorphisms were associated with the unsolvable task. This indicates that glucocorticoid functions are involved in the cognitive skills acquired during dog domestication.


Subjects
Dogs , Domestication , Human-Animal Interaction , Animals , Domestic Animals , Animal Behavior/physiology , Communication , Dogs/genetics , Gestures , Humans , N-Acetylgalactosaminyltransferases/genetics , Oxytocin , Single Nucleotide Polymorphism , Melanocortin Type 2 Receptor/genetics , Oxytocin Receptors/genetics
5.
Article in English | MEDLINE | ID: mdl-35622796

ABSTRACT

How to learn informative representations from electromyography (EMG) signals is of vital importance for myoelectric control systems. Traditionally, hand-crafted features are extracted from individual EMG channels and combined for pattern recognition. The spatial topological information between different channels can also be informative, but it is seldom considered. This paper presents a novel approach that extracts spatial structural information across EMG channels based on the symmetric positive definite (SPD) manifold. The objective is to learn non-Euclidean representations of EMG signals for myoelectric pattern recognition. The performance is compared with two classical feature sets using accuracy and F1-score. The algorithm is tested on eleven gestures collected from ten subjects; the best accuracy reaches 84.85% ± 5.15%, an improvement of 4.04%-20.25% over the comparison methods, and the improvement is significant under the Wilcoxon signed-rank test. Eleven gestures from three public databases (Ninapro DB2, DB4, and DB5) are also evaluated, with better performance observed. Furthermore, the computational cost is lower than that of the comparison methods, making the approach more suitable for low-cost systems. These results show the effectiveness of the presented approach and contribute a new way of performing myoelectric pattern recognition.
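A common way to realize SPD-manifold features from multi-channel EMG — a plausible reading of the approach above, though the paper's exact construction may differ — is a regularized channel covariance mapped to a Euclidean tangent space via the matrix logarithm:

```python
import numpy as np
from scipy.linalg import logm

def spd_log_feature(emg_window):
    """Map a multi-channel EMG window onto the SPD manifold and back to a
    Euclidean vector via the log-Euclidean mapping.

    emg_window: (n_samples, n_channels). Returns the upper-triangular part
    of log(C), where C is a regularized inter-channel covariance matrix.
    """
    x = emg_window - emg_window.mean(axis=0)
    c = x.T @ x / (len(x) - 1)                 # channel covariance (SPD-ish)
    c += 1e-6 * np.eye(c.shape[0])             # ensure positive definiteness
    log_c = logm(c)                            # matrix logarithm
    iu = np.triu_indices(c.shape[0])
    return np.real(log_c[iu])

# Hypothetical 8-channel window: the feature is 8*(8+1)/2 = 36-dimensional.
feat = spd_log_feature(np.random.randn(400, 8))
print(feat.shape)  # (36,)
```

Vectors produced this way can be fed to any ordinary classifier while still encoding the spatial structure between channels.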


Subjects
Algorithms , Automated Pattern Recognition , Electromyography/methods , Gestures , Hand , Humans , Automated Pattern Recognition/methods
6.
ACS Appl Mater Interfaces ; 14(22): 25629-25637, 2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35612540

ABSTRACT

A multifunctional wearable tactile sensor assisted by deep learning algorithms is developed, which can realize gesture recognition and interaction. The tactile sensor fuses a triboelectric nanogenerator and a piezoelectric nanogenerator into a hybrid self-powered sensor with higher power density and sensitivity. The power generation performance is characterized by an open-circuit voltage VOC of 200 V, a short-circuit current ISC of 8 µA, and a power density of 0.35 mW cm-2 under a matching load. It also has excellent sensitivity, including a response time of 5 ms, a signal-to-noise ratio of 22.5 dB, and a pressure resolution of 1% (1-10 kPa). The sensor is integrated on a glove to collect the electrical signals generated by gestures. Using deep learning algorithms, gesture recognition and control can be realized in real time. The combination of the tactile sensor and deep learning algorithms provides ideas and guidance for applications in artificial intelligence, such as human-computer interaction, signal monitoring, and smart sensing.


Subjects
Deep Learning , Electric Power Supplies , Artificial Intelligence , Electricity , Gestures , Humans
7.
J Speech Lang Hear Res ; 65(6): 2309-2326, 2022 Jun 08.
Article in English | MEDLINE | ID: mdl-35617450

ABSTRACT

PURPOSE: Children with autism are found to have delayed and heterogeneous gesture abilities. It is important to understand the growth of gesture abilities and the underlying factors affecting their growth; addressing these issues can help in designing effective intervention programs. METHOD: Thirty-five Chinese-speaking preschoolers with autism spectrum disorder (M age = 4.89 years, SD = 0.91; four girls) participated in four play sessions with their parents over 9 months. Child-based factors including autism severity, intellectual functioning, and expressive language abilities were assessed. The gestures (deictic, iconic, and conventional) of the children and their parents were coded. Growth curve analyses were conducted to examine individual growth trajectories and the roles of child-based factors and parental input in shaping the children's gesture development. RESULTS: Child-based factors and parental input predicted gesture development differently. Parents' gestures positively predicted their children's gestures of the same type. Autism severity negatively predicted iconic and conventional gestures. Overall growth was found in deictic rather than iconic and conventional gestures. Subgroup variation was also found: children with better expressive language ability showed a decrease in deictic gestures, whereas an increase in iconic and conventional gestures was found in children with more severe autism and in those with poorer expressive language ability and intellectual functioning, respectively. CONCLUSIONS: Different types of gestures may have different growth trajectories and be predicted by different child-based factors. Particular attention should be given to children who never produced iconic gestures, which are more challenging, may not develop over a short period, and hence require direct instruction.


Subjects
Autism Spectrum Disorder , Autistic Disorder , Preschool Child , China , Female , Gestures , Humans , Language , Language Development , Parents
8.
Cognition ; 225: 105127, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35617850

ABSTRACT

Speakers' visual attention to events is guided by linguistic conceptualization of information in spoken language production and in language-specific ways. Does production of language-specific co-speech gestures further guide speakers' visual attention during message preparation? Here, we examine the link between visual attention and multimodal event descriptions in Turkish. Turkish is a verb-framed language where speakers' speech and gesture show language specificity with path of motion mostly expressed within the main verb accompanied by path gestures. Turkish-speaking adults viewed motion events while their eye movements were recorded during non-linguistic (viewing-only) and linguistic (viewing-before-describing) tasks. The relative attention allocated to path over manner was higher in the linguistic task compared to the non-linguistic task. Furthermore, the relative attention allocated to path over manner within the linguistic task was higher when speakers (a) encoded path in the main verb versus outside the verb and (b) used additional path gestures accompanying speech versus not. Results strongly suggest that speakers' visual attention is guided by language-specific event encoding not only in speech but also in gesture. This provides evidence consistent with models that propose integration of speech and gesture at the conceptualization level of language production and suggests that the links between the eye and the mouth may be extended to the eye and the hand.


Subjects
Concept Formation , Gestures , Adult , Eye Movements , Humans , Perception , Speech
9.
Sensors (Basel) ; 22(10)2022 May 11.
Article in English | MEDLINE | ID: mdl-35632058

ABSTRACT

Upper limb amputation severely affects a person's quality of life and activities of daily living. In the last decade, many robotic hand prostheses have been developed which are controlled using various sensing technologies such as artificial vision, tactile sensing, and surface electromyography (sEMG). If controlled properly, these prostheses can significantly improve the daily life of hand amputees by providing them with more autonomy in physical activities. However, despite the advancements in sensing technologies, as well as the excellent mechanical capabilities of the prosthetic devices, their control is often limited and usually requires a long time for training and user adaptation. Myoelectric prostheses use signals from residual stump muscles to restore the function of the lost limb seamlessly. However, using sEMG signals in robotics as user control signals is complicated by the presence of noise and the need for heavy computational power. In this article, we developed motion intention classifiers for transradial (TR) amputees based on EMG data by implementing various machine learning and deep learning models. We benchmarked the performance of these classifiers by their overall generalization across classes, and we present a systematic study of the impact of time-domain features and pre-processing parameters on the performance of the classification models. Our results showed that ensemble learning and deep learning algorithms outperformed classical machine learning algorithms. Investigating the effect of varying the sliding window on feature-based and non-feature-based classification models revealed an interesting correlation with the level of amputation. The study also analyzed classifier performance across amputation conditions, since the history and conditions of amputation differ for each amputee. These results are vital for understanding the development of machine learning-based classifiers for assistive robotic applications.
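The "time domain features" benchmarked in studies like the one above usually mean the classical Hudgins set. A minimal sketch for one channel follows; the feature subset and zero-crossing threshold are illustrative, not the authors' exact configuration:

```python
import numpy as np

def td_features(window, zc_thresh=0.01):
    """Classical time-domain sEMG features for one channel:
    mean absolute value (MAV), waveform length (WL), zero crossings (ZC).
    """
    mav = np.mean(np.abs(window))
    wl = np.sum(np.abs(np.diff(window)))           # total vertical travel
    signs = np.sign(window)
    zc = np.sum((signs[:-1] * signs[1:] < 0)       # sign change...
                & (np.abs(np.diff(window)) > zc_thresh))  # ...above noise floor
    return np.array([mav, wl, zc])

# Deterministic toy window: alternating +/-1 samples.
print(td_features(np.array([1.0, -1.0, 1.0, -1.0])))  # [1. 6. 3.]
```

Per-channel feature vectors are concatenated across channels and windows before being handed to the classifier.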


Subjects
Artificial Limbs , Deep Learning , Robotics , Activities of Daily Living , Electromyography/methods , Gestures , Humans , Machine Learning , Quality of Life , Upper Extremity
10.
Sensors (Basel) ; 22(10)2022 May 11.
Article in English | MEDLINE | ID: mdl-35632069

ABSTRACT

Gesture recognition through surface electromyography (sEMG) provides a new method for the control of bionic limbs and is a promising technology in the field of human-computer interaction. However, the subject specificity of sEMG, along with electrode offset, makes it challenging to develop a model that can quickly adapt to new subjects. In view of this, we introduce a new deep neural network called CSAC-Net. First, we extract time-frequency features from the raw signal, which contain rich information. Second, we design a convolutional neural network supplemented by an attention mechanism for further feature extraction. Additionally, we propose utilizing model-agnostic meta-learning to adapt to new subjects; this learning strategy achieves better results than state-of-the-art methods. Through a baseline experiment on CapgMyo and three ablation studies, we demonstrate the advantages of CSAC-Net.
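Model-agnostic meta-learning, used above for subject adaptation, can be illustrated with a first-order variant on toy 1-D regression "subjects". Everything here is a simplified stand-in: CSAC-Net applies full MAML to a CNN, not first-order MAML to a scalar model:

```python
import numpy as np

def fomaml_linear(tasks, inner_lr=0.1, outer_lr=0.05, steps=200):
    """First-order MAML sketch on 1-D linear regression tasks (x, y).

    The meta-parameter w is trained so that a single inner gradient step
    adapts it well to each task -- the core idea of MAML, here with the
    first-order approximation (no second derivatives).
    """
    w = 0.0
    for _ in range(steps):
        for x, y in tasks:
            grad_in = np.mean(2 * (w * x - y) * x)        # task loss gradient
            w_task = w - inner_lr * grad_in               # one adaptation step
            grad_out = np.mean(2 * (w_task * x - y) * x)  # first-order meta-gradient
            w -= outer_lr * grad_out
    return w

# Two hypothetical "subjects": y = 2x and y = 4x.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=200), rng.normal(size=200)
w = fomaml_linear([(x1, 2 * x1), (x2, 4 * x2)])
print(round(w, 1))  # approximately 3, between the two task optima
```

The meta-initialization lands between the subjects, so that one gradient step on a new subject's data reaches that subject's optimum quickly.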


Subjects
Gestures , Neural Networks (Computer) , Algorithms , Electromyography , Humans , Learning
11.
Article in English | MEDLINE | ID: mdl-35536801

ABSTRACT

Gestural interfaces based on the surface electromyographic (sEMG) signal have been widely explored. Nevertheless, due to individual differences in sEMG signals, it is very challenging for a myoelectric pattern recognition control system to adapt to cross-user variability. Unsupervised domain adaptation (UDA) has achieved unprecedented success in improving cross-domain robustness, and it is a promising approach to the cross-user challenge. Existing UDA methods largely ignore the instantaneous data distribution during model updating, thus deteriorating the feature representation under a large domain shift. To address this issue, a novel method is proposed based on a UDA model incorporating a self-guided adaptive sampling (SGAS) strategy. This strategy uses the domain distance in a kernel space as an indicator to screen out reliable instantaneous samples for updating the classifier, thereby enabling improved alignment of the feature representations of myoelectric patterns across users. To evaluate the performance of the proposed method, sEMG data were recorded from the forearm muscles of nine subjects performing six finger and wrist gestures. Experimental results show that the UDA method with the SGAS strategy achieved a mean accuracy of 90.41% ± 14.44% in cross-user classification, outperforming state-of-the-art methods with statistical significance. This study demonstrates the effectiveness of the proposed UDA framework and offers a novel tool for implementing cross-user myoelectric pattern recognition towards multi-user and user-independent control.
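The "domain distance in a kernel space" used to screen samples can be illustrated with the squared maximum mean discrepancy (MMD) under an RBF kernel — one standard kernel-space distance; the paper's exact indicator may differ:

```python
import numpy as np

def mmd_rbf(x, y, gamma=1.0):
    """Squared maximum mean discrepancy between two sample sets under an
    RBF kernel. A small value means the target samples look like the
    source domain, so they are 'reliable' for a classifier update.
    """
    def k(a, b):
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d)
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, (100, 4))   # source-user features
near = rng.normal(0.1, 1.0, (100, 4))  # small domain shift
far = rng.normal(2.0, 1.0, (100, 4))   # large domain shift
print(mmd_rbf(src, near) < mmd_rbf(src, far))  # True
```

A screening rule could keep only incoming windows whose MMD to the source falls below a threshold before updating the model.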


Subjects
Algorithms , Automated Pattern Recognition , Electromyography/methods , Gestures , Humans , Skeletal Muscle/physiology , Automated Pattern Recognition/methods
12.
Article in English | MEDLINE | ID: mdl-35564377

ABSTRACT

Pointing is one of the first conventional means of communication, and infants engage in it with various motives, such as imperative, declarative, or informative. Little is known about the developmental paths of producing and understanding these different motives. In our longitudinal study (N = 58) during the second year of life, we experimentally elicited infants' pointing production and comprehension in various settings and under pragmatically valid conditions. We followed two steps in our analyses, assessing the occurrence of canonical index-finger pointing for different motives and the engagement in an ongoing interaction in pursuit of a joint goal, as revealed by frequency and multimodal utterances. To understand the developmental paths, we compared two groups: typically developing infants (TD) and infants assessed as having delayed language development (LD). Results showed that the developmental paths differed according to the various motives. Across all motives, LD infants produced index-finger pointing two months later than TD infants. For engagement, although the pattern was less consistent across settings, the frequency of pointing was comparable in both groups, but infants with LD used less canonical forms of pointing and made fewer multimodal contributions than TD children.


Subjects
Gestures , Language Development Disorders , Child , Child Development , Humans , Infant , Longitudinal Studies , Motivation
13.
Neural Netw ; 152: 353-369, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35598404

ABSTRACT

A recent paper (Mhaskar (2020)) introduces a straightforward kernel-based approximation for manifold learning that requires no knowledge about the manifold except its dimension. In this paper, we examine how the pointwise error in approximation using least squares optimization based on similarly localized kernels depends upon the data characteristics and deteriorates as one moves away from the training data. The theory is presented with an abstract localized kernel, which can utilize any prior knowledge about the data being located on an unknown sub-manifold of a known manifold. We demonstrate the performance of our approach on a publicly available micro-Doppler data set and investigate the use of different preprocessing measures, kernels, and manifold dimensions. Specifically, it is shown that the localized kernel introduced in the above-mentioned paper, when used with PCA components, leads to performance that is nearly competitive with deep neural networks, with significant improvements in training speed and memory requirements. To demonstrate that our methods are agnostic to domain knowledge, we also examine a classification problem on a simple video data set.
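A minimal picture of kernel least squares and its error growth away from the training data can be given with a plain Gaussian kernel as a stand-in; this is not the specific localized kernel construction of the paper:

```python
import numpy as np

def kernel_ls_predict(x_train, y_train, x_test, gamma=10.0, reg=1e-6):
    """Regularized least-squares fit with a localized (Gaussian) kernel.

    Solves (K + reg*I) alpha = y on the training grid, then evaluates
    f(x) = sum_i alpha_i k(x, x_i) at the test points.
    """
    def gram(a, b):
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    K = gram(x_train, x_train) + reg * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return gram(x_test, x_train) @ alpha

# Fit a smooth target on [0, 1]; compare pointwise error inside the
# training range versus well outside it.
f = lambda t: np.sin(2 * np.pi * t)
x = np.linspace(0, 1, 50)
err_in = abs(kernel_ls_predict(x, f(x), np.array([0.5]))[0] - f(0.5))
err_out = abs(kernel_ls_predict(x, f(x), np.array([1.25]))[0] - f(1.25))
print(err_in < err_out)  # True: error deteriorates away from the data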


Subjects
Gestures , Radar , Least-Squares Analysis , Machine Learning , Neural Networks (Computer)
14.
Sci Data ; 9(1): 218, 2022 05 18.
Article in English | MEDLINE | ID: mdl-35585077

ABSTRACT

This paper makes the VISTA database, composed of inertial and visual data, publicly available for gesture and activity recognition. The inertial data were acquired with the SensHand, which can capture the movement of the wrist, thumb, index, and middle fingers, while the RGB-D visual data were acquired simultaneously from two different points of view, front and side. The VISTA database was acquired in two experimental phases: in the former, the participants were asked to perform 10 different actions; in the latter, they had to execute five scenes of daily living, each corresponding to a combination of the selected actions. In both phases, Pepper interacted with the participants, and the two camera viewpoints mimic the different viewpoints of Pepper. Overall, the dataset includes 7682 action instances for the training phase and 3361 action instances for the testing phase. It can serve as a framework for future studies on artificial intelligence techniques for activity recognition, including inertial-only data, visual-only data, or a sensor fusion approach.


Subjects
Algorithms , Movement , Artificial Intelligence , Gestures , Humans , Wrist
15.
Comput Intell Neurosci ; 2022: 1450822, 2022.
Article in English | MEDLINE | ID: mdl-35535197

ABSTRACT

Sign language plays a pivotal role in the lives of people with speech and hearing impairments, who can convey messages using hand gesture movements. American Sign Language (ASL) recognition is challenging due to high intra-class similarity and high complexity. This paper presents an ASL alphabet recognition approach using a deep convolutional neural network (DeepCNN) to overcome these challenges. The performance of the DeepCNN model improves with the amount of available data; for this purpose, we applied data augmentation to artificially expand the training data from the existing data. According to the experiments, the proposed DeepCNN model provides consistent results on the ASL dataset. Experiments show that the DeepCNN yields accuracy gains of 19.84%, 8.37%, 16.31%, 17.17%, 5.86%, and 3.26% compared to various state-of-the-art approaches.
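The data augmentation step described above can be sketched with simple label-preserving transforms; the transform choices and parameters here are illustrative, as the abstract does not specify them:

```python
import numpy as np

def augment(image, rng):
    """Simple label-preserving augmentations for a grayscale sign image in
    [0, 1]: small horizontal shift, brightness jitter, additive noise.
    """
    out = np.roll(image, rng.integers(-2, 3), axis=1)          # shift
    out = np.clip(out * rng.uniform(0.9, 1.1), 0, 1)           # brightness
    out = np.clip(out + rng.normal(0, 0.01, out.shape), 0, 1)  # sensor noise
    return out

# Expand a toy dataset of 10 "sign" images (28x28) fivefold.
rng = np.random.default_rng(42)
images = rng.uniform(0, 1, (10, 28, 28))
augmented = np.stack([augment(im, rng) for im in images for _ in range(5)])
print(augmented.shape)  # (50, 28, 28)
```

Because every transform preserves the depicted letter, each augmented copy keeps the original label, multiplying the effective training set size.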


Subjects
Neural Networks (Computer) , Sign Language , Gestures , Humans , Movement , Recognition (Psychology)
16.
Article in English | MEDLINE | ID: mdl-35533170

ABSTRACT

To reduce the gap between the laboratory environment and actual daily-life use of human-machine interaction based on surface electromyogram (sEMG) intent recognition, this paper presents a benchmark dataset of sEMG in non-ideal conditions (SeNic). The dataset mainly consists of 8-channel sEMG signals, with electrode shifts controlled by a 3D-printed annular ruler. A total of 36 subjects participated in our data acquisition experiments of 7 gestures in non-ideal conditions, where the non-ideal factors of 1) electrode shifts, 2) individual differences, 3) muscle fatigue, 4) inter-day differences, and 5) arm postures are elaborately involved. The sEMG signals are first validated in the temporal and frequency domains. Results of recognizing gestures in ideal conditions indicate the high quality of the dataset. The adverse impacts of non-ideal conditions are further revealed in the amplitudes of the data and in recognition accuracies. In conclusion, SeNic is a benchmark dataset that introduces several non-ideal factors which often degrade the robustness of sEMG-based systems. It can serve as a freely available dataset and a common platform for researchers in the sEMG-based recognition community. The SeNic benchmark dataset is available online (https://github.com/bozhubo/SeNic and https://gitee.com/bozhubo/SeNic).


Subjects
Gestures , Muscle Fatigue , Algorithms , Electrodes , Electromyography/methods , Humans , Recognition (Psychology)
17.
Cogn Sci ; 46(5): e13133, 2022 May.
Article in English | MEDLINE | ID: mdl-35613353

ABSTRACT

Sign languages use multiple articulators and iconicity in the visual modality which allow linguistic units to be organized not only linearly but also simultaneously. Recent research has shown that users of an established sign language such as LIS (Italian Sign Language) use simultaneous and iconic constructions as a modality-specific resource to achieve communicative efficiency when they are required to encode informationally rich events. However, it remains to be explored whether the use of such simultaneous and iconic constructions recruited for communicative efficiency can be employed even without a linguistic system (i.e., in silent gesture) or whether they are specific to linguistic patterning (i.e., in LIS). In the present study, we conducted the same experiment as in Slonimska et al. with 23 Italian speakers using silent gesture and compared the results of the two studies. The findings showed that while simultaneity was afforded by the visual modality to some extent, its use in silent gesture was nevertheless less frequent and qualitatively different than when used within a linguistic system. Thus, the use of simultaneous and iconic constructions for communicative efficiency constitutes an emergent property of sign languages. The present study highlights the importance of studying modality-specific resources and their use for linguistic expression in order to promote a more thorough understanding of the language faculty and its modality-specific adaptive capabilities.


Subjects
Gestures , Sign Language , Humans , Language , Language Development , Linguistics
18.
Sensors (Basel) ; 22(9)2022 May 02.
Article in English | MEDLINE | ID: mdl-35591156

ABSTRACT

With the advent of digital technologies, the computer has become a generalized tool for music production. Music can be seen as a creative form of human-human communication via a computer, and therefore, research on human-computer and computer-human interfaces is very important. This paper, for the Sensors Special Issue on 800 Years of Research at Padova University, presents a review of the research in the field of music technologies at Padova University by the Centro di Sonologia Computazionale (CSC), focusing on scientific, technological and musical aspects of interaction between musician and computer and between computer and audience. We discuss input devices for detecting information from gestures or audio signals and rendering systems for audience and user engagement. Moreover, we discuss a multilevel conceptual framework, which allows multimodal expressive content processing and coordination, which is important in art and music. Several paradigmatic musical works that stated new lines of both musical and scientific research are then presented in detail. The preservation of this heritage presents problems very different from those posed by traditional artworks. CSC is actively engaged in proposing new paradigms for the preservation of digital art.


Subjects
Music , Computers , Gestures , Humans , Technology , Universities
19.
Sensors (Basel) ; 22(9)2022 May 07.
Article in English | MEDLINE | ID: mdl-35591250

ABSTRACT

With the popularization of head-mounted displays (HMDs), many systems for human augmentation have been developed. This will increase the opportunities to use such systems in daily life. Therefore, the user interfaces for these systems must be designed to be intuitive and highly responsive. This paper proposes an intuitive input method that uses natural gestures as input cues for systems for human augmentation. We investigated the appropriate gestures for a system that expands the movements of the user's viewpoint by extending and contracting the neck in a video see-through AR environment. We conducted an experiment to investigate natural gestures by observing the motions when a person wants to extend his/her neck. Furthermore, we determined the operation method for extending/contracting the neck and holding the position through additional experiments. Based on this investigation, we implemented a prototype of the proposed system in a VR environment. Note that we employed a VR environment since we could test our method in various situations, although our target environment is AR. We compared the operability of the proposed method and the handheld controller using our prototype. The results confirmed that the participants felt more immersed using our method, although the positioning speed using controller input was faster than that of our method.


Subjects
Gestures , Smart Glasses , Female , Hand , Humans , Male , Motion (Physics) , Movement , User-Computer Interface
20.
J Commun Disord ; 97: 106213, 2022.
Article in English | MEDLINE | ID: mdl-35397388

ABSTRACT

INTRODUCTION: Most previous articulatory studies of stuttering have focused on the fluent speech of people who stutter. However, to better understand what causes the actual moments of stuttering, it is necessary to probe articulatory behaviors during stuttered speech. We examined the supralaryngeal articulatory characteristics of stuttered speech using real-time structural magnetic resonance imaging (RT-MRI), investigating how articulatory gestures differ across stuttered and fluent speech of the same speaker. METHODS: Vocal tract movements of an adult man who stutters were recorded with RT-MRI during a pseudoword reading task. Four regions of interest (ROIs) were defined on the RT-MRI image sequences around the lips, tongue tip, tongue body, and velum. The variation of pixel intensity in each ROI over time provided an estimate of the movement of these four articulators. RESULTS: All disfluencies occurred on syllable-initial consonants. Three articulatory patterns were identified. Pattern 1 showed smooth gestural formation and release, as in fluent speech. Patterns 2 and 3 showed delayed release of gestures due to articulator fixation or oscillation, respectively. Blocks and prolongations corresponded to either pattern 1 or 2; repetitions corresponded to pattern 3 or a mix of patterns. Gestures for disfluent consonants typically exhibited a greater constriction than fluent gestures, which was rarely corrected during disfluencies. Gestures for the upcoming vowel were initiated and executed during these consonant disfluencies, achieving a tongue body position similar to the fluent counterpart. CONCLUSION: Different perceptual types of disfluencies did not necessarily result from distinct articulatory patterns, highlighting the importance of collecting articulatory data on stuttering. Disfluencies on syllable-initial consonants were related to the delayed release and overshoot of consonant gestures, rather than the delayed initiation of vowel gestures. This suggests that stuttering arises not from problems with planning the vowel gestures, but with releasing the overly constricted consonant gestures.


Subjects
Stuttering , Adult , Gestures , Humans , Magnetic Resonance Imaging , Male , Speech , Speech Production Measurement