Results 1 - 20 of 43
1.
JASA Express Lett ; 2(4): 045205, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35495774

ABSTRACT

Individuals who have undergone treatment for oral cancer often exhibit compensatory behavior in consonant production. This pilot study investigates whether compensatory mechanisms utilized in the production of speech sounds with a given target constriction location vary systematically depending on target manner of articulation. The data reveal that compensatory strategies used to produce target alveolar segments vary systematically as a function of target manner of articulation in subtle yet meaningful ways. When target constriction degree at a particular constriction location cannot be preserved, individuals may leverage their ability to finely modulate constriction degree at multiple constriction locations along the vocal tract.

2.
J Commun Disord ; 97: 106213, 2022.
Article in English | MEDLINE | ID: mdl-35397388

ABSTRACT

INTRODUCTION: Most previous articulatory studies of stuttering have focused on the fluent speech of people who stutter. However, to better understand what causes the actual moments of stuttering, it is necessary to probe articulatory behaviors during stuttered speech. We examined the supralaryngeal articulatory characteristics of stuttered speech using real-time structural magnetic resonance imaging (RT-MRI). We investigated how articulatory gestures differ across stuttered and fluent speech of the same speaker. METHODS: Vocal tract movements of an adult man who stutters during a pseudoword reading task were recorded using RT-MRI. Four regions of interest (ROIs) were defined on RT-MRI image sequences around the lips, tongue tip, tongue body, and velum. The variation of pixel intensity in each ROI over time provided an estimate of the movement of these four articulators. RESULTS: All disfluencies occurred on syllable-initial consonants. Three articulatory patterns were identified. Pattern 1 showed smooth gestural formation and release, as in fluent speech. Patterns 2 and 3 showed delayed release of gestures due to articulator fixation or oscillation, respectively. Blocks and prolongations corresponded to either pattern 1 or 2; repetitions corresponded to pattern 3 or a mix of patterns. Gestures for disfluent consonants typically exhibited a greater constriction than fluent gestures, which was rarely corrected during disfluencies. Gestures for the upcoming vowel were initiated and executed during these consonant disfluencies, achieving a tongue body position similar to the fluent counterpart. CONCLUSION: Different perceptual types of disfluencies did not necessarily result from distinct articulatory patterns, highlighting the importance of collecting articulatory data on stuttering. Disfluencies on syllable-initial consonants were related to the delayed release and overshoot of consonant gestures, rather than the delayed initiation of vowel gestures. This suggests that stuttering arises not from problems with planning the vowel gestures, but with releasing the overly constricted consonant gestures.
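The ROI approach described in this abstract, estimating articulator movement from the mean pixel intensity inside a region of an RT-MRI frame over time, can be sketched as follows. This is an illustrative reconstruction, not the study's actual pipeline; the function name, rectangular ROI geometry, and toy data are assumptions.

```python
import numpy as np

def roi_intensity_series(frames, roi):
    """Mean pixel intensity inside a rectangular ROI for each frame.

    frames : array of shape (n_frames, height, width)
    roi    : (row_start, row_end, col_start, col_end) bounding box

    Rising intensity in the ROI suggests tissue (e.g., the tongue tip)
    has moved into that region of the image.
    """
    r0, r1, c0, c1 = roi
    return frames[:, r0:r1, c0:c1].mean(axis=(1, 2))

# Toy example: 3 frames of a 10x10 image; a bright "articulator"
# progressively enters the 2x2 ROI.
frames = np.zeros((3, 10, 10))
frames[1, 2:4, 2:4] = 0.5   # partial entry
frames[2, 2:4, 2:4] = 1.0   # full entry
series = roi_intensity_series(frames, (2, 4, 2, 4))
```

The resulting one-dimensional time series per ROI is what makes gestural formation, fixation, and oscillation patterns visible without segmenting the full image.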


Subject(s)
Stuttering; Adult; Gestures; Humans; Magnetic Resonance Imaging; Male; Speech; Speech Production Measurement
3.
Sci Data ; 8(1): 187, 2021 07 20.
Article in English | MEDLINE | ID: mdl-34285240

ABSTRACT

Real-time magnetic resonance imaging (RT-MRI) of human speech production is enabling significant advances in speech science, linguistics, bio-inspired speech technology development, and clinical applications. Easy access to RT-MRI is, however, limited, and comprehensive datasets with broad access are needed to catalyze research across numerous domains. The imaging of the rapidly moving articulators and dynamic airway shaping during speech demands high spatio-temporal resolution and robust reconstruction methods. Further, while reconstructed images have been published, to date there is no open dataset providing raw multi-coil RT-MRI data from an optimized speech production experimental setup. Such datasets could enable new and improved methods for dynamic image reconstruction, artifact correction, feature extraction, and direct extraction of linguistically relevant biomarkers. The present dataset offers a unique corpus of 2D sagittal-view RT-MRI videos along with synchronized audio for 75 participants performing linguistically motivated speech tasks, alongside the corresponding public-domain raw RT-MRI data. The dataset also includes 3D volumetric vocal tract MRI during sustained speech sounds and high-resolution static anatomical T2-weighted upper airway MRI for each participant.


Subject(s)
Larynx/physiology; Magnetic Resonance Imaging/methods; Speech; Adolescent; Adult; Computer Systems; Female; Humans; Male; Middle Aged; Time Factors; Video Recording; Young Adult
4.
J Acoust Soc Am ; 149(6): 4437, 2021 06.
Article in English | MEDLINE | ID: mdl-34241468

ABSTRACT

The glossectomy procedure, involving surgical resection of cancerous lingual tissue, has long been observed to affect speech production. This study aims to quantitatively index and compare complexity of vocal tract shaping due to lingual movement in individuals who have undergone glossectomy and typical speakers using real-time magnetic resonance imaging data and Principal Component Analysis. The data reveal that (i) the type of glossectomy undergone largely predicts the patterns in vocal tract shaping observed, (ii) gross forward and backward motion of the tongue body accounts for more change in vocal tract shaping than do subtler movements of the tongue (e.g., tongue tip constrictions) in patient data, and (iii) fewer vocal tract shaping components are required to account for the patients' speech data than typical speech data, suggesting that the patient data at hand exhibit less complex vocal tract shaping in the midsagittal plane than do the data from the typical speakers observed.
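The complexity index this abstract alludes to, how many principal components are needed to account for the variance in a stack of midsagittal vocal-tract images, can be sketched as below. This is a minimal illustration of the idea, not the study's code; the 90% variance threshold, function names, and synthetic data are assumptions.

```python
import numpy as np

def shaping_complexity(images, var_threshold=0.90):
    """Number of principal components needed to explain `var_threshold`
    of the variance in flattened midsagittal frames.

    images : array (n_frames, n_pixels), each row one flattened frame.
    Fewer components suggest less complex vocal tract shaping.
    """
    X = images - images.mean(axis=0)           # center each pixel
    _, s, _ = np.linalg.svd(X, full_matrices=False)
    var = s ** 2 / np.sum(s ** 2)              # variance per component
    return int(np.searchsorted(np.cumsum(var), var_threshold) + 1)

# Toy data: frames driven by one underlying movement pattern (plus tiny
# noise) should need only a single component.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
pattern = rng.standard_normal(200)
images = np.outer(np.sin(2 * np.pi * t), pattern)
images = images + 1e-6 * rng.standard_normal(images.shape)
k = shaping_complexity(images)
```

Comparing this count between patient and typical-speaker data is one way to operationalize "less complex vocal tract shaping in the midsagittal plane."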


Subject(s)
Glossectomy; Tongue Neoplasms; Humans; Principal Component Analysis; Speech; Tongue/diagnostic imaging; Tongue/surgery; Tongue Neoplasms/diagnostic imaging; Tongue Neoplasms/surgery
5.
J Acoust Soc Am ; 147(6): 3905, 2020 06.
Article in English | MEDLINE | ID: mdl-32611162

ABSTRACT

Although substantial variability is observed in the articulatory implementation of the constriction gestures involved in /ɹ/ production, studies of articulatory-acoustic relations in /ɹ/ have largely ignored the potential for subtle variation in the implementation of these gestures to affect salient acoustic dimensions. This study examines how variation in the articulation of American English /ɹ/ influences the relative sensitivity of the third formant to variation in palatal, pharyngeal, and labial constriction degree. Simultaneously recorded articulatory and acoustic data from six speakers in the USC-TIMIT corpus were analyzed to determine how variation in the implementation of each constriction across tokens of /ɹ/ relates to variation in third formant values. Results show that third formant values are differentially affected by constriction degree for the different constrictions used to produce /ɹ/. Additionally, interspeaker variation is observed in the relative effect of different constriction gestures on third formant values, most notably in a division between speakers exhibiting relatively equal effects of palatal and pharyngeal constriction degree on F3 and speakers exhibiting a stronger palatal effect. This division among speakers mirrors interspeaker differences in mean constriction length and location, suggesting that individual differences in /ɹ/ production lead to variation in articulatory-acoustic relations.


Subject(s)
Phonetics; Speech Acoustics; Constriction; Language; Pharynx; Speech Production Measurement; United States
6.
J Acoust Soc Am ; 147(6): EL460, 2020 06.
Article in English | MEDLINE | ID: mdl-32611190

ABSTRACT

It has been previously observed [McMicken, Salles, Berg, Vento-Wilson, Rogers, Toutios, and Narayanan. (2017). J. Commun. Disorders, Deaf Stud. Hear. Aids 5(2), 1-6] using real-time magnetic resonance imaging that a speaker with severe congenital tongue hypoplasia (aglossia) had developed a compensatory articulatory strategy where she, in the absence of a functional tongue tip, produced a plosive consonant perceptually similar to /d/ using a bilabial constriction. The present paper provides an updated account of this strategy. It is suggested that the previously observed compensatory bilabial closing that occurs during this speaker's /d/ production is consistent with vocal tract shaping resulting from hyoid raising created with mylohyoid action, which may also be involved in typical /d/ production. Simulating this strategy in a dynamic articulatory synthesis experiment leads to the generation of /d/-like formant transitions.


Subject(s)
Tongue; Voice; Female; Humans; Phonetics; Speech; Tongue/diagnostic imaging
7.
Lang Speech ; 63(3): 526-549, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31385552

ABSTRACT

This study uses a response mouse-tracking paradigm to examine the role of sub-phonemic information in online lexical ambiguity resolution of continuous speech. We examine listeners' sensitivity to the sub-phonemic information that is specific to the ambiguous internal open juncture /s/-stop sequences in American English (e.g., "place kin" vs. "play skin"), that is, voice onset time (VOT) indicating different degrees of aspiration (e.g., long VOT for "kin" vs. short VOT for "skin") in connected speech contexts. A cross-splicing method was used to create two-word sequences (e.g., "place kin" or "play skin") with matching VOTs (long for "kin"; short for "skin") or mismatching VOTs (short for "kin"; long for "skin"). Participants (n = 20) heard the two-word sequences, while looking at computer displays with the second word in the left/right corner ("KIN" and "SKIN"). Then, listeners' click responses and mouse movement trajectories were recorded. Click responses show significant effects of VOT manipulation, while mouse trajectories do not. Our results show that stop-release information, whether temporal or spectral, can (mis)guide listeners' interpretation of the possible location of a word boundary between /s/ and a following stop, even when other aspects in the acoustic signal (e.g., duration of /s/) point to the alternative segmentation. Taken together, our results suggest that segmentation and lexical access are highly attuned to bottom-up phonetic information; our results have implications for a model of spoken language recognition with position-specific representations available at the prelexical level and also allude to the possibility that detailed phonetic information may be stored in the listeners' lexicons.


Subject(s)
Phonetics; Psychomotor Performance/physiology; Recognition, Psychology/physiology; Speech Perception/physiology; Acoustic Stimulation; Adult; Computer Peripherals; Female; Humans; Ice Cream; Male; Speech; Speech Acoustics; User-Computer Interface
8.
Front Psychol ; 10: 2459, 2019.
Article in English | MEDLINE | ID: mdl-31827451

ABSTRACT

Movements of the head and speech articulators have been observed in tandem during an alternating word pair production task driven by an accelerating-rate metronome. Word pairs contrasted either onset or coda dissimilarity with same-word controls. Results show that as production effort increased, so did speaker head nodding, and that nodding increased abruptly following errors. More errors occurred under faster production rates, and in coda rather than onset alternations. The greatest entrainment between head and articulators was observed at the fastest rate under coda alternation. Neither jaw coupling nor imposed prosodic stress was observed to be a primary driver of head movement. In alternating pairs, nodding frequency tracked the slower alternation rate rather than the syllable rate, interpreted as recruitment of additional degrees of freedom to stabilize the alternation pattern under increasing production rate pressure.

9.
J Acoust Soc Am ; 145(3): 1504, 2019 03.
Article in English | MEDLINE | ID: mdl-31067947

ABSTRACT

In speech production, the motor system organizes articulators such as the jaw, tongue, and lips into synergies whose function is to produce speech sounds by forming constrictions at the phonetic places of articulation. The present study tests whether synergies for different constriction tasks differ in terms of inter-articulator coordination. The test is conducted on utterances [ɑpɑ], [ɑtɑ], [ɑiɑ], and [ɑkɑ] with a real-time magnetic resonance imaging biomarker that is computed using a statistical model of the forward kinematics of the vocal tract. The present study is the first to estimate the forward kinematics of the vocal tract from speech production data. Using the imaging biomarker, the study finds that the jaw contributes least to the velar stop for [k], more to pharyngeal approximation for [ɑ], still more to palatal approximation for [i], and most to the coronal stop for [t]. Additionally, the jaw contributes more to the coronal stop for [t] than to the bilabial stop for [p]. Finally, the study investigates how this pattern of results varies by participant. The study identifies differences in inter-articulator coordination by constriction task, which support the claim that inter-articulator coordination differs depending on the active articulator synergy.


Subject(s)
Speech; Voice/physiology; Adult; Biomechanical Phenomena; Female; Humans; Jaw/diagnostic imaging; Jaw/physiology; Larynx/diagnostic imaging; Larynx/physiology; Magnetic Resonance Imaging; Male; Pharynx/diagnostic imaging; Pharynx/physiology; Phonetics; Psychomotor Performance
10.
Front Psychol ; 10: 2608, 2019.
Article in English | MEDLINE | ID: mdl-31920767

ABSTRACT

How do we align the distinct neural patterns associated with the articulation and the acoustics of the same utterance in order to guide behaviors that demand sensorimotor interaction, such as vocal learning and the use of feedback during speech production? One hypothesis is that while the representations are distinct, their patterns of change over time (temporal modulation) are systematically related. This hypothesis is pursued in the exploratory study described here, using paired articulatory and acoustic data from the X-ray microbeam corpus. The results show that modulation in both articulatory movement and in the changing acoustics has the form of a pulse-like structure related to syllable structure. The pulses are aligned with each other in time, and the modulation functions are robustly correlated. These results encourage further investigation and testing of the hypothesis.
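A common way to compute the kind of temporal modulation function this abstract describes is to collapse a multichannel signal into a scalar rate-of-change series and then correlate the articulatory and acoustic versions. The sketch below is an assumed simplification (summed squared velocity, zero-lag Pearson correlation), not the authors' exact method.

```python
import numpy as np

def modulation(signal_channels, dt=1.0):
    """Scalar 'modulation' function of a multichannel signal: the summed
    squared velocity across channels at each sample.

    signal_channels : array (n_samples, n_channels)
    """
    vel = np.diff(signal_channels, axis=0) / dt
    return np.sum(vel ** 2, axis=1)

# Toy example: articulatory channels and an acoustic channel driven by
# the same syllable-like pulse should yield correlated modulations.
t = np.linspace(0.0, 2.0, 400)
pulse = np.sin(2 * np.pi * 3 * t)            # ~3 'syllables' per second
artic = np.column_stack([pulse, 0.5 * pulse])
acous = np.column_stack([2.0 * pulse])
m_artic = modulation(artic)
m_acous = modulation(acous)
r = np.corrcoef(m_artic, m_acous)[0, 1]
```

The pulse-like peaks of such modulation functions are what the study aligns with syllable structure across the two domains.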

11.
Phonetica ; 76(5): 363-396, 2019.
Article in English | MEDLINE | ID: mdl-30481752

ABSTRACT

Sequences of similar (i.e., partially identical) words can be hard to say, as indicated by error frequencies and longer reaction and execution times. This study investigates the role of the location of this partial identity and the accompanying differences, i.e., whether errors are more frequent with mismatches in word onsets (top cop), codas (top tock), or both (pop tot). Number of syllables (tippy ticky) and empty positions (top ta) were also varied. Since the gradient nature of errors can be difficult to determine acoustically, articulatory data were investigated. Articulator movements were recorded using electromagnetic articulography for up to nine speakers of American English repeatedly producing two-word sequences to an accelerating metronome. Most word pairs showed more intrusions and greater variability in coda than in onset position, in contrast to the predominance of onset-position errors in corpora from perceptual observation.


Subject(s)
Multilingualism; Phonetics; Speech Acoustics; Speech; Adult; Female; Humans; Language; Linguistics; Male; Speech Production Measurement; Young Adult
12.
J Acoust Soc Am ; 144(5): EL380, 2018 11.
Article in English | MEDLINE | ID: mdl-30522297

ABSTRACT

This paper reports on the concurrent use of electroglottography (EGG) and electromagnetic articulography (EMA) in the acquisition of EMA trajectory data for running speech. Static and dynamic intersensor distances, standard deviations, and coefficients of variation associated with inter-sample distances were compared in two conditions: with and without EGG present. Results indicate that measurement discrepancies between the two conditions are within the EMA system's measurement uncertainty. Therefore, potential electromagnetic interference from EGG does not seem to cause differences of practical importance on EMA trajectory behaviors, suggesting that simultaneous EMA and EGG data acquisition is a viable laboratory procedure for speech research.
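The stability metric used in this kind of interference check, the coefficient of variation of inter-sensor distances over time, is straightforward to compute. The sketch below is illustrative; the function name, 3D position format, and jitter magnitudes are assumptions, not values from the paper.

```python
import numpy as np

def inter_sensor_cv(pos_a, pos_b):
    """Coefficient of variation (std/mean) of the distance between two
    EMA sensors over time.

    pos_a, pos_b : arrays (n_samples, 3) of sensor positions in mm.
    A small CV indicates stable tracking; comparing CVs with and without
    EGG attached probes for electromagnetic interference.
    """
    d = np.linalg.norm(pos_a - pos_b, axis=1)
    return d.std() / d.mean()

# Toy example: two sensors a fixed 10 mm apart with ~0.01 mm jitter.
rng = np.random.default_rng(1)
a = np.zeros((1000, 3))
b = np.tile([10.0, 0.0, 0.0], (1000, 1))
b = b + 0.01 * rng.standard_normal((1000, 3))
cv = inter_sensor_cv(a, b)
```

If the CV difference between the two recording conditions stays within the EMA system's stated measurement uncertainty, interference is considered negligible, which is the paper's conclusion.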


Subject(s)
Electromagnetic Phenomena; Glottis/physiology; Speech Production Measurement/instrumentation; Speech/physiology; Female; Glottis/anatomy & histology; Humans; Larynx/anatomy & histology; Larynx/physiology; Male; Mouth/anatomy & histology; Mouth/physiology
13.
PLoS One ; 13(8): e0201444, 2018.
Article in English | MEDLINE | ID: mdl-30086554

ABSTRACT

This study uses a maze navigation task in conjunction with a quasi-scripted, prosodically controlled speech task to examine acoustic and articulatory accommodation in pairs of interacting speakers. The experiment uses a dual electromagnetic articulography set-up to collect synchronized acoustic and articulatory kinematic data from two facing speakers simultaneously. We measure the members of a dyad individually before they interact, while they are interacting in a cooperative task, and again individually after they interact. The design is ideally suited to measure speech convergence, divergence, and persistence effects during and after speaker interaction. This study specifically examines how convergence and divergence effects during a dyadic interaction may be related to prosodically salient positions, such as preceding a phrase boundary. The findings of accommodation in fine-grained prosodic measures illuminate our understanding of how the realization of linguistic phrasal structure is coordinated across interacting speakers. Our findings on individual speaker variability and the time course of accommodation provide novel evidence for accommodation at the level of cognitively specified motor control of individual articulatory gestures. Taken together, these results have implications for understanding the cognitive control of interactional behavior in spoken language communication.


Subject(s)
Cognition/physiology; Cooperative Behavior; Interpersonal Relations; Speech/physiology; Adult; Electromagnetic Phenomena; Female; Humans; Male; Speech Production Measurement/instrumentation; Speech Production Measurement/methods; Young Adult
14.
J Phon ; 71: 268-283, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30618477

ABSTRACT

This study presents techniques for quantitatively analyzing coordination and kinematics in multimodal speech using video, audio and electromagnetic articulography (EMA) data. Multimodal speech research has flourished due to recent improvements in technology, yet gesture detection/annotation strategies vary widely, leading to difficulty in generalizing across studies and in advancing this field of research. We describe how FlowAnalyzer software can be used to extract kinematic signals from basic video recordings; and we apply a technique, derived from speech kinematic research, to detect bodily gestures in these kinematic signals. We investigate whether kinematic characteristics of multimodal speech differ dependent on communicative context, and we find that these contexts can be distinguished quantitatively, suggesting a way to improve and standardize existing gesture identification/annotation strategy. We also discuss a method, Correlation Map Analysis (CMA), for quantifying the relationship between speech and bodily gesture kinematics over time. We describe potential applications of CMA to multimodal speech research, such as describing characteristics of speech-gesture coordination in different communicative contexts. The use of the techniques presented here can improve and advance multimodal speech and gesture research by applying quantitative methods in the detection and description of multimodal speech.
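Correlation Map Analysis as described here quantifies the speech-gesture relationship over time; a zero-lag, sliding-window Pearson correlation is a minimal version of that idea (full CMA also scans across time lags). The sketch below is an assumed simplification, not the CMA implementation itself, and the window size and toy signals are illustrative.

```python
import numpy as np

def windowed_correlation(x, y, win=50):
    """Sliding-window Pearson correlation between two 1D kinematic
    signals; returns one r value per window start position."""
    n = len(x) - win + 1
    r = np.empty(n)
    for i in range(n):
        r[i] = np.corrcoef(x[i:i + win], y[i:i + win])[0, 1]
    return r

# Toy example: speech and gesture kinematics coupled in the first half
# of an utterance, independent in the second half.
rng = np.random.default_rng(2)
speech = rng.standard_normal(400)
gesture = np.concatenate([speech[:200], rng.standard_normal(200)])
r = windowed_correlation(speech, gesture, win=50)
```

The resulting correlation profile makes episodes of tight speech-gesture coordination visible as stretches of high r, which supports comparisons across communicative contexts.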

15.
J Speech Lang Hear Res ; 60(4): 877-891, 2017 04 14.
Article in English | MEDLINE | ID: mdl-28314241

ABSTRACT

Purpose: Real-time magnetic resonance imaging (MRI) and accompanying analytical methods are shown to capture and quantify salient aspects of apraxic speech, substantiating and expanding upon evidence provided by clinical observation and acoustic and kinematic data. Analysis of apraxic speech errors within a dynamic systems framework is provided and the nature of pathomechanisms of apraxic speech discussed. Method: One adult male speaker with apraxia of speech was imaged using real-time MRI while producing spontaneous speech, repeated naming tasks, and self-paced repetition of word pairs designed to elicit speech errors. Articulatory data were analyzed, and speech errors were detected using time series reflecting articulatory activity in regions of interest. Results: Real-time MRI captured two types of apraxic gestural intrusion errors in a word pair repetition task. Gestural intrusion errors in nonrepetitive speech, multiple silent initiation gestures at the onset of speech, and covert (unphonated) articulation of entire monosyllabic words were also captured. Conclusion: Real-time MRI and accompanying analytical methods capture and quantify many features of apraxic speech that have been previously observed using other modalities while offering high spatial resolution. This patient's apraxia of speech affected the ability to select only the appropriate vocal tract gestures for a target utterance, suppressing others, and to coordinate them in time.


Subject(s)
Apraxias/diagnostic imaging; Magnetic Resonance Imaging/methods; Mouth/diagnostic imaging; Speech Production Measurement/methods; Speech; Brain/diagnostic imaging; Gestures; Humans; Image Processing, Computer-Assisted; Male; Mental Status Schedule; Middle Aged; Motor Skills; Pilot Projects; Primary Progressive Nonfluent Aphasia/diagnostic imaging; Sound Spectrography; Time Factors
16.
Ecol Psychol ; 28(4): 216-261, 2016 Oct 01.
Article in English | MEDLINE | ID: mdl-28367052

ABSTRACT

To become language users, infants must embrace the integrality of speech perception and production. That they do so, and quite rapidly, is implied by the native-language attunement they achieve in each domain by 6-12 months. Yet research has most often addressed one or the other domain, rarely how they interrelate. Moreover, mainstream assumptions that perception relies on acoustic patterns whereas production involves motor patterns entail that the infant would have to translate incommensurable information to grasp the perception-production relationship. We posit the more parsimonious view that both domains depend on commensurate articulatory information. Our proposed framework combines principles of the Perceptual Assimilation Model (PAM) and Articulatory Phonology (AP). According to PAM, infants attune to articulatory information in native speech and detect similarities of nonnative phones to native articulatory patterns. The AP premise that gestures of the speech organs are the basic elements of phonology offers articulatory similarity metrics while satisfying the requirement that phonological information be discrete and contrastive: (a) distinct articulatory organs produce vocal tract constrictions and (b) phonological contrasts recruit different articulators and/or constrictions of a given articulator that differ in degree or location. Various lines of research suggest young children perceive articulatory information, which guides their productions: discrimination of between- versus within-organ contrasts, simulations of attunement to language-specific articulatory distributions, multimodal speech perception, oral/vocal imitation, and perceptual effects of articulator activation or suppression. We conclude that articulatory gesture information serves as the foundation for developmental integrality of speech perception and production.

17.
Lang Speech ; 57(Pt 4): 544-62, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25536847

ABSTRACT

In typical speech, words are grouped into prosodic constituents. This study investigates how such grouping interacts with segmental sequencing patterns in the production of repetitive word sequences. We experimentally manipulated grouping behavior using a rhythmic repetition task to elicit speech for perceptual and acoustic analysis, testing the hypothesis that prosodic structure and patterns of segmental alternation can interact in the production planning process. Talkers produced alternating sequences of two words (top cop) and non-alternating controls (top top and cop cop), organized into six-word sequences. These sequences were further organized into prosodic groupings of three two-word pairs or two three-word triples by means of visual cues and audible metronome clicks. Results for six speakers showed more speech errors in triples, that is, when pairwise word alternation was mismatched with prosodic subgrouping in triples. This result suggests that the planning process for the segmental units of an utterance interacts with the planning process for the prosodic grouping of its words. It also highlights the importance of extending commonly used experimental speech elicitation methods to include more complex prosodic patterns, in order to evoke the kinds of interaction between prosodic structure and planning that occur in the production of lexical forms in continuous communicative speech.


Subject(s)
Phonetics; Semantics; Sound Spectrography; Speech Acoustics; Speech Perception; Speech Production Measurement; Time Perception; Adult; Female; Humans; Male; Psycholinguistics; Verbal Behavior; Young Adult
18.
J Phon ; 44: 62-82, 2014 May 01.
Article in English | MEDLINE | ID: mdl-25300341

ABSTRACT

This study investigates the coordination of boundary tones as a function of stress and pitch accent. Boundary tone coordination has not been experimentally investigated previously, and the effect of prominence on this coordination, and whether it is lexical (stress-driven) or phrasal (pitch accent-driven) in nature is unclear. We assess these issues using a variety of syntactic constructions to elicit different boundary tones in an Electromagnetic Articulography (EMA) study of Greek. The results indicate that the onset of boundary tones co-occurs with the articulatory target of the final vowel. This timing is further modified by stress, but not by pitch accent: boundary tones are initiated earlier in words with non-final stress than in words with final stress regardless of accentual status. Visual data inspection reveals that phrase-final words are followed by acoustic pauses during which specific articulatory postures occur. Additional analyses show that these postures reach their achievement point at a stable temporal distance from boundary tone onsets regardless of stress position. Based on these results and parallel findings on boundary lengthening reported elsewhere, a novel approach to prosody is proposed within the context of Articulatory Phonology: rather than seeing prosodic (lexical and phrasal) events as independent entities, a set of coordination relations between them is suggested. The implications of this account for prosodic architecture are discussed.

19.
J Acoust Soc Am ; 136(3): 1307, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25190403

ABSTRACT

USC-TIMIT is an extensive database of multimodal speech production data, developed to complement existing resources available to the speech research community and with the intention of being continuously refined and augmented. The database currently includes real-time magnetic resonance imaging data from five male and five female speakers of American English. Electromagnetic articulography data have additionally been collected from four of these speakers. The two modalities were recorded in two independent sessions while the subjects produced the same 460-sentence corpus used previously in the MOCHA-TIMIT database. In both cases the audio signal was recorded and synchronized with the articulatory data. The database and companion software are freely available to the research community.


Subject(s)
Acoustics; Biomedical Research; Databases, Factual; Electromagnetic Phenomena; Magnetic Resonance Imaging; Pharynx/physiology; Speech Acoustics; Speech Production Measurement; Voice Quality; Acoustics/instrumentation; Adult; Biomechanical Phenomena; Female; Humans; Male; Middle Aged; Pharynx/anatomy & histology; Signal Processing, Computer-Assisted; Software; Speech Production Measurement/instrumentation; Time Factors; Transducers
20.
PLoS One ; 9(8): e104168, 2014.
Article in English | MEDLINE | ID: mdl-25133544

ABSTRACT

We address the hypothesis that postures adopted during grammatical pauses in speech production are more "mechanically advantageous" than absolute rest positions for facilitating efficient postural motor control of vocal tract articulators. We quantify vocal tract posture corresponding to inter-speech pauses, absolute rest intervals, and vowel and consonant intervals using automated analysis of video captured with real-time magnetic resonance imaging during production of read and spontaneous speech by five healthy speakers of American English. We then use locally weighted linear regression to estimate the articulatory forward map from low-level articulator variables to high-level task/goal variables for these postures. We quantify the overall magnitude of the first derivative of the forward map as a measure of mechanical advantage. We find that postures assumed during grammatical pauses in speech, as well as speech-ready postures, are significantly more mechanically advantageous than postures assumed during absolute rest. Further, these postures represent empirical extremes of mechanical advantage, between which lie the postures assumed during various vowels and consonants. Relative mechanical advantage of different postures might be an important physical constraint influencing planning and control of speech production.
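The core computation in this abstract, estimating the local slope (first derivative) of the forward map at a given posture via locally weighted linear regression and taking its overall magnitude, can be sketched as below. This is an illustrative reconstruction under assumed Gaussian weighting and a Frobenius-norm magnitude; the bandwidth, function names, and toy linear map are not from the paper.

```python
import numpy as np

def local_jacobian(X, Y, x0, bandwidth=1.0):
    """Estimate the Jacobian of a forward map at posture x0 using
    locally weighted linear regression.

    X : (n, d_artic) low-level articulator variables
    Y : (n, d_task)  high-level task/goal variables
    Returns the (d_task, d_artic) local slope matrix; its Frobenius
    norm serves as a mechanical-advantage index.
    """
    w = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * bandwidth ** 2))
    sw = np.sqrt(w)[:, None]                      # weighted least squares
    design = np.hstack([np.ones((len(X), 1)), X - x0])
    beta, *_ = np.linalg.lstsq(sw * design, sw * Y, rcond=None)
    return beta[1:].T                             # drop intercept row

# Toy example: a known linear map y = 2*x1 - x2 should be recovered
# exactly, with magnitude sqrt(2^2 + 1^2) = sqrt(5).
rng = np.random.default_rng(3)
X = rng.standard_normal((500, 2))
Y = X @ np.array([[2.0], [-1.0]])
J = local_jacobian(X, Y, x0=np.zeros(2))
advantage = np.linalg.norm(J)                     # Frobenius norm
```

Comparing this magnitude across pause, rest, and segment postures is what yields the paper's ordering of mechanical advantage.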


Subject(s)
Speech Acoustics; Biomechanical Phenomena; Female; Humans; Jaw/physiology; Lip/physiology; Motor Skills; Posture; Tongue/physiology; Vocal Cords/physiology