Results 1 - 20 of 25
1.
Science; 237(4811): 169-71, 1987 Jul 10.
Article in English | MEDLINE | ID: mdl-3603014

ABSTRACT

Some components of a speech signal, when made more intense, are heard simultaneously as speech and nonspeech--a form of duplex perception. At lower intensities, the speech alone is heard. Such intensity-dependent duplexity implies the existence of a phonetic mode of perception that takes precedence over auditory modes.


Subjects
Phonetics, Speech Perception, Adult, Attention, Auditory Threshold, Female, Hearing, Humans, Male, Perception, Speech, Speech Discrimination Tests
2.
Science; 243(4890): 489-94, 1989 Jan 27.
Article in English | MEDLINE | ID: mdl-2643163

ABSTRACT

The processes that underlie perception of consonants and vowels are specifically phonetic, distinct from those that localize sources and assign auditory qualities to the sound from each source. This specialization, or module, increases the rate of information flow, establishes the parity between sender and receiver that every communication system must have, and provides for the natural development of phonetic structures in the species and in the individual. The phonetic module has certain properties in common with modules that are "closed" (for example, sound localization or echo ranging in bats) and, like other members of this class, is so placed in the architecture of the auditory system as to preempt information that is relevant to its special function. Accordingly, this information is not available to such "open" modules as those for pitch, loudness, and timbre.


Subjects
Phonetics, Speech Perception/physiology, Animals, Auditory Perception/physiology, Communication, Humans, Speech
3.
J Exp Psychol Hum Percept Perform; 4(4): 621-37, 1978 Nov.
Article in English | MEDLINE | ID: mdl-722252

ABSTRACT

Introducing a short interval of silence between the words SAY and SHOP causes listeners to hear SAY CHOP. Another cue for the fricative-affricate distinction is the duration of the fricative noise in SHOP (CHOP). Now, varying both these temporal cues orthogonally in a sentence context, we find that, within limits, they are perceived in relation to each other: The shorter the duration of the noise, the shorter the silence necessary to convert the fricative into an affricate. On the other hand, when the rate of articulation of the sentence frame is increased while holding noise duration constant, a longer silent interval is needed to hear an affricate, as if the noise duration, but not the silence duration, were effectively longer in the faster sentence. In a second experiment, varying noise and silence durations in GRAY SHIP, we find that given sufficient silence, listeners report GRAY CHIP when the noise is short but GREAT SHIP when it is long. Thus, the long noise in the second syllable disposes listeners to displace the stop to the first syllable, so that they hear not a syllable-initial affricate (i.e., stop-initiated fricative) but a syllable-final stop (followed by a syllable-initial fricative). Repeating the experiment with GREAT SHIP as the original utterance, we obtain the same pattern of results, together with only a moderate increase in GREAT responses. In all such cases, the listeners integrate a numerous, diverse, and temporally distributed set of acoustic cues into a unitary phonetic percept. These several cues have in common only that they are the products of a unitary articulatory act. In effect, then, it is the articulatory act that is perceived.


Subjects
Cues (Psychology), Phonetics, Speech Perception, Humans, Time Factors
4.
Ann Dyslexia; 40(1): 51-76, 1990 Jan.
Article in English | MEDLINE | ID: mdl-24233626

ABSTRACT

Promoters of Whole Language hew to the belief that learning to read and write can be as natural and effortless as learning to perceive and produce speech. From this it follows that there is no special key to reading and writing, no explicit principle to be taught that, once learned, makes the written language transparent to a child who can speak. Lacking such a principle, Whole Language falls back on a method that encourages children to get from print just enough information to provide a basis for guessing at the gist. A very different method, called Code Emphasis, presupposes that learning the spoken language is, indeed, perfectly natural and seemingly effortless, but only because speech is managed, as reading and writing are not, by a biological specialization that automatically spells or parses all the words the child commands. Hence, a child normally learns to use words without ever becoming explicitly aware that each one is formed by the consonants and vowels that an alphabet represents. Yet it is exactly this awareness that must be taught if the child is to grasp the alphabetic principle and so understand how the artifacts of an alphabet transcribe the natural units of language. There is evidence that preliterate children do not, in fact, have much of this awareness; that the amount they do have predicts their reading achievement; that the awareness can be taught; and that the relative difficulty of learning it that some children have may be a reflection of a weakness in the phonological component of their natural capacity for language.

5.
Brain Lang; 78(3): 364-96, 2001 Sep.
Article in English | MEDLINE | ID: mdl-11703063

ABSTRACT

Candidate brain regions constituting a neural network for preattentive phonetic perception were identified with fMRI and multivariate multiple regression of imaging data. Stimuli contrasted along speech/nonspeech, acoustic, or phonetic complexity (three levels each) and natural/synthetic dimensions. Seven distributed brain regions' activity correlated with speech and speech complexity dimensions, including five left-sided foci [posterior superior temporal gyrus (STG), angular gyrus, ventral occipitotemporal cortex, inferior/posterior supramarginal gyrus, and middle frontal gyrus (MFG)] and two right-sided foci (posterior STG and anterior insula). Only the left MFG discriminated natural and synthetic speech. The data also supported a parallel rather than serial model of auditory speech and nonspeech perception.


Subjects
Brain/anatomy & histology, Magnetic Resonance Imaging, Speech Perception/physiology, Adult, Auditory Perception/physiology, Female, Functional Laterality/physiology, Humans, Male, Middle Aged, Nerve Net/physiology, Phonetics
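Side note (not part of the record above): the "multivariate multiple regression of imaging data" mentioned in that abstract can be illustrated with a minimal sketch. All data shapes, regressor names, and values below are assumptions for illustration, not the authors' analysis pipeline.

```python
# Illustrative sketch only: regress regional fMRI activity on stimulus-dimension
# regressors, in the spirit of the multivariate multiple regression named above.
import numpy as np

n_trials, n_regions = 90, 7          # hypothetical: 90 stimuli, 7 regions of interest
rng = np.random.default_rng(0)

# Design matrix: one column per stimulus dimension plus an intercept.
speech     = rng.integers(0, 2, n_trials)   # 0 = nonspeech, 1 = speech
complexity = rng.integers(1, 4, n_trials)   # three complexity levels
natural    = rng.integers(0, 2, n_trials)   # 0 = synthetic, 1 = natural
X = np.column_stack([np.ones(n_trials), speech, complexity, natural])

# Y: one column of (simulated) activity per brain region -> a multivariate outcome.
Y = rng.standard_normal((n_trials, n_regions))

# Ordinary least squares fit for all regions at once; B[j, r] is the weight of
# regressor j for region r. Regions whose speech or complexity weights are
# reliably nonzero would be the candidate regions reported in such a study.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(B.shape)   # (4 regressors, 7 regions)
```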
8.
Cognition; 21(1): 1-36, 1985 Oct.
Article in English | MEDLINE | ID: mdl-4075760
11.
Ann N Y Acad Sci; 280: 718-24, 1976.
Article in English | MEDLINE | ID: mdl-827961
14.
J Psycholinguist Res; 27(2): 111-22, 1998 Mar.
Article in English | MEDLINE | ID: mdl-9561782

ABSTRACT

Two theories of speech--one quite conventional, the other much less so--account very differently for the biological advantage of speech over writing/reading. The guiding assumption of the more conventional theory is that the elements of speech are sounds, and that these are served by processes of motor control and auditory perception that are in no way specialized for language. Accordingly, there must be a cognitive stage, beyond action and perception, where the motor and auditory representations are somehow invested with linguistic significance. On the conventional view, then, the sounds of speech are just like the letters of the alphabet. Neither has more than an arbitrary relation to language, hence the difference between them is trivially a matter of which of the equally large gaps between signal and message needs to be bridged. On the less conventional theory, the ultimate constituents of speech are not sounds, but articulatory gestures. Having evolved exclusively in the service of language, they form a natural class, a phonetic modality. Being phonetic to begin with, they do not require to be made so by cognitive translation. And that, very simply, is the advantage of speech over writing/reading. Speech has the corollary advantage that it is managed by a module biologically adapted to circumvent limitations of tongue and ear by automatically coarticulating the constituent gestures and coping with the complex acoustic consequences. But a result is that awareness of phonetic structure is not normally a product of having learned to speak: The module "spells"--that is, sequences phonetic segments--for the speaker and recovers the segments for the listener, leaving both in the dark about the way that is done; the gestural representations are immediately phonetic in nature, precluding the cognitive translation that would bring them to notice; and coarticulation destroys all correspondence in segmentation between acoustic and phonetic structures, making it that much harder to demonstrate the alphabetic nature of speech at the acoustic surface. Accordingly, special difficulty in becoming literate might be caused by a weakness of the phonetic module, for that would produce primary representations of a fragile sort, with the consequence that they would be that much harder to bring to awareness--as is required if they are to serve writers and readers as the units of an alphabetic script--and also that much less able to bear the weight of working memory.


Subjects
Speech/physiology, Awareness, Humans, Language, Linguistics, Memory/physiology, Phonetics, Speech Acoustics, Speech Perception/physiology
15.
Percept Psychophys; 58(6): 857-70, 1996 Aug.
Article in English | MEDLINE | ID: mdl-8768181

ABSTRACT

The telling fact about duplex perception is that listeners integrate into a unitary phonetic percept signals that are coherent from a phonetic point of view, even though the signals are, on purely auditory grounds, separate sources. Here we explore the limits on the integration of a sinusoidal consonant cue (the F3 transition for [da] vs. [ga]) with the resonances of the remainder of the syllable. Perceiving duplexly, listeners hear the whistle of the sinusoid, but also the [da] and [ga] for which the sinusoid provides the critical information. In the first experiment, phonetic integration was significantly reduced, but not to zero, by a precursor that extended the transition cue forward in time so that it started 50 msec before the cue. The effect was the same above and below the duplexity threshold (the intensity of sinusoid in the combined pattern at which the whistle was just barely audible). In the second experiment, integration was reduced once again by the precursor, and also, but only below the duplexity threshold, by harmonics of the cues that were simultaneous with it. The third experiment showed that the simultaneous harmonics reduced phonetic integration only by serving as distractors while also permitting the conclusion that the precursor produced its effects by making the cue part of a coherent and competing auditory pattern, and so "capturing" it. The fourth experiment supported this interpretation by showing that for some subjects the amount of capture was reduced when the capturing tone was itself captured by being made part of a tonal complex. The results support the assumption that the independent phonetic system will integrate across disparate sources according to the cohesive power of that system as measured against the evidence for separate sources.


Subjects
Attention, Phonetics, Sound Localization, Speech Perception, Adult, Dichotic Listening Tests, Female, Humans, Male, Middle Aged, Psychoacoustics, Reaction Time
16.
J Acoust Soc Am; 65(6): 1518-32, 1979 Jun.
Article in English | MEDLINE | ID: mdl-489822

ABSTRACT

The results of several experiments demonstrate that silence is an important cue for the perception of stop-consonant and affricate manner. In some circumstances, silence is necessary; in others, it is sufficient. But silence is not the only cue to these manners. There are other cues that are more or less equivalent in their perceptual effects, though they are quite different acoustically. Finally, silence is effective as a cue when it separates utterances produced by male and female speakers. These findings are taken to imply that, in these instances, perception is constrained as if by some abstract conception of what vocal tracts do when they make linguistically significant gestures.


Subjects
Cues (Psychology), Phonetics, Speech Perception, Female, Humans, Male, Sound
17.
J Nerv Ment Dis; 189(7): 442-8, 2001 Jul.
Article in English | MEDLINE | ID: mdl-11504321

ABSTRACT

We studied 655 urban police officers (21% female, 48% white, 24% black, and 28% Hispanic) to assess ethnic and gender differences in duty-related symptoms of posttraumatic stress disorder (PTSD). We obtained self-report measures of: a) PTSD symptoms, b) peritraumatic dissociation, c) exposure to duty-related critical incidents, d) general psychiatric symptoms, e) response bias due to social desirability, and f) demographic variables. We found that self-identified Hispanic-American officers evidenced greater PTSD symptoms than both self-identified European-American and self-identified African-American officers. These effects were small in size but they persisted even after controlling for differences in other relevant variables. Contrary to expectation, we found no gender differences in PTSD symptoms. Our findings are of note because: a) they replicate a previous finding of greater PTSD among Hispanic-American military personnel and b) they fail to replicate the well-established finding of greater PTSD symptoms among civilian women.


Subjects
Ethnicity/statistics & numerical data, Police/statistics & numerical data, Stress Disorders, Post-Traumatic/epidemiology, Adult, Black or African American/psychology, Black or African American/statistics & numerical data, California/epidemiology, Comorbidity, Dissociative Disorders/diagnosis, Dissociative Disorders/epidemiology, Ethnicity/psychology, Female, Hispanic or Latino/psychology, Hispanic or Latino/statistics & numerical data, Humans, Life Change Events, Male, Mental Disorders/diagnosis, Mental Disorders/epidemiology, New York City/epidemiology, Personality Inventory/statistics & numerical data, Sex Factors, Social Desirability, Stress Disorders, Post-Traumatic/diagnosis, Urban Population/statistics & numerical data
18.
Psychol Sci; 11(1): 51-6, 2000 Jan.
Article in English | MEDLINE | ID: mdl-11228843

ABSTRACT

Converging evidence from neuroimaging studies of developmental dyslexia reveals dysfunction at posterior brain regions centered in and around the angular gyrus in the left hemisphere. We examined functional connectivity (covariance) between the angular gyrus and related occipital and temporal lobe sites, across a series of print tasks that systematically varied demands on phonological assembly. Results indicate that for dyslexic readers a disruption in functional connectivity in the language-dominant left hemisphere is confined to those tasks that make explicit demands on assembly. In contrast, on print tasks that do not require phonological assembly, functional connectivity is strong for both dyslexic and nonimpaired readers. The findings support the view that neurobiological anomalies in developmental dyslexia are largely confined to the phonological-processing domain. In addition, the findings suggest that right-hemisphere posterior regions serve a compensatory role in mediating phonological performance in dyslexic readers.


Subjects
Dyslexia/physiopathology, Parietal Lobe/physiology, Adolescent, Adult, Articulation Disorders/physiopathology, Case-Control Studies, Female, Functional Laterality, Humans, Male, Middle Aged, Task Performance and Analysis
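Side note (not part of the record above): "functional connectivity (covariance)" in that abstract refers to how strongly two regions' activity levels co-vary across tasks or subjects. A minimal sketch, with hypothetical region names and simulated data rather than the study's measurements:

```python
# Illustrative sketch only: "functional connectivity" computed as the covariance
# (and its normalized form, the correlation) between two regions' activity values.
import numpy as np

rng = np.random.default_rng(1)
angular_gyrus = rng.standard_normal(40)                      # hypothetical activity values
occipital     = 0.6 * angular_gyrus + 0.8 * rng.standard_normal(40)

cov  = np.cov(angular_gyrus, occipital)[0, 1]                # covariance
corr = np.corrcoef(angular_gyrus, occipital)[0, 1]           # normalized covariance
print(f"covariance = {cov:.3f}, correlation = {corr:.3f}")
# A lower correlation for dyslexic readers on phonological-assembly tasks would be
# read as disrupted functional connectivity, as described in the abstract above.
```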
19.
Proc Natl Acad Sci U S A; 95(5): 2636-41, 1998 Mar 03.
Article in English | MEDLINE | ID: mdl-9482939

ABSTRACT

Learning to read requires an awareness that spoken words can be decomposed into the phonologic constituents that the alphabetic characters represent. Such phonologic awareness is characteristically lacking in dyslexic readers who, therefore, have difficulty mapping the alphabetic characters onto the spoken word. To find the location and extent of the functional disruption in neural systems that underlies this impairment, we used functional magnetic resonance imaging to compare brain activation patterns in dyslexic and nonimpaired subjects as they performed tasks that made progressively greater demands on phonologic analysis. Brain activation patterns differed significantly between the groups with dyslexic readers showing relative underactivation in posterior regions (Wernicke's area, the angular gyrus, and striate cortex) and relative overactivation in an anterior region (inferior frontal gyrus). These results support a conclusion that the impairment in dyslexia is phonologic in nature and that these brain activation patterns may provide a neural signature for this impairment.


Subjects
Brain Mapping, Brain/pathology, Brain/physiopathology, Dyslexia/physiopathology, Reading, Brain/physiology, Dyslexia/pathology, Humans, Image Processing, Computer-Assisted, Language, Magnetic Resonance Imaging, Reference Values