Results 1 - 20 of 31

1.
Phonetica ; 77(3): 186-208, 2020.
Article in English | MEDLINE | ID: mdl-31018217

ABSTRACT

BACKGROUND/AIMS: This work examines the perception of the stop voicing contrast in Spanish and English along four acoustic dimensions, comparing monolingual and bilingual listeners. Our primary goals are to test the extent to which cue-weighting strategies are language-specific in monolinguals, and whether this language specificity extends to bilingual listeners. METHODS: Participants categorized sounds varying in voice onset time (VOT, the primary cue to the contrast) and three secondary cues: fundamental frequency at vowel onset, first formant (F1) onset frequency, and stop closure duration. Listeners heard acoustically identical target stimuli, within language-specific carrier phrases, in English and Spanish modes. RESULTS: While all listener groups used all cues, monolingual English listeners relied more on F1, and less on closure duration, than monolingual Spanish listeners, indicating language specificity in cue use. Early bilingual listeners used the three secondary cues similarly in English and Spanish, despite showing language-specific VOT boundaries. CONCLUSION: While our findings reinforce previous work demonstrating language-specific phonetic representations in bilinguals in terms of VOT boundary, they suggest that this specificity may not extend straightforwardly to cue-weighting strategies.
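Although the original analysis scripts are not part of this record, the logistic cue-weighting approach implied by the abstract (and by the "Logistic Models" subject heading) can be sketched as follows. This is a minimal illustration, not the authors' code; the CSV file and column names (vot, f0_onset, f1_onset, closure_dur, response) are hypothetical.

```python
# Minimal sketch: estimating relative cue weights for a stop voicing contrast
# from binary categorization responses with a logistic regression.
import pandas as pd
import statsmodels.formula.api as smf

# Each row is one trial; response coded 1 = "voiceless" (/p/), 0 = "voiced" (/b/).
trials = pd.read_csv("categorization_trials.csv")  # hypothetical file

# Standardize the four acoustic dimensions so coefficient magnitudes are
# comparable and can be read as relative cue weights.
for cue in ["vot", "f0_onset", "f1_onset", "closure_dur"]:
    trials[cue] = (trials[cue] - trials[cue].mean()) / trials[cue].std()

model = smf.logit("response ~ vot + f0_onset + f1_onset + closure_dur",
                  data=trials).fit()
print(model.params)  # larger |coefficient| -> heavier reliance on that cue
```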


Subjects
Cues (Psychology), Language, Multilingualism, Phonetics, Voice, Acoustic Stimulation, Adult, Humans, Logistic Models, Speech, Young Adult
2.
J Acoust Soc Am ; 137(1): EL65-70, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25618101

ABSTRACT

Speech perception studies generally focus on the acoustic information present in the frequency regions below 6 kHz. Recent evidence suggests that there is perceptually relevant information in the higher frequencies, including information affecting speech intelligibility. This experiment examined whether listeners are able to accurately identify a subset of vowels and consonants in CV-context when only high-frequency (above 5 kHz) acoustic information is available (through high-pass filtering and masking of lower frequency energy). The findings reveal that listeners are capable of extracting information from these higher frequency regions to accurately identify certain consonants and vowels.
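As a rough illustration of the kind of stimulus preparation the abstract describes, the sketch below high-pass filters a recording at 5 kHz so that only high-frequency energy remains. The file name and filter order are assumptions, and the low-frequency masking noise mentioned above would be added separately.

```python
# Illustrative sketch, not the study's stimulus-preparation code.
import soundfile as sf
from scipy.signal import butter, sosfiltfilt

signal, fs = sf.read("cv_syllable.wav")   # hypothetical recording
if signal.ndim > 1:                       # mix down to mono if stereo
    signal = signal.mean(axis=1)

# 8th-order Butterworth high-pass at 5 kHz, applied forward and backward
# (zero-phase) so the temporal structure of the syllable is preserved.
sos = butter(8, 5000, btype="highpass", fs=fs, output="sos")
hfe_only = sosfiltfilt(sos, signal)

sf.write("cv_syllable_hfe.wav", hfe_only, fs)
```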


Subjects
Phonetics, Pitch Perception/physiology, Speech Intelligibility/physiology, Acoustic Stimulation, Adult, Female, Humans, Male, Perceptual Masking/physiology, Speech Perception, Young Adult
3.
J Acoust Soc Am ; 135(1): 400-6, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24437780

ABSTRACT

Previous work has shown that human listeners are sensitive to level differences in high-frequency energy (HFE) in isolated vowel sounds produced by male singers. Results indicated that sensitivity to HFE level changes increased with overall HFE level, suggesting that listeners would be more "tuned" to HFE in vocal production exhibiting higher levels of HFE. It follows that sensitivity to HFE level changes should be higher (1) for female vocal production than for male vocal production and (2) for singing than for speech. To test this hypothesis, difference limens for HFE level changes in male and female speech and singing were obtained. Listeners showed significantly greater ability to detect level changes in singing vs speech but not in female vs male speech. Mean difference limen scores for speech and singing were about 5 dB in the 8-kHz octave (5.6-11.3 kHz) but 8-10 dB in the 16-kHz octave (11.3-22 kHz). These scores are lower (better) than those previously reported for isolated vowels and some musical instruments.


Subjects
Pitch Discrimination, Singing, Speech Acoustics, Speech Perception, Voice Quality, Acoustic Stimulation, Adult, Speech Audiometry, Female, Humans, Male, Psychoacoustics, Sex Factors, Sound Spectrography, Young Adult
4.
Behav Brain Sci ; 37(2): 204-5, 2014 Apr.
Article in English | MEDLINE | ID: mdl-24775161

ABSTRACT

Speech is commonly claimed to relate to mirror neurons because of the alluring surface analogy of mirror neurons to the Motor Theory of speech perception, which posits that perception and production draw upon common motor-articulatory representations. We argue that the analogy fails and highlight examples of systems-level developmental approaches that have been more fruitful in revealing perception-production associations.


Subjects
Biological Evolution, Brain/physiology, Learning/physiology, Mirror Neurons/physiology, Social Perception, Animals, Humans
5.
Psychol Sci ; 24(11): 2135-42, 2013 Nov 01.
Article in English | MEDLINE | ID: mdl-24022652

ABSTRACT

Bilinguals perceptually accommodate speech variation across languages, but the extent to which this flexibility depends on bilingual experience is uncertain. One account suggests that bilingual experience promotes language-specific processing modes, implying that bilinguals can switch as appropriate between the different phonetic systems of the languages they speak. Another account suggests that bilinguals rapidly recalibrate to the unique acoustic properties of each language following language-general processes common to monolinguals. Challenging this latter account, the present results show that Spanish-English bilinguals with exposure to both languages from early childhood, but not English monolinguals, shift perception as appropriate across acoustically controlled English and Spanish contexts. Early bilingual experience appears to promote language-specific phonetic systems.


Subjects
Multilingualism, Speech Perception/physiology, Adult, Humans, Phonetics, Psycholinguistics/methods, Random Allocation, Young Adult
6.
J Acoust Soc Am ; 132(3): 1754-64, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22978902

ABSTRACT

The human singing and speech spectrum includes energy above 5 kHz. To begin an in-depth exploration of this high-frequency energy (HFE), a database of anechoic high-fidelity recordings of singers and talkers was created and analyzed. Third-octave band analysis from the long-term average spectra showed that production level (soft vs normal vs loud), production mode (singing vs speech), and phoneme (for voiceless fricatives) all significantly affected HFE characteristics. Specifically, increased production level caused an increase in absolute HFE level, but a decrease in relative HFE level. Singing exhibited higher levels of HFE than speech in the soft and normal conditions, but not in the loud condition. Third-octave band levels distinguished phoneme class of voiceless fricatives. Female HFE levels were significantly greater than male levels only above 11 kHz. This information is pertinent to various areas of acoustics, including vocal tract modeling, voice synthesis, augmentative hearing technology (hearing aids and cochlear implants), and training/therapy for singing and speech.
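The third-octave band analysis of the long-term average spectrum described above can be approximated as in the sketch below. The band centers, file name, and PSD settings are illustrative assumptions, not the authors' parameters, and the bands are assumed to lie below the Nyquist frequency.

```python
# Rough sketch of a third-octave band analysis of the long-term average spectrum (LTAS).
import numpy as np
import soundfile as sf
from scipy.signal import welch

signal, fs = sf.read("sustained_phrase.wav")  # hypothetical recording
if signal.ndim > 1:
    signal = signal.mean(axis=1)

freqs, psd = welch(signal, fs=fs, nperseg=4096)  # long-term average spectrum estimate

def third_octave_level_db(center):
    """Integrate the PSD over one third-octave band and return its level in dB."""
    lo, hi = center / 2 ** (1 / 6), center * 2 ** (1 / 6)
    band = (freqs >= lo) & (freqs < hi)
    return 10 * np.log10(np.trapz(psd[band], freqs[band]))

# Nominal third-octave centers covering the high-frequency energy region (>5 kHz).
for fc in [6300, 8000, 10000, 12500, 16000, 20000]:
    print(fc, "Hz:", round(third_octave_level_db(fc), 1), "dB")
```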


Subjects
Singing, Speech Acoustics, Voice, Adult, Aged, Analysis of Variance, Female, Humans, Male, Middle Aged, Sex Factors, Computer-Assisted Signal Processing, Sound Spectrography, Speech Production Measurement, Voice Quality, Young Adult
7.
J Acoust Soc Am ; 129(4): 2263-8, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21476681

ABSTRACT

The human voice spectrum above 5 kHz receives little attention. However, there are reasons to believe that this high-frequency energy (HFE) may play a role in perceived quality of voice in singing and speech. To fulfill this role, differences in HFE must first be detectable. To determine human ability to detect differences in HFE, the levels of the 8- and 16-kHz center-frequency octave bands were individually attenuated in sustained vowel sounds produced by singers and presented to listeners. Relatively small changes in HFE were in fact detectable, suggesting that this frequency range potentially contributes to perception, especially of the singing voice. Detection ability was greater in the 8-kHz octave than in the 16-kHz octave and varied with band energy level.
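A hedged sketch of how one octave band might be attenuated to create stimuli of the kind described above follows; the band edges match the octave definitions used elsewhere in these abstracts (5.6-11.3 kHz for the 8-kHz octave), while the file name and attenuation step are assumptions.

```python
# Illustrative octave-band attenuation via spectral scaling; not the authors' code.
import numpy as np
import soundfile as sf

vowel, fs = sf.read("sustained_vowel.wav")  # hypothetical recording
if vowel.ndim > 1:                          # mix down to mono if stereo
    vowel = vowel.mean(axis=1)

def attenuate_band(signal, fs, f_lo, f_hi, atten_db):
    """Scale the spectrum between f_lo and f_hi down by atten_db decibels."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    band = (freqs >= f_lo) & (freqs < f_hi)
    spectrum[band] *= 10 ** (-atten_db / 20)
    return np.fft.irfft(spectrum, n=len(signal))

# Attenuate the 8-kHz octave (5.6-11.3 kHz) by 6 dB as one step of a level series.
modified = attenuate_band(vowel, fs, 5600, 11300, 6.0)
sf.write("vowel_8k_minus6dB.wav", modified, fs)
```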


Subjects
Auditory Perception/physiology, Music, Phonation/physiology, Voice/physiology, Adult, Auditory Threshold/physiology, Female, Humans, Male, Middle Aged, Phonetics, Pressure, Sound Spectrography, Voice Training, Young Adult
8.
Trends Cogn Sci ; 13(3): 110-4, 2009 Mar.
Article in English | MEDLINE | ID: mdl-19223222

ABSTRACT

The discovery of mirror neurons, a class of neurons that respond when a monkey performs an action and also when the monkey observes others producing the same action, has promoted a renaissance for the Motor Theory (MT) of speech perception. This is because mirror neurons seem to accomplish the same kind of one-to-one mapping between perception and action that MT theorizes to be the basis of human speech communication. However, this seeming correspondence is superficial, and there are theoretical and empirical reasons to temper enthusiasm about the explanatory role mirror neurons might have for speech perception. In fact, rather than providing support for MT, mirror neurons are actually inconsistent with the central tenets of MT.


Subjects
Imitative Behavior/physiology, Motor Activity/physiology, Neurons/physiology, Speech Perception/physiology, Visual Perception/physiology, Animals, Humans, Motor Cortex/cytology, Motor Cortex/physiology, Neurons/classification, Psychomotor Performance/physiology, Reaction Time/physiology
9.
J Speech Lang Hear Res ; 52(5): 1334-52, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19717656

ABSTRACT

PURPOSE: In this study, the authors examined whether rhythm metrics capable of distinguishing languages with high and low temporal stress contrast also can distinguish among control and dysarthric speakers of American English with perceptually distinct rhythm patterns. METHODS: Acoustic measures of vocalic and consonantal segment durations were obtained for speech samples from 55 speakers across 5 groups (hypokinetic, hyperkinetic, flaccid-spastic, ataxic dysarthrias, and controls). Segment durations were used to calculate standard and new rhythm metrics. Discriminant function analyses (DFAs) were used to determine which sets of predictor variables (rhythm metrics) best discriminated between groups (control vs. dysarthrias, and among the 4 dysarthrias). A cross-validation method was used to test the robustness of each original DFA. RESULTS: The majority of classification functions were more than 80% successful in classifying speakers into their appropriate group. New metrics that combined successive vocalic and consonantal segments emerged as important predictor variables. DFAs pitting each dysarthria group against the combined others resulted in unique constellations of predictor variables that yielded high levels of classification accuracy. CONCLUSIONS: This study confirms the ability of rhythm metrics to distinguish control speech from dysarthrias and to discriminate dysarthria subtypes. Rhythm metrics show promise for use as a rational and objective clinical tool.
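For readers unfamiliar with the analysis, a discriminant function analysis with cross-validation over rhythm metrics can be set up roughly as below. The metric names and data file are hypothetical, and leave-one-out cross-validation stands in as a generic scheme rather than the authors' exact procedure.

```python
# Illustrative only: discriminant analysis of rhythm metrics with cross-validation.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

data = pd.read_csv("rhythm_metrics.csv")           # hypothetical: one row per speaker
metrics = ["percent_v", "delta_c", "varco_v", "npvi_v", "npvi_cv"]
X, y = data[metrics], data["group"]                # group: control or dysarthria subtype

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=LeaveOneOut())
print("Cross-validated classification accuracy:", scores.mean())
```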


Subjects
Dysarthria/diagnosis, Dysarthria/physiopathology, Speech Articulation Tests, Speech/physiology, Analysis of Variance, Ataxia/diagnosis, Ataxia/physiopathology, Humans, Language, Predictive Value of Tests, Speech Acoustics, Time Factors
10.
Cognition ; 182: 318-330, 2019 01.
Article in English | MEDLINE | ID: mdl-30415133

ABSTRACT

Bilinguals understand when the communication context calls for speaking a particular language and can switch from speaking one language to speaking the other based on such conceptual knowledge. There is disagreement regarding whether conceptually-based language selection is also possible in the listening modality. For example, can bilingual listeners perceptually adjust to changes in pronunciation across languages based on their conceptual understanding of which language they're currently hearing? We asked French- and Spanish-English bilinguals to identify nonsense monosyllables as beginning with /b/ or /p/, speech categories that French and Spanish speakers pronounce differently than English speakers. We conceptually cued each bilingual group to one or the other of their two languages by explicitly instructing them that the speech items were word onsets in that language, uttered by a native speaker thereof. Both groups adjusted their /b-p/ identification boundary as a function of this conceptual cue to the language context. These results support a bilingual model permitting conceptually-based language selection on both the speaking and listening end of a communicative exchange.
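The identification boundary referred to above is typically estimated from a logistic fit to the categorization responses. The sketch below shows the idea with made-up data: the boundary is the VOT value at which the fitted probability of a /p/ response crosses 50%. This is an illustration, not the authors' analysis.

```python
# Estimating a /b/-/p/ identification boundary from a simple logistic fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

vot_steps = np.array([0, 5, 10, 15, 20, 25, 30, 35, 40])   # ms; illustrative continuum
p_responses = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1])        # 1 = listener heard /p/

fit = LogisticRegression().fit(vot_steps.reshape(-1, 1), p_responses)
boundary_ms = -fit.intercept_[0] / fit.coef_[0][0]          # 50% crossover point
print(f"Identification boundary at {boundary_ms:.1f} ms VOT")
```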


Subjects
Multilingualism, Psycholinguistics, Speech Perception/physiology, Adult, Cues (Psychology), Humans, Young Adult
11.
J Acoust Soc Am ; 124(3): 1695-703, 2008 Sep.
Article in English | MEDLINE | ID: mdl-19045660

ABSTRACT

Williams [(1986). "Role of dynamic information in the perception of coarticulated vowels," Ph.D. thesis, University of Connecticut, Storrs, CT] demonstrated that nonspeech contexts had no influence on pitch judgments of nonspeech targets, whereas context effects were obtained when listeners were instructed to perceive the sounds as speech. On the other hand, Holt et al. [(2000). "Neighboring spectral content influences vowel identification," J. Acoust. Soc. Am. 108, 710-722] showed that nonspeech contexts were sufficient to elicit context effects in speech targets. The current study tested a hypothesis that could explain the varying effectiveness of nonspeech contexts: Context effects are obtained only when there are well-established perceptual categories for the target stimuli. Experiment 1 examined context effects in speech and nonspeech signals using four series of stimuli: steady-state vowels that perceptually spanned from /ʊ/ to /ɪ/ in isolation and in the context of /w/ (with no steady-state portion) and two nonspeech sine-wave series that mimicked the acoustics of the speech series. In agreement with previous work, context effects were obtained for speech contexts and targets but not for nonspeech analogs. Experiment 2 tested predictions of the hypothesis by testing for nonspeech context effects after the listeners had been trained to categorize the sounds. Following training, context-dependent categorization was obtained for nonspeech stimuli in the training group. These results are presented within a general perceptual-cognitive framework for speech perception research.


Subjects
Auditory Perception, Cues (Psychology), Signal Detection (Psychological), Sound, Speech Acoustics, Speech Perception, Acoustic Stimulation, Adult, Auditory Threshold, Cognition, Humans, Pitch Perception, Sound Spectrography, Speech Discrimination Tests, Time Factors, Young Adult
12.
J Speech Lang Hear Res ; 50(1): 83-96, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17344550

ABSTRACT

Given the interest in comparing speech production development in children with normal hearing and hearing impairment, it is important to evaluate how variables within speech elicitation tasks can differentially affect the acoustics of speech production for these groups. In a first experiment, children (6-14 years old) with cochlear implants produced a set of monosyllabic words either in isolation or while simultaneously signing the word. Acoustical analyses indicated no change in word duration, voice onset time, intensity, or fundamental frequency between isolated and simultaneous signing conditions. In a second experiment, the same children verbally repeated words that were signed by a video model. The model either signed with inflection or without. Words repeated after inflected models were higher in fundamental frequency and intensity and were more intelligible. In addition, children with poorer speech perception skills sometimes produced the monosyllables as 2 syllables, but this only occurred for words that had multiple sign movements. The results have implications for the comparison of speech development between children with normal hearing and those with hearing impairment.


Subjects
Cochlear Implants, Photic Stimulation, Speech Intelligibility, Verbal Behavior, Adolescent, Child, Female, Humans, Male, Sound Spectrography, Speech Acoustics, Speech Production Measurement
13.
J Voice ; 29(2): 140-7, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25532813

ABSTRACT

OBJECTIVES/HYPOTHESIS: Sources of vocal tremor are difficult to categorize perceptually and acoustically. This article describes a preliminary attempt to discriminate vocal tremor sources through the use of spectral measures of the amplitude envelope. The hypothesis is that different vocal tremor sources are associated with distinct patterns of acoustic amplitude modulations. STUDY DESIGN: Statistical categorization methods (discriminant function analysis) were used to discriminate signals from simulated vocal tremor with different sources using only acoustic measures derived from the amplitude envelopes. METHODS: Simulations of vocal tremor were created by modulating parameters of a vocal fold model corresponding to oscillations of respiratory driving pressure (respiratory tremor), degree of vocal fold adduction (adductory tremor), and fundamental frequency of vocal fold vibration (F0 tremor). The acoustic measures were based on spectral analyses of the amplitude envelope computed across the entire signal and within select frequency bands. RESULTS: The signals could be categorized (with accuracy well above chance) in terms of the simulated tremor source using only measures of the amplitude envelope spectrum even when multiple sources of tremor were included. CONCLUSIONS: These results supply initial support for an amplitude-envelope-based approach to identify the source of vocal tremor and provide further evidence for the rich information about talker characteristics present in the temporal structure of the amplitude envelope.
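A minimal sketch of the kind of amplitude-envelope measure described above: extract the envelope with the Hilbert transform and examine its spectrum, where a peak at a few hertz would reflect tremor-rate modulation. The file name is an assumption and this is not the authors' implementation.

```python
# Spectrum of the acoustic amplitude envelope; illustrative only.
import numpy as np
import soundfile as sf
from scipy.signal import hilbert, welch

voice, fs = sf.read("sustained_vowel_tremor.wav")  # hypothetical recording
if voice.ndim > 1:
    voice = voice.mean(axis=1)

envelope = np.abs(hilbert(voice))   # amplitude envelope
envelope -= envelope.mean()         # remove DC before spectral analysis

# Long analysis window so modulation frequencies of a few hertz are resolved.
mod_freqs, env_psd = welch(envelope, fs=fs, nperseg=fs * 4)
mask = (mod_freqs > 1) & (mod_freqs < 20)
peak_hz = mod_freqs[mask][np.argmax(env_psd[mask])]
print(f"Dominant modulation frequency: {peak_hz:.2f} Hz")
```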


Subjects
Speech Acoustics, Vocal Cords/physiopathology, Voice Disorders/physiopathology, Voice Quality/physiology, Humans, Speech Production Measurement/methods
14.
Health Psychol ; 22(1): 3-11, 2003 Jan.
Article in English | MEDLINE | ID: mdl-12558196

ABSTRACT

In 2 studies, the relation between measures of self-assessed health (SAH) and automatic processing of health-relevant information was investigated. In Study 1, 84 male and 86 female undergraduate students completed a modified Stroop task. Results indicated that participants with poorer SAH showed enhanced interference effects for illness versus non-illness words. In Study 2, 27 male and 30 female undergraduate students completed a self-referent encoding task. Results offered a conceptual replication and extension of Study 1 by confirming the specificity of the relation between SAH measures and automatic processing of health (vs. negative or positive general trait) information. These studies provide evidence that individual differences in SAH are reflected in schematic processing of health-relevant information.


Subjects
Cognition, Health Status, Mental Processes, Adolescent, Adult, Female, Humans, Male, Self-Assessment (Psychology)
15.
Hear Res ; 167(1-2): 156-69, 2002 May.
Article in English | MEDLINE | ID: mdl-12117538

ABSTRACT

One of the central findings of speech perception is that identical acoustic signals can be perceived as different speech sounds depending on adjacent speech context. Although these phonetic context effects are ubiquitous in speech perception, their neural mechanisms remain largely unknown. The present work presents a review of recent data suggesting that spectral content of speech mediates phonetic context effects and argues that these effects are likely to be governed by general auditory processes. A descriptive framework known as spectral contrast is presented as a means of interpreting these findings. Finally, and most centrally, four behavioral experiments that begin to delineate the level of the auditory system at which interactions among stimulus components occur are described. Two of these experiments investigate the influence of diotic versus dichotic presentation upon two phonetic context effects. Results indicate that context effects remain even when context is presented to the ear contralateral to that of the target syllable. The other two experiments examine the time course of phonetic context effects by manipulating the silent interval between context and target syllables. These studies reveal that phonetic context effects persist for hundreds of milliseconds. Results are interpreted in terms of auditory mechanism with particular attention to the putative link between auditory enhancement and phonetic context effects.


Subjects
Speech Perception/physiology, Acoustic Stimulation, Behavior, Humans, Phonetics, Psychoacoustics
16.
J Speech Lang Hear Res ; 57(5): 1619-37, 2014 Oct.
Article in English | MEDLINE | ID: mdl-24845730

ABSTRACT

PURPOSE: Computational modeling was used to examine the consequences of 5 different laryngeal asymmetries on acoustic and perceptual measures of vocal function. METHOD: A kinematic vocal fold model was used to impose 5 laryngeal asymmetries: adduction, edge bulging, nodal point ratio, amplitude of vibration, and starting phase. Thirty /a/ and /ɪ/ vowels were generated for each asymmetry and analyzed acoustically using cepstral peak prominence (CPP), harmonics-to-noise ratio (HNR), and 3 measures of spectral slope (H1*-H2*, B0-B1, and B0-B2). Twenty listeners rated voice quality for a subset of the productions. RESULTS: Increasingly asymmetric adduction, bulging, and nodal point ratio explained significant variance in perceptual rating (R2 = .05, p < .001). The same factors resulted in generally decreasing CPP, HNR, and B0-B2 and in increasing B0-B1. Of the acoustic measures, only CPP explained significant variance in perceived quality (R2 = .14, p < .001). Increasingly asymmetric amplitude of vibration or starting phase minimally altered vocal function or voice quality. CONCLUSION: Asymmetries of adduction, bulging, and nodal point ratio drove acoustic measures and perception in the current study, whereas asymmetric amplitude of vibration and starting phase demonstrated minimal influence on the acoustic signal or voice quality.
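Cepstral peak prominence, the acoustic measure that carried most of the perceptual variance above, can be computed in several slightly different ways. The sketch below shows one common formulation, the height of the cepstral peak above a regression line fitted to the cepstrum; the quefrency search range, windowing choices, and file name are assumptions, not the authors' procedure.

```python
# One illustrative formulation of cepstral peak prominence (CPP).
import numpy as np
import soundfile as sf

vowel, fs = sf.read("synthesized_vowel.wav")   # hypothetical recording
if vowel.ndim > 1:
    vowel = vowel.mean(axis=1)

n = len(vowel)
log_mag = 20 * np.log10(np.abs(np.fft.fft(vowel * np.hanning(n))) + 1e-12)
cepstrum = np.abs(np.fft.ifft(log_mag))        # real cepstrum (magnitude)
quefrency = np.arange(n) / fs                  # seconds

# Restrict to quefrencies corresponding to a plausible pitch range (60-300 Hz).
mask = (quefrency > 1 / 300) & (quefrency < 1 / 60)
cep_db = 20 * np.log10(cepstrum[mask] + 1e-12)
q = quefrency[mask]

slope, intercept = np.polyfit(q, cep_db, 1)    # regression line over the search range
peak_i = np.argmax(cep_db)
cpp_db = cep_db[peak_i] - (slope * q[peak_i] + intercept)
print(f"CPP ~ {cpp_db:.1f} dB")
```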


Subjects
Larynx/physiopathology, Speech Perception/physiology, Speech/physiology, Vocal Cord Paralysis/physiopathology, Adolescent, Adult, Aged, Computer Simulation, Female, Humans, Male, Middle Aged, Signal-to-Noise Ratio, Speech Acoustics, Vibration, Vocal Cords/physiopathology, Young Adult
17.
Front Psychol ; 5: 1239, 2014.
Article in English | MEDLINE | ID: mdl-25400613

ABSTRACT

Humans routinely produce acoustical energy at frequencies above 6 kHz during vocalization, but this frequency range is often not represented in communication devices and speech perception research. Recent advancements toward high-definition (HD) voice and extended bandwidth hearing aids have increased the interest in the high frequencies. The potential perceptual information provided by high-frequency energy (HFE) is not well characterized. We found that humans can accomplish tasks of gender discrimination and vocal production mode discrimination (speech vs. singing) when presented with acoustic stimuli containing only HFE at both amplified and normal levels. Performance in these tasks was robust in the presence of low-frequency masking noise. No substantial learning effect was observed. Listeners also were able to identify the sung and spoken text (excerpts from "The Star-Spangled Banner") with very few exposures. These results add to the increasing evidence that the high frequencies provide at least redundant information about the vocal signal, suggesting that its representation in communication devices (e.g., cell phones, hearing aids, and cochlear implants) and speech/voice synthesizers could improve these devices and benefit normal-hearing and hearing-impaired listeners.

18.
Front Psychol ; 5: 587, 2014.
Article in English | MEDLINE | ID: mdl-24982643

ABSTRACT

While human vocalizations generate acoustical energy at frequencies up to (and beyond) 20 kHz, the energy at frequencies above about 5 kHz has traditionally been neglected in speech perception research. The intent of this paper is to review (1) the historical reasons for this research trend and (2) the work that continues to elucidate the perceptual significance of high-frequency energy (HFE) in speech and singing. The historical and physical factors reveal that, while HFE was believed to be unnecessary and/or impractical for applications of interest, it was never shown to be perceptually insignificant. Rather, the main reasons for the focus on low-frequency energy appear to be that the low-frequency portion of the speech spectrum was seen as sufficient (from a perceptual standpoint), or that the difficulty of HFE research was too great to be justifiable (from a technological standpoint). The advancement of technology continues to overcome concerns stemming from the latter reason. Likewise, advances in our understanding of the perceptual effects of HFE now cast doubt on the former. Emerging evidence indicates that HFE plays a more significant role than previously believed, and should thus be considered in speech and voice perception research, especially in research involving children and the hearing impaired.

19.
Front Psychol ; 4: 399, 2013.
Article in English | MEDLINE | ID: mdl-23847573

ABSTRACT

It is well-established that listeners will shift their categorization of a target vowel as a function of acoustic characteristics of a preceding carrier phrase (CP). These results have been interpreted as an example of perceptual normalization for variability resulting from differences in talker anatomy. The present study examined whether listeners would normalize for acoustic variability resulting from differences in speaking style within a single talker. Two vowel series were synthesized that varied between central and peripheral vowels (the vowels in "beat"-"bit" and "bod"-"bud"). Each member of the series was appended to one of four CPs that were spoken in either a "clear" or "reduced" speech style. Participants categorized vowels in these eight contexts. A reliable shift in categorization as a function of speaking style was obtained for three of four phrase sets. This demonstrates that phrase context effects can be obtained with a single talker. However, the directions of the obtained shifts were not reliably predicted by the speaking style of the talker. Instead, it appears that the effect is determined by an interaction of the average spectrum of the phrase with the target vowel.

20.
Front Psychol ; 3: 203, 2012.
Article in English | MEDLINE | ID: mdl-22737140

ABSTRACT

Voices have unique acoustic signatures, contributing to the acoustic variability listeners must contend with in perceiving speech, and it has long been proposed that listeners normalize speech perception to information extracted from a talker's speech. Initial attempts to explain talker normalization relied on extraction of articulatory referents, but recent studies of context-dependent auditory perception suggest that general auditory referents such as the long-term average spectrum (LTAS) of a talker's speech similarly affect speech perception. The present study aimed to differentiate the contributions of articulatory/linguistic versus auditory referents for context-driven talker normalization effects and, more specifically, to identify the specific constraints under which such contexts impact speech perception. Synthesized sentences manipulated to sound like different talkers influenced categorization of a subsequent speech target only when differences in the sentences' LTAS were in the frequency range of the acoustic cues relevant for the target phonemic contrast. This effect was true both for speech targets preceded by spoken sentence contexts and for targets preceded by non-speech tone sequences that were LTAS-matched to the spoken sentence contexts. Specific LTAS characteristics, rather than perceived talker, predicted the results, suggesting that general auditory mechanisms play an important role in effects considered to be instances of perceptual talker normalization.
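As an illustration of LTAS matching, the sketch below shapes white noise so that its spectrum follows the long-term average spectrum of a spoken sentence. The study itself used LTAS-matched tone sequences; noise is substituted here purely for brevity, and the file names and filter settings are assumptions.

```python
# Generate a non-speech context spectrally matched to a sentence's LTAS; illustrative only.
import numpy as np
import soundfile as sf
from scipy.signal import welch, firwin2

sentence, fs = sf.read("carrier_sentence.wav")  # hypothetical recording
if sentence.ndim > 1:
    sentence = sentence.mean(axis=1)

freqs, psd = welch(sentence, fs=fs, nperseg=2048)   # LTAS estimate (0 .. fs/2)
gains = np.sqrt(psd / psd.max())                    # amplitude response to impose

# FIR filter whose frequency response follows the LTAS, applied to white noise.
taps = firwin2(1025, freqs, gains, fs=fs)
noise = np.random.default_rng(0).standard_normal(len(sentence))
ltas_matched = np.convolve(noise, taps, mode="same")

sf.write("ltas_matched_context.wav", ltas_matched / np.abs(ltas_matched).max(), fs)
```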
