Results 1 - 20 of 36
1.
J Acoust Soc Am ; 153(1): 68, 2023 01.
Article in English | MEDLINE | ID: mdl-36732227

ABSTRACT

Intelligibility measures, which assess the number of words or phonemes a listener correctly transcribes or repeats, are commonly used metrics for speech perception research. While these measures have many benefits for researchers, they also come with a number of limitations. By pointing out the strengths and limitations of this approach, including how it fails to capture aspects of perception such as listening effort, this article argues that the role of intelligibility measures must be reconsidered in fields such as linguistics, communication disorders, and psychology. Recommendations for future work in this area are presented.


Subjects
Speech Perception, Speech Intelligibility, Linguistics, Cognition
2.
J Acoust Soc Am ; 154(6): 3973-3985, 2023 12 01.
Article in English | MEDLINE | ID: mdl-38149818

ABSTRACT

Face masks offer essential protection but also interfere with speech communication. Here, audio-only sentences spoken through four types of masks were presented in noise to young adult listeners. Pupil dilation (an index of cognitive demand), intelligibility, and subjective effort and performance ratings were collected. Dilation increased in response to each mask relative to the no-mask condition and differed significantly where acoustic attenuation was most prominent. These results suggest that the acoustic impact of the mask drives not only the intelligibility of speech, but also the cognitive demands of listening. Subjective effort ratings reflected the same trends as the pupil data.


Subjects
Masks, Speech Perception, Young Adult, Humans, Speech Intelligibility/physiology, Noise/adverse effects, Pupil/physiology, Cognition, Speech Perception/physiology
3.
J Acoust Soc Am ; 152(6): 3216, 2022 12.
Article in English | MEDLINE | ID: mdl-36586857

ABSTRACT

Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.


Subjects
Illusions, Speech Perception, Humans, Visual Perception, Language, Speech, Auditory Perception, Photic Stimulation, Acoustic Stimulation
4.
J Acoust Soc Am ; 150(6): 4103, 2021 12.
Article in English | MEDLINE | ID: mdl-34972309

ABSTRACT

Although unfamiliar accents can pose word identification challenges for children and adults, few studies have directly compared perception of multiple nonnative and regional accents or quantified how the extent of deviation from the ambient accent impacts word identification accuracy across development. To address these gaps, 5- to 7-year-old children's and adults' word identification accuracy with native (Midland American, British, Scottish), nonnative (German-, Mandarin-, Japanese-accented English) and bilingual (Hindi-English) varieties (one talker per accent) was tested in quiet and noise. Talkers' pronunciation distance from the ambient dialect was quantified at the phoneme level using a Levenshtein algorithm adaptation. Whereas performance was worse on all non-ambient dialects than the ambient one, there were only interactions between talker and age (child vs adult or across age for the children) for a subset of talkers, which did not fall along the native/nonnative divide. Levenshtein distances significantly predicted word recognition accuracy for adults and children in both listening environments with similar impacts in quiet. In noise, children had more difficulty overcoming pronunciations that substantially deviated from ambient dialect norms than adults. Future work should continue investigating how pronunciation distance impacts word recognition accuracy by incorporating distance metrics at other levels of analysis (e.g., phonetic, suprasegmental).
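
To make the distance metric concrete: the study's measure is an adaptation of the Levenshtein algorithm applied to phoneme-level transcriptions. The sketch below shows a plain, unweighted Levenshtein computation over phoneme sequences as a baseline illustration; the transcriptions are invented for the example, and the published adaptation may weight or normalize edits differently.

```python
def levenshtein(a, b):
    """Minimum number of insertions, deletions, and substitutions
    needed to turn phoneme sequence `a` into phoneme sequence `b`."""
    # dp[i][j] = edit distance between a[:i] and b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        dp[i][0] = i
    for j in range(len(b) + 1):
        dp[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(a)][len(b)]

# Hypothetical ARPAbet-style transcriptions of one word in two accents.
ambient = ["B", "AE", "TH"]
talker = ["B", "AA", "TH"]
raw = levenshtein(ambient, talker)                   # -> 1
normalized = raw / max(len(ambient), len(talker))    # length-normalized distance
print(raw, round(normalized, 2))
```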


Subjects
Speech Perception, Adult, Auditory Perception, Child, Preschool Child, Humans, Language, Noise, Phonetics
5.
J Acoust Soc Am ; 147(2): EL151, 2020 02.
Article in English | MEDLINE | ID: mdl-32113314

ABSTRACT

Unfamiliar second-language (L2) accents present a common challenge to speech understanding. However, the extent to which accurately recognized unfamiliar L2-accented speech imposes a greater cognitive load than native speech remains unclear. The current study used pupillometry to assess cognitive load for native English listeners during the perception of intelligible Mandarin Chinese-accented English and American-accented English. Results showed greater pupil response (indicating greater cognitive load) for the unfamiliar L2-accented speech. These findings indicate that the mismatches between unfamiliar L2-accented speech and native listeners' linguistic representations impose greater cognitive load even when recognition accuracy is at ceiling.

6.
Behav Res Methods ; 52(4): 1795-1799, 2020 08.
Article in English | MEDLINE | ID: mdl-31993960

ABSTRACT

In everyday language processing, sentence context affects how readers and listeners process upcoming words. In experimental situations, it can be useful to identify words that are predicted to greater or lesser degrees by the preceding context. Here we report completion norms for 3085 English sentences, collected online using a written cloze procedure in which participants were asked to provide their best guess for the word completing a sentence. Sentences varied between eight and ten words in length. At least 100 unique participants contributed to each sentence. All responses were reviewed by human raters to mitigate the influence of misspellings and typographical errors. The responses provide a range of predictability values for 13,438 unique target words, 6790 of which appear in more than one sentence context. We also provide entropy values based on the relative predictability of multiple responses. A searchable set of norms is available at http://sentencenorms.net. Finally, we provide the code used to collate and organize the responses to facilitate additional analyses and future research projects.
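
As a concrete illustration of the cloze-probability and entropy measures described above, the following minimal sketch computes both for a single hypothetical sentence frame; the responses are invented, not drawn from the published norms.

```python
from collections import Counter
import math

# Hypothetical completions for one sentence frame,
# e.g., "She unlocked the door with her ___"
responses = ["key"] * 80 + ["keys"] * 12 + ["card"] * 5 + ["elbow"] * 3

counts = Counter(responses)
total = sum(counts.values())

# Cloze probability: proportion of participants producing each completion.
cloze = {word: n / total for word, n in counts.items()}

# Shannon entropy (bits) over the response distribution:
# low entropy = one dominant completion, high entropy = many competitors.
entropy = -sum(p * math.log2(p) for p in cloze.values())

print(cloze["key"])       # 0.8
print(round(entropy, 2))  # ~0.99 bits
```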


Subjects
Comprehension, Language, Humans
7.
J Acoust Soc Am ; 143(5): 3138, 2018 05.
Article in English | MEDLINE | ID: mdl-29857746

ABSTRACT

The goal of this study was to determine how noise affects listeners' subjective judgments of foreign-accented speech and how those judgments relate to the intelligibility of foreign-accented talkers. Fifty native English listeners heard native Mandarin speakers and native English speakers producing English sentences in quiet and in three levels of noise. Participants judged the accent of each speaker on a scale from 1 (native-like) to 9 (foreign). The results show that foreign-accented talkers were rated as less accented in the presence of noise, and that, while lower talker intelligibility was generally associated with higher (more foreign) accent ratings, the presence of noise significantly attenuated this relationship. In contrast, noise increased accent ratings and strengthened the relationship between intelligibility and accent ratings for native talkers. These findings indicate that, by obscuring the acoustic-phonetic cues that listeners use to judge accents, noise makes listeners less confident about the foreign (or native) status of a given talker.


Subjects
Judgment/physiology, Noise, Perceptual Masking/physiology, Speech Intelligibility/physiology, Speech Perception/physiology, Female, Humans, Male, Young Adult
8.
J Acoust Soc Am ; 142(2): 1067, 2017 08.
Article in English | MEDLINE | ID: mdl-28863602

ABSTRACT

This study investigated whether clear speech reduces the cognitive demands of lexical competition by crossing speaking style with lexical difficulty. Younger and older adults identified more words in clear versus conversational speech and more easy words than hard words. An initial analysis suggested that the effect of lexical difficulty was reduced in clear speech, but more detailed analyses within each age group showed this interaction was significant only for older adults. The results also showed that both groups improved over the course of the task and that clear speech was particularly helpful for individuals with poorer hearing: for younger adults, clear speech eliminated hearing-related differences that affected performance on conversational speech. For older adults, clear speech was generally more helpful to listeners with poorer hearing. These results suggest that clear speech affords perceptual benefits to all listeners and, for older adults, mitigates the cognitive challenge associated with identifying words with many phonological neighbors.


Subjects
Phonetics, Recognition (Psychology), Speech Acoustics, Speech Intelligibility, Speech Perception, Voice Quality, Acoustic Stimulation, Adolescent, Age Factors, Aged, Aged 80 and over, Pure-Tone Audiometry, Speech Audiometry, Auditory Threshold, Cognition, Female, Humans, Male, Young Adult
9.
Exp Aging Res ; 42(1): 97-111, 2016.
Article in English | MEDLINE | ID: mdl-26683044

ABSTRACT

BACKGROUND/STUDY CONTEXT: A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects. METHODS: The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail. RESULTS: Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not. CONCLUSION: The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
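
For readers unfamiliar with the degradation method, the sketch below implements a generic channel noise vocoder: band-pass filter the signal into channels, extract each band's temporal envelope, and use it to modulate band-limited noise. It is a simplified stand-in rather than the study's exact algorithm, and the filter order, band edges, and test signal are assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=16, lo=100.0, hi=8000.0):
    """Generic noise vocoder: discards spectral fine structure while
    preserving the temporal envelope within each frequency band."""
    hi = min(hi, 0.45 * fs)                       # keep band edges below Nyquist
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges (assumed spacing)
    noise = np.random.randn(len(signal))
    out = np.zeros(len(signal))
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))          # temporal envelope of this band
        carrier = sosfiltfilt(sos, noise)         # band-limited noise carrier
        out += envelope * carrier
    return out / np.max(np.abs(out))              # normalize to avoid clipping

# Stand-in input signal (a real study would use recorded sentences):
fs = 22050
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 150 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
degraded = noise_vocode(speech, fs, n_channels=16)
```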


Subjects
Short-Term Memory, Mental Recall, Speech Acoustics, Adult, Age Factors, Aged, Female, Humans, Male, Middle Aged, Young Adult
10.
J Exp Psychol Hum Percept Perform ; 50(4): 329-357, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38330329

ABSTRACT

Prior research has shown that visual information, such as a speaker's perceived race or ethnicity, prompts listeners to expect a specific sociophonetic pattern ("social priming"). Indeed, a picture of an East Asian face may facilitate perception of second language (L2) Mandarin Chinese-accented English but interfere with perception of first language- (L1-) accented English. The present study builds on this line of inquiry, addressing the relationship between social priming effects and implicit racial/ethnic associations for L1- and L2-accented speech. For L1-accented speech, we found no priming effects when comparing White versus East Asian or Latina primes. For L2- (Mandarin Chinese-) accented speech, however, transcription accuracy was slightly better following an East Asian prime than a White prime. Across all experiments, a relationship between performance and individual differences in implicit associations emerged, but in no cases did this relationship interact with the priming manipulation. Ultimately, exploring social priming effects with additional methodological approaches, and in different populations of listeners, will help to determine whether these effects operate differently in the context of L1- and L2-accented speech. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subjects
Multilingualism, Speech Perception, Humans, Individuality, Language, Speech, Ethnicity
11.
Psychon Bull Rev ; 31(1): 176-186, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37442872

ABSTRACT

Prior work in speech processing indicates that listening tasks with multiple speakers (as opposed to a single speaker) result in slower and less accurate processing. Notably, the trial-to-trial cognitive demands of switching between speakers or switching between accents have yet to be examined. We used pupillometry, a physiological index of cognitive load, to examine the demands of processing first (L1) and second (L2) language-accented speech when listening to sentences produced by the same speaker consecutively (no switch), a novel speaker of the same accent (within-accent switch), and a novel speaker with a different accent (across-accent switch). Inspired by research on sequential adjustments in cognitive control, we aimed to identify the cognitive demands of accommodating a novel speaker and accent by examining the trial-to-trial changes in pupil dilation during speech processing. Our results indicate that switching between speakers was more cognitively demanding than listening to the same speaker consecutively. Additionally, switching to a novel speaker with a different accent was more cognitively demanding than switching between speakers of the same accent. However, there was an asymmetry for across-accent switches, such that switching from an L1 to an L2 accent was more demanding than vice versa. Findings from the present study align with work examining multi-talker processing costs, and provide novel evidence that listeners dynamically adjust cognitive processing to accommodate speaker and accent variability. We discuss these novel findings in the context of an active control model and auditory streaming framework of speech processing.


Subjects
Speech Perception, Speech, Humans, Speech/physiology, Speech Perception/physiology, Language, Cognition/physiology
12.
Lang Speech ; : 238309231199245, 2023 Sep 29.
Article in English | MEDLINE | ID: mdl-37772514

ABSTRACT

Listeners use more than just acoustic information when processing speech. Social information, such as a speaker's perceived race or ethnicity, can also affect the processing of the speech signal, in some cases facilitating perception ("social priming"). We aimed to replicate and extend this line of inquiry, examining effects of multiple social primes (i.e., a Middle Eastern, White, or East Asian face, or a control silhouette image) on the perception of Mandarin Chinese-accented English and Arabic-accented English. By including uncommon priming combinations (e.g., a Middle Eastern prime for a Mandarin accent), we aimed to test the specificity of social primes: For example, can a Middle Eastern face facilitate perception of both Arabic-accented English and Mandarin-accented English? Contrary to our predictions, our results indicated no facilitative social priming effects for either of the second language (L2) accents. Results for our examination of specificity were mixed. Trends in the data indicated that the combination of an East Asian prime with an Arabic accent resulted in lower accuracy as compared with a White prime, but the combination of a Middle Eastern prime with a Mandarin accent did not (and may have actually benefited listeners to some degree). We conclude that the specificity of priming effects may depend on listeners' level of familiarity with a given accent and/or racial/ethnic group and that the mixed outcomes in the current work motivate further inquiries to determine whether social priming effects for L2-accented speech may be smaller than previously hypothesized and/or highly dependent on listener experience.

13.
JASA Express Lett ; 3(12), 2023 12 01.
Article in English | MEDLINE | ID: mdl-38059794

ABSTRACT

The present study examined whether race information about speakers can promote rapid and generalizable perceptual adaptation to second-language accent. First-language English listeners were presented with Cantonese-accented English sentences in speech-shaped noise during a training session with three intermixed talkers, followed by a test session with a novel (i.e., fourth) talker. Participants were assigned to view either three East Asian or three White faces during training, corresponding to each speaker. Results indicated no effect of the social priming manipulation on the training or test sessions, although both groups performed better at test than a control group.


Subjects
Speech Perception, Humans, Language, Noise, Speech, Control Groups
14.
Psychophysiology ; 60(7): e14256, 2023 07.
Article in English | MEDLINE | ID: mdl-36734299

ABSTRACT

Pupillometry has a rich history in the study of perception and cognition. One perennial challenge is that the magnitude of the task-evoked pupil response diminishes over the course of an experiment, a phenomenon we refer to as a fatigue effect. Reducing fatigue effects may improve sensitivity to task effects and reduce the likelihood of confounds due to systematic physiological changes over time. In this paper, we investigated the degree to which fatigue effects could be ameliorated by experimenter intervention. In Experiment 1, we assigned participants to one of three groups (no breaks; kinetic breaks, i.e., playing with toys but no social interaction; or chatting with a research assistant) and compared the pupil response across conditions. In Experiment 2, we additionally tested the effect of researcher observation. Only breaks including social interaction significantly reduced the fatigue of the pupil response across trials. However, in all conditions we found robust evidence for fatigue effects: that is, regardless of protocol, the task-evoked pupil response was substantially diminished (at least 60%) over the duration of the experiment. We account for the variance of fatigue effects in our pupillometry data using multiple common statistical modeling approaches (e.g., linear mixed-effects models of peak, mean, and baseline pupil diameters, as well as growth curve models of time-course data). We conclude that pupil attenuation is a predictable phenomenon that should be accommodated in our experimental designs and statistical models.
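
As a simplified illustration of the first modeling approach mentioned (a linear mixed-effects model of peak pupil diameter), the sketch below fits trial number, break condition, and their interaction with by-participant random intercepts and slopes. The simulated data, variable names, and model specification are assumptions for demonstration, not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trial-level data standing in for real measurements: one row per
# participant x trial, with peak pupil diameter, trial number, and condition.
rng = np.random.default_rng(1)
rows = []
for subj in range(30):
    condition = ["none", "kinetic", "social"][subj % 3]
    decay = {"none": -0.004, "kinetic": -0.004, "social": -0.002}[condition]
    for trial in range(1, 101):
        peak = 0.5 + decay * trial + rng.normal(0, 0.05)
        rows.append({"subject": subj, "condition": condition,
                     "trial": trial, "peak_pupil": peak})
df = pd.DataFrame(rows)
df["trial_c"] = df["trial"] - df["trial"].mean()  # center trial number

# Random intercept and trial slope per participant; the trial_c x condition
# interaction asks whether breaks change the rate of pupil attenuation.
model = smf.mixedlm("peak_pupil ~ trial_c * condition", data=df,
                    groups=df["subject"], re_formula="~trial_c")
print(model.fit().summary())
```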


Subjects
Fatigue, Pupil, Humans, Pupil/physiology, Cognition/physiology
15.
J Acoust Soc Am ; 131(2): 1449-64, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22352516

ABSTRACT

This study examined whether speech-on-speech masking is sensitive to variation in the degree of similarity between the target and the masker speech. Three experiments investigated whether speech-in-speech recognition varies across different background speech languages (English vs Dutch) for both English and Dutch targets, as well as across variation in the semantic content of the background speech (meaningful vs semantically anomalous sentences), and across variation in listener status vis-à-vis the target and masker languages (native, non-native, or unfamiliar). The results showed that the more similar the target speech is to the masker speech (e.g., same vs different language, same vs different levels of semantic content), the greater the interference on speech recognition accuracy. Moreover, the listener's knowledge of the target and the background language modulates the size of the release from masking. These factors had an especially strong effect on masking effectiveness in highly unfavorable listening conditions. Overall, this research provided evidence that the degree of target-masker similarity plays a significant role in speech-in-speech recognition. The results also give insight into how listeners assign their resources differently depending on whether they are listening to their first or second language.


Subjects
Linguistics, Perceptual Masking/physiology, Recognition (Psychology), Speech Perception/physiology, Acoustic Stimulation, England, Female, Humans, Illinois, Male, Multilingualism, Netherlands, Noise, Phonetics, Signal-to-Noise Ratio, Young Adult
16.
Atten Percept Psychophys ; 84(6): 2074-2086, 2022 Aug.
Article in English | MEDLINE | ID: mdl-34988904

ABSTRACT

Speech intelligibility is improved when the listener can see the talker in addition to hearing their voice. Notably, though, previous work has suggested that this "audiovisual benefit" for nonnative (i.e., foreign-accented) speech is smaller than the benefit for native speech, an effect that may be partially accounted for by listeners' implicit racial biases (Yi et al., 2013, The Journal of the Acoustical Society of America, 134[5], EL387-EL393.). In the present study, we sought to replicate these findings in a significantly larger sample of online participants. In a direct replication of Yi et al. (Experiment 1), we found that audiovisual benefit was indeed smaller for nonnative-accented relative to native-accented speech. However, our results did not support the conclusion that implicit racial biases, as measured with two types of implicit association tasks, were related to these differences in audiovisual benefit for native and nonnative speech. In a second experiment, we addressed a potential confound in the experimental design; to ensure that the difference in audiovisual benefit was caused by a difference in accent rather than a difference in overall intelligibility, we reversed the overall difficulty of each accent condition by presenting them at different signal-to-noise ratios. Even when native speech was presented at a much more difficult intelligibility level than nonnative speech, audiovisual benefit for nonnative speech remained poorer. In light of these findings, we discuss alternative explanations of reduced audiovisual benefit for nonnative speech, as well as methodological considerations for future work examining the intersection of social, cognitive, and linguistic processes.


Subjects
Racism, Speech Perception, Hearing, Humans, Linguistics, Speech Intelligibility
17.
Atten Percept Psychophys ; 84(5): 1772-1787, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35474415

ABSTRACT

The linguistic similarity hypothesis states that it is more difficult to segregate target and masker speech when they are linguistically similar. For example, recognition of English target speech should be more impaired by the presence of Dutch masking speech than Mandarin masking speech because Dutch and English are more linguistically similar than Mandarin and English. Across four experiments, English target speech was consistently recognized more poorly when presented in English masking speech than in silence, speech-shaped noise, or an unintelligible masker (i.e., Dutch or Mandarin). However, we found no evidence for graded masking effects: Dutch did not impair performance more than Mandarin in any experiment, despite 650 participants being tested. This general pattern was consistent when using both a cross-modal paradigm (in which target speech was lipread and maskers were presented aurally; Experiments 1a and 1b) and an auditory-only paradigm (in which both the targets and maskers were presented aurally; Experiments 2a and 2b). These findings suggest that the linguistic similarity hypothesis should be refined to reflect the existing evidence: There is greater release from masking when the masker language differs from the target speech than when it is the same as the target speech. However, evidence that unintelligible maskers impair speech identification to a greater extent when they are more linguistically similar to the target language remains elusive.


Subjects
Perceptual Masking, Speech Perception, Humans, Language, Linguistics, Speech
18.
Psychon Bull Rev ; 29(1): 268-280, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34405386

ABSTRACT

In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological "neighbors" with similar acoustic properties (e.g., "cap" vs. "cat"). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in quiet. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words with many or fewer neighbors, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
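
For readers unfamiliar with growth curve analysis, the sketch below shows one common way to construct an orthogonal polynomial basis over the analysis time bins before entering the terms as fixed and random effects in a mixed model. It is a generic illustration (the time range and polynomial degree are assumptions), not the authors' code.

```python
import numpy as np
import pandas as pd

def orthogonal_poly(x, degree):
    """Orthonormal polynomial basis (akin to R's poly()) via QR decomposition."""
    x = np.asarray(x, dtype=float)
    X = np.vander(x - x.mean(), degree + 1, increasing=True)
    Q, _ = np.linalg.qr(X)
    return Q[:, 1:]  # drop the constant column

# Hypothetical: pupil samples binned every 50 ms from 0 to 2500 ms.
time_bins = np.arange(0, 2550, 50)
basis = pd.DataFrame(orthogonal_poly(time_bins, degree=3),
                     columns=["ot1", "ot2", "ot3"])
basis["time"] = time_bins
# `basis` would then be merged with the trial-level pupil data, and the
# ot1-ot3 terms entered as fixed (and random) effects in a mixed model.
print(basis.head())
```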


Subjects
Speech Perception, Aged, Cognition, Humans, Speech Perception/physiology, Young Adult
19.
Article in English | MEDLINE | ID: mdl-35024541

ABSTRACT

The number of possible approaches to conducting and analyzing a research study, often referred to as researcher degrees of freedom, has been increasingly under scrutiny as a challenge to the reproducibility of experimental results. Here we focus on the specific instance of time window selection for time series data. As an example, we use data from a visual world eye tracking paradigm in which participants heard a word and were instructed to click on one of four pictures corresponding to the target (e.g., "Click on the hat"). We examined statistical models for a range of start times following the beginning of the carrier phrase, and for each start time a range of window lengths, resulting in 8281 unique time windows. For each time window we ran the same logistic linear mixed effects model, including effects of time, age, noise, and word frequency on an orthogonalized polynomial basis set. Comparing results across these time ranges shows substantial changes in both parameter estimates and p values, even within intuitively "reasonable" boundaries. In some cases varying the window selection in the range of 100-200 ms caused parameter estimates to change from positive to negative. Rather than rush to provide specific recommendations for time window selection (which differs across studies), we advocate for transparency regarding time window selection and awareness of the effects this choice may have on results. Preregistration and multiverse model exploration are two complementary strategies to help mitigate bias introduced by any particular time window choice.
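
The window sweep itself can be expressed compactly. The sketch below iterates over candidate start times and window lengths and records an outcome for each specification, in the spirit of the multiverse exploration described above; the data are simulated, the model fit is a placeholder, and the grid values are assumptions (although a 91 x 91 grid like this one does yield the 8281 windows reported).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated long-format eye-tracking data: one row per sample, with time (ms)
# from word onset and whether the target picture was fixated on that sample.
data = pd.DataFrame({
    "time": np.tile(np.arange(0, 2000, 50), 200),
    "target_fix": rng.integers(0, 2, 200 * 40),
})

def fit_window_model(subset):
    """Placeholder for the logistic mixed-effects fit run on one window;
    here it simply returns the mean fixation proportion as a stand-in."""
    return {"estimate": subset["target_fix"].mean()}

# Sweep candidate window onsets and durations (91 x 91 = 8281 specifications).
start_times = np.arange(0, 901, 10)
lengths = np.arange(300, 1201, 10)
results = []
for start in start_times:
    for length in lengths:
        window = data[(data["time"] >= start) & (data["time"] < start + length)]
        results.append({"start": start, "length": length,
                        **fit_window_model(window)})
results = pd.DataFrame(results)
# Inspecting how the estimates vary over the start x length grid shows how
# strongly the outcome depends on the (often arbitrary) time window choice.
print(results.head())
```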

20.
Cogn Res Princ Implic ; 6(1): 49, 2021 07 18.
Article in English | MEDLINE | ID: mdl-34275022

ABSTRACT

Identifying speech requires that listeners make rapid use of fine-grained acoustic cues, a process that is facilitated by being able to see the talker's face. Face masks present a challenge to this process because they can both alter acoustic information and conceal the talker's mouth. Here, we investigated the degree to which different types of face masks and noise levels affect speech intelligibility and subjective listening effort for young (N = 180) and older (N = 180) adult listeners. We found that in quiet, mask type had little influence on speech intelligibility relative to speech produced without a mask for both young and older adults. However, with the addition of moderate (-5 dB SNR) and high (-9 dB SNR) levels of background noise, intelligibility dropped substantially for all types of face masks in both age groups. Across noise levels, transparent face masks and cloth face masks with filters impaired performance the most, and surgical face masks had the smallest influence on intelligibility. Participants also rated speech produced with a face mask as more effortful than unmasked speech, particularly in background noise. Although young and older adults were similarly affected by face masks and noise in terms of intelligibility and subjective listening effort, older adults showed poorer intelligibility overall and rated the speech as more effortful to process relative to young adults. This research will help individuals make more informed decisions about which types of masks to wear in various communicative settings.
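
As background on the noise manipulation (-5 and -9 dB SNR), the sketch below shows a standard recipe for scaling a noise track so that it mixes with a speech signal at a target signal-to-noise ratio. It is a generic illustration, not the study's stimulus-preparation pipeline, and the stand-in signals are placeholders.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech + noise has the requested SNR in dB,
    based on RMS levels computed over the speech segment."""
    noise = noise[: len(speech)]                      # trim noise to speech length
    rms_speech = np.sqrt(np.mean(speech ** 2))
    rms_noise = np.sqrt(np.mean(noise ** 2))
    target_rms = rms_speech / (10 ** (snr_db / 20))   # desired noise RMS
    mix = speech + noise * (target_rms / rms_noise)
    return mix / np.max(np.abs(mix))                  # normalize to avoid clipping

# Stand-in signals (a real pipeline would load recorded speech and noise):
fs = 16000
t = np.arange(fs) / fs
speech = 0.1 * np.sin(2 * np.pi * 220 * t)            # placeholder "speech"
noise = np.random.randn(fs)
mixture = mix_at_snr(speech, noise, snr_db=-5)
```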


Subjects
Masks, Speech Perception, Aged, Auditory Perception, Humans, Noise, Speech Intelligibility, Young Adult