Results 1 - 20 of 6,905

1.
Proc Natl Acad Sci U S A ; 121(25): e2405588121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38861607

ABSTRACT

Many animals can extract useful information from the vocalizations of other species. Neuroimaging studies have identified areas sensitive to conspecific vocalizations in the cerebral cortex of primates, but how these areas process heterospecific vocalizations remains unclear. Using fMRI-guided electrophysiology, we recorded the spiking activity of individual neurons in the anterior temporal voice patches of two macaques while they listened to complex sounds, including vocalizations from several species. In addition to cells selective for conspecific macaque vocalizations, we identified an unsuspected subpopulation of neurons with strong selectivity for the human voice that is not merely explained by the spectral or temporal structure of the sounds. The auditory representational geometry implemented by these neurons was strongly related to that measured in the human voice areas with neuroimaging and only weakly related to low-level acoustical structure. These findings provide new insights into the neural mechanisms involved in auditory expertise and the evolution of communication systems in primates.
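To make the notion of "strong selectivity for human voice" concrete, here is a minimal Python sketch of a category-selectivity index computed from trial-wise firing rates. The data and the specific index are hypothetical illustrations, not the analysis used in the study.

```python
import numpy as np

def selectivity_index(pref_rates, nonpref_rates):
    """Contrast between mean responses to a preferred category (e.g. human
    voice) and all other sound categories; ranges from -1 to 1."""
    pref, nonpref = np.mean(pref_rates), np.mean(nonpref_rates)
    return (pref - nonpref) / (pref + nonpref + 1e-12)

# Hypothetical spike rates (spikes/s) for one neuron across trials.
rng = np.random.default_rng(0)
human_voice = rng.poisson(20, size=40)    # responses to human voice stimuli
other_sounds = rng.poisson(8, size=120)   # macaque calls and other sounds
print(f"selectivity index: {selectivity_index(human_voice, other_sounds):.2f}")
```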


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Neurons , Vocalization, Animal , Voice , Animals , Humans , Neurons/physiology , Voice/physiology , Magnetic Resonance Imaging/methods , Vocalization, Animal/physiology , Auditory Perception/physiology , Male , Macaca mulatta , Brain/physiology , Acoustic Stimulation , Brain Mapping/methods
2.
Proc Natl Acad Sci U S A ; 121(26): e2318361121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38889147

ABSTRACT

When listeners hear a voice, they rapidly form a complex first impression of who the person behind that voice might be. We characterize how these multivariate first impressions from voices emerge over time across different levels of abstraction using electroencephalography and representational similarity analysis. We find that for eight perceived physical (gender, age, and health), trait (attractiveness, dominance, and trustworthiness), and social characteristics (educatedness and professionalism), representations emerge early (~80 ms after stimulus onset), with voice acoustics contributing to those representations between ~100 ms and 400 ms. While impressions of person characteristics are highly correlated, we find evidence for highly abstracted, independent representations of individual person characteristics. These abstracted representations emerge gradually over time. That is, representations of physical characteristics (age, gender) arise early (from ~120 ms), while representations of some trait and social characteristics emerge later (~360 ms onward). The findings align with recent theoretical models and shed light on the computations underpinning person perception from voices.
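As a hedged illustration of time-resolved representational similarity analysis of the kind described above, the sketch below correlates a neural representational dissimilarity matrix (RDM), computed from EEG channel patterns at each time point, with a model RDM. All array shapes and data are synthetic assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# Hypothetical data: EEG patterns (conditions x channels x time points)
# and a model RDM encoding, e.g., perceived-trait dissimilarities.
rng = np.random.default_rng(1)
n_cond, n_chan, n_time = 16, 64, 200
eeg = rng.standard_normal((n_cond, n_chan, n_time))
model_rdm = pdist(rng.standard_normal((n_cond, 1)))  # condition dissimilarities

# Correlate the neural RDM with the model RDM at every time point.
rsa_timecourse = np.empty(n_time)
for t in range(n_time):
    neural_rdm = pdist(eeg[:, :, t], metric="correlation")
    rsa_timecourse[t], _ = spearmanr(neural_rdm, model_rdm)
```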


Subject(s)
Auditory Perception , Brain , Electroencephalography , Voice , Humans , Male , Female , Voice/physiology , Adult , Brain/physiology , Auditory Perception/physiology , Young Adult , Social Perception
3.
Proc Natl Acad Sci U S A ; 120(17): e2218367120, 2023 04 25.
Article in English | MEDLINE | ID: mdl-37068255

ABSTRACT

Italian is sexy, German is rough, but how about Páez or Tamil? Are there universal phonesthetic judgments based purely on the sound of a language, or are preferences attributable to language-external factors such as familiarity and cultural stereotypes? We collected 2,125 recordings of 228 languages from 43 language families, including 5 to 11 speakers of each language to control for personal vocal attractiveness, and asked 820 native speakers of English, Chinese, or Semitic languages to indicate how much they liked these languages. We found a strong preference for languages perceived as familiar, even when they were misidentified, a variety of cultural-geographical biases, and a preference for breathy female voices. The scores by English, Chinese, and Semitic speakers were weakly correlated, indicating some cross-cultural concordance in phonesthetic judgments, but overall there was little consensus between raters about which languages sounded more beautiful, and average scores per language remained within ±2% after accounting for confounds related to familiarity and voice quality of individual speakers. None of the tested phonetic features (the presence of specific phonemic classes, the overall size of the phonetic repertoire, its typicality and similarity to the listener's first language) were robust predictors of pleasantness ratings, apart from a possible slight preference for nontonal languages. While population-level phonesthetic preferences may exist, their contribution to perceptual judgments of short speech recordings appears to be minor compared to purely personal preferences, the speaker's voice quality, and perceived resemblance to other languages culturally branded as beautiful or ugly.


Subject(s)
Speech Perception , Voice , Humans , Female , India , Language , Sound , Speech
4.
Cereb Cortex ; 34(1)2024 01 14.
Article in English | MEDLINE | ID: mdl-38142293

ABSTRACT

Selective attention to one speaker in multi-talker environments can be affected by the acoustic and semantic properties of speech. One highly ecological feature of speech that has the potential to assist in selective attention is voice familiarity. Here, we tested how voice familiarity interacts with selective attention by measuring the neural speech-tracking response to both target and non-target speech in a dichotic listening "Cocktail Party" paradigm. We recorded magnetoencephalography (MEG) from n = 33 participants, who were presented with concurrent narratives in two different voices and instructed to pay attention to one ear ("target") and ignore the other ("non-target"). Participants were familiarized with one of the voices during the week prior to the experiment, rendering this voice familiar to them. Using multivariate speech-tracking analysis we estimated the neural responses to both stimuli and replicated their well-established modulation by selective attention. Importantly, speech-tracking was also affected by voice familiarity, showing an enhanced response for target speech and a reduced response for non-target speech in the contralateral hemisphere when these were presented in a familiar vs. an unfamiliar voice. These findings offer valuable insight into how voice familiarity, and by extension auditory semantics, interacts with goal-driven attention and facilitates perceptual organization and speech processing in noisy environments.
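A minimal sketch of one common way to quantify neural speech tracking, assuming an encoding-model approach in which a temporal response function is estimated by time-lagged ridge regression from the speech envelope to a MEG sensor. The data, lag range, and regularization are illustrative assumptions; the authors' multivariate pipeline may differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

def lag_matrix(envelope, lags):
    """Stack time-shifted copies of the speech envelope as predictors."""
    X = np.zeros((len(envelope), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = envelope[:len(envelope) - lag]
        else:
            X[:lag, j] = envelope[-lag:]
    return X

# Hypothetical data: speech envelope and one MEG sensor, 100 Hz sampling.
rng = np.random.default_rng(2)
env = np.abs(rng.standard_normal(6000))
meg = np.convolve(env, np.hanning(20), mode="same") + rng.standard_normal(6000)

lags = np.arange(0, 40)  # 0-400 ms of time lags at 100 Hz
trf = Ridge(alpha=1.0).fit(lag_matrix(env, lags), meg)
prediction_r = np.corrcoef(trf.predict(lag_matrix(env, lags)), meg)[0, 1]
print(f"speech-tracking (encoding) correlation: {prediction_r:.2f}")
```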


Subject(s)
Speech Perception , Voice , Humans , Speech , Speech Perception/physiology , Recognition, Psychology/physiology , Semantics
5.
J Neurosci ; 43(14): 2579-2596, 2023 04 05.
Article in English | MEDLINE | ID: mdl-36859308

ABSTRACT

Many social animals can recognize other individuals by their vocalizations. This requires a memory system capable of mapping incoming acoustic signals to one of many known individuals. Using the zebra finch, a social songbird that uses songs and distance calls to communicate individual identity (Elie and Theunissen, 2018), we tested the role of two cortical-like brain regions in a vocal recognition task. We found that the rostral region of the caudomedial nidopallium (NCM), a secondary auditory region of the avian pallium, was necessary for maintaining auditory memories for conspecific vocalizations in both male and female birds, whereas HVC (used as a proper name), a premotor area that gates auditory input into the vocal motor and song learning pathways in male birds (Roberts and Mooney, 2013), was not. Both NCM and HVC have previously been implicated in processing the tutor song in the context of song learning (Sakata and Yazaki-Sugiyama, 2020). Our results suggest that NCM might store not only songs as templates for future vocal imitation but also songs and calls for perceptual discrimination of vocalizers in both male and female birds. NCM could therefore operate as a site for auditory memories for vocalizations used in various facets of communication. We also observed that new auditory memories could be acquired without intact HVC or NCM, but that for these new memories NCM lesions caused deficits in either memory capacity or auditory discrimination. These results suggest that the high-capacity memory functions of the avian pallial auditory system depend on NCM. SIGNIFICANCE STATEMENT: Many aspects of vocal communication require the formation of auditory memories. Voice recognition, for example, requires a memory for vocalizers to identify acoustical features. In both birds and primates, the locus and neural correlates of these high-level memories remain poorly described. Previous work suggests that this memory formation is mediated by high-level sensory areas, not traditional memory areas such as the hippocampus. Using lesion experiments, we show that one secondary auditory brain region in songbirds that had previously been implicated in storing song memories for vocal imitation is also implicated in storing vocal memories for individual recognition. The role of the neural circuits in this region in interpreting the meaning of communication calls should be investigated in the future.


Subject(s)
Finches , Vocalization, Animal , Animals , Male , Female , Acoustic Stimulation , Learning , Brain , Auditory Perception
6.
Dev Biol ; 500: 10-21, 2023 08.
Article in English | MEDLINE | ID: mdl-37230380

ABSTRACT

Laryngeal birth defects are considered rare, but they can be life-threatening conditions. The BMP4 gene plays an important role in organ development and tissue remodeling throughout life. Here we examined its role in laryngeal development, complementing similar efforts for the lung, pharynx, and cranial base. Our goal was to determine how different imaging techniques contribute to a better understanding of the embryonic anatomy of the normal and diseased larynx in small specimens. Contrast-enhanced micro-CT images of embryonic larynx tissue from a mouse model with Bmp4 deletion, informed by histology and whole-mount immunofluorescence, were used to reconstruct the laryngeal cartilaginous framework in three dimensions. Laryngeal defects included laryngeal cleft, laryngeal asymmetry, ankylosis, and atresia. The results implicate BMP4 in laryngeal development and show that 3D reconstruction of laryngeal elements provides a powerful approach to visualizing laryngeal defects, thereby overcoming the shortcomings of 2D histological sectioning and whole-mount immunofluorescence.


Subject(s)
Larynx , Animals , Mice , Pharynx , Signal Transduction
7.
Eur J Neurosci ; 2024 May 22.
Article in English | MEDLINE | ID: mdl-38777332

ABSTRACT

Although the attractiveness of voices plays an important role in social interactions, it is unclear how voice attractiveness and social interest influence social decision-making. Here, we combined the ultimatum game with the recording of event-related brain potentials (ERPs) and examined the effect of attractive versus unattractive voices of the proposers, expressing positive versus negative social interest ("I like you" vs. "I don't like you"), on the acceptance of the proposal. Overall, fair offers were accepted at significantly higher rates than unfair offers, and high voice attractiveness increased acceptance rates for all proposals. In the ERPs in response to the voices, attractiveness and expressed social interest yielded early additive effects on the N1 component, followed by interactions in the subsequent P2, P3, and N400 components. More importantly, unfair offers elicited a larger medial frontal negativity (MFN) than fair offers, but only when the proposer's voice was unattractive or when the voice carried positive social interest. These results suggest that both voice attractiveness and social interest moderate social decision-making and that there is a similar "beauty premium" for voices as for faces.

8.
BMC Med ; 22(1): 121, 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38486293

ABSTRACT

BACKGROUND: Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), but current knowledge has substantiated both altered and intact recognition of emotional prosody. Here, a Bayesian framework of perception is considered, suggesting that the oversampling of sensory evidence would impair perception within highly variable environments, whereas reliable hierarchical structures for spectral and temporal cues would foster emotion discrimination by autistics. METHODS: Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while listening to speech uttered by (a) human voices or (b) synthesized voices characterized by reduced volatility and variability of acoustic environments. The assessment of mechanisms for perception was extended to the visual domain by analyzing behavioral accuracy within a non-social task in which the dynamics of precision weighting between bottom-up evidence and top-down inferences were emphasized. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistics. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within-between analysis of variance was conducted to assess the effects of group (autism versus typical development), voice, emotions, and interactions between factors. A Bayesian analysis was implemented to quantify the evidence in favor of the null hypothesis in case of non-significance. Post hoc comparisons were corrected for multiple testing. RESULTS: Autistic children presented impaired emotion differentiation while listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Divergent neural patterns were observed between neurotypicals and autistics, emphasizing different mechanisms for perception. Accordingly, behavioral measurements on the visual task were consistent with an over-precision ascribed to environmental variability (sensory processing) that weakened performance. Unlike autistic children, neurotypicals could differentiate the emotions induced by all voices. CONCLUSIONS: This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. Neurobiological insights into the processing of emotional prosodies emphasize the potential of acoustically modified emotional prosodies to improve emotion differentiation by autistics. TRIAL REGISTRATION: BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.
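For orientation, the sketch below shows one conventional way to compute an event-related spectral perturbation (ERSP): trial-averaged time-frequency power expressed in dB relative to a pre-stimulus baseline. Sampling rate, epoch layout, and spectrogram parameters are hypothetical and not taken from the study.

```python
import numpy as np
from scipy.signal import spectrogram

# Hypothetical epoched EEG: trials x samples, 250 Hz, first second treated
# as the pre-stimulus baseline.
rng = np.random.default_rng(3)
fs, n_trials, n_samples = 250, 60, 1000
epochs = rng.standard_normal((n_trials, n_samples))

# Trial-averaged time-frequency power.
f, t, sxx = spectrogram(epochs, fs=fs, nperseg=128, noverlap=96, axis=-1)
power = sxx.mean(axis=0)  # average over trials -> frequencies x time bins

# ERSP: power change relative to the pre-stimulus baseline, in dB.
baseline = power[:, t < 1.0].mean(axis=1, keepdims=True)
ersp_db = 10 * np.log10(power / baseline)
```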


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Child , Humans , Autistic Disorder/diagnosis , Speech , Autism Spectrum Disorder/diagnosis , Bayes Theorem , Emotions/physiology , Acoustics
9.
J Exp Zool B Mol Dev Evol ; 342(4): 342-349, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38591232

ABSTRACT

Wolves howl and dogs bark; both are able to produce variants of either vocalization, but there is a distinct difference in usage between the wild and the domesticated form. Other domesticates also show distinct changes to their vocal output: domestic cats retain meows, a distinctly subadult trait in wildcats. Such differences in acoustic output are well known, but the causal mechanisms remain little studied. Potential links between domestication and vocal output are intriguing for multiple reasons and offer a unique opportunity to explore a prominent hypothesis in domestication research: the neural crest/domestication syndrome hypothesis. This hypothesis suggests that in the early stages of domestication, selection for tame individuals decreased neural crest cell (NCC) proliferation and migration, which led to a downregulation of the sympathetic arousal system and hence reduced fear and reactive aggression. NCCs are a transitory stem cell population, crucial during embryonic development, that gives rise to diverse tissue types and organ systems. One of these neural-crest-derived systems is the larynx, the main vocal source in mammals. We argue that this connection between NCCs and the larynx provides a powerful test of the predictions of the neural crest/domestication syndrome hypothesis, discriminating its predictions from those of other current hypotheses concerning domestication.


Subject(s)
Domestication , Larynx , Neural Crest , Vocalization, Animal , Animals , Animals, Domestic , Larynx/physiology , Larynx/anatomy & histology , Neural Crest/physiology , Vocalization, Animal/physiology
10.
Psychol Sci ; 35(5): 543-557, 2024 May.
Article in English | MEDLINE | ID: mdl-38620057

ABSTRACT

Recently, gender-ambiguous (nonbinary) voices have been added to voice assistants to combat gender stereotypes and foster inclusion. However, if people react negatively to such voices, these laudable efforts may be counterproductive. In five preregistered studies (N = 3,684 adult participants) we found that people do react negatively, rating products described by narrators with gender-ambiguous voices less favorably than when they are described by clearly male or female narrators. The voices create a feeling of unease, or social disfluency, that affects evaluations of the products being described. These effects are best explained by low familiarity with voices that sound ambiguous. Thus, initial negative reactions can be overcome with more exposure.


Subject(s)
Voice , Humans , Female , Male , Adult , Young Adult , Stereotyping , Social Perception , Gender Identity , Adolescent , Middle Aged
11.
Psychol Sci ; 35(3): 250-262, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38289294

ABSTRACT

Fundamental frequency (fo) is the most perceptually salient vocal acoustic parameter, yet little is known about how its perceptual influence varies across societies. We examined how fo affects key social perceptions and how socioecological variables modulate these effects in 2,647 adult listeners sampled from 44 locations across 22 nations. Low male fo increased men's perceptions of formidability and prestige, especially in societies with higher homicide rates and greater relational mobility in which male intrasexual competition may be more intense and rapid identification of high-status competitors may be exigent. High female fo increased women's perceptions of flirtatiousness where relational mobility was lower and threats to mating relationships may be greater. These results indicate that the influence of fo on social perceptions depends on socioecological variables, including those related to competition for status and mates.
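As a rough illustration of the acoustic parameter at issue, the sketch below estimates fundamental frequency (fo) from a short voiced frame by autocorrelation. The synthetic signal and search range are assumptions; the study's own fo measurements presumably used standard phonetic tooling.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=75.0, fmax=400.0):
    """Autocorrelation-based fo estimate for one voiced frame (Hz)."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # plausible pitch-period lags
    peak_lag = lo + np.argmax(ac[lo:hi])
    return fs / peak_lag

# Synthetic "voice": 120 Hz fundamental plus harmonics and a little noise.
fs = 16000
t = np.arange(0, 0.04, 1 / fs)
frame = sum(np.sin(2 * np.pi * 120 * k * t) / k for k in (1, 2, 3))
frame += 0.05 * np.random.default_rng(4).standard_normal(len(t))
print(f"estimated fo: {estimate_f0(frame, fs):.1f} Hz")  # ~120 Hz
```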


Subject(s)
Voice , Adult , Humans , Male , Female , Homicide , Social Perception , Sexual Partners
12.
Psychol Med ; 54(3): 569-581, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37779256

ABSTRACT

BACKGROUND: Inducing hallucinations under controlled experimental conditions in non-hallucinating individuals represents a novel research avenue oriented toward understanding complex hallucinatory phenomena while avoiding confounds observed in patients. Auditory-verbal hallucinations (AVH) are one of the most common and distressing psychotic symptoms, whose etiology remains largely unknown. Two prominent accounts portray AVH either as a deficit in auditory-verbal self-monitoring or as a result of overly strong perceptual priors. METHODS: In order to test both theoretical models and evaluate their potential integration, we developed a robotic procedure able to induce self-monitoring perturbations (consisting of sensorimotor conflicts between poking movements and the corresponding tactile feedback) and a perceptual prior associated with otherness sensations (i.e., feeling the presence of a non-existing other person). RESULTS: Here, in two independent studies, we show that this robotic procedure led to AVH-like phenomena in healthy individuals, quantified as an increase in false alarm rate in a voice detection task. Robotically induced AVH-like sensations were further associated with delusional ideation and with both AVH accounts. Specifically, a condition with stronger sensorimotor conflicts induced more AVH-like sensations (self-monitoring), while, in the otherness-related experimental condition, there were more AVH-like sensations when participants were detecting other-voice stimuli compared to self-voice stimuli (strong priors). CONCLUSIONS: By demonstrating an experimental procedure able to induce AVH-like sensations in non-hallucinating individuals, we shed new light on AVH phenomenology, thereby integrating the self-monitoring and strong-priors accounts.


Subject(s)
Psychotic Disorders , Voice , Humans , Hallucinations/etiology , Psychotic Disorders/diagnosis , Emotions
13.
Cerebellum ; 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38285133

ABSTRACT

Dysarthria is disabling in persons with degenerative ataxia. There is limited evidence for speech therapy interventions. In this pilot study, we used the Voice trainer app, which was originally developed for patients with Parkinson's disease, as a feedback tool for vocal control. We hypothesized that patients with ataxic dysarthria would benefit from the Voice trainer app to better control their loudness and pitch, resulting in a lower speaking rate and better intelligibility. This intervention study consisted of five 30-minute therapy sessions within 3 weeks using the principles of the Pitch Limiting Voice Treatment. Patients received real-time visual feedback on loudness and pitch during the exercises. In addition, they were encouraged to practice at home or to use the Voice trainer in daily life. We used observer-rated and patient-rated outcome measures. The primary outcome measure was intelligibility, as measured by the Dutch sentence intelligibility test. Twenty-one of the 25 included patients with degenerative ataxia completed the therapy. We found no statistically significant improvement in intelligibility (p = .56). However, after the intervention, patients spoke more slowly (p = .03) and pause durations were longer (p < .001). The patients were satisfied with using the app. At the group level, we found no evidence for an effect of the Voice trainer app on intelligibility in degenerative ataxia. Because of the heterogeneity of ataxic dysarthria, a more tailor-made rather than generic intervention seems warranted.

14.
Eur J Neurol ; : e16343, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38780314

ABSTRACT

The European Federation of Neurological Associations (EFNA) brings together European umbrella organizations of pan-European neurological patient advocacy groups (www.efna.net) and strives to improve the quality of life of people living with neurological conditions and to work towards relieving the immense social and economic burden on patients, carers and society in general. This article provides an overview of EFNA's activities and achievements over the past two decades, the evolution of patient advocacy during those years, and the increased role and impact of the European patient voice in the neurological arena.

15.
J Biomed Inform ; : 104669, 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38880237

ABSTRACT

BACKGROUND: Studies confirm that significant biases exist in online recommendation platforms, exacerbating pre-existing disparities and leading to less-than-optimal outcomes for underrepresented demographics. We study issues of bias in inclusion and representativeness in the context of healthcare information disseminated via videos on the YouTube social media platform, a widely used online channel for multimedia-rich information. With one in three US adults using the Internet to learn about a health concern, it is critical to assess inclusivity and representativeness regarding how health information is disseminated by digital platforms such as YouTube. METHODS: Leveraging methods from fair machine learning (ML), natural language processing, and voice and facial recognition, we examine the inclusivity and representativeness of video content presenters using a large corpus of videos and their metadata on a chronic condition (diabetes) extracted from the YouTube platform. Regression models are used to determine whether presenter demographics impact video popularity, measured by the video's average daily view count. A video that generates a higher view count is considered to be more popular. RESULTS: The voice and facial recognition methods predicted the gender and race of the presenter with reasonable success. Gender is predicted through voice recognition (accuracy = 78%, AUC = 76%), while gender and race are predicted using facial recognition (accuracy = 93%, AUC = 92% and accuracy = 82%, AUC = 80%, respectively). The gender of the presenter is more significant for video views only when the face of the presenter is not visible, and videos with male presenters and no face visibility have a positive relationship with view counts. Furthermore, videos with white and male presenters have a positive influence on view counts, while videos with female and non-white presenters also show high view counts. CONCLUSION: Presenters' demographics do influence the average daily view count of videos viewed on social media platforms, as shown by the advanced voice and facial recognition algorithms used for assessing the inclusion and representativeness of the video content. Future research can explore short videos and those at the channel level, because the popularity of the channel name and the number of videos associated with that channel also influence view counts.
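A minimal sketch of the kind of regression described in the METHODS, assuming a log-transformed daily view count modeled on predicted presenter attributes with an interaction for face visibility. Variable names and the synthetic data are hypothetical; the study's actual model specification is not given in the abstract.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for the video-level data described above.
rng = np.random.default_rng(5)
n = 500
male = rng.integers(0, 2, n)          # predicted presenter gender (1 = male)
white = rng.integers(0, 2, n)         # predicted presenter race (1 = white)
face_visible = rng.integers(0, 2, n)  # whether a face is visible in the video
log_daily_views = 3 + 0.2 * male + 0.1 * white + rng.standard_normal(n)

# OLS of log average daily views on presenter attributes plus an interaction
# capturing male presenters whose face is not visible.
X = sm.add_constant(np.column_stack([male, white, face_visible,
                                     male * (1 - face_visible)]))
model = sm.OLS(log_daily_views, X).fit()
print(model.params)  # coefficients: const, male, white, face, male x no-face
```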

16.
Cereb Cortex ; 33(4): 1170-1185, 2023 02 07.
Article in English | MEDLINE | ID: mdl-35348635

ABSTRACT

Voice signaling is integral to human communication, and a cortical voice area seemed to support the discrimination of voices from other auditory objects. This large cortical voice area in the auditory cortex (AC) was suggested to process voices selectively, but its functional differentiation remained elusive. We used neuroimaging while humans processed voices and nonvoice sounds, and artificial sounds that mimicked certain voice sound features. First and surprisingly, specific auditory cortical voice processing beyond basic acoustic sound analyses is only supported by a very small portion of the originally described voice area in higher-order AC located centrally in superior Te3. Second, besides this core voice processing area, large parts of the remaining voice area in low- and higher-order AC only accessorily process voices and might primarily pick up nonspecific psychoacoustic differences between voices and nonvoices. Third, a specific subfield of low-order AC seems to specifically decode acoustic sound features that are relevant but not exclusive for voice detection. Taken together, the previously defined voice area might have been overestimated since cortical support for human voice processing seems rather restricted. Cortical voice processing also seems to be functionally more diverse and embedded in broader functional principles of the human auditory system.


Subject(s)
Auditory Cortex , Voice , Humans , Acoustic Stimulation/methods , Auditory Perception , Sound , Magnetic Resonance Imaging/methods
17.
Cereb Cortex ; 33(3): 709-728, 2023 01 05.
Article in English | MEDLINE | ID: mdl-35296892

ABSTRACT

During social interactions, speakers signal information about their emotional state through their voice, which is known as emotional prosody. Little is known regarding the precise brain systems underlying emotional prosody decoding in children and whether accurate neural decoding of these vocal cues is linked to social skills. Here, we address critical gaps in the developmental literature by investigating neural representations of prosody and their links to behavior in children. Multivariate pattern analysis revealed that representations in the bilateral middle and posterior superior temporal sulcus (STS) divisions of voice-sensitive auditory cortex decode emotional prosody information in children. Crucially, emotional prosody decoding in middle STS was correlated with standardized measures of social communication abilities; more accurate decoding of prosody stimuli in the STS was predictive of greater social communication abilities in children. Moreover, social communication abilities were specifically related to decoding sadness, highlighting the importance of tuning in to negative emotional vocal cues for strengthening social responsiveness and functioning. Findings bridge an important theoretical gap by showing that the ability of the voice-sensitive cortex to detect emotional cues in speech is predictive of a child's social skills, including the ability to relate and interact with others.
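For readers unfamiliar with multivariate pattern analysis, the sketch below shows a generic cross-validated decoding of emotion categories from synthetic ROI voxel patterns using scikit-learn. It is an illustrative stand-in under stated assumptions, not the authors' decoding pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: trial-wise voxel patterns from an STS ROI, labeled by
# emotional prosody category (e.g. happy / sad / angry / neutral).
rng = np.random.default_rng(6)
n_trials, n_voxels, n_classes = 160, 300, 4
labels = np.repeat(np.arange(n_classes), n_trials // n_classes)
patterns = rng.standard_normal((n_trials, n_voxels)) + labels[:, None] * 0.1

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
decoding_acc = cross_val_score(clf, patterns, labels, cv=5).mean()
print(f"cross-validated decoding accuracy: {decoding_acc:.2f}")  # chance = 0.25
```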


Subject(s)
Auditory Cortex , Speech Perception , Voice , Humans , Child , Social Skills , Magnetic Resonance Imaging , Emotions , Communication
18.
Cereb Cortex ; 33(13): 8620-8632, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37118893

ABSTRACT

Sentence oral reading requires not only a coordinated effort across visual, articulatory, and cognitive processes but also presupposes a top-down influence from linguistic knowledge onto visual-motor behavior. Despite a gradual recognition of a predictive coding effect in this process, there is currently a lack of comprehensive demonstration of the time-varying brain dynamics that underlie the oral reading strategy. To address this, our study used a multimodal approach, combining real-time recording of electroencephalography, eye movements, and speech with a comprehensive examination of regional, inter-regional, sub-network, and whole-brain responses. Our study identified the top-down predictive effect with a phrase-grouping phenomenon in the fixation interval and eye-voice span. This effect was associated with delta- and theta-band synchronization in the prefrontal, anterior temporal, and inferior frontal lobes. We also observed early activation of the cognitive control network and its recurrent interactions with the visual-motor networks at the phrase rate. Finally, our study emphasizes the importance of cross-frequency coupling as a promising neural realization of hierarchical sentence structuring and calls for further investigation.
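Cross-frequency coupling, mentioned in the last sentence, is often quantified with a phase-amplitude modulation index; the sketch below computes a Canolty-style mean-vector-length index on a synthetic signal, using a generic theta-gamma example rather than the delta/theta phrase-rate coupling studied here. All parameters are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, fs, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Synthetic signal: 5 Hz (theta) phase modulating 40 Hz (gamma) amplitude.
rng = np.random.default_rng(7)
fs = 500
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 5 * t)
signal = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 40 * t)
signal += 0.2 * rng.standard_normal(len(t))

# Mean-vector-length modulation index for phase-amplitude coupling.
phase = np.angle(hilbert(bandpass(signal, fs, 4, 7)))   # low-frequency phase
amp = np.abs(hilbert(bandpass(signal, fs, 35, 45)))     # high-frequency amplitude
modulation_index = np.abs(np.mean(amp * np.exp(1j * phase)))
print(f"modulation index: {modulation_index:.3f}")
```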


Subject(s)
Language , Reading , Electroencephalography , Brain/physiology , Linguistics
19.
Support Care Cancer ; 32(6): 338, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38730019

ABSTRACT

BACKGROUND: Since the onset of the pandemic, breast cancer (BC) services have been disrupted in most countries. The purpose of this qualitative study is to explore the unmet needs, patient priorities, and recommendations for improving BC healthcare post-pandemic for women with BC and to understand how they may vary based on social determinants of health (SDH), in particular socio-economic status (SES). METHODS: Thirty-seven women, who were purposively sampled based on SDH and previously interviewed about the impact of COVID-19 on BC, were invited to take part in follow-up semi-structured qualitative interviews in early 2023. The interviews explored their perspectives of BC care since the easing of COVID-19 government restrictions, including unmet needs, patient priorities, and recommendations specific to BC care. Thematic analysis was conducted to synthesize each topic narratively with corresponding sub-themes. Additionally, variation by SDH was analyzed within each sub-theme. RESULTS: Twenty-eight women (mean age = 61.7 years, standard deviation (SD) = 12.3) participated in interviews (response rate = 76%). Thirty-nine percent (n = 11) of women were categorized as high-SES, while 61% (n = 17) of women were categorized as low-SES. Women expressed unmet needs in their BC care including routine care and mental and physical well-being care, as well as a lack of financial support to access BC care. Patient priorities included the following: developing cohesion between different aspects of BC care; communication with and between healthcare professionals; and patient empowerment within BC care. Recommendations moving forward post-pandemic included improving the transition from active to post-treatment, enhancing support resources, and implementing telemedicine where appropriate. Overall, women of low-SES experienced more severe unmet needs, which in turn resulted in varied patient priorities and recommendations. CONCLUSION: As health systems are recovering from the COVID-19 pandemic, the emphasis should be on restoring access to BC care and improving the quality of BC care, with a particular consideration given to those women from low-SES, to reduce health inequalities post-pandemic.


Subject(s)
Breast Neoplasms , COVID-19 , Qualitative Research , Humans , Female , COVID-19/epidemiology , Breast Neoplasms/therapy , Middle Aged , Aged , Social Determinants of Health , Health Services Accessibility , Adult , Health Services Needs and Demand , Interviews as Topic
20.
Conscious Cogn ; 123: 103718, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38880020

ABSTRACT

The phenomenon of "hearing voices" can be found not only in psychotic disorders but also in the general population, with individuals across cultures reporting auditory perceptions of supernatural beings. In our preregistered study, we investigated a possible mechanism of such experiences, grounded in the predictive processing model of agency detection. We predicted that in a signal detection task, expecting fewer or more voices than were actually present would drive the response bias toward a more conservative or a more liberal response strategy, respectively. Moreover, we hypothesized that including sensory noise would enhance these expectancy effects. In line with our predictions, the findings show that detection of voices relies on expectations and that this effect is especially pronounced in the case of unreliable sensory data. As such, the study contributes to our understanding of the predictive processes in hearing and the building blocks of voice-hearing experiences.
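The response-bias prediction can be made concrete with standard signal detection measures; the sketch below computes sensitivity (d') and criterion (c) from hypothetical hit and false-alarm counts under the two expectation conditions. The counts are made up for illustration.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """d' (sensitivity) and criterion c (response bias), with a small
    correction to avoid infinite z-scores at rates of 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa
    criterion = -0.5 * (z_hit + z_fa)  # > 0: conservative, < 0: liberal
    return d_prime, criterion

# Hypothetical counts: expecting many voices (liberal bias) vs. expecting
# few voices (conservative bias).
print(sdt_measures(hits=42, misses=8, false_alarms=20, correct_rejections=30))
print(sdt_measures(hits=30, misses=20, false_alarms=5, correct_rejections=45))
```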
