ABSTRACT
Studies using observational measures often fail to meet statistical standards for both reliability and validity. The present study examined the psychometric properties of the Coding Interactive Behavior (CIB) System in a German sample of parent-child dyads. The sample consisted of 149 parents with and without a mental illness and their children (experimental group [EG]: n = 75; control group [CG]: n = 74) who participated in the larger Children of Mentally Ill Parents at Risk Evaluation (COMPARE) study. The children's ages ranged from 3 to 12 years (M = 7.99, SD = 2.5). Exploratory factor analysis supported a five-factor model of the CIB, with items describing 1) parental sensitivity/reciprocity, 2) parental intrusiveness, 3) child withdrawal, 4) child involvement, and 5) parent limit setting/child compliance. Compared with models reported for international samples, this model lacked two independent dyadic factors. Tests of predictive validity identified seven items that differentiated parental group membership. The CIB factors did not appear sensitive enough to capture differences in interaction within a sample of parents with various mental illnesses. Applying the CIB to this or similar samples in the future may therefore require additional measurement instruments.
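A minimal sketch of the exploratory factor analysis step, assuming the CIB item ratings sit in a dyads-by-items table and a promax rotation; the item count, column names, and simulated values are placeholders, not the study's data:

    import numpy as np
    import pandas as pd
    from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

    rng = np.random.default_rng(0)
    # Placeholder ratings: 149 dyads x 30 hypothetical CIB items
    items = pd.DataFrame(rng.normal(size=(149, 30)),
                         columns=[f"cib_{i + 1}" for i in range(30)])

    fa = FactorAnalyzer(n_factors=5, rotation="promax")
    fa.fit(items)

    loadings = pd.DataFrame(fa.loadings_, index=items.columns)
    print(loadings.round(2))          # which items load on which of the 5 factors
    print(fa.get_factor_variance())   # variance, proportion, cumulative per factor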
ABSTRACT
Expectations shape our experience of music. However, the internal model upon which listeners form melodic expectations is still debated. Do expectations stem from Gestalt-like principles or from statistical learning? If the latter, does long-term experience play an important role, or are short-term regularities sufficient? And finally, what length of context informs contextual expectations? To answer these questions, we presented human listeners with diverse naturalistic compositions from Western classical music while recording neural activity using MEG. We quantified note-level melodic surprise and uncertainty using various computational models of music, including a state-of-the-art transformer neural network. A time-resolved regression analysis revealed that neural activity over fronto-temporal sensors tracked melodic surprise, particularly around 200 ms and 300-500 ms after note onset. This neural surprise response was dissociated from sensory-acoustic and adaptation effects. Neural surprise was best predicted by computational models that incorporate long-term statistical learning, rather than by simple, Gestalt-like principles. Yet, intriguingly, the surprise reflected primarily short-range musical contexts of fewer than ten notes. We present a full replication of our novel MEG results in an openly available EEG dataset. Together, these results elucidate the internal model that shapes melodic predictions during naturalistic music listening.
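A minimal sketch of the analysis logic described above: surprisal is the negative log probability a model assigns to each heard note, and a time-resolved regression fits that regressor separately to every sensor and time point of the note-locked neural epochs. The array shapes and all values below are simulated placeholders, not the study's data or pipeline:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    note_probs = rng.uniform(0.01, 1.0, size=500)   # model p(note | context), toy values
    meg = rng.standard_normal((500, 20, 60))        # notes x sensors x time, toy epochs

    surprise = -np.log(note_probs)                  # note-level surprisal in nats
    X = surprise.reshape(-1, 1)

    n_notes, n_sensors, n_times = meg.shape
    betas = np.empty((n_sensors, n_times))
    for s in range(n_sensors):                      # time-resolved regression:
        for t in range(n_times):                    # one fit per sensor/time point
            betas[s, t] = LinearRegression().fit(X, meg[:, s, t]).coef_[0]
    # betas maps where and when sensors track surprise; the study reports
    # peaks roughly 200 ms and 300-500 ms after note onset.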
Subjects
Music, Humans, Auditory Perception/physiology, Learning/physiology, Uncertainty, Acoustic Stimulation/methods
ABSTRACT
Decoding the rich temporal dynamics of complex sounds such as speech is constrained by the underlying neuronal-processing mechanisms. Oscillatory theories suggest the existence of one optimal perceptual performance regime at auditory stimulation rates in the delta to theta range (< 10 Hz), but whether performance is reduced in the alpha range (10-14 Hz) remains controversial. Additionally, the widely discussed contribution of the motor system to timing remains unclear. We measured rate discrimination thresholds between 4 and 15 Hz and estimated auditory-motor coupling strength with a behavioral auditory-motor synchronization task. In a Bayesian model comparison, high auditory-motor synchronizers showed a larger range of constant optimal temporal judgments than low synchronizers, with performance decreasing in the alpha range. This evidence for optimal processing in the theta range is consistent with preferred oscillatory regimes in auditory cortex that compartmentalize stimulus encoding and processing. The findings suggest, remarkably, that increased auditory-motor synchronization might extend such an optimal range towards faster rates.
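A toy sketch of the model-comparison logic, using BIC as a simple stand-in for the study's Bayesian model comparison: a piecewise model (flat thresholds up to a breakpoint, then rising) is pitted against a plain linear trend. The rates, thresholds, and both model forms are illustrative assumptions:

    import numpy as np
    from scipy.optimize import curve_fit

    rates = np.arange(4, 16, dtype=float)            # 4-15 Hz stimulation rates
    rng = np.random.default_rng(1)
    thresholds = np.where(rates < 10, 0.5, 0.5 + 0.1 * (rates - 10))
    thresholds = thresholds + rng.normal(0, 0.02, rates.size)  # toy data

    def piecewise(r, base, bp, slope):               # constant, then rising
        return np.where(r < bp, base, base + slope * (r - bp))

    def linear(r, a, b):
        return a + b * r

    def bic(y, yhat, k):                             # lower = better trade-off
        n = y.size
        return n * np.log(np.sum((y - yhat) ** 2) / n) + k * np.log(n)

    p1, _ = curve_fit(piecewise, rates, thresholds, p0=[0.5, 10.0, 0.1])
    p2, _ = curve_fit(linear, rates, thresholds)
    print("piecewise BIC:", bic(thresholds, piecewise(rates, *p1), 3))
    print("linear BIC:   ", bic(thresholds, linear(rates, *p2), 2))

A lower BIC for the piecewise model would correspond to the reported pattern: a constant optimal regime in the theta range followed by a performance decline in the alpha range.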
Subjects
Speech Perception, Time Perception, Acoustic Stimulation, Auditory Perception, Bayes Theorem, Humans, Speech
ABSTRACT
Musical training enhances auditory-motor cortex coupling, which in turn facilitates music and speech perception. How tightly the temporal processing of music and speech is intertwined is a topic of current research. We investigated the relationship between musical sophistication (Goldsmiths Musical Sophistication index, Gold-MSI) and spontaneous speech-to-speech synchronization behavior as an indirect measure of speech auditory-motor cortex coupling strength. In a group of participants (n = 196), we tested whether the outcome of the spontaneous speech-to-speech synchronization test (SSS-test) can be inferred from self-reported musical sophistication. Participants were classified as high (HIGHs) or low (LOWs) synchronizers according to the SSS-test. HIGHs scored higher than LOWs on all Gold-MSI subscales (General Score, Active Engagement, Musical Perception, Musical Training, Singing Skills) except the Emotional Attachment scale. More specifically, compared to a previously reported German-speaking sample, HIGHs scored higher overall and LOWs lower. Compared to an estimated distribution of the English-speaking general population, our sample scored lower overall, with the scores of LOWs differing significantly from the normal distribution and falling around the 30th percentile. While HIGHs reported musical training more often than LOWs, the distribution of training instruments did not vary across groups. Importantly, even after the highly correlated Gold-MSI subscores were decorrelated, the Musical Perception and Musical Training subscales in particular allowed inference of speech-to-speech synchronization behavior. Musical perception and training had differential effects: training predicted audio-motor synchronization in both groups, whereas perception did so only in the HIGHs. Our findings suggest that speech auditory-motor cortex coupling strength can be inferred from training and perceptual aspects of musical sophistication, pointing to shared mechanisms involved in speech and music perception.
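An illustrative sketch of the decorrelation-and-prediction idea: each Gold-MSI subscale is residualized against the others, and the residuals are then used to classify HIGH versus LOW synchronizers. The subscale set, group labels, and all values are simulated assumptions, not the study's data:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(2)
    n = 196
    df = pd.DataFrame(rng.standard_normal((n, 3)),
                      columns=["Musical_Training", "Musical_Perception",
                               "Active_Engagement"])
    group = rng.integers(0, 2, n)                   # 1 = HIGH, 0 = LOW synchronizer

    # Decorrelate: keep only the variance in each subscale not shared with the rest
    resid = df.copy()
    for col in df.columns:
        others = df.drop(columns=col)
        resid[col] = df[col] - LinearRegression().fit(others, df[col]).predict(others)

    clf = LogisticRegression().fit(resid, group)
    print(dict(zip(df.columns, clf.coef_[0])))      # which subscales carry weight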