ABSTRACT
Many people listen to music that conveys challenging emotions such as sadness and anger, despite the common assumption that media are consumed to elicit pleasure. We propose that eudaimonic motivation, the desire to engage with aesthetic experiences in order to be challenged and to find meaning, can explain why people listen to music conveying such emotions. However, it is unknown whether music containing violent themes can facilitate such meaningful experiences. In this investigation, three studies were conducted to determine the implications of eudaimonic and hedonic (pleasure-seeking) motivations for fans of music with violent themes. In Study 1, we developed and tested a new scale and showed that fans exhibit high levels of both types of motivation. Study 2 further validated the new scale and provided evidence that the two types of motivation are associated with different affective outcomes. Study 3 revealed that fans of violently themed music exhibited higher levels of eudaimonic motivation and lower levels of hedonic motivation than fans of non-violently themed music. Taken together, the findings support the notion that fans of music with violent themes are driven to engage with this music to be challenged and to pursue meaning, as well as to experience pleasure. Implications for fans' well-being and future applications of the new measure are discussed.
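As a minimal illustration of how the two motivation subscales described above might be scored and checked for internal consistency, the sketch below assumes a hypothetical item-level dataset; the column names (eud_1..., hed_1...) are placeholders, not the published scale items.

```python
# Hypothetical scoring sketch for a two-factor (eudaimonic/hedonic) motivation scale.
# Item column names are placeholders for illustration only.
import pandas as pd
import pingouin as pg

def score_motivations(df: pd.DataFrame) -> pd.DataFrame:
    eud_items = [c for c in df.columns if c.startswith("eud_")]
    hed_items = [c for c in df.columns if c.startswith("hed_")]

    # Internal consistency of each subscale (Cronbach's alpha).
    alpha_eud, _ = pg.cronbach_alpha(data=df[eud_items])
    alpha_hed, _ = pg.cronbach_alpha(data=df[hed_items])
    print(f"alpha eudaimonic = {alpha_eud:.2f}, alpha hedonic = {alpha_hed:.2f}")

    # Subscale scores taken as the mean of their items.
    return df.assign(eudaimonic=df[eud_items].mean(axis=1),
                     hedonic=df[hed_items].mean(axis=1))
```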
Subject(s)
Music, Pleasure, Humans, Motivation, Music/psychology, Emotions, Anger
ABSTRACT
Rich intercultural music engagement (RIME) is an embodied form of engagement whereby individuals immerse themselves in foreign musical practice, for example, by learning a traditional instrument from that culture. The present investigation evaluated whether RIME with Chinese or Middle Eastern music can nurture intercultural understanding. White Australian participants were randomly assigned to one of two plucked-string groups: Chinese pipa (n = 29) or Middle Eastern oud (n = 29). Before and after the RIME intervention, participants completed measures of ethnocultural empathy, tolerance, social connectedness, explicit and implicit attitudes towards ethnocultural groups, and open-ended questions about their experience. Following RIME, White Australian participants reported a significant increase in ethnocultural empathy, tolerance, feelings of social connection, and improved explicit and implicit attitudes towards Chinese and Middle Eastern people. However, these benefits differed between groups. Participants who learned Chinese pipa reported reduced bias and increased social connectedness towards Chinese people, but not towards Middle Eastern people. Conversely, participants who learned Middle Eastern oud reported a significant increase in social connectedness towards Middle Eastern people, but not towards Chinese people. This is the first experimental evidence that participatory RIME is an effective tool for understanding a culture other than one's own, with the added potential to reduce cultural bias.
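The abstract above describes a randomized pre/post design with two instrument groups but does not name the statistical analysis; the sketch below shows one plausible approach, a mixed (time-by-group) ANOVA on a single outcome, with long-format column names assumed for illustration.

```python
# Sketch of a pre/post x group analysis for one outcome (ethnocultural empathy).
# Long-format columns (subject, group, time, empathy) are assumptions for illustration.
import pandas as pd
import pingouin as pg

def analyse_empathy(long_df: pd.DataFrame) -> pd.DataFrame:
    # group: "pipa" or "oud"; time: "pre" or "post".
    aov = pg.mixed_anova(data=long_df, dv="empathy",
                         within="time", between="group", subject="subject")
    return aov  # F-tests for time, group, and the time x group interaction
```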
Subject(s)
Culture, Music, Humans, Australia, Empathy, Learning
ABSTRACT
BACKGROUND: Ensuring that pool lifeguards develop the skills necessary to detect drowning victims is challenging given that these situations are relatively rare, unpredictable, and difficult to simulate accurately and safely. Virtual reality potentially provides a safe and ecologically valid approach to training since it offers a near-to-real visual experience, together with the opportunity to practice task-related skills and receive feedback. As a prelude to the development of a training intervention, the aim of this research was to establish the construct validity of virtual reality drowning detection tasks. METHOD: Using a repeated measures design, a total of 38 qualified lifeguards and 33 non-lifeguards completed 13-min and 23-min simulated drowning detection tasks that were intended to reflect different levels of sustained attention. During the simulated tasks, participants were asked to monitor a virtual pool and identify any drowning targets, with accuracy, response latency, and dwell time recorded. RESULTS: During the simulated scenarios, pool lifeguards detected drowning targets more frequently and spent less time than non-lifeguards fixating on the drowning target prior to the drowning onset. No significant differences between lifeguards and non-lifeguards were evident for response latency or for first fixations on the drowning target. CONCLUSION: The results provide support for the construct validity of virtual reality lifeguarding scenarios, thereby providing the basis for their development and introduction as a potential training approach for developing and maintaining performance in lifeguarding and drowning detection. APPLICATION: This research provides support for the construct validity of virtual reality simulations as a potential training tool, enabling improvements in the fidelity of training solutions to improve pool lifeguard competency in drowning detection.
Subject(s)
Drowning, Humans, Drowning/diagnosis, Drowning/prevention & control, Attention, Reaction Time
ABSTRACT
While the benefits to mood and well-being from passionate engagement with music are well-established, far less is known about the relationship between passion for explicitly violently themed music and psychological well-being. The present study employed the Dualistic Model of Passion to investigate whether harmonious passion (i.e., passionate engagement that is healthily balanced with other life activities) predicts positive music listening experiences and/or psychological well-being in fans of violently themed music. We also investigated whether obsessive passion (i.e., uncontrollable passionate engagement with an activity) predicts negative music listening experiences and/or psychological ill-being. Fans of violently themed music (N = 177) completed the passion scale, scale of positive and negative affective experiences, and various psychological well- and ill-being measures. As hypothesised, harmonious passion for violently themed music significantly predicted positive affective experiences which, in turn, predicted psychological well-being. Obsessive passion for violently themed music significantly predicted negative affective experiences which, in turn, predicted ill-being. Findings support the Dualistic Model of Passion, and suggest that even when music engagement includes violent content, adaptive outcomes are often experienced. We propose that the nature of one's passion for music is more influential in predicting well-being than the content or valence of the lyrical themes.
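The reported pattern (harmonious passion predicting positive affect, which in turn predicts well-being) is a mediation structure; the sketch below shows one way such an indirect effect could be estimated with bootstrapping, assuming hypothetical column names rather than the study's actual variable labels.

```python
# Sketch of a simple mediation test: harmonious passion -> positive affect -> well-being.
# Column names are placeholders for illustration, not the study's measures.
import pandas as pd
import pingouin as pg

def test_mediation(df: pd.DataFrame) -> pd.DataFrame:
    # Returns direct, indirect, and total effects with bootstrapped CIs.
    return pg.mediation_analysis(data=df,
                                 x="harmonious_passion",
                                 m="positive_affect",
                                 y="well_being",
                                 n_boot=5000, seed=42)
```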
ABSTRACT
Neuroscientific research has revealed interconnected brain networks implicated in musical creativity, such as the executive control network, the default mode network, and premotor cortices. The present study employed brain stimulation to evaluate the role of the primary motor cortex (M1) in creative and technically fluent jazz piano improvisations. We implemented transcranial direct current stimulation (tDCS) to alter the neural activation patterns of the left hemispheric M1 whilst pianists performed improvisations with their right hand. Two groups of expert jazz pianists (n = 8 per group) performed five improvisations in each of two blocks. In Block 1, they improvised in the absence of brain stimulation. In Block 2, one group received inhibitory tDCS and the second group received excitatory tDCS while performing five new improvisations. Three independent expert musicians judged the 160 performances on creativity and technical fluency using a 10-point Likert scale. As the M1 is involved in the acquisition and consolidation of motor skills and the control of hand orientation and velocity, we predicted that excitatory tDCS would increase the quality of improvisations relative to inhibitory tDCS. Indeed, improvisations under conditions of excitatory tDCS were rated as significantly more creative than those under conditions of inhibitory tDCS. A music analysis indicated that excitatory tDCS elicited improvisations with greater pitch range and number/variety of notes. Ratings of technical fluency did not differ significantly between tDCS groups. We discuss plausible mechanisms by which the M1 region contributes to musical creativity.
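The sketch below illustrates, in deliberately simplified form, a between-groups comparison of judged creativity like the one described above; it ignores the nesting of improvisations within pianists and judges, and the array contents and names are assumptions for illustration.

```python
# Illustrative comparison of mean creativity ratings between stimulation groups.
# Each array holds one judge-averaged creativity score per Block-2 improvisation
# (a simplification of the full repeated-measures structure).
import numpy as np
from scipy import stats

def compare_groups(excitatory: np.ndarray, inhibitory: np.ndarray) -> None:
    t, p = stats.ttest_ind(excitatory, inhibitory)
    pooled_sd = np.sqrt((excitatory.std(ddof=1) ** 2 + inhibitory.std(ddof=1) ** 2) / 2)
    d = (excitatory.mean() - inhibitory.mean()) / pooled_sd  # Cohen's d (equal-n approximation)
    print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```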
ABSTRACT
In a continuous recognition paradigm, most stimuli elicit superior recognition performance when the item to be recognized is the most recent stimulus (a recency-in-memory effect). Furthermore, increasing the number of intervening items cumulatively disrupts memory in most domains. Memory for melodies composed in familiar tuning systems also shows superior recognition for the most recent melody, but no disruptive effects from the number of intervening melodies. A possible explanation has been offered in a novel regenerative multiple representations (RMR) conjecture. The RMR assumes that prior knowledge informs perception and perception influences memory representations. It postulates that melodies are perceived, thus also represented, simultaneously as integrated entities and also as their components (such as pitches, pitch intervals, short phrases and rhythm). Multiple representations of the melody components and melody as a whole can restore one another, thus providing resilience against disruptive effects from intervening items. The conjecture predicts that melodies in an unfamiliar tuning system are not perceived as integrated melodies and should (a) disrupt recency-in-memory advantages and (b) facilitate disruptive effects from the number of intervening items. We test these two predictions in three experiments. Experiments 1 and 2 show that no recency-in-memory effects emerge for melodies in an unfamiliar tuning system. In Experiment 3, disruptive effects occurred as the number of intervening items and unfamiliarity of the stimuli increased. Overall, results are coherent with the predictions of the RMR conjecture. Further investigation of the conjecture's predictions may lead to greater understanding of the fundamental relationships between memory, perception and behavior.
Subject(s)
Auditory Perception/physiology, Memory/physiology, Music, Recognition, Psychology/physiology, Acoustic Stimulation, Adolescent, Adult, Female, Humans, Male, Psychoacoustics, Young Adult
ABSTRACT
In many memory domains, a decrease in recognition performance between the first and second presentation of an object is observed as the number of intervening items increases. However, this effect is not universal. Within the auditory domain, this form of interference has been demonstrated in word and single-note recognition, but has yet to be substantiated using relatively complex musical material such as a melody. Indeed, it is becoming clear that music shows intriguing properties when it comes to memory. This study investigated how the number of intervening items influences memory for melodies. In Experiments 1, 2 and 3, one melody was presented per trial in a continuous recognition paradigm. After each melody, participants indicated whether they had heard the melody in the experiment before by responding "old" or "new." In Experiment 4, participants rated perceived familiarity for every melody without being told that melodies reoccur. In four experiments using two corpora of music, two different memory tasks, transposed and untransposed melodies and up to 195 intervening melodies, no sign of a disruptive effect from the number of intervening melodies beyond the first was observed. We propose a new "regenerative multiple representations" conjecture to explain why intervening items increase interference in recognition memory for most domains but not music. This conjecture makes several testable predictions and has the potential to strengthen our understanding of domain specificity in human memory, while moving one step closer to explaining the "paradox" that is memory for melody.
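To make the continuous recognition analysis concrete, the sketch below computes hit rate for repeated ("old") melodies as a function of the number of intervening trials; the trial-level column names are assumptions for illustration, not the study's actual data files.

```python
# Sketch of a lag analysis for a continuous recognition paradigm: proportion of
# "old" responses to repeated melodies as a function of intervening items.
# Columns (participant, trial, melody_id, is_old, response) are assumed.
import pandas as pd

def hit_rate_by_lag(trials: pd.DataFrame) -> pd.Series:
    trials = trials.sort_values(["participant", "trial"]).copy()
    # Lag = number of trials between first presentation and the repeat.
    first = trials.groupby(["participant", "melody_id"])["trial"].transform("min")
    trials["lag"] = trials["trial"] - first - 1
    old = trials[trials["is_old"]]          # second presentations only
    hits = old["response"].eq("old")        # correct "old" judgements
    return hits.groupby(old["lag"]).mean()  # hit rate per number of intervening items
```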
Subject(s)
Memory/physiology, Music, Pitch Perception/physiology, Recognition, Psychology/physiology, Acoustic Stimulation, Adolescent, Adult, Awareness/physiology, Female, Humans, Male, Young Adult
ABSTRACT
Continuous increases of acoustic intensity (up-ramps) can indicate a looming (approaching) sound source in the environment, whereas continuous decreases of intensity (down-ramps) can indicate a receding sound source. From psychoacoustic experiments, an "adaptive perceptual bias" for up-ramp looming tonal stimuli has been proposed (Neuhoff, 1998). This theory postulates that (1) up-ramps are perceptually salient because of their association with looming and potentially threatening stimuli in the environment; (2) tonal stimuli are perceptually salient because of an association with single and potentially threatening biological sound sources in the environment, relative to white noise, which is more likely to arise from dispersed signals and nonthreatening/nonbiological sources (wind/ocean). In the present study, we extrapolated the "adaptive perceptual bias" theory and investigated its assumptions by measuring sound source localization in response to acoustic stimuli presented in azimuth to imply looming, stationary, and receding motion in depth. Participants (N = 26) heard three directions of intensity change (up-ramps, down-ramps, and steady state, associated with looming, receding, and stationary motion, respectively) and three levels of acoustic spectrum (a 1-kHz pure tone, the tonal vowel /Ó/, and white noise) in a within-subjects design. We first hypothesized that if up-ramps are "perceptually salient" and capable of eliciting adaptive responses, then they would be localized faster and more accurately than down-ramps. This hypothesis was supported. However, the results did not support the second hypothesis. Rather, the white-noise and vowel conditions were localized faster and more accurately than the pure-tone conditions. These results are discussed in the context of auditory and visual theories of motion perception, auditory attentional capture, and the spectral causes of spatial ambiguity.
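The sketch below shows what an intensity "ramp" stimulus of the kind described above looks like in signal terms: a tone whose level changes linearly in dB between a start and an end level. Absolute SPL calibration depends on playback hardware, so the levels here are relative to an arbitrary reference; all parameter values are illustrative.

```python
# Minimal sketch of an up-ramp/down-ramp stimulus: a 1 kHz tone whose level
# changes linearly in dB (e.g., 60 -> 80 dB for an up-ramp, 80 -> 60 dB for a
# down-ramp). Levels are relative to an arbitrary 0 dB reference, not calibrated SPL.
import numpy as np

def ramp_tone(freq=1000.0, dur=3.6, start_db=60.0, end_db=80.0, fs=44100):
    t = np.arange(int(dur * fs)) / fs
    level_db = np.linspace(start_db, end_db, t.size)   # linear change in dB
    amplitude = 10 ** ((level_db - 80.0) / 20.0)       # scale so 80 dB maps to 1.0
    return amplitude * np.sin(2 * np.pi * freq * t)
```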
Subject(s)
Attention/physiology, Motion Perception/physiology, Sound Localization/physiology, Space Perception/physiology, Adolescent, Adult, Female, Humans, Male, Psychoacoustics, Young Adult
ABSTRACT
Phrasing facilitates the organization of auditory information and is central to speech and music. Not surprisingly, aspects of changing intensity, rhythm, and pitch are key determinants of musical phrases and their boundaries in instrumental note-based music. Different kinds of speech (such as tone- vs. stress-languages) share these features in different proportions and form an instructive comparison. However, little is known about whether or how musical phrasing is perceived in sound-based music, where the basic musical unit from which a piece is created is commonly non-instrumental continuous sounds, rather than instrumental discontinuous notes. This issue forms the target of the present paper. Twenty participants (17 untrained in music) were presented with six stimuli derived from sound-based music, note-based music, and environmental sound. Their task was to indicate each occurrence of a perceived phrase and qualitatively describe key characteristics of the stimulus associated with each phrase response. It was hypothesized that sound-based music does elicit phrase perception, and that this is primarily associated with temporal changes in intensity and timbre, rather than rhythm and pitch. Results supported this hypothesis. Qualitative analysis of participant descriptions showed that for sound-based music, the majority of perceived phrases were associated with intensity or timbral change. For the note-based piano piece, rhythm was the main theme associated with perceived musical phrasing. We modeled the occurrence in time of perceived musical phrases with recurrent event 'hazard' analyses using time-series data representing acoustic predictors associated with intensity, spectral flatness, and rhythmic density. Acoustic intensity and timbre (represented here by spectral flatness) were strong predictors of perceived musical phrasing in sound-based music, and rhythm was only predictive for the piano piece. A further analysis including five additional spectral measures linked to timbre strengthened the models. Overall, results show that even when little of the pitch and rhythm information important for phrasing in note-based music is available, phrasing is still perceived, primarily in response to changes of intensity and timbre. Implications for electroacoustic music composition and music recommender systems are discussed.
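The sketch below illustrates the front end of an analysis like the one described above: extracting the two main acoustic predictors (intensity and spectral flatness) frame by frame and relating them to listeners' phrase responses. A frame-wise logistic regression is used here as a simplified stand-in for the recurrent-event hazard analysis reported in the study; the file path and phrase-response vector are assumptions.

```python
# Extract intensity (RMS) and spectral flatness per frame and fit a simplified
# frame-wise model of phrase responses (a stand-in for the hazard analysis).
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def phrase_model(audio_path: str, phrase_onsets_s: np.ndarray):
    y, sr = librosa.load(audio_path, sr=None)
    rms = librosa.feature.rms(y=y)[0]                   # intensity proxy
    flat = librosa.feature.spectral_flatness(y=y)[0]    # timbre proxy
    times = librosa.frames_to_time(np.arange(rms.size), sr=sr)

    # Mark frames falling within 1 s after any reported phrase response.
    labels = np.zeros(rms.size, dtype=int)
    for onset in phrase_onsets_s:
        labels[(times >= onset) & (times < onset + 1.0)] = 1

    X = np.column_stack([rms, flat])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```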
Subject(s)
Acoustics, Music, Pitch Perception/physiology, Adolescent, Adult, Female, Humans, Male
ABSTRACT
BACKGROUND: Virtual humans have become part of our everyday life (movies, internet, and computer games). Even though they are becoming more and more realistic, their speech capabilities are usually limited and are often not coherent and/or not synchronous with the corresponding acoustic signal. METHODS: We describe a method to convert a virtual human avatar (animated through key frames and interpolation) into a more naturalistic talking head. In fact, speech articulation cannot be accurately replicated using interpolation between key frames, so talking heads with good speech capabilities are derived from real speech production data. Motion capture data are commonly used to provide accurate facial motion for visible speech articulators (jaw and lips) synchronous with acoustics. To access tongue trajectories (a partially occluded speech articulator), electromagnetic articulography (EMA) is often used. We recorded a large database of phonetically balanced English sentences with synchronous EMA, motion capture data, and acoustics. An articulatory model was computed on this database to recover missing data and to provide 'normalized' animation (i.e., articulatory) parameters. In addition, semi-automatic segmentation was performed on the acoustic stream. A dictionary of multimodal Australian English diphones was created. It is composed of the variation of the articulatory parameters between all successive stable allophones. RESULTS: The avatar's facial key frames were converted into articulatory parameters steering its speech articulators (jaw, lips and tongue). The speech production database was used to drive the Embodied Conversational Agent (ECA) and to enhance its speech capabilities. A Text-To-Auditory Visual Speech synthesizer was created based on the MaryTTS software and on the diphone dictionary derived from the speech production database. CONCLUSIONS: We describe a method to transform an ECA with generic tongue model and animation by key frames into a talking head that displays naturalistic tongue, jaw and lip motions. Thanks to a multimodal speech production database, a Text-To-Auditory Visual Speech synthesizer drives the ECA's facial movements, enhancing its speech capabilities.
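The sketch below gives a conceptual picture of the diphone dictionary described above: for each pair of successive stable allophones, the trajectory of articulatory parameters between their stable points is stored for later concatenation. The data structures and field names are illustrative, not those of the actual system.

```python
# Conceptual sketch of a diphone dictionary keyed on successive allophone pairs,
# storing the articulatory-parameter trajectory between their stable frames.
import numpy as np

def build_diphone_dictionary(segments, params):
    """segments: list of (phone_label, stable_frame_index), in utterance order.
    params: array of shape (n_frames, n_articulatory_params), e.g. jaw/lip/tongue."""
    dictionary = {}
    for (ph1, i1), (ph2, i2) in zip(segments, segments[1:]):
        trajectory = params[i1:i2 + 1]                 # variation from ph1 to ph2
        dictionary.setdefault((ph1, ph2), []).append(trajectory)
    return dictionary
```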
ABSTRACT
The aim of this work was to investigate perceived loudness change in response to melodies that increase (up-ramp) or decrease (down-ramp) in acoustic intensity, and the interaction with other musical factors such as melodic contour, tempo, and tonality (tonal/atonal). A within-subjects design manipulated direction of linear intensity change (up-ramp, down-ramp), melodic contour (ascending, descending), tempo, and tonality, using single ramp trials and paired ramp trials, where single up-ramps and down-ramps were assembled to create continuous up-ramp/down-ramp or down-ramp/up-ramp pairs. Twenty-nine (Exp 1) and thirty-six (Exp 2) participants rated loudness continuously in response to trials with monophonic 13-note piano melodies lasting either 6.4 s or 12 s. Linear correlation coefficients >.89 between loudness and time show that time-series loudness responses to dynamic up-ramp and down-ramp melodies are essentially linear across all melodies. Therefore, 'indirect' loudness change derived from the difference in loudness at the beginning and end points of the continuous response was calculated. Down-ramps were perceived to change significantly more in loudness than up-ramps in both tonalities and at a relatively slow tempo. Loudness change was also greater for down-ramps presented with a congruent descending melodic contour, relative to an incongruent pairing (down-ramp and ascending melodic contour). No differential effect of intensity ramp/melodic contour congruency was observed for up-ramps. In paired ramp trials assessing the possible impact of ramp context, loudness change in response to up-ramps was significantly greater when preceded by down-ramps, than when not preceded by another ramp. Ramp context did not affect down-ramp perception. The contributions to the fields of music perception and psychoacoustics are discussed in the context of real-time perception of music, principles of music composition, and performance of musical dynamics.
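A minimal sketch of the two response measures described above is given below for a single continuous-rating trial: the linear correlation between loudness ratings and time, and the "indirect" loudness change taken as the end rating minus the initial rating. The input arrays (one participant's time-stamped ratings for one melody) are assumptions for illustration.

```python
# Two summary measures for one continuous loudness-rating trial.
import numpy as np

def loudness_change(times_s: np.ndarray, ratings: np.ndarray):
    r = np.corrcoef(times_s, ratings)[0, 1]     # linearity of the time-series response
    indirect_change = ratings[-1] - ratings[0]  # end-point minus start-point loudness
    return r, indirect_change
```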
Subject(s)
Auditory Perception/physiology, Music, Acoustic Stimulation, Adolescent, Adult, Female, Humans, Loudness Perception/physiology, Male, Psychoacoustics, Time Perception, Young Adult
ABSTRACT
Overestimation of loudness change typically occurs in response to up-ramp auditory stimuli (increasing intensity) relative to down-ramps (decreasing intensity) matched on frequency, duration, and end-level. In the experiment reported, forward masking is used to investigate a sensory component of up-ramp overestimation: persistence of excitation after stimulus presentation. White-noise and synthetic vowel 3.6 s up-ramp and down-ramp maskers were presented over two regions of intensity change (40-60 dB SPL, 60-80 dB SPL). Three participants detected 10 ms 1.5 kHz pure tone signals presented at masker-offset to signal-offset delays of 10, 20, 30, 50, 90, 170 ms. Masking magnitude was significantly greater in response to up-ramps compared with down-ramps for masker-signal delays up to and including 50 ms. When controlling for an end-level recency bias (40-60 dB SPL up-ramp vs 80-60 dB SPL down-ramp), the difference in masking magnitude between up-ramps and down-ramps was not significant at each masker-signal delay. Greater sensory persistence in response to up-ramps is argued to have minimal effect on perceptual overestimation of loudness change when response biases are controlled. An explanation based on sensory adaptation is discussed.
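The sketch below summarises masking magnitude by masker type and masker-signal delay in the way the comparison above implies; it assumes a tidy table of detection thresholds per condition and expresses masking relative to an unmasked baseline. Column names are assumptions for illustration.

```python
# Illustrative summary of masking magnitude (threshold elevation) by masker type
# and masker-signal delay, relative to an unmasked baseline threshold.
import pandas as pd

def masking_by_delay(df: pd.DataFrame, unmasked_threshold_db: float) -> pd.DataFrame:
    df = df.assign(masking_db=df["threshold_db"] - unmasked_threshold_db)
    # Rows: delay (ms); columns: up-ramp vs down-ramp masker; cells: mean masking in dB.
    return df.pivot_table(index="delay_ms", columns="masker",
                          values="masking_db", aggfunc="mean")
```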
Subject(s)
Loudness Perception, Perceptual Masking, Sound Spectrography, Acoustic Stimulation, Adult, Attention, Discrimination, Psychological, Female, Humans, Male, Pitch Perception, Psychoacoustics, Speech Perception
ABSTRACT
In two experiments, we examined the effect of intensity and intensity change on judgements of pitch differences or interval size. In Experiment 1, 39 musically untrained participants rated the size of the interval spanned by two pitches within individual gliding tones. Tones were presented at high intensity, low intensity, looming intensity (up-ramp), and fading intensity (down-ramp) and glided between two pitches spanning either 6 or 7 semitones (a tritone or a perfect fifth interval). The pitch shift occurred in either ascending or descending directions. Experiment 2 repeated the conditions of Experiment 1 but the shifts in pitch and intensity occurred across two discrete tones (i.e., a melodic interval). Results indicated that participants were sensitive to the differences in interval size presented: Ratings were significantly higher when two pitches differed by 7 semitones than when they differed by 6 semitones. However, ratings were also dependent on whether the interval was high or low in intensity, whether it increased or decreased in intensity across the two pitches, and whether the interval was ascending or descending in pitch. Such influences illustrate that the perception of pitch relations does not always adhere to a logarithmic function as implied by their musical labels, but that identical intervals are perceived as substantially different in size depending on other attributes of the sound source.
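The sketch below shows how a gliding-tone stimulus of the kind described above can be constructed: the pitch glides between two frequencies separated by a given number of semitones (f2 = f1 * 2^(semitones/12), the logarithmic relation referred to in the abstract) while the level rises or falls linearly in dB. Parameter values are illustrative.

```python
# Gliding tone spanning a given musical interval, with a looming or fading level.
import numpy as np

def pitch_glide(f1=440.0, semitones=7, dur=2.0, start_db=-20.0, end_db=0.0, fs=44100):
    n = int(dur * fs)
    f2 = f1 * 2 ** (semitones / 12)                          # e.g., 7 semitones = perfect fifth
    freq = np.linspace(f1, f2, n)                            # instantaneous frequency
    phase = 2 * np.pi * np.cumsum(freq) / fs                 # integrate frequency for phase
    amplitude = 10 ** (np.linspace(start_db, end_db, n) / 20)  # linear dB ramp (up or down)
    return amplitude * np.sin(phase)
```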
Subject(s)
Judgment/physiology, Pitch Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Analysis of Variance, Female, Humans, Male, Music, Professional Competence, Psychoacoustics, Reaction Time, Time Factors, Young Adult
ABSTRACT
Three experiments investigate psychological, methodological, and domain-specific characteristics of loudness change in response to sounds that continuously increase in intensity (up-ramps), relative to sounds that decrease (down-ramps). Timbre (vowel, violin), layer (monotone, chord), and duration (1.8 s, 3.6 s) were manipulated in Experiment 1. Participants judged global loudness change between pairs of spectrally identical up-ramps and down-ramps. It was hypothesized that loudness change is overestimated in up-ramps, relative to down-ramps, using simple speech and musical stimuli. The hypothesis was supported and the proportion of up-ramp overestimation increased with stimulus duration. Experiment 2 investigated recency and a bias for end-levels by presenting paired dynamic stimuli with equivalent end-levels and steady-state controls. Experiment 3 used single stimulus presentations, removing artifacts associated with paired stimuli. Perceptual overestimation of loudness change is influenced by (1) intensity region of the dynamic stimulus; (2) differences in stimulus end-level; (3) order in which paired items are presented; and (4) duration of each item. When methodological artifacts are controlled, overestimation of loudness change in response to up-ramps remains. The relative influence of cognitive and sensory mechanisms is discussed.
Subject(s)
Judgment, Loudness Perception, Music, Sound Spectrography, Speech Acoustics, Acoustic Stimulation/methods, Adolescent, Female, Humans, Illusions, Male, Young Adult
ABSTRACT
A "perceptual bias for rising intensity" (Neuhoff 1998, Nature 395 123-124) is not dependent on the continuous change of a dynamic, looming sound source. Thirty participants were presented with pairs of 500 ms steady-state sounds corresponding to onset and offset levels of previously used dynamic increasing- and decreasing-intensity stimuli. Independent variables, intensity-change direction (increasing, decreasing), intensity region (high: 70-90 dB SPL, low: 50-70 dB SPL), interstimulus interval (ISI) (0 s, 1.8 s, 3.6 s), and timbre (vowel, violin) were manipulated as a fully within-subjects design. The dependent variable was perceived loudness change between each stimulus item in a pair. It was hypothesised that (i) noncontinuous increases of intensity are overestimated in loudness change, relative to decreases, in both low-intensity and high-intensity regions; and (ii) perceptual overestimation does not occur when end-levels are balanced. The hypotheses were partially supported. At the high-intensity region, increasing stimuli were perceived to change more in loudness than decreasing-intensity stimuli. At the low-intensity region and under balanced end-level conditions, decreasing-intensity stimuli were perceived to change more in loudness than increasing-intensity stimuli. A significant direction x region interaction varied as a function of ISI. Methodological, sensory, and cognitive explanations for overestimation in certain circumstances are discussed.
Subject(s)
Acoustic Stimulation/methods, Loudness Perception/physiology, Adolescent, Adult, Female, Humans, Male, Motion Perception/physiology, Time Factors, Young Adult
ABSTRACT
The present experiment was aimed at characterizing the timing of conditioned nictitating membrane (NM) movements as a function of the interstimulus interval (ISI) in delay conditioning for rabbits (Oryctolagus cuniculus). Onset latency and peak latency were approximately, but not strictly, scalar for all but the smallest movements (<.10 mm). That is, both the mean and standard deviation of the timing measures increased in proportion to the ISI, but their coefficients of variation (standard deviation/mean) tended to be larger for shorter ISIs. For all ISIs, the absolute timing of the NM movements covaried with magnitude. The smaller movements (approximately .11-.50 mm) were highly variable, and their peaks tended to occur well after the time of US delivery. The larger movements (>.50 mm) were less variable, and their peaks were better aligned with the time of US delivery. These results are discussed with respect to their implications for current models of timing in eyeblink conditioning.
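The sketch below summarises the scalar-timing check described above: for each ISI, the mean, standard deviation, and coefficient of variation (SD/mean) of response peak latency, where strict scalar timing predicts a constant coefficient of variation across ISIs. The column names are assumptions for illustration.

```python
# Scalar-timing summary: per-ISI mean, SD, and coefficient of variation of peak latency.
import pandas as pd

def timing_summary(df: pd.DataFrame) -> pd.DataFrame:
    g = df.groupby("isi_ms")["peak_latency_ms"].agg(["mean", "std"])
    g["cv"] = g["std"] / g["mean"]   # strict scalar timing implies a constant CV
    return g
```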
Subject(s)
Conditioning, Eyelid/physiology, Nictitating Membrane/physiology, Reaction Time/physiology, Acoustic Stimulation/methods, Animals, Female, Psychoacoustics, Rabbits, Time Factors
ABSTRACT
Extinguishing a conditioned response (CR) has entailed separating the conditioned stimulus (CS) from the unconditioned stimulus (US). This research reveals that elimination of the rabbit nictitating membrane response occurred during continuous CS-US pairings. Initial training contained a mixture of 2 CS-US interstimulus intervals (ISIs), 150 ms and 500 ms. The CRs showed double peaks, one for each ISI. When the 150-ms ISI was removed, its CR peak showed 2 hallmarks of extinction: a decline across sessions and spontaneous recovery between sessions. When a further stage of training was introduced with a distinctive CS using the 150-ms ISI, occasional tests of the original, extinguished CS revealed another hallmark of extinction, specifically, strong recovery of the 150-ms peak. These results support both abstract and cerebellar models of conditioning that encode the CS into a cascade of microstimuli, while challenging theories of extinction that rely on changes in CS processing, US representations, and contextual control.