Results 1 - 20 of 51
1.
Res Sq ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39315273

ABSTRACT

Past research shows that emotion affects beauty judgments of images and music. Because it is widely supposed that our faculty of empathy facilitates aesthetic experience, we wondered whether individual levels of empathy modulate the effect of emotion on beauty. A total of 164 participants rated the perceived beauty, happiness, and sadness of 12 art images, 12 nature photographs, and 24 songs. The stimuli were presented in two blocks, and participants took the PANAS mood questionnaire before and after each block. Between blocks, they viewed one of three mood induction videos, intended to increase their happiness, increase their sadness, or leave their mood unchanged. We also measured (trait) empathy with the Questionnaire for Cognitive and Affective Empathy. We used structural equation modeling to analyze the effect of empathy on emotion, beauty, and the relationship between them. We assessed four emotion variables: participants' felt happiness and sadness (mood questionnaire ratings) and perceived happiness and sadness (stimulus ratings). We find that higher empathy is associated with stronger positive relationships between beauty and both felt and perceived emotions, for both images and music (β ≈ 0.06 per empathy point on a 10-point scale, p < 0.001). We also find that perceived happiness and sadness boost beauty directly for both images and music. However, sadness affects music more than images (β = 0.51 vs. 0.12, all p < 0.001), and empathy amplifies this relationship for music but not images. Thus, felt and perceived emotions produce more beauty, more so in more empathic people, and more so with music than images.

2.
iScience ; 27(7): 110213, 2024 Jul 19.
Article in English | MEDLINE | ID: mdl-39006484

ABSTRACT

Variance across participants is at the heart of the centuries-old debate about the universality of beauty. Beauty's belonging to the eye of the beholder implies large interindividual variance, while beauty as a universal object property implies the opposite. To characterize the variance at the center of this debate, we selected two image quartets, one high-variance and one low-variance, matched for high typicality and mean beauty. The quartets have high or low variance across 50 participants (group variance) and correspondingly high or low variance across the images of a quartet for each participant (quartet variance). We asked 52 new participants to estimate their own quartet mean and quartet variance. Participants successfully predicted their quartet mean but failed to predict their quartet variance. Though invisible, beauty variance is essential to prediction, both in theory and in practice. The quartets show that mean beauty is not the whole story: beauty variance is heterogeneous.

3.
Front Hum Neurosci ; 17: 1255465, 2023.
Article in English | MEDLINE | ID: mdl-38094145

ABSTRACT

Online methods allow testing of larger, more diverse populations, with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online as online observers tend to be inexperienced, and online gaze tracking, using the built-in webcam, has a low precision (±4 deg). EasyEyes open-source software reliably measures peripheral thresholds online with accurate fixation achieved in a novel way, without gaze tracking. It tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery. The observer responds by identifying the target. To evaluate EasyEyes fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the laboratory, using gaze-contingent stimulus presentation; second, in the laboratory, using EasyEyes while independently monitoring gaze using EyeLink 1000; third, online at home, using EasyEyes. We find that crowding thresholds are consistent and individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, this method enables fixation-dependent measurements online, for easy testing of larger and more diverse populations.

4.
J Vis ; 23(13): 6, 2023 Nov 01.
Article in English | MEDLINE | ID: mdl-37971770

ABSTRACT

What role do the emotions of subject and object play in judging the beauty of images and music? Eighty-one participants rated perceived beauty, liking, perceived happiness, and perceived sadness of 24 songs, 12 art images, and 12 nature photographs. Stimulus presentation was brief (2 seconds) or prolonged (20 seconds). The stimuli were presented in two blocks, and participants took the Positive and Negative Affect Schedule (PANAS) mood questionnaire before and after each block. They viewed a mood induction video between blocks either to increase their happiness or sadness or to maintain their mood. Using linear mixed-effects models, we found that perceived object happiness predicts an increase in image and song beauty regardless of duration. The effect of perceived object sadness on beauty, however, is stronger for songs than images and stronger for prolonged than brief durations. Subject emotion affects brief song beauty minimally and prolonged song beauty substantially. Whereas past studies of beauty and emotion emphasized sad music, here we analyze both happiness and sadness, both subject and object emotion, and both images and music. We conclude that the interactions between emotion and beauty are different for images and music and are strongly moderated by duration.


Subject(s)
Music , Humans , Music/psychology , Emotions , Happiness , Linear Models , Time Factors
5.
J Vis ; 23(8): 6, 2023 08 01.
Article in English | MEDLINE | ID: mdl-37540179

ABSTRACT

Crowding is the failure to recognize an object due to surrounding clutter. Our visual crowding survey measured 13 crowding distances (or "critical spacings") twice in each of 50 observers. The survey includes three eccentricities (0, 5, and 10 deg), four cardinal meridians, two orientations (radial and tangential), and two fonts (Sloan and Pelli). The survey also tested foveal acuity, twice. Remarkably, fitting a two-parameter model (the well-known Bouma law, where crowding distance grows linearly with eccentricity) explains 82% of the variance for all 13 × 50 measured log crowding distances, cross-validated. An enhanced Bouma law, with factors for meridian, crowding orientation, target kind, and observer, explains 94% of the variance, again cross-validated. These additional factors reveal several asymmetries, consistent with previous reports, which can be expressed as crowding-distance ratios: 0.62 horizontal:vertical, 0.79 lower:upper, 0.78 right:left, 0.55 tangential:radial, and 0.78 Sloan-font:Pelli-font. Across our observers, peripheral crowding is independent of foveal crowding and acuity. Evaluation of the Bouma factor, b (the slope of the Bouma law), as a biomarker of visual health would be easier if there were a way to compare results across crowding studies that use different methods. We define a standardized Bouma factor b' that corrects for differences from Bouma's 25 choice alternatives, 75% threshold criterion, and linearly symmetric flanker placement. For radial crowding on the right meridian, the standardized Bouma factor b' is 0.24 for this study, 0.35 for Bouma (1970), and 0.30 for the geometric mean across five representative modern studies, including this one, showing good agreement across labs, including Bouma's. Simulations, confirmed by data, show that peeking can skew estimates of crowding (e.g., greatly decreasing the mean or doubling the SD of log b). Using gaze tracking to prevent peeking, individual differences are robust, as evidenced by the much larger 0.08 SD of log b across observers than the mere 0.03 test-retest SD of log b measured in half an hour. The ease of measurement of crowding enhances its promise as a biomarker for dyslexia and visual health.
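In its simplest two-parameter form, the Bouma law described above is a straight-line fit of crowding distance against eccentricity, s = b(φ + φ0). A minimal sketch of that fit follows; the eccentricities and crowding distances are invented placeholders, not data from the study.

```python
import numpy as np

# Bouma law: crowding distance s grows linearly with eccentricity phi,
#   s = b * (phi + phi0),
# where b is the Bouma factor (slope) and phi0 absorbs the foveal intercept.
# The values below are invented placeholders for illustration only.
eccentricity_deg = np.array([0.0, 5.0, 10.0])
crowding_distance_deg = np.array([0.2, 1.5, 2.8])

# Ordinary least squares for slope b and intercept a in s = b*phi + a.
b, a = np.polyfit(eccentricity_deg, crowding_distance_deg, 1)
phi0 = a / b  # rewrite the intercept so that s = b * (phi + phi0)

print(f"Bouma factor b = {b:.2f}, phi0 = {phi0:.2f} deg")
```

With real survey data, b would be estimated per observer (here the placeholder points happen to lie exactly on a line, so the fit is exact).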


Subject(s)
Dyslexia , Pattern Recognition, Visual , Humans , Complement Factor B , Crowding
6.
bioRxiv ; 2023 Jul 18.
Article in English | MEDLINE | ID: mdl-37503301

ABSTRACT

Online methods allow testing of larger, more diverse populations, with much less effort than in-lab testing. However, many psychophysical measurements, including visual crowding, require accurate eye fixation, which is classically achieved by testing only experienced observers who have learned to fixate reliably, or by using a gaze tracker to restrict testing to moments when fixation is accurate. Alas, both approaches are impractical online since online observers tend to be inexperienced, and online gaze tracking, using the built-in webcam, has a low precision (±4 deg, Papoutsaki et al., 2016). The EasyEyes open-source software reliably measures peripheral thresholds online with accurate fixation achieved in a novel way, without gaze tracking. EasyEyes tells observers to use the cursor to track a moving crosshair. At a random time during successful tracking, a brief target is presented in the periphery. The observer responds by identifying the target. To evaluate EasyEyes fixation accuracy and thresholds, we tested 12 naive observers in three ways in a counterbalanced order: first, in the lab, using gaze-contingent stimulus presentation (Kurzawski et al., 2023; Pelli et al., 2016); second, in the lab, using EasyEyes while independently monitoring gaze; third, online at home, using EasyEyes. We find that crowding thresholds are consistent (no significant differences in mean and variance of thresholds across ways) and individual differences are conserved. The small root mean square (RMS) fixation error (0.6 deg) during target presentation eliminates the need for gaze tracking. Thus, EasyEyes enables fixation-dependent measurements online, for easy testing of larger and more diverse populations.

7.
J Vis ; 23(7): 6, 2023 Jul 03.
Article in English | MEDLINE | ID: mdl-37410492

ABSTRACT

Information theory (bits) allows comparing beauty judgment to perceptual judgment on the same absolute scale. In one of the most influential articles in psychology, Miller (1956) found that classifying a stimulus into one of eight or more categories of the attribute transmits roughly 2.6 bits of information. That corresponds to 7 ± 2 categories. This number is both remarkably small and highly conserved across attributes and sensory modalities. This appears to be a signature of one-dimensional perceptual judgment. We wondered whether beauty can break this limit. Beauty judgments matter and play a key role in many of our real-life decisions, large and small. Mutual information is how much information about one variable can be obtained from observing another. We measured the mutual information of 50 participants' beauty ratings of everyday images. The mutual information saturated at 2.3 bits. We also replicated the results using different images. The 2.3 bits conveyed by beauty judgment are close to Miller's 2.6 bits of unidimensional perceptual judgment and far less than the 5 to 14 bits of a multidimensional perceptual judgment. By this measure, beauty judgment acts like a perceptual judgment, such as rating pitch, hue, or loudness.
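The mutual information measure used above can be illustrated with a generic plug-in estimator over a joint count table. This is only a sketch with made-up counts, not the authors' analysis code.

```python
import numpy as np

def mutual_information_bits(joint_counts):
    """Plug-in estimate of mutual information I(X;Y) in bits
    from a 2-D table of joint counts (rows: X, columns: Y)."""
    p = joint_counts / joint_counts.sum()      # joint probabilities
    px = p.sum(axis=1, keepdims=True)          # marginal over rows
    py = p.sum(axis=0, keepdims=True)          # marginal over columns
    nz = p > 0                                 # skip zero cells to avoid log(0)
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

# Made-up example: 4 stimuli, each always given its own distinct rating,
# so the rating transmits log2(4) = 2 bits about stimulus identity.
counts = np.eye(4) * 10
print(mutual_information_bits(counts))  # -> 2.0
```

If ratings were independent of the stimulus, the estimate would be 0 bits; the 2.3-bit saturation reported above sits between these extremes.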


Subject(s)
Beauty , Judgment , Humans
8.
Atten Percept Psychophys ; 85(4): 1355-1373, 2023 May.
Article in English | MEDLINE | ID: mdl-36918510

ABSTRACT

Recall memory and sequential dependence threaten the independence of successive beauty ratings. Such independence is usually assumed when using repeated measures to estimate the intrinsic variance of a rating. We call "intrinsic" the variance of all possible responses that the participant could give on a trial. Variance arises within and across participants. In attributing the measured variance to sources, the first step is to assess how much is intrinsic. In seven experiments, we measure how much of the variability across beauty ratings can be attributed to recall memory and sequential dependence. With a set size of one, memory is a problem and contributes half the measured variance. However, we show that for both beauty and ellipticity, with a set size of nine or more, recall memory causes a mere 10% increase in the variance of repeated ratings. Moreover, we show that as long as the stimuli are diverse (i.e., represent different object categories), sequential dependence does not affect the variance of beauty ratings. Lastly, the variance of beauty ratings increases in proportion to the 0.15 power of stimulus set size. We show that the beauty rating of a stimulus in a diverse set is affected by the stimulus set size and not the value of other stimuli. Overall, we conclude that the variance of repeated ratings is a good way to estimate the intrinsic variance of a beauty rating of a stimulus in a diverse set.
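The reported set-size scaling (variance growing as the 0.15 power of set size) implies only a modest increase even for a much larger set. As an illustrative calculation using just the exponent from the abstract:

```python
# Variance scaling reported above: variance proportional to set_size ** 0.15.
# Relative variance going from a 1-image set to a 9-image set
# (illustrative arithmetic only; the 0.15 exponent is from the abstract).
ratio = 9 ** 0.15
print(round(ratio, 2))  # -> 1.39
```

So a ninefold increase in set size raises rating variance by roughly 40%, consistent with the abstract's claim that repeated ratings remain a usable estimate of intrinsic variance for diverse sets.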


Subject(s)
Judgment , Mental Recall , Humans , Judgment/physiology , Research Design
9.
Curr Biol ; 32(8): R378-R379, 2022 04 25.
Article in English | MEDLINE | ID: mdl-35472429

ABSTRACT

Beauty judgments, at least in part, determine what we wear, where we eat, and who we swipe right on Tinder. However, beauty judgments vary greatly across individuals, and a new study highlights the importance of assessing these individual differences.


Subject(s)
Beauty , Judgment , Esthetics , Humans
10.
Sci Rep ; 11(1): 23540, 2021 12 07.
Article in English | MEDLINE | ID: mdl-34876580

ABSTRACT

Sensory cortical mechanisms combine auditory or visual features into perceived objects. This is difficult in noisy or cluttered environments. Knowing that individuals vary greatly in their susceptibility to clutter, we wondered whether there might be a relation between an individual's auditory and visual susceptibilities to clutter. In auditory masking, background sound makes spoken words unrecognizable. When masking arises due to interference at central auditory processing stages, beyond the cochlea, it is called informational masking. A strikingly similar phenomenon in vision, called visual crowding, occurs when nearby clutter makes a target object unrecognizable, despite being resolved at the retina. Here we compare susceptibilities to auditory informational masking and visual crowding in the same participants. Surprisingly, across participants, we find a negative correlation (R = -0.7) between susceptibility to informational masking and crowding: participants who have low susceptibility to auditory clutter tend to have high susceptibility to visual clutter, and vice versa. This reveals a tradeoff in the brain between auditory and visual processing.


Subject(s)
Auditory Perception/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Adult , Attention/physiology , Brain/physiology , Crowding , Female , Humans , Male , Noise , Perceptual Masking/physiology , Young Adult
11.
Neuroimage ; 244: 118609, 2021 12 01.
Article in English | MEDLINE | ID: mdl-34582948

ABSTRACT

Population receptive field (pRF) models fit to fMRI data are used to non-invasively measure retinotopic maps in human visual cortex, and these maps are a fundamental component of visual neuroscience experiments. Here, we examined the reproducibility of retinotopic maps across two datasets: a newly acquired retinotopy dataset from New York University (NYU) (n = 44) and a public dataset from the Human Connectome Project (HCP) (n = 181). Our goal was to assess the degree to which pRF properties are similar across datasets, despite substantial differences in their experimental protocols. The two datasets simultaneously differ in their stimulus apertures, participant pool, fMRI protocol, MRI field strength, and preprocessing pipeline. We assessed the cross-dataset reproducibility of the two datasets in terms of the similarity of vertex-wise pRF estimates and in terms of large-scale polar angle asymmetries in cortical magnification. Within V1, V2, V3, and hV4, the group-median NYU and HCP vertex-wise polar angle estimates were nearly identical. Both eccentricity and pRF size estimates were also strongly correlated between the two datasets, but with a slope different from 1; the eccentricity and pRF size estimates were systematically greater in the NYU data. Next, to compare large-scale map properties, we quantified two polar angle asymmetries in V1 cortical magnification previously identified in the HCP data. The NYU dataset confirms earlier reports that more cortical surface area represents the horizontal than the vertical visual field meridian, and the lower than the upper vertical visual field meridian. Together, our findings show that the retinotopic properties of V1, V2, V3, and hV4 can be reliably measured across two datasets, despite numerous differences in their experimental design. fMRI-derived retinotopic maps are reproducible because they rely on an explicit computational model of the fMRI response. In the case of pRF mapping, the model is grounded in physiological evidence of how visual receptive fields are organized, allowing one to quantitatively characterize the BOLD signal in terms of stimulus properties (i.e., location and size). The new NYU Retinotopy Dataset will serve as a useful benchmark for testing hypotheses about the organization of visual areas and for comparison to the HCP 7T Retinotopy Dataset.
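The pRF forward model that such analyses rest on predicts each voxel's response from the overlap between a 2-D Gaussian receptive field and the stimulus aperture. The sketch below is a minimal illustration of that idea; the grid, aperture, and pRF parameters are all invented, and this is not the pipeline used in either dataset.

```python
import numpy as np

def prf_response(aperture, xs, ys, x0, y0, sigma):
    """Predicted response of one voxel: overlap of a binary stimulus
    aperture with a 2-D Gaussian pRF centered at (x0, y0) with size
    sigma (all coordinates in degrees of visual angle)."""
    rf = np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * sigma ** 2))
    rf /= rf.sum()  # normalize so a full-field stimulus yields a response of 1
    return float((rf * aperture).sum())

# Invented 101x101 grid spanning +/-10 deg of visual angle.
coords = np.linspace(-10, 10, 101)
xs, ys = np.meshgrid(coords, coords)

# Aperture covering the right visual hemifield.
aperture = (xs > 0).astype(float)

# A pRF centered in the right hemifield responds far more than one on the left.
right = prf_response(aperture, xs, ys, 5.0, 0.0, 1.0)
left = prf_response(aperture, xs, ys, -5.0, 0.0, 1.0)
print(right > left)  # -> True
```

Fitting a pRF means searching over (x0, y0, sigma) so that such predicted responses, convolved with a hemodynamic response function, best match a voxel's measured BOLD time series.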


Subject(s)
Visual Cortex/diagnostic imaging , Adult , Computer Simulation , Connectome , Female , Humans , Magnetic Resonance Imaging/methods , Male , Motivation , New York , Reproducibility of Results , Visual Fields/physiology
12.
Acta Psychol (Amst) ; 219: 103365, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34246875

ABSTRACT

Many philosophers and psychologists have made claims about what is felt in an experience of beauty. Here, we test how well these claims match the feelings that people report while looking at an image, listening to music, or recalling a personal experience of beauty. We conducted ten experiments (total n = 851) spanning three nations (US, UK, and India). Across nations and modalities, top-rated beauty experiences are strongly characterized by six dimensions: intense pleasure, an impression of universality, the wish to continue the experience, exceeding expectation, perceived harmony in variety, and meaningfulness. Other frequently proposed beauty characteristics, like surprise, desire to understand, and mind wandering, are uncorrelated with feeling beauty. A typical remembered beautiful experience was active and social, like a family holiday; it hardly ever mentioned beauty and only rarely mentioned art, unlike the academic emphasis, in aesthetics, on solitary viewing of art. Our survey aligns well with Kant and with psychological theories that emphasize pleasure, and rejects theories that emphasize information seeking.


Subject(s)
Beauty , Music , Emotions , Esthetics , Humans , Pleasure
13.
Atten Percept Psychophys ; 83(3): 1179-1188, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33205370

ABSTRACT

How many pleasures can you track? In a previous study, we showed that people can simultaneously track the pleasure they experience from two images. Here, we push further, probing the individual and combined pleasures felt from seeing four images in one glimpse. Participants (N = 25) viewed 36 images spanning the entire range of pleasure. Each trial presented an array of four images, one in each quadrant of the screen, for 200 ms. On 80% of the trials, a central line cue pointed, randomly, at some screen corner either before (precue) or after (postcue) the images were shown. The cue indicated which image (the target) to rate while ignoring the others (distractors). On the other 20% of trials, an X cue requested a rating of the combined pleasure of all four images. Later, for baseline reference, we obtained a single-pleasure rating for each image shown alone. When precued, participants faithfully reported the pleasure of the target. When postcued, however, the mean ratings of images that are intensely pleasurable when seen alone (pleasure >4.5 on a 1-9 scale) dropped below baseline. Regardless of cue timing, the rating of the combined pleasure of four images was a linear transform of the average baseline pleasures of all four images. Thus, while people can faithfully track two pleasures, they cannot track four. Instead, the pleasure of otherwise above-medium-pleasure images is diminished, mimicking the effect of a distracting task.


Subject(s)
Emotions , Pleasure , Humans
14.
Atten Percept Psychophys ; 83(3): 1189, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33300104
15.
Psychon Bull Rev ; 27(2): 330-340, 2020 Apr.
Article in English | MEDLINE | ID: mdl-31898260

ABSTRACT

Can people track several pleasures? In everyday life, pleasing stimuli rarely appear in isolation. Yet, experiments on aesthetic pleasure usually present only one image at a time. Here, we ask whether people can reliably report the pleasure of either of two images seen in a single glimpse. Participants (N = 13 in the original; +25 in the preregistered replication) viewed 36 Open Affective Standardized Image Set (OASIS) images that span the entire range of pleasure and beauty. On each trial, the observer saw two images, side by side, for 200 ms. An arrow cue pointed, randomly, left, right, or bidirectionally. Left or right indicated which image (the target) to rate while ignoring the other (the distractor); bidirectional requested rating the combined pleasure of both images. In half the blocks, the cue came before the images (precuing). Otherwise, it came after (postcuing). Precuing allowed the observer to ignore the distractor, while postcuing demanded tracking both images. Finally, we obtained single-pleasure ratings for each image shown alone. Our replication confirms the original study. People have unbiased access to their felt pleasure from each image and the average of both. Furthermore, the variance of the observer's report is similar whether reporting the pleasure of one image or the average pleasure of two. The undiminished variance for reports of the average pleasure of two images indicates either that the underlying pleasure variances are highly correlated, or, more likely, that the variance arises in the common reporting process. In brief, observers can faithfully track at least two visual pleasures.


Subject(s)
Pattern Recognition, Visual/physiology , Pleasure/physiology , Adult , Beauty , Esthetics , Female , Humans , Male , Young Adult
16.
Front Psychol ; 10: 2420, 2019.
Article in English | MEDLINE | ID: mdl-31749737

ABSTRACT

At the beginning of psychology, Fechner (1876) claimed that beauty is immediate pleasure, and that an object's pleasure determines its value. In our earlier work, we found that intense pleasure always results in intense beauty. Here, we focus on the inverse: Is intense pleasure necessary for intense beauty? If so, the inability to experience pleasure (anhedonia) should prevent the experience of intense beauty. We asked 757 online participants to rate how intensely they felt beauty from each image. We used 900 OASIS images along with their available valence (pleasure vs. displeasure) and arousal ratings. We then obtained self-reports of anhedonia (TEPS), mood, and depression (PHQ-9). Across images, beauty ratings were closely related to pleasure ratings (r = 0.75), yet unrelated to arousal ratings. Only images with an average pleasure rating above 4 (of a possible 7) often (>10% of the time) achieved beauty averages exceeding the overall median beauty. For normally beautiful images (average rating > 4.5), the beauty ratings were correlated with anhedonia (r ≈ -0.3) and mood (r ≈ 0.3), yet unrelated to depression. Comparing each participant's average beauty rating to the overall median (5.0), none of the most anhedonic participants exceeded the median, whereas 50% of the remaining participants did. Thus, both general and anhedonic results support the claim that intense beauty requires intense pleasure. In addition, follow-up repeated measures showed that shared taste contributed 19% to beauty-rating variance, only one third as much as personal taste (58%). Addressing age-old questions, these results indicate that beauty is a kind of pleasure, and that beauty is more personal than universal, i.e., 1.7 times more correlated with individual than with shared taste.

17.
Vision Res ; 161: 60-62, 2019 08.
Article in English | MEDLINE | ID: mdl-31194983
18.
Neuroimage ; 188: 584-597, 2019 03.
Article in English | MEDLINE | ID: mdl-30543845

ABSTRACT

Neuroaesthetics is a rapidly developing interdisciplinary field of research that aims to understand the neural substrates of aesthetic experience. While understanding aesthetic experience has been an objective of philosophers for centuries, it has only more recently been embraced by neuroscientists. Recent work in neuroaesthetics has revealed that aesthetic experience with static visual art engages visual, reward, and default-mode networks (DMN). Very little is known about the temporal dynamics of these networks during aesthetic appreciation. Previous behavioral and brain-imaging research suggests that critical aspects of aesthetic experience have slow dynamics, taking more than a few seconds, making them amenable to study with fMRI. Here, we identified key aspects of the dynamics of aesthetic experience while viewing art for various durations. In the first few seconds following image onset, activity in the DMN (and in high-level visual and reward regions) was greater for very pleasing images; in the DMN, this activity counteracted a suppressive effect that grew longer and deeper with increasing image duration. In addition, for very pleasing art, the DMN response returned to baseline in a manner time-locked to image offset. Conversely, for non-pleasing art, the timing of this return to baseline was inconsistent. This differential response in the DMN may therefore reflect the internal dynamics of the participant's state: the participant disengages from art-related processing and returns to stimulus-independent thought. These dynamics suggest that the DMN tracks the internal state of a participant during aesthetic experience.


Subject(s)
Beauty , Cerebral Cortex/physiology , Functional Neuroimaging/methods , Nerve Net/physiology , Pattern Recognition, Visual/physiology , Pleasure/physiology , Adult , Cerebral Cortex/diagnostic imaging , Esthetics , Female , Humans , Magnetic Resonance Imaging , Male , Nerve Net/diagnostic imaging , Paintings , Young Adult
19.
J Vis ; 18(13): 2, 2018 12 03.
Article in English | MEDLINE | ID: mdl-30508427

ABSTRACT

Many vision science studies employ machine learning, especially the version called "deep learning." Neuroscientists use machine learning to decode neural responses. Perception scientists try to understand how living organisms recognize objects. To them, deep neural networks offer benchmark accuracies for recognition of learned stimuli. Originally machine learning was inspired by the brain. Today, machine learning is used as a statistical tool to decode brain activity. Tomorrow, deep neural networks might become our best model of brain function. This brief overview of the use of machine learning in biological vision touches on its strengths, weaknesses, milestones, controversies, and current directions. Here, we hope to help vision scientists assess what role machine learning should play in their research.


Subject(s)
Deep Learning , Neural Networks, Computer , Vision, Ocular/physiology , Algorithms , Brain , Humans
20.
Curr Biol ; 28(16): R859-R863, 2018 08 20.
Article in English | MEDLINE | ID: mdl-30130500

ABSTRACT

Our everyday lives are full of aesthetic experiences. We wake up and frown at an overcast sky, or smile at the sight of the sun. Myriad decisions depend on the aesthetic appeal of the available options: which shirt to wear, which route to take to work, or where to eat. Even life-changing decisions, like where to live or who to live with, are partly based on their aesthetic appeal.


Subject(s)
Esthetics , Esthetics/classification , Esthetics/history , Esthetics/psychology , History, 19th Century , History, 20th Century , History, 21st Century , Humans