Results 1 - 3 of 3

1.
Nature ; 626(8001): 1049-1055, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38355800

ABSTRACT

Each year, people spend less time reading and more time viewing images [1], which are proliferating online [2-4]. Images from platforms such as Google and Wikipedia are downloaded by millions every day [2,5,6], and millions more are interacting through social media, such as Instagram and TikTok, that primarily consist of exchanging visual content. In parallel, news agencies and digital advertisers are increasingly capturing attention online through the use of images [7,8], which people process more quickly, implicitly and memorably than text [9-12]. Here we show that the rise of images online significantly exacerbates gender bias, both in its statistical prevalence and its psychological impact. We examine the gender associations of 3,495 social categories (such as 'nurse' or 'banker') in more than one million images from Google, Wikipedia and the Internet Movie Database (IMDb), and in billions of words from these platforms. We find that gender bias is consistently more prevalent in images than text for both female- and male-typed categories. We also show that the documented underrepresentation of women online [13-18] is substantially worse in images than in text, public opinion and US census data. Finally, we conducted a nationally representative, preregistered experiment that shows that googling for images rather than textual descriptions of occupations amplifies gender bias in participants' beliefs. Addressing the societal effect of this large-scale shift towards visual communication will be essential for developing a fair and inclusive future for the internet.


Subject(s)
Occupations, Photography, Sexism, Social Media, Female, Humans, Male, Occupations/statistics & numerical data, Photography/statistics & numerical data, Photography/trends, Public Opinion, Sexism/prevention & control, Sexism/psychology, Sexism/statistics & numerical data, Sexism/trends, Social Media/statistics & numerical data, Social Change
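The category-level measurement this abstract describes can be illustrated with a simple score: classify the gender presented in each image retrieved for a social category, then compare the share of women to parity. The function below is an illustrative reconstruction, not the paper's exact metric; the function name and the 'f'/'m' labels are hypothetical.

```python
from collections import Counter

def image_gender_bias(labels):
    """Gender association of one social category from per-image
    face-gender labels ('f' or 'm'): 0 = parity, +1 = all female,
    -1 = all male. Illustrative score, not the paper's metric."""
    counts = Counter(labels)
    total = counts["f"] + counts["m"]
    return (counts["f"] - counts["m"]) / total

# A category whose image search results skew male scores negative:
bias = image_gender_bias(["m", "m", "m", "f"])  # -0.5
```

The same score can be computed from text (e.g. from gendered words co-occurring with the category name), letting image-based and text-based bias be compared on one scale.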
2.
Cognition ; 241: 105621, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37716312

ABSTRACT

Deep neural networks (DNNs) are increasingly proposed as models of human vision, bolstered by their impressive performance on image classification and object recognition tasks. Yet, the extent to which DNNs capture fundamental aspects of human vision such as color perception remains unclear. Here, we develop novel experiments for evaluating the perceptual coherence of color embeddings in DNNs, and we assess how well these algorithms predict human color similarity judgments collected via an online survey. We find that state-of-the-art DNN architectures - including convolutional neural networks and vision transformers - provide color similarity judgments that strikingly diverge from human color judgments of (i) images with controlled color properties, (ii) images generated from online searches, and (iii) real-world images from the canonical CIFAR-10 dataset. We compare DNN performance against an interpretable and cognitively plausible model of color perception based on wavelet decomposition, inspired by foundational theories in computational neuroscience. While one deep learning model - a convolutional DNN trained on a style transfer task - captures some aspects of human color perception, our wavelet algorithm provides more coherent color embeddings that better predict human color judgments compared to all DNNs we examine. These results hold when altering the high-level visual task used to train similar DNN architectures (e.g., image classification versus image segmentation), as well as when examining the color embeddings of different layers in a given DNN architecture. These findings break new ground in the effort to analyze the perceptual representations of machine learning algorithms and to improve their ability to serve as cognitively plausible models of human vision. Implications for machine learning, human perception, and embodied cognition are discussed.
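One way to run the comparison this abstract describes is to correlate pairwise similarities computed from a model's color embeddings with human similarity ratings for the same stimulus pairs. The embeddings and ratings below are made-up placeholders, and the rank correlation is hand-rolled so the sketch stays dependency-light (ties are not handled).

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def spearman(a, b):
    # Rank correlation via double argsort (no tie handling;
    # sufficient for this sketch).
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical embeddings for three color stimuli (rows), e.g. taken
# from an intermediate DNN layer, plus human similarity ratings for
# each stimulus pair in the same order.
emb = np.array([[1.0, 0.1], [0.9, 0.2], [0.0, 1.0]])
pairs = [(0, 1), (0, 2), (1, 2)]
model_sim = [cosine(emb[i], emb[j]) for i, j in pairs]
human_sim = [0.95, 0.10, 0.15]  # made-up ratings

rho = spearman(model_sim, human_sim)
```

A higher rank correlation means the model's embedding geometry orders color pairs the way people do; running the same comparison across layers or across differently trained architectures gives the kind of model-vs-human evaluation the abstract reports.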

3.
Cognition ; 201: 104306, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32504912

ABSTRACT

The embodied cognition paradigm has stimulated ongoing debate about whether sensory data - including color - contributes to the semantic structure of abstract concepts. Recent uses of linguistic data in the study of embodied cognition have been focused on textual corpora, which largely precludes the direct analysis of sensory information. Here, we develop an automated approach to multimodal content analysis that detects associations between words based on the color distributions of their Google Image search results. Crucially, we measure color using a transformation of colorspace that closely resembles human color perception. We find that words in the abstract domains of academic disciplines, emotions, and music genres cluster in a statistically significant fashion according to their color distributions. Furthermore, we use the lexical ontology WordNet and crowdsourced human judgments to show that this clustering reflects non-arbitrary semantic structure, consistent with metaphor-based accounts of embodied cognition. In particular, we find that images corresponding to more abstract words exhibit higher variability in colorspace, and semantically similar words have more similar color distributions. Strikingly, we show that color associations often reflect shared affective dimensions between abstract domains, thus revealing patterns of aesthetic coherence in everyday language. We argue that these findings provide a novel way to synthesize metaphor-based and affect-based accounts of embodied semantics.


Subject(s)
Concept Formation, Semantics, Cognition, Emotions, Humans, Language
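A colorspace transformation "that closely resembles human color perception", as the third abstract puts it, is conventionally CIELAB, where Euclidean distance approximates perceived color difference. The snippet below is a standard sRGB-to-CIELAB conversion (D65 white point) in plain NumPy; it is a generic sketch of the technique, not the paper's actual pipeline.

```python
import numpy as np

# sRGB (D65) linear-RGB-to-XYZ matrix and reference white.
_M = np.array([[0.4124, 0.3576, 0.1805],
               [0.2126, 0.7152, 0.0722],
               [0.0193, 0.1192, 0.9505]])
_WHITE = np.array([0.95047, 1.0, 1.08883])

def rgb_to_lab(rgb):
    """Convert one sRGB triple (components in [0, 1]) to CIELAB."""
    rgb = np.asarray(rgb, dtype=float)
    # Undo sRGB gamma encoding.
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)
    xyz = _M @ lin / _WHITE
    # Piecewise cube root used by the CIELAB standard.
    eps = (6 / 29) ** 3
    f = np.where(xyz > eps, np.cbrt(xyz),
                 xyz / (3 * (6 / 29) ** 2) + 4 / 29)
    L = 116 * f[1] - 16
    a = 500 * (f[0] - f[1])
    b = 200 * (f[1] - f[2])
    return np.array([L, a, b])

def delta_e(rgb1, rgb2):
    """Perceptual distance between two sRGB colors (CIE76 Delta E)."""
    return float(np.linalg.norm(rgb_to_lab(rgb1) - rgb_to_lab(rgb2)))

# White maps to L ~ 100 with near-zero chroma; distances in Lab track
# perceived color difference better than raw RGB distances do.
white = rgb_to_lab([1.0, 1.0, 1.0])
```

Once each image's pixels are mapped into this space, a word's "color distribution" can be summarized as a histogram over Lab coordinates, and distributions for different words compared with any histogram distance.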