Results 1 - 20 of 25
1.
Soc Cogn Affect Neurosci ; 19(1), 2024 May 27.
Article in English | MEDLINE | ID: mdl-38722755

ABSTRACT

The social world is dynamic and contextually embedded. Yet, most studies utilize simple stimuli that do not capture the complexity of everyday social episodes. To address this, we implemented a movie viewing paradigm and investigated how everyday social episodes are processed in the brain. Participants watched one of two movies during an MRI scan. Neural patterns from brain regions involved in social perception, mentalization, action observation and sensory processing were extracted. Representational similarity analysis results revealed that several labeled social features (including social interaction, mentalization, the actions of others, characters talking about themselves, talking about others and talking about objects) were represented in the superior temporal gyrus (STG) and middle temporal gyrus (MTG). The mentalization feature was also represented throughout the theory of mind network, and characters talking about others engaged the temporoparietal junction (TPJ), suggesting that listeners may spontaneously infer the mental state of those being talked about. In contrast, we did not observe action representations in the frontoparietal regions of the action observation network. The current findings indicate that STG and MTG serve as key regions for social processing, and that listening to characters talk about others elicits spontaneous mental state inference in TPJ during natural movie viewing.


Subject(s)
Brain Mapping; Brain; Magnetic Resonance Imaging; Motion Pictures; Social Perception; Theory of Mind; Humans; Female; Male; Magnetic Resonance Imaging/methods; Young Adult; Brain/physiology; Brain/diagnostic imaging; Adult; Theory of Mind/physiology; Mentalization/physiology; Photic Stimulation/methods
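
The core analysis named above, representational similarity analysis, compares a model dissimilarity matrix built from labeled features with a neural dissimilarity matrix from a region's response patterns. A minimal sketch on synthetic data follows; the shapes, the binary social-interaction label, and the ROI are illustrative assumptions, not the study's materials.

```python
# Minimal RSA sketch on synthetic data (not the study's pipeline).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 200, 500        # hypothetical movie TRs x voxels in one ROI

# Hypothetical binary annotation per timepoint (1 = social interaction on screen)
social_interaction = rng.integers(0, 2, n_timepoints)
neural_patterns = rng.standard_normal((n_timepoints, n_voxels))  # e.g., STG patterns

# Neural RDM (correlation distance) and model RDM (0 = same label, 1 = different)
neural_rdm = pdist(neural_patterns, metric="correlation")
model_rdm = pdist(social_interaction[:, None], metric="cityblock")

rho, p = spearmanr(neural_rdm, model_rdm)   # rank-correlate the vectorized RDMs
print(f"model-neural RDM correlation: rho={rho:.3f}, p={p:.3g}")
```
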
4.
Neuropsychologia ; 196: 108823, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38346576

ABSTRACT

Recognizing and remembering social information is a crucial cognitive skill. Neural patterns in the superior temporal sulcus (STS) support our ability to perceive others' social interactions. However, despite the prominence of social interactions in memory, the neural basis of remembering social interactions is still unknown. To fill this gap, we investigated the brain mechanisms underlying memory of others' social interactions during free spoken recall of a naturalistic movie. By applying machine learning-based fMRI encoding analyses to densely labeled movie and recall data, we found that a subset of the STS activity evoked by viewing social interactions predicted neural responses in not only held-out movie data, but also during memory recall. These results provide the first evidence that activity in the STS is reinstated in response to specific social content and that its reactivation underlies our ability to remember others' interactions. These findings further suggest that the STS contains representations of social interactions that are not only perceptually driven, but also more abstract or conceptual in nature.


Subject(s)
Social Interaction; Temporal Lobe; Humans; Temporal Lobe/diagnostic imaging; Temporal Lobe/physiology; Brain/physiology; Memory/physiology; Brain Mapping; Magnetic Resonance Imaging
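
The encoding logic described above can be sketched as regularized regression from labeled stimulus features to voxel responses, scored by how well it predicts held-out data (a held-out movie segment, or recall, in the study's terms). Everything below is synthetic, and the dimensions are assumptions.

```python
# Voxel-wise encoding sketch: fit on one split, predict another (synthetic data).
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)
n_train, n_test, n_features, n_voxels = 300, 100, 12, 400

X_train = rng.standard_normal((n_train, n_features))  # labeled movie features
X_test = rng.standard_normal((n_test, n_features))    # held-out (e.g., recall) labels
W = rng.standard_normal((n_features, n_voxels))       # synthetic ground-truth weights
Y_train = X_train @ W + rng.standard_normal((n_train, n_voxels))
Y_test = X_test @ W + rng.standard_normal((n_test, n_voxels))

model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Per-voxel correlation between predicted and observed held-out responses
r = [np.corrcoef(Y_pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"median held-out prediction r = {np.median(r):.3f}")
```
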
5.
PLoS Comput Biol ; 20(2): e1011887, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38408105

ABSTRACT

Despite decades of research, much is still unknown about the computations carried out in the human face processing network. Recently, deep networks have been proposed as a computational account of human visual processing, but while they provide a good match to neural data throughout visual cortex, they lack interpretability. We introduce a method for interpreting brain activity using a new class of deep generative models, disentangled representation learning models, which learn a low-dimensional latent space that "disentangles" different semantically meaningful dimensions of faces, such as rotation, lighting, or hairstyle, in an unsupervised manner by enforcing statistical independence between dimensions. We find that the majority of our model's learned latent dimensions are interpretable by human raters. Further, these latent dimensions serve as a good encoding model for human fMRI data. We next investigate the representation of different latent dimensions across face-selective voxels. We find that low- and high-level face features are represented in posterior and anterior face-selective regions, respectively, corroborating prior models of human face recognition. Interestingly, though, we find identity-relevant and irrelevant face features across the face processing network. Finally, we provide new insight into the few "entangled" (uninterpretable) dimensions in our model by showing that they match responses in the ventral stream and carry information about facial identity. Disentangled face encoding models provide an exciting alternative to standard "black box" deep learning approaches for modeling and interpreting human brain data.


Subject(s)
Facial Recognition; Visual Cortex; Humans; Facial Recognition/physiology; Brain/physiology; Visual Cortex/physiology; Brain Mapping; Magnetic Resonance Imaging/methods
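
For orientation, a beta-VAE-style objective is one common recipe for the disentangled representation learning this abstract refers to: a reconstruction term plus an up-weighted KL penalty that pressures latent dimensions toward statistical independence. The toy PyTorch sketch below (linear encoder/decoder, random pixels) illustrates that objective only; it is not the paper's architecture, data, or hyperparameters.

```python
# Toy beta-VAE objective (a generic disentangling recipe, not the paper's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyBetaVAE(nn.Module):
    def __init__(self, n_pixels=64 * 64, n_latents=32):
        super().__init__()
        self.encoder = nn.Linear(n_pixels, 2 * n_latents)  # outputs mean and log-var
        self.decoder = nn.Linear(n_latents, n_pixels)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_hat, mu, logvar, beta=4.0):
    # beta > 1 strengthens the independence pressure on the latent dimensions
    recon = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

model = TinyBetaVAE()
x = torch.rand(8, 64 * 64)            # a batch of flattened "face images" (random)
loss = beta_vae_loss(x, *model(x))
loss.backward()
print(f"loss = {loss.item():.1f}")
```
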
6.
ArXiv ; 2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38259351

ABSTRACT

Vision is widely understood as an inference problem. However, two contrasting conceptions of the inference process have each been influential in research on biological vision as well as the engineering of machine vision. The first emphasizes bottom-up signal flow, describing vision as a largely feedforward, discriminative inference process that filters and transforms the visual information to remove irrelevant variation and represent behaviorally relevant information in a format suitable for downstream functions of cognition and behavioral control. In this conception, vision is driven by the sensory data, and perception is direct because the processing proceeds from the data to the latent variables of interest. The notion of "inference" in this conception is that of the engineering literature on neural networks, where feedforward convolutional neural networks processing images are said to perform inference. The alternative conception is that of vision as an inference process in Helmholtz's sense, where the sensory evidence is evaluated in the context of a generative model of the causal processes that give rise to it. In this conception, vision inverts a generative model through an interrogation of the sensory evidence in a process often thought to involve top-down predictions of sensory data to evaluate the likelihood of alternative hypotheses. The authors of this piece include, in roughly equal numbers, scientists rooted in each of the two conceptions, motivated to overcome what might be a false dichotomy between them and to engage the other perspective in the realm of theory and experiment. The primate brain employs an unknown algorithm that may combine the advantages of both conceptions. We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends the dichotomy and sets the stage for revealing the mysterious hybrid algorithm of primate vision.

7.
Trends Cogn Sci ; 28(3): 195-196, 2024 03.
Article in English | MEDLINE | ID: mdl-38296745
8.
Nat Commun ; 14(1): 7317, 2023 11 11.
Article in English | MEDLINE | ID: mdl-37951960

ABSTRACT

Humans effortlessly recognize social interactions from visual input. Attempts to model this ability have typically relied on generative inverse planning models, which make predictions by inverting a generative model of agents' interactions based on their inferred goals, suggesting humans use a similar process of mental inference to recognize interactions. However, growing behavioral and neuroscience evidence suggests that recognizing social interactions is a visual process, separate from complex mental state inference. Yet despite their success in other domains, visual neural network models have been unable to reproduce human-like interaction recognition. We hypothesize that humans rely on relational visual information in particular, and develop a relational, graph neural network model, SocialGNN. Unlike prior models, SocialGNN accurately predicts human interaction judgments across both animated and natural videos. These results suggest that humans can make complex social interaction judgments without an explicit model of the social and physical world, and that structured, relational visual representations are key to this behavior.


Subject(s)
Recognition, Psychology; Social Interaction; Humans; Judgment; Neural Networks, Computer
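
The relational intuition behind a model like SocialGNN can be caricatured in a few lines: agents are graph nodes carrying visual features, and recognition operates on messages passed along the edges that relate them. The single message-passing step below uses random placeholder weights and is only loosely inspired by that family of models, not SocialGNN's actual code.

```python
# One schematic graph message-passing step over a two-agent scene (placeholder data).
import numpy as np

rng = np.random.default_rng(2)
n_agents, d = 2, 16
node_feats = rng.standard_normal((n_agents, d))   # per-agent visual features
adjacency = np.array([[0.0, 1.0], [1.0, 0.0]])    # each agent attends to the other

W_self, W_msg = rng.standard_normal((2, d, d))    # stand-ins for learned weights

def message_passing(h, adj):
    messages = adj @ h @ W_msg        # aggregate neighbors' transformed states
    return np.tanh(h @ W_self + messages)

h = message_passing(node_feats, adjacency)
graph_embedding = h.mean(axis=0)      # a pooled readout could feed a classifier
print(graph_embedding.shape)          # -> (16,)
```
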
9.
Curr Biol ; 33(23): 5035-5047.e8, 2023 12 04.
Article in English | MEDLINE | ID: mdl-37918399

ABSTRACT

Recent theoretical work has argued that in addition to the classical ventral (what) and dorsal (where/how) visual streams, there is a third visual stream on the lateral surface of the brain specialized for processing social information. Like visual representations in the ventral and dorsal streams, representations in the lateral stream are thought to be hierarchically organized. However, no prior studies have comprehensively investigated the organization of naturalistic, social visual content in the lateral stream. To address this question, we curated a naturalistic stimulus set of 250 3-s videos of two people engaged in everyday actions. Each clip was richly annotated for its low-level visual features, mid-level scene and object properties, visual social primitives (including the distance between people and the extent to which they were facing each other), and high-level information about social interactions and affective content. Using a condition-rich fMRI experiment and a within-subject encoding model approach, we found that low-level visual features are represented in early visual cortex (EVC) and middle temporal (MT) area, mid-level visual social features in extrastriate body area (EBA) and lateral occipital complex (LOC), and high-level social interaction information along the superior temporal sulcus (STS). Communicative interactions, in particular, explained unique variance in regions of the STS after accounting for variance explained by all other labeled features. Taken together, these results provide support for representation of increasingly abstract social visual content, consistent with hierarchical organization, along the lateral visual stream, and suggest that recognizing communicative actions may be a key computational goal of the lateral visual pathway.


Subject(s)
Visual Cortex; Humans; Visual Pathways; Pattern Recognition, Visual; Temporal Lobe; Brain; Magnetic Resonance Imaging/methods; Brain Mapping/methods; Photic Stimulation/methods
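
The "unique variance" claim above rests on variance partitioning: the unique contribution of a feature set is the drop in cross-validated R^2 when that set is removed from the full model. Below is a minimal single-voxel sketch on synthetic data, with invented feature groups.

```python
# Variance partitioning sketch: unique R^2 of one feature set (synthetic data).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 250
low_level = rng.standard_normal((n, 5))    # e.g., motion energy, spatial frequency
social = rng.standard_normal((n, 3))       # e.g., communicative-interaction labels
y = low_level @ rng.standard_normal(5) + 2.0 * social @ rng.standard_normal(3) \
    + rng.standard_normal(n)               # one voxel's synthetic response

def cv_r2(X):
    return cross_val_score(Ridge(alpha=1.0), X, y, cv=5, scoring="r2").mean()

r2_full = cv_r2(np.hstack([low_level, social]))
r2_reduced = cv_r2(low_level)              # full model minus the social features
print(f"unique R^2 for social features: {r2_full - r2_reduced:.3f}")
```
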
10.
J Neurosci ; 43(45): 7700-7711, 2023 11 08.
Article in English | MEDLINE | ID: mdl-37871963

ABSTRACT

Seeing social touch triggers a strong social-affective response that involves multiple brain networks, including visual, social perceptual, and somatosensory systems. Previous studies have identified the specific functional role of each system, but little is known about the speed and directionality of the information flow. Is this information extracted via the social perceptual system or via simulation in somatosensory cortex? To address this, we examined the spatiotemporal neural processing of observed touch. Twenty-one human participants (seven males) watched 500-ms video clips showing social and nonsocial touch during electroencephalogram (EEG) recording. Visual and social-affective features were rapidly extracted in the brain, beginning at 90 and 150 ms after video onset, respectively. Combining the EEG data with functional magnetic resonance imaging (fMRI) data from our prior study with the same stimuli reveals that neural information first arises in early visual cortex (EVC), then in the temporoparietal junction and posterior superior temporal sulcus (TPJ/pSTS), and finally in the somatosensory cortex. EVC and TPJ/pSTS uniquely explain EEG neural patterns, while somatosensory cortex does not contribute to EEG patterns alone, suggesting that social-affective information may flow from TPJ/pSTS to somatosensory cortex. Together, these findings show that social touch is processed quickly, within the timeframe of feedforward visual processes, and that the social-affective meaning of touch is first extracted by a social perceptual pathway. Such rapid processing of social touch may be vital to its effective use during social interaction.

SIGNIFICANCE STATEMENT Seeing physical contact between people evokes a strong social-emotional response. Previous research has identified the brain systems responsible for this response, but little is known about how quickly and in what direction the information flows. We demonstrated that the brain processes the social-emotional meaning of observed touch quickly, starting as early as 150 ms after the stimulus onset. By combining electroencephalogram (EEG) data with functional magnetic resonance imaging (fMRI) data, we show for the first time that the social-affective meaning of touch is first extracted by a social perceptual pathway and followed by the later involvement of somatosensory simulation. This rapid processing of touch through the social perceptual route may play a pivotal role in effective use of touch in social communication and interaction.


Subject(s)
Touch Perception; Touch; Humans; Male; Affect/physiology; Brain/physiology; Brain Mapping/methods; Electroencephalography; Magnetic Resonance Imaging; Somatosensory Cortex/diagnostic imaging; Somatosensory Cortex/physiology; Touch/physiology; Touch Perception/physiology; Female
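
One common way to combine EEG and fMRI as described here is RSA-based fusion: correlate the EEG dissimilarity matrix at each timepoint with each fMRI region's dissimilarity matrix, and read the latency of the peak as a hint about when that region's code is active. The sketch below is entirely synthetic; only the ROI names come from the abstract.

```python
# RSA-style EEG-fMRI fusion sketch (synthetic data; ROI names from the abstract).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
n_videos, n_sensors, n_times = 40, 64, 120
eeg = rng.standard_normal((n_videos, n_sensors, n_times))
fmri_rdms = {roi: pdist(rng.standard_normal((n_videos, 300)), "correlation")
             for roi in ["EVC", "TPJ/pSTS", "somatosensory"]}

for roi, roi_rdm in fmri_rdms.items():
    fusion = [spearmanr(pdist(eeg[:, :, t], "correlation"), roi_rdm)[0]
              for t in range(n_times)]
    print(f"{roi}: peak fusion correlation at time bin {int(np.argmax(fusion))}")
```
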
11.
Trends Cogn Sci ; 27(12): 1165-1179, 2023 12.
Article in English | MEDLINE | ID: mdl-37805385

ABSTRACT

Seeing the interactions between other people is a critical part of our everyday visual experience, but recognizing the social interactions of others is often considered outside the scope of vision and grouped with higher-level social cognition like theory of mind. Recent work, however, has revealed that recognition of social interactions is efficient and automatic, is well modeled by bottom-up computational algorithms, and occurs in visually selective regions of the brain. We review recent evidence from these three methodologies (behavioral, computational, and neural) that converge to suggest that the core of social interaction perception is visual. We propose a computational framework for how this process is carried out in the brain and offer directions for future interdisciplinary investigations of social perception.


Subject(s)
Social Interaction; Social Perception; Humans; Brain; Cognition
12.
Sci Rep ; 13(1): 5171, 2023 03 30.
Article in English | MEDLINE | ID: mdl-36997625

ABSTRACT

Understanding actions performed by others requires us to integrate different types of information about people, scenes, objects, and their interactions. What organizing dimensions does the mind use to make sense of this complex action space? To address this question, we collected intuitive similarity judgments across two large-scale sets of naturalistic videos depicting everyday actions. We used cross-validated sparse non-negative matrix factorization to identify the structure underlying action similarity judgments. A low-dimensional representation, consisting of nine to ten dimensions, was sufficient to accurately reconstruct human similarity judgments. The dimensions were robust to stimulus set perturbations and reproducible in a separate odd-one-out experiment. Human labels mapped these dimensions onto semantic axes relating to food, work, and home life; social axes relating to people and emotions; and one visual axis related to scene setting. While highly interpretable, these dimensions did not share a clear one-to-one correspondence with prior hypotheses of action-relevant dimensions. Together, our results reveal a low-dimensional set of robust and interpretable dimensions that organize intuitive action similarity judgments and highlight the importance of data-driven investigations of behavioral representations.


Subject(s)
Pattern Recognition, Visual; Semantics; Humans; Judgment; Emotions; Human Activities
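
The dimensionality-discovery step can be approximated with off-the-shelf non-negative matrix factorization: factor the similarity matrix into a low-rank, non-negative embedding and check how well it reconstructs the judgments. The study used a cross-validated sparse NMF variant; plain sklearn NMF on synthetic similarities stands in here as a simplified illustration.

```python
# Simplified NMF sketch for similarity judgments (the study used sparse, CV'd NMF).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(5)
n_videos, n_dims_true = 60, 9
latent = rng.random((n_videos, n_dims_true))
similarity = latent @ latent.T                  # synthetic pairwise similarities

model = NMF(n_components=9, init="nndsvda", max_iter=2000, random_state=0)
embedding = model.fit_transform(similarity)     # videos x dimensions
reconstruction = embedding @ model.components_

r = np.corrcoef(similarity.ravel(), reconstruction.ravel())[0, 1]
print(f"reconstruction correlation with {model.n_components} dimensions: r={r:.3f}")
```
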
13.
Elife ; 11, 2022 05 24.
Article in English | MEDLINE | ID: mdl-35608254

ABSTRACT

Humans observe actions performed by others in many different visual and social settings. What features do we extract and attend when we view such complex scenes, and how are they processed in the brain? To answer these questions, we curated two large-scale sets of naturalistic videos of everyday actions and estimated their perceived similarity in two behavioral experiments. We normed and quantified a large range of visual, action-related, and social-affective features across the stimulus sets. Using a cross-validated variance partitioning analysis, we found that social-affective features predicted similarity judgments better than, and independently of, visual and action features in both behavioral experiments. Next, we conducted an electroencephalography experiment, which revealed a sustained correlation between neural responses to videos and their behavioral similarity. Visual, action, and social-affective features predicted neural patterns at early, intermediate, and late stages, respectively, during this behaviorally relevant time window. Together, these findings show that social-affective features are important for perceiving naturalistic actions and are extracted at the final stage of a temporal gradient in the brain.


Subject(s)
Brain Mapping; Brain; Brain/physiology; Electroencephalography; Humans; Judgment/physiology; Photic Stimulation; Visual Perception/physiology
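
The temporal-gradient result suggests a simple analysis shape: regress time-resolved neural responses on each feature set and note when each first becomes predictive. The sketch below fabricates staggered signal onsets to mimic the early/intermediate/late structure reported; all numbers are placeholders.

```python
# Temporal-gradient sketch: when does each feature set predict neural responses?
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
n_videos, n_times = 80, 50
onsets = {"visual": 10, "action": 25, "social-affective": 40}  # assumed time bins
features = {name: rng.standard_normal((n_videos, 4)) for name in onsets}
neural = rng.standard_normal((n_videos, n_times))
for name, t0 in onsets.items():            # inject each feature's signal from t0 on
    neural[:, t0:] += (features[name] @ rng.standard_normal(4))[:, None]

for name, X in features.items():
    r2 = [cross_val_score(RidgeCV(), X, neural[:, t], cv=5, scoring="r2").mean()
          for t in range(n_times)]
    first = next((t for t, s in enumerate(r2) if s > 0.1), None)
    print(f"{name}: first predictive time bin = {first}")
```
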
14.
Neuroimage ; 245: 118741, 2021 12 15.
Article in English | MEDLINE | ID: mdl-34800663

ABSTRACT

Recognizing others' social interactions is a crucial human ability. Using simple stimuli, previous studies have shown that social interactions are selectively processed in the superior temporal sulcus (STS), but prior work with movies has suggested that social interactions are processed in the medial prefrontal cortex (mPFC), part of the theory of mind network. It remains unknown to what extent social interaction selectivity is observed in real-world stimuli when controlling for other covarying perceptual and social information, such as faces, voices, and theory of mind. The current study utilizes a functional magnetic resonance imaging (fMRI) movie paradigm and advanced machine learning methods to uncover the brain mechanisms uniquely underlying naturalistic social interaction perception. We analyzed two publicly available fMRI datasets, collected while both male and female human participants (n = 17 and 18) watched two different commercial movies in the MRI scanner. By performing voxel-wise encoding and variance partitioning analyses, we found that broad social-affective features predict neural responses in social brain regions, including the STS and mPFC. However, only the STS showed robust and unique selectivity specifically to social interactions, independent from other covarying features. This selectivity was observed across two separate fMRI datasets. These findings suggest that naturalistic social interaction perception recruits dedicated neural circuitry in the STS, separate from the theory of mind network, and is a critical dimension of human social understanding.


Subject(s)
Brain Mapping/methods; Machine Learning; Magnetic Resonance Imaging; Social Interaction; Temporal Lobe/diagnostic imaging; Temporal Lobe/physiology; Theory of Mind; Adult; Datasets as Topic; Female; Humans; Image Processing, Computer-Assisted; Male; Motion Pictures
15.
Neuroimage ; 215: 116844, 2020 07 15.
Article in English | MEDLINE | ID: mdl-32302763

ABSTRACT

The ability to perceive others' social interactions, here defined as the directed contingent actions between two or more people, is a fundamental part of human experience that develops early in infancy and is shared with other primates. However, the neural computations underlying this ability remain largely unknown. Is social interaction recognition a rapid feedforward process or a slower post-perceptual inference? Here we used magnetoencephalography (MEG) decoding to address this question. Subjects in the MEG viewed snapshots of visually matched real-world scenes containing a pair of people who were either engaged in a social interaction or acting independently. The presence versus absence of a social interaction could be read out from subjects' MEG data spontaneously, even while subjects performed an orthogonal task. This readout generalized across different people and scenes, revealing abstract representations of social interactions in the human brain. These representations, however, did not come online until quite late, at 300 ms after image onset, well after feedforward visual processes. In a second experiment, we found that social interaction readout still occurred at this same late latency even when subjects performed an explicit task detecting social interactions. We further showed that MEG responses distinguished between different types of social interactions (mutual gaze vs joint attention) even later, around 500 ms after image onset. Taken together, these results suggest that the human brain spontaneously extracts information about others' social interactions, but does so slowly, likely relying on iterative top-down computations.


Subject(s)
Brain/physiology; Magnetoencephalography/methods; Reaction Time/physiology; Social Interaction; Social Perception/psychology; Visual Perception/physiology; Adolescent; Adult; Female; Humans; Male; Middle Aged; Photic Stimulation/methods; Young Adult
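
Reading out interaction presence "spontaneously" from MEG comes down to time-resolved decoding: train a classifier at every timepoint and inspect the accuracy time course. Below is a synthetic sketch with a deliberately late-injected signal echoing the reported late onset; trial counts, sensor counts, and timing are all assumptions.

```python
# Time-resolved MEG decoding sketch (synthetic trials; late signal injected for show).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_trials, n_sensors, n_times = 120, 100, 60
labels = rng.integers(0, 2, n_trials)            # interaction present / absent
meg = rng.standard_normal((n_trials, n_sensors, n_times))
meg += 0.3 * labels[:, None, None] * (np.arange(n_times) >= 30)  # late class signal

accuracy = [cross_val_score(LinearDiscriminantAnalysis(), meg[:, :, t], labels,
                            cv=5).mean() for t in range(n_times)]
above = [t for t, a in enumerate(accuracy) if a > 0.6]
print(f"first time bin decoding above 60%: {above[0] if above else 'none'}")
```
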
16.
Nat Commun ; 10(1): 1258, 2019 03 19.
Article in English | MEDLINE | ID: mdl-30890707

ABSTRACT

Within a fraction of a second of viewing a face, we have already determined its gender, age and identity. A full understanding of this remarkable feat will require a characterization of the computational steps it entails, along with the representations extracted at each. Here, we used magnetoencephalography (MEG) to measure the time course of neural responses to faces, thereby addressing two fundamental questions about how face processing unfolds over time. First, using representational similarity analysis, we found that facial gender and age information emerged before identity information, suggesting a coarse-to-fine processing of face dimensions. Second, identity and gender representations of familiar faces were enhanced very early on, suggesting that the behavioral benefit for familiar faces results from tuning of early feed-forward processing mechanisms. These findings start to reveal the time course of face processing in humans, and provide powerful new constraints on computational theories of face perception.


Subject(s)
Brain/physiology; Facial Recognition/physiology; Models, Neurological; Recognition, Psychology/physiology; Adult; Female; Healthy Volunteers; Humans; Magnetoencephalography; Male; Multivariate Analysis; Photic Stimulation; Sex Characteristics; Time Factors
17.
J Neurosci ; 38(40): 8526-8537, 2018 10 03.
Article in English | MEDLINE | ID: mdl-30126975

ABSTRACT

The brain actively represents incoming information, but these representations are only useful to the extent that they flexibly reflect changes in the environment. How does the brain transform representations across changes, such as in size or viewing angle? We conducted an fMRI experiment and a magnetoencephalography experiment in humans (both sexes) in which participants viewed objects before and after affine viewpoint changes (rotation, translation, enlargement). We used a novel approach, representational transformation analysis, to derive transformation functions that linked the distributed patterns of brain activity evoked by an object before and after an affine change. Crucially, transformations derived from one object could predict a postchange representation for novel objects. These results provide evidence of general operations in the brain that are distinct from neural representations evoked by particular objects and scenes.

SIGNIFICANCE STATEMENT The dominant focus in cognitive neuroscience has been on how the brain represents information, but these representations are only useful to the extent that they flexibly reflect changes in the environment. How does the brain transform representations, such as linking two states of an object, for example, before and after an object undergoes a physical change? We used a novel method to derive transformations between the brain activity evoked by an object before and after an affine viewpoint change. We show that transformations derived from one object undergoing a change generalized to a novel object undergoing the same change. This result shows that there are general perceptual operations that transform object representations from one state to another.


Subject(s)
Pattern Recognition, Visual/physiology; Visual Cortex/physiology; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Magnetoencephalography; Male; Photic Stimulation/methods
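
The transformation idea reduces, in its simplest form, to estimating a linear map T from an object's pre-change to post-change activity patterns and testing whether the same T predicts a novel object's post-change pattern. The sketch below is a plain least-squares version on synthetic patterns; the published analysis was richer than this.

```python
# Schematic "representational transformation" fit-and-generalize (synthetic data).
import numpy as np

rng = np.random.default_rng(8)
n_voxels, n_samples = 50, 200
T_true = np.eye(n_voxels) + 0.1 * rng.standard_normal((n_voxels, n_voxels))

pre_A = rng.standard_normal((n_samples, n_voxels))    # object A, noisy measurements
post_A = pre_A @ T_true + 0.05 * rng.standard_normal((n_samples, n_voxels))
pre_B = rng.standard_normal(n_voxels)                 # novel object B, pre-change
post_B = pre_B @ T_true                               # its true post-change pattern

T_hat, *_ = np.linalg.lstsq(pre_A, post_A, rcond=None)  # fit transformation on A
pred_B = pre_B @ T_hat                                  # generalize to object B
print(f"novel-object prediction r = {np.corrcoef(pred_B, post_B)[0, 1]:.3f}")
```
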
18.
Annu Rev Vis Sci ; 4: 403-422, 2018 09 15.
Article in English | MEDLINE | ID: mdl-30052494

ABSTRACT

Recognizing the people, objects, and actions in the world around us is a crucial aspect of human perception that allows us to plan and act in our environment. Remarkably, our proficiency in recognizing semantic categories from visual input is unhindered by transformations that substantially alter their appearance (e.g., changes in lighting or position). The ability to generalize across these complex transformations is a hallmark of human visual intelligence, which has been the focus of wide-ranging investigation in systems and computational neuroscience. However, while the neural machinery of human visual perception has been thoroughly described, the computational principles dictating its functioning remain unknown. Here, we review recent results in brain imaging, neurophysiology, and computational neuroscience in support of the hypothesis that the need to support invariant recognition of semantic entities in the visual world shapes which neural representations of sensory input are computed by human visual cortex.


Subject(s)
Discrimination, Psychological/physiology; Models, Neurological; Recognition, Psychology/physiology; Visual Cortex/physiology; Visual Perception/physiology; Computational Biology; Humans
19.
Neuroimage ; 180(Pt A): 147-159, 2018 10 15.
Article in English | MEDLINE | ID: mdl-28823828

ABSTRACT

The majority of visual recognition studies have focused on the neural responses to repeated presentations of static stimuli with abrupt and well-defined onset and offset times. In contrast, natural vision involves unique renderings of visual inputs that are continuously changing without explicitly defined temporal transitions. Here we considered commercial movies as a coarse proxy for natural vision. We recorded intracranial field potential signals from 1,284 electrodes implanted in 15 patients with epilepsy while the subjects passively viewed commercial movies. We could rapidly detect large changes in the visual inputs within approximately 100 ms of their occurrence, using exclusively field potential signals from ventral visual cortical areas including the inferior temporal gyrus and inferior occipital gyrus. Furthermore, we could decode the content of those visual changes even in a single movie presentation, generalizing across the wide range of transformations present in a movie. These results present a methodological framework for studying cognition during dynamic and natural vision.


Subject(s)
Visual Cortex/physiology; Visual Perception/physiology; Adolescent; Adult; Brain Mapping/methods; Child; Child, Preschool; Drug Resistant Epilepsy/therapy; Electric Stimulation Therapy; Electrodes, Implanted; Evoked Potentials, Visual/physiology; Female; Humans; Male; Motion Pictures; Photic Stimulation; Signal Processing, Computer-Assisted; Young Adult
20.
J Neurophysiol ; 119(2): 631-640, 2018 02 01.
Article in English | MEDLINE | ID: mdl-29118198

ABSTRACT

Humans can effortlessly recognize others' actions in the presence of complex transformations, such as changes in viewpoint. Several studies have located the regions in the brain involved in invariant action recognition; however, the underlying neural computations remain poorly understood. We use magnetoencephalography decoding and a data set of well-controlled, naturalistic videos of five actions (run, walk, jump, eat, drink) performed by different actors at different viewpoints to study the computational steps used to recognize actions across complex transformations. In particular, we ask when the brain discriminates between different actions, and when it does so in a manner that is invariant to changes in 3D viewpoint. We measure the latency difference between invariant and noninvariant action decoding when subjects view full videos as well as form-depleted and motion-depleted stimuli. We were unable to detect a difference in decoding latency or temporal profile between invariant and noninvariant action recognition in full videos. However, when either form or motion information is removed from the stimulus set, we observe a decrease and delay in invariant action decoding. Our results suggest that the brain recognizes actions and builds invariance to complex transformations at the same time and that both form and motion information are crucial for fast, invariant action recognition.

NEW & NOTEWORTHY The human brain can quickly recognize actions despite transformations that change their visual appearance. We use neural timing data to uncover the computations underlying this ability. We find that actions can be read out of magnetoencephalography data within 200 ms and that this representation is invariant to changes in viewpoint. We find form and motion are needed for this fast action decoding, suggesting that the brain quickly integrates complex spatiotemporal features to form invariant action representations.


Subject(s)
Brain/physiology; Motion Perception; Pattern Recognition, Visual; Adult; Female; Humans; Male; Movement; Reaction Time
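
The latency comparison at the heart of this study can be summarized as: estimate a decoding onset as the first timepoint where an accuracy time course clears chance by some margin, separately for within-view (noninvariant) and across-view (invariant) readout. The curves below are fabricated sigmoids standing in for real accuracy traces.

```python
# Decoding-onset sketch for invariant vs. noninvariant readout (fabricated curves).
import numpy as np

rng = np.random.default_rng(9)
times = np.arange(0, 500, 5)             # ms after video onset (assumed grid)

def onset_ms(acc, chance=0.2, margin=0.05):
    above = np.where(acc > chance + margin)[0]
    return int(times[above[0]]) if above.size else None

sigmoid = lambda t0: 1 / (1 + np.exp(-(times - t0) / 20))
noninvariant = 0.2 + 0.5 * sigmoid(180) + 0.01 * rng.standard_normal(times.size)
invariant = 0.2 + 0.4 * sigmoid(185) + 0.01 * rng.standard_normal(times.size)

print(f"noninvariant onset: {onset_ms(noninvariant)} ms")
print(f"invariant onset:    {onset_ms(invariant)} ms")
```
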