Results 1 - 20 of 41
1.
Behav Res Methods ; 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39117987

ABSTRACT

This tutorial provides instruction on how to use the eye-tracking technology built into virtual reality (VR) headsets, emphasizing the analysis of head and eye movement data when an observer is situated at the center of an omnidirectional environment. We begin with a brief description of how VR eye movement research differs from previous forms of eye movement research and identify some outstanding gaps in the current literature. We then introduce the basic methodology used to collect VR eye movement data, both in general and with regard to the specific data that we collected to illustrate different analytical approaches. We continue with an introduction to the foundational ideas regarding data analysis in VR, including frames of reference, how to map eye and head position, and event detection. Next, we introduce core head and eye data analyses focused on determining where the head and eyes are directed. We then expand on these analyses, introducing several novel spatial, spatio-temporal, and temporal head-eye data analysis techniques. We conclude with a reflection on what has been presented and on how the techniques introduced in this tutorial provide the scaffolding for extensions to more complex and dynamic VR environments.
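
As an illustration of the frames-of-reference and mapping steps described above, here is a minimal sketch (not the tutorial's own code) of how an eye-in-head gaze vector reported by a headset might be rotated into world coordinates and expressed as spherical azimuth/elevation angles. The axis convention, function names, and example rotation are assumptions.

```python
import numpy as np

def gaze_to_world(head_rotation, gaze_in_head):
    """Rotate an eye-in-head gaze vector into world coordinates.

    head_rotation : (3, 3) rotation matrix mapping head-frame vectors to
                    world-frame vectors (as reported by the headset).
    gaze_in_head  : (3,) unit gaze direction in the head frame.
    """
    g = head_rotation @ gaze_in_head
    return g / np.linalg.norm(g)

def to_spherical(v):
    """Convert a world-frame unit vector to (azimuth, elevation) in degrees.

    Assumed convention: x = right, y = up, z = forward; azimuth is rotation
    about the vertical axis, elevation is pitch above the horizon.
    """
    x, y, z = v
    azimuth = np.degrees(np.arctan2(x, z))
    elevation = np.degrees(np.arcsin(np.clip(y, -1.0, 1.0)))
    return azimuth, elevation

# Example: head yawed 30 degrees to the left, eyes straight ahead in the head.
yaw = np.radians(-30)
R = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
              [0.0, 1.0, 0.0],
              [-np.sin(yaw), 0.0, np.cos(yaw)]])
print(to_spherical(gaze_to_world(R, np.array([0.0, 0.0, 1.0]))))  # ~(-30.0, 0.0)
```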

2.
Perception ; 53(4): 287-290, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38173337

ABSTRACT

Shaking hands is a fundamental form of social interaction. The current study used high-definition cameras during a university graduation ceremony to examine the temporal sequencing of eye contact and shaking hands. Analyses revealed that mutual gaze always preceded shaking hands. A follow-up investigation manipulated gaze when shaking hands and found that participants take significantly longer to accept a handshake when an outstretched hand precedes eye contact. These findings demonstrate that the timing between a person's gaze and their offer to shake hands is critical to how their action is interpreted.


Subject(s)
Attention, Social Interaction, Humans, Eye, Eye Movements, Eye Movement Measurements, Ocular Fixation
3.
J Cogn ; 6(1): 51, 2023.
Article in English | MEDLINE | ID: mdl-37663138

ABSTRACT

When we imagine a picture, we move our eyes even though the picture is physically not present. These eye movements provide information about the ongoing process of mental imagery. Eye movements unfold over time, and previous research has shown that the temporal dynamics of gaze in mental imagery have unique properties, unrelated to those in perception. In mental imagery, refixations of previously fixated locations happen more often and in a more systematic manner than in perception. The origin of these unique properties remains unclear. We tested how the temporal structure of eye movements is influenced by the complexity of the mental image. Participants briefly saw and then maintained a pattern stimulus consisting of one (easy condition) to four black segments (most difficult condition). When maintaining a simple pattern in imagery, participants restricted their gaze to a narrow area; for more complex stimuli, eye movements spread out to more distant areas. At the same time, fewer refixations were made in imagery when the stimuli were complex. The results show that refixations depend on the imagined content. While fixations of stimulus-related areas reflect the so-called 'looking at nothing' effect, gaze restriction emphasizes differences between mental imagery and perception.

4.
PLoS One ; 18(2): e0282030, 2023.
Article in English | MEDLINE | ID: mdl-36800398

ABSTRACT

One approach to studying the recognition of scenes and objects relies on the comparison of eye movement patterns during encoding and recognition. Past studies typically analyzed the perception of flat stimuli of limited extent presented on a computer monitor that did not require head movements. In contrast, participants in the present study saw omnidirectional panoramic scenes through an immersive 3D virtual reality viewer, and they could move their head freely to inspect different parts of the visual scenes. This allowed us to examine how unconstrained observers use their head and eyes to encode and recognize visual scenes. By studying head and eye movement within a fully immersive environment and applying cross-recurrence analysis, we found that eye movements are strongly influenced by the content of the visual environment, as are head movements, though to a much lesser degree. Moreover, we found that the head and eyes are linked, with the head supporting, and by and large mirroring, the movements of the eyes, consistent with the notion that the head operates to support the acquisition of visual information by the eyes.
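
For readers unfamiliar with cross-recurrence analysis of gaze, the sketch below (an illustration under assumed units and thresholds, not the study's analysis code) builds a binary cross-recurrence matrix between an encoding and a recognition fixation sequence: a pair of fixations counts as recurrent when they fall within a chosen radius, and the cross-recurrence rate summarizes how much the two scanpaths visit the same locations.

```python
import numpy as np

def cross_recurrence(fix_a, fix_b, radius):
    """Cross-recurrence between two fixation sequences.

    fix_a, fix_b : (N, 2) and (M, 2) fixation positions, e.g., azimuth and
                   elevation in degrees (Euclidean distance here is a
                   simplification of a great-circle measure).
    radius       : distance below which two fixations count as recurrent.
    """
    d = np.linalg.norm(fix_a[:, None, :] - fix_b[None, :, :], axis=-1)
    crm = (d <= radius).astype(int)
    rate = 100.0 * crm.sum() / crm.size
    return crm, rate

# Toy example with made-up fixations and a 3-degree radius.
encoding = np.array([[0, 0], [10, 5], [20, -5], [10, 5]], dtype=float)
recognition = np.array([[1, 1], [10, 4], [19, -6]], dtype=float)
matrix, rate = cross_recurrence(encoding, recognition, radius=3.0)
print(matrix, rate)
```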


Subject(s)
Head Movements, Virtual Reality, Humans, Eye Movements, Recognition (Psychology), Photic Stimulation/methods
5.
Curr Top Behav Neurosci ; 65: 73-100, 2023.
Article in English | MEDLINE | ID: mdl-36710302

ABSTRACT

This chapter explores the current state of the art in eye tracking within 3D virtual environments. It begins with the motivation for eye tracking in virtual reality (VR) in psychological research, followed by descriptions of the hardware and software used for presenting virtual environments and for tracking eye and head movements in VR. Next comes a detailed description of an example project on eye and head tracking while observers look at 360° panoramic scenes. The example is illustrated with descriptions of the user interface and program excerpts to show the measurement of eye and head movements in VR. The chapter continues with the fundamentals of data analysis, in particular methods for the determination of fixations and saccades when viewing spherical displays. We then extend these methodological considerations to determining the spatial and temporal coordination of the eyes and head in VR perception. The chapter concludes with a discussion of outstanding problems and future directions for conducting eye- and head-tracking research in VR. We hope that this chapter will serve as a primer for those intending to implement VR eye tracking in their own research.
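
As a concrete example of the fixation/saccade determination step, the following is a minimal velocity-threshold sketch under stated assumptions (unit gaze vectors in world coordinates, a known sampling rate, and an arbitrary 100 deg/s threshold); it is not the chapter's implementation, which readers should consult for the full procedure.

```python
import numpy as np

def angular_velocity(gaze_dirs, sample_rate):
    """Angular velocity (deg/s) between successive gaze direction samples.

    gaze_dirs   : (N, 3) array of unit gaze vectors in world coordinates.
    sample_rate : sampling frequency in Hz.
    """
    dots = np.clip(np.sum(gaze_dirs[:-1] * gaze_dirs[1:], axis=1), -1.0, 1.0)
    angles = np.degrees(np.arccos(dots))   # great-circle angle per sample step
    return angles * sample_rate

def detect_saccades(gaze_dirs, sample_rate, velocity_threshold=100.0):
    """Label each sample as saccade (True) or non-saccade (False).

    A velocity-threshold sketch; the threshold is an assumption and would
    need tuning (and noise filtering) for real headset data.
    """
    labels = np.zeros(len(gaze_dirs), dtype=bool)
    labels[1:] = angular_velocity(gaze_dirs, sample_rate) > velocity_threshold
    return labels
```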


Subject(s)
Eye-Tracking Technology, Virtual Reality, Saccades
6.
PLoS One ; 15(8): e0236131, 2020.
Article in English | MEDLINE | ID: mdl-32756560

ABSTRACT

A student's ability to learn effectively in a classroom setting is subject to many factors. While some factors are difficult to regulate, this study explores two factors that a student, or instructor, has full control over, namely 1) seating position and 2) computer usage. Both factors have been studied considerably with regard to their effects on student performance, and the results indicate that sitting further from the instructor, or using a computer in the classroom, is related to a decline in grade performance. However, it is unclear whether the choice of where to sit and whether or not to use a computer in class are mediated by the same cognitive process. If they are, then we would expect to see an interaction between the factors, such that, for example, computer usage would most negatively impact the grades of students who sit near the back of a class. This study aims to answer this question by looking at the individual and combined effects of seating position and computer usage on classroom performance. We sampled 1364 students, collecting nearly 3000 total responses across 5 different introductory psychology courses with 4 different instructors on 3 separate occasions. In agreement with previous research, we found that sitting further from the instructor negatively impacted students' grades (by 0.75 percentage points per row), as did using a computer in class (by 3.88 percentage points). Our novel finding is that these deleterious effects combined in an additive manner, such that using a computer had the same harmful effect on grade performance regardless of whether the student sat at the front or back of the classroom.
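
The additive (no-interaction) pattern reported above can be written as a simple linear model. The sketch below uses the reported effect sizes with an assumed baseline grade, purely to illustrate what 'additive' means here; it is not a model fitted to the study's data.

```python
def predicted_grade(row, uses_computer, baseline=80.0):
    """Additive model of the two reported effects (no interaction term).

    row           : seating row counted from the front (0 = front row).
    uses_computer : True if the student uses a computer in class.
    baseline      : assumed grade for a front-row student without a computer.
    """
    return baseline - 0.75 * row - 3.88 * uses_computer

# The computer-use penalty is the same at the front and at the back of the room:
print(predicted_grade(0, True) - predicted_grade(0, False))  # ~ -3.88
print(predicted_grade(8, True) - predicted_grade(8, False))  # ~ -3.88
```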


Subject(s)
Academic Performance, Teaching, Computers, Humans, Learning, Students, Universities
7.
J Vis ; 20(7): 23, 2020 07 01.
Article in English | MEDLINE | ID: mdl-32692829

ABSTRACT

How do we explore the visual environment around us, and how are head and eye movements coordinated during our exploration? To investigate this question, we had observers look at omnidirectional panoramic scenes, composed of both landscape and fractal images, using a virtual reality viewer while their eye and head movements were tracked. We analyzed the spatial distribution of eye fixations, the distribution of saccade directions, the spatial distribution of head positions, and the distribution of head shifts, as well as the relation between eye and head movements. The results show that, for landscape scenes, eye and head behavior best fit the allocentric frame defined by the scene horizon, especially when head tilt (i.e., head rotation around the view axis) is considered. For fractal scenes, which have an isotropic texture, eye and head movements were executed primarily along the cardinal directions in world coordinates. The results also show that eye and head movements are closely linked in space and time in a complementary way, with stimulus-driven eye movements predominantly leading the head movements. Our study is the first to systematically examine eye and head movements in a panoramic virtual reality environment, and the results demonstrate that a virtual reality environment constitutes a powerful and informative research alternative to traditional methods for investigating looking behavior.
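
To show how a directional bias in saccades can be quantified, here is a small sketch (an illustration with assumed inputs, not the study's analysis pipeline) that converts an ordered fixation sequence into saccade vectors and bins their directions into a histogram; treating azimuth/elevation as planar coordinates is only an approximation for small saccades.

```python
import numpy as np

def saccade_direction_histogram(fixations, n_bins=36):
    """Distribution of saccade directions from an ordered fixation sequence.

    fixations : (N, 2) fixation positions, e.g., (azimuth, elevation) in
                degrees; successive rows define successive saccades.
    n_bins    : number of angular bins (36 bins = 10-degree resolution).
    """
    vectors = np.diff(fixations, axis=0)
    directions = np.degrees(np.arctan2(vectors[:, 1], vectors[:, 0])) % 360.0
    return np.histogram(directions, bins=n_bins, range=(0.0, 360.0))

# A horizontal bias shows up as mass near 0 and 180 degrees in the histogram.
```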


Subject(s)
Eye Movements/physiology, Head Movements/physiology, Spatial Processing/physiology, Adolescent, Adult, Female, Ocular Fixation/physiology, Humans, Male, Spatial Orientation, Spatial Navigation/physiology, Visual Fields/physiology, Young Adult
8.
J Vis ; 20(8): 21, 2020 Aug 03.
Article in English | MEDLINE | ID: mdl-38755788

ABSTRACT

Research investigating gaze in natural scenes has identified a number of spatial biases in where people look, but it is unclear whether these are partly due to constrained testing environments (e.g., a participant with their head restrained and looking at a landscape image framed within a computer monitor). We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influence eye and head movements in virtual reality (VR). Both the eyes and head were tracked while observers looked at natural scenes in a virtual environment. In line with previous work, we found a bias for saccade directions parallel to the image horizon, regardless of image shape or content. We found that, when allowed to do so, observers move both their eyes and head to explore images. Head rotation, however, was idiosyncratic; some observers rotated a lot, whereas others did not. Interestingly, the head rotated in line with the rotation of landscape but not fractal images. That head rotation and gaze direction respond differently to image content suggests that they may be under different control systems. We discuss our findings in relation to current theories on head and eye movement control and how insights from VR might inform more traditional eye-tracking studies.

9.
J Oral Rehabil ; 46(6): 518-525, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30725489

ABSTRACT

BACKGROUND: Lingual exercises are commonly used in clinical practice for swallowing rehabilitation. Associating lingual exercises with computer games increases motivation, which influences tongue motor performance. OBJECTIVE: To investigate the effects of tongue movement direction, resistance force level, repetition number, sustained tongue contraction duration, age, and gender on tongue motor performance in healthy adults using computer games. METHODS: An observational pilot study was carried out at a university laboratory. Nine healthy adults, aged 22 to 38 years, used an intra-oral joystick controlled by the tongue to play four computer games. The participants had to reach 12 targets that appeared on the computer screen using the intra-oral joystick. Motor performance was measured by the number of attempts to score and the time during which the target force was maintained. Tongue motor performance was compared across tongue movement direction, resistance force level, game round number, continuous force application time on the target, age, and gender. RESULTS: The number of attempts depended significantly on direction, continuous force application time on the target, and age. The time during which the target force was maintained depended significantly on direction, continuous force application time on the target, and game round number. There were no significant differences in the comparisons by gender or by resistance force level. CONCLUSIONS: Young adults performed best in the downward direction, on the third round, when holding the force for a shorter time. Performance deteriorated as age increased.


Subject(s)
Deglutition Disorders, Exercise Therapy, Video Games, Adult, Deglutition, Deglutition Disorders/rehabilitation, Humans, Pilot Projects, Tongue, Young Adult
10.
J Vis ; 19(1): 17, 2019 01 02.
Article in English | MEDLINE | ID: mdl-30699229

ABSTRACT

Several studies have demonstrated similarities between eye fixations during mental imagery and visual perception, but, to our knowledge, the temporal characteristics of eye movements during imagery have not yet been considered in detail. To fill this gap, the same data are analyzed with conventional spatial techniques, such as analysis of areas of interest (AOI), ScanMatch, and MultiMatch, and with recurrence quantification analysis (RQA), a new way of analyzing gaze data by tracking refixations and their temporal dynamics. Participants viewed and afterwards imagined three different kinds of pictures (art, faces, and landscapes) while their eye movements were recorded. While fixation locations during imagery were related to those during perception, participants returned more often to areas they had previously looked at during imagery, and their scan paths were more clustered and more repetitive than in visual perception. Furthermore, refixations of the same area occurred sooner after the initial fixation during mental imagery. The results not only highlight content-driven spatial similarities between imagery and perception but also shed light on the processes of mental imagery maintenance and on interindividual differences in these processes.
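
A brief sketch of the core of RQA for a single fixation sequence may help here (an illustration with assumed units and thresholds, not the study's code): two fixations count as a recurrence, i.e., a refixation, when they fall within a chosen radius, and the recurrence rate is the percentage of fixation pairs that recur.

```python
import numpy as np

def recurrence_matrix(fixations, radius):
    """Binary recurrence matrix for a single fixation sequence.

    fixations : (N, 2) fixation positions (N >= 2 assumed).
    radius    : distance below which two fixations count as a refixation.
    """
    d = np.linalg.norm(fixations[:, None, :] - fixations[None, :, :], axis=-1)
    return (d <= radius).astype(int)

def recurrence_rate(rec):
    """Percentage of recurrent fixation pairs (upper triangle, i < j)."""
    n = rec.shape[0]
    upper = rec[np.triu_indices(n, k=1)]
    return 100.0 * upper.sum() / upper.size
```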


Subject(s)
Eye Movements/physiology, Ocular Fixation/physiology, Imagination/physiology, Mental Recall/physiology, Visual Perception/physiology, Adult, Statistical Data Interpretation, Female, Humans, Male, Young Adult
11.
J Eye Mov Res ; 12(7)2019 Nov 25.
Article in English | MEDLINE | ID: mdl-33828771

ABSTRACT

We examined the extent to which image shape (square vs. circle), image rotation, and image content (landscapes vs. fractal images) influenced eye and head movements. Both the eyes and head were tracked while observers looked at natural scenes in a virtual reality (VR) environment. In line with previous work, we found a horizontal bias in saccade directions, but this was affected by both the image shape and its content. Interestingly, when viewing landscapes (but not fractals), observers rotated their head in line with the image rotation, presumably to make saccades in cardinal, rather than oblique, directions. We discuss our findings in relation to current theories on eye movement control and how insights from VR might inform traditional eye-tracking studies. Part 2: Observers looked at panoramic, 360-degree scenes using VR goggles while eye and head movements were tracked. Fixations were determined using IDT (Salvucci & Goldberg, 2000) adapted to a spherical coordinate system. We then analyzed a) the spatial distribution of fixations and the distribution of saccade directions, b) the spatial distribution of head positions and the distribution of head movements, and c) the relation between gaze and head movements. We found that, for landscape scenes, gaze and head best fit the allocentric frame defined by the scene horizon, especially when taking head tilt (i.e., head rotation around the view axis) into account. For fractal scenes, which are isotropic on average, the bias toward a body-centric frame is weak for gaze and strong for the head. Furthermore, our data show that eye and head movements are closely linked in space and time in stereotypical ways, with volitional eye movements predominantly leading the head. We discuss our results in terms of models of visual exploratory behavior in panoramic scenes, in both virtual and real environments. Video stream: https://vimeo.com/356859979. Production and publication of the video stream was sponsored by SCIANS Ltd, http://www.scians.ch/.
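
The adaptation of IDT to spherical data can be sketched as follows: a generic dispersion-threshold procedure in which the planar dispersion measure of the original algorithm is replaced by the maximum pairwise angular separation of unit gaze vectors within the window. The threshold values are arbitrary placeholders, and this is not the authors' exact implementation.

```python
import numpy as np

def max_angular_dispersion(window):
    """Largest pairwise great-circle angle (deg) among unit gaze vectors."""
    dots = np.clip(window @ window.T, -1.0, 1.0)
    return np.degrees(np.arccos(dots)).max()

def idt_spherical(gaze_dirs, dispersion_threshold=1.5, min_samples=10):
    """Dispersion-threshold (I-DT style) fixation detection on spherical gaze data.

    gaze_dirs : (N, 3) unit gaze vectors.
    Returns a list of (start_index, end_index) fixation intervals, end exclusive.
    """
    fixations, start, n = [], 0, len(gaze_dirs)
    while start + min_samples <= n:
        end = start + min_samples
        if max_angular_dispersion(gaze_dirs[start:end]) > dispersion_threshold:
            start += 1            # window too dispersed: slide it forward
            continue
        while end < n and max_angular_dispersion(gaze_dirs[start:end + 1]) <= dispersion_threshold:
            end += 1              # grow the window while it stays compact
        fixations.append((start, end))
        start = end
    return fixations
```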

12.
Atten Percept Psychophys ; 80(1): 21-26, 2018 Jan.
Article in English | MEDLINE | ID: mdl-29134577

ABSTRACT

Does theory of mind play a significant role in where people choose to hide an item or where they search for an item that has been hidden? Adapting the "Hide-Find Paradigm" of Anderson et al. (Action, Perception and Performance, 76, 907-913, 2014), participants viewed homogeneous or popout visual arrays on a touchscreen table. Their task was to indicate where in the array they would hide an item, or to search for an item that had been hidden, by either a friend or a foe. Critically, participants believed that their sitting location at the touchscreen table was the same as, or opposite to, their partner's location. Replicating Anderson et al., participants tended to (1) select items nearer to themselves on homogeneous displays, and this bias was stronger for a friend than a foe; and (2) select popout items, again more for a friend than a foe. These biases were observed only when participants believed that they shared the same physical perspective as their partner. Collectively, the data indicate that theory of mind plays a significant role in hiding and finding, and they demonstrate that the hide-find paradigm is a powerful tool for investigating theory of mind in adults.


Subject(s)
Exploratory Behavior/physiology, Space Perception, Spatial Behavior/physiology, Theory of Mind, Adolescent, Adult, Female, Friends/psychology, Humans, Interpersonal Relations, Male, Young Adult
13.
Behav Res Methods ; 47(4): 1377-1392, 2015 Dec.
Article in English | MEDLINE | ID: mdl-25540126

ABSTRACT

Interest has flourished in studying both the spatial and temporal aspects of eye movement behavior. This has sparked the development of a large number of new methods to compare scanpaths. Here, we present a detailed overview of common scanpath comparison measures. Each of these measures was developed to solve a specific problem, but quantifies different aspects of scanpath behavior and requires different data-processing techniques. To understand these differences, we applied each scanpath comparison method to data from an encoding and recognition experiment and compared their ability to reveal scanpath similarities within and between individuals looking at natural scenes. Results are discussed in terms of the unique aspects of scanpath behavior that the different methods quantify. We conclude by making recommendations for choosing an appropriate scanpath comparison measure.
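
As one concrete point of reference, the simplest family of scanpath comparisons recodes each fixation as an area-of-interest (AOI) label and compares the resulting strings by edit distance. The sketch below illustrates only that basic idea; it is not any specific measure from the overview, and the AOI labels are assumed.

```python
def levenshtein(a, b):
    """Edit distance between two AOI-label sequences (e.g., 'ABCAD' vs 'ABCD')."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def scanpath_similarity(a, b):
    """Normalized similarity in [0, 1]; 1 means identical AOI sequences."""
    if not a and not b:
        return 1.0
    return 1.0 - levenshtein(a, b) / max(len(a), len(b))

print(scanpath_similarity("ABCAD", "ABCD"))  # 0.8
```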


Subject(s)
Eye Movements/physiology, Psychomotor Performance/physiology, Statistical Data Interpretation, Ocular Fixation, Humans, Photic Stimulation
14.
J Vis ; 14(9)2014 Aug 11.
Article in English | MEDLINE | ID: mdl-25113020

ABSTRACT

Recent research has begun to explore not just the spatial distribution of eye fixations but also the temporal dynamics of how we look at the world. In this investigation, we assess how scene characteristics contribute to these fixation dynamics. In a free-viewing task, participants viewed three scene types: fractal, landscape, and social scenes. We used a relatively new method, recurrence quantification analysis (RQA), to quantify eye movement dynamics. RQA revealed that eye movement dynamics depended on the scene type viewed. To understand the underlying cause of these differences, we applied a technique known as fractal analysis and discovered that complexity and clutter are two scene characteristics that affect fixation dynamics, but only in scenes with meaningful content. Critically, scene primitives, as revealed by saliency analysis, had no impact on performance. In addition, we explored how RQA measures change from the first half of the trial to the second half, as well as the potential to investigate the precision of fixation targeting by varying RQA radius values. Collectively, our results suggest that eye movement dynamics result from top-down viewing strategies that vary according to the meaning of a scene and its associated visual complexity and clutter.
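
Fractal analysis of scene complexity is commonly approximated by a box-counting estimate of fractal dimension. The sketch below is a generic version of that idea applied to a binary feature (e.g., edge) map; it illustrates the kind of measure involved and is not the study's specific procedure.

```python
import numpy as np

def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
    """Estimate fractal dimension of a binary (e.g., edge) image by box counting.

    binary_image : 2-D boolean array (True where a feature is present); every
                   box size is assumed to capture at least one feature.
    Returns the slope of log(box count) against log(1 / box size).
    """
    counts = []
    for size in box_sizes:
        h, w = binary_image.shape
        trimmed = binary_image[:h - h % size, :w - w % size]  # tile evenly
        blocks = trimmed.reshape(trimmed.shape[0] // size, size,
                                 trimmed.shape[1] // size, size)
        counts.append(blocks.any(axis=(1, 3)).sum())          # occupied boxes
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

# A completely filled image has dimension ~2; sparse edge maps fall between 1 and 2.
print(box_counting_dimension(np.ones((128, 128), dtype=bool)))  # ~2.0
```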


Subject(s)
Eye Movements/physiology, Ocular Fixation/physiology, Visual Pattern Recognition/physiology, Fractals, Humans
15.
PLoS One ; 9(3): e92696, 2014.
Article in English | MEDLINE | ID: mdl-24671136

ABSTRACT

The aim of the present study was to test the cognitive ethology approach, which seeks to link cognitions and behaviours as they operate in everyday life with those studied in controlled lab-based investigations. Our test bed was the understanding of first-person and third-person perspectives, which in lab-based investigations have been defined in a diverse and multi-faceted manner. We hypothesized that because these lab-based investigations seek to connect with how first- and third-person perspective operates in everyday life, either some of the divergent lab-based definitions are missing their mark or the everyday conceptualization of first- and third-person perspective is multi-faceted. Our investigation revealed the latter. By applying a cognitive ethology approach we were able to determine that a) people's everyday understanding of perspective is diverse yet reliable, and b) a lab-based investigation that applies these diverse understandings in a controlled setting can accurately predict how people will perform. These findings provide a 'proof of concept' for the cognitive ethology approach. Moreover, the present data demonstrate that previous lab-based studies, which often had very different understandings of first- and third-person perspective, were each in and of themselves valid. That is, each captures part of a broader understanding of perspective in everyday life. Our results also revealed a novel social factor not included in traditional conceptualizations of first- and third-person perspective, that of eye gaze: eye contact is equated strongly with first-person perspective and the lack of eye contact with third-person perspective.


Subject(s)
Cognition/physiology, Ethology, Adolescent, Adult, Female, Humans, Male, Middle Aged, Task Performance and Analysis, Young Adult
16.
Sci Rep ; 3: 2356, 2013.
Article in English | MEDLINE | ID: mdl-23912766

ABSTRACT

Recent studies have found that participants consistently look less at social stimuli in live situations than expected from conventional laboratory experiments, raising questions as to the cause for this discrepancy and concerns about the validity of typical studies. We tested the possibility that it is the consequences of a potential social interaction that dictates one's looking behaviour. By placing participants in a situation where the social consequences of interacting are congruent with social norms (sharing a meal), we find an increased preference for participants to look at each other. Dyads who were particularly interactive also looked more at the other person than dyads who did not interact. Recent landmark studies have shown that in real world settings people avoid looking at strangers, but we show that in a situation with a different social context the opposite holds true.


Subject(s)
Attention/physiology, Choice Behavior/physiology, Feeding Behavior/physiology, Interpersonal Relations, Social Behavior, Visual Perception/physiology, Humans
17.
J Exp Psychol Hum Percept Perform ; 39(5): 1218-23, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23750971

ABSTRACT

Current theory suggests that interpersonal synchrony is an important social behavior in that it not only serves as a form of "social glue," but it also arises automatically in a social context. Theorists suggest potential mechanisms for interpersonal synchrony, ranging from a "low-level" social-perceptual system account to a "high-level" social-motivational explanation. Past studies that suggest synchrony can be influenced by social factors do not discriminate between these accounts. The current investigation seeks to isolate the effect of the high-level social system on interpersonal synchrony by investigating the effects of spatial proximity on unintentional coordinated tapping between two naïve participants. Dyads performed a synchronization-continuation task either in the same room, in different rooms, or in different rooms but with the ability to hear each other tap. Participant taps were represented by a box that flashed on the monitor to control visual information across all three conditions. Same-room dyads had increased coordination over different-room dyads, whereas dyads that shared audio but were in different rooms showed an intermediate level of coordination. The present study demonstrates that shared space, independent of perceptual differences in stimuli, can increase unintentional coordinated tapping.


Subject(s)
Interpersonal Relations, Perception/physiology, Psychomotor Performance/physiology, Social Facilitation, Adult, Auditory Perception/physiology, Humans, Social Perception, Time Factors, Time Perception/physiology, Visual Perception/physiology
18.
Cogn Neuropsychol ; 30(1): 25-40, 2013.
Article in English | MEDLINE | ID: mdl-23537050

ABSTRACT

Simultanagnosia is a disorder of visual attention resulting from bilateral parieto-occipital lesions. Healthy individuals look at eyes to infer people's attentional states, but simultanagnosics allocate abnormally few fixations to eyes in scenes. It is unclear why simultanagnosics fail to fixate eyes; it might reflect that they (a) are unable to locate and fixate them, or (b) do not prioritize attentional states. We compared the eye movements of simultanagnosic G.B. to those of healthy subjects viewing scenes normally or through a restricted window of vision. They described scenes and explicitly inferred the attentional states of people in the scenes. G.B. and subjects viewing scenes through a restricted window made few fixations on eyes when describing scenes, yet increased fixations on eyes when inferring attention. Thus, G.B. understands that eyes are important for inferring attentional states and can exert top-down control to seek out and process the gaze of others when attentional states are of interest.


Subject(s)
Agnosia/complications, Attention Deficit Disorder with Hyperactivity/complications, Social Perception, Visual Fields/physiology, Adult, Agnosia/etiology, Agnosia/psychology, Attention Deficit Disorder with Hyperactivity/etiology, Attention Deficit Disorder with Hyperactivity/psychology, Ocular Fixation/physiology, Humans, Magnetic Resonance Imaging, Male, Neuropsychological Tests, Photic Stimulation, Posterior Leukoencephalopathy Syndrome/complications
19.
Behav Res Methods ; 45(3): 842-56, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23344735

ABSTRACT

Recurrence quantification analysis (RQA) has been successfully used for describing dynamic systems that are too complex to be characterized adequately by standard methods in time series analysis. More recently, RQA has been used for analyzing the coordination of gaze patterns between cooperating individuals. Here, we extend RQA to the characterization of fixation sequences, and we show that the global and local temporal characteristics of fixation sequences can be captured by a small number of RQA measures that have a clear interpretation in this context. We applied RQA to the analysis of a study in which observers looked at different scenes under natural or gaze-contingent viewing conditions, and we found large differences in the RQA measures between the viewing conditions, indicating that RQA is a powerful new tool for the analysis of the temporal patterns of eye movement behavior.
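
To give a sense of the RQA measures referred to here, the sketch below computes determinism, commonly defined as the share of recurrent fixation pairs that fall on diagonal lines of the recurrence matrix (i.e., repeated subsequences of the scanpath). It operates on a binary recurrence matrix like the one sketched under entry 10; the definition follows common usage rather than being copied from the paper.

```python
import numpy as np

def determinism(rec, min_length=2):
    """Percentage of recurrent points (i < j) lying on diagonal lines of at
    least `min_length` consecutive recurrences.

    rec : binary (N, N) recurrence matrix of a fixation sequence.
    """
    n = rec.shape[0]
    recurrent = 0
    on_lines = 0
    for offset in range(1, n):                 # diagonals above the main one
        diag = np.diagonal(rec, offset=offset)
        recurrent += diag.sum()
        run = 0
        for value in np.append(diag, 0):       # trailing 0 flushes the last run
            if value:
                run += 1
            else:
                if run >= min_length:
                    on_lines += run
                run = 0
    return 100.0 * on_lines / recurrent if recurrent else 0.0
```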


Subject(s)
Eye Movements, Ocular Fixation, Confidence Intervals, Statistical Data Interpretation, Humans, Psychological Models, Reaction Time, Reproducibility of Results
20.
Comput Aided Surg ; 17(6): 269-83, 2012.
Article in English | MEDLINE | ID: mdl-23098188

ABSTRACT

Surgical techniques are becoming more complex and require substantial training to master. The development of automated, objective methods to analyze and evaluate surgical skill is necessary to provide trainees with reliable and accurate feedback during their training programs. We present a system to capture, visualize, and analyze the movements of a laparoscopic surgeon for the purposes of skill evaluation. The system records the upper-body movement of the surgeon, the position and orientation of the instruments, and the force and torque applied to the instruments. An empirical study was conducted using the system to record the performances of a number of surgeons with a wide range of skill. The study validated the usefulness of the system and demonstrated the accuracy of the measurements.


Subject(s)
Clinical Competence, Laparoscopy/education, Computer-Assisted Surgery/education, User-Computer Interface, Graduate Medical Education/methods, Feedback, Female, Humans, Internship and Residency, Male, Surgical Instruments, Systems Analysis, Task Performance and Analysis, Video Recording