Results 1 - 20 of 27
1.
J Neurosci ; 43(49): 8504-8514, 2023 12 06.
Article in English | MEDLINE | ID: mdl-37848285

ABSTRACT

Selecting suitable grasps on three-dimensional objects is a challenging visuomotor computation, which involves combining information about an object (e.g., its shape, size, and mass) with information about the actor's body (e.g., the optimal grasp aperture and hand posture for comfortable manipulation). Here, we used functional magnetic resonance imaging to investigate brain networks associated with these distinct aspects during grasp planning and execution. Human participants of either sex viewed and then executed preselected grasps on L-shaped objects made of wood and/or brass. By leveraging a computational approach that accurately predicts human grasp locations, we selected grasp points that disentangled the role of multiple grasp-relevant factors, that is, grasp axis, grasp size, and object mass. Representational Similarity Analysis revealed that grasp axis was encoded along dorsal-stream regions during grasp planning. Grasp size was first encoded in ventral stream areas during grasp planning then in premotor regions during grasp execution. Object mass was encoded in ventral stream and (pre)motor regions only during grasp execution. Premotor regions further encoded visual predictions of grasp comfort, whereas the ventral stream encoded grasp comfort during execution, suggesting its involvement in haptic evaluation. These shifts in neural representations thus capture the sensorimotor transformations that allow humans to grasp objects.

SIGNIFICANCE STATEMENT: Grasping requires integrating object properties with constraints on hand and arm postures. Using a computational approach that accurately predicts human grasp locations by combining such constraints, we selected grasps on objects that disentangled the relative contributions of object mass, grasp size, and grasp axis during grasp planning and execution in a neuroimaging study. Our findings reveal a greater role of dorsal-stream visuomotor areas during grasp planning, and, surprisingly, increasing ventral stream engagement during execution. We propose that during planning, visuomotor representations initially encode grasp axis and size. Perceptual representations of object material properties become more relevant instead as the hand approaches the object and motor programs are refined with estimates of the grip forces required to successfully lift the object.
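
A minimal sketch of the Representational Similarity Analysis step mentioned in the abstract, under generic assumptions: `roi_patterns` (conditions x voxels) and `model_rdm` (a hypothesized dissimilarity matrix, e.g., pairwise differences in grasp axis) are illustrative names, not the study's pipeline.

```python
# Sketch of RSA: build a neural RDM from voxel patterns and correlate its
# upper triangle with a model RDM. Data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import spearmanr

def neural_rdm(roi_patterns):
    """Condition-by-condition dissimilarity (1 - Pearson r) of voxel patterns."""
    return squareform(pdist(roi_patterns, metric="correlation"))

def rsa_score(roi_patterns, model_rdm):
    """Spearman correlation between the upper triangles of neural and model RDMs."""
    rdm = neural_rdm(roi_patterns)
    iu = np.triu_indices_from(rdm, k=1)
    rho, _ = spearmanr(rdm[iu], model_rdm[iu])
    return rho

rng = np.random.default_rng(0)
patterns = rng.standard_normal((12, 500))               # 12 conditions, 500 voxels
model = squareform(pdist(rng.standard_normal((12, 1))))  # toy model RDM
print(rsa_score(patterns, model))
```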


Subjects
Brain, Psychomotor Performance, Humans, Brain Mapping/methods, Hand Strength, Hand
2.
Eur J Neurosci ; 60(1): 3719-3741, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38758670

ABSTRACT

Across vertebrate species, the olfactory epithelium (OE) exhibits the uncommon feature of lifelong neuronal turnover. Epithelial stem cells give rise to new neurons that can adequately replace dying olfactory receptor neurons (ORNs) during developmental and adult phases and after lesions. To relay olfactory information from the environment to the brain, the axons of the renewed ORNs must reconnect with the olfactory bulb (OB). In Xenopus laevis larvae, we have previously shown that this process occurs between 3 and 7 weeks after olfactory nerve (ON) transection. In the present study, we show that after 7 weeks of recovery from ON transection, two functionally and spatially distinct glomerular clusters are reformed in the OB, akin to those found in non-transected larvae. We also show that the same odourant response tuning profiles observed in the OB of non-transected larvae are again present after 7 weeks of recovery. Next, we show that characteristic odour-guided behaviour disappears after ON transection but recovers after 7-9 weeks of recovery. Together, our findings demonstrate that the olfactory system of larval X. laevis regenerates with high accuracy after ON transection, leading to the recovery of odour-guided behaviour.


Subjects
Larva, Olfactory Bulb, Xenopus laevis, Animals, Olfactory Bulb/physiology, Nerve Regeneration/physiology, Odorants, Olfactory Nerve Injuries, Olfactory Nerve/physiology, Olfactory Mucosa/cytology, Olfactory Mucosa/physiology, Smell/physiology, Olfactory Receptor Neurons/physiology
3.
Proc Natl Acad Sci U S A ; 118(32)2021 08 10.
Article in English | MEDLINE | ID: mdl-34349023

ABSTRACT

Sitting in a static railway carriage can produce illusory self-motion if the train on an adjoining track moves off. While our visual system registers motion, vestibular signals indicate that we are stationary. The brain is faced with a difficult challenge: is there a single cause of sensations (I am moving) or two causes (I am static, another train is moving)? If a single cause, integrating signals produces a more precise estimate of self-motion, but if not, one cue should be ignored. In many cases, this process of causal inference works without error, but how does the brain achieve it? Electrophysiological recordings show that the macaque medial superior temporal area contains many neurons that encode combinations of vestibular and visual motion cues. Some respond best to vestibular and visual motion in the same direction ("congruent" neurons), while others prefer opposing directions ("opposite" neurons). Congruent neurons could underlie cue integration, but the function of opposite neurons remains a puzzle. Here, we seek to explain this computational arrangement by training a neural network model to solve causal inference for motion estimation. Like biological systems, the model develops congruent and opposite units and recapitulates known behavioral and neurophysiological observations. We show that all units (both congruent and opposite) contribute to motion estimation. Importantly, however, it is the balance between their activity that distinguishes whether visual and vestibular cues should be integrated or separated. This explains the computational purpose of puzzling neural representations and shows how a relatively simple feedforward network can solve causal inference.
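
A standard formalization of causal inference for cue combination, shown here only as a worked equation for the computation the abstract describes; the symbols (visual cue x_v, vestibular cue x_b, prior probability of a common cause) are generic textbook notation, not taken from the paper's network model.

```latex
% Causal inference for visual (x_v) and vestibular (x_b) self-motion cues.
\begin{align}
P(C{=}1 \mid x_v, x_b) &=
  \frac{P(x_v, x_b \mid C{=}1)\, p_{\text{common}}}
       {P(x_v, x_b \mid C{=}1)\, p_{\text{common}} + P(x_v, x_b \mid C{=}2)\,(1 - p_{\text{common}})} \\
\hat{s}_{\text{combined}} &=
  \frac{x_v/\sigma_v^2 + x_b/\sigma_b^2 + \mu_p/\sigma_p^2}
       {1/\sigma_v^2 + 1/\sigma_b^2 + 1/\sigma_p^2} \\
\hat{s} &= P(C{=}1 \mid x_v, x_b)\,\hat{s}_{\text{combined}}
         + \bigl(1 - P(C{=}1 \mid x_v, x_b)\bigr)\,\hat{s}_{\text{vestibular}}
\end{align}
```

If the cues likely share a single cause, the integrated estimate dominates; otherwise the cues are kept separate, which is the behavior the balance of congruent and opposite units is proposed to implement.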


Subjects
Motion Perception/physiology, Neural Networks, Computer, Sensory Receptor Cells/physiology, Animals, Cues, Macaca mulatta, Photic Stimulation, Temporal Lobe/physiology
4.
PLoS Comput Biol ; 17(6): e1008981, 2021 06.
Article in English | MEDLINE | ID: mdl-34061825

ABSTRACT

Shape is a defining feature of objects, and human observers can effortlessly compare shapes to determine how similar they are. Yet, to date, no image-computable model can predict how visually similar or different shapes appear. Such a model would be an invaluable tool for neuroscientists and could provide insights into computations underlying human shape perception. To address this need, we developed a model ('ShapeComp'), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp accurately predicts human shape similarity judgments between pairs of shapes without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that incorporating multiple ShapeComp dimensions facilitates the prediction of human shape similarity across a small number of shapes, and also captures much of the variance in the multiple arrangements of many shapes. ShapeComp outperforms both conventional pixel-based metrics and state-of-the-art convolutional neural networks, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
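
A rough sketch of a ShapeComp-style similarity measure, assuming a shape is given as a closed 2D contour; the three features computed here (area, compactness, a few Fourier descriptor magnitudes) are only a tiny illustrative subset of the >100 features the abstract describes, and in practice each feature dimension would be normalized before combining.

```python
import numpy as np

def shape_features(contour):
    """contour: (N, 2) array of x, y points along a closed outline."""
    x, y = contour[:, 0], contour[:, 1]
    # Shoelace formula for area; perimeter from wrap-around segment lengths.
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perim = np.sum(np.linalg.norm(np.diff(contour, axis=0, append=contour[:1]), axis=1))
    compactness = 4 * np.pi * area / perim**2
    # Magnitudes of low-order Fourier descriptors of the complex contour.
    z = (x - x.mean()) + 1j * (y - y.mean())
    fd = np.abs(np.fft.fft(z))[1:6]
    fd = fd / fd[0]  # normalize by the first harmonic for scale invariance
    return np.concatenate([[area, compactness], fd])

def shape_dissimilarity(c1, c2):
    """Perceived dissimilarity proxy: distance in the feature space."""
    return np.linalg.norm(shape_features(c1) - shape_features(c2))
```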


Subjects
Computational Biology/methods, Pattern Recognition, Visual, Animals, Humans, Photic Stimulation
5.
J Neurophysiol ; 125(4): 1330-1338, 2021 04 01.
Article in English | MEDLINE | ID: mdl-33596725

ABSTRACT

How humans visually select where to grasp an object depends on many factors, including grasp stability and preferred grasp configuration. We examined how endpoints are selected when these two factors are brought into conflict: Do people favor stable grasps or do they prefer their natural grasp configurations? Participants reached to grasp one of three cuboids oriented so that its two corners were either aligned with, or rotated away from, each individual's natural grasp axis (NGA). All objects were made of brass (mass: 420 g), but the surfaces of their sides were manipulated to alter friction: 1) all-brass, 2) two opposing sides covered with wood, and the other two remained of brass, or 3) two opposing sides covered with sandpaper, and the two remaining brass sides smeared with Vaseline. Grasps were evaluated as either clockwise (thumb to the left of finger in frontal plane) or counterclockwise of the NGA. Grasp endpoints depended on both object orientation and surface material. For the all-brass object, grasps were bimodally distributed in the NGA-aligned condition but predominantly clockwise in the NGA-unaligned condition. These data reflected participants' natural grasp configuration independently of surface material. When grasping objects with different surface materials, endpoint selection changed: Participants sacrificed their usual grasp configuration to choose the more stable object sides. A model in which surface material shifts participants' preferred grip angle proportionally to the perceived friction of the surfaces accounts for our results. Our findings demonstrate that a stable grasp is more important than a biomechanically comfortable grasp configuration.

NEW & NOTEWORTHY: When grasping an object, humans can place their fingers at several positions on its surface. The selection of these endpoints depends on many factors, with two of the most important being grasp stability and grasp configuration. We put these two factors in conflict and examine which is considered more important. Our results highlight that humans are not reluctant to adopt unusual grasp configurations to satisfy grasp stability.
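
A one-line sketch of the kind of model the abstract describes, in which the preferred grip angle is shifted away from the natural grasp axis in proportion to the perceived friction difference between the two candidate surface pairs; the gain k and the friction estimates are illustrative symbols, not the paper's notation.

```latex
\theta_{\text{preferred}} \;=\; \theta_{\text{NGA}} \;+\; k\,\bigl(\hat{\mu}_{\text{pair 1}} - \hat{\mu}_{\text{pair 2}}\bigr)
```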


Subjects
Fingers/physiology, Motor Activity/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Adult, Female, Friction, Humans, Male, Young Adult
6.
PLoS Comput Biol ; 16(8): e1008081, 2020 08.
Article in English | MEDLINE | ID: mdl-32750070

ABSTRACT

We rarely experience difficulty picking up objects, yet of all potential contact points on the surface, only a small proportion yield effective grasps. Here, we present extensive behavioral data alongside a normative model that correctly predicts human precision grasping of unfamiliar 3D objects. We tracked participants' forefinger and thumb as they picked up objects made of 10 wood and brass cubes configured to tease apart effects of shape, weight, orientation, and mass distribution. Grasps were highly systematic and consistent across repetitions and participants. We employed these data to construct a model which combines five cost functions related to force closure, torque, natural grasp axis, grasp aperture, and visibility. Even without free parameters, the model predicts individual grasps almost as well as different individuals predict one another's, but fitting weights reveals the relative importance of the different constraints. The model also accurately predicts human grasps on novel 3D-printed objects with more naturalistic geometries and is robust to perturbations in its key parameters. Together, the findings provide a unified account of how we successfully grasp objects of different 3D shape, orientation, mass, and mass distribution.
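
A minimal sketch of combining several normalized grasp cost functions into a single score, in the spirit of the five-constraint model in the abstract; the candidate generation and the individual cost values here are random placeholders, not the authors' implementation.

```python
import numpy as np

def combined_cost(costs, weights=None):
    """costs: dict of {name: (n_candidates,) array}; lower is better."""
    names = sorted(costs)
    # Normalize each cost to [0, 1] so the different constraints are comparable.
    normed = np.stack([(costs[n] - costs[n].min()) /
                       (np.ptp(costs[n]) + 1e-12) for n in names])
    w = np.ones(len(names)) if weights is None else np.array([weights[n] for n in names])
    return w @ normed  # weighted sum per candidate grasp

# Toy example: 1000 candidate thumb/index contact-point pairs.
rng = np.random.default_rng(1)
costs = {name: rng.random(1000) for name in
         ["force_closure", "torque", "natural_axis", "aperture", "visibility"]}
best = int(np.argmin(combined_cost(costs)))
print("best candidate grasp index:", best)
```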


Subjects
Hand Strength/physiology, Models, Biological, Psychomotor Performance/physiology, Adult, Computational Biology, Female, Hand/physiology, Humans, Male, Torque, Young Adult
7.
PLoS Comput Biol ; 16(4): e1007699, 2020 04.
Article in English | MEDLINE | ID: mdl-32275711

ABSTRACT

The human visual system is foveated: we can see fine spatial details in central vision, whereas resolution is poor in our peripheral visual field, and this loss of resolution follows an approximately logarithmic decrease. Additionally, our brain organizes visual input in polar coordinates. Therefore, the image projection occurring between retina and primary visual cortex can be mathematically described by the log-polar transform. Here, we test and model how this space-variant visual processing affects how we process binocular disparity, a key component of human depth perception. We observe that the fovea preferentially processes disparities at fine spatial scales, whereas the visual periphery is tuned for coarse spatial scales, in line with the naturally occurring distributions of depths and disparities in the real-world. We further show that the visual system integrates disparity information across the visual field, in a near-optimal fashion. We develop a foveated, log-polar model that mimics the processing of depth information in primary visual cortex and that can process disparity directly in the cortical domain representation. This model takes real images as input and recreates the observed topography of human disparity sensitivity. Our findings support the notion that our foveated, binocular visual system has been moulded by the statistics of our visual environment.
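
A short sketch of the log-polar mapping the abstract refers to, assuming image coordinates are expressed relative to the fixation point; the foveal radius `rho_0` is an illustrative parameter.

```python
import numpy as np

def to_log_polar(x, y, rho_0=1.0):
    """Map retinal coordinates (x, y) to cortical-like (log radius, angle)."""
    r = np.hypot(x, y)
    xi = np.log(np.maximum(r, rho_0) / rho_0)  # log-compressed eccentricity
    eta = np.arctan2(y, x)                     # polar angle
    return xi, eta

# Eccentric locations are compressed far more than foveal ones:
print(to_log_polar(2.0, 0.0))   # near the fovea
print(to_log_polar(20.0, 0.0))  # periphery
```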


Subjects
Vision, Binocular/physiology, Visual Acuity/physiology, Adult, Depth Perception, Female, Humans, Male, Models, Neurological, Neurons, Photic Stimulation, Vision Disparity, Vision, Ocular/physiology, Visual Cortex, Visual Fields/physiology
8.
Exp Eye Res ; 166: 96-105, 2018 01.
Article in English | MEDLINE | ID: mdl-29051012

ABSTRACT

The formation of focused and corresponding foveal images requires a close synergy between the accommodation and vergence systems. This linkage is usually decoupled in virtual reality systems and may be dysfunctional in people who are at risk of developing myopia. We study how refractive error affects vergence-accommodation interactions in stereoscopic displays. Vergence and accommodative responses were measured in 21 young healthy adults (n=9 myopes, 22-31 years) while subjects viewed naturalistic stimuli on a 3D display. In Step 1, vergence was driven behind the monitor using a blurred, non-accommodative, uncrossed disparity target. In Step 2, vergence and accommodation were driven back to the monitor plane using naturalistic images that contained structured depth and focus information from size, blur and/or disparity. In Step 1, both refractive groups converged towards the stereoscopic target depth plane, but the vergence-driven accommodative change was smaller in emmetropes than in myopes (F(1,19)=5.13, p=0.036). In Step 2, there was little effect of peripheral depth cues on accommodation or vergence in either refractive group. However, vergence responses were significantly slower (F(1,19)=4.55, p=0.046) and accommodation variability was higher (F(1,19)=12.9, p=0.0019) in myopes. Vergence and accommodation responses are disrupted in virtual reality displays in both refractive groups. Accommodation responses are less stable in myopes, perhaps due to a lower sensitivity to dioptric blur. Such inaccuracies of accommodation may cause long-term blur on the retina, which has been associated with a failure of emmetropization.


Subjects
Accommodation, Ocular/physiology, Convergence, Ocular/physiology, Emmetropia/physiology, Myopia/physiopathology, Adolescent, Adult, Analysis of Variance, Female, Humans, Male, Refraction, Ocular/physiology, Young Adult
9.
Exp Brain Res ; 236(3): 691-709, 2018 03.
Article in English | MEDLINE | ID: mdl-29299642

ABSTRACT

Sensorimotor coupling in healthy humans is demonstrated by the higher accuracy of visually tracking intrinsically- rather than extrinsically-generated hand movements in the fronto-parallel plane. It is unknown whether this coupling also facilitates vergence eye movements for tracking objects in depth, or can overcome symmetric or asymmetric binocular visual impairments. Human observers were therefore asked to track with their gaze a target moving horizontally or in depth. The movement of the target was either directly controlled by the observer's hand or followed hand movements executed by the observer in a previous trial. Visual impairments were simulated by blurring stimuli independently in each eye. Accuracy was higher for self-generated movements in all conditions, demonstrating that motor signals are employed by the oculomotor system to improve the accuracy of vergence as well as horizontal eye movements. Asymmetric monocular blur affected horizontal tracking less than symmetric binocular blur, but impaired tracking in depth as much as binocular blur. There was a critical blur level up to which pursuit and vergence eye movements maintained tracking accuracy independent of blur level. Hand-eye coordination may therefore help compensate for functional deficits associated with eye disease and may be employed to augment visual impairment rehabilitation.


Subjects
Eye Movements/physiology, Motion Perception/physiology, Psychomotor Performance/physiology, Space Perception/physiology, Vision Disorders/physiopathology, Vision, Binocular/physiology, Adult, Depth Perception/physiology, Female, Humans, Male, Young Adult
10.
J Vis ; 17(5): 3, 2017 05 01.
Article in English | MEDLINE | ID: mdl-28476060

ABSTRACT

We evaluated the ability of emmetropic and myopic observers to detect and discriminate blur across the retina under monocular or binocular viewing conditions. We recruited 39 young (23-30 years) healthy adults (n = 19 myopes) with best-corrected visual acuity 0.0 LogMAR (20/20) or better in each eye and no binocular or accommodative dysfunction. Monocular and binocular blur discrimination thresholds were measured as a function of pedestal blur using naturalistic stimuli with an adaptive 4AFC procedure. Stimuli were presented in a 46° diameter window at 40 cm. Gaussian blur pedestals were confined to an annulus at either 0°, 4°, 8°, or 12° eccentricity, with a blur increment applied to only one quadrant of the image. The adaptive procedure efficiently estimated a dipper shaped blur discrimination threshold function with two parameters: intrinsic blur and blur sensitivity. The amount of intrinsic blur increased for retinal eccentricities beyond 4° (p < 0.001) and was lower in binocular than monocular conditions (p < 0.001), but was similar across refractive groups (p = 0.47). Blur sensitivity decreased with retinal eccentricity (p < 0.001) and was highest for binocular viewing, but only for central vision (p < 0.05). Myopes showed worse blur sensitivity than emmetropes monocularly (p < 0.05) but not binocularly (p = 0.66). As expected, blur perception worsens in the visual periphery and binocular summation is most evident in central vision. Furthermore, myopes exhibit a monocular impairment in blur sensitivity that improves under binocular conditions. Implications for the development of myopia are discussed.
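
A common two-parameter form of the blur-discrimination "dipper" function (the equivalent intrinsic-blur model) is shown below as a worked equation; it matches the two parameters named in the abstract (intrinsic blur and blur sensitivity) but is an assumed parameterization, not necessarily the exact function fitted in the study.

```latex
% sigma_0: intrinsic blur; W: Weber-like blur-sensitivity parameter.
\Delta\sigma(\sigma_{\text{pedestal}}) \;=\; W \sqrt{\sigma_{\text{pedestal}}^{2} + \sigma_{0}^{2}}
```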


Subjects
Emmetropia/physiology, Myopia/physiopathology, Perceptual Disorders/physiopathology, Visual Fields/physiology, Accommodation, Ocular/physiology, Adult, Female, Humans, Male, Retina/physiology, Vision, Binocular/physiology, Visual Acuity/physiology, Young Adult
11.
Behav Res Methods ; 49(3): 923-946, 2017 06.
Article in English | MEDLINE | ID: mdl-27401169

ABSTRACT

The Tobii EyeX Controller is a new low-cost binocular eye tracker marketed for integration in gaming and consumer applications. The manufacturers claim that the system was conceived for natural eye gaze interaction, does not require continuous recalibration, and allows moderate head movements. The Controller is provided with an SDK to foster the development of new eye tracking applications. We review the characteristics of the device for its possible use in scientific research. We develop and evaluate an open source Matlab Toolkit that can be employed to interface with the EyeX device for gaze recording in behavioral experiments. The Toolkit provides calibration procedures tailored to both binocular and monocular experiments, as well as procedures to evaluate other eye tracking devices. The observed performance of the EyeX (i.e., accuracy < 0.6°, precision < 0.25°, latency < 50 ms and sampling frequency ≈55 Hz) is sufficient for some classes of research application. The device can be successfully employed to measure fixation parameters, saccadic, smooth pursuit and vergence eye movements. However, the relatively low sampling rate and moderate precision limit the suitability of the EyeX for monitoring micro-saccadic eye movements or for real-time gaze-contingent stimulus control. For these applications, research grade, high-cost eye tracking technology may still be necessary. Therefore, despite its limitations with respect to high-end devices, the EyeX has the potential to further the dissemination of eye tracking technology to a broad audience, and could be a valuable asset in consumer and gaming applications as well as in a subset of basic and clinical research settings.
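
A sketch of how eye-tracker accuracy and precision figures like those above are commonly computed from fixation data, assuming gaze and target positions in degrees of visual angle; this is a generic illustration, not the Toolkit's own code.

```python
import numpy as np

def accuracy_deg(gaze_xy, target_xy):
    """Mean Euclidean offset between gaze samples and the fixation target."""
    return float(np.mean(np.linalg.norm(gaze_xy - target_xy, axis=1)))

def precision_rms_deg(gaze_xy):
    """RMS of sample-to-sample angular distances during a fixation."""
    steps = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1)
    return float(np.sqrt(np.mean(steps ** 2)))

rng = np.random.default_rng(2)
gaze = np.array([5.0, 0.0]) + 0.2 * rng.standard_normal((300, 2))  # placeholder samples
print(accuracy_deg(gaze, np.array([5.0, 0.0])), precision_rms_deg(gaze))
```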


Subjects
Equipment and Supplies, Eye Movements/physiology, Software, Humans
13.
J Vis ; 16(2): 12, 2016 01 01.
Article in English | MEDLINE | ID: mdl-27580091

ABSTRACT

We implement a neural model for the estimation of the focus of radial motion (FRM) at different retinal locations and assess the model by comparing its results with respect to the precision with which human observers can estimate the FRM in naturalistic motion stimuli. The model describes the deep hierarchy of the first stages of the dorsal visual pathway and is space variant, since it takes into account the retino-cortical transformation of the primate visual system through log-polar mapping. The log-polar transform of the retinal image is the input to the cortical motion-estimation stage, where optic flow is computed by a three-layer neural population. The sensitivity to complex motion patterns that has been found in area MST is modeled through a population of adaptive templates. The first-order description of cortical optic flow is derived from the responses of the adaptive templates. Information about self-motion (e.g., direction of heading) is estimated by combining the first-order descriptors computed in the cortical domain. The model's performance at FRM estimation as a function of retinal eccentricity neatly maps onto data from human observers. By employing equivalent-noise analysis we observe that loss in FRM accuracy for both model and human observers is attributable to a decrease in the efficiency with which motion information is pooled with increasing retinal eccentricity in the visual field. The decrease in sampling efficiency is thus attributable to receptive-field size increases with increasing retinal eccentricity, which are in turn driven by the lossy log-polar mapping that projects the retinal image onto primary visual areas. We further show that the model is able to estimate direction of heading in real-world scenes, thus validating the model's potential application to neuromimetic robotic architectures. More broadly, we provide a framework in which to model complex motion integration across the visual field in real-world scenes.
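
The equivalent-noise analysis mentioned in the abstract is commonly summarized by the decomposition below: observed variance reflects internal (local) noise plus external stimulus noise, divided by the number of effectively pooled samples (sampling efficiency). The symbols are the standard textbook ones, not necessarily the paper's notation.

```latex
\sigma_{\text{observed}}^{2} \;=\; \frac{\sigma_{\text{internal}}^{2} + \sigma_{\text{external}}^{2}}{n_{\text{samples}}}
```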


Subjects
Models, Neurological, Motion Perception/physiology, Visual Fields/physiology, Adult, Humans, Male, Middle Aged, Optic Flow, Visual Cortex/physiology, Visual Pathways/physiology
14.
J Vis ; 14(8): 13, 2014 Jul 17.
Article in English | MEDLINE | ID: mdl-25034260

ABSTRACT

We have developed a low-cost, practical gaze-contingent display in which natural images are presented to the observer with dioptric blur and stereoscopic disparity that are dependent on the three-dimensional structure of natural scenes. Our system simulates a distribution of retinal blur and depth similar to that experienced in real-world viewing conditions by emmetropic observers. We implemented the system using light-field photographs taken with a plenoptic camera which supports digital refocusing anywhere in the images. We coupled this capability with an eye-tracking system and stereoscopic rendering. With this display, we examine how the time course of binocular fusion depends on depth cues from blur and stereoscopic disparity in naturalistic images. Our results show that disparity and peripheral blur interact to modify eye-movement behavior and facilitate binocular fusion, and the greatest benefit was gained by observers who struggled most to achieve fusion. Even though plenoptic images do not replicate an individual's aberrations, the results demonstrate that a naturalistic distribution of depth-dependent blur may improve 3-D virtual reality, and that interruptions of this pattern (e.g., with intraocular lenses) which flatten the distribution of retinal blur may adversely affect binocular fusion.


Subjects
Vision Disorders/physiopathology, Vision Disparity/physiology, Vision, Binocular/physiology, Adult, Eye Movements/physiology, Humans, Light, Young Adult
15.
Sarcoidosis Vasc Diffuse Lung Dis ; 41(1): e2024017, 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38567559

ABSTRACT

BACKGROUND: Pulmonary sarcoidosis is a systemic disease that can confound established follow-up tools. Pulmonary function tests (PFTs) are recommended in initial and follow-up patient evaluations yet are imperfect predictors of disease progression. The cardiopulmonary exercise test (CPET) is another potentially useful monitoring tool, although previous studies report conflicting findings regarding which variables are altered by the disease. Nuclear imaging tests are also employed to assess inflammatory activity and may be predictive of functional deterioration. AIM: We asked whether PFTs or CPET are more diagnostic of disease stage, which subsets of functional variables are impacted by the disease, and how these relate to nuclear imaging signs of active inflammation. STUDY DESIGN AND METHODS: We collected retrospective data (spirometry, CPET, Gallium-67 scintigraphy, 18F-FDG PET/CT) from 48 patients and 10 controls. Disease severity was assessed following Scadding classification. First, we correlated individual PFTs and CPET parameters to Scadding stage and nuclear imaging data. Next, we performed Principal Component Analysis (PCA) on PFTs and CPET parameters, separated into respiratory, cardiovascular and metabolic subsets. Finally, we constructed multiple regression models to determine which variable subsets were the best predictors of Scadding stage and disease activity. RESULTS: The majority of PFTs and CPET single parameters were significantly correlated with patient stage, while only few correlated with disease activity. Nevertheless, multiple regression models were able to significantly relate PFTs and CPET to both disease stage and activity. Additionally, these analyses highlighted CPET cardiovascular parameters as the best overall predictors of disease stage and activity. CONCLUSIONS: Our results display how CPET and spirometry data complement each other for sarcoidosis disease staging, and how these tests are able to detect disease activity. Our findings suggest that CPET, a repeatable and non-invasive functional test, should be more routinely performed and taken into account in sarcoidosis patient follow-up.
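
A sketch of the analysis pipeline described above (PCA on a parameter subset, then regression onto disease stage) using scikit-learn; the variable names, array sizes, and number of components are illustrative assumptions, not the study's actual code or data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
cpet_cardio = rng.standard_normal((48, 8))    # placeholder cardiovascular CPET variables
scadding_stage = rng.integers(0, 5, size=48)  # placeholder Scadding stages 0-4

model = make_pipeline(StandardScaler(), PCA(n_components=3), LinearRegression())
model.fit(cpet_cardio, scadding_stage)
print("R^2 on training data:", model.score(cpet_cardio, scadding_stage))
```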

16.
Invest Ophthalmol Vis Sci ; 64(5): 2, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37129906

ABSTRACT

Purpose: To examine how binocularly asymmetric glaucomatous visual field damage affects binocular disparity processing across the visual field. Methods: We recruited 18 patients with primary open-angle glaucoma, 16 age-matched controls, and 13 young controls. Participants underwent standard clinical assessments of binocular visual acuity, binocular contrast sensitivity, stereoacuity, and perimetry. We employed a previously validated psychophysical procedure to measure how sensitivity to binocular disparity varied across spatial frequencies and visual field sectors (i.e., with full-field stimuli spanning the central 21° of the visual field and with stimuli restricted to annular regions spanning 0°-3°, 3°-9°, or 9°-21°). We employed measurements with annular stimuli to model different possible scenarios regarding how disparity information is combined across visual field sectors. We adjudicated between potential mechanisms by comparing model predictions to the patterns observed with full-field stimuli. Results: Perimetry confirmed that patients with glaucoma exhibited binocularly asymmetric visual field damage (P < 0.001). Across participant groups, foveal regions preferentially processed disparities at finer spatial scales, whereas periphery regions were tuned for coarser scales (P < 0.001). Disparity sensitivity also decreased from fovea to periphery (P < 0.001) and across participant groups (Ps < 0.01). Finally, similar to controls, patients with glaucoma exhibited near-optimal disparity integration, specifically at low spatial frequencies (P < 0.001). Conclusions: Contrary to the conventional view that glaucoma spares central vision, we find that glaucomatous damage causes a widespread loss of disparity sensitivity across both foveal and peripheral regions. Despite these losses, cortical integration mechanisms appear to be well preserved, suggesting that patients with glaucoma make the best possible use of their remaining binocular function.
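
The "near-optimal integration" benchmark referred to above is usually the inverse-variance summation rule shown below: if sector estimates are independent, the predicted full-field sensitivity follows from summing the sectors' inverse variances. The sector labels are taken from the abstract; the formula itself is the standard ideal-observer benchmark, stated here as an assumption about the comparison used.

```latex
\frac{1}{\sigma_{\text{full-field}}^{2}} \;=\; \sum_{i \in \{0\text{-}3^{\circ},\; 3\text{-}9^{\circ},\; 9\text{-}21^{\circ}\}} \frac{1}{\sigma_{i}^{2}}
```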


Subjects
Glaucoma, Open-Angle, Glaucoma, Humans, Visual Fields, Vision Disparity, Visual Field Tests, Aging, Vision, Binocular
17.
J Vis Exp ; (194)2023 04 21.
Article in English | MEDLINE | ID: mdl-37154551

ABSTRACT

To grasp an object successfully, we must select appropriate contact regions for our hands on the surface of the object. However, identifying such regions is challenging. This paper describes a workflow to estimate the contact regions from marker-based tracking data. Participants grasp real objects, while we track the 3D position of both the objects and the hand, including the fingers' joints. We first determine the joint Euler angles from a selection of tracked markers positioned on the back of the hand. Then, we use state-of-the-art hand mesh reconstruction algorithms to generate a mesh model of the participant's hand in the current pose and the 3D position. Using objects that were either 3D printed or 3D scanned-and are, thus, available as both real objects and mesh data-allows the hand and object meshes to be co-registered. In turn, this allows the estimation of approximate contact regions by calculating the intersections between the hand mesh and the co-registered 3D object mesh. The method may be used to estimate where and how humans grasp objects under a variety of conditions. Therefore, the method could be of interest to researchers studying visual and haptic perception, motor control, human-computer interaction in virtual and augmented reality, and robotics.
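
A rough sketch of the final step described above: estimating contact regions as hand-mesh vertices that lie within a small tolerance of (or inside) the co-registered object mesh, using the trimesh library. The file names and tolerance are placeholders; this is not the paper's published code.

```python
import numpy as np
import trimesh

hand = trimesh.load("hand_mesh_frame_0421.ply")   # reconstructed hand mesh for one frame
obj = trimesh.load("object_scan_registered.ply")  # co-registered object mesh

# Signed distance from each hand vertex to the object surface
# (positive inside the object with trimesh's convention).
d = trimesh.proximity.signed_distance(obj, hand.vertices)

tolerance = 2.0  # assumed mesh units (e.g., mm)
contact_mask = d > -tolerance            # vertices touching or penetrating the object
contact_vertices = hand.vertices[contact_mask]
print(f"{contact_mask.sum()} of {len(hand.vertices)} hand vertices in contact")
```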


Subjects
Hand, Robotics, Humans, Hand Strength
18.
Front Neurosci ; 16: 1088926, 2022.
Article in English | MEDLINE | ID: mdl-36578823

ABSTRACT

[This corrects the article DOI: 10.3389/fnins.2020.591898.].

19.
Curr Biol ; 32(21): R1224-R1225, 2022 11 07.
Article in English | MEDLINE | ID: mdl-36347228

ABSTRACT

The discovery of mental rotation was one of the most significant landmarks in experimental psychology, leading to the ongoing assumption that to visually compare objects from different three-dimensional viewpoints, we use explicit internal simulations of object rotations, to 'mentally adjust' one object until it matches the other [1]. These rotations are thought to be performed on three-dimensional representations of the object, by literal analogy to physical rotations. In particular, it is thought that an imagined object is continuously adjusted at a constant three-dimensional angular rotation rate from its initial orientation to the final orientation through all intervening viewpoints [2]. While qualitative theories have tried to account for this phenomenon [3], to date there has been no explicit, image-computable model of the underlying processes. As a result, there is no quantitative account of why some object viewpoints appear more similar to one another than others when the three-dimensional angular difference between them is the same [4,5]. We reasoned that the specific pattern of non-uniformities in the perception of viewpoints can reveal the visual computations underlying mental rotation. We therefore compared human viewpoint perception with a model based on the kind of two-dimensional 'optical flow' computations that are thought to underlie motion perception in biological vision [6], finding that the model reproduces the specific errors that participants make. This suggests that mental rotation involves simulating the two-dimensional retinal image change that would occur when rotating objects. When we compare objects, we do not do so in a distal three-dimensional representation as previously assumed, but by measuring how much the proximal stimulus would change if we watched the object rotate, capturing perspectival appearance changes [7].
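
A schematic stand-in for the core idea: measure how much the 2D retinal image would change between two viewpoints by computing dense optical flow between the rendered views and summarizing its magnitude. It uses OpenCV's Farneback flow; the file names are placeholders and this is not the paper's model.

```python
import cv2
import numpy as np

view_a = cv2.imread("object_view_000deg.png", cv2.IMREAD_GRAYSCALE)
view_b = cv2.imread("object_view_030deg.png", cv2.IMREAD_GRAYSCALE)

# Dense 2D flow field between the two views of the same object.
flow = cv2.calcOpticalFlowFarneback(view_a, view_b, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
# Mean flow magnitude as a proxy for how dissimilar the two viewpoints appear.
dissimilarity = float(np.mean(np.linalg.norm(flow, axis=2)))
print("flow-based viewpoint dissimilarity:", dissimilarity)
```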


Subjects
Motion Perception, Optic Flow, Humans, Pattern Recognition, Visual, Visual Perception
20.
Eye (Lond) ; 35(3): 868-876, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32483310

ABSTRACT

PURPOSE: Optical Coherence Tomography (OCT) is a powerful instrument for helping clinicians detect and monitor glaucoma. The aim of this study was to provide a detailed mapping of the relationships between visual field (VF) sensitivities and measures of retinal structure provided by a commercial Spectral Domain (SD)-OCT system (RTvue-100 Optovue). METHODS: Sixty-three eyes of open angle glaucoma patients (17 males, 16 females, and mean age 71 ± 7.5 years) were included in this retrospective, observational clinical study. Thickness values for superior and inferior retina, as well as average values, were recorded for the full retina, the outer retina, the ganglion cell complex, and the peripapillary retinal nerve fiber layer (RNFL). RNFL thickness was further evaluated along eight separate sectors (temporal lower, temporal upper, superior temporal, superior nasal, nasal upper, nasal lower, inferior nasal, and inferior temporal). Point-wise correlations were then computed between each of these OCT measures and the visual sensitivities at all VF locations assessed via Humphrey 10-2 and 24-2 perimetry. Lastly, OCT data were fit to VF data to predict glaucoma stage. RESULTS: The relationship between retinal thickness and visual sensitivities reflects the known topography of the retina. Spatial correlation patterns between visual sensitivities and RNFL thickness along different sectors broadly agree with previously hypothesized structure-function maps, yet suggest that structure-function maps still require more precise characterizations. Given these relationships, we find that OCT data can predict glaucoma stage. CONCLUSION: Ganglion cell complex and RNFL thickness measurements are highlighted as the most promising candidate metrics for glaucoma detection and monitoring.
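
A sketch of the point-wise structure-function analysis described above: correlate each OCT thickness measure with the visual sensitivity at every visual-field location. Array names and sizes are illustrative (63 eyes, 54 locations as in 24-2 perimetry, 8 RNFL sectors) and the data are random placeholders.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(4)
vf_sensitivity = rng.normal(25, 5, size=(63, 54))    # dB, per eye x VF location
oct_thickness = rng.normal(100, 10, size=(63, 8))    # um, per eye x RNFL sector

# Pearson correlation between each sector's thickness and each VF location.
r_map = np.zeros((8, 54))
for s in range(8):
    for loc in range(54):
        r_map[s, loc], _ = pearsonr(oct_thickness[:, s], vf_sensitivity[:, loc])
print("strongest structure-function correlation:", r_map.max())
```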


Subjects
Glaucoma, Open-Angle, Aged, Female, Humans, Male, Middle Aged, Nerve Fibers, Retinal Ganglion Cells, Retrospective Studies, Tomography, Optical Coherence, Visual Field Tests, Visual Fields