ABSTRACT
BACKGROUND: Time away from surgical practice can lead to skills decay. Research residents are thought to be prone to skills decay, given their limited experience and reduced exposure to clinical activities during their research training years. This study takes a cross-sectional approach to assess differences in residents' skills at the beginning and end of their research years using virtual reality. We hypothesized that research residents would have measurable decay in psychomotor skills when evaluated using virtual reality. METHODS: Surgical residents (n = 28) were divided into two groups: the first group was just beginning its research time (clinical residents: n = 19), and the second group (research residents: n = 9) had just finished at least 2 years of research. All participants were asked to perform a target-tracking task using a haptic device, and their performance was compared using Welch's t-test. RESULTS: Research residents showed a higher level of "tracking error" (1.69 ± 0.44 cm versus 1.40 ± 0.19 cm; P = 0.04) and a similar level of "path length" (62.5 ± 10.5 cm versus 62.1 ± 5.2 cm; P = 0.92) when compared with clinical residents. CONCLUSIONS: The increased "tracking error" among residents at the end of their research time suggests that fine psychomotor skills decay in residents who spend time away from clinical duties in the laboratory. This decay demonstrates the need for research residents to regularly participate in clinical activities, simulation, or assessments to minimize and monitor skills decay while away from clinical practice. Additional longitudinal studies may help better map learning and decay curves for residents who spend time away from clinical practice.
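For reference, a minimal sketch of the statistical comparison named above: Welch's t-test accommodates the unequal group sizes and variances. The data here are simulated from the published summary statistics, not the study's actual measurements.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated data matching the reported group sizes and summary statistics
# (mean +/- SD of tracking error in cm); NOT the study's actual measurements.
clinical = rng.normal(1.40, 0.19, size=19)   # residents starting research time
research = rng.normal(1.69, 0.44, size=9)    # residents finishing >= 2 years of research

# Welch's t-test handles the unequal group sizes and variances
t, p = stats.ttest_ind(research, clinical, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")
```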
Subjects
Biomedical Research/statistics & numerical data; Clinical Competence/statistics & numerical data; Internship and Residency/statistics & numerical data; Psychomotor Performance; Simulation Training/statistics & numerical data; Cross-Sectional Studies; Female; Humans; Male; Simulation Training/methods; Time Factors; Virtual Reality
ABSTRACT
BACKGROUND: This project involved the development and evaluation of a new visual bleeding feedback (VBF) system for tourniquet training. We hypothesized that dynamic VBF during junctional tourniquet training would be helpful and well received by trainees. MATERIALS AND METHODS: We designed the VBF to simulate femoral bleeding. Medical students (n = 15) and emergency medical service (EMS) members (n = 4) were randomized in a single-blind, crossover design to the VBF or no-feedback condition. Post-study surveys assessed the VBF's usefulness, participants' likelihood of recommending it, and their self-reported confidence on a 7-point Likert scale. Data from the different groups were compared using Wilcoxon signed-rank and rank-sum tests. RESULTS: Participants rated the helpfulness of the VBF highly (6.53/7.00) and indicated they were very likely to recommend the VBF simulator to others (6.80/7.00). Pre- and post-VBF confidence were not statistically different (P = 0.59). Likewise, tourniquet application times for the VBF and no-feedback conditions before crossover were not statistically different (P = 0.63). Although participant confidence did not change significantly from the beginning to the end of the study (P = 0.46), application time was significantly reduced (P = 0.001). CONCLUSIONS: New tourniquet learners liked our VBF prototype and found it useful. Although confidence did not change over the course of the study for any group, application times improved. Future studies building on these outcomes will allow us to continue VBF development and to incorporate other quantitative measures of task performance, to elucidate the VBF's true benefit and help trainees achieve mastery of junctional tourniquet skills.
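A sketch of the two nonparametric tests named above, using placeholder data (all values are hypothetical, not the study's): the Wilcoxon signed-rank test for paired pre/post ratings and the rank-sum test for the between-condition comparison.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Placeholder 7-point Likert ratings and application times; NOT the study's data.
pre_conf = rng.integers(3, 8, size=19)
post_conf = rng.integers(3, 8, size=19)

# Paired pre/post comparison within participants: Wilcoxon signed-rank test
w, p_paired = stats.wilcoxon(pre_conf, post_conf)

# Between-condition comparison (e.g., VBF vs. no feedback): rank-sum test
vbf_times = rng.normal(45.0, 10.0, size=10)
nofb_times = rng.normal(47.0, 10.0, size=9)
z, p_between = stats.ranksums(vbf_times, nofb_times)
print(p_paired, p_between)
```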
Subjects
First Aid/methods; Hemostatic Techniques/instrumentation; Simulation Training/methods; Tourniquets; Cross-Over Studies; Educational Measurement/statistics & numerical data; Emergency Medical Technicians/education; Feedback, Sensory; Female; Hemorrhage/therapy; Humans; Male; Manikins; Military Personnel/education; Single-Blind Method; Students, Medical; War-Related Injuries/therapy
ABSTRACT
Terrestrial navigation naturally involves translations within the horizontal plane and eye rotations about a vertical (yaw) axis to track and fixate targets of interest. Neurons in the macaque ventral intraparietal (VIP) area are known to represent heading (the direction of self-translation) from optic flow in a manner that is tolerant to rotational visual cues generated during pursuit eye movements. Previous studies have also reported that eye rotations modulate the response gain of heading tuning curves in VIP neurons. We tested the hypothesis that VIP neurons simultaneously represent both heading and horizontal (yaw) eye rotation velocity by measuring heading tuning curves for a range of rotational velocities of either real or simulated eye movements. Three findings support the hypothesis of a joint representation. First, we show that rotation velocity selectivity based on gain modulations of visual heading tuning is similar to that measured during pure rotations. Second, gain modulations of heading tuning are similar for self-generated eye rotations and visually simulated rotations, indicating that the representation of rotation velocity in VIP is multimodal, driven by both visual and extraretinal signals. Third, we show that roughly one-half of VIP neurons jointly represent heading and rotation velocity in a multiplicatively separable manner. These results provide the first evidence, to our knowledge, for a joint representation of translation direction and rotation velocity in parietal cortex and show that rotation velocity can be represented based on visual cues, even in the absence of efference copy signals.
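A common way to quantify "multiplicatively separable" joint tuning is the singular-value separability index: if responses factor as r(heading, rotation) = f(heading) × g(rotation), the joint tuning matrix is rank one. A sketch with synthetic tuning data (all values are assumptions, not recorded responses):

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical joint tuning matrix: mean firing rate for each combination
# of heading (rows) and rotation velocity (columns); all values synthetic.
headings = np.linspace(-180, 180, 8, endpoint=False)
rot_vels = np.array([-20.0, -10.0, 0.0, 10.0, 20.0])
f = np.exp(np.cos(np.deg2rad(headings - 30)))   # heading tuning curve
g = 1.0 + 0.03 * rot_vels                        # rotation-velocity gain
R = np.outer(f, g) + rng.normal(0, 0.05, (8, 5))

# Separability index: variance explained by the first singular component.
# A value near 1 indicates multiplicatively separable joint tuning.
s = np.linalg.svd(R, compute_uv=False)
print(f"separability index = {s[0]**2 / np.sum(s**2):.3f}")
```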
Subjects
Cues; Eye Movements/physiology; Motion Perception/physiology; Optic Flow/physiology; Parietal Lobe/physiology; Spatial Navigation/physiology; Animals; Macaca mulatta; Male; Orientation/physiology; Rotation
ABSTRACT
Visually guided behaviors require the brain to transform ambiguous retinal images into object-level spatial representations and implement sensorimotor transformations. These processes are supported by the dorsal 'where' pathway. However, the specific functional contributions of areas along this pathway remain elusive due in part to methodological differences across studies. We previously showed that macaque caudal intraparietal (CIP) area neurons possess robust 3D visual representations, carry choice- and saccade-related activity, and exhibit experience-dependent sensorimotor associations (Chang et al., 2020b). Here, we used a common experimental design to reveal parallel processing, hierarchical transformations, and the formation of sensorimotor associations along the 'where' pathway by extending the investigation to V3A, a major feedforward input to CIP. Higher-level 3D representations and choice-related activity were more prevalent in CIP than V3A. Both areas contained saccade-related activity that predicted the direction/timing of eye movements. Intriguingly, the time course of saccade-related activity in CIP aligned with the temporally integrated V3A output. Sensorimotor associations between 3D orientation and saccade direction preferences were stronger in CIP than V3A, and moderated by choice signals in both areas. Together, the results explicate parallel representations, hierarchical transformations, and functional associations of visual and saccade-related signals at a key juncture in the 'where' pathway.
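The observation that CIP's saccade-related time course matched the temporally integrated V3A output can be illustrated with a simple leaky-integrator sketch; the time constant and response shape below are assumptions, not fitted values.

```python
import numpy as np

dt = 0.001                                   # 1 ms time step
t = np.arange(0.0, 0.5, dt)
v3a = np.exp(-((t - 0.2) / 0.05) ** 2)       # placeholder V3A population response
tau = 0.1                                    # assumed integration time constant (s)

cip_pred = np.zeros_like(v3a)
for i in range(1, len(t)):
    # Leaky integration of the upstream signal: dy/dt = (input - y) / tau
    cip_pred[i] = cip_pred[i - 1] + dt * (v3a[i - 1] - cip_pred[i - 1]) / tau

# cip_pred gives a predicted downstream (CIP) time course to compare with data.
```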
Subjects
Parietal Lobe; Saccades; Animals; Eye Movements; Macaca; Neurons/physiology; Parietal Lobe/physiology; Photic Stimulation/methods
ABSTRACT
Reconstructing three-dimensional (3D) scenes from two-dimensional (2D) retinal images is an ill-posed problem. Despite this, 3D perception of the world based on 2D retinal images is seemingly accurate and precise. The integration of distinct visual cues is essential for robust 3D perception in humans, but it is unclear whether this is true for non-human primates (NHPs). Here, we assessed 3D perception in macaque monkeys using a planar surface orientation discrimination task. Perception was accurate across a wide range of spatial poses (orientations and distances), but precision was highly dependent on the plane's pose. The monkeys achieved robust 3D perception by dynamically reweighting the integration of stereoscopic and perspective cues according to their pose-dependent reliabilities. Errors in performance could be explained by a prior resembling the 3D orientation statistics of natural scenes. We used neural network simulations based on 3D orientation-selective neurons recorded from the same monkeys to assess how neural computation might constrain perception. The perceptual data were consistent with a model in which the responses of two independent neuronal populations representing stereoscopic cues and perspective cues (with perspective signals from the two eyes combined using nonlinear canonical computations) were optimally integrated through linear summation. Perception of combined-cue stimuli was optimal given this architecture. However, an alternative architecture in which stereoscopic cues, left eye perspective cues, and right eye perspective cues were represented by three independent populations yielded twice the precision that the monkeys achieved. This result suggests that, due to canonical computations, cue integration for 3D perception is optimized but not maximized.
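The optimality benchmark referenced here is standard reliability-weighted cue integration, in which each cue is weighted by its inverse variance. A minimal sketch (the example numbers are hypothetical):

```python
import numpy as np

def integrate_cues(est_a, sigma_a, est_b, sigma_b):
    """Reliability-weighted (maximum-likelihood) combination of two cues."""
    w_a, w_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    combined = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    sigma_combined = np.sqrt(1.0 / (w_a + w_b))   # never worse than the best single cue
    return combined, sigma_combined

# Hypothetical example: stereo says 32 deg (sigma 4), perspective says 28 deg (sigma 6)
print(integrate_cues(32.0, 4.0, 28.0, 6.0))
```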
Subjects
Cues; Motion Perception; Neurons; Orientation; Photic Stimulation; Visual Perception
ABSTRACT
Three-dimensional (3D) representations of the environment are often critical for selecting actions that achieve desired goals. The success of these goal-directed actions relies on 3D sensorimotor transformations that are experience-dependent. Here we investigated the relationships between the robustness of 3D visual representations, choice-related activity, and motor-related activity in parietal cortex. Macaque monkeys performed an eight-alternative 3D orientation discrimination task and a visually guided saccade task while we recorded from the caudal intraparietal area using laminar probes. We found that neurons with more robust 3D visual representations preferentially carried choice-related activity. Following the onset of choice-related activity, the robustness of the 3D representations further increased for those neurons. We additionally found that 3D orientation and saccade direction preferences aligned, particularly for neurons with choice-related activity, reflecting an experience-dependent sensorimotor association. These findings reveal previously unrecognized links between the fidelity of ecologically relevant object representations, choice-related activity, and motor-related activity.
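Choice-related activity in studies like this is commonly quantified with ROC-based choice probability; a sketch of that computation, with the trial grouping and spike counts below serving only as placeholders:

```python
import numpy as np

def choice_probability(r_pref, r_null):
    """ROC-area estimate: probability that an ideal observer could predict
    the animal's choice from a single-trial response."""
    r_pref = np.asarray(r_pref, float)[:, None]
    r_null = np.asarray(r_null, float)[None, :]
    return (r_pref > r_null).mean() + 0.5 * (r_pref == r_null).mean()

# Placeholder spike counts grouped by the monkey's choice
print(choice_probability([12, 15, 11, 14], [9, 10, 12, 8]))  # > 0.5 => choice-predictive
```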
Subjects
Motor Neurons/physiology; Parietal Lobe/physiology; Sensory Receptor Cells/physiology; Vision, Ocular; Animals; Behavior, Animal; Macaca mulatta; Magnetic Resonance Imaging; Male; Orientation/physiology; Saccades
ABSTRACT
Modern neuroscience research often requires the coordination of multiple processes, such as stimulus generation, real-time experimental control, and behavioral and neural measurements. The technical demands of managing these processes simultaneously with high temporal fidelity are a barrier that limits the number of labs performing such work. Here we present an open-source, network-based parallel processing framework that lowers this barrier. The Real-Time Experimental Control with Graphical User Interface (REC-GUI) framework offers multiple advantages: (i) a modular design that is agnostic to coding language(s) and operating system(s) to maximize experimental flexibility and minimize researcher effort, (ii) simple interfacing to connect multiple measurement and recording devices, (iii) high temporal fidelity by dividing task demands across CPUs, and (iv) real-time control using a fully customizable and intuitive GUI. We present applications for human, non-human primate, and rodent studies, which collectively demonstrate that the REC-GUI framework facilitates technically demanding, behavior-contingent neuroscience research. Editorial note: This article has been through an editorial process in which the authors decide how to respond to the issues raised during peer review. The Reviewing Editor's assessment is that all the issues have been addressed (see decision letter).
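To give a flavor of the network-based, modular design, here is a minimal sketch of two processes exchanging a control message over UDP. The address, port, and message format are hypothetical illustrations, not REC-GUI's actual protocol.

```python
import socket

# Control module listens; a stimulus module sends it a trial event.
ctrl = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
ctrl.bind(("127.0.0.1", 5005))

stim = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
stim.sendto(b"TRIAL_START;stim_id=12", ("127.0.0.1", 5005))

msg, _ = ctrl.recvfrom(1024)     # the control process would dispatch on this event
print(msg.decode())
ctrl.close()
stim.close()
```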
Subjects
Neurosciences; Software; Action Potentials; Animals; Avoidance Learning; Behavior, Animal; Humans; Mice; Primates; Reproducibility of Results; Time Factors; Vision, Ocular
ABSTRACT
As we navigate through the world, eye and head movements add rotational velocity patterns to the retinal image. When such rotations accompany observer translation, the rotational velocity patterns must be discounted to accurately perceive heading. The conventional view holds that this computation requires efference copies of self-generated eye/head movements. Here we demonstrate that the brain implements an alternative solution in which retinal velocity patterns are themselves used to dissociate translations from rotations. These results reveal a novel role for visual cues in achieving a rotation-invariant representation of heading in the macaque ventral intraparietal area. Specifically, we show that the visual system utilizes both local motion parallax cues and global perspective distortions to estimate heading in the presence of rotations. These findings further suggest that the brain is capable of performing complex computations to infer eye movements and discount their sensory consequences based solely on visual cues.
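The retinal-cue solution rests on a basic property of optic flow: the translational component scales with inverse depth (motion parallax), while the rotational component does not. A toy sketch using the first-order flow equations; the scene geometry and motion parameters are assumed, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-0.5, 0.5, 100)              # image x positions (rad, small angles)
y = rng.uniform(-0.5, 0.5, 100)              # image y positions (rad)
depth = rng.uniform(1.0, 10.0, 100)          # scene depths (m), varied for parallax

T = np.array([0.1, 0.0, 1.0])                # observer translation (m/s), mostly forward
omega = 0.05                                 # yaw (vertical-axis) rotation rate (rad/s)

# First-order optic-flow equations (Longuet-Higgins & Prazdny):
u_trans = (-T[0] + x * T[2]) / depth         # translational flow scales with 1/depth
v_trans = (-T[1] + y * T[2]) / depth
u_rot = -omega * (1.0 + x**2)                # yaw flow is independent of depth
v_rot = -omega * x * y

u = u_trans + u_rot                          # observed retinal flow mixes the two;
v = v_trans + v_rot                          # the depth dependence lets them be separated
```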
Subjects
Parietal Lobe/physiology; Vision, Ocular; Animals; Macaca mulatta; Retina/physiology
ABSTRACT
Responses of neurons in early visual cortex change little with training and appear insufficient to account for perceptual learning. Behavioral performance, however, relies on population activity, and the accuracy of a population code is constrained by correlated noise among neurons. We tested whether training changes interneuronal correlations in the dorsal medial superior temporal area, which is involved in multisensory heading perception. Pairs of single units were recorded simultaneously in two groups of subjects: animals trained extensively in a heading discrimination task, and "naive" animals that performed a passive fixation task. Correlated noise was significantly weaker in trained versus naive animals, which might be expected to improve coding efficiency. However, we show that the observed uniform reduction in noise correlations leads to little change in population coding efficiency when all neurons are decoded. Thus, global changes in correlated noise among sensory neurons may be insufficient to account for perceptual learning.
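The population-coding argument can be made concrete with linear Fisher information, I = f'(s)^T C^{-1} f'(s), a standard measure of decoding efficiency under correlated noise. The sketch below quantifies how a uniform change in pairwise correlations affects this measure; the population size, tuning slopes, and correlation values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
fprime = rng.normal(0.0, 1.0, n)             # tuning-curve slopes (placeholder)

def linear_fisher_info(c):
    # Covariance matrix with uniform pairwise correlation c and unit variances
    C = c * np.ones((n, n)) + (1.0 - c) * np.eye(n)
    return fprime @ np.linalg.solve(C, fprime)

# Compare decoding efficiency before/after a uniform reduction in correlations
print(linear_fisher_info(0.2), linear_fisher_info(0.1))
```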