Results 1 - 5 of 5
1.
J Neurosci Methods; 368: 109453, 2022 Feb 15.
Article in English | MEDLINE | ID: mdl-34968626

ABSTRACT

BACKGROUND: Camera images can encode large amounts of visual information about an animal and its environment, enabling high-fidelity 3D reconstruction of both using computer vision methods. Most systems, whether markerless (e.g., deep-learning-based) or marker-based, require multiple cameras to track features across multiple points of view to enable such 3D reconstruction. However, such systems can be expensive and are challenging to set up in small-animal research apparatuses.
NEW METHODS: We present an open-source, marker-based system for tracking the head of a rodent for behavioral research that requires only a single camera with a potentially wide field of view. The system features a lightweight visual target and computer vision algorithms that together enable high-accuracy tracking of the six-degree-of-freedom position and orientation of the animal's head. The system, which requires only a single camera positioned above the behavioral arena, robustly reconstructs the pose over a wide range of head angles (360° in yaw and approximately ±120° in roll and pitch).
RESULTS: Experiments with live animals demonstrate that the system can reliably identify rat head position and orientation. Evaluations against a commercial optical tracker show that the system achieves accuracy rivaling commercial multi-camera systems.
COMPARISON WITH EXISTING METHODS: Our solution significantly improves on existing monocular marker-based tracking methods, both in accuracy and in allowable range of motion.
CONCLUSIONS: The proposed system enables the study of complex behaviors by providing robust, fine-scale measurements of rodent head motion across a wide range of orientations.
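The core recovery step in any monocular marker-based tracker of this kind is a perspective-n-point (PnP) solve: given the marker's known 3D geometry and its detected 2D image features, estimate the full 6-DOF pose from a single view. The sketch below shows that step with OpenCV as a general illustration only; the marker geometry, camera intrinsics, and detected pixel coordinates are placeholder values, not the paper's design.

import numpy as np
import cv2

# Known 3D feature coordinates in the marker's own frame (meters); a
# placeholder square target, not the paper's marker design.
object_points = np.array([[-0.01, -0.01, 0.0],
                          [ 0.01, -0.01, 0.0],
                          [ 0.01,  0.01, 0.0],
                          [-0.01,  0.01, 0.0]])

# Pixel locations of the same features detected in one image (placeholders).
image_points = np.array([[310.0, 242.0],
                         [330.0, 241.0],
                         [331.0, 261.0],
                         [309.0, 262.0]])

# Pinhole intrinsics from a prior calibration; distortion assumed negligible.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# One PnP solve yields the marker's rotation and translation in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, np.zeros(5))
if ok:
    R, _ = cv2.Rodrigues(rvec)          # 3x3 head-marker rotation matrix
    print("translation (m):", tvec.ravel())
    print("rotation:\n", R)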


Subject(s)
Algorithms; Optical Devices; Animals; Computers; Motion; Rats
2.
Front Robot AI; 8: 747917, 2021.
Article in English | MEDLINE | ID: mdl-34926590

ABSTRACT

Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges posed by significant telemetry time delay. We consider one motivating application of remote teleoperation: ground-based control of an on-orbit robot for satellite servicing. This paper presents a model-based architecture that 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments that evaluated the model-based architecture on ground-based test platforms for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators' situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will remain necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.
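One idea evaluated here, model-mediated control under delay, can be illustrated with a toy simulation: the operator's commands update a local virtual model immediately, while the remote robot receives them only after the round-trip latency. The delay value, plant model, and command stream below are hypothetical stand-ins, not the paper's architecture.

from collections import deque

LATENCY_STEPS = 5                 # e.g., a 5 s round trip at a 1 Hz command rate

local_model_pos = 0.0             # virtual model shown to the operator
remote_robot_pos = 0.0            # actual remote state, updated late
uplink = deque([0.0] * LATENCY_STEPS)   # commands in transit

commands = [0.1] * 8 + [0.0] * 5        # operator jogs the arm, then stops
for t, cmd in enumerate(commands):
    local_model_pos += cmd              # local echo is immediate
    uplink.append(cmd)
    remote_robot_pos += uplink.popleft()   # command finally reaches the robot
    print(f"t={t:2d}  model={local_model_pos:4.1f}  robot={remote_robot_pos:4.1f}")

The gap between the two printed trajectories is exactly what a fused view of the real environment and virtual model is meant to make visible to the operator.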

3.
Front Robot AI; 8: 612964, 2021.
Article in English | MEDLINE | ID: mdl-34250025

ABSTRACT

Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million have died from COVID-19, the disease caused by this virus. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is that clinical care personnel must don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen under camera vision control, commanded from a wirelessly connected tablet master device located outside the room. Engineering tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system was shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 s to 109 s.
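The visual servoing result (5.94 mm average positioning error) corresponds to a simple closed-loop scheme: measure the pixel offset between the fingertip and the target button in the camera image, command a proportional stage motion, repeat. The sketch below is an illustrative proportional loop with an assumed gain, camera scale, and idealized stage motion, not the reported implementation.

import numpy as np

PIXELS_PER_MM = 4.0   # assumed camera scale over the touch screen
GAIN = 0.5            # proportional gain on the pixel error
TOLERANCE_PX = 2.0    # stop when within ~0.5 mm of the button

target_px = np.array([412.0, 230.0])    # button center found by vision
finger_px = np.array([100.0, 320.0])    # current fingertip location in the image

while np.linalg.norm(target_px - finger_px) > TOLERANCE_PX:
    error_px = target_px - finger_px
    step_mm = GAIN * error_px / PIXELS_PER_MM   # commanded X/Y stage motion
    # Send step_mm to the Cartesian stage; here we simulate perfect motion.
    finger_px += step_mm * PIXELS_PER_MM
    print(f"residual error: {np.linalg.norm(target_px - finger_px):.1f} px")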

4.
Curr Biol; 28(24): 4029-4036.e4, 2018 Dec 17.
Article in English | MEDLINE | ID: mdl-30503617

ABSTRACT

Active sensing involves the production of motor signals for the purpose of acquiring sensory information [1-3]. The most common form of active sensing, found across animal taxa and behaviors, involves the generation of movements, e.g., whisking [4-6], touching [7, 8], sniffing [9, 10], and eye movements [11]. Active sensing movements profoundly affect the information carried by sensory feedback pathways [12-15] and are modulated by both top-down goals (e.g., measuring weight versus texture [1, 16]) and bottom-up stimuli (e.g., lights on or off [12]), but it remains unclear whether and how these movements are controlled in relation to the ongoing feedback they generate. To investigate the control of movements for active sensing, we created an experimental apparatus for freely swimming weakly electric fish, Eigenmannia virescens, that modulates the gain of reafferent feedback by adjusting the position of a refuge based on real-time videographic measurements of fish position. We discovered that fish robustly regulate sensory slip via closed-loop control of active sensing movements. Specifically, as fish performed the task of maintaining position inside the refuge [17-22], they dramatically up- or downregulated fore-aft active sensing movements in response to a 4-fold change in experimentally modulated reafferent gain. These changes in swimming movements served to maintain a constant magnitude of sensory slip, and that magnitude depended on the presence or absence of visual cues. These results indicate that fish use two controllers: one that regulates the acquisition of information via feedback from active sensing movements and another that maintains position in the refuge, a control structure that may be ubiquitous in animals [23, 24].
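The central result, movement amplitude scaled to hold sensory slip constant across a 4-fold reafferent gain change, can be caricatured in a few lines. The linear slip model and all numbers below are illustrative assumptions, not the authors' model of the fish's two controllers.

TARGET_SLIP = 1.0                       # desired sensory-slip magnitude (a.u.)

for reafferent_gain in (0.5, 1.0, 2.0):         # spans a 4-fold gain change
    # Assume slip magnitude ~ gain * movement amplitude; a fish holding slip
    # constant must then scale its fore-aft amplitude inversely with gain.
    amplitude = TARGET_SLIP / reafferent_gain
    slip = reafferent_gain * amplitude
    print(f"gain={reafferent_gain:3.1f}  amplitude={amplitude:4.2f}  slip={slip:4.2f}")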


Subject(s)
Sensory Feedback/physiology; Gymnotiformes/physiology; Swimming/physiology; Animals; Video Recording
5.
Urology; 73(4): 896-900, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19193404

ABSTRACT

OBJECTIVES: To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic (CT) imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy.
METHODS: Stereoscopic video segments from one patient undergoing robot-assisted laparoscopic partial nephrectomy for a tumor and from another undergoing the procedure for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D CT image. After calibrating the camera and overlay, 3D-to-3D registration between the model and the surgical recording was created using a modified iterative closest point technique. Image-based tracking then followed selected fixed points on the kidney surface to augment the image-to-model registration.
RESULTS: Our investigation demonstrated that the kidney surface can be identified and tracked in real time in intraoperative video recordings, with the 3D models of the kidney, tumor (or stone), and collecting system overlaid semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm.
CONCLUSIONS: Augmented reality overlay of reconstructed 3D CT images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that requires neither external navigation tracking systems nor preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
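The registration core, iterative closest point (ICP), alternates two steps: match each intraoperative surface point to its nearest model point, then solve for the rigid transform that best aligns the matched pairs (Kabsch/SVD). The sketch below is the textbook algorithm on synthetic stand-in point clouds, not the authors' modified variant.

import numpy as np

def best_rigid_transform(src, dst):
    # Least-squares rotation R and translation t mapping src onto dst (Kabsch).
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst_c - R @ src_c

rng = np.random.default_rng(0)
model = rng.normal(size=(200, 3))               # stand-in for CT surface points

theta = np.deg2rad(10.0)                        # small misalignment to recover
true_R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
scene = model @ true_R.T + np.array([0.5, -0.3, 0.2])   # "intraoperative" cloud

src = scene.copy()
for _ in range(30):                             # ICP iterations
    # Nearest-neighbour correspondences (brute force, for clarity only).
    d2 = ((src[:, None, :] - model[None, :, :]) ** 2).sum(axis=-1)
    matches = model[d2.argmin(axis=1)]
    R, t = best_rigid_transform(src, matches)
    src = src @ R.T + t

d2 = ((src[:, None, :] - model[None, :, :]) ** 2).sum(axis=-1)
print("nearest-point RMS after ICP:", np.sqrt(d2.min(axis=1).mean()))

In practice the intraoperative surface would come from stereo reconstruction of the endoscopic video, and the nearest-neighbour search would use a k-d tree rather than brute force.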


Subject(s)
Imaging, Three-Dimensional; Kidney Calculi/diagnosis; Kidney Calculi/surgery; Kidney Neoplasms/diagnosis; Kidney Neoplasms/surgery; Laparoscopy/methods; Nephrectomy/methods; Robotics; Surgery, Computer-Assisted; Tomography, X-Ray Computed; Algorithms; Computer Systems; Feasibility Studies; Humans; Video Recording