Results 1 - 6 of 6
1.
Sci Adv ; 9(35): eadf8068, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37656798

ABSTRACT

The SMART-BARN (scalable multimodal arena for real-time tracking behavior of animals in large numbers) achieves fast, robust acquisition of movement, behavior, communication, and interactions of animals in groups, within a large (14.7 meters by 6.6 meters by 3.8 meters), three-dimensional environment using multiple information channels. Behavior is measured from a wide range of taxa (insects, birds, mammals, etc.) and body size (from moths to humans) simultaneously. This system integrates multiple, concurrent measurement techniques including submillimeter precision and high-speed (300 hertz) motion capture, acoustic recording and localization, automated behavioral recognition (computer vision), and remote computer-controlled interactive units (e.g., automated feeders and animal-borne devices). The data streams are available in real time allowing highly controlled and behavior-dependent closed-loop experiments, while producing comprehensive datasets for offline analysis. The diverse capabilities of SMART-BARN are demonstrated through three challenging avian case studies, while highlighting its broad applicability to the fine-scale analysis of collective animal behavior across species.


Subject(s)
Animal behavior, Movement, Humans, Animals, Mammals
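Because the SMART-BARN data streams are available in real time, experiments can be made behavior-dependent and closed-loop. A minimal sketch of such a rule follows; the zone coordinates, threshold, and function names are hypothetical illustrations, not part of the published system.

```python
# Minimal sketch of a behavior-dependent closed-loop rule of the kind
# SMART-BARN enables: when a tracked animal enters a trigger zone, an
# interactive unit (e.g., an automated feeder) is activated.
# All names and thresholds here are assumptions for illustration.

def in_trigger_zone(position, zone_center, zone_radius):
    """Return True if a tracked 3D position lies within a spherical zone."""
    dist = sum((p - c) ** 2 for p, c in zip(position, zone_center)) ** 0.5
    return dist <= zone_radius

def closed_loop_step(position, zone_center=(7.0, 3.0, 0.5), zone_radius=0.3):
    """One loop iteration: map a real-time tracked position to an action."""
    if in_trigger_zone(position, zone_center, zone_radius):
        return "activate_feeder"
    return "idle"
```

In a real deployment this step would run once per tracking frame (300 Hz in the system described), with the action dispatched to the computer-controlled unit.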
2.
Sci Rep ; 12(1): 19113, 2022 11 09.
Article in English | MEDLINE | ID: mdl-36352049

ABSTRACT

Using a motion-capture system and custom head-calibration methods, we reconstructed the head-centric view of freely behaving pigeons and examined how they orient their head when presented with various types of attention-getting objects at various relative locations. Pigeons predominantly employed their retinal specializations to view a visual target, namely their foveas projecting laterally (at an azimuth of ± 75°) into the horizon, and their visually-sensitive "red areas" projecting broadly into the lower-frontal visual field. Pigeons used their foveas to view any distant object while they used their red areas to view a nearby object on the ground (< 50 cm). Pigeons "fixated" a visual target with their foveas; the intervals between head-saccades were longer when the visual target was viewed by birds' foveas compared to when it was viewed by any other region. Furthermore, pigeons showed a weak preference to use their right eye to examine small objects distinctive in detailed features and their left eye to view threat-related or social stimuli. Despite the known difficulty in identifying where a bird is attending, we show that it is possible to estimate the visual attention of freely-behaving birds by tracking the projections of their retinal specializations in their visual field with cutting-edge methods.


Subject(s)
Motion perception, Visual fields, Animals, Columbidae, Saccades, Retina
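The abstract's thresholds (foveas projecting laterally at an azimuth of about ±75°, red areas used for ground objects closer than 50 cm) suggest a simple decision rule for which retinal specialization views a target. The sketch below is an illustration of that rule only; the angular tolerance and function names are assumptions, not values from the paper.

```python
# Illustrative classifier from the thresholds reported in the abstract:
# foveas project laterally at ~ +/-75 deg azimuth and view distant targets;
# the "red areas" view nearby objects on the ground (< 50 cm).
# The 10-degree tolerance is an assumed value for illustration.

def viewing_region(azimuth_deg, distance_cm, on_ground, fovea_tol_deg=10.0):
    """Guess which retinal specialization a pigeon uses for a target."""
    if distance_cm < 50 and on_ground:
        return "red area"          # nearby ground object
    if abs(abs(azimuth_deg) - 75.0) <= fovea_tol_deg:
        return "fovea"             # lateral target near the foveal azimuth
    return "other"                 # any other retinal region
```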
3.
IEEE Trans Vis Comput Graph ; 26(5): 2073-2083, 2020 05.
Article in English | MEDLINE | ID: mdl-32070970

ABSTRACT

The core idea in an XR (VR/MR/AR) application is to digitally stimulate one or more sensory systems (e.g., visual, auditory, olfactory) of the human user in an interactive way to achieve an immersive experience. Since the early 2000s, biologists have been using Virtual Environments (VE) to investigate the mechanisms of behavior in non-human animals including insects, fish, and mammals. VEs have become reliable tools for studying vision, cognition, and sensory-motor control in animals. In turn, the knowledge gained from studying such behaviors can be harnessed by researchers designing biologically inspired robots, smart sensors, and multi-agent artificial intelligence. VE for animals is becoming a widely used application of XR technology, but such applications have not previously been reported in the technical literature related to XR. Biologists and computer scientists can benefit greatly from deepening interdisciplinary research in this emerging field, and together we can develop new methods for conducting fundamental research in behavioral sciences and engineering. To support our argument, we present this review, which provides an overview of animal behavior experiments conducted in virtual environments.


Subject(s)
Animal behavior/physiology, Computer graphics, Research, Virtual reality, Animals, Augmented reality, Environment, Equipment design, Fishes, Insects, Physical stimulation, Rats, User interface
4.
Elife ; 8, 2019 10 01.
Article in English | MEDLINE | ID: mdl-31570119

ABSTRACT

Quantitative behavioral measurements are important for answering questions across scientific disciplines-from neuroscience to ecology. State-of-the-art deep-learning methods offer major advances in data quality and detail by allowing researchers to automatically estimate locations of an animal's body parts directly from images or videos. However, currently available animal pose estimation methods have limitations in speed and robustness. Here, we introduce a new easy-to-use software toolkit, DeepPoseKit, that addresses these problems using an efficient multi-scale deep-learning model, called Stacked DenseNet, and a fast GPU-based peak-detection algorithm for estimating keypoint locations with subpixel precision. These advances improve processing speed >2x with no loss in accuracy compared to currently available methods. We demonstrate the versatility of our methods with multiple challenging animal pose estimation tasks in laboratory and field settings-including groups of interacting individuals. Our work reduces barriers to using advanced tools for measuring behavior and has broad applicability across the behavioral sciences.


Subject(s)
Animal behavior/physiology, Computational biology/methods, Deep learning, Software, Algorithms, Animals, Drosophila melanogaster/physiology, Equidae/physiology, Grasshoppers/physiology, Locomotion/physiology
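A key ingredient named in the abstract is fast peak detection with subpixel precision on confidence maps. A standard way to achieve subpixel precision is to fit a parabola through the integer peak and its neighbors along each axis; the sketch below is a generic CPU illustration of that idea in NumPy, not DeepPoseKit's actual GPU implementation.

```python
import numpy as np

# Subpixel keypoint localization on a 2D confidence map: find the integer
# argmax, then refine each axis by fitting a parabola through the peak
# value and its two neighbors (standard three-point quadratic refinement).

def subpixel_peak(conf_map):
    """Return (row, col) of the map's maximum, refined to subpixel precision."""
    r, c = np.unravel_index(np.argmax(conf_map), conf_map.shape)

    def refine(lo, mid, hi):
        # Vertex offset of the parabola through (-1, lo), (0, mid), (1, hi).
        denom = lo - 2.0 * mid + hi
        return 0.0 if denom == 0 else 0.5 * (lo - hi) / denom

    nrows, ncols = conf_map.shape
    dr = refine(conf_map[r - 1, c], conf_map[r, c], conf_map[r + 1, c]) if 0 < r < nrows - 1 else 0.0
    dc = refine(conf_map[r, c - 1], conf_map[r, c], conf_map[r, c + 1]) if 0 < c < ncols - 1 else 0.0
    return r + dr, c + dc
```

On a smooth, roughly Gaussian peak this refinement recovers the true maximum to within a few hundredths of a pixel, which is why it is a common post-processing step for heatmap-based pose estimators.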
5.
IEEE Trans Vis Comput Graph ; 21(11): 1211-20, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26439823

ABSTRACT

In the Shader Lamps concept, a projector-camera system augments physical objects with projected virtual textures, provided that a precise intrinsic and extrinsic calibration of the system is available. Calibrating such systems has been an elaborate and lengthy task in the past and required a special calibration apparatus. Self-calibration methods, in turn, estimate calibration parameters automatically with no manual effort. However, they inherently lack global scale and are fairly sensitive to input data. We propose a new semi-automatic calibration approach for projector-camera systems that, unlike existing auto-calibration approaches, additionally recovers the necessary global scale by projecting on an arbitrary object of known geometry. To this end, our method combines surface registration with bundle adjustment optimization on points reconstructed from structured light projections to refine a solution that is computed from the decomposition of the fundamental matrix. In simulations on virtual data and experiments with real data, we demonstrate that our approach estimates the global scale robustly and is furthermore able to improve incorrectly guessed intrinsic and extrinsic calibration parameters, thus outperforming comparable metric rectification algorithms.
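Self-calibration recovers structure only up to an unknown scale; the approach above fixes this by projecting onto an object of known geometry. A minimal sketch of the underlying scale-recovery idea (not the paper's full registration and bundle-adjustment pipeline): compare pairwise distances of the up-to-scale reconstruction against the known model and solve for the single scale factor in least squares.

```python
import numpy as np

# Least-squares recovery of the global scale factor s that maps an
# up-to-scale 3D reconstruction onto a model of known geometry, using
# pairwise point distances (which are invariant to rigid motion).

def recover_scale(recon_pts, model_pts):
    """Return s minimizing sum over pairs of (s * d_recon - d_model)^2."""
    recon = np.asarray(recon_pts, dtype=float)
    model = np.asarray(model_pts, dtype=float)
    d_recon, d_model = [], []
    n = len(recon)
    for i in range(n):
        for j in range(i + 1, n):
            d_recon.append(np.linalg.norm(recon[i] - recon[j]))
            d_model.append(np.linalg.norm(model[i] - model[j]))
    d_recon = np.array(d_recon)
    d_model = np.array(d_model)
    # Closed-form 1D least squares: s = <d_recon, d_model> / <d_recon, d_recon>
    return float(d_recon @ d_model / (d_recon @ d_recon))
```

Using distances rather than raw coordinates makes the estimate independent of the rigid alignment between the two point sets, which the surface-registration step handles separately.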

6.
Int J Comput Assist Radiol Surg ; 9(6): 987-96, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24664269

ABSTRACT

PURPOSE: The augmented reality (AR) fluoroscope augments an X-ray image by video and provides the surgeon with a real-time in situ overlay of the anatomy. The overlay alignment is crucial for diagnostic and intra-operative guidance, so precise calibration of the AR fluoroscope is required. The first and most complex step of the calibration procedure is the determination of the X-ray source position. Currently, this is achieved using a biplane phantom with movable metallic rings on its top layer and fixed X-ray opaque markers on its bottom layer. The metallic rings must be moved to positions where at least two pairs of rings and markers are isocentric in the X-ray image. The current "trial and error" calibration process requires acquisition of many X-ray images, a task that is both time-consuming and radiation-intensive. An improved process was developed and tested for C-arm calibration. METHODS: Video guidance was used to drive the calibration procedure to minimize both X-ray exposure and the time involved. For this, a homography between X-ray and video images is estimated. This homography is valid for the plane at which the metallic rings are positioned and is employed to guide the calibration procedure. Eight users with varying calibration experience (i.e., 2 experts, 2 semi-experts, 4 novices) were asked to participate in the evaluation. RESULTS: The video-guided technique reduced the number of intra-operative X-ray calibration images by 89% and decreased the total time required by 59%. CONCLUSION: A video-based C-arm calibration method has been developed that improves the usability of the AR fluoroscope with a friendlier interface, reduced calibration time and clinically acceptable radiation doses.


Subject(s)
Calibration, Clinical competence, Fluoroscopy/methods, Orthopedic procedures/methods, Equipment design, Humans, Imaging phantoms, Radiographic image enhancement/methods, Computer-assisted radiographic image interpretation/methods, Computed tomography/methods, Video recording
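The video guidance in the method above rests on a plane-induced homography between the X-ray and video images, valid at the plane of the metallic rings. Below is a minimal direct linear transform (DLT) sketch for estimating such a 3x3 homography from four or more point correspondences; the paper's actual estimation procedure may differ.

```python
import numpy as np

# DLT estimation of a plane-induced homography H with dst ~ H @ src
# (2D points in homogeneous coordinates). Each correspondence contributes
# two linear constraints; the solution is the SVD null-space vector.

def estimate_homography(src, dst):
    """Estimate H (3x3) from >= 4 correspondences of 2D points."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]          # fix the projective scale

def apply_homography(h, pt):
    """Map a 2D point through H with homogeneous division."""
    x, y, w = h @ np.array([pt[0], pt[1], 1.0])
    return x / w, y / w
```

Because the homography holds only for the ring plane, points off that plane map with a parallax error, which is exactly why the guidance is restricted to positioning the rings.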