ABSTRACT
Automated observation and analysis of behavior is important to facilitate progress in many fields of science. Recent developments in deep learning have enabled progress in object detection and tracking, but rodent behavior recognition struggles to exceed 75-80% accuracy for ethologically relevant behaviors. We investigate the main reasons why and distinguish three aspects of behavior dynamics that are difficult to automate. We isolate these aspects in an artificial dataset and reproduce their effects with state-of-the-art behavior recognition models. An effectively unlimited amount of labeled training data with minimal input noise and representative dynamics will enable researchers to optimize behavior recognition architectures and get closer to human-like recognition performance for behaviors with challenging dynamics.
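The artificial dataset itself is not described in detail in this abstract; as an illustration only, the Python sketch below shows how one might generate synthetic, noise-free trajectories with frame-level behavior labels in which bout duration and movement speed are fully controlled. All function names and parameter values here are hypothetical, not taken from the paper.

```python
# Hypothetical sketch: synthesize labeled trajectories whose dynamics
# (bout duration, movement speed) are fully controlled, so recognition
# models can be stressed on specific aspects of behavior dynamics.
import numpy as np

def synth_sequence(n_frames=1000, fps=25, mean_bout_s=2.0, seed=0):
    rng = np.random.default_rng(seed)
    xy = np.zeros((n_frames, 2))           # noise-free x/y position per frame
    labels = np.zeros(n_frames, dtype=int)
    t = 0
    while t < n_frames:
        bout = max(1, int(rng.exponential(mean_bout_s) * fps))  # bout length in frames
        beh = int(rng.integers(0, 2))       # 0 = 'rest-like', 1 = 'walk-like'
        speed = 0.1 if beh == 0 else 2.0    # step size in arbitrary units per frame
        steps = rng.normal(scale=speed, size=(min(bout, n_frames - t), 2))
        xy[t:t + len(steps)] = xy[t - 1] + np.cumsum(steps, axis=0)
        labels[t:t + len(steps)] = beh
        t += len(steps)
    return xy, labels

xy, labels = synth_sequence()
print(xy.shape, np.bincount(labels))        # trajectory shape and frames per behavior
```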
ABSTRACT
The reproducibility crisis (or replication crisis) in biomedical research is a particularly existential and under-addressed issue in the field of behavioral neuroscience, where, in spite of efforts to standardize testing and assay protocols, several known and unknown sources of confounding environmental factors add to variance. Human interference, as well as novelty-induced anxiety, is a major contributor to variability both within and across laboratories. Attempts to reduce human interference and to measure more "natural" behaviors in subjects have led to the development of automated home-cage monitoring systems. These systems enable prolonged and longitudinal recordings and provide large volumes of continuous measures of spontaneous behavior that can be analyzed across multiple time scales. In this review, a diverse team of neuroscientists and product developers share their experiences using such an automated monitoring system that combines Noldus PhenoTyper® home-cages with the video-based tracking software EthoVision® XT to extract digital biomarkers of motor, emotional, social and cognitive behavior. After presenting our working definition of a "home-cage", we compare home-cage testing with more conventional out-of-cage tests (e.g., the open field) and outline the various advantages of the former, including opportunities for within-subject analyses and assessments of circadian and ultradian activity. Next, we address technical issues pertaining to the acquisition of behavioral data, such as the fine-tuning of the tracking software and the potential for integration with biotelemetry and optogenetics. Finally, we provide guidance on which behavioral measures to emphasize, how to filter, segment, and analyze behavior, and how to use analysis scripts. We summarize how the PhenoTyper has applications in the study of neuropharmacology as well as animal models of neurodegenerative and neuropsychiatric illness. Looking forward, we examine current challenges and the impact of new developments, including the automated recognition of specific behaviors, unambiguous tracking of individuals in a social context, the development of more animal-centered measures of behavior, and ways of dealing with large datasets. Together, we advocate that by embracing standardized home-cage monitoring platforms like the PhenoTyper, we are poised to directly assess issues pertaining to reproducibility and, more importantly, measure features of rodent behavior under more ethologically relevant scenarios.
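As a concrete illustration of the filtering and segmentation steps mentioned above (not the actual EthoVision XT export format nor the review's analysis scripts), the sketch below segments a tracked centre-point trace into moving versus lingering bouts using a simple velocity threshold. Column names, frame rate and thresholds are assumptions.

```python
# Illustrative sketch only: segment a tracked centre-point trace into
# 'moving' vs 'lingering' bouts with a simple velocity threshold.
import numpy as np
import pandas as pd

def segment_bouts(df, fps=25.0, speed_thresh_cm_s=2.0, min_bout_s=1.0):
    """df is assumed to have 'x_cm' and 'y_cm' columns sampled at a fixed frame rate."""
    dx = df["x_cm"].diff().fillna(0.0)
    dy = df["y_cm"].diff().fillna(0.0)
    speed = np.hypot(dx, dy) * fps                                  # cm/s per frame
    # smooth over ~1 s before thresholding to suppress tracking jitter
    moving = speed.rolling(int(fps), min_periods=1).median() > speed_thresh_cm_s
    # collapse consecutive frames with the same state into bouts
    bout_id = (moving != moving.shift()).cumsum()
    bouts = (df.assign(moving=moving, bout=bout_id)
               .groupby("bout")
               .agg(state=("moving", "first"), n_frames=("moving", "size")))
    bouts["duration_s"] = bouts["n_frames"] / fps
    return bouts[bouts["duration_s"] >= min_bout_s]
```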
ABSTRACT
Automated observation and analysis of rodent behavior is important to facilitate research progress in neuroscience and pharmacology. Available automated systems lack adaptivity and can benefit from advances in AI. In this work we compare a state-of-the-art conventional rat behavior recognition (RBR) system to an advanced deep learning method and evaluate its performance within and across experimental setups. We show that using a multi-fiber network (MF-Net) in conjunction with data augmentation strategies improves within-setup performance over the conventional RBR system. Two new methods for video augmentation were used: video cutout and dynamic illumination change. However, we also show that these improvements do not transfer to videos recorded in different experimental setups, for which we discuss possible causes and cures.
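The abstract names the two video augmentation methods but does not give their implementation; the sketch below shows one plausible reading of "video cutout" and "dynamic illumination change" applied to a clip stored as a (T, H, W, C) uint8 NumPy array. All parameter choices are assumptions, not values from the paper.

```python
# Hedged sketch of the two augmentations named above; exact parameters
# and implementation details used with MF-Net are not given in the abstract.
import numpy as np

def video_cutout(clip, max_frac=0.3, rng=None):
    """Zero out one rectangular region at a fixed location across all frames."""
    rng = np.random.default_rng() if rng is None else rng
    t, h, w, c = clip.shape
    ch = int(h * max_frac * rng.random())
    cw = int(w * max_frac * rng.random())
    y0 = int(rng.integers(0, h - ch + 1))
    x0 = int(rng.integers(0, w - cw + 1))
    out = clip.copy()
    out[:, y0:y0 + ch, x0:x0 + cw, :] = 0
    return out

def dynamic_illumination(clip, max_gain=0.4, rng=None):
    """Apply a smoothly varying brightness gain over time (simulated lighting change)."""
    rng = np.random.default_rng() if rng is None else rng
    t = clip.shape[0]
    start, end = 1.0 + max_gain * (2.0 * rng.random(2) - 1.0)   # gains in [0.6, 1.4]
    gains = np.linspace(start, end, t)[:, None, None, None]
    return np.clip(clip.astype(np.float32) * gains, 0, 255).astype(np.uint8)
```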
Subject(s)
Deep Learning, Neurosciences, Animals, Photic Stimulation, Rats, Recognition (Psychology), Rodents
ABSTRACT
BACKGROUND: Social behavior is an important aspect of rodent models. Automated measuring tools that make use of video analysis and machine learning are an increasingly attractive alternative to manual annotation. Because machine learning-based methods need to be trained, it is important that they are validated using data from different experimental settings. NEW METHOD: To develop and validate automated measuring tools, there is a need for annotated rodent interaction datasets. Currently, the availability of such datasets is limited to two mouse datasets. We introduce the first publicly available rat social interaction dataset, RatSI. RESULTS: We demonstrate the practical value of the novel dataset by using it as the training set for a rat interaction recognition method. We show that behavior variations induced by the experimental setting can lead to reduced performance, which illustrates the importance of cross-dataset validation. Consequently, we add a simple adaptation step to our method and improve the recognition performance. COMPARISON WITH EXISTING METHODS: Most existing methods are trained and evaluated in one experimental setting, which limits the predictive power of the evaluation to that particular setting. We demonstrate that cross-dataset experiments provide more insight into the performance of classifiers. CONCLUSIONS: With our novel, public dataset we encourage the development and validation of automated recognition methods. We are convinced that cross-dataset validation enhances our understanding of rodent interactions and facilitates the development of more sophisticated recognition methods. Combining such methods with adaptation techniques may enable us to apply automated recognition to a variety of animals and experimental settings.
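The adaptation step itself is not spelled out in this abstract. One simple, commonly used option in a cross-dataset experiment is to re-standardize features per dataset before classification; the hypothetical scikit-learn sketch below illustrates that idea, with feature extraction assumed to be given and the classifier chosen arbitrarily.

```python
# Hypothetical cross-dataset evaluation with a simple per-dataset
# feature re-standardization as the adaptation step.
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

def cross_dataset_eval(X_src, y_src, X_tgt, y_tgt, adapt=True):
    """Train on the source dataset, evaluate on the target dataset."""
    src_scaler = StandardScaler().fit(X_src)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(src_scaler.transform(X_src), y_src)
    # Adaptation: normalize target features with target statistics instead of
    # reusing the source scaler, reducing covariate shift between setups.
    tgt_scaler = StandardScaler().fit(X_tgt) if adapt else src_scaler
    pred = clf.predict(tgt_scaler.transform(X_tgt))
    return f1_score(y_tgt, pred, average="macro")
```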
Subject(s)
Animal Behavior/physiology, Behavioral Research/methods, Datasets as Topic, Automated Pattern Recognition/methods, Social Behavior, Animals, Behavioral Research/standards, Male, Automated Pattern Recognition/standards, Rats, Sprague-Dawley Rats
ABSTRACT
The automated measurement of rodent behaviour is crucial to advance research in neuroscience and pharmacology. Rats and mice are used as models for human diseases; their behaviour is studied to discover and develop new drugs for psychiatric and neurological disorders and to establish the effect of genetic variation on behavioural changes. Such behaviour is primarily labelled by humans. Manual annotation is labour intensive, error-prone and subject to individual interpretation. We present a system for automated behaviour recognition (ABR) that recognises the rat behaviours 'drink', 'eat', 'sniff', 'groom', 'jump', 'rear unsupported', 'rear wall', 'rest', 'twitch' and 'walk'. The ABR system needs no on-site training; the only inputs required are the sizes of the cage and the animal. This is a major advantage over other systems that need to be trained with hand-labelled data before they can be used in a new experimental setup. Furthermore, ABR uses an overhead camera view, which is more practical in lab situations and lends itself to high-throughput testing more readily than a side-view setup. ABR has been validated by comparison with manual behavioural scoring by an expert. For this, animals were treated with two types of psychopharmaca: a stimulant drug (amphetamine) and a sedative drug (diazepam). The effects of drug treatment on certain behavioural categories were measured and compared for both analysis methods. Statistical analysis showed that ABR detected behavioural effects similar to those found by the human observer. We conclude that our ABR system represents a significant step forward in the automated observation of rodent behaviour.
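As a hypothetical illustration of this validation strategy, the sketch below computes each animal's time budget per behaviour from frame-level labels and tests a treatment effect the same way regardless of whether the labels came from ABR or from the human observer. The statistical test, frame rate and data layout are assumptions, not details from the paper.

```python
# Hypothetical sketch: per-animal time budgets and a treatment-effect test,
# applied identically to ABR output and manual scoring for comparison.
import numpy as np
from scipy.stats import mannwhitneyu

BEHAVIOURS = ["drink", "eat", "sniff", "groom", "jump",
              "rear unsupported", "rear wall", "rest", "twitch", "walk"]

def time_budget(frame_labels, fps=25.0):
    """Seconds spent in each behaviour for one animal (labels are behaviour names)."""
    labels = np.asarray(frame_labels)
    return {b: float(np.sum(labels == b)) / fps for b in BEHAVIOURS}

def treatment_effect(budgets_drug, budgets_vehicle, behaviour="walk"):
    """Mann-Whitney U test on per-animal durations of one behaviour."""
    drug = [b[behaviour] for b in budgets_drug]
    veh = [b[behaviour] for b in budgets_vehicle]
    return mannwhitneyu(drug, veh, alternative="two-sided")
```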