ABSTRACT
Lameness in dairy cattle is a costly and highly prevalent problem that affects all aspects of sustainable dairy production, including animal welfare. Automated gait assessment would allow cows' walking patterns to be evaluated frequently and with limited labor. Combined with suitable interpretation algorithms, this could enable more timely detection of locomotion problems, facilitating early intervention and treatment, which is crucial to reduce the effect of abnormal behavior and pain on animal welfare. Gait features of dairy cows can potentially be derived from key points that locate crucial anatomical landmarks on a cow's body. The aim of this study is 2-fold: (1) to demonstrate automated detection of dairy cows' key points in a practical indoor setting with natural occlusions from gates and races, and (2) to propose the postprocessing steps needed to make these key points suitable for subsequent gait feature calculations. Both the automated detection of key points and their postprocessing are crucial prerequisites for camera-based automated locomotion monitoring in a real farm environment. Side-view video footage of 34 Holstein-Friesian dairy cows, captured as they exited the milking parlor, was used for model development. From these videos, 758 samples of 2 successive frames were extracted. A previously developed deep learning model, T-LEAP, was trained to detect 17 key points on cows in our indoor farm environment with natural occlusions. To this end, the dataset of 758 samples was randomly split into a training set (n = 22 cows; 388 samples), a validation set (n = 7 cows; 108 samples), and a test set (n = 15 cows; 262 samples). The performance of T-LEAP in automatically assigning key points in our indoor setting was assessed as the average percentage of correctly detected key points, using a threshold of 0.2 of the head length (PCKh0.2).
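To make the evaluation metric concrete, the PCKh criterion described above can be sketched in a few lines of plain Python. This is a minimal illustration only; the function name, input layout (per-sample lists of (x, y) pixel coordinates), and the Euclidean distance via `math.hypot` are our assumptions, not code from the study:

```python
import math

def pckh(pred, gt, head_lengths, alpha=0.2):
    """Head-normalized Percentage of Correctly detected Keypoints (PCKh).

    pred, gt: per-sample lists of (x, y) pixel coordinates, one per key point.
    head_lengths: per-sample head length in pixels, used as the normalizer.
    A detection counts as correct when its distance to the ground truth is
    below alpha * head length; alpha = 0.2 gives the PCKh0.2 used here.
    """
    correct, total = 0, 0
    for sample_pred, sample_gt, head in zip(pred, gt, head_lengths):
        for (px, py), (gx, gy) in zip(sample_pred, sample_gt):
            if math.hypot(px - gx, py - gy) < alpha * head:
                correct += 1
            total += 1
    return 100.0 * correct / total
```

For example, with a head length of 100 px the PCKh0.2 tolerance is 20 px, so a detection 10 px from the ground truth counts as correct while one 30 px away does not.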
On the test set, the model achieved a good result, with a PCKh0.2 of 89% across all 17 key points. Detection of the key points on the cow's back (n = 3 key points) performed poorest (PCKh0.2: 59%). In addition to this indoor performance, a more detailed study of the detection performance was conducted to formulate the postprocessing steps necessary to use these key points for gait feature calculations and subsequent automated locomotion monitoring. This detailed study included evaluating the detection performance in multiple directions, and revealed that the key points on a cow's back performed poorest in the horizontal direction. Based on this in-depth study, we recommend implementing the outlined postprocessing techniques to address the following issues: (1) correcting camera distortion, (2) rectifying erroneous key point detections, and (3) establishing the procedures necessary for translating hoof key points into gait features.
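As one possible illustration of issue (2), an erroneous detection can be identified by comparing each frame's key point position against the local temporal median of its trajectory and replacing outliers with that median. The sketch below is a minimal plain-Python example; the window size and deviation threshold are illustrative assumptions, not values or code from the study:

```python
import math
import statistics

def rectify_track(track, window=3, max_dev=30.0):
    """Rectify a single key point trajectory via a local temporal median.

    track: list of (x, y) pixel positions, one per frame.
    window: odd number of frames over which the local median is taken
    (the window shrinks at the sequence edges).
    max_dev: distance in pixels beyond which a detection is treated as
    erroneous and replaced by the local median (illustrative threshold).
    """
    half = window // 2
    n = len(track)
    rectified = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        med_x = statistics.median(p[0] for p in track[lo:hi])
        med_y = statistics.median(p[1] for p in track[lo:hi])
        # Keep plausible detections; replace outliers with the local median.
        if math.hypot(track[i][0] - med_x, track[i][1] - med_y) > max_dev:
            rectified.append((med_x, med_y))
        else:
            rectified.append(track[i])
    return rectified
```

A single spurious jump in an otherwise smooth hoof trajectory is pulled back toward its neighbors, while correctly detected frames pass through unchanged; smoother trajectories of this kind are a natural starting point for computing gait features such as step length and timing.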