Results 1 - 20 of 3,748
1.
Sci Rep ; 14(1): 17779, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39090237

ABSTRACT

Video-based monitoring is now essential in cattle farm management systems for the automated evaluation of cow health, encompassing body condition scores, lameness detection, calving events, and other factors. To efficiently monitor the well-being of each individual animal, it is vital to identify them automatically in real time. Although various techniques are available for cattle identification, many depend on radio frequency or visible ear tags, which are prone to being lost or damaged and can thus create financial difficulties for farmers. Therefore, this paper presents a novel method for tracking and identifying cattle with an RGB camera. As a first step, we employ the YOLOv8 (You Only Look Once) model to detect the cattle in the video. The sample data consist of raw video recorded by cameras installed above the designated lane used by cattle after milking and above the rotating milking parlor. As a second step, the detected cattle are continuously tracked and assigned unique local IDs, and the tracked images of each animal are stored in folders according to their respective IDs, facilitating the identification process. Features are then extracted from the images in each folder using a VGG (Visual Geometry Group) feature extractor. As a final step, an SVM (Support Vector Machine) classifier predicts an identity for each image, and the final ID of an animal is determined by the most frequently predicted ID across its tracked images. The outcomes of this paper serve as a proof of concept that combining VGG features with an SVM classifier is an effective and promising approach for an automatic cattle identification system.
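As a sketch of this pipeline's final two steps, the snippet below extracts VGG features from pre-cropped, tracked cattle images and resolves a track's final ID by majority vote over per-frame SVM predictions; the model choice and shapes are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the VGG-feature + SVM identification stage; assumes cattle
# have already been detected, tracked, and cropped into per-track lists
# of PIL images. Illustrative, not the authors' exact configuration.
import numpy as np
import torch
from torchvision import models, transforms
from sklearn.svm import SVC

# VGG16 backbone as a fixed feature extractor (final classifier layer removed).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
vgg.classifier = vgg.classifier[:-1]  # keep the 4096-d penultimate features
vgg.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(pil_images):
    """Stack VGG features for a list of cropped cattle images."""
    with torch.no_grad():
        batch = torch.stack([preprocess(im) for im in pil_images])
        return vgg(batch).numpy()

def identify_track(svm: SVC, track_images) -> int:
    """Final ID of a tracked animal: majority vote over per-frame predictions."""
    preds = svm.predict(extract_features(track_images))
    ids, counts = np.unique(preds, return_counts=True)
    return int(ids[np.argmax(counts)])  # most frequent ID wins
```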


Subject(s)
Video Recording , Animals , Cattle , Video Recording/methods , Artificial Intelligence , Animal Identification Systems/methods , Animal Identification Systems/instrumentation , Support Vector Machine , Dairying/methods , Female , Image Processing, Computer-Assisted/methods
2.
Sensors (Basel) ; 24(13)2024 Jun 26.
Article in English | MEDLINE | ID: mdl-39000914

ABSTRACT

The acquisition of the body temperature of animals kept in captivity in biology laboratories is crucial for several studies in the field of animal biology. Traditionally, the acquisition process was carried out manually, which guaranteed neither accuracy nor consistency in the acquired data and was painful for the animal. The process was later switched to a semi-manual one using a thermal camera, but it still involved manually clicking on each part of the animal's body every 20 s of video to obtain temperature values, making it time-consuming, non-automatic, and difficult. This project aims to automate the acquisition process through the automatic recognition of a lizard's body parts and the reading of their temperatures from video captured simultaneously by two cameras: an RGB camera and a thermal camera. The first camera detects the locations of the lizard's various body parts using artificial intelligence techniques, and the second camera allows the respective temperature of each part to be read. Due to the lack of lizard datasets, either in the biology laboratory or online, a dataset had to be created from scratch, containing annotations of the lizard and six of its body parts. YOLOv5 was used to detect the lizard and its body parts in RGB images, achieving a precision of 90.00% and a recall of 98.80%. After an initial calibration, the RGB and thermal camera images are properly registered, making it possible to locate the lizard, even when it is at the same temperature as its surrounding environment, through a coordinate conversion from the RGB image to the thermal image. The thermal image carries a colour temperature scale with the respective maximum and minimum temperature values, which is used to interpret each pixel of the thermal image, thus allowing the correct temperature to be read at each part of the lizard.
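A minimal sketch of the coordinate conversion and temperature read-out described above, assuming a planar homography between the two views and a linear mapping from pixel intensity to the colour-scale bounds; the calibration points and scale values are illustrative.

```python
# Map a body-part location detected in the RGB frame into the thermal
# frame, then read temperature by linearly interpolating between the
# colour scale's min/max. Homography points are illustrative assumptions.
import cv2
import numpy as np

# Homography estimated once from matched calibration points seen by both
# cameras (e.g. corners of a heated calibration target).
rgb_pts = np.float32([[100, 80], [520, 90], [510, 400], [110, 390]])
thr_pts = np.float32([[20, 15], [300, 18], [295, 225], [25, 220]])
H, _ = cv2.findHomography(rgb_pts, thr_pts)

def rgb_to_thermal(x: float, y: float) -> tuple[float, float]:
    """Convert an RGB-image coordinate to thermal-image coordinates."""
    p = cv2.perspectiveTransform(np.float32([[[x, y]]]), H)
    return float(p[0, 0, 0]), float(p[0, 0, 1])

def read_temperature(thermal_gray: np.ndarray, x: float, y: float,
                     t_min: float, t_max: float) -> float:
    """Map an 8-bit thermal pixel onto the scale's min/max temperatures."""
    v = thermal_gray[int(round(y)), int(round(x))]
    return t_min + (t_max - t_min) * (v / 255.0)
```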


Subject(s)
Artificial Intelligence , Body Temperature , Lizards , Animals , Lizards/physiology , Body Temperature/physiology , Video Recording/methods , Image Processing, Computer-Assisted/methods
3.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001165

ABSTRACT

The development of contactless methods to assess the degree of personal hygiene in elderly people is crucial for detecting frailty and providing early intervention to prevent complete loss of autonomy, cognitive impairment, and hospitalisation. The unobtrusive nature of the technology is essential for maintaining good quality of life. The use of cameras and edge computing with sensors provides a way of monitoring subjects without interrupting their normal routines, and has the advantages of local data processing and improved privacy. This work describes the development of an intelligent system that takes the RGB frames of a video as input and classifies the occurrence of brushing teeth, washing hands, and fixing hair; a 'no action' class is also considered. The RGB frames are first processed by two Mediapipe algorithms to extract body keypoints related to the pose and hands, which serve as the features to be classified. The best feature extractor combines the most complex Mediapipe pose estimator with the most complex hand keypoint regressor, and it achieves the best performance even when operating at one frame per second. The final classifier is a Light Gradient Boosting Machine, which achieves a weighted F1-score above 94% at one frame per second with observation times of seven seconds or more. When the observation window is enlarged to ten seconds, the per-class F1-scores range between 94.66% and 96.35%.
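A hedged sketch of the feature-extraction step, assuming the standard Mediapipe Python solutions API and a LightGBM classifier; the feature layout and model settings are illustrative, not the authors' configuration.

```python
# Flatten Mediapipe pose + hand landmarks from frames sampled at 1 fps
# into feature vectors for a LightGBM classifier. Illustrative sketch.
import cv2
import numpy as np
import mediapipe as mp
import lightgbm as lgb

# Most complex pose estimator and hand keypoint regressor, per the text.
pose = mp.solutions.pose.Pose(model_complexity=2)
hands = mp.solutions.hands.Hands(model_complexity=1)

def frame_features(bgr_frame: np.ndarray) -> np.ndarray:
    """Flatten pose + hand keypoints for one frame (zeros when missing)."""
    rgb = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
    feats = []
    p = pose.process(rgb)
    if p.pose_landmarks:
        feats.extend(c for lm in p.pose_landmarks.landmark for c in (lm.x, lm.y))
    else:
        feats.extend([0.0] * 66)            # 33 pose landmarks x (x, y)
    h = hands.process(rgb)
    hand_lms = h.multi_hand_landmarks or []
    for i in range(2):                      # up to two hands, 21 landmarks each
        if i < len(hand_lms):
            feats.extend(c for lm in hand_lms[i].landmark for c in (lm.x, lm.y))
        else:
            feats.extend([0.0] * 42)
    return np.asarray(feats)

# X: per-frame features concatenated over a 7 s observation window at 1 fps;
# y: labels {brush_teeth, wash_hands, fix_hair, no_action}.
clf = lgb.LGBMClassifier(n_estimators=300)
```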


Subject(s)
Algorithms , Frailty , Humans , Frailty/diagnosis , Aged , Monitoring, Physiologic/methods , Monitoring, Physiologic/instrumentation , Female , Male , Video Recording/methods , Machine Learning
4.
BMJ Open Qual ; 13(3)2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39009461

ABSTRACT

BACKGROUND: Adherence to pharmacotherapy and use of the correct inhaler technique are important basic principles of asthma management. Video- or remote-direct observation of therapy (v-DOT) could be a feasible approach to facilitate monitoring and supervising therapy, supporting the delivery of standard care. OBJECTIVE: To explore the utility and the feasibility of v-DOT to monitor inhaler technique and adherence to treatment in adults attending the asthma outpatient service in a tertiary hospital in Northern Ireland. METHOD: The project evaluated the use of the technology with 10 asthma patients. Patient and clinician feedback was obtained, in addition to measures of patient engagement and disease-specific clinical markers, to assess the feasibility and utility of v-DOT technology in this group of patients. RESULTS: The engagement rate with v-DOT for participating patients averaged 78% (actual video uploads vs expected video uploads) over a median 7-week usage period. Although 50% of patients reported a technical issue at some stage during the usage period, all patients and clinicians reported that the technology was easy to use and that they were satisfied with the outcomes. A range of positive impacts were observed, including optimised inhaler technique and an observed improvement in lung function. An increase in asthma control test scores aligned with clinical aims to promote adherence and alleviate symptoms. CONCLUSION: The v-DOT technology was shown to be a feasible method of assessing inhaler technique and monitoring adherence in this small group of adult asthma patients. A range of positive impacts for participating patients and clinicians were observed. Not all patients invited to join the project agreed to participate or engage with the technology, highlighting that in this setting, digital modes of delivering care provide only one of the approaches in the necessary "tool kit" for clinicians and patients.


Subject(s)
Asthma , Humans , Asthma/drug therapy , Asthma/therapy , Adult , Female , Male , Pilot Projects , Middle Aged , Northern Ireland , Digital Technology/methods , Digital Technology/statistics & numerical data , Video Recording/methods , Video Recording/statistics & numerical data , Directly Observed Therapy , Nebulizers and Vaporizers/statistics & numerical data
5.
Nat Methods ; 21(7): 1329-1339, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38997595

ABSTRACT

Keypoint tracking algorithms can flexibly quantify animal movement from videos obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into discrete actions. This challenge is particularly acute because keypoint data are susceptible to high-frequency jitter that clustering algorithms can mistake for transitions between actions. Here we present keypoint-MoSeq, a machine learning-based platform for identifying behavioral modules ('syllables') from keypoint data without human supervision. Keypoint-MoSeq uses a generative model to distinguish keypoint noise from behavior, enabling it to identify syllables whose boundaries correspond to natural sub-second discontinuities in pose dynamics. Keypoint-MoSeq outperforms commonly used alternative clustering methods at identifying these transitions, at capturing correlations between neural activity and behavior and at classifying either solitary or social behaviors in accordance with human annotations. Keypoint-MoSeq also works in multiple species and generalizes beyond the syllable timescale, identifying fast sniff-aligned movements in mice and a spectrum of oscillatory behaviors in fruit flies. Keypoint-MoSeq, therefore, renders accessible the modular structure of behavior through standard video recordings.
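For intuition only, the toy example below segments noisy keypoint-like features with a plain Gaussian HMM; it is not the keypoint-MoSeq model or API, but it shows why modelling temporal persistence keeps high-frequency jitter from being mistaken for transitions between actions.

```python
# Conceptual illustration; not the keypoint-MoSeq API. A Gaussian HMM
# segments feature sequences into discrete states, and because it models
# state persistence it is less prone than frame-wise clustering to
# calling high-frequency jitter a behavioural transition.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(0)
# Fake "keypoint" features: two alternating behavioural modes plus jitter.
X = np.concatenate([rng.normal(0, 0.3, (200, 4)),
                    rng.normal(2, 0.3, (200, 4)),
                    rng.normal(0, 0.3, (200, 4))])

hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50)
hmm.fit(X)
states = hmm.predict(X)                       # per-frame state labels
changepoints = np.flatnonzero(np.diff(states)) + 1
print(changepoints)                           # expected near frames 200 and 400
```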


Subject(s)
Algorithms , Behavior, Animal , Machine Learning , Video Recording , Animals , Mice , Behavior, Animal/physiology , Video Recording/methods , Movement/physiology , Drosophila melanogaster/physiology , Humans , Male
6.
Bioinformatics ; 40(7)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38970365

ABSTRACT

MOTIVATION: As more behavioural assays are carried out in large-scale experiments on Drosophila larvae, the definitions of the archetypal actions of a larva are regularly refined. In addition, video recording and tracking technologies constantly evolve. Consequently, automatic tagging tools for Drosophila larval behaviour must be retrained to learn new representations from new data. However, existing tools cannot transfer knowledge from large amounts of previously accumulated data. We introduce LarvaTagger, a piece of software that combines a pre-trained deep neural network, providing a continuous latent representation of larva actions for stereotypical behaviour identification, with a graphical user interface to manually tag the behaviour and train new automatic taggers with the updated ground truth. RESULTS: We reproduced results from an automatic tagger with high accuracy, and we demonstrated that pre-training on large databases accelerates the training of a new tagger, achieving similar prediction accuracy using less data. AVAILABILITY AND IMPLEMENTATION: All the code is free and open source. Docker images are also available. See gitlab.pasteur.fr/nyx/LarvaTagger.jl.
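The retraining idea can be sketched as standard transfer learning: freeze a pre-trained encoder that supplies the continuous latent representation and fit only a small head on the newly tagged data. The layer names and sizes below are assumptions, not LarvaTagger internals.

```python
# Transfer-learning sketch: reuse a pre-trained encoder as a frozen
# latent representation of larva actions and train only a classification
# head on newly tagged data. Illustrative sizes and names.
import torch
import torch.nn as nn

class Tagger(nn.Module):
    def __init__(self, encoder: nn.Module, latent_dim: int, n_actions: int):
        super().__init__()
        self.encoder = encoder           # pre-trained on large databases
        for p in self.encoder.parameters():
            p.requires_grad = False      # keep the learned representation
        self.head = nn.Linear(latent_dim, n_actions)  # trained on new tags

    def forward(self, x):
        with torch.no_grad():
            z = self.encoder(x)          # continuous latent representation
        return self.head(z)

# Only the head's parameters reach the optimizer, which is why far less
# annotated data is needed than when training a tagger from scratch.
```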


Subject(s)
Behavior, Animal , Drosophila , Larva , Software , Animals , Behavior, Animal/physiology , Video Recording/methods , Neural Networks, Computer
7.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066034

ABSTRACT

In current smart classroom research, numerous studies focus on recognizing hand-raising, but few analyze the movements themselves to interpret students' intentions. This limitation prevents teachers from using such information to enhance the effectiveness of smart classroom teaching. Assistive teaching methods, including robotic and artificial intelligence teaching, require smart classroom systems both to recognize and to thoroughly analyze hand-raising movements, so that the system can provide targeted guidance based on students' hand-raising behavior. This study proposes a morphology-based analysis method that converts students' skeleton keypoint data into several one-dimensional time series. Analyzing these time series yields a more detailed picture of hand-raising behavior, addressing the limitation of deep learning methods that cannot compare hand-raising enthusiasm across a classroom or build a detailed database of such behavior. The method first obtains skeleton estimates with neural networks (YOLOX for object detection and HRNet for skeleton estimation) and then converts the results into time series of several variables using the morphology-based analysis. The method successfully recognizes hand-raising actions and provides a detailed analysis of their speed and amplitude, effectively supplementing the coarse recognition capabilities of neural networks. Its effectiveness has been validated through experiments.
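A minimal sketch of reducing skeleton keypoints to a one-dimensional time series and reading off speed and amplitude, as described above; the keypoint layout and frame rate are illustrative assumptions.

```python
# Reduce skeleton keypoints to a 1D series (wrist height relative to the
# shoulder) and derive amplitude and peak speed of a hand-raise.
import numpy as np

def hand_raise_metrics(wrist_y: np.ndarray, shoulder_y: np.ndarray,
                       fps: float = 30.0) -> dict:
    """Amplitude and peak speed of a hand-raise from per-frame keypoints.

    Image y grows downward, so shoulder_y - wrist_y is positive when the
    wrist is above the shoulder.
    """
    height = shoulder_y - wrist_y            # one-dimensional time series
    speed = np.gradient(height) * fps        # pixels per second
    return {
        "amplitude_px": float(height.max() - height.min()),
        "peak_speed_px_s": float(np.abs(speed).max()),
        "raised": bool(height.max() > 0),    # wrist rose above shoulder
    }
```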


Subject(s)
Hand , Motivation , Neural Networks, Computer , Students , Humans , Hand/physiology , Motivation/physiology , Movement/physiology , Video Recording/methods , Artificial Intelligence
8.
Sensors (Basel) ; 24(14)2024 Jul 17.
Article in English | MEDLINE | ID: mdl-39066046

ABSTRACT

The timely detection of falls and alerting of medical aid are critical for health monitoring of elderly individuals living alone. This paper focuses on the poor adaptability, privacy infringement, and low recognition accuracy associated with traditional visual sensor-based fall detection. We propose an infrared video-based fall detection method using spatial-temporal graph convolutional networks (ST-GCNs) to address these challenges. Our method uses a fine-tuned AlphaPose model to extract 2D human skeleton sequences from infrared videos. The skeleton data are then represented in Cartesian and polar coordinates and processed by a two-stream ST-GCN to recognize fall behaviors promptly. To enhance the network's ability to recognize fall actions, we improved the adjacency matrix of the graph convolutional units and introduced multi-scale temporal graph convolution units. To facilitate practical deployment, we optimized the time window and network depth of the ST-GCN, striking a balance between model accuracy and speed. Experimental results on a proprietary infrared human action recognition dataset demonstrate that the proposed algorithm identifies fall behaviors with an accuracy of up to 96%. Moreover, the algorithm performs robustly, identifying falls in both near-infrared and thermal-infrared videos.
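A sketch of the dual-coordinate input construction, assuming (n_joints, 2) keypoints and a body-centre reference joint; the joint indexing is illustrative.

```python
# Express the same skeleton in Cartesian and polar coordinates, as the
# two ST-GCN streams require. Joint indexing is an assumption.
import numpy as np

def to_polar(joints_xy: np.ndarray, center_idx: int = 0) -> np.ndarray:
    """Convert (n_joints, 2) Cartesian keypoints to (r, theta) about a
    reference joint (e.g. the pelvis/hip centre)."""
    rel = joints_xy - joints_xy[center_idx]          # centre the skeleton
    r = np.hypot(rel[:, 0], rel[:, 1])               # radial distance
    theta = np.arctan2(rel[:, 1], rel[:, 0])         # angle in radians
    return np.stack([r, theta], axis=1)

# One stream receives the Cartesian sequence, the other the polar one;
# a fall changes r and theta of the upper-body joints rapidly, which the
# polar representation makes explicit.
```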


Subject(s)
Accidental Falls , Algorithms , Infrared Rays , Neural Networks, Computer , Video Recording , Humans , Video Recording/methods
9.
J Neurosci Methods ; 409: 110212, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38960331

ABSTRACT

BACKGROUND: The forced swim test (FST) and tail suspension test (TST) are widely used to assess depressive-like behaviors in animals. Immobility time is an important parameter in both the FST and TST. Traditional methods for analyzing the FST and TST rely on manually setting the threshold for immobility, which is time-consuming and subjective. NEW METHOD: We propose a threshold-free method for automated analysis of mice in these tests using a Dual-Stream Activity Analysis Network (DSAAN). Specifically, this network extracts spatial information from a limited number of video frames and combines it with temporal information extracted from differential feature maps to determine the mouse's state. To support this, we developed the Mouse FSTST dataset, which consists of annotated video recordings of the FST and TST. RESULTS: Using the DSAAN, we identified immobility states with accuracies of 92.51% and 88.70% for the TST and FST, respectively. The immobility times predicted by the DSAAN correlate well with manual scores, indicating the reliability of the proposed method. Importantly, the DSAAN achieved over 80% accuracy on both the FST and TST using only 94 annotated images, suggesting that even a very limited training dataset can yield good performance with our model. COMPARISON WITH EXISTING METHOD(S): Compared with DBscorer and EthoVision XT, our method exhibits the highest Pearson correlation coefficient with manual annotation results on the Mouse FSTST dataset. CONCLUSIONS: We established a powerful threshold-free tool for analyzing depressive-like behavior that is capable of freeing users from time-consuming manual analysis.
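For intuition, the snippet below computes the per-frame motion energy that differential feature maps capture; it is an illustration of the idea, not the DSAAN itself, whose contribution is to learn the mobile/immobile decision rather than threshold this signal.

```python
# Inter-frame differencing: the temporal cue behind differential feature
# maps. Near-zero differences correspond to immobility. Illustrative only.
import numpy as np

def frame_motion_energy(frames: np.ndarray) -> np.ndarray:
    """Mean absolute inter-frame difference for a (T, H, W) gray video."""
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))
    return diffs.mean(axis=(1, 2))  # one motion value per frame pair

# A threshold-based scorer would compare this energy to a hand-tuned
# cutoff; the DSAAN instead feeds spatial frames plus differential maps
# into a network that learns the decision from annotations.
```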


Subject(s)
Behavior, Animal , Deep Learning , Hindlimb Suspension , Swimming , Animals , Hindlimb Suspension/methods , Swimming/physiology , Mice , Behavior, Animal/physiology , Depression/diagnosis , Male , Video Recording/methods , Mice, Inbred C57BL
10.
PLoS One ; 19(7): e0303633, 2024.
Article in English | MEDLINE | ID: mdl-38980882

ABSTRACT

Estimating the densities of marine prey observed in animal-borne video loggers when encountered by foraging predators represents an important challenge for understanding predator-prey interactions in the marine environment. We used video images collected during the foraging trip of one chinstrap penguin (Pygoscelis antarcticus) from Cape Shirreff, Livingston Island, Antarctica to develop a novel approach for estimating the density of Antarctic krill (Euphausia superba) encountered during foraging activities. Using the open-source Video and Image Analytics for a Marine Environment (VIAME), we trained a neural network model to identify video frames containing krill. Our image classifier has an overall accuracy of 73%, with a positive predictive value of 83% for prediction of frames containing krill. We then developed a method to estimate the volume of water imaged, and thus the density (N·m⁻³) of krill, in the 2-dimensional images. The method is based on the maximum range from the camera at which krill remain visibly resolvable and assumes that mean krill length is known and that the distribution of orientation angles of krill is uniform. From 1,932 images identified as containing krill, we manually identified a subset of 124 images from across the video record that contained the resolvable and unresolvable krill necessary to estimate the resolvable range and imaged volume for the video sensor. Krill swarm density encountered by the penguins ranged from 2 to 307 krill·m⁻³, and mean density of krill was 48 krill·m⁻³ (sd = 61 krill·m⁻³). Mean krill biomass density was 25 g·m⁻³. Our frame-level image classifier model and krill density estimation method provide a new approach to efficiently process video-logger data and estimate krill density from 2D imagery, providing key information on prey aggregations that may affect predator foraging performance. The approach should be directly applicable to other marine predators feeding on aggregations of prey.
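A sketch of the imaged-volume arithmetic implied by the method, treating the camera's view out to the maximum resolvable range as a pyramid; the field-of-view angles and counts below are illustrative values, not the study's calibration.

```python
# Density = count / imaged volume, with the imaged volume taken as the
# viewing pyramid truncated at the maximum range at which krill remain
# resolvable. FOV angles, range, and counts are illustrative assumptions.
import math

def imaged_volume_m3(hfov_deg: float, vfov_deg: float, max_range_m: float) -> float:
    """Volume of the viewing pyramid out to the resolvable range."""
    half_w = math.tan(math.radians(hfov_deg / 2))
    half_h = math.tan(math.radians(vfov_deg / 2))
    base_area = (2 * half_w * max_range_m) * (2 * half_h * max_range_m)
    return base_area * max_range_m / 3          # V = (1/3) * base * height

vol = imaged_volume_m3(hfov_deg=80, vfov_deg=60, max_range_m=1.2)
density = 25 / vol                               # e.g. 25 krill counted in frame
print(f"{vol:.3f} m^3 -> {density:.0f} krill per m^3")
```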


Subject(s)
Euphausiacea , Predatory Behavior , Spheniscidae , Animals , Spheniscidae/physiology , Euphausiacea/physiology , Predatory Behavior/physiology , Antarctic Regions , Population Density , Video Recording/methods , Image Processing, Computer-Assisted/methods
11.
Rev Bras Enferm ; 77(3): e20230416, 2024.
Article in English, Portuguese | MEDLINE | ID: mdl-39082545

ABSTRACT

OBJECTIVE: to assess validity evidence of an educational video on safe sexual activity after acute coronary syndrome. METHOD: a study in three phases: video development; content validity analysis by 11 experts; and analysis of validity based on response processes by seven people with coronary disease. The content validity ratio (CVR) was calculated, with critical values of 0.63 for the second phase and 1.0 for the third. RESULTS: the video addressed the importance of resuming sexual activity, positions that consume less energy, clinical warning signs, the importance of adhering to treatment, and a welcoming environment for sexual practice. The video, with a total duration of 4 minutes and 41 seconds, obtained a CVR above the critical value. CONCLUSION: the educational video presents adequate content validity evidence and can be used as a tool for patients after acute coronary syndrome.
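For reference, the quoted critical values follow from Lawshe's content validity ratio, CVR = (n_e - N/2) / (N/2), where n_e panellists out of N rate an item as essential; the counts below are illustrative.

```python
# Lawshe's content validity ratio and the panel-size-dependent critical
# values cited above. Example counts are illustrative.
def cvr(n_essential: int, n_panel: int) -> float:
    return (n_essential - n_panel / 2) / (n_panel / 2)

print(cvr(9, 11))   # approx. 0.64, just above the 0.63 critical value for 11 experts
print(cvr(7, 7))    # 1.0, i.e. unanimity is required for a 7-member panel
```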


Subject(s)
Acute Coronary Syndrome , Humans , Male , Female , Counseling/methods , Counseling/standards , Video Recording/methods , Middle Aged , Sexual Behavior/psychology , Adult , Patient Education as Topic/methods , Patient Education as Topic/standards
12.
J Neuroeng Rehabil ; 21(1): 129, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39085937

ABSTRACT

BACKGROUND: Positional preferences, asymmetry of body position and movements potentially indicate abnormal clinical conditions in infants. However, a lack of standardized nomenclature hinders accurate assessment and documentation of these preferences over time. Video tools offer a safe and reproducible method to analyze and describe infant movement patterns, aiding in physiotherapy management and goal planning. The study aimed to develop an objective classification system for infant movement patterns with particular emphasis on the specific distribution of muscle tension, using methods of computer analysis of video recordings to enhance accuracy and reproducibility in assessments. METHODS: The study involved the recording of videos of 51 infants between 6 and 15 weeks of age, born at term, with an Apgar score of at least 8 points. Based on observations of a recording of infant spontaneous movements in the supine position, experts identified postural-motor patterns: symmetry and typical asymmetry linked to the asymmetrical tonic neck reflex. Deviations from the typical postural-motor system were indicated, and subcategories of atypical patterns were distinguished. A computer-based inference system was developed to automatically classify individual patterns. RESULTS: The following division of motor patterns was used: (1) normal patterns, including (a) typical (symmetrical, asymmetrical: variants 1 and 2); and (b) atypical (variants: 1 to 4), (2) positional preference, and (3) abnormal patterns. The proposed automatic classification method achieved an expert decision mapping accuracy of 84%. For atypical patterns, the high reproducibility of the system's results was confirmed. Lower reproducibility, not exceeding 70%, was achieved with typical patterns. CONCLUSIONS: Based on the observation of infant spontaneous movements, it is possible to identify movement patterns divided into typical and atypical patterns. Computer-based analysis of infant movement patterns makes it possible to objectify and satisfactorily reproduce diagnostic decisions.


Subject(s)
Movement , Video Recording , Humans , Infant , Movement/physiology , Video Recording/methods , Female , Male , Reproducibility of Results , Posture/physiology
13.
Sci Rep ; 14(1): 16401, 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39013897

ABSTRACT

Lameness affects animal mobility, causing pain and discomfort. Lameness in its early stages often goes undetected due to a lack of observation, precision, and reliability. Automated and non-invasive systems offer precision and ease of detection and may improve animal welfare. This study was conducted to create a repository of images and videos of sows with different locomotion scores. Our goal is to develop a computer vision model for automatically identifying specific points on the sow's body. The automatic identification of, and ability to track, specific body areas will allow us to conduct kinematic studies with the aim of facilitating the detection of lameness using deep learning. The video database was collected on a pig farm with a scenario built to allow filming of sows in locomotion with different lameness scores. Two stereo cameras were used to record 2D video images. Thirteen locomotion experts assessed the videos using the Locomotion Score System developed by Zinpro Corporation. From this annotated repository, computational models were trained and tested using the open-source deep learning-based animal pose tracking framework SLEAP (Social LEAP Estimates Animal Poses). The top-performing models were constructed using the LEAP architecture to accurately track 6 (lateral view) and 10 (dorsal view) skeleton keypoints. The architecture achieved average precision values of 0.90 and 0.72, average pixel distances of 6.83 and 11.37, and similarities of 0.94 and 0.86 for the lateral and dorsal views, respectively. These computational models are proposed as a Precision Livestock Farming tool and method for identifying and estimating postures in pigs automatically and objectively. The 2D video image repository with different pig locomotion scores can be used as a tool for teaching and research. Based on our skeleton keypoint classification results, an automatic system could be developed. This could contribute to the objective assessment of locomotion scores in sows, improving their welfare.
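A sketch of how such keypoint metrics can be computed against annotations; the exponential similarity is a generic OKS-style stand-in and not necessarily the exact metric SLEAP reports.

```python
# Score predicted keypoints against ground-truth annotations: mean pixel
# distance plus an exponential (OKS-style) similarity. Illustrative only.
import numpy as np

def keypoint_errors(pred: np.ndarray, true: np.ndarray, scale: float = 10.0):
    """pred, true: (n_frames, n_keypoints, 2) pixel coordinates."""
    dist = np.linalg.norm(pred - true, axis=-1)        # per-keypoint pixels
    similarity = np.exp(-(dist ** 2) / (2 * scale ** 2))
    return dist.mean(), similarity.mean()

# Shape demo with placeholder data (6 lateral-view keypoints, 100 frames).
mean_px, sim = keypoint_errors(
    np.random.rand(100, 6, 2) * 512,
    np.random.rand(100, 6, 2) * 512,
)
```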


Subject(s)
Deep Learning , Locomotion , Video Recording , Animals , Locomotion/physiology , Swine , Video Recording/methods , Female , Lameness, Animal/diagnosis , Lameness, Animal/physiopathology , Biomechanical Phenomena , Swine Diseases/diagnosis , Swine Diseases/physiopathology
14.
Midwifery ; 136: 104089, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38968682

ABSTRACT

BACKGROUND: Healthcare professionals have a role to play in reducing perinatal mental health related stigma. AIM: To assess the effectiveness of a video-based educational intervention developed to provide guidance to healthcare professionals on perinatal mental health related stigma reduction strategies. DESIGN: A single-group pre-test-post-test pilot study with no control group. SETTING(S): A university-affiliated maternity hospital in Ireland. PARTICIPANTS: A convenience sample of registered midwives, nurses, and doctors (n = 60) recruited from October 2020 to January 2021. INTERVENTION: A twenty-minute video-based educational intervention. METHODS: Respondents (n = 60) completed a pre-test (time point one) and post-test (time point two) questionnaire, and a three-month follow-up post-test questionnaire (time point three) (n = 39). The questionnaire included the Mental Illness: Clinicians' Attitudes Scale, the Reported and Intended Behaviour Scale, the Reynolds Empathy Scale, and open-ended questions. The Wilcoxon signed-rank test was used to evaluate the pre-test and post-test scores. RESULTS: The difference in mean Mental Illness: Clinicians' Attitudes-4 scores was statistically significant between time points one and three (z = 3.27, df = 36, P = 0.0007), suggesting more positive attitudes towards people with mental health conditions after the intervention. The mean total score for the Reported and Intended Behaviour Scale increased from 18.7 (SD 1.87) at time point one to 19.2 (SD 1.60) at time point two (z = -3.368, df = 59, P = 0.0004), suggesting an increase in positive intended behaviours towards those with mental health issues immediately following the intervention. These findings were also corroborated by responses to open-ended survey questions. CONCLUSIONS: Further research with a larger sample of healthcare professionals evaluated over a longer period would provide further evidence for the sustainability of the intervention. TWEETABLE ABSTRACT: A video-based intervention can increase healthcare professionals' knowledge of perinatal #mentalhealth related stigma reduction strategies.
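A minimal sketch of the paired pre/post comparison with SciPy's Wilcoxon signed-rank test; the scores below are made up for illustration.

```python
# Wilcoxon signed-rank test on paired pre/post questionnaire scores.
# The data are illustrative placeholders, not the study's data.
import numpy as np
from scipy.stats import wilcoxon

pre  = np.array([17, 18, 19, 18, 20, 17, 19, 18, 18, 19])
post = np.array([19, 19, 20, 19, 21, 18, 20, 19, 19, 20])

stat, p = wilcoxon(pre, post)   # tests the median of the paired differences
print(f"W = {stat}, p = {p:.4f}")
```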


Subject(s)
Health Personnel , Social Stigma , Humans , Pilot Projects , Adult , Surveys and Questionnaires , Ireland , Health Personnel/education , Health Personnel/psychology , Health Personnel/statistics & numerical data , Female , Pregnancy , Mental Disorders/psychology , Male , Attitude of Health Personnel , Video Recording/methods , Middle Aged
15.
Sci Rep ; 14(1): 17464, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39075097

ABSTRACT

Digital quantification of gait can be used to measure aging- and disease-related decline in mobility. Gait performance also predicts prognosis, disease progression, and response to therapies. Most gait analysis systems require large amounts of space, resources, and expertise to implement and are not widely accessible. Thus, there is a need for a portable system that accurately characterizes gait. Here, depth video from two portable cameras accurately reconstructed gait metrics comparable to those reported by a pressure-sensitive walkway. 392 research participants walked across a four-meter pressure-sensitive walkway while depth video was recorded. Gait speed, cadence, and step and stride durations and lengths strongly correlated (r > 0.9) between modalities, with root-mean-square errors (RMSE) of 0.04 m/s, 2.3 steps/min, 0.03 s, and 0.05-0.08 m for speed, cadence, step/stride duration, and step/stride length, respectively. Step, stance, and double support durations (as a percentage of the gait cycle) significantly correlated (r > 0.6) between modalities, with 5% RMSE for step and stance and 10% RMSE for double support. In an exploratory analysis, gait speed from both modalities was significantly related to healthy, mild, moderate, or severe categorizations of the Charlson Comorbidity Index (ANOVA, Tukey's HSD, p < 0.0125). These findings demonstrate the viability of using depth video to expand access to quantitative gait assessments.
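A sketch of the agreement statistics used above (Pearson r and RMSE) applied to one gait metric measured by both modalities; the arrays are placeholders.

```python
# Pearson correlation and RMSE between the same gait metric from the
# walkway and from depth video. Data are illustrative placeholders.
import numpy as np
from scipy.stats import pearsonr

def agreement(walkway: np.ndarray, depth_video: np.ndarray):
    r, p = pearsonr(walkway, depth_video)
    rmse = np.sqrt(np.mean((walkway - depth_video) ** 2))
    return r, p, rmse

rng = np.random.default_rng(1)
speed_walkway = rng.uniform(0.8, 1.4, 392)              # m/s, one per participant
speed_video = speed_walkway + rng.normal(0, 0.04, 392)  # ~0.04 m/s error
print(agreement(speed_walkway, speed_video))
```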


Subject(s)
Gait Analysis , Gait , Humans , Male , Female , Gait/physiology , Middle Aged , Gait Analysis/methods , Gait Analysis/instrumentation , Adult , Video Recording/methods , Aged , Walking/physiology , Pressure , Walking Speed/physiology , Motion Capture
16.
Sensors (Basel) ; 24(11)2024 May 26.
Article in English | MEDLINE | ID: mdl-38894208

ABSTRACT

In this study, we propose a deep learning-based nystagmus detection algorithm that uses video oculography (VOG) data to diagnose benign paroxysmal positional vertigo (BPPV). Various deep learning architectures were used to develop and evaluate nystagmus detection models. Among the four architectures evaluated, the proposed CNN1D model demonstrated the best performance, exhibiting a sensitivity of 94.06 ± 0.78%, specificity of 86.39 ± 1.31%, precision of 91.34 ± 0.84%, accuracy of 91.02 ± 0.66%, and an F1-score of 92.68 ± 0.55%. These results indicate the high accuracy and generalizability of the proposed nystagmus detection algorithm. In conclusion, this study validates the practicality of deep learning in diagnosing BPPV and points to numerous potential applications in medical diagnostics, underscoring its potential to enhance diagnostic accuracy and efficiency in healthcare.
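A hedged sketch of a CNN1D of the kind described, operating on fixed-length VOG traces; the channel count, window length, and layer sizes are assumptions, not the authors' architecture.

```python
# A small 1D CNN over VOG eye-position traces (e.g. horizontal/vertical
# channels in a fixed window). Sizes are illustrative assumptions.
import torch
import torch.nn as nn

class NystagmusCNN1D(nn.Module):
    def __init__(self, in_channels: int = 2, window: int = 256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(), nn.MaxPool1d(2),
        )
        self.head = nn.Linear(32 * (window // 4), 2)  # nystagmus vs. none

    def forward(self, x):                 # x: (batch, channels, window)
        z = self.features(x)
        return self.head(z.flatten(1))

logits = NystagmusCNN1D()(torch.randn(8, 2, 256))  # -> shape (8, 2)
```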


Subject(s)
Algorithms , Benign Paroxysmal Positional Vertigo , Deep Learning , Nystagmus, Pathologic , Humans , Benign Paroxysmal Positional Vertigo/diagnosis , Nystagmus, Pathologic/diagnosis , Video Recording/methods , Male , Female , Neural Networks, Computer , Middle Aged
17.
Adv Skin Wound Care ; 37(7): 1-6, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38899823

ABSTRACT

OBJECTIVE: To evaluate the comprehensiveness, reliability, and quality of YouTube videos related to pressure injuries (PIs). METHODS: The authors searched YouTube for relevant videos using the keywords "pressure injury", "pressure ulcer", "bedsore", "pressure injury etiology", "pressure injury classification", "pressure injury prevention", "pressure injury risk assessment", and "pressure injury management". Of the 1,023 videos screened, 269 met the inclusion criteria and were included in the study. For each video, the authors recorded the number of views, likes, and comments; the length; and the video upload source. The Comprehensiveness Assessment Tool for Pressure Injuries, the Quality Criteria for Consumer Health Information score, and the Global Quality Score were used to evaluate the comprehensiveness, reliability, and quality of the videos. RESULTS: The mean length of the 269 videos was 6.22 ± 4.62 minutes (range, 0.18-19.47 minutes). Only 14.5% of the videos (n = 39) were uploaded by universities or professional organizations. Most videos included information about PI prevention (69.5%), followed by PI management (27.9%). The mean comprehensiveness score was 2.33 ± 1.32 (range, 1-5). Nearly half of the videos (49.1%) were not reliable, and only 43.9% were of somewhat useful quality. The mean Quality Criteria for Consumer Health Information scores of university/professional organization (P < .001), nonprofit healthcare professional (P = .015), and independent health information channel videos (P = .026) were higher than the mean score of medical advertising/for-profit company channel videos. CONCLUSIONS: This study draws attention to the need for more comprehensive, high-quality, and reliable videos about PIs. It is important that videos on YouTube provide comprehensive and reliable information for patients, caregivers, students, or providers seeking information on PI prevention, assessment, and management.


Subject(s)
Pressure Ulcer , Social Media , Video Recording , Pressure Ulcer/prevention & control , Humans , Video Recording/methods , Social Media/standards , Reproducibility of Results , Consumer Health Information/standards , Consumer Health Information/methods , Information Dissemination/methods , Information Sources
19.
Sensors (Basel) ; 24(12)2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38931606

ABSTRACT

Human pose estimation (HPE) is a technique used in computer vision and artificial intelligence to detect and track human body parts and poses using images or videos. Widely used in augmented reality, animation, fitness applications, and surveillance, HPE methods that employ monocular cameras are highly versatile and applicable to standard videos and CCTV footage. These methods have evolved from two-dimensional (2D) to three-dimensional (3D) pose estimation. However, in real-world environments, current 3D HPE methods trained on laboratory-based motion capture data encounter challenges, such as limited training data, depth ambiguity, left/right switching, and issues with occlusions. In this study, four 3D HPE methods were compared based on their strengths and weaknesses using real-world videos. Joint position correction techniques were proposed to eliminate and correct anomalies such as left/right inversion and false detections of joint positions in daily life motions. Joint angle trajectories were obtained for intuitive and informative human activity recognition using an optimization method based on a 3D humanoid simulator, with the joint position corrected by the proposed technique as the input. The efficacy of the proposed method was verified by applying it to three types of freehand gymnastic exercises and comparing the joint angle trajectories during motion.
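One simple correction heuristic of the kind described above, detecting left/right switches by testing whether swapping a joint pair reduces frame-to-frame displacement; this is a generic illustration, not the paper's exact optimisation.

```python
# Fix left/right switching in tracked joint pairs: if swapping the pair
# makes the trajectories more continuous, the labels were switched.
import numpy as np

def fix_lr_swaps(left: np.ndarray, right: np.ndarray):
    """left, right: (n_frames, 3) positions of a paired joint (e.g. ankles)."""
    left, right = left.copy(), right.copy()
    for t in range(1, len(left)):
        keep = (np.linalg.norm(left[t] - left[t - 1])
                + np.linalg.norm(right[t] - right[t - 1]))
        swap = (np.linalg.norm(right[t] - left[t - 1])
                + np.linalg.norm(left[t] - right[t - 1]))
        if swap < keep:                    # labels likely switched this frame
            left[t], right[t] = right[t].copy(), left[t].copy()
    return left, right
```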


Subject(s)
Deep Learning , Joints , Posture , Humans , Posture/physiology , Joints/physiology , Imaging, Three-Dimensional/methods , Algorithms , Movement/physiology , Video Recording/methods
20.
Sensors (Basel) ; 24(12)2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38931649

ABSTRACT

Understanding past and current trends is crucial in the fashion industry to forecast future market demands. This study quantifies and reports the characteristics of the trendy walking styles of fashion models during real-world runway performances using three cutting-edge technologies: (a) publicly available video resources, (b) human pose detection technology, and (c) multivariate human-movement analysis techniques. The skeletal coordinates of the whole body during one gait cycle, extracted from publicly available video resources of 69 fashion models, underwent principal component analysis to reduce the dimensionality of the data. Then, hierarchical cluster analysis was used to classify the data. The results revealed that (1) the gaits of the fashion models analyzed in this study could be classified into five clusters, (2) there were significant differences in the median years in which the shows were held between the clusters, and (3) reconstructed stick-figure animations representing the walking styles of each cluster indicate that an exaggerated leg-crossing gait has become less common over recent years. Accordingly, we concluded that the level of leg crossing while walking is one of the major changes in trendy walking styles, from the past to the present, directed by the world's leading brands.
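A sketch of the PCA-plus-hierarchical-clustering pipeline described above using scikit-learn; the shapes follow the text (69 models, five clusters), but the data are random placeholders.

```python
# Flatten each model's gait-cycle keypoint trajectories into one vector,
# reduce dimensionality with PCA, then group walking styles by
# hierarchical (agglomerative) clustering. Placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

n_models, n_frames, n_keypoints = 69, 100, 17
gaits = np.random.rand(n_models, n_frames * n_keypoints * 2)  # x, y per joint

scores = PCA(n_components=10).fit_transform(gaits)   # compact gait descriptors
labels = AgglomerativeClustering(n_clusters=5).fit_predict(scores)
print(np.bincount(labels))        # size of each walking-style cluster
```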


Subject(s)
Gait , Walking , Humans , Walking/physiology , Multivariate Analysis , Gait/physiology , Cluster Analysis , Principal Component Analysis , Biomechanical Phenomena/physiology , Video Recording/methods , Posture/physiology