1.
Article in English | MEDLINE | ID: mdl-33378261

ABSTRACT

Sleep quality is an important determinant of human health and wellbeing. Novel technologies that can quantify sleep quality at scale are required to enable the diagnosis and epidemiology of poor sleep. One important indicator of sleep quality is body posture. In this paper, we present the design and implementation of a non-contact sleep monitoring system that analyses body posture and movement. Supervised machine learning, applied with a transfer learning approach to non-contact vision-based infrared camera data, successfully quantified the sleep poses of participants covered by a blanket. This is the first time such a machine learning approach has been used to detect four predefined poses and the empty-bed state during 8-10 hour overnight sleep episodes representing a realistic domestic sleep situation. The methodology was evaluated against manually scored sleep poses and poses estimated using clinical polysomnography measurement technology. In a cohort of 12 healthy participants, a pre-trained ResNet-152 network achieved the best performance compared with a standard de novo CNN and other pre-trained networks. Our approach outperformed other video-based methods for sleep pose estimation and achieved higher accuracy than the clinical standard for pose estimation, a polysomnography position sensor. We conclude that infrared video capture coupled with deep learning can successfully quantify sleep poses, as well as the transitions between poses, in realistic nocturnal conditions, and that this non-contact approach provides superior pose estimation compared with currently accepted clinical methods.


Subject(s)
Posture , Sleep , Humans , Machine Learning , Polysomnography , Supervised Machine Learning
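The abstract reports quantifying both poses and pose transitions over 8-10 hour episodes but gives no implementation detail. The sketch below shows one plausible downstream summarisation step, assuming the network already emits one of the five states per frame; the function name and the 30-second frame interval are illustrative assumptions, not from the paper.

```python
from collections import Counter
from itertools import groupby

# Five states reported in the paper: four poses plus the empty-bed state.
POSES = ("supine", "left", "right", "prone", "empty_bed")

def summarise_poses(frame_labels, frame_interval_s=30.0):
    """Collapse per-frame pose predictions into total time per pose and a
    count of pose transitions over the sleep episode."""
    # Group consecutive identical labels into contiguous episodes.
    episodes = [(pose, sum(1 for _ in run)) for pose, run in groupby(frame_labels)]
    durations = Counter()
    for pose, n_frames in episodes:
        durations[pose] += n_frames * frame_interval_s
    # Each boundary between consecutive episodes is one pose transition.
    transitions = max(len(episodes) - 1, 0)
    return dict(durations), transitions
```

Grouping consecutive identical labels first means brief returns to an earlier pose are counted as separate episodes, which is what a transition count needs.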
2.
IEEE Trans Vis Comput Graph ; 26(7): 2417-2428, 2020 Jul.
Article in English | MEDLINE | ID: mdl-30582545

ABSTRACT

We describe a non-parametric algorithm for multiple-viewpoint video inpainting. Uniquely, our algorithm addresses wide-baseline multiple-viewpoint video (MVV) at near real-time speed with no temporal look-ahead. A Dictionary of Patches (DoP) is built from multi-resolution texture patches reprojected from geometric proxies available in the alternate views. The DoP is updated dynamically over time, and a Markov Random Field optimisation over depth and appearance resolves and aligns a selection of multiple candidates for a given patch; this ensures plausible inpainting of large regions while conserving both spatial and temporal coherence. We demonstrate the removal of large objects (e.g., people) from challenging indoor and outdoor MVV exhibiting cluttered, dynamic backgrounds and moving cameras.
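A minimal sketch of the candidate-selection idea: several patches reprojected from alternate views compete to fill a hole, and the best-matching one is chosen by appearance cost against the known context. This keeps only a unary appearance term; the paper's Markov Random Field over depth and appearance, and the dynamic dictionary update, are omitted, and the names are illustrative.

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_candidate(known_context, candidates):
    """Return the index of the reprojected candidate patch whose pixels
    best match the known context around the hole (appearance cost only)."""
    return min(range(len(candidates)), key=lambda i: ssd(known_context, candidates[i]))
```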

3.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3115-3118, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946547

ABSTRACT

In this study, a novel sleep pose identification method is proposed that classifies 12 different sleep postures using a two-step deep learning process. First, transfer learning retrains a well-known CNN (VGG-19) to categorise the data into four main pose classes: supine, left, right, and prone. Based on the decision made by VGG-19, the image is then passed to one of four dedicated sub-class CNNs, refining the estimate from one of 4 sleep pose labels to one of 12. Ten participants were recorded with an infrared (IR) camera while adopting the 12 pre-defined sleep positions; participants were covered by a blanket to occlude the pose and present a more realistic sleep situation. Finally, we compared our results with (1) a traditional CNN trained from scratch and (2) a VGG-19 network retrained in a single stage. The average accuracy increased to 85.6%, compared with 74.5% for (1) and 78.1% for (2).


Subject(s)
Deep Learning , Neural Networks, Computer , Posture , Sleep , Humans
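The two-step process reduces a 12-way problem to a 4-way decision followed by a refinement within the chosen class. A structural sketch of that dispatch, with the trained networks stood in by plain callables; the fine-grained label name used in the example is hypothetical, as the paper does not list the 12 labels.

```python
def hierarchical_classify(image, coarse_model, sub_models):
    """Two-step pose labelling: a coarse 4-way model (supine/left/right/prone)
    routes the image to the dedicated sub-class model for that pose, which
    refines the prediction to one of the 12 fine-grained labels."""
    coarse_label = coarse_model(image)          # stage 1: retrained VGG-19
    fine_label = sub_models[coarse_label](image)  # stage 2: per-class CNN
    return coarse_label, fine_label
```

Routing by the coarse label means each sub-model only ever sees images of its own pose family, which is what lets the second stage specialise.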
4.
IEEE Trans Image Process ; 28(3): 1118-1132, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30281455

ABSTRACT

A common problem in wide-baseline matching is the sparse and non-uniform distribution of correspondences when using conventional detectors such as SIFT, SURF, FAST, A-KAZE, and MSER. In this paper, we introduce a novel segmentation-based feature detector (SFD) that produces an increased number of accurate features for wide-baseline matching. A multi-scale SFD is proposed using bilateral image decomposition to produce a large number of scale-invariant features for wide-baseline reconstruction. All input images are over-segmented into regions using any existing segmentation technique, such as watershed, mean-shift, or simple linear iterative clustering. Feature points are then detected at the intersection of the boundaries of three or more regions; the detected feature points are local maxima of the image function. The key advantage of feature detection based on segmentation is that it does not require a global threshold setting and can therefore detect features throughout the image. A comprehensive evaluation demonstrates that SFD gives an increased number of features that are accurately localized and matched between wide-baseline camera views; the number of features for a given matching error increases by a factor of 3-5 compared with SIFT; feature detection and matching performance are maintained with increasing baseline between views; and multi-scale SFD improves matching performance at varying scales. Applying SFD to sparse multi-view wide-baseline reconstruction demonstrates a factor-of-10 increase in the number of reconstructed points, with improved scene coverage, compared with SIFT/MSER/A-KAZE. Evaluation against ground truth shows that SFD produces an increased number of wide-baseline matches with reduced error.
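The core detection rule, points where the boundaries of three or more regions meet, can be sketched directly on a grid of segmentation labels. This is a simplified single-scale version under the assumption that the over-segmentation is given as a label map; the bilateral multi-scale decomposition and precise sub-pixel localisation are not shown.

```python
def sfd_junctions(labels):
    """Detect candidate SFD points: 2x2 pixel junctions whose window spans
    three or more distinct over-segmentation region labels."""
    h, w = len(labels), len(labels[0])
    points = []
    for y in range(h - 1):
        for x in range(w - 1):
            window = {labels[y][x], labels[y][x + 1],
                      labels[y + 1][x], labels[y + 1][x + 1]}
            if len(window) >= 3:  # boundaries of >= 3 regions meet here
                points.append((x, y))
    return points
```

Because the rule is purely local and set-based, it needs no global threshold, which is the property the abstract highlights.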

5.
Article in English | MEDLINE | ID: mdl-30440284

ABSTRACT

Sleep is a process of rest and renewal that is vital for humans. However, several sleep disorders, such as rapid eye movement (REM) sleep behaviour disorder (RBD), sleep apnea, and restless leg syndrome (RLS), affect a significant portion of the population. These disorders are known to be associated with particular behaviours, such as specific body positions and movements. Clinical diagnosis requires patients to undergo polysomnography (PSG) in a sleep unit as the gold-standard assessment, which involves attaching multiple electrodes to the head and body. In this experiment, we seek to develop a non-contact approach to measuring sleep disorders related to body posture and movement. An infrared (IR) camera is used to monitor body position unaided by other sensors. Twelve participants were asked to adopt and then move through a set of 12 pre-defined sleep positions. We then used convolutional neural networks (CNNs) to generate features automatically from the IR data and classify the sleep postures. The results show that the proposed method achieves an accuracy of between 0.76 and 0.91 across participants and the 12 sleep poses, with and without a blanket cover, respectively. These results suggest that the approach is a promising method for detecting common sleep postures and potentially characterising sleep disorder behaviours.


Subject(s)
Posture , Sleep , Female , Humans , Male , Movement , Neural Networks, Computer , Polysomnography , Sleep Wake Disorders/physiopathology
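The reported accuracy range (0.76-0.91) is per participant. A small sketch of how such per-participant accuracy could be tallied from labelled predictions; the record layout `(participant_id, true_pose, predicted_pose)` is an assumption for illustration, not the paper's data format.

```python
from collections import defaultdict

def per_participant_accuracy(records):
    """records: iterable of (participant_id, true_pose, predicted_pose).
    Returns participant_id -> fraction of frames classified correctly."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for pid, true_pose, pred_pose in records:
        totals[pid] += 1
        hits[pid] += int(true_pose == pred_pose)
    return {pid: hits[pid] / totals[pid] for pid in totals}
```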
6.
IEEE Trans Cybern ; 43(6): 1532-45, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23807478

ABSTRACT

Surface motion capture (SurfCap) of actor performance from multiple-view video provides reconstruction of the natural nonrigid deformation of skin and clothing. This paper introduces techniques for interactive animation control of SurfCap sequences which allow the flexibility in editing and interactive manipulation associated with existing tools for animation from skeletal motion capture (MoCap). Laplacian mesh editing is extended using a basis model learned from SurfCap sequences to constrain the surface shape to reproduce natural deformation. Three novel approaches for animation control of SurfCap sequences, which exploit the constrained Laplacian mesh editing, are introduced: 1) space-time editing for interactive sequence manipulation; 2) skeleton-driven animation to achieve natural nonrigid surface deformation; and 3) hybrid combination of skeletal MoCap-driven animation and SurfCap sequences to extend the range of movement. These approaches are combined with high-level parametric control of SurfCap sequences in a hybrid surface- and skeleton-driven animation control framework, achieving natural surface deformation with an extended range of movement by exploiting existing MoCap archives. Each approach and the integrated animation framework are evaluated on real SurfCap sequences of actors performing multiple motions in a variety of clothing styles. Results demonstrate that these techniques enable flexible control for interactive animation with the natural nonrigid surface dynamics of the captured performance, and provide a powerful tool for extending current SurfCap databases by incorporating new motions from MoCap sequences.


Subject(s)
Computer Graphics , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Joints/physiology , Movement/physiology , Photography/methods , Video Recording/methods , Humans , Joints/anatomy & histology , Surface Properties , Whole Body Imaging/methods
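Laplacian mesh editing preserves each vertex's differential coordinates (its offset from the mean of its neighbours) while positional constraints deform the mesh. A 2-D sketch of computing those deltas on a toy polyline; the constrained least-squares solve and the learned SurfCap basis model from the paper are omitted, and the names are illustrative.

```python
def laplacian_deltas(verts, nbrs):
    """Differential (Laplacian) coordinates for 2-D vertices.
    verts: list of (x, y); nbrs: vertex index -> list of neighbour indices.
    Each delta is the vertex minus the mean of its neighbours; Laplacian
    editing solves for new positions that keep these deltas under constraints."""
    deltas = []
    for i, (vx, vy) in enumerate(verts):
        mx = sum(verts[j][0] for j in nbrs[i]) / len(nbrs[i])
        my = sum(verts[j][1] for j in nbrs[i]) / len(nbrs[i])
        deltas.append((vx - mx, vy - my))
    return deltas
```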
7.
IEEE Trans Vis Comput Graph ; 19(5): 762-73, 2013 May.
Article in English | MEDLINE | ID: mdl-23492379

ABSTRACT

A 4D parametric motion graph representation is presented for interactive animation from actor performance capture in a multiple camera studio. The representation is based on a 4D model database of temporally aligned mesh sequence reconstructions for multiple motions. High-level movement controls such as speed and direction are achieved by blending multiple mesh sequences of related motions. A real-time mesh sequence blending approach is introduced, which combines the realistic deformation of previous nonlinear solutions with efficient online computation. Transitions between different parametric motion spaces are evaluated in real time based on surface shape and motion similarity. Four-dimensional parametric motion graphs allow real-time interactive character animation while preserving the natural dynamics of the captured performance.


Subject(s)
Computer Graphics , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Locomotion/physiology , Pattern Recognition, Automated/methods , Subtraction Technique , User-Computer Interface , Algorithms , Artificial Intelligence , Humans , Image Enhancement/methods , Numerical Analysis, Computer-Assisted , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted
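Parametric control here means blending temporally aligned mesh sequences of related motions (e.g. varying speed between a walk and a run). A deliberately simplified linear per-vertex blend of two aligned frames is sketched below; the paper's real-time approach approximates previous nonlinear blending solutions, which this toy version does not capture.

```python
def blend_frames(frame_a, frame_b, w):
    """Blend two temporally aligned mesh frames (lists of (x, y, z) vertex
    tuples) with weight w in [0, 1]: w = 0 gives frame_a, w = 1 gives frame_b."""
    return [tuple((1 - w) * a + w * b for a, b in zip(va, vb))
            for va, vb in zip(frame_a, frame_b)]
```

Sweeping `w` over a sequence of aligned frame pairs gives continuous high-level control, such as speed, between the two captured motions.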