1.
Geroscience ; 46(1): 39-56, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37597113

ABSTRACT

DNA methylation (DNAm)-based age clocks have been studied extensively as a biomarker of human ageing and a risk factor for age-related diseases. Despite different tissues having vastly different rates of proliferation, it is still largely unknown whether they age at different rates. It was previously reported that the cerebellum ages slowly; however, this claim was drawn from a single clock using a relatively small sample size and so warrants further investigation. We collected the largest cerebellum DNAm dataset (N = 752) to date. We found that the respective epigenetic ages are all severely underestimated by six representative DNAm age clocks, with the underestimation more pronounced in the four clocks whose training datasets do not include brain-related tissues. We identified 613 age-associated CpGs in the cerebellum, only 14.5% of the number found in the middle temporal gyrus from the same population (N = 404). From these 613 cerebellum age-associated CpGs, we built a highly accurate age prediction model for the cerebellum, named CerebellumClock_specific (Pearson correlation = 0.941, MAD = 3.18 years). Ageing-rate comparisons based on the two tissue-specific clocks constructed on the 201 overlapping age-associated CpGs support the cerebellum having a younger DNAm age. Nevertheless, we built BrainCortexClock to show that a single DNAm clock can estimate DNAm ages of both cerebellum and cerebral cortex without bias when the two tissues are adequately and equally represented in its training dataset. Comparing ageing rates across tissues using multi-tissue DNA methylation clocks is therefore flawed: the large underestimation of cerebellum age predictions by previous clocks mainly reflects improper usage of those clocks. There exist strong and consistent ageing effects on the cerebellar methylome, and we suggest the smaller number of age-associated CpG sites in the cerebellum is largely attributable to its extremely low average cell replication rate.
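A minimal sketch of how a tissue-specific DNAm clock of this kind is typically built, assuming a matrix of beta values restricted to pre-selected age-associated CpGs; the placeholder data, the elastic-net choice, and the train/test split are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import train_test_split

# Placeholders only: betas is a (samples x CpGs) matrix of methylation
# beta values at pre-selected age-associated CpGs; ages holds the
# chronological ages. Shapes mirror the abstract (752 samples, 613 CpGs).
rng = np.random.default_rng(0)
betas = rng.uniform(0, 1, size=(752, 613))
ages = rng.uniform(20, 90, size=752)

X_train, X_test, y_train, y_test = train_test_split(
    betas, ages, test_size=0.25, random_state=0)

# Penalised linear regression (elastic net) is the standard recipe for
# Horvath-style DNAm clocks, hence its use in this sketch.
clock = ElasticNetCV(cv=5, random_state=0).fit(X_train, y_train)

pred = clock.predict(X_test)
r = np.corrcoef(pred, y_test)[0, 1]        # Pearson correlation
mad = np.median(np.abs(pred - y_test))     # median absolute deviation (years)
print(f"Pearson r = {r:.3f}, MAD = {mad:.2f} years")
```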


Subject(s)
DNA Methylation , Epigenesis, Genetic , Humans , Aging/genetics , Epigenome , Epigenomics
2.
Article in English | MEDLINE | ID: mdl-37276101

ABSTRACT

The application of machine learning to tele-rehabilitation faces the challenge of limited data availability. To overcome this challenge, data augmentation techniques are commonly employed to generate synthetic data that reflect the configurations of real data. One promising data augmentation technique is the Generative Adversarial Network (GAN). However, GANs suffer from mode collapse, a common issue in which the generated data fail to capture all the relevant modes of the original dataset. In this paper, we address the problem of mode collapse in GAN-based data augmentation for post-stroke assessment. We applied a GAN to generate synthetic data for two post-stroke rehabilitation datasets and observed that the original GAN suffered from mode collapse, as expected. To address this issue, we propose a Time Series Siamese GAN (TS-SGAN) that incorporates a Siamese network and an additional discriminator. Our analysis, using the longest common sub-sequence (LCSS), demonstrates that TS-SGAN generates data uniformly across all elements of the two testing datasets, in contrast to the original GAN. To further evaluate the effectiveness of TS-SGAN, we encode the generated data as images using the Gramian Angular Field and classify them with ResNet-18. Our results show that TS-SGAN achieves a substantial increase in classification accuracy (35.2%-42.07%) on both selected datasets, a marked improvement over the original GAN.
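The Gramian Angular Field encoding step is simple enough to sketch directly. The minimal numpy implementation below uses the summation variant (GASF), which is an assumption about which variant was used; libraries such as pyts offer equivalent encoders:

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D time series as a Gramian Angular Summation Field."""
    # Rescale to [-1, 1] so the arccos mapping to polar angles is defined.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    # GASF entry (i, j) is cos(phi_i + phi_j).
    return np.cos(phi[:, None] + phi[None, :])

# Toy series standing in for a generated rehabilitation movement signal.
series = np.sin(np.linspace(0, 4 * np.pi, 64))
image = gramian_angular_field(series)   # 64 x 64 image, ready for a CNN
print(image.shape)
```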


Subject(s)
Stroke Rehabilitation , Stroke , Humans , Time Factors , Machine Learning
3.
Bioengineering (Basel) ; 10(6)2023 May 26.
Article in English | MEDLINE | ID: mdl-37370583

ABSTRACT

Gait analysis plays an important role in healthcare and sports science. Conventional gait analysis relies on costly equipment such as optical motion capture cameras and wearable sensors, some of which require trained assessors for data collection and processing. With recent developments in computer vision and deep neural networks, 3D human pose estimation from monocular RGB cameras has shown tremendous promise as a cost-effective and efficient solution for clinical gait analysis. In this paper, a markerless human pose technique is developed for clinical gait analysis using motion captured by a consumer monocular camera (800 × 600 pixels at 30 FPS). Experimental results on the MoVi dataset show that the proposed post-processing algorithm improved the prediction performance of the original human pose detection model (BlazePose) by 10.7% relative to gold-standard gait signals. In addition, the predicted T2 score correlates excellently with ground truth (r = 0.99, regression line y = 0.94x + 0.01), supporting our approach as a potential alternative to conventional marker-based solutions for clinical gait assessment.
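The abstract does not detail the post-processing algorithm itself; one common post-processing step for markerless pose outputs is zero-phase low-pass filtering of the keypoint trajectories, sketched below under that assumption (the cutoff frequency and filter order are also assumed):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_keypoints(traj: np.ndarray, fs: float = 30.0,
                     cutoff: float = 6.0) -> np.ndarray:
    """Zero-phase low-pass filter applied along the time axis.

    traj: (frames, joints, 3) array of 3D keypoints, e.g. BlazePose output.
    fs:   camera frame rate (30 FPS in the paper's setup).
    """
    b, a = butter(N=4, Wn=cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, traj, axis=0)   # filtfilt avoids phase lag

# Toy trajectory: a noisy random walk over 300 frames and 33 joints.
frames = np.cumsum(np.random.randn(300, 33, 3) * 0.01, axis=0)
smoothed = smooth_keypoints(frames)
```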

4.
Front Bioeng Biotechnol ; 10: 877347, 2022.
Article in English | MEDLINE | ID: mdl-35646876

ABSTRACT

Knee joint moments are commonly calculated to provide an indirect measure of knee joint loads. A shortcoming of inverse dynamics approaches is that collecting and processing human motion data can be time-consuming. This study aimed to benchmark five different deep learning methods that use walking segment kinematics to predict the internal knee abduction impulse during walking. Three-dimensional kinematic and kinetic data came from a publicly available walking dataset (n = 33 participants). The prediction target was the internal knee abduction impulse over the stance phase. Three-dimensional (3D) angular and linear displacement, velocity, and acceleration of the seven lower-body segments' centres of mass, relative to a fixed global coordinate system, formed the predictor space (126 time-series predictors). The dataset comprised 6,737 observations, split into training (75%, n = 5,052) and testing (25%, n = 1,685) sets. Five deep learning models were benchmarked against inverse dynamics in quantifying knee abduction impulse. A baseline 2D convolutional network achieved a mean absolute percentage error (MAPE) of 10.80%. Transfer learning with InceptionTime was the best-performing model, achieving a MAPE of 8.28%. Encoding the time series as images and then using a 2D convolutional model performed worse than the baseline, with a MAPE of 16.17%. Time-series-based deep learning models were thus superior to an image-based method for predicting knee abduction moment impulse during walking. Future studies looking to develop wearable technologies will benefit from knowing the optimal network architecture and the value of transfer learning for predicting joint moments.
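A hedged sketch of a baseline 2D convolutional regressor for this task, assuming the stance phase is resampled to 101 time points (a common convention, not stated in the abstract); the layer sizes are arbitrary, and MAPE is tracked as in the paper:

```python
import numpy as np
import tensorflow as tf

# Placeholder shapes: 6,737 stance phases, 101 time points, 126 predictors.
X = np.random.rand(6737, 101, 126, 1).astype("float32")
y = np.random.rand(6737, 1).astype("float32")   # knee abduction impulse

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(101, 126, 1)),
    tf.keras.layers.Conv2D(16, (5, 5), activation="relu"),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                    # scalar impulse output
])
model.compile(optimizer="adam", loss="mae",
              metrics=[tf.keras.metrics.MeanAbsolutePercentageError()])
model.fit(X, y, validation_split=0.25, epochs=3, batch_size=64)
```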

5.
Bioinformatics ; 38(16): 3950-3957, 2022 08 10.
Article in English | MEDLINE | ID: mdl-35771651

ABSTRACT

MOTIVATION: Data normalization is an essential step to reduce technical variation within and between arrays. Because of their different karyotypes and the effects of X chromosome inactivation, females and males exhibit distinct methylation patterns on the sex chromosomes, which makes it challenging to normalize sex chromosome data without introducing bias. Existing methods do not provide unbiased solutions for normalizing sex chromosome data; they usually process autosomes and sex chromosomes indiscriminately. RESULTS: Here, we demonstrate that ignoring this sex difference introduces artificial sex bias, especially at thousands of autosomal CpGs. We present a novel two-step strategy (interpolatedXY) to address this issue, applicable to all quantile-based normalization methods. Under this strategy, the autosomal CpGs are first normalized independently by conventional methods, such as funnorm or dasen; the corrected methylation values of sex chromosome-linked CpGs are then estimated as the weighted average of their nearest neighbours on the autosomes. The two-step strategy can also be applied to non-quantile-based normalization methods, as well as to other array-based data types. Moreover, we propose a useful concept, the sex-explained fraction of variance, to quantitatively measure the normalization effect. AVAILABILITY AND IMPLEMENTATION: The proposed methods are available by calling the function 'adjustedDasen' or 'adjustedFunnorm' in the latest wateRmelon package (https://github.com/schalkwyk/wateRmelon), with methods compatible with all the major workflows, including minfi. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
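A rough numpy sketch of the two-step idea as the abstract describes it; the authoritative implementation is the 'adjustedDasen'/'adjustedFunnorm' R code in wateRmelon, and the neighbour definition (nearest in raw value) and inverse-distance weighting below are assumptions:

```python
import numpy as np

def quantile_normalise(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for a conventional method such as dasen or funnorm:
    map each sample (column) onto the mean quantile profile."""
    ranks = np.argsort(np.argsort(x, axis=0), axis=0)
    ref = np.sort(x, axis=0).mean(axis=1)
    return ref[ranks]

def interpolated_xy(raw: np.ndarray, is_autosome: np.ndarray,
                    k: int = 10) -> np.ndarray:
    """Sketch of the two-step interpolatedXY strategy.

    raw:         (probes, samples) beta values.
    is_autosome: boolean mask over probes.
    """
    out = raw.astype(float).copy()
    # Step 1: normalise autosomal probes alone.
    out[is_autosome] = quantile_normalise(raw[is_autosome])
    # Step 2: per sample, estimate each sex-chromosome probe as the
    # inverse-distance-weighted mean of the k autosomal probes whose
    # raw values lie closest to its own raw value.
    for s in range(raw.shape[1]):
        auto_raw, auto_norm = raw[is_autosome, s], out[is_autosome, s]
        order = np.argsort(auto_raw)
        sorted_raw = auto_raw[order]
        for i in np.where(~is_autosome)[0]:
            j = np.searchsorted(sorted_raw, raw[i, s])
            lo, hi = max(0, j - k // 2), min(len(order), j + k // 2)
            idx = order[lo:hi]
            w = 1.0 / (np.abs(auto_raw[idx] - raw[i, s]) + 1e-8)
            out[i, s] = np.average(auto_norm[idx], weights=w)
    return out

# Toy demo: 450 autosomal and 50 sex-chromosome probes, 8 samples.
rng = np.random.default_rng(0)
raw = rng.uniform(0, 1, size=(500, 8))
norm = interpolated_xy(raw, np.arange(500) < 450)
```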


Subject(s)
DNA Methylation , Sexism , Female , Male , Humans , Oligonucleotide Array Sequence Analysis/methods , Protein Processing, Post-Translational
6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2242-2247, 2021 11.
Article in English | MEDLINE | ID: mdl-34891733

ABSTRACT

The recent COVID-19 pandemic has further highlighted the need to improve tele-rehabilitation systems. A common approach is to use wearable sensors for monitoring patients and intelligent algorithms for accurate, objective assessments. An important part of this work is developing an efficient evaluation algorithm with a high-precision activity recognition rate. In this paper, we investigated sixteen state-of-the-art time-series deep learning algorithms spanning four architectures: eight convolutional neural network configurations, six recurrent neural networks, a combination of the two, and a wavelet-based neural network. Additionally, data from different sensor combinations and placements, as well as different pre-processing algorithms, were explored to determine the configuration that achieves the best performance. Our results show that the XceptionTime CNN architecture performs best on normalised data. Moreover, we found that sensor placement is the most important attribute for improving system accuracy: applying the algorithm to data from sensors placed on the waist achieved at most 42% accuracy, whereas sensors placed on the hand achieved 84%. Consequently, compared with current results on the same dataset for the different classification categories, this approach improved the existing state-of-the-art accuracy from 79% to 84% and from 80% to 90%, respectively.
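A sketch of the usual front end for this kind of wearable-sensor pipeline: sliding-window segmentation plus per-window z-score normalisation. The window width, step, and normalisation scheme are assumptions for illustration; the paper compares several pre-processing options:

```python
import numpy as np

def make_windows(stream: np.ndarray, width: int = 128,
                 step: int = 64) -> np.ndarray:
    """Segment a (samples, channels) sensor stream into overlapping
    windows and z-score normalise each channel within each window."""
    starts = range(0, len(stream) - width + 1, step)
    windows = np.stack([stream[i:i + width] for i in starts])
    mu = windows.mean(axis=1, keepdims=True)
    sd = windows.std(axis=1, keepdims=True) + 1e-8
    return (windows - mu) / sd

imu = np.random.randn(3000, 6)   # toy 6-axis IMU stream
X = make_windows(imu)            # (45, 128, 6) network-ready windows
```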


Subject(s)
COVID-19 , Deep Learning , Stroke Rehabilitation , Humans , Pandemics , SARS-CoV-2
7.
BMC Genomics ; 22(1): 484, 2021 Jun 28.
Article in English | MEDLINE | ID: mdl-34182928

ABSTRACT

BACKGROUND: Sex is an important covariate in epigenome-wide association studies because of its strong influence on DNA methylation patterns across numerous genomic positions. Nevertheless, many samples on the Gene Expression Omnibus (GEO) lack a sex annotation or are incorrectly labelled. Given the influence sex has on DNA methylation patterns, methods for filtering poor samples and checking sex assignment must be accurate and widely applicable. RESULTS: Here we present a novel method to predict sex using only DNA methylation beta values, which can be readily applied to almost all DNA methylation datasets uploaded to GEO, whatever their format (raw IDATs or text files containing only signal intensities). We identified 4,345 significantly (p < 0.01) sex-associated CpG sites present on both the 450K and EPIC arrays, and constructed a sex classifier based on the first two principal components of the DNA methylation data of sex-associated probes mapped to the sex chromosomes. The proposed method is trained on whole blood samples and exhibits good performance across a wide range of tissues. We further demonstrate that our method can identify samples with sex chromosome aneuploidy; this function was validated with five Turner syndrome cases and one Klinefelter syndrome case. CONCLUSIONS: The proposed sex classifier can be used not only for sex prediction but also to identify samples with sex chromosome aneuploidy, and it is freely and easily accessible by calling the 'estimateSex' function from the newest wateRmelon Bioconductor package (https://github.com/schalkwyk/wateRmelon).
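A toy sketch of the classifier's core idea, using simulated beta values for sex-chromosome probes; the real 'estimateSex' function in wateRmelon operates on array data and differs in detail:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Simulated beta values at sex-chromosome-mapped, sex-associated probes
# (rows: samples). XX samples sit near 0.5 on chrX due to X inactivation;
# XY samples split towards the extremes. Values are illustrative only.
rng = np.random.default_rng(1)
females = rng.normal(0.5, 0.05, size=(50, 200))
males = np.concatenate([rng.normal(0.8, 0.05, size=(50, 100)),
                        rng.normal(0.1, 0.05, size=(50, 100))], axis=1)
betas = np.vstack([females, males])

# The first two principal components separate the sexes; a simple
# 2-cluster assignment (or a threshold on PC1) then labels samples.
pcs = PCA(n_components=2).fit_transform(betas)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(pcs)
# Samples falling far from both cluster centres are candidates for
# sex chromosome aneuploidy (e.g. Turner or Klinefelter syndrome).
```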


Subject(s)
DNA Methylation , Genomics , Aneuploidy , CpG Islands , Humans , Sex Chromosomes/genetics
8.
PLoS One ; 14(5): e0216197, 2019.
Article in English | MEDLINE | ID: mdl-31075113

ABSTRACT

Two novel image denoising algorithms are proposed that employ a goodness-of-fit (GoF) test at multiple image scales. The proposed methods apply the GoF test locally to the wavelet coefficients of a noisy image obtained via the discrete wavelet transform (DWT) and the dual-tree complex wavelet transform (DT-CWT), respectively. We formulate image denoising as a binary hypothesis testing problem, with the null hypothesis indicating the presence of noise and the alternative hypothesis representing the presence of the desired signal only. The decision that a given wavelet coefficient corresponds to the null or the alternative hypothesis involves GoF testing based on the empirical distribution function (EDF), applied locally to the noisy wavelet coefficients. The performance of the proposed methods is validated by comparison against state-of-the-art image denoising methods.
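A sketch of the DWT variant under stated assumptions: the noise standard deviation is known, the EDF-based GoF test is taken to be a Kolmogorov-Smirnov test, and coefficient blocks consistent with the noise-only null are zeroed. The authors' exact test statistic and decision rule may differ:

```python
import numpy as np
import pywt
from scipy.stats import kstest

def gof_denoise(img: np.ndarray, sigma: float, win: int = 8,
                alpha: float = 0.05) -> np.ndarray:
    """Local GoF testing on DWT detail coefficients: blocks whose
    coefficients look like pure N(0, sigma^2) noise (null not rejected)
    are zeroed; blocks where the null is rejected are kept as signal."""
    coeffs = pywt.wavedec2(img, "db4", level=2)
    out = [coeffs[0]]                       # keep approximation band
    for details in coeffs[1:]:
        kept = []
        for band in details:
            band = band.copy()
            for r in range(0, band.shape[0] - win + 1, win):
                for c in range(0, band.shape[1] - win + 1, win):
                    block = band[r:r + win, c:c + win].ravel()
                    # EDF-based GoF test against the standard normal.
                    p = kstest(block / sigma, "norm").pvalue
                    if p > alpha:           # cannot reject "noise only"
                        band[r:r + win, c:c + win] = 0.0
            kept.append(band)
        out.append(tuple(kept))
    return pywt.waverec2(out, "db4")

noisy = np.random.randn(128, 128)           # pure-noise toy input
clean = gof_denoise(noisy, sigma=1.0)
```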


Subject(s)
Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Algorithms , Artifacts , Computer Simulation , Humans , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Statistical Distributions , Wavelet Analysis
9.
Sensors (Basel) ; 19(1)2019 Jan 08.
Article in English | MEDLINE | ID: mdl-30626102

ABSTRACT

Output from imaging sensors based on CMOS and CCD devices is prone to noise due to inherent electronic fluctuations and low photon counts. The resulting noise in the acquired image can be effectively modelled as signal-dependent Poisson noise or as a mixture of Poisson and Gaussian noise. To that end, we propose a generalized framework based on detection theory and hypothesis testing, coupled with the variance stabilization transformation (VST), for Poisson or Poisson-Gaussian denoising. The VST converts signal-dependent Poisson noise into signal-independent Gaussian noise with stable variance. Subsequently, multiscale transforms are employed on the noisy image to segregate signal and noise into separate coefficients, which facilitates the application of local binary hypothesis testing at multiple scales using the empirical distribution function (EDF) for the detection and removal of noise. We demonstrate the effectiveness of the proposed framework with different multiscale transforms and on a wide variety of input datasets.
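The classic VST for Poisson data is the Anscombe transform, sketched below. The generalized Anscombe transform handles the mixed Poisson-Gaussian case, and unbiased inverses exist that the simple algebraic inverse here ignores:

```python
import numpy as np

def anscombe(x: np.ndarray) -> np.ndarray:
    """Variance-stabilising transform for Poisson counts: the output
    noise is approximately Gaussian with unit variance."""
    return 2.0 * np.sqrt(x + 3.0 / 8.0)

def inverse_anscombe(y: np.ndarray) -> np.ndarray:
    """Simple algebraic inverse (biased at low counts; exact unbiased
    inverses are available in the literature)."""
    return (y / 2.0) ** 2 - 3.0 / 8.0

counts = np.random.poisson(lam=20.0, size=(64, 64)).astype(float)
stabilised = anscombe(counts)    # apply any Gaussian denoiser here
restored = inverse_anscombe(stabilised)
```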

10.
Research (Wash D C) ; 2019: 9686213, 2019.
Article in English | MEDLINE | ID: mdl-31922148

ABSTRACT

Electromagnetic waves carrying orbital angular momentum (OAM) are of great interest. However, most OAM antennas suffer from disadvantages such as complicated structure, low efficiency, and a large divergence angle, which prevent their practical application. To date, little research has focused on the divergence-angle problem. Herein, a metasurface antenna is proposed to obtain OAM beams with a small divergence angle. A circular arrangement and a phase gradient are used, respectively, to simplify the structure of the metasurface and to obtain the small divergence angle. The proposed metasurface antenna exhibits a high transmission coefficient and effectively decreases the divergence angle of the OAM beam. All the theoretical analyses and derivations were validated by both simulations and experiments. This compact structure paves the way to generating OAM beams with a small divergence angle.
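For illustration, the transmission-phase profile that a metasurface must impose to generate an OAM beam of topological charge l, combined with a linear phase gradient, can be computed directly. The frequency, aperture size, and steering angle below are arbitrary assumptions, not the paper's design values:

```python
import numpy as np

l = 2                        # topological charge (assumed)
f = 10e9                     # 10 GHz design frequency (assumed)
k0 = 2 * np.pi * f / 3e8     # free-space wavenumber
theta0 = np.deg2rad(10)      # phase-gradient steering angle (assumed)

x = np.linspace(-0.15, 0.15, 61)      # aperture coordinates in metres
X, Y = np.meshgrid(x, x)
# Spiral phase (OAM term) plus a linear gradient across the aperture.
phi = l * np.arctan2(Y, X) + k0 * np.sin(theta0) * X
phi = np.mod(phi, 2 * np.pi)          # phase required at each unit cell
```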

11.
Comput Biol Med ; 88: 132-141, 2017 09 01.
Article in English | MEDLINE | ID: mdl-28719805

ABSTRACT

We present a data-driven approach to classify ictal (epileptic seizure) and non-ictal EEG signals using the multivariate empirical mode decomposition (MEMD) algorithm. MEMD is a multivariate extension of empirical mode decomposition (EMD), an established method for the decomposition and time-frequency (T-F) analysis of non-stationary datasets. We select suitable feature sets for classification based on the multiscale T-F representation of the EEG data obtained via MEMD. Classification is performed using artificial neural networks. The efficacy of the proposed method is verified on extensive, publicly available EEG datasets.
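A hedged sketch of the pipeline using univariate EMD (from the PyEMD package) as a stand-in for MEMD, with per-IMF log-energy as an assumed feature set; the paper's features and network may differ:

```python
import numpy as np
from PyEMD import EMD                       # univariate EMD stand-in;
                                            # the paper uses MEMD
from sklearn.neural_network import MLPClassifier

def imf_features(sig: np.ndarray, n_imfs: int = 4) -> np.ndarray:
    """Per-IMF log-energy as a simple multiscale T-F feature vector."""
    imfs = EMD()(sig)
    if len(imfs) < n_imfs:                  # pad short decompositions
        pad = np.zeros((n_imfs - len(imfs), sig.size))
        imfs = np.vstack([imfs, pad])
    return np.log(np.sum(imfs[:n_imfs] ** 2, axis=1) + 1e-12)

rng = np.random.default_rng(0)
segs = rng.standard_normal((40, 512))       # toy EEG segments
labels = rng.integers(0, 2, size=40)        # ictal vs non-ictal (toy)
X = np.stack([imf_features(s) for s in segs])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, labels)
```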


Subject(s)
Diagnosis, Computer-Assisted/methods , Electroencephalography/methods , Seizures/diagnosis , Signal Processing, Computer-Assisted , Algorithms , Epilepsy/diagnosis , Humans
12.
Sensors (Basel) ; 15(7): 16804-30, 2015 Jul 10.
Article in English | MEDLINE | ID: mdl-26184211

ABSTRACT

The integral image, an intermediate image representation, has found extensive use in multi-scale local feature detection algorithms such as Speeded-Up Robust Features (SURF), allowing fast computation of rectangular features at constant speed, independent of filter size. For resource-constrained real-time embedded vision systems, computing and storing the integral image presents several design challenges due to strict timing and hardware limitations. Although calculating the integral image involves only simple additions, the total number of operations is large owing to the generally large size of image data. Recursive equations allow a substantial decrease in the number of operations but require serial calculation. This paper presents two new hardware algorithms, based on decompositions of these recursive equations, that allow up to four integral image values to be calculated in a row-parallel way without significantly increasing the number of operations. An efficient design strategy is also proposed for a parallel integral image computation unit that reduces the required internal memory by nearly 35% for common HD video. Addressing the storage problem of the integral image in embedded vision systems, the paper presents two algorithms that reduce the memory requirements by at least 44.44%. Finally, the paper provides a case study highlighting the utility of the proposed architectures in embedded vision systems.
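The recursive equations in question are the classic row-sum/integral-sum pair; a reference serial implementation, checked against the cumulative-sum equivalent:

```python
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    """Serial form of the recursive equations:
         s(x, y)  = s(x, y-1)  + i(x, y)    (running row sum)
         ii(x, y) = ii(x-1, y) + s(x, y)
    Any rectangle sum then needs only four ii lookups."""
    h, w = img.shape
    s = np.zeros((h, w), dtype=np.int64)
    ii = np.zeros((h, w), dtype=np.int64)
    for x in range(h):
        for y in range(w):
            s[x, y] = (s[x, y - 1] if y else 0) + img[x, y]
            ii[x, y] = (ii[x - 1, y] if x else 0) + s[x, y]
    return ii

img = np.random.randint(0, 256, (8, 8))
assert np.array_equal(integral_image(img), img.cumsum(0).cumsum(1))
```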

13.
Sensors (Basel) ; 15(5): 10923-47, 2015 May 08.
Article in English | MEDLINE | ID: mdl-26007714

ABSTRACT

A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD) algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD)-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF) containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA), discrete wavelet transform (DWT) and non-subsampled contourlet transform (NCT). A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
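A sketch of one plausible pixel-level fusion rule for MEMD-aligned IMFs: per scale, keep the coefficient with the largest local energy. The MEMD decomposition itself is assumed to be given, and the paper's actual fusion rule may differ:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_imfs(imfs: np.ndarray, win: int = 5) -> np.ndarray:
    """imfs: (n_scales, n_images, H, W) same-indexed IMFs, aligned by
    MEMD so that each scale carries comparable frequency content.
    Per scale and pixel, pick the input with the largest local energy,
    then sum over scales to reconstruct the fused image."""
    n_scales, n_images, h, w = imfs.shape
    fused = np.zeros((n_scales, h, w))
    for k in range(n_scales):
        energy = np.stack([uniform_filter(imfs[k, m] ** 2, size=win)
                           for m in range(n_images)])
        pick = energy.argmax(axis=0)                  # winning input image
        fused[k] = np.take_along_axis(imfs[k], pick[None], axis=0)[0]
    return fused.sum(axis=0)

toy = np.random.randn(6, 2, 64, 64)   # placeholder decomposition
fused_image = fuse_imfs(toy)
```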

14.
Sensors (Basel) ; 13(8): 10876-907, 2013 Aug 19.
Article in English | MEDLINE | ID: mdl-23966187

ABSTRACT

A vision system that can assess its own performance and take appropriate actions online to maximize its effectiveness would be a step towards achieving the long-cherished goal of imitating humans. This paper proposes a method for performing an online performance analysis of local feature detectors, the primary stage of many practical vision systems. It advocates the spatial distribution of local image features as a good performance indicator and presents a metric that can be calculated rapidly, concurs with human visual assessments and is complementary to existing offline measures such as repeatability. The metric is shown to provide a measure of complementarity for combinations of detectors, correctly reflecting the underlying principles of individual detectors. Qualitative results on well-established datasets for several state-of-the-art detectors are presented based on the proposed measure. Using a hypothesis testing approach and a newly-acquired, larger image database, statistically-significant performance differences are identified. Different detector pairs and triplets are examined quantitatively and the results provide a useful guideline for combining detectors in applications that require a reasonable spatial distribution of image features. A principled framework for combining feature detectors in these applications is also presented. Timing results reveal the potential of the metric for online applications.
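The abstract does not reproduce the metric itself; one fast, plausible spatial-distribution measure in the same spirit is the normalised entropy of keypoint counts over a coarse grid, sketched below as an assumption rather than the paper's formula:

```python
import numpy as np

def coverage_score(keypoints: np.ndarray, shape: tuple,
                   grid: int = 8) -> float:
    """Normalised entropy of the keypoint histogram over a grid x grid
    partition of the image; 1.0 means a perfectly uniform spread."""
    h, w = shape
    rows = np.clip((keypoints[:, 1] / h * grid).astype(int), 0, grid - 1)
    cols = np.clip((keypoints[:, 0] / w * grid).astype(int), 0, grid - 1)
    hist = np.bincount(rows * grid + cols, minlength=grid * grid)
    p = hist / hist.sum()
    ent = -np.sum(p[p > 0] * np.log(p[p > 0]))
    return float(ent / np.log(grid * grid))

pts = np.random.rand(500, 2) * [640, 480]       # toy (x, y) keypoints
print(coverage_score(pts, shape=(480, 640)))    # near 1.0 when uniform
```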


Subject(s)
Algorithms , Artificial Intelligence , Biomimetics/methods , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods
15.
IEEE Trans Image Process ; 21(1): 297-304, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21712160

ABSTRACT

Speeded-Up Robust Features is a feature extraction algorithm designed for real-time execution, although this is rarely achievable on low-power hardware such as that in mobile robots. One way to reduce the computation is to discard some of the scale-space octaves, and previous research has simply discarded the higher octaves. This paper shows that this approach is not always the most sensible and presents an algorithm for choosing which octaves to discard based on the properties of the imagery. Results obtained with this best octaves algorithm show that it is able to achieve a significant reduction in computation without compromising matching performance.
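A hedged sketch of the selection logic only: the paper derives its criterion from properties of the imagery, whereas here per-octave utilities and computation costs are assumed to have been estimated offline from sample frames:

```python
from itertools import combinations

def best_octaves(scores, costs, budget, keep=2):
    """Choose which scale-space octaves to retain: maximise summed
    matching utility subject to a computation budget (an illustrative
    stand-in for the paper's image-derived criterion)."""
    best, best_util = None, -1.0
    for subset in combinations(range(len(scores)), keep):
        cost = sum(costs[o] for o in subset)
        util = sum(scores[o] for o in subset)
        if cost <= budget and util > best_util:
            best, best_util = subset, util
    return best

# Example: utilities and relative costs per octave (assumed values).
octaves = best_octaves(scores=[0.9, 0.7, 0.3, 0.1],
                       costs=[4.0, 1.0, 0.4, 0.2], budget=1.5)
print(octaves)   # -> (1, 2): dropping octave 0 is the better trade here
```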


Subject(s)
Algorithms , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity