ABSTRACT
BACKGROUND AIMS: The objective of this study was to compare the impact of umbilical cord-derived mesenchymal stromal cell (UCMSC) transplantation on the motor functions of identical twins with cerebral palsy (CP) and to analyze the correlation between efficacy and hereditary factors. METHODS: Eight pairs (16 individuals) of identical twins with CP were recruited and received allogeneic UCMSC transplantation by subarachnoid injection. The gross motor function measure (GMFM) and the fine motor function measure (FMFM) were administered before treatment and at 1 and 6 months after treatment, and the results were compared within individuals over time, between the two individuals of each twin pair, and across twin pairs. Repeated-measures analysis of variance was used to analyze the GMFM and FMFM scores before treatment and at 1 and 6 months after treatment. RESULTS: All eight pairs (16 individuals) of children with CP showed significant improvement in the GMFM at 1 and 6 months after treatment compared with baseline, whereas the improvement in the FMFM was not statistically significant. Improvements in motor function were correlated between the two individuals of each twin pair but not across twin pairs. CONCLUSIONS: UCMSC transplantation significantly improves the GMFM in children with CP; GMFM improvements were closely correlated between the two individuals of each twin pair but not across twin pairs. We hypothesize that hereditary factors contribute to the mechanisms by which UCMSC transplantation improves motor function in children with CP.
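The analysis described above is a standard repeated-measures design: one within-subject factor (time) with three levels (baseline, 1 month, 6 months). Purely as an illustration of that design, the sketch below runs a repeated-measures ANOVA with statsmodels on hypothetical placeholder scores; the column names and values are assumptions for illustration, not the study's data.

```python
# Minimal sketch of a repeated-measures ANOVA on GMFM-style scores.
# All data here are hypothetical placeholders, not the study's data.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Long format: one row per child per time point (16 children x 3 time points).
subjects = np.repeat(np.arange(16), 3)
timepoints = np.tile(["baseline", "1_month", "6_months"], 16)
# Placeholder scores with a small upward trend over time plus noise.
scores = 50 + np.tile([0.0, 3.0, 6.0], 16) + rng.normal(0, 2, 48)

data = pd.DataFrame({"subject": subjects, "timepoint": timepoints, "gmfm": scores})

# One within-subject factor (timepoint); AnovaRM requires balanced data,
# i.e., every subject measured at every time point.
result = AnovaRM(data, depvar="gmfm", subject="subject", within=["timepoint"]).fit()
print(result)
```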
Subject(s)
Cell- and Tissue-Based Therapy/methods, Cerebral Palsy/therapy, Mesenchymal Stem Cell Transplantation, Mesenchymal Stem Cells/physiology, Motor Skills/physiology, Cerebral Palsy/physiopathology, Child, Child, Preschool, Female, Humans, Male, Pilot Projects, Twins, Monozygotic, Umbilical Cord/cytology
ABSTRACT
Recently, there have been efforts to improve performance in sign language recognition by designing self-supervised learning methods. However, these methods capture limited information from sign pose data in a frame-wise learning manner, leading to sub-optimal solutions. To this end, we propose a simple yet effective self-supervised contrastive learning framework that mines rich context via spatial-temporal consistency from two distinct perspectives and learns instance-discriminative representations for sign language recognition. On one hand, since the semantics of sign language are expressed by the cooperation of fine-grained hands and coarse-grained trunks, we utilize information at both granularities and encode it into separate latent spaces. Consistency between hand and trunk features is enforced to encourage learning consistent representations of instance samples. On the other hand, inspired by the complementary property of the motion and joint modalities, we are the first to introduce first-order motion information into sign language modeling. We further bridge the interaction between the embedding spaces of the two modalities, facilitating bidirectional knowledge transfer to enhance sign language representations. Our method is evaluated with extensive experiments on four public benchmarks and achieves new state-of-the-art performance by a notable margin. The source code is publicly available at https://github.com/sakura/Code.
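As a rough illustration of the hand-trunk consistency idea described above, the sketch below computes a symmetric InfoNCE-style contrastive loss that treats the hand and trunk embeddings of the same instance as a positive pair and other instances in the batch as negatives. The function name, embedding dimensions, and temperature are illustrative assumptions, not the paper's released code.

```python
# Minimal PyTorch sketch of a hand-trunk consistency loss (InfoNCE-style).
# Module names, dimensions, and the temperature are assumptions.
import torch
import torch.nn.functional as F

def consistency_info_nce(hand_emb: torch.Tensor,
                         trunk_emb: torch.Tensor,
                         temperature: float = 0.07) -> torch.Tensor:
    """hand_emb, trunk_emb: (batch, dim) embeddings of the same instances."""
    hand = F.normalize(hand_emb, dim=1)
    trunk = F.normalize(trunk_emb, dim=1)
    # (batch, batch) cosine similarities; diagonal entries are positive pairs.
    logits = hand @ trunk.t() / temperature
    targets = torch.arange(hand.size(0), device=hand.device)
    # Symmetric loss: match hand->trunk and trunk->hand.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Usage with random stand-ins for encoder outputs:
hand_emb = torch.randn(32, 128)
trunk_emb = torch.randn(32, 128)
print(consistency_info_nce(hand_emb, trunk_emb).item())
```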
ABSTRACT
Hand gestures play a crucial role in the expression of sign language. Current deep learning-based methods for sign language understanding (SLU) are prone to over-fitting due to insufficient sign data resources and suffer from limited interpretability. In this paper, we propose SignBERT+, the first self-supervised pre-trainable SLU framework that incorporates a model-aware hand prior. In our framework, the hand pose is regarded as a visual token derived from an off-the-shelf detector. Each visual token is embedded with its gesture state and a spatial-temporal position encoding. To take full advantage of current sign data resources, we first perform self-supervised learning to model their statistics. To this end, we design multi-level masked modeling strategies (joint, frame, and clip) to mimic common detection failure cases. Jointly with these masked modeling strategies, we incorporate the model-aware hand prior to better capture hierarchical context over the sequence. After pre-training, we carefully design simple yet effective prediction heads for downstream tasks. To validate the effectiveness of our framework, we perform extensive experiments on three main SLU tasks: isolated and continuous sign language recognition (SLR) and sign language translation (SLT). Experimental results demonstrate the effectiveness of our method, which achieves new state-of-the-art performance with a notable gain.
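To make the multi-level masked modeling concrete, here is a hedged sketch of how joint-, frame-, and clip-level masking could be applied to a pose sequence to mimic common detection failures. The tensor layout, mask ratios, and mask value are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of multi-level masking over a pose-token sequence.
# Layout, ratios, and mask value are illustrative assumptions.
import torch

def multi_level_mask(poses: torch.Tensor,
                     joint_ratio: float = 0.15,
                     frame_ratio: float = 0.10,
                     clip_len: int = 4,
                     mask_value: float = 0.0) -> tuple[torch.Tensor, torch.Tensor]:
    """poses: (T, J, C) sequence of T frames, J joints, C coords per joint."""
    T, J, _ = poses.shape
    masked = poses.clone()
    mask = torch.zeros(T, J, dtype=torch.bool)

    # Joint-level: randomly hide individual joints (mimics missed keypoints).
    mask |= torch.rand(T, J) < joint_ratio

    # Frame-level: hide all joints of randomly chosen frames (missed frames).
    mask |= (torch.rand(T) < frame_ratio).unsqueeze(1)

    # Clip-level: hide one contiguous run of frames (tracking dropout).
    start = torch.randint(0, max(T - clip_len, 1), (1,)).item()
    mask[start:start + clip_len] = True

    masked[mask] = mask_value
    return masked, mask

# Usage with a random stand-in sequence: 64 frames, 21 hand joints, 2D coords.
poses = torch.randn(64, 21, 2)
masked_poses, mask = multi_level_mask(poses)
```

A masked-modeling objective would then train the encoder to reconstruct the hidden entries of `poses` from `masked_poses`, with the loss computed only at positions where `mask` is true.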