ABSTRACT
This study investigates the complex relationship between upper limb movement direction and macroscopic neural signals in the brain, which is critical for understanding brain-computer interfaces (BCI). Conventional BCI research has primarily focused on a local area, such as the contralateral primary motor cortex (M1), relying on population-based decoding with microelectrode arrays. In contrast, macroscopic approaches such as electroencephalography (EEG) and magnetoencephalography (MEG) use numerous electrodes to cover broader brain regions. This study probes the potential differences between the mechanisms underlying microscopic and macroscopic methods; it is important to determine which neural activities effectively predict movements. To investigate this, we analyzed MEG data recorded from nine right-handed participants while they performed arm-reaching tasks. We employed dynamic statistical parametric mapping (dSPM) to estimate source activity and built a decoding model composed of a long short-term memory (LSTM) network and a multilayer perceptron to predict movement trajectories. This model achieved a high correlation coefficient of 0.79 between actual and predicted trajectories. Subsequently, we identified brain regions sensitive to predicting movement direction using the integrated gradients (IG) method, which assesses the predictive contribution of each source activity. The resulting salience map showed a distribution without significant differences across motor-related regions, including M1. Predictions based solely on M1 activity yielded a correlation coefficient of 0.42, nearly half as effective as predictions incorporating all source activities. This suggests that upper limb movements are influenced by various factors beyond simple muscle activity, such as movement coordination, planning, body and target position recognition, and control. All of these activities are needed in a decoding model that uses macroscopic signals.
Our findings also revealed that contralateral and ipsilateral hemispheres contribute equally to movement prediction, implying that BCIs could potentially benefit patients with brain damage in the contralateral hemisphere by utilizing brain signals from the ipsilateral hemisphere. In conclusion, this study demonstrates that macroscopic activity from large brain regions significantly contributes to predicting upper limb movement. Non-invasive BCI systems would require a comprehensive collection of neural signals from multiple brain regions.
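The integrated gradients attribution used in the study above can be sketched in a few lines. This toy uses a quadratic function as a hypothetical stand-in for the study's LSTM decoder, and checks IG's completeness property (attributions sum to the prediction difference against the baseline):

```python
import numpy as np

# Integrated gradients: attribute a scalar prediction f(x) to input features by
# averaging the gradient along a straight path from a baseline to x.
def integrated_gradients(grad_f, x, baseline, steps=200):
    alphas = (np.arange(steps) + 0.5) / steps            # midpoint rule
    path = baseline + alphas[:, None] * (x - baseline)   # points on the path
    grads = np.stack([grad_f(p) for p in path])          # gradient at each point
    return (x - baseline) * grads.mean(axis=0)

# Toy differentiable "decoder output": f(v) = (w.v)^2, with gradient 2*(w.v)*w.
w = np.array([0.5, -1.0, 2.0])
f = lambda v: float(np.dot(w, v) ** 2)
grad_f = lambda v: 2.0 * np.dot(w, v) * w

x, baseline = np.array([1.0, 2.0, 0.5]), np.zeros(3)
attr = integrated_gradients(grad_f, x, baseline)

# Completeness axiom: attributions should sum to f(x) - f(baseline).
gap = abs(float(attr.sum()) - (f(x) - f(baseline)))
```

In the study's setting, the per-source attributions obtained this way would be aggregated into the salience map over brain regions.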
Subject(s)
Brain-Computer Interfaces, Magnetoencephalography, Motor Cortex, Movement, Humans, Motor Cortex/physiology, Male, Magnetoencephalography/methods, Adult, Female, Movement/physiology, Young Adult, Brain Mapping/methods
ABSTRACT
Predicting where users will look inside head-mounted displays (HMDs) and fetching only the relevant content is an effective approach for streaming bulky 360 videos over bandwidth-constrained networks. Despite previous efforts, anticipating users' fast and sudden head movements is still difficult because there is a lack of clear understanding of the unique visual attention in 360 videos that dictates the users' head movement in HMDs. This in turn reduces the effectiveness of streaming systems and degrades the users' Quality of Experience. To address this issue, we propose to extract salient cues unique in the 360 video content to capture the attentive behavior of HMD users. Empowered by the newly discovered saliency features, we devise a head-movement prediction algorithm to accurately predict users' head orientations in the near future. A 360 video streaming framework that takes full advantage of the head movement predictor is proposed to enhance the quality of delivered 360 videos. Practical trace-driven results show that the proposed saliency-based 360 video streaming system reduces the stall duration by 65% and the stall count by 46%, while saving 31% more bandwidth than state-of-the-art approaches.
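The prefetch logic implied by the streaming framework above can be sketched as follows; the 8-column equirectangular tile layout and the 90-degree field of view are hypothetical configuration choices, not the paper's parameters:

```python
# Select which equirectangular tile columns overlap a predicted field of view,
# so only those tiles are fetched at high quality.
def tiles_to_fetch(pred_yaw_deg, fov_deg=90.0, n_tiles=8):
    width = 360.0 / n_tiles
    picked = []
    for t in range(n_tiles):
        center = t * width + width / 2.0
        diff = (center - pred_yaw_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        if abs(diff) <= fov_deg / 2.0 + width / 2.0:            # tile overlaps FoV
            picked.append(t)
    return picked

tiles = tiles_to_fetch(10.0)   # predicted yaw near the wrap-around point
```

Note the wrap-around handling: a predicted yaw of 10 degrees pulls in the last tile column as well as the first two.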
ABSTRACT
BACKGROUND: The T-loop, a typical orthodontic archwire bending method, has been used clinically to close gaps between teeth. However, the T-loop parameters for different patients are designed on the basis of the dentist's clinical experience. Variation in dentists' clinical experience is the main reason for inadequate orthodontic treatment and even a high incidence of postoperative complications. METHODS: First, a tooth movement prediction model is established based on an analysis of the T-loop structure and the dynamic resistance of the waxy model. After reverse reconstruction of a complete maxillary 3D model from the patient's CBCT images, an oral biomechanical FEM analysis is performed. A maxillary waxy dental model is manufactured for an in vitro water-bath measurement experiment mimicking the oral bio-environment. Calculated, simulation, and experimental data are thus obtained, along with a total-deformation contour plot from the simulation analysis. RESULTS: The growth trend of the 11 sets of simulation data matches that of the experimental data. Both show that tooth displacement is positively correlated with the cross-sectional size of the archwire and with the clearance distance, and that a higher Young's modulus of the archwire material produces greater tooth displacement. The effect of archwire parameters on tooth displacement derived from the simulation and experimental data is consistent with the prediction model. The experimental and calculated data were also compared and analyzed; the two are largely consistent in growth trends and fluctuations, with deviation rates ranging from 2.17% to 10.00%.
CONCLUSIONS: This study shows that the accuracy and reliability of the tooth movement prediction model can be verified through comparative analysis and deviation calculation across the calculated, simulation, and experimental data, which can assist dentists in performing orthodontic treatment on patients safely and efficiently. FEM analysis can also make the results of orthodontic treatment predictable.
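The deviation rate reported above (2.17% to 10.00%) is a simple relative error between calculated and experimental values; a minimal sketch with hypothetical displacement values:

```python
# Relative deviation (%) between calculated and experimental tooth displacement.
def deviation_rate(calculated, experimental):
    return abs(calculated - experimental) / experimental * 100.0

rate = deviation_rate(0.47, 0.50)   # hypothetical displacements in mm
```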
Subject(s)
Orthodontic Wires, Tooth Movement Techniques, Cross-Sectional Studies, Humans, Reproducibility of Results, Tooth Movement Techniques/methods, Water
ABSTRACT
The novel coronavirus (COVID-19) suddenly and abruptly changed the world as we knew it at the start of the third decade of the 21st century. In particular, the COVID-19 pandemic has negatively affected financial econometrics and stock markets across the globe. Artificial Intelligence (AI) and Machine Learning (ML)-based prediction models, especially Deep Neural Network (DNN) architectures, have the potential to act as a key enabling factor in reducing the adverse effects of the COVID-19 pandemic, and of possible future pandemics, on financial markets. In this regard, first, a unique COVID-19 related PRIce MOvement prediction (COVID19 PRIMO) dataset is introduced in this paper, which incorporates the effects of COVID-19-related social media trends on stock market price movements. Afterwards, a novel hybrid and parallel DNN-based framework is proposed that integrates different and diversified learning architectures. Referred to as the COVID-19 adopted Hybrid and Parallel deep fusion framework for Stock price Movement Prediction (COVID19-HPSMP), it uses innovative fusion strategies to combine scattered social media news related to COVID-19 with historical market data. The proposed COVID19-HPSMP consists of two parallel paths (hence hybrid): one based on a Convolutional Neural Network (CNN) with Local/Global Attention modules, and one integrating a CNN and a Bi-directional Long Short-term Memory (BLSTM) network. The two parallel paths are followed by a multilayer fusion layer acting as a fusion center that combines localized features. Performance evaluations based on the introduced COVID19 PRIMO dataset illustrate the superior performance of the proposed framework.
ABSTRACT
Building accurate movement decoding models from brain signals is crucial for many biomedical applications. Predicting specific movement features, such as speed and force, before movement execution may provide additional useful information at the expense of increasing the complexity of the decoding problem. Recent attempts to predict movement speed and force from the electroencephalogram (EEG) achieved classification accuracies at or slightly above chance levels, highlighting the need for more accurate prediction strategies. Thus, the aims of this study were to accurately predict hand movement speed and force from single-trial EEG signals and to decode neurophysiological information of motor preparation from the prediction strategies. To these ends, a decoding model based on convolutional neural networks (ConvNets) was implemented and compared against other state-of-the-art prediction strategies, such as support vector machines and decision trees. ConvNets outperformed the other prediction strategies, achieving an overall accuracy of 84% in the classification of two different levels of speed and force (four-class classification) from pre-movement single-trial EEG (100 ms and up to 1,600 ms prior to movement execution). Furthermore, an analysis of the ConvNet architectures suggests that the network performs a complex spatiotemporal integration of EEG data to optimize classification accuracy. These results show that movement speed and force can be accurately predicted from single-trial EEG, and that the prediction strategies may provide useful neurophysiological information about motor preparation.
Subject(s)
Brain-Computer Interfaces, Algorithms, Electroencephalography, Hand, Humans, Imagination, Movement, Neural Networks, Computer
ABSTRACT
Research and development of active and passive exoskeletons for preventing work-related injuries has steadily increased in the last decade. Recently, new types of quasi-passive designs have been emerging. These exoskeletons use passive viscoelastic elements, such as springs and dampers, to provide support to the user, while using small actuators only to change the level of support or to disengage the passive elements. Control of such devices is still largely unexplored, especially algorithms that predict the movement of the user in order to take maximum advantage of the passive viscoelastic elements. To address this issue, we developed a new control scheme consisting of Gaussian mixture models (GMM) in combination with a state machine controller to identify and classify the movement of the user as early as possible and thus provide a timely control output for the quasi-passive spinal exoskeleton. In a leave-one-out cross-validation procedure, the overall accuracy for providing support to the user was 86.72 ± 0.86% (mean ± s.d.), with a sensitivity and specificity of 97.46 ± 2.09% and 83.15 ± 0.85%, respectively. The results of this study indicate that our approach is a promising tool for the control of quasi-passive spinal exoskeletons.
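The GMM-plus-state-machine scheme above can be illustrated with a deliberately reduced sketch: two single-Gaussian class models over a hypothetical trunk angle stand in for the fitted mixture, feeding a two-state support controller (all class parameters and thresholds are invented for illustration):

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical movement classes modeled as 1-D Gaussians over trunk angle (deg).
classes = {"upright": (5.0, 4.0), "bending": (45.0, 10.0)}

def classify(angle):
    # Maximum-likelihood class under the per-class Gaussian models.
    return max(classes, key=lambda c: gauss_pdf(angle, *classes[c]))

# Minimal state machine: engage passive support only while bending is detected.
def run(angles):
    state, trace = "support_off", []
    for a in angles:
        label = classify(a)
        if state == "support_off" and label == "bending":
            state = "support_on"
        elif state == "support_on" and label == "upright":
            state = "support_off"
        trace.append(state)
    return trace

trace = run([3.0, 8.0, 30.0, 50.0, 40.0, 6.0, 2.0])
```

A real controller would use multivariate GMMs over sensor features and more states (e.g. lifting vs. lowering), but the classify-then-switch structure is the same.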
Subject(s)
Exoskeleton Device, Movement, Normal Distribution, Algorithms, Biomechanical Phenomena, Humans, Models, Theoretical, Spine
ABSTRACT
Digital human models (DHM) allow for a proactive ergonomic assessment of products by applying different models describing the user-product interaction. In engineering design, DHM tools are currently not established as computer-aided ergonomics tools, since (among other reasons) the interaction models are either cumbersome to use, unstandardised, time-demanding or not trustworthy. To understand the challenges in interaction modelling, we conducted a systematic literature review with the aim of identification, classification and examination of existing interaction models. A schematic user-product interaction model for DHM is proposed, abstracting existing models and unifying the corresponding terminology. Additionally, nine general approaches to proactive interaction modelling were identified by classifying the reviewed interaction models. The approaches are discussed regarding their scope, limitations, strengths and weaknesses. Ultimately, the literature review revealed that prevalent interaction models cannot be considered unconditionally suitable for engineering design since none of them offer a satisfactory combination of genuine proactivity and universal validity. Practitioner summary: This contribution presents a systematic literature review conducted to identify, classify and examine existing proactive interaction modelling approaches for digital human models in engineering design. Ultimately, the literature review revealed that prevalent interaction models cannot be considered unconditionally suitable for engineering design since none of them offer a satisfactory combination of genuine proactivity and universal validity. Abbreviations: DHM: digital human model; CAE: computer-aided engineering; RQ: research question.
Subject(s)
Computer Simulation, Equipment Design, Ergonomics/methods, Man-Machine Systems, Models, Anatomic, Movement, Humans, Terminology as Topic
ABSTRACT
Observing animal movements enables us to understand changes in animal behavior, such as migration, interaction, foraging, and nesting. In response to spatiotemporal changes in weather and season, animals instinctively change their position for foraging, nesting, or breeding, and their movement patterns are known to be closely related to their traits. Analyzing and predicting animals' movement patterns under spatiotemporal change therefore offers an opportunity to understand their unique traits and acquire ecological insights. Hence, in this paper, we propose an animal movement prediction scheme using a predictive recurrent neural network architecture. To do so, we first collect and investigate geo-records of animals and refine the patterns using random forest interpolation. Then, we generate animal movement patterns using kernel density estimation and build a predictive recurrent neural network model to account for spatiotemporal changes. In the experiment, we perform various predictions using 14K long-billed curlew locations spanning five years of movements across the breeding, non-breeding, pre-breeding, and post-breeding seasons. The experimental results confirm that our predictive model based on recurrent neural networks can be effectively used to predict animal movement.
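The kernel density estimation step used above to generate movement patterns can be sketched with a 1-D Gaussian KDE (the sample coordinates and bandwidth are hypothetical; the study works on 2-D geo-coordinates):

```python
import numpy as np

# 1-D Gaussian kernel density estimate over observed animal positions.
def kde(query, samples, bandwidth):
    z = (query[:, None] - samples[None, :]) / bandwidth
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2.0 * np.pi)
    return k.mean(axis=1) / bandwidth

samples = np.array([0.0, 0.1, -0.1, 2.0, 2.2])   # hypothetical sighting coordinates
query = np.array([0.0, 2.1, 10.0])
density = kde(query, samples, bandwidth=0.3)
```

Density is high near clusters of sightings and essentially zero far from any sample, which is what makes the estimate useful as a spatial movement pattern for the downstream recurrent model.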
ABSTRACT
BACKGROUND: Partial hand amputation forms more than 90% of all upper limb amputations and has a notable effect on the amputee's life. To improve the quality of life of partial hand amputees, different prosthesis options, including externally-powered prostheses, have been investigated. The focus of this work is to explore force myography (FMG) as a technique for regressing grasping movement accompanied by wrist position variations. This study can lay the groundwork for a future investigation of FMG as a technique for continuously controlling externally-powered prostheses. METHODS: Ten able-bodied participants performed three hand movements while their wrist was fixed in one of six predefined positions. The angle between the thumb and index finger ([Formula: see text]) and between the thumb and middle finger ([Formula: see text]) were calculated as measures of grasping movement. Two approaches were examined for estimating each angle: (i) one regression model, trained on data from all wrist positions and hand movements; (ii) a classifier that identified the wrist position, followed by a separate regression model for each wrist position. The possibility of training the system using a limited number of wrist positions and testing it on all positions was also investigated. RESULTS: The first approach had a coefficient of determination ([Formula: see text]) of 0.871 for [Formula: see text] and [Formula: see text]. Using the second approach, [Formula: see text] and [Formula: see text] were obtained. The first approach is over two times faster than the second while having similar performance; thus it was selected to investigate the effect of wrist position variations. Training with six or five wrist positions yielded differences in performance that were not statistically significant, whereas a statistically significant decrease in performance resulted when fewer than five wrist positions were used for training.
CONCLUSIONS: The results indicate the potential of FMG to regress grasping movement, accompanied by wrist position variations, with a regression model for each angle. Also, it is necessary to include more than one wrist position in the training phase.
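The two approaches compared above, one global regression model versus a separate model per wrist position, can be mimicked on synthetic linear data (the per-position slopes, noise level, and the simple slope-through-origin fit are hypothetical stand-ins for the study's FMG regressors):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical FMG training data: for wrist position p, angle = slope[p]*signal + noise.
positions, slopes = [0, 1, 2], {0: 1.0, 1: 1.3, 2: 0.7}
X, y, pos = [], [], []
for p in positions:
    s = rng.uniform(0, 1, 50)
    X.append(s)
    y.append(slopes[p] * s + 0.01 * rng.standard_normal(50))
    pos += [p] * 50
X, y, pos = np.concatenate(X), np.concatenate(y), np.array(pos)

def fit_slope(x, t):  # least-squares slope through the origin
    return float(x @ t / (x @ x))

# (i) one global model over all wrist positions
global_slope = fit_slope(X, y)
# (ii) a separate model per wrist position
per_pos = {p: fit_slope(X[pos == p], y[pos == p]) for p in positions}

def mse(pred, t):
    return float(np.mean((pred - t) ** 2))

err_global = mse(global_slope * X, y)
err_perpos = mse(np.array([per_pos[p] for p in pos]) * X, y)
```

When the position-dependent slopes genuinely differ, the per-position models fit better, which is the trade-off the study weighs against the global model's speed.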
Subject(s)
Hand Strength, Hand/physiology, Myography/methods, Wrist/physiology, Adult, Algorithms, Amputation, Surgical, Amputees, Artificial Limbs, Biomechanical Phenomena, Electromyography, Equipment Design, Female, Fingers, Humans, Male, Movement, Regression Analysis, Signal Processing, Computer-Assisted, Thumb, Wrist Joint, Young Adult
ABSTRACT
A current trend in the development of assistive devices for rehabilitation, for example exoskeletons or active orthoses, is to utilize physiological data to enhance their functionality and usability, for example by predicting the patient's upcoming movements using electroencephalography (EEG) or electromyography (EMG). However, these modalities have different temporal properties and classification accuracies, which results in specific advantages and disadvantages. To use physiological data analysis in rehabilitation devices, the processing should be performed in real-time, guarantee support close to natural movement onset, provide high mobility, and be performed by miniaturized systems that can be embedded into the rehabilitation device. We present a novel Field Programmable Gate Array (FPGA)-based system for real-time movement prediction using physiological data. Its parallel processing capabilities allow the combination of movement predictions based on EEG and EMG, and additionally a P300 detection, which is likely evoked by instructions of the therapist. The system is evaluated in an offline and an online study with twelve healthy subjects in total. We show that it provides high computational performance and significantly lower power consumption than a standard PC. Furthermore, despite the use of fixed-point computations, the proposed system achieves a classification accuracy similar to systems using double-precision floating-point arithmetic.
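The fixed-point-versus-floating-point trade-off above can be made concrete with a small sketch; the Q1.14 format and the weight values are hypothetical illustrations, not the FPGA system's actual parameters:

```python
import numpy as np

# Quantize to a signed Q1.14 fixed-point format (14 fractional bits, assumed
# here) and compare a dot product against double precision.
FRAC_BITS = 14

def to_fixed(x):  # round to the nearest representable fixed-point value
    return np.round(np.asarray(x) * (1 << FRAC_BITS)).astype(np.int64)

w = np.array([0.1250, -0.3300, 0.7500])   # hypothetical classifier weights
x = np.array([0.5000, 0.2500, -0.1250])   # hypothetical feature vector

exact = float(w @ x)
# Fixed-point multiply-accumulate in integers, rescaled once at the end.
acc = int(np.sum(to_fixed(w) * to_fixed(x)))
approx = acc / float(1 << (2 * FRAC_BITS))
error = abs(exact - approx)
```

The integer accumulator keeps full product precision, so the only loss comes from the initial quantization, which is why fixed-point classification accuracy can closely track the floating-point result.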
Subject(s)
Movement, Brain-Computer Interfaces, Electroencephalography, Electromyography, Humans, Orthotic Devices, Self-Help Devices
ABSTRACT
Exoskeleton-based support for patients requires learning individual machine-learning models to recognize patients' movement intentions based on the electroencephalogram (EEG). A major issue in EEG-based movement intention recognition is the long calibration time required to train a model. In this paper, we propose a transfer learning approach that eliminates the need for a calibration session. This approach is validated on healthy subjects in this study. We will use the proposed approach in our future rehabilitation application, in which the movement intention of a patient's affected arm is inferred from EEG data recorded during bilateral arm movements, enabled by an exoskeleton that mirrors arm movements from the unaffected to the affected arm. For the initial evaluation, we compared two trained models for predicting unilateral and bilateral movement intentions without applying a classifier transfer. For the main evaluation, we predicted unilateral movement intentions without a calibration session by transferring the classifier trained on data from bilateral movement intentions. Our results showed that the classification performance for the transfer case was comparable to that in the non-transfer case, even with only 4 or 8 EEG channels. Our results contribute to robotic rehabilitation by eliminating the need for a calibration session, since the EEG data for training are recorded during the rehabilitation session, and only a small number of EEG channels are required for model training.
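The classifier-transfer idea above, training on one movement condition and predicting on another, can be sketched with a nearest-class-mean classifier on toy feature clouds (the Gaussian feature model and condition shift are hypothetical, not the study's EEG pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class trials (rest vs. movement intention) in a 4-D feature space;
# the target condition is shifted slightly relative to the source condition.
def make_trials(n, shift):
    rest = rng.standard_normal((n, 4)) + shift
    move = rng.standard_normal((n, 4)) + shift + 3.0
    return np.vstack([rest, move]), np.array([0] * n + [1] * n)

X_bi, y_bi = make_trials(100, shift=0.0)    # source: bilateral movements
X_uni, y_uni = make_trials(100, shift=0.2)  # target: unilateral movements

# Train on the source condition only: one mean per class.
means = np.stack([X_bi[y_bi == c].mean(axis=0) for c in (0, 1)])

def predict(X):
    d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    return d.argmin(axis=1)

# Apply the source-trained classifier to the target condition without calibration.
acc_transfer = float((predict(X_uni) == y_uni).mean())
```

As long as the class structure is similar across conditions, the transferred classifier performs close to a natively trained one, which mirrors the paper's finding.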
Subject(s)
Electroencephalography, Exoskeleton Device, Intention, Movement, Humans, Electroencephalography/methods, Male, Calibration, Movement/physiology, Adult, Machine Learning, Female, Young Adult
ABSTRACT
Stock price movement prediction is the basis for decision-making to maintain the stability and security of stock markets, and it is important to generate predictions in an interpretable manner. The Belief Rule Base (BRB) has a degree of interpretability based on IF-THEN rule semantics. However, the interpretability of the BRB over the whole stock prediction modeling process may be weakened or lost. Therefore, this paper proposes an interpretable model for stock price movement prediction based on the hierarchical Belief Rule Base (HBRB-I). The interpretability of the model is considered, and several criteria are constructed based on the BRB expert system. First, the hierarchical structure of the BRB is constructed to ensure the interpretability of the initial modeling. Second, the interpretability of the inference process is ensured by the Evidential Reasoning (ER) method as a transparent inference engine. Third, a new Projection Covariance Matrix Adaptive Evolution Strategy (P-CMA-ES) algorithm with interpretability criteria is designed to ensure the interpretability of the optimization process. The final model achieved a mean squared error of 1.69E-04, with accuracy similar to the initial BRB and enhanced interpretability. This paper focuses on short-term stock forecasting; more data will be collected in the future to update the rules and enhance the forecasting capability of the rule base.
ABSTRACT
Navigation through complex environments requires motor planning, motor preparation, and the coordination between multiple sensory-motor modalities. For example, the stepping motion when we walk is coordinated with motion of the torso, arms, head, and eyes. In rodents, movement of the animal through the environment is coordinated with whisking. Even head-fixed mice navigating a plus maze position their whiskers asymmetrically with the bilateral asymmetry signifying the upcoming turn direction. Here we report that, in addition to moving their whiskers, on every trial mice also move their eyes conjugately in the direction of the upcoming turn. Not only do mice move their eyes, but they coordinate saccadic eye movement with the asymmetric positioning of the whiskers. Our analysis shows that asymmetric positioning of whiskers predicted the turn direction that mice will make at an earlier stage than eye movement. Consistent with these results, our observations also revealed that whisker asymmetry increases before saccadic eye movement. Importantly, this work shows that when rodents plan for active behavior, their motor plans can involve both eye and whisker movement. We conclude that, when mice are engaged in and moving through complex real-world environments, their behavioral state can be read out in the movement of both their whiskers and eyes.
Subject(s)
Eye Movements, Vibrissae, Animals, Mice, Movement, Touch
ABSTRACT
In recent years, human-robot interfaces (HRIs) based on surface electromyography (sEMG) have been widely used in lower-limb exoskeleton robots for movement prediction during rehabilitation training of patients with hemiplegia. However, accurate and efficient lower-limb movement prediction for patients with hemiplegia remains a challenge due to complex movement information and individual differences. Traditional movement prediction methods usually use hand-crafted features, which are computationally cheap but can only extract shallow heuristic information. Deep learning-based methods have stronger feature expression ability but easily become trapped in local features, resulting in poor generalization performance. In this article, a human-exoskeleton interface fusing convolutional neural networks with hand-crafted features is proposed. Building on our previous study, a lower-limb movement prediction framework (HCSNet) for patients with hemiplegia is constructed by fusing time and frequency domain hand-crafted features with channel synergy learning-based features. An sEMG data acquisition experiment is designed to compare and analyze the effectiveness of HCSNet. Experimental results show that the method achieves 95.93% and 90.37% prediction accuracy in the within-subject and cross-subject cases, respectively. Compared with related lower-limb movement prediction methods, the proposed method has better prediction performance.
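Time-domain hand-crafted sEMG features of the kind mentioned above typically include quantities like the following (a minimal sketch; the study's exact feature set is not specified here):

```python
import numpy as np

# Classic time-domain sEMG features on a single channel: mean absolute value
# (MAV), root mean square (RMS), and zero-crossing count (ZC).
def mav(x):
    return float(np.mean(np.abs(x)))

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

def zero_crossings(x):
    return int(np.sum(np.signbit(x[:-1]) != np.signbit(x[1:])))

x = np.array([0.5, -0.5, 1.0, -1.0, 0.5, -0.5])   # hypothetical sEMG window
features = [mav(x), rms(x), zero_crossings(x)]
```

In a fusion framework like the one described, a vector of such per-channel features would be concatenated with the learned convolutional features before classification.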
ABSTRACT
A challenging task for a biological neural signal-based human-exoskeleton interface is to achieve accurate lower limb movement prediction for patients with hemiplegia in rehabilitation training scenarios. Human-exoskeleton interfaces based on a single-modal biological signal such as the electroencephalogram (EEG) are currently not mature in predicting movements due to their unreliability. The multimodal human-exoskeleton interface, which normally combines the EEG signal with the surface electromyography (sEMG) signal, is a novel solution to this problem. However, its use for lower limb movement prediction is still limited: the connection between sEMG and EEG signals and the deep fusion of their features are ignored. In this article, a Dense con-attention mechanism-based Multimodal Enhance Fusion Network (DMEFNet) is proposed for predicting lower limb movement of patients with hemiplegia. DMEFNet introduces the con-attention structure to extract the common attention between sEMG and EEG signal features. To verify the effectiveness of DMEFNet, an sEMG and EEG data acquisition experiment and an incomplete asynchronous data collection paradigm are designed. The experimental results show that DMEFNet has good movement prediction performance in both within-subject and cross-subject situations, reaching accuracies of 82.96% and 88.44%, respectively.
ABSTRACT
The human-robot interface (HRI) based on biological signals can realize natural interaction between human and robot, and has recently been widely used in exoskeleton robots to help predict the wearer's movement. Surface electromyography (sEMG)-based HRIs have mature applications in exoskeletons. However, the sEMG signals of paraplegic patients' lower limbs are weak, which means that most HRIs based on lower limb sEMG signals cannot be applied to the exoskeleton. Few studies have explored the possibility of using upper limb sEMG signals to predict lower limb movement. In addition, most HRIs do not consider the contribution and synergy of sEMG signal channels. This paper proposes a human-exoskeleton interface based on upper limb sEMG signals to predict lower limb movements of paraplegic patients. The interface constructs a channel synergy-based network (MCSNet) to extract the contribution and synergy of different feature channels. An sEMG data acquisition experiment is designed to verify the effectiveness of MCSNet. The experimental results show that our method has good movement prediction performance in both within-subject and cross-subject situations, reaching accuracies of 94.51% and 80.75%, respectively. Furthermore, feature visualization and model ablation analysis show that the features extracted by MCSNet are physiologically interpretable.
ABSTRACT
Brain-wide activities revealed by neuroimaging and recording techniques have been used to predict motor and cognitive functions in both human and animal models. However, although studies have shown the existence of micrometer-scale spatial organization of neurons in the motor cortex relevant to motor control, two-photon microscopy (TPM) calcium imaging at cellular resolution has not been fully exploited for the same purpose. Here, we ask whether calcium imaging data recorded by TPM in the rodent brain provide enough information to predict features of upcoming movement. We collected calcium imaging signals from the rostral forelimb area in layer 2/3 of the motor cortex while mice performed a two-dimensional lever-reaching task. Images of average calcium activity collected during the motion preparation period and the inter-trial interval (ITI) were used to predict the forelimb reach results. The evaluation was based on a deep learning model that had been applied for object recognition. We found that prediction accuracies for both maximum reaching location and trial outcome based on the motion preparation period, but not the ITI, were higher than chance. Our study demonstrates that imaging data encompassing information on the spatial organization of functional neuronal clusters in the motor cortex are useful for predicting motor acts even in the absence of detailed dynamics of neural activities.
ABSTRACT
While numerous studies show that brain signals contain information about an individual's current state that is potentially valuable for smoothing man-machine interfaces, this has not yet led to the use of brain-computer interfaces (BCI) in daily life. One of the main challenges is the common requirement of personal data that is correctly labeled concerning the state of interest in order to train a model, where this trained model is not guaranteed to generalize across time and context. Another challenge is the requirement to wear electrodes on the head. We here propose a BCI that can tackle these issues and may be a promising case for BCI research and application in everyday life. The BCI uses EEG signals to predict head rotation in order to improve images presented in a virtual reality (VR) headset. When presenting a 360° video to a headset, field-of-view approaches only stream the content that is in the current field of view and leave out the rest. When the user rotates the head, other content parts need to be made available soon enough to go unnoticed by the user, which is problematic given the available bandwidth. By predicting head rotation, the content parts adjacent to the currently viewed part could be retrieved in time for display when the rotation actually takes place. We here studied whether head rotations can be predicted on the basis of EEG sensor data and, if so, whether such predictions could be applied to improve the display of streaming images. Eleven participants generated left- and rightward head rotations while head movements were recorded using the headset's motion-sensing system and EEG. We trained neural network models to distinguish EEG epochs preceding rightward, leftward, and no rotation.
Applying these models to streaming EEG data that was withheld from training showed that 400 ms before rotation onset, the probability of "no rotation" started to decrease and the probabilities of an upcoming right- or leftward rotation started to diverge in the correct direction. In the proposed BCI scenario, users already wear a device on their head, allowing for integrated EEG sensors. Moreover, it is possible to acquire accurately labeled training data on the fly and to continuously monitor and improve the model's performance. The BCI can be harnessed if it improves imagery and thereby enhances the immersive experience.
ABSTRACT
Human motion during walking provides biometric information which can be utilized to quantify the similarity between two persons or identify a person. The purpose of this study was to develop a method for identifying a person using their walking motion when another walking motion under different conditions is given. This type of situation occurs frequently in forensic gait science. Twenty-eight subjects were asked to walk in a gait laboratory, and the positions of their joints were tracked using a three-dimensional motion capture system. The subjects repeated their walking motion both without a weight and with a tote bag weighing a total of 5% of their body weight in their right hand. The positions of 17 anatomical landmarks during two cycles of a gait trial were combined to form a gait vector. We developed two methods to determine the functional relationship between the normal gait vectors and the tote-bag gait vectors from the collected gait data, one using linear transformations and the other using partial least squares regression. These methods were validated by predicting the tote-bag gait vector given a normal gait vector of a person, accomplished by calculating the Euclidean distance between the predicted vector and the measured tote-bag gait vector of the same person. The mean values of the prediction scores for the two methods were 96.4 and 95.0, respectively. This study demonstrated the potential for identifying a person based on their walking motion, even under different walking conditions.
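The linear-transformation variant described above can be sketched end to end: fit a least-squares map from normal gait vectors to tote-bag gait vectors, then identify subjects by Euclidean distance (the dimensions, subject count, distortion, and noise are all hypothetical toy values, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy gait vectors: 12 subjects, 8 dimensions (a stand-in for stacked landmark
# trajectories); the tote-bag condition is a shared linear distortion plus noise.
normal = rng.standard_normal((12, 8))
A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))
tote = normal @ A + 0.01 * rng.standard_normal((12, 8))

# Fit the linear map from normal gait to tote-bag gait by least squares.
A_hat, *_ = np.linalg.lstsq(normal, tote, rcond=None)

# Identify: predict each subject's tote-bag vector, match by Euclidean distance.
pred = normal @ A_hat
dists = np.linalg.norm(pred[:, None, :] - tote[None, :, :], axis=2)
accuracy = float((dists.argmin(axis=1) == np.arange(12)).mean())
```

Because the condition effect is shared across subjects, the fitted map generalizes and the predicted vector lands closest to the same subject's measured tote-bag vector.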
Subject(s)
Biometric Identification/methods, Gait/physiology, Walking/physiology, Biomechanical Phenomena/physiology, Humans, Joints/physiology, Least-Squares Analysis, Male, Principal Component Analysis, Young Adult
ABSTRACT
This work explores to what extent the notion of abstraction in dance is valid and what it entails. Unlike abstraction in the fine arts that aims for a certain independence from representation of the external world through the use of non-figurative elements, dance is realized by a highly familiar object - the human body. In fact, we are all experts in recognizing the human body. For instance, we can mentally reconstruct its motion from minimal information (e.g., via a "dot display"), predict body trajectory during movement and identify emotional expressions of the body. Nonetheless, despite the presence of a human dancer on stage and our extreme familiarity with the human body, the process of abstraction is applicable also to dance. Abstract dance removes itself from familiar daily movements, violates the observer's predictions about future movements and detaches itself from narratives. In so doing, abstract dance exposes the observer to perceptions of unfamiliar situations, thus paving the way to new interpretations of human motion and hence to perceiving ourselves differently in both the physical and emotional domains.