Results 1 - 5 of 5
1.
Comput Methods Programs Biomed; 221: 106904, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35636356

ABSTRACT

BACKGROUND AND OBJECTIVE: Facial palsy patients and patients with facial transplantation exhibit abnormal facial motion due to altered facial muscle function and nerve damage. Computer-aided systems and physics-based models have been developed to provide objective and quantitative information. However, the predictive capacity of these solutions is still too limited to explore facial motion patterns with emerging properties. The present study aims to couple reinforcement learning and finite element modeling for facial motion learning and prediction. METHODS: A novel modeling workflow for learning facial motion was developed. A physics-based model of the face within the Artisynth modeling platform was used. An information exchange protocol was proposed to link reinforcement learning with rigid multi-body dynamics outcomes. Two reinforcement learning algorithms (deep deterministic policy gradient (DDPG) and twin-delayed DDPG (TD3)) were implemented to drive simulations of symmetry-oriented and smile movements. Numerical outcomes were compared to experimental observations (Bosphorus database) for evaluation and validation purposes. RESULTS: After more than 100 episodes of exploring the environment, the agent started to learn from previous trials and found the optimal policy after more than 300 episodes of training. For symmetry-oriented motion, the muscle excitations predicted by the trained agent increased the reward from R = -2.06 to R = -0.23, which corresponds to a ∼89% improvement in the symmetry value of the face. For smile-oriented motion, two points at the corners of the mouth moved up 0.35 cm, which is within the range of movements estimated from the Bosphorus database (0.4 ± 0.32 cm). CONCLUSIONS: The present study explored muscle excitation patterns by coupling reinforcement learning with a detailed finite element model of the face.
We developed, for the first time, a coupling scheme that integrates finite element simulation into the reinforcement learning process for facial motion learning. As a perspective, the present workflow will be applied to facial palsy and facial transplantation patients to guide and optimize their functional rehabilitation programs.
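The coupling between reinforcement learning and the finite element simulation can be sketched as a simple information-exchange loop. The two-muscle simulator below is a hypothetical stand-in for the Artisynth model, and a toy hill-climbing agent stands in for DDPG/TD3; only the structure of the loop (excitations in, reward out) reflects the workflow described above.

```python
import random

def simulate_face(excitations):
    # Hypothetical stand-in for the Artisynth finite element simulation:
    # maps two muscle excitations to left/right landmark displacements.
    left = 0.8 * excitations[0] + 0.2 * excitations[1]
    right = 0.3 * excitations[0] + 0.7 * excitations[1]
    return left, right

def reward(excitations):
    # Symmetry-oriented reward: penalize left/right asymmetry, so the
    # reward is negative and rises toward 0 as symmetry improves.
    left, right = simulate_face(excitations)
    return -abs(left - right)

def train(episodes=300, seed=0):
    # Toy hill-climbing agent standing in for DDPG/TD3: perturb the
    # excitations, run the simulation, keep the change if reward improves.
    rng = random.Random(seed)
    best = [rng.random(), rng.random()]
    best_reward = reward(best)
    for _ in range(episodes):
        candidate = [min(1.0, max(0.0, e + rng.gauss(0, 0.1))) for e in best]
        r = reward(candidate)
        if r > best_reward:
            best, best_reward = candidate, r
    return best, best_reward
```

In the actual workflow, each reward evaluation would be a full finite element simulation, which is why sample-efficient off-policy algorithms such as DDPG and TD3 are attractive here.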


Subjects
Facial Paralysis, Algorithms, Computer Simulation, Finite Element Analysis, Humans, Movement
2.
Med Biol Eng Comput; 60(2): 559-581, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35023072

ABSTRACT

Skull prediction from the head is a challenging step toward a cost-effective therapeutic solution for facial disorders. This issue was initially studied in our previous work using full head-to-skull relationship learning. However, the head-skull thickness topology is locally shaped, especially in the face region. Thus, the objective of the present study was to enhance our head-to-skull prediction by using local topological features for training and prediction. Head and skull feature points were sampled from 329 head and skull models derived from computed tomography (CT) images. These feature points were classified into back and facial topologies. Head-to-skull relations were trained separately for the two topologies using partial least squares regression (PLSR) models. A hyperparameter tuning process was also conducted to select optimal parameters for each training model. A new skull could thus be generated so that its shape statistically fitted the target head. Mean errors of skulls predicted with the topology-based learning method were lower than those of the non-topology-based learning method. After tenfold cross-validation, the mean error improved by 36.96% for the skull shapes and by 14.17% for the skull models. The mean error in the facial skull region improved by 4.98%, and the mean errors in the muscle attachment regions and the back skull regions improved by 11.71% and 25.74%, respectively. Moreover, using the enhanced learning strategy, the errors (mean ± SD) for the best and worst prediction cases ranged from 1.1994 ± 1.1225 mm (median: 0.9036, coefficient of multiple determination (R2): 0.997274) to 3.6972 ± 2.4118 mm (median: 3.9089, R2: 0.999614) for the predicted skull shapes and from 2.0172 ± 2.0454 mm (median: 1.2999, R2: 0.995959) to 4.0227 ± 2.6098 mm (median: 3.9998, R2: 0.998577) for the predicted skull models.
The present study showed that more detailed information on the head-skull shape leads to better accuracy for skull prediction from the head. In particular, local topological features in the back and face regions of interest should be considered for a better learning strategy for the head-to-skull prediction problem. As a perspective, this enhanced learning strategy was used to update our clinical decision support system for facial disorders. Furthermore, a new class of learning methods, called geometric deep learning, will be studied.
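The topology-based learning strategy, one regression model per anatomical region, can be illustrated with a minimal sketch. Ordinary least squares is used here as a simpler stand-in for PLSR, and all data, shapes, and region splits below are synthetic illustrations, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (shapes are illustrative, not the paper's):
# rows are subjects, columns are flattened feature-point coordinates,
# already split into a facial region and a back region.
n_subjects = 200
head_face = rng.normal(size=(n_subjects, 15))
head_back = rng.normal(size=(n_subjects, 15))
skull_face = head_face @ rng.normal(size=(15, 12)) + 0.01 * rng.normal(size=(n_subjects, 12))
skull_back = head_back @ rng.normal(size=(15, 12)) + 0.01 * rng.normal(size=(n_subjects, 12))

def fit_region(X, Y):
    # Ordinary least squares as a simpler stand-in for PLSR: learn one
    # linear head-to-skull mapping per topological region.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

# Topology-based learning: a separate model for each region.
W_face = fit_region(head_face, skull_face)
W_back = fit_region(head_back, skull_back)

def predict_skull(face_points, back_points):
    # Predict the skull feature points region by region, then combine.
    return np.concatenate([face_points @ W_face, back_points @ W_back])
```

PLSR would replace `fit_region` when the feature dimensions are high and collinear relative to the number of subjects, which is the typical regime for dense head and skull feature points.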


Subjects
Head, Skull, Face, Head/diagnostic imaging, Statistical Models, Skull/diagnostic imaging, X-Ray Computed Tomography
3.
Bioengineering (Basel); 9(11), 2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36354529

ABSTRACT

The 3D reconstruction of an accurate face model is essential for delivering reliable feedback for clinical decision support. Medical imaging and dedicated depth sensors are accurate but not suitable for an easy-to-use, portable tool. The recent development of deep learning (DL) models opens new possibilities for 3D shape reconstruction from a single image. However, 3D face shape reconstruction for facial palsy patients remains a challenge and has not yet been investigated. The contribution of the present study is to apply these state-of-the-art methods to reconstruct 3D face shape models of facial palsy patients in natural and mimic postures from a single image. Three different methods (the 3D Basel Morphable Model and two 3D deep pre-trained models) were applied to a dataset of two healthy subjects and two facial palsy patients. The reconstructed outcomes were compared to 3D shapes reconstructed using Kinect-driven and MRI-based information. The best mean error of the reconstructed face relative to the Kinect-driven reconstructed shape was 1.5 ± 1.1 mm, and the best error was 1.9 ± 1.4 mm when compared to the MRI-based shapes. Based on these results, several ideas for increasing reconstruction accuracy can be discussed before applying the procedure to patients with facial palsy or other facial disorders. The present study opens new avenues for the fast reconstruction of 3D face shapes of facial palsy patients from a single image. As a perspective, the best DL method will be implemented in our computer-aided decision support system for facial disorders.
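Mean-error figures like those above come from a surface-distance comparison between the reconstructed and reference shapes. The sketch below assumes a simple nearest-neighbor metric between point sets; the study's exact evaluation protocol (alignment, correspondence) is not specified here.

```python
import numpy as np

def mean_surface_error(recon, reference):
    # Mean nearest-neighbor distance from each reconstructed vertex to
    # the reference scan: the kind of metric behind figures such as
    # 1.5 +/- 1.1 mm (the exact evaluation protocol is an assumption).
    diffs = recon[:, None, :] - reference[None, :, :]  # (N, M, 3)
    dists = np.linalg.norm(diffs, axis=2)              # (N, M)
    return float(dists.min(axis=1).mean())
```

This brute-force version is O(N×M) in memory; for dense meshes, a spatial index such as `scipy.spatial.cKDTree` would replace the pairwise distance matrix. Rigid alignment (e.g. ICP) would normally be applied before measuring.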

4.
Med Biol Eng Comput; 59(6): 1235-1244, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34028664

ABSTRACT

Facial expression recognition plays an essential role in human conversation and human-computer interaction. Previous research has recognized facial expressions mainly based on 2D image processing, which requires sensitive feature engineering and conventional machine learning approaches. The purpose of the present study was to recognize facial expressions by applying a new class of deep learning, called geometric deep learning, directly to 3D point cloud data. Two databases (Bosphorus and SIAT-3DFE) were used. The Bosphorus database includes sixty-five subjects with seven basic expressions (i.e., anger, disgust, fear, happiness, sadness, surprise, and neutral). The SIAT-3DFE database has 150 subjects and four basic facial expressions (neutral, happiness, sadness, and surprise). First, preprocessing procedures such as face-center cropping, data augmentation, and point cloud denoising were applied to the 3D face scans. Then, a geometric deep learning model called PointNet++ was applied. A hyperparameter tuning process was performed to find the optimal model parameters. Finally, the developed model was evaluated using the recognition rate and the confusion matrix. The facial expression recognition accuracy on the Bosphorus database was 69.01% for seven expressions and reached 85.85% when recognizing five specific expressions (anger, disgust, happiness, surprise, and neutral). The recognition rate was 78.70% on the SIAT-3DFE database. The present study suggests that 3D point clouds can be processed directly for facial expression recognition using a geometric deep learning approach. As a perspective, the developed model will be applied to facial palsy patients to guide and optimize their functional rehabilitation programs.
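The preprocessing steps named above (face-center cropping and data augmentation) can be sketched as follows. The nose-tip heuristic, radius, rotation range, and jitter level are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def crop_face_center(points, radius=90.0):
    # Face-center cropping (simplified assumption): take the point with
    # the largest z as the nose tip and keep points within `radius` mm.
    center = points[points[:, 2].argmax()]
    keep = np.linalg.norm(points - center, axis=1) <= radius
    return points[keep]

def augment(points, rng, max_deg=10.0, sigma=0.5):
    # Data augmentation: a small random rotation about the vertical axis
    # plus Gaussian jitter, as commonly paired with PointNet++ training.
    theta = np.radians(rng.uniform(-max_deg, max_deg))
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])
    return points @ rot.T + rng.normal(0.0, sigma, size=points.shape)
```

The cropped, augmented clouds would then be subsampled to a fixed point count and fed to PointNet++, whose set-abstraction layers consume raw (x, y, z) coordinates without hand-crafted features.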


Subjects
Deep Learning, Facial Recognition, Emotions, Facial Expression, Happiness, Humans
5.
Forensic Sci Int; 290: 303-309, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30103180

ABSTRACT

Human motion during walking provides biometric information that can be used to quantify the similarity between two persons or to identify a person. The purpose of this study was to develop a method for identifying a person from their walking motion when another walking motion recorded under different conditions is given, a situation that occurs frequently in forensic gait science. Twenty-eight subjects were asked to walk in a gait laboratory, and the positions of their joints were tracked using a three-dimensional motion capture system. The subjects repeated their walking motion both without a weight and while carrying a tote bag weighing 5% of their body weight in their right hand. The positions of 17 anatomical landmarks during two cycles of a gait trial were assembled into a gait vector. We developed two methods to determine the functional relationship between the normal gait vectors and the tote-bag gait vectors from the collected gait data: one using linear transformations and the other using partial least squares regression. These methods were validated by predicting the tote-bag gait vector from a person's normal gait vector and calculating the Euclidean distance between the predicted vector and the measured tote-bag gait vector of the same person. The mean prediction scores for the two methods were 96.4 and 95.0, respectively. This study demonstrated the potential for identifying a person from their walking motion, even under different walking conditions.
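The linear transformation approach can be sketched with synthetic gait vectors: learn a map from normal to tote-bag gait, then identify a probe by nearest Euclidean distance in the tote-bag space. Dimensions and noise levels below are illustrative assumptions, and for brevity the fit uses all subjects, whereas a real validation would hold the probe out of training.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic gait vectors (dimensions illustrative; the paper builds
# them from 17 landmarks over two gait cycles): one normal and one
# tote-bag vector per subject.
n_subjects, dim = 28, 20
normal_gait = rng.normal(size=(n_subjects, dim))
true_map = np.eye(dim) + 0.05 * rng.normal(size=(dim, dim))
tote_gait = normal_gait @ true_map + 0.05 * rng.normal(size=(n_subjects, dim))

# Learn a linear transformation from normal to tote-bag gait vectors
# (one of the paper's two approaches; PLSR is the other).
A, *_ = np.linalg.lstsq(normal_gait, tote_gait, rcond=None)

def identify(normal_probe, tote_gallery):
    # Predict the probe's tote-bag gait vector, then return the index
    # of the gallery vector at the smallest Euclidean distance.
    pred = normal_probe @ A
    return int(np.linalg.norm(tote_gallery - pred, axis=1).argmin())
```

The prediction scores reported above would correspond to how well the predicted vector ranks the true subject against the rest of the gallery; the exact scoring formula is not reproduced here.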


Subjects
Biometric Identification/methods, Gait/physiology, Walking/physiology, Biomechanical Phenomena/physiology, Humans, Joints/physiology, Least-Squares Analysis, Male, Principal Component Analysis, Young Adult