Results 1 - 18 of 18
1.
Sensors (Basel) ; 23(21), 2023 Oct 27.
Article in English | MEDLINE | ID: mdl-37960457

ABSTRACT

This paper proposes a portable wireless transmission system for the multi-channel acquisition of surface electromyography (EMG) signals. Because EMG signals have great application value in psychotherapy and human-computer interaction, the system is designed to acquire reliable, real-time facial-muscle-movement signals. Electrodes placed directly on the surface of a facial-muscle source can inhibit facial-muscle movement because of their weight and size, so we propose to solve this problem by placing the electrodes at the periphery of the face. The multi-channel approach allows the system to detect muscle activity in 16 regions simultaneously, and wireless (Wi-Fi) transmission increases the flexibility of portable applications. The sampling rate is 1 kHz and the resolution is 24 bits. To verify the reliability and practicality of the system, we compared it with a commercial device and obtained correlation coefficients above 0.70 on the comparison metrics. To test the system's utility, we then placed 16 electrodes around the face and recorded five facial movements. Three classifiers, random forest, support vector machine (SVM), and backpropagation neural network (BPNN), were used to recognize the five movements; the random forest proved practical, achieving a classification accuracy of 91.79%. These results also demonstrate that electrodes placed around the face can still recognize facial movements well, making the practical deployment of wearable EMG signal-acquisition devices more feasible.
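For readers who want to experiment with the classification stage described above, the following is a minimal sketch of such a pipeline: a hand-crafted feature (here a root-mean-square value per channel, an assumption, since the abstract does not specify the features) computed from 16-channel EMG windows and fed to a scikit-learn random forest. All data shapes and names are illustrative stand-ins.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def rms_features(windows):
    # windows: (n_samples, n_channels, n_timesteps) raw EMG segments
    # root-mean-square per channel is one common hand-crafted EMG feature
    return np.sqrt(np.mean(windows.astype(float) ** 2, axis=2))

# synthetic stand-in data: 200 windows, 16 channels, 1000 samples (1 s at 1 kHz)
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 16, 1000))
labels = rng.integers(0, 5, size=200)          # five facial movements

X = rms_features(windows)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print("cross-validated accuracy:", scores.mean())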


Subject(s)
Movement, Neural Networks (Computer), Humans, Reproducibility of Results, Electromyography, Movement/physiology, Muscles
2.
Org Biomol Chem ; 21(27): 5527-5531, 2023 Jul 12.
Article in English | MEDLINE | ID: mdl-37350504

ABSTRACT

The stereoselective cyclization of geranylgeraniol catalysed by squalene-hopene cyclase (SHC) was investigated. By use of this transformation, spongiane diterpenoids (+)-isoagatholactone and (+)-spongian-16-one, and meroterpenoid 3-deoxychavalone A were synthesized in a concise and redox-economic manner. This work showcases the application of SHC-catalysed cyclization as a key step in terpenoid synthesis.

3.
Entropy (Basel) ; 25(3), 2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36981348

ABSTRACT

Micro-expression recognition (MER) is challenging because of the difficulty of capturing the instantaneous and subtle motion changes of micro-expressions (MEs). Early works based on hand-crafted features extracted from prior knowledge showed some promising results but have recently been replaced by deep learning methods based on the attention mechanism. However, with limited ME sample sizes, the features extracted by these methods lack discriminative ME representations, resulting in MER performance that still needs improvement. This paper proposes the Dual-branch Attention Network (Dual-ATME) for MER to address the problem of single-scale features representing MEs ineffectively. Specifically, Dual-ATME consists of two components: Hand-crafted Attention Region Selection (HARS) and Automated Attention Region Selection (AARS). HARS uses prior knowledge to manually extract features from regions of interest (ROIs), while AARS relies on attention mechanisms to extract hidden information from the data automatically. Finally, through similarity comparison and feature fusion, the dual-scale features can be used to learn ME representations effectively. Experiments on spontaneous ME datasets (CASME II, SAMM, and SMIC) and their composite dataset MEGC2019-CD show that Dual-ATME achieves better, or at least competitive, performance compared with state-of-the-art MER methods.
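The abstract describes a dual-branch design that fuses hand-crafted ROI features with automatically attended features. The toy PyTorch sketch below illustrates only that general structure; the layer sizes, attention configuration, and fusion by concatenation are assumptions, not the actual Dual-ATME architecture.

import torch
import torch.nn as nn

class DualBranchFusion(nn.Module):
    """Toy sketch of a dual-branch design: one branch consumes hand-crafted
    ROI features, the other applies self-attention to automatically extracted
    patch features; the two embeddings are fused for classification."""
    def __init__(self, roi_dim=128, patch_dim=64, n_classes=5, d=128):
        super().__init__()
        self.hars = nn.Sequential(nn.Linear(roi_dim, d), nn.ReLU(), nn.Linear(d, d))
        self.proj = nn.Linear(patch_dim, d)
        self.attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
        self.head = nn.Linear(2 * d, n_classes)

    def forward(self, roi_feats, patch_feats):
        # roi_feats: (B, roi_dim); patch_feats: (B, n_patches, patch_dim)
        h1 = self.hars(roi_feats)                      # hand-crafted branch
        p = self.proj(patch_feats)
        a, _ = self.attn(p, p, p)                      # automated attention branch
        h2 = a.mean(dim=1)                             # pool over patches
        return self.head(torch.cat([h1, h2], dim=1))   # fuse and classify

model = DualBranchFusion()
logits = model(torch.randn(8, 128), torch.randn(8, 49, 64))
print(logits.shape)   # torch.Size([8, 5])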

4.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 2782-2800, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35560102

ABSTRACT

Micro-expression (ME) is a significant non-verbal communication cue that reveals a person's genuine emotional state. Micro-expression analysis (MEA) has gained attention only in the last decade. However, the small-sample-size problem constrains the use of deep learning for MEA, ME samples are spread across six different databases, leading to database bias, and building an ME database is complicated. In this article, we introduce a large-scale spontaneous ME database: CAS(ME)³. The contributions of this article are summarized as follows: (1) CAS(ME)³ offers around 80 hours of video with over 8,000,000 frames, including 1,109 manually labeled MEs and 3,490 macro-expressions; such a large sample size allows effective validation of MEA methods while avoiding database bias. (2) Inspired by psychological experiments, CAS(ME)³ is the first to provide depth information as an additional modality, contributing to multi-modal MEA. (3) For the first time, CAS(ME)³ elicits MEs with high ecological validity using the mock-crime paradigm, along with physiological and voice signals, contributing to practical MEA. (4) CAS(ME)³ also provides 1,508 unlabeled videos with more than 4,000,000 frames, i.e., a data platform for unsupervised MEA methods. (5) Finally, we demonstrate the effectiveness of the depth information through the proposed depth-flow algorithm and RGB-D information.


Subject(s)
Factual Databases, Emotions, Facial Expression, Female, Humans, Male, Young Adult, Algorithms, Bias, Factual Databases/standards, Datasets as Topic/standards, Photic Stimulation, Reproducibility of Results, Sample Size, Supervised Machine Learning/standards, Video Recording, Visual Perception
5.
Front Psychol ; 13: 959124, 2022.
Article in English | MEDLINE | ID: mdl-36186390

ABSTRACT

A micro-expression (ME) is an extremely quick and uncontrollable facial movement that lasts 40-200 ms and reveals thoughts and feelings that an individual attempts to conceal. Although MEs are much more difficult to detect and recognize, ME recognition is similar to macro-expression recognition in that it is influenced by facial features. Previous studies suggested that facial attractiveness can influence facial expression recognition, but it remains unclear whether it also influences ME recognition. Addressing this issue, this study tested 38 participants on two ME recognition tasks, one static and one dynamic. Three types of MEs (positive, neutral, and negative) were presented at two attractiveness levels (attractive, unattractive). The results showed that participants recognized MEs on attractive faces much faster than on unattractive ones, and there was a significant interaction between ME type and facial attractiveness. Furthermore, attractive happy faces were recognized faster in both the static and the dynamic conditions, highlighting the happiness superiority effect. Our results therefore provide the first evidence that facial attractiveness can influence ME recognition under both static and dynamic conditions.

6.
J Comput Sci Technol ; 37(2): 330-343, 2022.
Article in English | MEDLINE | ID: mdl-35496726

ABSTRACT

COVID-19 is a contagious infection that has severe effects on the global economy and our daily life. Accurate diagnosis of COVID-19 is important for consultants, patients, and radiologists. In this study, we use the deep learning network AlexNet as the backbone and enhance it in two ways: 1) adding batch normalization to help accelerate training by reducing internal covariate shift; 2) replacing the fully connected layers in AlexNet with three classifiers: SNN, ELM, and RVFL. This yields three novel models from the deep COVID network (DC-Net) framework, named DC-Net-S, DC-Net-E, and DC-Net-R, respectively. After comparison, we find that the proposed DC-Net-R achieves an average accuracy of 90.91% on a private dataset (available upon email request) comprising 296 images, with a specificity of 96.13%, the best performance among the three proposed models. In addition, we show that DC-Net-R performs much better than other existing algorithms in the literature. Supplementary Information: The online version contains supplementary material available at 10.1007/s11390-020-0679-8.
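The paper's key modification is swapping AlexNet's fully connected head for a shallow, quickly trainable classifier such as an RVFL. Below is a minimal NumPy sketch of an RVFL-style classifier trained by ridge regression on backbone features; the feature dimension, hidden size, and regularization value are illustrative, and random vectors stand in for real AlexNet features so the sketch runs on its own.

import numpy as np

class RVFL:
    """Minimal random vector functional link classifier: a random hidden layer
    plus direct input links, with output weights solved in closed form by
    ridge regression. A sketch of the general idea, not the paper's exact head."""
    def __init__(self, n_hidden=512, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed

    def _hidden(self, X):
        H = np.tanh(X @ self.W + self.b)
        return np.hstack([X, H])                 # direct links + random features

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(scale=1.0 / np.sqrt(X.shape[1]),
                            size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        Y = np.eye(y.max() + 1)[y]               # one-hot targets
        D = self._hidden(X)
        self.beta = np.linalg.solve(D.T @ D + self.reg * np.eye(D.shape[1]),
                                    D.T @ Y)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# features would come from a (batch-normalized) AlexNet backbone; 256-dim
# random vectors stand in here so the sketch runs on its own
X = np.random.default_rng(1).normal(size=(296, 256))
y = np.random.default_rng(2).integers(0, 2, size=296)
print((RVFL().fit(X, y).predict(X) == y).mean())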

7.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 5826-5846, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33739920

ABSTRACT

Unlike conventional facial expressions, micro-expressions are involuntary and transient facial expressions capable of revealing the genuine emotions that people attempt to hide. They can therefore provide important information in a broad range of applications such as lie detection and criminal detection. Because micro-expressions are transient and of low intensity, however, their detection and recognition are difficult and rely heavily on expert experience. Owing to its intrinsic particularity and complexity, video-based micro-expression analysis is attractive but challenging, and has recently become an active area of research. Although there have been numerous developments in this area, thus far no comprehensive survey has provided researchers with a systematic overview of these developments together with a unified evaluation. Accordingly, in this survey we first highlight the key differences between macro- and micro-expressions, then use these differences to guide our survey of video-based micro-expression analysis in a cascaded structure, encompassing the neuropsychological basis, datasets, features, spotting algorithms, recognition algorithms, applications, and evaluation of state-of-the-art approaches. For each aspect, the basic techniques, advanced developments, and major challenges are addressed and discussed. Furthermore, after considering the limitations of existing micro-expression datasets, we present and release a new dataset, the micro-and-macro expression warehouse (MMEW), containing more video samples and more labeled emotion types. We then perform a unified comparison of representative methods on CAS(ME)² for spotting, and on MMEW and SAMM for recognition. Finally, some potential future research directions are explored and outlined.


Subject(s)
Algorithms, Facial Expression, Emotions, Humans
8.
IEEE Trans Image Process ; 30: 3956-3969, 2021.
Article in English | MEDLINE | ID: mdl-33788686

ABSTRACT

Micro-expression spotting is a fundamental step in micro-expression analysis. This paper proposes a novel convolutional neural network (CNN) based network for spotting multi-scale spontaneous micro-expression intervals in long videos, named the Micro-Expression Spotting Network (MESNet). It is composed of three modules: a 2+1D Spatiotemporal Convolutional Network, which uses 2D convolution to extract spatial features and 1D convolution to extract temporal features; a Clip Proposal Network, which generates candidate micro-expression clips; and a Classification Regression Network, which classifies each proposed clip as micro-expression or not and further regresses its temporal boundaries. We also propose a novel evaluation metric for micro-expression spotting. Extensive experiments have been conducted on two long-video datasets, CAS(ME)² and SAMM, with leave-one-subject-out cross-validation used to evaluate spotting performance. Results show that the proposed MESNet effectively improves the F1-score and outperforms other state-of-the-art methods, especially on the SAMM dataset.
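The first MESNet module factors spatiotemporal convolution into a 2D spatial step followed by a 1D temporal step. The PyTorch sketch below shows one common way to express such a (2+1)D block with Conv3d kernels of shape (1, k, k) and (k, 1, 1); the channel widths and activation choices are assumptions, not MESNet's exact configuration.

import torch
import torch.nn as nn

class SpatioTemporalConv2Plus1D(nn.Module):
    """A (2+1)D convolution block: a 3-D convolution factored into a 2-D
    spatial convolution (1 x k x k) followed by a 1-D temporal convolution
    (k x 1 x 1)."""
    def __init__(self, in_ch, out_ch, mid_ch=None, k=3):
        super().__init__()
        mid_ch = mid_ch or out_ch
        self.spatial = nn.Conv3d(in_ch, mid_ch, kernel_size=(1, k, k),
                                 padding=(0, k // 2, k // 2))
        self.temporal = nn.Conv3d(mid_ch, out_ch, kernel_size=(k, 1, 1),
                                  padding=(k // 2, 0, 0))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # x: (batch, channels, frames, height, width)
        return self.act(self.temporal(self.act(self.spatial(x))))

clip = torch.randn(2, 3, 32, 112, 112)          # 32-frame RGB clip
feat = SpatioTemporalConv2Plus1D(3, 16)(clip)
print(feat.shape)                               # torch.Size([2, 16, 32, 112, 112])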

9.
Front Psychol ; 12: 784834, 2021.
Article in English | MEDLINE | ID: mdl-35058850

ABSTRACT

Facial expressions are a vital way for humans to show their perceived emotions. Annotating large amounts of data makes it convenient to detect and recognize expressions or micro-expressions with deep learning. However, studying video-based expressions or micro-expressions requires coders to have professional knowledge and be familiar with action unit (AU) coding, which creates considerable difficulties. This paper aims to alleviate this situation. We deconstruct facial muscle movements from the motor cortex and systematically sort out the relationships among facial muscles, AUs, and emotion so that more people can understand coding from first principles. We derive the relationship between AUs and emotion from a data-driven analysis of 5,000 images from the RAF-AU database, combined with the experience of professional coders. We discuss the complex facial motor cortical network system that generates the properties of facial movement, detailing the facial nucleus and the motor system associated with facial expressions. The supporting physiological theory for AU labeling of emotions is obtained by adding facial muscle movement patterns. We also present the detailed process of emotion labeling and of AU detection and recognition. Based on this work, the coding of spontaneous expressions and micro-expressions in video is summarized, and future directions are outlined.
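As a concrete illustration of AU-to-emotion coding, the snippet below encodes commonly cited FACS-style prototypes (e.g., happiness as AU6 + AU12) and checks which emotions a set of observed AUs could support. These prototypes are textbook examples only, not the data-driven mapping the paper derives from RAF-AU.

# Illustrative FACS-style prototypes linking action units (AUs) to basic
# emotions; the paper builds its own mapping from RAF-AU, so treat these
# merely as common textbook examples.
AU_PROTOTYPES = {
    "happiness": [6, 12],           # cheek raiser + lip corner puller
    "sadness":   [1, 4, 15],        # inner brow raiser + brow lowerer + lip corner depressor
    "surprise":  [1, 2, 5, 26],     # brow raisers + upper lid raiser + jaw drop
    "fear":      [1, 2, 4, 5, 7, 20, 26],
    "anger":     [4, 5, 7, 23],     # brow lowerer + lid tighteners + lip tightener
    "disgust":   [9, 15],           # nose wrinkler + lip corner depressor
}

def candidate_emotions(observed_aus):
    """Return emotions whose prototype AUs are all present in a coded frame."""
    observed = set(observed_aus)
    return [e for e, aus in AU_PROTOTYPES.items() if set(aus) <= observed]

print(candidate_emotions([6, 12, 25]))   # ['happiness']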

10.
Org Lett ; 22(5): 1976-1979, 2020 Mar 6.
Article in English | MEDLINE | ID: mdl-32052978

ABSTRACT

A combined approach toward the synthesis of epoxyguaiane sesquiterpenes is presented. Using a fungal sesquiterpene cyclase, guaian-6,10(14)-diene was produced through metabolic engineering of the isoprenoid pathway in E. coli. (-)-Englerin A, (-)-oxyphyllol, (+)-orientalol E, and (+)-orientalol F were then synthesized in two to six steps. This strategy provides rapid access to the epoxyguaiane core structure and should facilitate the synthesis of (-)-englerin A and its analogues for evaluation of their therapeutic potential in drug discovery.


Subject(s)
Escherichia coli/chemistry, Guaiane Sesquiterpenes/chemical synthesis, Sesquiterpenes/chemical synthesis, Escherichia coli/metabolism, Molecular Structure, Sesquiterpenes/chemistry, Guaiane Sesquiterpenes/chemistry, Stereoisomerism, Synthetic Biology
11.
J Med Syst ; 42(1): 2, 2017 Nov 17.
Article in English | MEDLINE | ID: mdl-29159706

ABSTRACT

Alcohol use disorder (AUD) is an important brain disease that alters brain structure. Recently, scholars have tended to use computer-vision-based techniques to detect AUD. We collected images from 235 subjects, 114 alcoholic and 121 non-alcoholic. Of the 235 images, 100 were used as the training set, with data augmentation applied, and the remaining 135 were used as the test set. We then employed a powerful recent technique, the convolutional neural network (CNN), built from convolutional layers, rectified-linear-unit layers, pooling layers, fully connected layers, and a softmax layer, and compared three pooling techniques: max pooling, average pooling, and stochastic pooling. The results show that our method achieved a sensitivity of 96.88%, a specificity of 97.18%, and an accuracy of 97.04%, outperforming three state-of-the-art approaches. Stochastic pooling performed better than max pooling and average pooling, and we validated that a CNN with five convolutional layers and two fully connected layers performed best. The GPU yielded a 149× acceleration in training and a 166× acceleration in testing compared with the CPU.
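Stochastic pooling, which the study found to outperform max and average pooling, samples one activation per pooling window with probability proportional to its value during training and uses the probability-weighted average at test time. The PyTorch module below is a sketch of that idea (assuming non-negative, post-ReLU inputs), not necessarily the paper's exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticPool2d(nn.Module):
    """Stochastic pooling in the spirit of Zeiler & Fergus (2013): during
    training, one activation per window is sampled with probability
    proportional to its value; at test time, the expectation is used."""
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.k, self.s = kernel_size, stride

    def forward(self, x):
        n, c, h, w = x.shape
        patches = F.unfold(x, self.k, stride=self.s)          # (N, C*k*k, L)
        L = patches.shape[-1]
        patches = patches.view(n, c, self.k * self.k, L)
        probs = patches.clamp(min=0) + 1e-12                  # assumes ReLU inputs
        probs = probs / probs.sum(dim=2, keepdim=True)
        if self.training:
            flat = probs.permute(0, 1, 3, 2).reshape(-1, self.k * self.k)
            idx = torch.multinomial(flat, 1)                  # one pick per window
            vals = patches.permute(0, 1, 3, 2).reshape(-1, self.k * self.k)
            out = vals.gather(1, idx).view(n, c, L)
        else:
            out = (probs * patches).sum(dim=2)                # expectation
        oh = (h - self.k) // self.s + 1
        ow = (w - self.k) // self.s + 1
        return out.view(n, c, oh, ow)

pool = StochasticPool2d()
print(pool(torch.relu(torch.randn(1, 8, 28, 28))).shape)      # (1, 8, 14, 14)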


Subject(s)
Alcoholism/diagnostic imaging, Brain/pathology, Computer-Assisted Image Processing/methods, Statistical Models, Neural Networks (Computer), Aged, Alcoholism/pathology, Brain/diagnostic imaging, China, Chronic Disease, Female, Humans, Male, Middle Aged, Sensitivity and Specificity
12.
IEEE Trans Image Process ; 24(12): 6034-47, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26540689

ABSTRACT

Micro-expressions are brief, involuntary facial expressions that reveal genuine emotions and thus help detect lies. Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. This paper is an extended version of our International Conference on Pattern Recognition paper, in which we proposed a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. Here we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimensional array: the first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves a higher accuracy than RGB. In addition, we define a set of regions of interest (ROIs) based on the Facial Action Coding System and calculate dynamic texture histograms for each ROI. Experiments conducted on two micro-expression databases, CASME and CASME II, show that the performance of TICS, CIELab, and CIELuv is better than that of RGB or grayscale.
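The fourth-order tensor view of a clip (height × width × frames × color) and the change of color space can be sketched in a few lines; here scikit-image's CIELab conversion stands in for the learned TICS transform, and the clip itself is random toy data.

import numpy as np
from skimage import color

# A micro-expression clip as a 4th-order tensor: height x width x frames x colour.
rng = np.random.default_rng(0)
clip_rgb = rng.random((64, 64, 30, 3))           # toy 30-frame RGB clip in [0, 1]

# transform the colour mode from RGB to a perceptual space (here CIELab);
# TICS itself learns a decorrelating colour transform, which is not shown here
clip_lab = np.stack([color.rgb2lab(clip_rgb[:, :, t, :])
                     for t in range(clip_rgb.shape[2])], axis=2)
print(clip_lab.shape)                            # (64, 64, 30, 3)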


Subject(s)
Face/physiology, Facial Expression, Computer-Assisted Image Processing/methods, Automated Pattern Recognition/methods, Color, Factual Databases, Emotions, Humans
13.
PLoS One ; 9(1): e86041, 2014.
Article in English | MEDLINE | ID: mdl-24475068

ABSTRACT

A robust automatic micro-expression recognition system would have broad applications in national safety, police interrogation, and clinical diagnosis. Developing such a system requires high-quality databases with sufficient training samples, which are currently not available. We reviewed previously developed micro-expression databases and built an improved one (CASME II) with higher temporal resolution (200 fps) and higher spatial resolution (about 280×340 pixels on the facial area). Participants' facial expressions were elicited in a well-controlled laboratory environment with proper illumination (e.g., without light flickering). Among nearly 3,000 facial movements, 247 micro-expressions were selected for the database and labeled with action units (AUs) and emotions. For a baseline evaluation, LBP-TOP was employed for feature extraction and an SVM as the classifier, using leave-one-subject-out cross-validation. The best performance is 63.41% for 5-class classification.
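The baseline protocol, an SVM evaluated with leave-one-subject-out cross-validation, maps directly onto scikit-learn's LeaveOneGroupOut splitter, as sketched below. The feature matrix, kernel, and feature dimensionality are placeholders; real inputs would be LBP-TOP histograms per sample with the participant ID as the group label.

import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# stand-ins: X would be LBP-TOP histograms per sample, y the 5 emotion classes,
# and subjects the participant ID used for leave-one-subject-out splitting
rng = np.random.default_rng(0)
X = rng.random((247, 177))          # placeholder feature dimensionality
y = rng.integers(0, 5, size=247)
subjects = rng.integers(0, 26, size=247)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, groups=subjects, cv=LeaveOneGroupOut())
print("LOSO accuracy:", scores.mean())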


Subject(s)
Biometry/methods, Emotions/physiology, Facial Expression, Automated Pattern Recognition/methods, Factual Databases, Face/anatomy & histology, Face/physiology, Forensic Sciences/methods, Humans, Information Storage and Retrieval/methods, Support Vector Machine
14.
IEEE Trans Image Process ; 23(2): 920-30, 2014 Feb.
Article in English | MEDLINE | ID: mdl-26270928

ABSTRACT

As a general framework, Laplacian embedding, based on a pairwise similarity matrix, infers low-dimensional representations from high-dimensional data. However, it generally suffers from three issues: 1) algorithmic performance is sensitive to the neighborhood size; 2) the algorithm encounters the well-known small-sample-size (SSS) problem; and 3) the algorithm de-emphasizes small-distance pairs. To address these issues, we propose exponential embedding using the matrix exponential and provide a general framework for dimensionality reduction. In this framework, the matrix exponential can be roughly interpreted as a random walk over the feature similarity matrix and is thus more robust. The positive-definiteness of the matrix exponential deals with the SSS problem, and the decay behavior of the exponential is more effective at emphasizing small-distance pairs. Under this framework, we apply the matrix exponential to extend many popular Laplacian embedding algorithms, e.g., locality preserving projections, unsupervised discriminant projections, and marginal Fisher analysis. Experiments conducted on synthesized data, UCI datasets, and the Georgia Tech face database show that the proposed framework addresses the issues mentioned above.
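The core trick, replacing scatter or similarity matrices by their matrix exponentials so that the resulting matrices are positive definite, can be sketched for the discriminant-analysis case as follows. This is a generic illustration built on scipy's expm and a generalized symmetric eigensolver, not the paper's exact formulation for each extended algorithm.

import numpy as np
from scipy.linalg import expm, eigh

def scatter_matrices(X, y):
    # X: (n_samples, n_features), y: integer class labels
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    return Sb, Sw

def exponential_discriminant_projection(X, y, n_components=2):
    Sb, Sw = scatter_matrices(X, y)
    # the matrix exponential makes both scatters positive definite,
    # sidestepping the small-sample-size singularity of Sw
    eigvals, eigvecs = eigh(expm(Sb), expm(Sw))
    order = np.argsort(eigvals)[::-1]
    return eigvecs[:, order[:n_components]]

X = np.random.default_rng(0).normal(size=(30, 10))   # 30 samples, 10 features
y = np.repeat([0, 1, 2], 10)
P = exponential_discriminant_projection(X, y)
print(P.shape)                                       # (10, 2)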

15.
PLoS One ; 8(7): e66647, 2013.
Article in English | MEDLINE | ID: mdl-23840864

ABSTRACT

Tensor subspace transformation, a commonly used subspace transformation technique, has gained more and more popularity over the past few years because many objects in the real world can be naturally represented as multidimensional arrays, i.e., tensors. For example, an RGB facial image can be represented as a three-dimensional array (or 3rd-order tensor): the first two dimensions (or modes) represent the facial spatial information and the third represents the color-space information. Each mode of the tensor may express a different semantic meaning, so different transformation strategies should be applied to different modes according to those meanings to obtain the best performance. To the best of our knowledge, no existing tensor subspace transformation algorithm applies different transformation strategies to different modes of a tensor accordingly. In this paper, we propose a fusion tensor subspace transformation framework, a novel idea in which different transformation strategies are applied to separate modes of a tensor. Under this framework, we propose the Fusion Tensor Color Space (FTCS) model for face recognition.


Subject(s)
Algorithms, Face/anatomy & histology, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Artificial Intelligence, Humans
16.
J Med Syst ; 36(4): 2505-19, 2012 Aug.
Article in English | MEDLINE | ID: mdl-21537848

ABSTRACT

Breast cancer is becoming a leading cause of death among women worldwide; meanwhile, it is well established that early detection and accurate diagnosis can ensure long survival of patients. In this paper, a swarm intelligence based support vector machine classifier (PSO-SVM) is proposed for breast cancer diagnosis. In the proposed PSO-SVM, model selection and feature selection for the SVM are solved simultaneously within a particle swarm optimization (PSO) framework. A weighted objective function for the PSO is designed that simultaneously takes into account the average accuracy of the SVM (ACC), the number of support vectors (SVs), and the number of selected features. Furthermore, time-varying acceleration coefficients (TVAC) and a time-varying inertia weight (TVIW) are employed to efficiently control local and global search in the PSO algorithm. The effectiveness of PSO-SVM has been rigorously evaluated on the Wisconsin Breast Cancer Dataset (WBCD), which is commonly used by researchers applying machine learning methods to breast cancer diagnosis. The proposed system is compared with a grid-search method with feature selection by F-score. The experimental results demonstrate that the proposed approach not only obtains more appropriate model parameters and a more discriminative feature subset, but also needs a smaller set of SVs for training, giving high predictive accuracy. Compared with existing methods in previous studies, the proposed system can also be regarded as a promising success, with an excellent classification accuracy of 99.3% under 10-fold cross-validation (CV). Moreover, a combination of five informative features is identified, which may provide important insight into the nature of breast cancer and give physicians an important clue to pay closer attention. We believe the promising results can help physicians make accurate diagnostic decisions in clinical breast cancer diagnosis.
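A stripped-down version of the idea, PSO searching over SVM hyperparameters scored by cross-validated accuracy, is sketched below using scikit-learn's built-in Wisconsin diagnostic breast cancer data as a stand-in for WBCD. The paper's full method additionally performs feature selection and uses time-varying acceleration coefficients and inertia weight, all omitted here; the swarm size, iteration count, and fixed coefficients are arbitrary choices.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # stand-in for the WBCD data

def fitness(pos):
    # pos = (log10 C, log10 gamma); score by cross-validated accuracy
    clf = make_pipeline(StandardScaler(),
                        SVC(C=10 ** pos[0], gamma=10 ** pos[1]))
    return cross_val_score(clf, X, y, cv=5).mean()

rng = np.random.default_rng(0)
n_particles, n_iters, dim = 10, 15, 2
lo, hi = np.array([-2.0, -5.0]), np.array([3.0, 1.0])
pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (log10 C, log10 gamma):", gbest, "CV accuracy:", pbest_val.max())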


Subject(s)
Breast Neoplasms/diagnosis, Computer-Assisted Diagnosis, Support Vector Machine, Algorithms, Artificial Intelligence, Breast Neoplasms/classification, Female, Humans
17.
IEEE Trans Neural Netw Learn Syst ; 23(6): 876-88, 2012 Jun.
Article in English | MEDLINE | ID: mdl-24806760

ABSTRACT

As one of the fundamental features, color provides useful information and plays an important role in face recognition. Generally, the choice of color space differs across visual tasks. How can a color space be sought for the specific face recognition problem? To address this problem, we propose a sparse tensor discriminant color space (STDCS) model that represents a color image as a third-order tensor. The model not only keeps the underlying spatial structure of color images but also enhances robustness and offers an intuitive or semantic interpretation. STDCS transforms the eigenvalue problem into a series of regression problems; one sparse color-space transformation matrix and two sparse discriminant projection matrices are then obtained by applying the lasso or elastic net to these regression problems. Experiments on three color face databases, AR, Georgia Tech, and Labeled Faces in the Wild, show that both the performance and the robustness of the proposed method outperform those of the state-of-the-art TDCS model.


Subject(s)
Biometry/methods, Colorimetry/methods, Face/anatomy & histology, Computer-Assisted Image Interpretation/methods, Automated Pattern Recognition/methods, Artificial Intelligence, Color, Humans, Reproducibility of Results, Sensitivity and Specificity
18.
IEEE Trans Image Process ; 20(9): 2490-501, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21356616

ABSTRACT

Recent research efforts reveal that color may provide useful information for face recognition. The choice of color space generally differs across visual tasks; how can a color space be sought for the specific face recognition problem? To address this problem, this paper represents a color image as a third-order tensor and presents the tensor discriminant color space (TDCS) model, which keeps the underlying spatial structure of color images. With the definition of n-mode between-class and within-class scatter matrices, TDCS constructs an iterative procedure to obtain one color-space transformation matrix and two discriminant projection matrices by maximizing the ratio of these two scatter matrices. Experiments are conducted on two color face databases, the AR and Georgia Tech face databases, and the results show that both the performance and the efficiency of the proposed method are better than those of the state-of-the-art color image discriminant model, which involves one color-space transformation matrix and one discriminant projection matrix, especially on a complicated face database with various poses.
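The basic tensor operation behind TDCS, applying a transformation matrix along one mode of a third-order image tensor (the n-mode product), can be sketched as follows; the 3×3 matrix shown is an illustrative fixed luma/chroma-style transform, whereas TDCS learns its color-space matrix discriminatively.

import numpy as np

def mode_n_product(tensor, matrix, mode):
    """Multiply a tensor by a matrix along the given mode (n-mode product)."""
    moved = np.moveaxis(tensor, mode, -1)          # bring the mode to the back
    out = moved @ matrix.T                          # contract that mode
    return np.moveaxis(out, -1, mode)

# a colour face image as a 3rd-order tensor: height x width x colour
rng = np.random.default_rng(0)
image = rng.random((100, 80, 3))

# an illustrative 3x3 colour-space transformation (a luma/chroma-like space);
# TDCS learns this matrix discriminatively rather than fixing it
C = np.array([[0.299, 0.587, 0.114],
              [-0.169, -0.331, 0.500],
              [0.500, -0.419, -0.081]])
transformed = mode_n_product(image, C, mode=2)
print(transformed.shape)                            # (100, 80, 3)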


Subject(s)
Algorithms, Biometric Identification/methods, Face/anatomy & histology, Computer-Assisted Image Processing/methods, Artificial Intelligence, Color, Factual Databases, Discriminant Analysis, Humans, Theoretical Models