Results 1 - 8 of 8
1.
Physiol Meas; 45(3), 2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38422513

ABSTRACT

Objective. Extracting discriminative spatial information from multiple electrodes is a crucial and challenging problem for electroencephalogram (EEG)-based emotion recognition. Additionally, the domain shift caused by individual differences degrades the performance of cross-subject EEG classification. Approach. To deal with these problems, we propose the cerebral asymmetry representation learning-based deep subdomain adaptation network (CARL-DSAN) to enhance cross-subject EEG-based emotion recognition. Specifically, the CARL module is inspired by the neuroscience finding that asymmetrical activations of the left and right brain hemispheres occur during cognitive and affective processes. In the CARL module, we introduce a novel two-step strategy for extracting discriminative features through intra-hemisphere spatial learning and asymmetry representation learning. Moreover, the transformer encoders within the CARL module can emphasize the contributive electrodes and electrode pairs. Subsequently, the DSAN module, known for its superior performance over global domain adaptation, is adopted to mitigate domain shift and further improve cross-subject performance by aligning relevant subdomains that share the same class samples. Main Results. To validate the effectiveness of CARL-DSAN, we conduct subject-independent experiments on the DEAP database, achieving accuracies of 68.67% and 67.11% for arousal and valence classification, respectively, and corresponding accuracies of 67.70% and 67.18% on the MAHNOB-HCI database. Significance. The results demonstrate that CARL-DSAN achieves outstanding cross-subject performance in both arousal and valence classification.
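
As a rough illustration of the asymmetry-representation idea only (not the authors' CARL module), the sketch below computes differential and rational asymmetry features from symmetric left/right electrode pairs; the electrode list, pairing, and band-power input are illustrative assumptions.

```python
import numpy as np

# Hypothetical 10-20 system channels and symmetric left/right pairs (assumed, not the paper's montage)
CHANNELS = ["Fp1", "Fp2", "F3", "F4", "C3", "C4", "P3", "P4", "O1", "O2"]
PAIRS = [("Fp1", "Fp2"), ("F3", "F4"), ("C3", "C4"), ("P3", "P4"), ("O1", "O2")]

def asymmetry_features(band_power):
    """band_power: (n_channels, n_bands) array of per-channel band powers.
    Returns differential (L - R) and rational (L / R) asymmetry features
    concatenated over all symmetric electrode pairs."""
    idx = {name: i for i, name in enumerate(CHANNELS)}
    feats = []
    for left, right in PAIRS:
        l, r = band_power[idx[left]], band_power[idx[right]]
        feats.append(l - r)            # differential asymmetry per band
        feats.append(l / (r + 1e-8))   # rational asymmetry per band
    return np.concatenate(feats)

# Toy usage: 10 channels x 4 frequency bands of random band powers
rng = np.random.default_rng(0)
print(asymmetry_features(rng.random((10, 4))).shape)  # -> (40,)
```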


Subjects
Arousal; Electroencephalography; Databases, Factual; Electric Power Supplies; Emotions
2.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi; 40(5): 928-937, 2023 Oct 25.
Article in Chinese | MEDLINE | ID: mdl-37879922

ABSTRACT

Accurate segmentation of pediatric echocardiograms is a challenging task, because significant heart-size changes with age and faster heart rates lead to more blurred boundaries on cardiac ultrasound images than in adults. To address these problems, this paper proposes a dual-decoder network model combining channel attention and scale attention. First, an attention-guided decoder with a deep supervision strategy is used to obtain attention maps for the ventricular regions. Then, the generated ventricular attention is fed back to multiple layers of the network through skip connections to adjust the feature weights generated by the encoder and highlight the left and right ventricular areas. Finally, a scale attention module and a channel attention module are used to enhance the edge features of the left and right ventricles. The experimental results demonstrate that the proposed method achieves an average Dice coefficient of 90.63% on the acquired bilateral ventricular segmentation dataset, which is better than several conventional and state-of-the-art methods in the field of medical image segmentation. More importantly, the method segments the ventricular edges more accurately. The results of this paper can provide a new solution for pediatric echocardiographic bilateral ventricular segmentation and subsequent auxiliary diagnosis of congenital heart disease.
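
The abstract does not spell out the channel or scale attention designs; the following is a generic squeeze-and-excitation style channel attention block in PyTorch, shown only to illustrate how a decoder can re-weight feature channels, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average pooling
        self.fc = nn.Sequential(                     # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # re-weight feature channels

# Toy usage on a fake feature map
print(ChannelAttention(32)(torch.randn(2, 32, 64, 64)).shape)  # -> (2, 32, 64, 64)
```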


Subjects
Echocardiography; Heart Ventricles; Adult; Humans; Child; Heart Ventricles/diagnostic imaging; Image Processing, Computer-Assisted
3.
Physiol Meas; 44(9), 2023 Sep 21.
Article in English | MEDLINE | ID: mdl-37619586

ABSTRACT

Objective. To enhance the accuracy of heart sound classification, this study aims to overcome the limitations of common models that rely on handcrafted feature extraction. These traditional methods may distort or discard crucial pathological information within heart sounds because of their tedious parameter settings. Approach. We propose a learnable front-end based Efficient Channel Attention Network (ECA-Net) for heart sound classification. This approach optimizes the waveform-to-spectrogram transformation, enabling adaptive feature extraction from heart sound signals without domain knowledge. The features are subsequently fed into an ECA-Net based convolutional recurrent neural network, which emphasizes informative features and suppresses irrelevant information. To address data imbalance, focal loss is employed in our model. Main results. Using the well-known public PhysioNet Challenge 2016 dataset, our method achieved a classification accuracy of 97.77%, outperforming the majority of previous studies and closely rivaling the best model, with a difference of just 0.57%. Significance. The learnable front-end facilitates end-to-end training by replacing the conventional heart sound feature extraction module. This provides a novel and efficient approach for heart sound classification research and applications, enhancing the practical utility of end-to-end models in this field.
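
For readers unfamiliar with the ECA mechanism, here is a minimal PyTorch sketch of an efficient channel attention block: a 1-D convolution over the pooled channel descriptor replaces the fully connected bottleneck of SE blocks. The learnable front-end and the full convolutional recurrent network are not reproduced, and the fixed kernel size is a simplification of the adaptively chosen one.

```python
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient channel attention over a 2-D feature map."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        y = self.pool(x).view(b, 1, c)                 # treat channels as a 1-D sequence
        w = self.sigmoid(self.conv(y)).view(b, c, 1, 1)
        return x * w                                   # emphasize informative channels

print(ECA()(torch.randn(2, 64, 32, 32)).shape)  # -> (2, 64, 32, 32)
```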


Subjects
Heart Sounds; Neural Networks, Computer; Sound
4.
Entropy (Basel); 25(3), 2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36981354

ABSTRACT

Computed tomography (CT) images play a vital role in diagnosing rib fractures and determining the severity of chest trauma. However, quickly and accurately identifying rib fractures in a large number of CT images is an arduous task for radiologists. We propose a U-net-based detection method designed to extract rib fracture features at the pixel level in order to find rib fractures rapidly and precisely. Two modules are applied to the segmentation network: a combined attention module (CAM) and a hybrid dense dilated convolution module (HDDC). The features of the same layer of the encoder and the decoder are fused through the CAM, strengthening the local features of subtle fracture areas and enhancing edge features. The HDDC is used between the encoder and decoder to obtain sufficient semantic information. Experiments on the public dataset show that the model achieves a Recall of 81.71%, an F1 score of 81.86%, and a Dice coefficient of 53.28%. Experienced radiologists produce fewer false positives per scan, but they fall behind neural network models in detection sensitivity and require a longer diagnosis time. With the aid of our model, radiologists can achieve higher detection sensitivities than with computer-only or human-only diagnosis.
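
The exact HDDC design is not given in the abstract; the block below is only a generic multi-rate dilated convolution module in PyTorch, meant to illustrate how parallel dilations between encoder and decoder enlarge the receptive field while preserving resolution. The channel counts and dilation rates are assumptions.

```python
import torch
import torch.nn as nn

class DilatedBlock(nn.Module):
    """Parallel 3x3 convolutions with increasing dilation, fused by a 1x1 convolution."""
    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=r, dilation=r) for r in rates
        )
        self.fuse = nn.Conv2d(channels * len(rates), channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

print(DilatedBlock(16)(torch.randn(1, 16, 64, 64)).shape)  # -> (1, 16, 64, 64)
```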

5.
Comput Biol Med; 109: 159-170, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31059900

ABSTRACT

To estimate the reliability and cognitive states of operator performance in a human-machine collaborative environment, we propose a novel human mental workload (MW) recognizer based on deep learning principles and electroencephalogram (EEG) features. To capture personalized properties in high-dimensional EEG indicators, we introduce a feature mapping layer into the stacked denoising autoencoder (SDAE) that is capable of preserving the local information in EEG dynamics. The ensemble classifier is then built via a subject-specific integrated deep learning committee, which adapts to the cognitive properties of a specific human operator and alleviates inter-subject feature variations. We validate our algorithms and the ensemble SDAE classifier with local information preservation (denoted EL-SDAE) on an EEG database collected during the execution of complex human-machine tasks. The classification performance indicates that the EL-SDAE outperforms several classical MW estimators once its optimal network architecture has been identified.
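
As context for the SDAE building block, here is a minimal single-layer denoising autoencoder in PyTorch; the paper's local-information-preserving feature mapping layer and the subject-specific ensemble committee are not reproduced, and the feature dimensions are arbitrary.

```python
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, in_dim: int, hidden: int, noise_std: float = 0.1):
        super().__init__()
        self.noise_std = noise_std
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, in_dim)

    def forward(self, x: torch.Tensor):
        corrupted = x + self.noise_std * torch.randn_like(x)   # corrupt the input
        code = self.encoder(corrupted)                         # hidden representation
        return self.decoder(code), code                        # reconstruction, features

x = torch.randn(8, 128)                  # e.g. 128 EEG-derived features per sample
recon, code = DenoisingAE(128, 32)(x)
loss = nn.functional.mse_loss(recon, x)  # train to reconstruct the clean input
print(recon.shape, code.shape, float(loss))
```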


Assuntos
Cognição/fisiologia , Bases de Dados Factuais , Aprendizado Profundo , Eletroencefalografia , Modelos Neurológicos , Humanos
6.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi; 35(4): 621-630, 2018 Aug 25.
Article in Chinese | MEDLINE | ID: mdl-30124027

ABSTRACT

Rapid and accurate recognition of human actions and road conditions is a foundation and precondition for implementing self-control of an intelligent prosthesis. In this paper, a Gaussian mixture model and a hidden Markov model are used to recognize road conditions and human motion modes based on the inertial sensor in an artificial lower limb. First, the inertial sensor is used to collect acceleration, angle, and angular velocity signals along the x, y, and z axes of the lower limb. The signals are then segmented with a time window, denoised by the wavelet packet transform, and the fast Fourier transform is used to extract motion features. Principal component analysis (PCA) is then applied to remove redundant information from the features. Finally, the Gaussian mixture model and hidden Markov model are used to identify the human motion modes and road conditions. The experimental results show that the recognition rates for routine movements (walking, running, riding, uphill, downhill, up stairs and down stairs) are 96.25%, 92.5%, 96.25%, 91.25%, 93.75%, 88.75% and 90%, respectively. Compared with the support vector machine (SVM) method, the recognition rate of the proposed method is markedly higher, and it can provide a new way for the monitoring and control of intelligent prostheses in the future.
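
A minimal sketch of this kind of pipeline, under the assumption of synthetic signals and omitting the wavelet packet denoising and HMM stages: FFT magnitude features, PCA for redundancy removal, and one Gaussian mixture per motion class scored by log-likelihood (NumPy and scikit-learn).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fft_features(window: np.ndarray, n_bins: int = 16) -> np.ndarray:
    """Magnitude spectrum of a 1-D signal window, truncated to n_bins."""
    return np.abs(np.fft.rfft(window))[:n_bins]

def make_windows(freq, n=100, length=128):
    """Synthetic accelerometer-like windows dominated by one frequency."""
    t = np.arange(length) / length
    return np.stack([np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(length)
                     for _ in range(n)])

X = np.vstack([make_windows(3), make_windows(7)])      # two hypothetical motion modes
y = np.array([0] * 100 + [1] * 100)

feats = np.array([fft_features(w) for w in X])
Z = PCA(n_components=5).fit_transform(feats)           # remove redundant feature dimensions

models = {c: GaussianMixture(n_components=2, random_state=0).fit(Z[y == c]) for c in (0, 1)}
pred = np.argmax(np.stack([models[c].score_samples(Z) for c in (0, 1)], axis=1), axis=1)
print("training accuracy:", (pred == y).mean())
```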

7.
Front Neurorobot; 11: 19, 2017.
Article in English | MEDLINE | ID: mdl-28443015

ABSTRACT

Using machine-learning methodologies to analyze EEG signals is becoming increasingly attractive for recognizing human emotions because of the objectivity of physiological data and the capability of learning principles to model emotion classifiers from heterogeneous features. However, conventional subject-specific classifiers may impose additional burdens on each subject, who must prepare multiple-session EEG data as training sets. To this end, we developed a new EEG feature selection approach, transfer recursive feature elimination (T-RFE), to determine a set of the most robust EEG indicators with a stable geometrical distribution across a group of training subjects and a specific testing subject. A validation set is introduced to independently determine the optimal hyper-parameter and the feature ranking of the T-RFE model, with the aim of controlling overfitting. The effectiveness of the T-RFE algorithm for this cross-subject emotion classification paradigm has been validated on the DEAP database. With a linear least squares support vector machine classifier, the performance of T-RFE is compared against several conventional feature selection schemes, and a statistically significant improvement is found. The classification rate and F-score reach 0.7867, 0.7526, 0.7875, and 0.8077 for the arousal and valence dimensions, respectively, outperforming several recently reported works on the same database. Finally, the T-RFE based classifier is compared against two subject-generic classifiers from the literature. An investigation of the computational time of all classifiers indicates that the accuracy improvement of T-RFE comes at the cost of longer training time.
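
Plain recursive feature elimination with a linear SVM is the base procedure that T-RFE extends with a cross-subject transfer and validation step; the scikit-learn sketch below shows only that base procedure on synthetic data, not the authors' T-RFE.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))     # e.g. 40 hypothetical EEG features per trial
# Two informative dimensions (3 and 17) drive the binary label
y = (X[:, 3] + X[:, 17] + 0.5 * rng.standard_normal(200) > 0).astype(int)

selector = RFE(LinearSVC(C=1.0, max_iter=5000), n_features_to_select=5, step=1)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.support_))
print("feature ranking (1 = kept):", selector.ranking_[:10])
```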

8.
Comput Methods Programs Biomed; 140: 93-110, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28254094

ABSTRACT

BACKGROUND AND OBJECTIVE: Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers may suffer from the lack of expertise required for determining model structure and from the oversimplified combination of multimodal feature abstractions. METHODS: In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified through a physiological-data-driven approach. Each SAE consists of three hidden layers that filter the unwanted noise in the physiological features and derive stable feature representations. An additional deep model is used to build the SAE ensemble. The physiological features are split into several subsets according to different feature extraction approaches, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. RESULTS: The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. CONCLUSIONS: The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different sizes of the available physiological instances.
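
As a schematic of the fusion idea only, the toy PyTorch example below encodes two hypothetical feature subsets with separate encoders and concatenates the codes for a small classifier; the MESAE's three-hidden-layer SAEs, modality grouping into six encoding sets, and adjacent-graph fusion network are not reproduced, and all dimensions are assumed.

```python
import torch
import torch.nn as nn

def encoder(in_dim: int, code_dim: int) -> nn.Module:
    """Small two-layer encoder standing in for a pretrained SAE branch."""
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, code_dim), nn.ReLU())

enc_eeg  = encoder(160, 16)   # hypothetical EEG feature subset
enc_peri = encoder(40, 8)     # hypothetical peripheral-signal feature subset
fusion_classifier = nn.Sequential(nn.Linear(16 + 8, 32), nn.ReLU(), nn.Linear(32, 2))

x_eeg, x_peri = torch.randn(4, 160), torch.randn(4, 40)
fused = torch.cat([enc_eeg(x_eeg), enc_peri(x_peri)], dim=1)   # combine per-subset encodings
logits = fusion_classifier(fused)                              # binary arousal/valence logits
print(logits.shape)  # -> (4, 2)
```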


Subjects
Emotions; Learning; Models, Psychological; Humans