Results 1 - 10 of 10
1.
Entropy (Basel) ; 25(3)2023 Mar 07.
Article in English | MEDLINE | ID: mdl-36981354

ABSTRACT

Computed tomography (CT) images play a vital role in diagnosing rib fractures and determining the severity of chest trauma. However, quickly and accurately identifying rib fractures in a large number of CT images is an arduous task for radiologists. We propose a U-net-based detection method that extracts rib fracture features at the pixel level to find rib fractures rapidly and precisely. Two modules are added to the segmentation network: a combined attention module (CAM) and a hybrid dense dilated convolution module (HDDC). The features of the same layer of the encoder and the decoder are fused through CAM, strengthening the local features of the subtle fracture area and enhancing the edge features. HDDC is used between the encoder and decoder to obtain sufficient semantic information. Experiments on the public dataset show that the model achieves a Recall of 81.71%, an F1 score of 81.86%, and a Dice coefficient of 53.28%. Experienced radiologists produce fewer false positives per scan but fall short of neural network models in detection sensitivity and require longer diagnosis times. With the aid of our model, radiologists can achieve higher detection sensitivity than either computer-only or human-only diagnosis.
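As an illustration of the same-layer encoder-decoder fusion idea described above, the PyTorch sketch below gates the summed encoder and decoder features with channel and spatial attention. The paper does not publish this exact design, so the class name, kernel sizes, and reduction ratio are assumptions.

    # Hypothetical channel-and-spatial attention gate for encoder/decoder fusion.
    import torch
    import torch.nn as nn

    class CombinedAttentionFusion(nn.Module):
        def __init__(self, channels, reduction=8):
            super().__init__()
            # channel attention: squeeze spatial dims, excite per channel
            self.channel_gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),
                nn.Conv2d(channels, channels // reduction, 1),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels // reduction, channels, 1),
                nn.Sigmoid(),
            )
            # spatial attention: 7x7 conv over channel-pooled maps
            self.spatial_gate = nn.Sequential(
                nn.Conv2d(2, 1, kernel_size=7, padding=3),
                nn.Sigmoid(),
            )

        def forward(self, enc_feat, dec_feat):
            fused = enc_feat + dec_feat                     # same-layer fusion
            fused = fused * self.channel_gate(fused)        # re-weight channels
            pooled = torch.cat([fused.mean(1, keepdim=True),
                                fused.max(1, keepdim=True).values], dim=1)
            return fused * self.spatial_gate(pooled)        # emphasize subtle edges

    x_enc = torch.randn(1, 64, 128, 128)
    x_dec = torch.randn(1, 64, 128, 128)
    print(CombinedAttentionFusion(64)(x_enc, x_dec).shape)  # torch.Size([1, 64, 128, 128])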

2.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 40(5): 928-937, 2023 Oct 25.
Article in Chinese | MEDLINE | ID: mdl-37879922

ABSTRACT

Accurate segmentation of pediatric echocardiograms is a challenging task, because significant changes in heart size with age and a faster heart rate lead to more blurred boundaries on cardiac ultrasound images than in adults. To address these problems, a dual-decoder network model combining channel attention and scale attention is proposed in this paper. Firstly, an attention-guided decoder with a deep supervision strategy is used to obtain attention maps for the ventricular regions. Then, the generated ventricular attention is fed back to multiple layers of the network through skip connections to adjust the feature weights generated by the encoder and highlight the left and right ventricular areas. Finally, a scale attention module and a channel attention module are utilized to enhance the edge features of the left and right ventricles. The experimental results demonstrate that the proposed method achieves an average Dice coefficient of 90.63% on the acquired bilateral ventricular segmentation dataset, which is better than some conventional and state-of-the-art methods in the field of medical image segmentation. More importantly, the method segments the ventricular edges more accurately. The results of this paper can provide a new solution for pediatric echocardiographic bilateral ventricular segmentation and the subsequent auxiliary diagnosis of congenital heart disease.
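For readers unfamiliar with channel attention, the sketch below shows a squeeze-and-excitation style channel attention block, one plausible form of the channel attention module mentioned above; the actual design in the paper may differ.

    # Squeeze-and-excitation style channel attention (illustrative only).
    import torch
    import torch.nn as nn

    class ChannelAttention(nn.Module):
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                      # x: (B, C, H, W)
            w = self.fc(x.mean(dim=(2, 3)))        # global average pool -> (B, C)
            return x * w[:, :, None, None]         # re-weight feature channels

    feat = torch.randn(2, 32, 64, 64)
    print(ChannelAttention(32)(feat).shape)        # torch.Size([2, 32, 64, 64])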


Subject(s)
Echocardiography; Heart Ventricles; Adult; Humans; Child; Heart Ventricles/diagnostic imaging; Image Processing, Computer-Assisted
3.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 35(4): 621-630, 2018 08 25.
Article in Chinese | MEDLINE | ID: mdl-30124027

ABSTRACT

Rapid and accurate recognition of human action and road condition is a foundation and precondition for implementing self-control of intelligent prostheses. In this paper, a Gaussian mixture model and a hidden Markov model are used to recognize the road condition and human motion modes based on an inertial sensor mounted on the artificial lower limb. Firstly, the inertial sensor is used to collect the acceleration, angle, and angular velocity signals along the x, y, and z axes of the lower limb. We then segment the signals with a sliding time window, eliminate noise by wavelet packet transform, and use the fast Fourier transform to extract motion features. Principal component analysis (PCA) is then applied to remove redundant information from the features. Finally, the Gaussian mixture model and hidden Markov model are used to identify the human motion modes and road condition. The experimental results show that the recognition rates for routine movements (walking, running, riding, walking uphill, walking downhill, going up stairs, and going down stairs) are 96.25%, 92.5%, 96.25%, 91.25%, 93.75%, 88.75%, and 90%, respectively. Compared with the support vector machine (SVM) method, the proposed method achieves a markedly higher recognition rate and can provide a new approach to the monitoring and control of intelligent prostheses in the future.
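A simplified sketch of the windowing, FFT feature extraction, PCA, and per-class Gaussian mixture classification pipeline is given below; wavelet-packet denoising and the hidden Markov model stage are omitted, and the data shapes and hyperparameters are assumptions rather than the paper's settings.

    # Illustrative pipeline on synthetic IMU windows: FFT features -> PCA -> per-class GMM.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    def fft_features(windows):
        # windows: (n_windows, window_len, n_channels) IMU signal segments
        spec = np.abs(np.fft.rfft(windows, axis=1))          # magnitude spectrum
        return spec.reshape(len(windows), -1)                # flatten to feature vectors

    rng = np.random.default_rng(0)
    X_train = rng.standard_normal((200, 128, 9))             # 9 IMU channels (assumed)
    y_train = rng.integers(0, 7, size=200)                   # 7 motion modes

    pca = PCA(n_components=20).fit(fft_features(X_train))
    Z = pca.transform(fft_features(X_train))

    # one Gaussian mixture per motion mode; classify by maximum log-likelihood
    gmms = {c: GaussianMixture(n_components=2, covariance_type='diag',
                               random_state=0).fit(Z[y_train == c])
            for c in np.unique(y_train)}

    def predict(windows):
        z = pca.transform(fft_features(windows))
        scores = np.stack([gmms[c].score_samples(z) for c in sorted(gmms)], axis=1)
        return np.array(sorted(gmms))[scores.argmax(axis=1)]

    print(predict(X_train[:5]))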

4.
Comput Methods Programs Biomed ; 254: 108278, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38878360

ABSTRACT

BACKGROUND AND OBJECTIVE: Training convolutional neural networks on large amounts of labeled data has driven great progress in the field of image segmentation. However, in medical image segmentation tasks, annotating the data is expensive and time-consuming because pixel-level annotation requires experts in the relevant field. Currently, semi-supervised methods combining consistency regularization and pseudo labeling have shown good performance in image segmentation. However, during training the model generates a portion of low-confidence pseudo labels, and semi-supervised segmentation still suffers from distribution bias between labeled and unlabeled data. The objective of this study is to address these challenges of semi-supervised learning and improve the segmentation accuracy of semi-supervised models on medical images. METHODS: We propose an Uncertainty-based Region Clipping Algorithm for semi-supervised medical image segmentation, which consists of two main modules. The first module computes the uncertainty of the predictions of two diverse sub-networks using Monte Carlo Dropout, allowing the model to gradually learn from more reliable targets. To retain model diversity, we use different loss functions for the two branches and apply Non-Maximum Suppression in one of them. The second module generates new samples by masking the low-confidence pixels in the original image based on the uncertainty information. These new samples are fed into the model to encourage it to generate high-confidence pseudo labels and to enlarge the training data distribution. RESULTS: Comprehensive experiments on two benchmarks, ACDC and BraTS2019, show that our proposed model outperforms state-of-the-art methods in terms of Dice, HD95, and ASD. The results reach an average Dice score of 87.86% and an HD95 of 4.214 mm on the ACDC dataset. For brain tumor segmentation, the results reach an average Dice score of 84.79% and an HD score of 10.13 mm. CONCLUSIONS: Our proposed method improves the accuracy of semi-supervised medical image segmentation. Extensive experiments on two public medical image datasets covering 2D and 3D modalities demonstrate the superiority of our model. The code is available at: https://github.com/QuintinDong/URCA.
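The sketch below illustrates the two generic ingredients named above, Monte Carlo Dropout uncertainty estimation and uncertainty-based masking of low-confidence pixels; the segmentation network is a placeholder and the masking threshold is an assumption, not the paper's exact procedure.

    # MC-Dropout predictive uncertainty and masking of the least confident pixels.
    import torch
    import torch.nn as nn

    net = nn.Sequential(                     # stand-in for a segmentation sub-network
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Dropout2d(0.5),
        nn.Conv2d(16, 2, 3, padding=1),
    )

    def mc_dropout_probs(model, x, passes=8):
        model.train()                        # keep dropout active at inference time
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(passes)])
        return probs.mean(0)                 # (B, classes, H, W)

    x = torch.randn(2, 1, 64, 64)
    p = mc_dropout_probs(net, x)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1, keepdim=True)  # predictive entropy

    # mask out the least confident pixels to build new training samples
    threshold = entropy.flatten(1).quantile(0.8, dim=1).view(-1, 1, 1, 1)
    masked_x = x * (entropy < threshold).float()
    print(masked_x.shape)                    # torch.Size([2, 1, 64, 64])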

5.
Physiol Meas ; 45(3)2024 Mar 26.
Article in English | MEDLINE | ID: mdl-38422513

ABSTRACT

Objective. Extracting discriminative spatial information from multiple electrodes is a crucial and challenging problem for electroencephalogram (EEG)-based emotion recognition. Additionally, the domain shift caused by individual differences degrades the performance of cross-subject EEG classification. Approach. To deal with the above problems, we propose the cerebral asymmetry representation learning-based deep subdomain adaptation network (CARL-DSAN) to enhance cross-subject EEG-based emotion recognition. Specifically, the CARL module is inspired by the neuroscience finding that asymmetrical activations of the left and right brain hemispheres occur during cognitive and affective processes. In the CARL module, we introduce a novel two-step strategy for extracting discriminative features through intra-hemisphere spatial learning and asymmetry representation learning. Moreover, the transformer encoders within the CARL module can emphasize the contributive electrodes and electrode pairs. Subsequently, the DSAN module, known for its superior performance over global domain adaptation, is adopted to mitigate domain shift and further improve cross-subject performance by aligning relevant subdomains that share the same class samples. Main Results. To validate the effectiveness of CARL-DSAN, we conduct subject-independent experiments on the DEAP database, achieving accuracies of 68.67% and 67.11% for arousal and valence classification, respectively, and corresponding accuracies of 67.70% and 67.18% on the MAHNOB-HCI database. Significance. The results demonstrate that CARL-DSAN can achieve outstanding cross-subject performance in both arousal and valence classification.
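As a rough illustration of asymmetry representation learning, the sketch below forms per-pair asymmetry features by differencing left- and right-hemisphere electrode features and passes them through a transformer encoder; the electrode pairing, dimensions, and the use of a simple difference are assumptions, not the paper's exact formulation.

    # Hemispheric asymmetry features over symmetric electrode pairs + transformer encoder.
    import torch
    import torch.nn as nn

    n_pairs, feat_dim, d_model = 14, 5, 32          # e.g. 14 symmetric pairs, 5 band powers

    class AsymmetryEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.proj = nn.Linear(feat_dim, d_model)
            layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=2)
            self.head = nn.Linear(d_model, 2)        # binary arousal or valence

        def forward(self, left, right):              # (B, n_pairs, feat_dim) each
            asym = self.proj(left - right)           # asymmetry representation per pair
            h = self.encoder(asym)                   # attend over electrode pairs
            return self.head(h.mean(dim=1))

    left = torch.randn(8, n_pairs, feat_dim)
    right = torch.randn(8, n_pairs, feat_dim)
    print(AsymmetryEncoder()(left, right).shape)     # torch.Size([8, 2])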


Subject(s)
Arousal; Electroencephalography; Databases, Factual; Electric Power Supplies; Emotions
6.
Physiol Meas ; 45(7)2024 Jul 11.
Article in English | MEDLINE | ID: mdl-38917842

ABSTRACT

Objective. Emotion recognition based on physiological signals is a prominent research domain in the field of human-computer interaction. Previous studies predominantly focused on unimodal data, giving limited attention to the interplay among multiple modalities. Within the scope of multimodal emotion recognition, integrating information from diverse modalities and leveraging their complementary information are the two essential issues for obtaining robust representations. Approach. We therefore propose an intermediate fusion strategy that combines low-rank tensor fusion with cross-modal attention to enhance the fusion of electroencephalogram, electrooculogram, electromyography, and galvanic skin response signals. Firstly, handcrafted features from the distinct modalities are individually fed to corresponding feature extractors to obtain latent features. Subsequently, low-rank tensor fusion is applied to integrate the information into a modality interaction representation. Finally, a cross-modal attention module explores the potential relationships between the distinct latent features and the modality interaction representation and recalibrates the weights of the different modalities; the resulting representation is adopted for emotion recognition. Main Results. To validate the effectiveness of the proposed method, we conduct subject-independent experiments on the DEAP dataset. The proposed method achieves accuracies of 73.82% and 74.55% for valence and arousal classification, respectively. Significance. The results of extensive experiments verify the outstanding performance of the proposed method.
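The sketch below illustrates low-rank tensor fusion across four physiological modalities in the general spirit described above; the cross-modal attention recalibration is omitted and all dimensions are assumptions.

    # Low-rank fusion: per-modality rank-decomposed factors, element-wise interaction, weighted sum.
    import torch
    import torch.nn as nn

    class LowRankFusion(nn.Module):
        def __init__(self, in_dims, out_dim=64, rank=4):
            super().__init__()
            # one rank-decomposed factor per modality (+1 input dim for the constant term)
            self.factors = nn.ParameterList(
                [nn.Parameter(torch.randn(rank, d + 1, out_dim) * 0.1) for d in in_dims])
            self.fusion_weights = nn.Parameter(torch.randn(rank) * 0.1)

        def forward(self, feats):                       # feats: list of (B, d_m) tensors
            fused = None
            for x, w in zip(feats, self.factors):
                x1 = torch.cat([x, torch.ones(x.size(0), 1)], dim=1)   # append constant 1
                proj = torch.einsum('bd,rdo->bro', x1, w)              # (B, rank, out_dim)
                fused = proj if fused is None else fused * proj        # element-wise interactions
            return torch.einsum('r,bro->bo', self.fusion_weights, fused)

    eeg, eog, emg, gsr = (torch.randn(8, d) for d in (160, 36, 32, 16))
    print(LowRankFusion([160, 36, 32, 16])([eeg, eog, emg, gsr]).shape)  # torch.Size([8, 64])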


Subject(s)
Electroencephalography; Electromyography; Emotions; Signal Processing, Computer-Assisted; Humans; Emotions/physiology; Galvanic Skin Response/physiology; Attention/physiology; Electrooculography
7.
Physiol Meas ; 44(9)2023 09 21.
Article in English | MEDLINE | ID: mdl-37619586

ABSTRACT

Objective. To enhance the accuracy of heart sound classification, this study aims to overcome the limitations of common models that rely on handcrafted feature extraction. These traditional methods may distort or discard crucial pathological information within heart sounds because of their need for tedious parameter settings. Approach. We propose a learnable front-end based Efficient Channel Attention Network (ECA-Net) for heart sound classification. This novel approach optimizes the waveform-to-spectrogram transformation, enabling adaptive feature extraction from heart sound signals without domain knowledge. The features are subsequently fed into an ECA-Net based convolutional recurrent neural network, which emphasizes informative features and suppresses irrelevant information. To address data imbalance, Focal loss is employed in our model. Main Results. Using the well-known public PhysioNet challenge 2016 dataset, our method achieved a classification accuracy of 97.77%, outperforming the majority of previous studies and closely rivaling the best model with a difference of just 0.57%. Significance. The learnable front-end facilitates end-to-end training by replacing the conventional heart sound feature extraction module. This provides a novel and efficient approach for heart sound classification research and applications, enhancing the practical utility of end-to-end models in this field.
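For reference, the sketch below implements a standard Efficient Channel Attention block of the kind used in ECA-Net style models; the kernel size and feature shapes are assumptions, and the learnable waveform-to-spectrogram front-end is not shown.

    # ECA block: global average pooling + 1-D convolution across the channel descriptor.
    import torch
    import torch.nn as nn

    class ECABlock(nn.Module):
        def __init__(self, k_size=3):
            super().__init__()
            self.conv = nn.Conv1d(1, 1, kernel_size=k_size, padding=k_size // 2, bias=False)

        def forward(self, x):                            # x: (B, C, H, W) spectrogram features
            y = x.mean(dim=(2, 3))                       # global average pooling -> (B, C)
            y = self.conv(y.unsqueeze(1)).squeeze(1)     # 1-D conv across channels
            return x * torch.sigmoid(y)[:, :, None, None]

    feat = torch.randn(4, 64, 32, 100)
    print(ECABlock()(feat).shape)                        # torch.Size([4, 64, 32, 100])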


Subject(s)
Heart Sounds; Neural Networks, Computer; Sound
8.
Comput Biol Med ; 109: 159-170, 2019 06.
Article in English | MEDLINE | ID: mdl-31059900

ABSTRACT

To estimate the reliability and cognitive states of operator performance in a human-machine collaborative environment, we propose a novel human mental workload (MW) recognizer based on deep learning principles and utilizing features of the electroencephalogram (EEG). To capture personalized properties in high-dimensional EEG indicators, we introduce a feature mapping layer into a stacked denoising autoencoder (SDAE) that is capable of preserving the local information in EEG dynamics. The ensemble classifier is then built as a subject-specific integrated deep learning committee, which adapts to the cognitive properties of a specific human operator and alleviates inter-subject feature variation. We validate our algorithms and the ensemble SDAE classifier with local information preservation (denoted EL-SDAE) on an EEG database collected during the execution of complex human-machine tasks. The classification performance indicates that the EL-SDAE outperforms several classical MW estimators once its optimal network architecture has been identified.
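The sketch below shows one denoising autoencoder layer of the kind stacked in an SDAE, trained to reconstruct clean EEG features from noise-corrupted input; the feature mapping layer and the ensemble committee are not shown, and all dimensions and the noise level are assumptions.

    # Single denoising autoencoder layer with Gaussian input corruption.
    import torch
    import torch.nn as nn

    class DenoisingAE(nn.Module):
        def __init__(self, in_dim=128, hidden=64, noise_std=0.1):
            super().__init__()
            self.noise_std = noise_std
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.Sigmoid())
            self.decoder = nn.Linear(hidden, in_dim)

        def forward(self, x):
            corrupted = x + self.noise_std * torch.randn_like(x)   # inject noise
            return self.decoder(self.encoder(corrupted))

    model, loss_fn = DenoisingAE(), nn.MSELoss()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    features = torch.randn(32, 128)                                # stand-in EEG feature batch
    for _ in range(5):                                             # a few training steps
        opt.zero_grad()
        loss = loss_fn(model(features), features)                  # reconstruct the clean input
        loss.backward()
        opt.step()
    print(float(loss))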


Subject(s)
Cognition/physiology; Databases, Factual; Deep Learning; Electroencephalography; Models, Neurological; Humans
9.
Comput Methods Programs Biomed ; 140: 93-110, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28254094

ABSTRACT

BACKGROUND AND OBJECTIVE: Using deep-learning methodologies to analyze multimodal physiological signals is becoming increasingly attractive for recognizing human emotions. However, conventional deep emotion classifiers suffer from two drawbacks: the expertise required to determine model structure and the oversimplified combination of multimodal feature abstractions. METHODS: In this study, a multiple-fusion-layer based ensemble classifier of stacked autoencoders (MESAE) is proposed for recognizing emotions, in which the deep structure is identified through a physiological-data-driven approach. Each SAE consists of three hidden layers that filter unwanted noise in the physiological features and derive stable feature representations. An additional deep model is used to form the SAE ensemble. The physiological features are split into several subsets according to different feature extraction approaches, with each subset separately encoded by an SAE. The derived SAE abstractions are combined according to the physiological modality to create six sets of encodings, which are then fed to a three-layer, adjacent-graph-based network for feature fusion. The fused features are used to recognize binary arousal or valence states. RESULTS: The DEAP multimodal database was employed to validate the performance of the MESAE. Compared with the best existing emotion classifier, the mean classification rate and F-score improve by 5.26%. CONCLUSIONS: The superiority of the MESAE over state-of-the-art shallow and deep emotion classifiers has been demonstrated under different numbers of available physiological instances.
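A minimal sketch of the per-subset encoding and fusion stage is shown below: each feature subset gets its own encoder, and the concatenated encodings feed a small fusion classifier. The subset sizes and layer widths are assumptions, and the adjacent-graph-based fusion is replaced here by plain fully connected layers.

    # Per-subset encoders followed by a simple fusion classifier (illustrative only).
    import torch
    import torch.nn as nn

    subset_dims = [40, 25, 30]                       # assumed feature subsets per extraction approach

    class EnsembleEncoderClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoders = nn.ModuleList(
                [nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in subset_dims])
            self.fusion = nn.Sequential(
                nn.Linear(16 * len(subset_dims), 32), nn.ReLU(),
                nn.Linear(32, 2))                    # binary arousal or valence

        def forward(self, subsets):                  # subsets: list of (B, d_i) tensors
            codes = [enc(x) for enc, x in zip(self.encoders, subsets)]
            return self.fusion(torch.cat(codes, dim=1))

    subsets = [torch.randn(8, d) for d in subset_dims]
    print(EnsembleEncoderClassifier()(subsets).shape)   # torch.Size([8, 2])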


Subject(s)
Emotions; Learning; Models, Psychological; Humans
10.
Front Neurorobot ; 11: 19, 2017.
Article in English | MEDLINE | ID: mdl-28443015

ABSTRACT

Using machine-learning methodologies to analyze EEG signals is becoming increasingly attractive for recognizing human emotions because of the objectivity of physiological data and the capability of learning principles to model emotion classifiers from heterogeneous features. However, conventional subject-specific classifiers may impose an additional burden on each subject, who must prepare multiple sessions of EEG data as training sets. To this end, we developed a new EEG feature selection approach, transfer recursive feature elimination (T-RFE), to determine a set of the most robust EEG indicators with a stable geometrical distribution across a group of training subjects and a specific testing subject. A validation set is introduced to independently determine the optimal hyperparameter and the feature ranking of the T-RFE model with the aim of controlling overfitting. The effectiveness of the T-RFE algorithm for this cross-subject emotion classification paradigm has been validated on the DEAP database. With a linear least-squares support vector machine classifier implemented, the performance of the T-RFE is compared against several conventional feature selection schemes, and a statistically significant improvement is found. The classification rate and F-score reach 0.7867, 0.7526, 0.7875, and 0.8077 for the arousal and valence dimensions, respectively, outperforming several recently reported works on the same database. Finally, the T-RFE based classifier is compared against two subject-generic classifiers from the literature. An investigation of the computational time of all classifiers indicates that the accuracy improvement of the T-RFE comes at the cost of a longer training time.
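As a point of reference, the sketch below runs plain recursive feature elimination with a linear SVM ranker and uses a held-out validation set to pick the retained feature count; it illustrates ordinary RFE on synthetic data, not the transfer-specific criterion of T-RFE.

    # RFE with a linear SVM ranker; the retained feature count is chosen on a validation split.
    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    X_train, y_train = rng.standard_normal((200, 60)), rng.integers(0, 2, 200)
    X_val, y_val = rng.standard_normal((40, 60)), rng.integers(0, 2, 40)

    best = None
    for n_features in (10, 20, 30):                          # candidate feature counts
        selector = RFE(LinearSVC(dual=False, max_iter=5000), n_features_to_select=n_features)
        selector.fit(X_train, y_train)
        clf = LinearSVC(dual=False, max_iter=5000).fit(selector.transform(X_train), y_train)
        acc = clf.score(selector.transform(X_val), y_val)    # validate on held-out data
        if best is None or acc > best[0]:
            best = (acc, n_features, selector)
    print('validation accuracy %.3f with %d features' % (best[0], best[1]))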
