A heart sound segmentation method based on multi-feature fusion network / Chinese Journal of Clinical Thoracic and Cardiovascular Surgery
Article in Zh | WPRIM | ID: wpr-1031682
Responsible library: WPRO
ABSTRACT
Objective To propose a heart sound segmentation method based on a multi-feature fusion network. Methods Data were obtained from the CinC/PhysioNet 2016 Challenge dataset (a total of 3,153 recordings from 764 patients, about 91.93% of whom were male, with an average age of 30.36 years). First, features were extracted in the time domain and the time-frequency domain, and redundant features were removed by dimensionality reduction. Then, the best-performing features were selected separately from the two feature spaces through feature selection. Next, multi-feature fusion was performed through multi-scale dilated convolution, cooperative fusion, and a channel attention mechanism. Finally, the fused features were fed into a bidirectional gated recurrent unit (BiGRU) network to produce the heart sound segmentation results. Results The proposed method achieved a precision, recall, and F1 score of 96.70%, 96.99%, and 96.84%, respectively. Conclusion The multi-feature fusion network proposed in this study achieves better heart sound segmentation performance and can provide high-accuracy segmentation support for the design of automatic heart sound-based analysis of heart diseases.
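To give a concrete picture of the fusion stage described in the abstract, the sketch below is a minimal PyTorch illustration, not the authors' implementation. It assumes two already-extracted feature streams (time-domain and time-frequency-domain features per frame), fuses their concatenation with multi-scale dilated convolutions and a squeeze-and-excitation-style channel attention block, and feeds the result to a BiGRU that emits per-frame logits over four assumed cardiac states (S1, systole, S2, diastole). All layer widths, dilation rates, and the names FusionSegmenter and ChannelAttention are hypothetical; the paper's exact cooperative-fusion design is not reproduced here.

import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation-style channel attention (assumed variant).
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, channels, time)
        w = self.fc(x.mean(dim=2))   # global average pool over time
        return x * w.unsqueeze(2)    # reweight channels

class FusionSegmenter(nn.Module):
    # Hypothetical multi-feature fusion segmenter: concatenated feature
    # streams -> multi-scale dilated convs -> channel attention -> BiGRU.
    def __init__(self, time_feats, tf_feats, hidden=64, states=4):
        super().__init__()
        in_ch = time_feats + tf_feats
        # Parallel dilated convolutions capture context at several scales.
        self.branches = nn.ModuleList([
            nn.Conv1d(in_ch, hidden, kernel_size=3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        self.attn = ChannelAttention(3 * hidden)
        self.gru = nn.GRU(3 * hidden, hidden, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * hidden, states)  # S1/systole/S2/diastole

    def forward(self, time_x, tf_x):  # each: (batch, time, feats)
        x = torch.cat([time_x, tf_x], dim=2).transpose(1, 2)   # (B, C, T)
        x = torch.cat([torch.relu(b(x)) for b in self.branches], dim=1)
        x = self.attn(x).transpose(1, 2)                       # (B, T, C)
        out, _ = self.gru(x)
        return self.head(out)          # per-frame cardiac-state logits

Under these assumptions, FusionSegmenter(time_feats=8, tf_feats=16)(torch.randn(2, 400, 8), torch.randn(2, 400, 16)) returns logits of shape (2, 400, 4), which a decoding step would then turn into the final segmentation.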
Full text: 1 Database: WPRIM Language: Zh Journal: Chinese Journal of Clinical Thoracic and Cardiovascular Surgery Year: 2024 Document type: Article