Results 1 - 20 of 1,862
1.
Sci Rep ; 14(1): 23196, 2024 10 05.
Article in English | MEDLINE | ID: mdl-39368993

ABSTRACT

Heart sound auscultation plays a crucial role in the early diagnosis of cardiovascular diseases. In recent years, great progress has been made in the automatic classification of heart sounds, but most methods rely on segmentation-based features and traditional classifiers and do not fully exploit existing deep networks. This paper proposes a cardiac audio classification method based on an image expression of multidimensional features (CACIEMDF). First, a 102-dimensional feature vector is designed by combining characteristics of the heart sound data in the time, frequency and statistical domains. Based on this feature vector, a two-dimensional feature projection space is constructed via PCA dimensionality reduction and the convex hull algorithm, and 102 pairs of coordinates representing the feature components in the two-dimensional space are computed, so that each one-dimensional component of the feature vector corresponds to a pair of 2D coordinates. Finally, each feature component's value and its inter-class divergence are used to fill the three channels of a color image, and a Gaussian model shades the image to enrich its content. The color image is then fed to a deep network such as ResNet50 for classification. Three public heart sound datasets are fused, and experiments are conducted using the above method. The results show that for the binary/five-class heart sound classification tasks, the proposed method achieves accuracies of 95.68% and 94.53%, respectively, when combined with a current deep network.
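As an illustration of the image-expression idea only, the sketch below (Python, with placeholder random features and a simplified single-channel fill; the paper's exact 102 features, class-divergence channel and Gaussian shading are not reproduced) projects the feature dimensions to 2D with PCA, takes their convex hull, and paints component values into a 3-channel image that a network such as ResNet50 could consume.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
features = rng.normal(size=(200, 102))            # placeholder 102-D feature vectors

# Project the 102 feature *dimensions* (not the samples) onto a 2-D plane.
coords_2d = PCA(n_components=2).fit_transform(features.T)   # shape (102, 2)
hull = ConvexHull(coords_2d)                                 # convex hull of the layout

# Map the 2-D coordinates onto a 224x224 pixel grid.
size = 224
norm = (coords_2d - coords_2d.min(axis=0)) / (np.ptp(coords_2d, axis=0) + 1e-9)
pixels = (norm * (size - 1)).astype(int)

def vector_to_image(x):
    """Fill channel 0 with the raw component values; the class-divergence
    channel and Gaussian shading described in the abstract are omitted here."""
    img = np.zeros((size, size, 3), dtype=np.float32)
    img[pixels[:, 1], pixels[:, 0], 0] = x
    return img

image = vector_to_image(features[0])              # ready for a CNN such as ResNet50
print(image.shape, len(hull.vertices))
```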


Subject(s)
Algorithms , Heart Sounds , Humans , Heart Sounds/physiology , Image Processing, Computer-Assisted/methods , Heart Auscultation/methods
2.
J Feline Med Surg ; 26(10): 1098612X241275296, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39387720

ABSTRACT

OBJECTIVES: Stress associated with manipulation during electrocardiography (ECG) recording in cats potentially limits the assessment of autonomic function through heart rate variability (HRV) in the feline population. This study proposed an alternative, cat-friendly, stethoscopic approach to evaluate HRV with an easily acquired vasovagal tonus index (VVTI). METHODS: The aim of this prospective study was to evaluate whether VVTI derived from heart sound signals could distinguish between relaxed and stimulated states. A total of 29 cats with 56 recordings of heart sound and ECG on 31 occasions were included. In 25 cats in their home environment, a stethoscope connected to a digital recording device was used to record 2 min of heart sounds twice, with the cats in a relaxed state and immediately after stimulation. The VVTI was calculated from 20, 60 and 120 consecutive beat-to-beat intervals on the heart sound spectrogram (stethoscopic-VVTI 20, 60 and 120), using the natural logarithm of the variance of the intervals based on previous literature. A 2 min ECG recording was obtained at home with the intention of avoiding strict restraint. To demonstrate the feasibility of the stethoscopic approach in a hospital setting, six cats (two of which were also recorded at home) underwent heart sound and ECG recordings during planned veterinary visits. RESULTS: Stethoscopic-VVTI 20 (5.43 to 4.79, P = 0.001), 60 (6.20 to 5.18, P < 0.001) and 120 (6.24 to 5.60, P = 0.02) all significantly decreased after stimulation, indicating a reduced vasovagal tone as expected. Calculations of stethoscopic-VVTI from different sections of the recording yielded statistically similar results. Stethoscopic-VVTI showed a negative correlation with the corresponding heart rate. Bland-Altman analysis revealed a mean bias for the differences between stethoscopic-VVTI and ECG-VVTI of 0.50 and 1.07 at home and in the hospital, respectively. CONCLUSIONS AND RELEVANCE: VVTI can be successfully detected through a stethoscopic approach, serving as a less stressful tool for HRV evaluation in cats during routine auscultation.
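A minimal sketch of the VVTI computation as described in the abstract (the natural logarithm of the variance of N consecutive beat-to-beat intervals); the millisecond units, synthetic intervals and simple windowing are assumptions for illustration only.

```python
import numpy as np

def vvti(beat_intervals_ms, n_beats=20):
    """Vasovagal tonus index: ln of the variance of n_beats consecutive
    beat-to-beat intervals (assumed here to be in milliseconds)."""
    window = np.asarray(beat_intervals_ms[:n_beats], dtype=float)
    return float(np.log(np.var(window)))

# Synthetic beat-to-beat intervals standing in for those read off a
# heart-sound spectrogram.
rng = np.random.default_rng(1)
intervals = 500 + 30 * rng.standard_normal(120)
print(vvti(intervals, 20), vvti(intervals, 60), vvti(intervals, 120))
```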


Subject(s)
Electrocardiography , Heart Rate , Stethoscopes , Animals , Cats/physiology , Heart Rate/physiology , Stethoscopes/veterinary , Male , Female , Prospective Studies , Electrocardiography/veterinary , Stress, Physiological , Heart Sounds/physiology
3.
Eur J Pediatr ; 183(11): 4951-4958, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39304593

ABSTRACT

Our aim was to investigate the ability of an artificial intelligence (AI)-based algorithm to differentiate innocent murmurs from pathologic ones. An AI-based algorithm was developed using heart sound recordings collected from 1413 patients at the five university hospitals in Finland. The corresponding heart condition was verified using echocardiography. In the second phase of the study, patients referred to Helsinki New Children's Hospital due to a heart murmur were prospectively assessed with the algorithm, and then the results were compared with echocardiography findings. Ninety-eight children were included in this prospective study. The algorithm classified 72 (73%) of the heart sounds as normal and 26 (27%) as abnormal. Echocardiography was normal in 63 (64%) children and abnormal in 35 (36%). The algorithm recognized abnormal heart sounds in 24 of 35 children with abnormal echocardiography and normal heart sounds with normal echocardiography in 61 of 63 children. When the murmur was audible, the sensitivity and specificity of the algorithm were 83% (24/29) (confidence interval (CI) 64-94%) and 97% (59/61) (CI 89-100%), respectively. CONCLUSION: The algorithm was able to distinguish murmurs associated with structural cardiac anomalies from innocent murmurs with good sensitivity and specificity. The algorithm was unable to identify heart defects that did not cause a murmur. Further research is needed on the use of the algorithm in screening for heart murmurs in primary health care. WHAT IS KNOWN: • Innocent murmurs are common in children, while the incidence of moderate or severe congenital heart defects is low. Auscultation plays a significant role in assessing the need for further examinations of the murmur. The ability to differentiate innocent murmurs from those related to congenital heart defects requires clinical experience on the part of general practitioners. No AI-based auscultation algorithms have been systematically implemented in primary health care. WHAT IS NEW: • We developed an AI-based algorithm using a large dataset of sound samples validated by echocardiography. The algorithm performed well in recognizing pathological and innocent murmurs in children from different age groups.


Subject(s)
Algorithms , Echocardiography , Heart Defects, Congenital , Heart Murmurs , Heart Sounds , Humans , Child, Preschool , Prospective Studies , Female , Male , Child , Heart Murmurs/diagnosis , Infant , Echocardiography/methods , Heart Defects, Congenital/diagnosis , Sensitivity and Specificity , Artificial Intelligence , Adolescent , Heart Auscultation/methods , Finland , Infant, Newborn , Mass Screening/methods
4.
Sensors (Basel) ; 24(16)2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39205027

ABSTRACT

Phonocardiography (PCG) is used as an adjunct to teach cardiac auscultation and is now a function of PCG-capable stethoscopes (PCS). To evaluate the efficacy of PCG and PCSs, the authors investigated the impact of providing PCG data and PCSs on how frequently murmurs, rubs, and gallops (MRGs) were correctly identified by third-year medical students. Following their internal medicine rotation, third-year medical students from the Georgetown University School of Medicine completed a standardized auscultation assessment. Sound files of 10 different MRGs with a corresponding clinical vignette and physical exam location were provided with and without PCG (with interchangeable question stems) as 10 paired questions (20 total questions). A subset of 32 students also received a PCS to use during their rotation. Discrimination/difficulty indices, comparative chi-squared, and McNemar test p-values were calculated. The addition of phonocardiograms to audio data was associated with more frequent identification of mitral stenosis, S4, and cardiac friction rub, but less frequent identification of ventricular septal defect, S3, and tricuspid regurgitation. Students with a PCS had a higher frequency of identifying a cardiac friction rub. PCG may improve the identification of low-frequency, usually diastolic, heart sounds but appears to worsen or have little effect on the identification of higher-frequency, often systolic, heart sounds. As digital and phonocardiography-capable stethoscopes become more prevalent, insights regarding their strengths and weaknesses may be incorporated into medical school curricula, bedside rounds (to enhance teaching and diagnosis), and telemedicine/tele-auscultation efforts.


Subject(s)
Stethoscopes , Students, Medical , Phonocardiography/methods , Humans , Heart Auscultation/methods , Heart Murmurs/diagnosis , Heart Murmurs/physiopathology , Heart Sounds/physiology
5.
Stud Health Technol Inform ; 316: 889-893, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176936

ABSTRACT

The use of heart sounds to assess the hemodynamic condition of the heart in telemonitoring applications is currently the object of extensive research. Many different approaches have been tried for the analysis of the first (S1) and second (S2) heart sounds, but their morphological interpretation remains to be explored: the sound morphology is not unique, and this impacts the separability of the heart sound components with methods based on envelopes or model optimization. In this study, we propose a method to stratify S1 and S2 according to their morphology, in order to explore their diversity and increase their morphological interpretability. The proposed method is based on unsupervised learning, obtained using a cascade of four Self-Organizing Maps (SOMs) of decreasing dimensions. When tested on a publicly available heart sound dataset, the proposed clustering approach proved robust and consistent, with over 80% of the heartbeats of the same patient being clustered together. The identified heart sound templates highlight differences in the time and energy domains which may open new directions of analysis in the future.
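A rough sketch of the cascaded-SOM idea using the minisom package; the map sizes, the choice to feed each stage's codebook vectors into the next stage, and the placeholder features are assumptions, since the abstract does not specify the cascade's exact configuration.

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
s1_segments = rng.normal(size=(500, 64))      # placeholder S1 feature vectors

# Cascade of four SOMs of decreasing dimensions; here each stage is trained
# on the previous stage's codebook vectors (an assumed interpretation).
sizes = [(10, 10), (6, 6), (4, 4), (2, 2)]
data = s1_segments
for x, y in sizes:
    som = MiniSom(x, y, data.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
    som.train_random(data, 2000)
    data = som.get_weights().reshape(x * y, -1)   # prototypes feed the next stage

# Assign every heartbeat to a morphology cluster of the final 2x2 map.
labels = [som.winner(v) for v in s1_segments]
print(len(set(labels)), "clusters used")
```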


Subject(s)
Heart Sounds , Unsupervised Machine Learning , Heart Sounds/physiology , Humans , Phonocardiography , Signal Processing, Computer-Assisted
6.
IEEE J Biomed Health Inform ; 28(9): 5055-5066, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39012744

ABSTRACT

Ubiquitous sensing has been widely applied in smart healthcare, providing an opportunity for intelligent heart sound auscultation. However, smart devices contain sensitive information, raising user privacy concerns. To this end, federated learning (FL) has been adopted as an effective solution, enabling decentralised learning without data sharing, thus preserving data privacy in the Internet of Health Things (IoHT). Nevertheless, traditional FL requires the same architectural models to be trained across local clients and global servers, leading to a lack of model heterogeneity and client personalisation. For medical institutions with private data clients, this study proposes Fed-MStacking, a heterogeneous FL framework that incorporates a stacking ensemble learning strategy to support clients in building their own models. The secondary objective of this study is to address scenarios involving local clients with data characterised by inconsistent labelling. Specifically, the local client contains only one case type, and the data cannot be shared within or outside the institution. To train a global multi-class classifier, we aggregate missing class information from all clients at each institution and build meta-data, which then participates in FL training via a meta-learner. We apply the proposed framework to a multi-institutional heart sound database. The experiments utilise random forests (RFs), feedforward neural networks (FNNs), and convolutional neural networks (CNNs) as base classifiers. The results show that the heterogeneous stacking of local models performs better compared to homogeneous stacking.
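A centralised, simplified stand-in for the stacking component of Fed-MStacking (no federation, meta-data construction or CNN base model shown): heterogeneous base classifiers feed out-of-fold class probabilities to a logistic-regression meta-learner, here via scikit-learn's StackingClassifier; the features and labels are placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 40))          # placeholder heart-sound features
y = rng.integers(0, 2, size=300)        # normal / abnormal labels

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("fnn", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),  # softmax-style meta-learner
    stack_method="predict_proba",
    cv=5,
)
stack.fit(X, y)
print(stack.score(X, y))
```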


Subject(s)
Heart Sounds , Machine Learning , Signal Processing, Computer-Assisted , Humans , Heart Sounds/physiology , Algorithms , Heart Auscultation/methods , Adult
7.
PLoS One ; 19(7): e0305404, 2024.
Article in English | MEDLINE | ID: mdl-39008512

ABSTRACT

This work investigates whether including the low-frequency components of heart sounds can increase the accuracy, sensitivity and specificity of diagnosis of cardiovascular disorders. We standardized the measurement method to minimize changes in signal characteristics. We used the Continuous Wavelet Transform to analyze changing frequency characteristics over time and to allocate frequencies appropriately between the low-frequency and audible frequency bands. For image classification we used a deep-learning (DL) approach based on a Convolutional Neural Network (CNN), as well as a CNN equipped with long short-term memory (LSTM) layers to enable sequential feature extraction. The accuracy of the learning model was validated using the PhysioNet 2016 CinC dataset; we then used our collected dataset to show that incorporating low-frequency components in the dataset increased the DL model's accuracy by 2% and sensitivity by 4%. Furthermore, the LSTM layer was 0.8% more accurate than the dense layer.
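A hedged sketch of the wavelet step described: a Continuous Wavelet Transform scalogram of a heart-sound segment split into low-frequency (<20 Hz) and audible bands. The 'morl' wavelet, scale range, band edges and synthetic signal are assumptions, not the paper's settings.

```python
import numpy as np
import pywt

fs = 2000                                        # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
signal = np.sin(2 * np.pi * 15 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)  # toy signal

scales = np.arange(1, 256)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

low_band = coeffs[freqs < 20]                    # sub-audible components
audible = coeffs[(freqs >= 20) & (freqs <= 600)] # audible band
print(coeffs.shape, low_band.shape, audible.shape)
```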


Subject(s)
Heart Sounds , Neural Networks, Computer , Phonocardiography/methods , Humans , Heart Sounds/physiology , Deep Learning , Male , Wavelet Analysis , Female , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/physiopathology , Adult , Signal Processing, Computer-Assisted
8.
J Acoust Soc Am ; 155(6): 3822-3832, 2024 06 01.
Article in English | MEDLINE | ID: mdl-38874464

ABSTRACT

This study proposes the use of vocal resonators to enhance cardiac auscultation signals and evaluates their performance for voice-noise suppression. Data were collected using two electronic stethoscopes while each study subject was talking: one collected the auscultation signal from the chest while the other collected voice signals from one of three vocal resonators (cheek, back of the neck, and shoulder). The spectral subtraction method was applied to the signals. Both objective and subjective metrics were used to evaluate the quality of the enhanced signals and to investigate the most effective vocal resonator for noise suppression. Our preliminary findings showed a significant improvement after enhancement and demonstrated the efficacy of vocal resonators. A listening survey conducted with thirteen physicians to evaluate the quality of the enhanced signals showed that they received significantly better sound-quality scores than the original signals. The shoulder resonator group demonstrated significantly better sound quality than the cheek group when reducing voice sound in cardiac auscultation signals. The suggested method has the potential to be used in the development of an electronic stethoscope with a robust noise removal function. Significant clinical benefits are expected from the expedited preliminary diagnostic procedure.
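A minimal spectral-subtraction sketch in the spirit of the described enhancement (assuming the voice-resonator channel serves as the noise reference whose average magnitude spectrum is subtracted from the chest signal; the STFT parameters and placeholder signals are illustrative only).

```python
import numpy as np
from scipy.signal import stft, istft

fs = 4000
rng = np.random.default_rng(0)
chest = rng.normal(size=fs * 5)      # placeholder: heart sounds contaminated by speech
voice_ref = rng.normal(size=fs * 5)  # placeholder: voice-resonator (noise reference) channel

f, t, S_chest = stft(chest, fs=fs, nperseg=512)
_, _, S_voice = stft(voice_ref, fs=fs, nperseg=512)

noise_mag = np.mean(np.abs(S_voice), axis=1, keepdims=True)   # average noise spectrum
clean_mag = np.maximum(np.abs(S_chest) - noise_mag, 0.0)      # subtract, floor at zero
S_clean = clean_mag * np.exp(1j * np.angle(S_chest))          # keep the chest phase

_, enhanced = istft(S_clean, fs=fs, nperseg=512)
print(enhanced.shape)
```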


Subject(s)
Heart Auscultation , Signal Processing, Computer-Assisted , Stethoscopes , Humans , Heart Auscultation/instrumentation , Heart Auscultation/methods , Heart Auscultation/standards , Male , Female , Adult , Heart Sounds/physiology , Sound Spectrography , Equipment Design , Voice/physiology , Middle Aged , Voice Quality , Vibration , Noise
9.
Artif Intell Med ; 153: 102867, 2024 07.
Article in English | MEDLINE | ID: mdl-38723434

ABSTRACT

OBJECTIVE: To develop a deep learning algorithm to perform multi-class classification of normal pediatric heart sounds, innocent murmurs, and pathologic murmurs. METHODS: We prospectively enrolled children under age 18 being evaluated by the Division of Pediatric Cardiology. Parents provided consent for a deidentified recording of their child's heart sounds with a digital stethoscope. Innocent murmurs were validated by a pediatric cardiologist and pathologic murmurs were validated by echocardiogram. To augment our collection of normal heart sounds, we utilized a public database of pediatric heart sound recordings (Oliveira, 2022). We propose two novel approaches for this audio classification task. We train a vision transformer on either Markov transition field or Gramian angular field image representations of the frequency spectrum. We benchmark our results against a ResNet-50 CNN trained on spectrogram images. RESULTS: Our final dataset consisted of 366 normal heart sounds, 175 innocent murmurs, and 216 pathologic murmurs. Innocent murmurs collected include Still's murmur, venous hum, and flow murmurs. Pathologic murmurs included ventricular septal defect, tetralogy of Fallot, aortic regurgitation, aortic stenosis, pulmonary stenosis, mitral regurgitation and stenosis, and tricuspid regurgitation. We find that the vision transformer consistently outperforms the ResNet-50 on all three image representations, and that the Gramian angular field is the superior image representation for pediatric heart sounds. We calculated a one-vs-rest multi-class ROC curve for each of the three classes. Our best model achieves an area under the curve (AUC) of 0.92 ± 0.05, 0.83 ± 0.04, and 0.88 ± 0.04 for identifying normal heart sounds, innocent murmurs, and pathologic murmurs, respectively. CONCLUSION: We present two novel methods for pediatric heart sound classification, which outperform the current standard of using a convolutional neural network trained on spectrogram images. To our knowledge, we are the first to demonstrate multi-class classification of pediatric murmurs. Multi-class output affords a more explainable and interpretable model, which can facilitate further model improvement in the downstream model development cycle and enhance clinician trust and therefore adoption.
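A small sketch of the two image encodings mentioned, computed with the pyts package from the magnitude spectrum of a (placeholder) recording; the image size, number of bins and FFT length are assumptions rather than the paper's settings.

```python
import numpy as np
from pyts.image import GramianAngularField, MarkovTransitionField

rng = np.random.default_rng(0)
heart_sound = rng.normal(size=4096)                  # placeholder recording
spectrum = np.abs(np.fft.rfft(heart_sound))[:1024]   # frequency-spectrum input
X = spectrum[np.newaxis, :]                          # pyts expects (n_samples, n_timestamps)

gaf = GramianAngularField(image_size=224, method="summation").fit_transform(X)
mtf = MarkovTransitionField(image_size=224, n_bins=8).fit_transform(X)
print(gaf.shape, mtf.shape)   # (1, 224, 224) each; images to feed a vision transformer
```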


Subject(s)
Deep Learning , Heart Murmurs , Humans , Heart Murmurs/diagnosis , Heart Murmurs/physiopathology , Heart Murmurs/classification , Child , Child, Preschool , Infant , Adolescent , Prospective Studies , Heart Sounds/physiology , Female , Male , Algorithms , Diagnosis, Differential , Heart Auscultation/methods
10.
IEEE Trans Biomed Eng ; 71(10): 2802-2813, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38700959

ABSTRACT

OBJECTIVE: Early diagnosis of cardiovascular diseases is a crucial task in medical practice. With the application of computer audition in the healthcare field, artificial intelligence (AI) has been applied to clinical non-invasive intelligent auscultation of heart sounds to provide rapid and effective pre-screening. However, AI models generally require large amounts of data, which may cause privacy issues. Unfortunately, it is difficult to collect large amounts of healthcare data from a single centre. METHODS: In this study, we propose federated learning (FL) optimisation strategies for practical application to multi-centre institutional heart sound databases. Horizontal FL is mainly employed to tackle the privacy problem by aligning the feature spaces of the FL participating institutions without information leakage. In addition, techniques based on deep learning have poor interpretability due to their "black-box" property, which limits the feasibility of AI on real medical data. To this end, vertical FL is utilised to address the issues of model interpretability and data scarcity. CONCLUSION: Experimental results demonstrate that the proposed FL framework can achieve good performance for heart sound abnormality detection while taking personal privacy protection into account. Moreover, using the federated feature space is beneficial for balancing the interpretability of the vertical FL and the privacy of the data. SIGNIFICANCE: This work realises the potential of FL from research to clinical practice, and is expected to have extensive application in federated smart medical systems.
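For orientation only, a minimal FedAvg-style sketch of the horizontal-FL idea of training without sharing raw data: per-institution model parameters are combined by data-size-weighted averaging. The paper's actual optimisation strategies and its vertical-FL component are not reproduced.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of per-client parameter lists, weighted by local data size."""
    total = sum(client_sizes)
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(len(client_weights[0]))
    ]

# Three institutions, each holding a tiny linear model (weight matrix + bias).
rng = np.random.default_rng(0)
clients = [[rng.normal(size=(40, 2)), rng.normal(size=2)] for _ in range(3)]
global_model = fed_avg(clients, client_sizes=[120, 300, 80])
print([p.shape for p in global_model])   # aggregated parameters, no raw data exchanged
```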


Subject(s)
Heart Sounds , Humans , Heart Sounds/physiology , Signal Processing, Computer-Assisted , Male , Databases, Factual , Deep Learning , Adult , Female , Algorithms , Middle Aged , Young Adult , Child
11.
J Cardiol ; 84(4): 266-273, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38701945

ABSTRACT

BACKGROUND: Multi-parametric assessment, including heart sounds in addition to conventional parameters, may enhance the efficacy of noninvasive telemonitoring for heart failure (HF). We sought to assess the feasibility of self-telemonitoring with multiple devices including a handheld heart sound recorder and its association with clinical events in patients with HF. METHODS: Ambulatory HF patients recorded their own heart sounds, mono-lead electrocardiograms, oxygen saturation, body weight, and vital signs using multiple devices every morning for six months. RESULTS: In the 77 patients enrolled (63 ± 13 years old, 84% male), daily measurements were feasible with a self-measurement rate of >70% of days in 75% of patients. Younger age and higher Minnesota Living with Heart Failure Questionnaire scores were independently associated with lower adherence (p = 0.002 and 0.027, respectively). A usability questionnaire showed that 87% of patients felt self-telemonitoring was helpful, and 96% could use the devices without routine cohabitant support. Six patients experienced ten HF events of re-hospitalization and/or unplanned hospital visits due to HF. In patients who experienced HF events, a significant increase in heart rate and diastolic blood pressure and a decrease in the time interval from Q wave onset to the second heart sound were observed 7 days before the events compared with those without HF events. CONCLUSIONS: Self-telemonitoring with multiple devices including a handheld heart sound recorder was feasible even in elderly patients with HF. This intervention may confer a sense of relief to patients and enable monitoring of physiological parameters that could be valuable in detecting the deterioration of HF.


Subject(s)
Feasibility Studies , Heart Failure , Heart Sounds , Telemedicine , Humans , Heart Failure/physiopathology , Heart Failure/therapy , Male , Female , Middle Aged , Pilot Projects , Aged , Telemedicine/instrumentation , Self Care , Heart Rate , Surveys and Questionnaires , Electrocardiography
12.
Adv Mater ; 36(29): e2401508, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38747492

ABSTRACT

The electronic stethoscope, which detects cardiac sounds containing essential clinical information, is a primary tool for the diagnosis of various cardiac disorders. However, the linear electromechanical constitutive relation makes conventional piezoelectric sensors rather ineffective at detecting the low-intensity, low-frequency heart acoustic signal without the assistance of complex filtering and amplification circuits. Herein, it is found that the triboelectric sensor offers clear advantages over the piezoelectric one for micro-quantity sensing, originating from its fast-saturating constitutive characteristic. As a result, the triboelectric sensor shows much higher sensitivity (1215 mV Pa-1) than the piezoelectric counterpart (21 mV Pa-1) in the sound pressure range of 50-80 dB under the same testing conditions. By designing a trumpet-shaped auscultatory cavity with a power-function cross-section to achieve acoustic energy converging and impedance matching, the triboelectric stethoscope delivers a 36 dB signal-to-noise ratio in human tests (2.3 times that of the piezoelectric one). Further combined with machine learning, five cardiac states can be diagnosed at 97% accuracy. In general, the triboelectric sensor is distinctly unique in its basic mechanism, provides a novel design concept for sensing micromechanical quantities, and presents significant potential for application in cardiac sound sensing and disease diagnosis.


Subject(s)
Heart Sounds , Stethoscopes , Humans , Equipment Design , Acoustics/instrumentation , Signal-To-Noise Ratio
13.
Med Biol Eng Comput ; 62(8): 2485-2497, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38627355

ABSTRACT

Obtaining accurate cardiac auscultation signals, including the basic heart sounds (S1 and S2) and subtle signs of disease, is crucial for improving cardiac diagnoses and making the most of telehealth. This research paper introduces an innovative approach that utilizes a modified cosine transform (MCT) and a masking strategy based on long short-term memory (LSTM) networks to effectively distinguish heart sounds and murmurs from background noise and interfering sounds. The MCT is used to capture the repeated pattern of the heart sounds, while the LSTMs are trained to construct masks based on the repeated MCT spectrum. The proposed strategy remains effective at preserving the clinical relevance of heart sounds, even in environments marked by increased noise and complex disruptions. The present work highlights the clinical significance and reliability of the suggested methodology through in-depth signal visualization and rigorous statistical performance evaluations. In comparative assessments, the proposed approach demonstrated superior performance compared with recent algorithms, such as LU-Net and PC-DAE. Furthermore, the system's adaptability to various datasets enhances its reliability and practicality. The suggested method is a potential way to improve the accuracy of cardiovascular diagnostics in an era of rapid advancement in medical signal processing. The proposed approach showed an enhancement in the average signal-to-noise ratio (SNR) of 9.6 dB at an input SNR of -6 dB and of 3.3 dB at an input SNR of 10 dB. The average signal distortion ratio (SDR) achieved across a variety of input SNR values was 8.56 dB.


Subject(s)
Algorithms , Heart Auscultation , Heart Sounds , Signal Processing, Computer-Assisted , Signal-To-Noise Ratio , Humans , Heart Auscultation/methods , Heart Sounds/physiology , Reproducibility of Results
14.
Int J Med Educ ; 15: 37-43, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38581237

ABSTRACT

Methods: A pilot randomized controlled trial was conducted at our institution's simulation center with 32 first-year medical students from a single medical institution. Participants were randomly divided into two equal groups and completed an educational module on the identification and pathophysiology of five common cardiac sounds. The control group utilized traditional education methods, while the interventional group incorporated multisensory stimuli. Afterwards, participants listened to randomly selected cardiac sounds, and competency data were collected through a multiple-choice post-assessment in both groups. The Mann-Whitney U test was used to analyze the data. Results: Diagnostic accuracy was significantly higher in the multisensory group (Mdn=100%) than in the control group (Mdn=60%) on the post-assessment (U=73.5, p<0.042). Likewise, knowledge acquisition was substantially better in the multisensory group (Mdn=80%) than in the control group (Mdn=50%) (U=49, p<0.031). Conclusions: These findings suggest that the incorporation of multisensory stimuli significantly improves cardiac auscultation competency. Given its cost-effectiveness and simplicity, this approach offers a viable alternative to more expensive simulation technologies like the Harvey simulator, particularly in settings with limited resources. Consequently, this teaching modality holds promise for global applicability, addressing the worldwide deterioration in cardiac auscultation skills and potentially leading to better patient outcomes. Future studies should broaden the sample size, span multiple institutions, and investigate long-term retention rates.


Subject(s)
Heart Sounds , Students, Medical , Humans , Heart Auscultation , Clinical Competence , Heart Sounds/physiology , Educational Measurement/methods
15.
Sci Rep ; 14(1): 8602, 2024 04 13.
Article in English | MEDLINE | ID: mdl-38615106

ABSTRACT

Although the esophageal stethoscope is used for continuous auscultation during general anesthesia, few studies have investigated phonocardiographic data as a continuous hemodynamic index. In this study, we aimed to induce hemodynamic variations and clarify the relationship between the heart sounds and hemodynamic variables through an experimental animal study. Changes in the cardiac contractility and vascular resistance were induced in anesthetized pigs by administering dobutamine, esmolol, phenylephrine, and nicardipine. In addition, a decrease in cardiac output was induced by restricting the venous return by clamping the inferior vena cava (IVC). The relationship between the hemodynamic changes and changes in the heart sound indices was analyzed. Experimental data from eight pigs were analyzed. The mean values of the correlation coefficients of changes in S1 amplitude (ΔS1amp) with systolic blood pressure (ΔSBP), pulse pressure (ΔPP), and ΔdP/dt during dobutamine administration were 0.94, 0.96, and 0.96, respectively. The mean values of the correlation coefficients of ΔS1amp with ΔSBP, ΔPP, and ΔdP/dt during esmolol administration were 0.80, 0.82, and 0.86, respectively. The hemodynamic changes caused by the administration of phenylephrine and nicardipine did not correlate significantly with changes in the heart rate. The S1 amplitude of the heart sound was significantly correlated with the hemodynamic changes caused by the changes in cardiac contractility but not with the variations in the vascular resistance. Heart sounds can potentially provide a non-invasive monitoring method to differentiate the cause of hemodynamic variations.


Subject(s)
Heart Sounds , Propanolamines , Animals , Swine , Dobutamine/pharmacology , Nicardipine , Hemodynamics , Phenylephrine/pharmacology
16.
Sensors (Basel) ; 24(5)2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38475062

ABSTRACT

Cardiac auscultation is an essential part of the physical examination and plays a key role in the early diagnosis of many cardiovascular diseases. The analysis of phonocardiography (PCG) recordings is generally based on the recognition of the main heart sounds, i.e., S1 and S2, which is not a trivial task. This study proposes a method for accurate recognition and localization of heart sounds in Forcecardiography (FCG) recordings. FCG is a novel technique able to measure subsonic vibrations and sounds via small force sensors placed onto a subject's thorax, allowing continuous cardio-respiratory monitoring. In this study, a template-matching technique based on normalized cross-correlation was used to automatically recognize heart sounds in FCG signals recorded from six healthy subjects at rest. Distinct templates were manually selected from each FCG recording and used to separately localize S1 and S2 sounds, as well as S1-S2 pairs. A simultaneously recorded electrocardiography (ECG) trace was used for performance evaluation. The results show that the template-matching approach proved capable of separately classifying S1 and S2 sounds in more than 96% of all heartbeats. Linear regression, correlation, and Bland-Altman analyses showed that inter-beat intervals were estimated with high accuracy. Indeed, the estimation error was confined within 10 ms, with negligible impact on heart rate estimation. Heart rate variability (HRV) indices were also computed and turned out to be almost comparable with those obtained from ECG. The preliminary yet encouraging results of this study suggest that the template-matching approach based on normalized cross-correlation allows very accurate heart sound localization and inter-beat interval estimation.
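A hedged sketch of template matching by normalised cross-correlation for locating S1 events and deriving inter-beat intervals; the synthetic FCG trace, template choice, peak threshold and refractory distance are assumptions rather than the study's values.

```python
import numpy as np
from scipy.signal import find_peaks

def normalized_xcorr(signal, template):
    """Sliding normalised cross-correlation (values in [-1, 1])."""
    n = len(template)
    tpl = (template - template.mean()) / (template.std() + 1e-12)
    out = np.zeros(len(signal) - n + 1)
    for i in range(len(out)):
        win = signal[i:i + n]
        win = (win - win.mean()) / (win.std() + 1e-12)
        out[i] = np.dot(win, tpl) / n
    return out

fs = 1000
rng = np.random.default_rng(0)
fcg = rng.normal(size=fs * 10)                  # placeholder FCG recording
s1_template = fcg[200:280]                       # manually chosen S1 template

ncc = normalized_xcorr(fcg, s1_template)
peaks, _ = find_peaks(ncc, height=0.6, distance=int(0.4 * fs))  # candidate S1 locations
inter_beat_intervals = np.diff(peaks) / fs       # seconds between detected S1 events
print(len(peaks), inter_beat_intervals[:5])
```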


Subject(s)
Heart Sounds , Humans , Heart Sounds/physiology , Phonocardiography , Heart/physiology , Heart Auscultation , Electrocardiography , Heart Rate
17.
Ann Noninvasive Electrocardiol ; 29(2): e13108, 2024 03.
Article in English | MEDLINE | ID: mdl-38450594

ABSTRACT

An 81-year-old male with a history of coronary artery disease, hypertension, paroxysmal atrial fibrillation and chronic kidney disease presented with asymptomatic bradycardia. Examination was notable for an early diastolic heart sound. A 12-lead electrocardiogram revealed sinus bradycardia with a markedly prolonged PR interval and second-degree atrioventricular block, Mobitz type I. We review the differential diagnosis of early diastolic heart sounds and present a case of Wenckebach associated with a variable early diastolic sound on physical exam.


Subject(s)
Atrial Fibrillation , Atrioventricular Block , Heart Sounds , Aged, 80 and over , Humans , Male , Atrial Fibrillation/diagnosis , Atrioventricular Block/diagnosis , Bradycardia , Electrocardiography , Heart Atria
18.
Comput Methods Programs Biomed ; 248: 108122, 2024 May.
Article in English | MEDLINE | ID: mdl-38507960

ABSTRACT

BACKGROUND AND OBJECTIVE: Most existing machine learning-based heart sound classification methods achieve limited accuracy, since they primarily depend on single-domain feature information and tend to focus equally on each part of the signal rather than employing a selective attention mechanism. In addition, they fail to exploit convolutional neural network (CNN)-based features with an effective fusion strategy. METHODS: To overcome these limitations, this paper proposes a novel multimodal attention convolutional neural network (MACNN) with a feature-level fusion strategy, in which Mel-cepstral-domain as well as general frequency-domain features are incorporated to increase the diversity of the features. In the proposed method, DilationAttenNet is first utilized to construct attention-based CNN feature extractors, and these feature extractors are then jointly optimized in MACNN at the feature level. The attention mechanism aims to suppress irrelevant information and focus on crucial diverse features extracted from the CNN. RESULTS: Extensive experiments are carried out to study the efficacy of feature-level fusion in comparison with early fusion. The results show that the proposed MACNN method significantly outperforms state-of-the-art approaches in terms of accuracy and score for the two publicly available GitHub and PhysioNet datasets. CONCLUSION: The findings of our experiments demonstrate the high performance of the proposed MACNN for heart sound classification, and hence its potential clinical usefulness in the identification of heart diseases. This technique can assist cardiologists and researchers in the design and development of heart sound classification methods.
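A minimal PyTorch sketch of attention-based feature-level fusion over two branches (Mel-cepstral and general frequency-domain features). The branch encoders, layer sizes and attention form are assumptions; the paper's DilationAttenNet extractors are not reproduced here.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    def __init__(self, mfcc_dim=40, freq_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.mfcc_branch = nn.Sequential(nn.Linear(mfcc_dim, hidden), nn.ReLU())
        self.freq_branch = nn.Sequential(nn.Linear(freq_dim, hidden), nn.ReLU())
        self.attn = nn.Linear(hidden, 1)          # scores one weight per branch
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, mfcc_feat, freq_feat):
        branches = torch.stack(
            [self.mfcc_branch(mfcc_feat), self.freq_branch(freq_feat)], dim=1
        )                                          # (batch, 2, hidden)
        weights = torch.softmax(self.attn(branches), dim=1)   # (batch, 2, 1)
        fused = (weights * branches).sum(dim=1)               # attention-weighted fusion
        return self.classifier(fused)

model = AttentionFusion()
logits = model(torch.randn(8, 40), torch.randn(8, 128))
print(logits.shape)   # torch.Size([8, 2])
```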


Subject(s)
Heart Diseases , Heart Sounds , Humans , Machine Learning , Neural Networks, Computer
19.
Sci Rep ; 14(1): 3123, 2024 02 07.
Article in English | MEDLINE | ID: mdl-38326488

ABSTRACT

As cardiovascular disorders are prevalent, there is a growing demand for reliable and precise diagnostic methods within this domain. Audio signal-based heart disease detection is a promising area of research that leverages sound signals generated by the heart to identify and diagnose cardiovascular disorders. Machine learning (ML) and deep learning (DL) techniques are pivotal in classifying and identifying heart disease from audio signals. This study investigates ML and DL techniques to detect heart disease by analyzing noisy sound signals. It employed two subsets of datasets from the PASCAL Challenge containing real heart sound recordings. The signals are visually depicted using spectrograms and Mel-Frequency Cepstral Coefficients (MFCCs). We employ data augmentation to improve the model's performance by introducing synthetic noise into the heart sound signals. In addition, a feature ensembler is developed to integrate various audio feature extraction techniques. Several machine learning and deep learning classifiers are utilized for heart disease detection. Among the numerous models studied, and in comparison with previous study findings, the multilayer perceptron model performed best, with an accuracy of 95.65%. This study demonstrates the potential of this methodology in accurately detecting heart disease from sound signals. These findings present promising opportunities for enhancing medical diagnosis and patient care.
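A hedged sketch of the augmentation and MFCC steps described: white noise is mixed into a (placeholder) heart-sound signal at a chosen SNR and MFCCs are extracted with librosa; the SNR level, number of coefficients and pooling are assumptions for illustration.

```python
import numpy as np
import librosa

def add_noise(signal, snr_db=10.0, seed=0):
    """Mix white noise into the signal at the requested signal-to-noise ratio."""
    noise = np.random.default_rng(seed).standard_normal(len(signal))
    scale = np.sqrt(np.mean(signal**2) / (10 ** (snr_db / 10) * np.mean(noise**2)))
    return signal + scale * noise

sr = 4000
t = np.arange(sr * 3) / sr
heart = np.sin(2 * np.pi * 50 * t).astype(np.float32)   # placeholder heart sound

augmented = add_noise(heart, snr_db=10.0)
mfcc = librosa.feature.mfcc(y=augmented, sr=sr, n_mfcc=13)  # (13, n_frames)
features = mfcc.mean(axis=1)                                 # pooled feature vector
print(features.shape)
```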


Subject(s)
Cardiovascular Diseases , Heart Diseases , Heart Sounds , Humans , Artificial Intelligence , Neural Networks, Computer , Heart Diseases/diagnosis , Machine Learning
20.
Technol Health Care ; 32(3): 1925-1945, 2024.
Article in English | MEDLINE | ID: mdl-38393859

ABSTRACT

BACKGROUND: Cardiac diseases are highly detrimental illnesses, responsible for approximately 32% of global mortality [1]. Early diagnosis and prompt treatment can reduce deaths caused by cardiac diseases. In paediatric patients, it is challenging for paediatricians to identify functional murmurs and pathological murmurs from heart sounds. OBJECTIVE: The study intends to develop a novel blended ensemble model using hybrid deep learning models and softmax regression to classify adult, and paediatric heart sounds into five distinct classes, distinguishing itself as a groundbreaking work in this domain. Furthermore, the research aims to create a comprehensive 5-class paediatric phonocardiogram (PCG) dataset. The dataset includes two critical pathological classes, namely atrial septal defects and ventricular septal defects, along with functional murmurs, pathological and normal heart sounds. METHODS: The work proposes a blended ensemble model (HbNet-Heartbeat Network) comprising two hybrid models, CNN-BiLSTM and CNN-LSTM, as base models and Softmax regression as meta-learner. HbNet leverages the strengths of base models and improves the overall PCG classification accuracy. Mel Frequency Cepstral Coefficients (MFCC) capture the crucial audio signal characteristics relevant to the classification. The amalgamation of these two deep learning structures enhances the precision and reliability of PCG classification, leading to improved diagnostic results. RESULTS: The HbNet model exhibited excellent results with an average accuracy of 99.72% and sensitivity of 99.3% on an adult dataset, surpassing all the existing state-of-the-art works. The researchers have validated the reliability of the HbNet model by testing it on a real-time paediatric dataset. The paediatric model's accuracy is 86.5%. HbNet detected functional murmur with 100% precision. CONCLUSION: The results indicate that the HbNet model exhibits a high level of efficacy in the early detection of cardiac disorders. Results also imply that HbNet has the potential to serve as a valuable tool for the development of decision-support systems that aid medical practitioners in confirming their diagnoses. This method makes it easier for medical professionals to diagnose and initiate prompt treatment while performing preliminary auscultation and reduces unnecessary echocardiograms.


Subject(s)
Heart Sounds , Humans , Phonocardiography/methods , Child , Heart Sounds/physiology , Deep Learning , Neural Networks, Computer , Heart Murmurs/diagnosis , Child, Preschool