1.
Front Cardiovasc Med ; 10: 1170804, 2023.
Article in English | MEDLINE | ID: mdl-38328674

ABSTRACT

Objective: This study aims to assess the ability of state-of-the-art machine learning algorithms to detect valvular heart disease (VHD) from digital heart sound (HS) recordings in a general population that includes asymptomatic cases and intermediate stages of disease progression. Methods: We trained a recurrent neural network to predict murmurs from heart sound audio using annotated recordings collected with digital stethoscopes from four auscultation positions in 2,124 participants from the Tromsø7 study. The predicted murmurs were used to predict VHD as determined by echocardiography. Results: The presence of aortic stenosis (AS) was detected with a sensitivity of 90.9%, a specificity of 94.5%, and an area under the curve (AUC) of 0.979 (CI: 0.963-0.995). At least moderate AS was detected with an AUC of 0.993 (CI: 0.989-0.997). Moderate or greater aortic and mitral regurgitation (AR and MR) were predicted with AUC values of 0.634 (CI: 0.565-0.703) and 0.549 (CI: 0.506-0.593), respectively, which increased to 0.766 and 0.677 when clinical variables were added as predictors. The AUC for predicting symptomatic cases was higher for AR and MR, at 0.756 and 0.711, respectively. Screening jointly for symptomatic regurgitation or the presence of stenosis resulted in an AUC of 0.86, with 97.7% of AS cases (n = 44) and all 12 mitral stenosis (MS) cases detected. Conclusions: The algorithm demonstrated excellent performance in detecting AS in a general cohort, surpassing observations from similar studies on selected cohorts. The detection of AR and MR based on HS audio was poor, but accuracy was considerably higher for symptomatic cases, and the inclusion of clinical variables improved the performance of the model significantly.
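The detection metrics reported above (sensitivity, specificity, AUC) can be computed from per-participant prediction scores. The sketch below is a minimal, hypothetical illustration of how such metrics are derived; the labels, scores, and threshold are invented for demonstration and do not come from the study.

```python
# Hypothetical evaluation sketch: computing sensitivity, specificity, and AUC
# from binary ground-truth labels (1 = disease present) and model scores.

def sensitivity_specificity(labels, scores, threshold):
    """Sensitivity and specificity at a fixed decision threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation:
    the probability that a random positive scores above a random negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: three positives, four negatives (illustrative only).
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1]
sens, spec = sensitivity_specificity(labels, scores, threshold=0.5)
area = auc(labels, scores)
```

The rank-based AUC is threshold-free, which is why papers often report it alongside a single operating point's sensitivity and specificity.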

2.
Sensors (Basel) ; 19(8)2019 Apr 15.
Article in English | MEDLINE | ID: mdl-30991690

ABSTRACT

We applied deep learning to create an algorithm for breathing phase detection in lung sound recordings, and we compared the breathing phases detected by the algorithm with those manually annotated by two experienced lung sound researchers. Our algorithm uses a convolutional neural network with spectrograms as the features, removing the need to specify features explicitly. We trained and evaluated the algorithm using three subsets that are larger than those previously seen in the literature. We evaluated the performance of the method in two ways. First, a discrete count of agreed breathing phases (requiring at least 50% overlap between a pair of boxes) shows a mean agreement with lung sound experts of 97% for inspiration and 87% for expiration. Second, the fraction of time in agreement (in seconds) gives higher pseudo-kappa values for inspiration (0.73-0.88) than expiration (0.63-0.84), showing an average sensitivity of 97% and an average specificity of 84%. With both evaluation methods, the agreement between the annotators and the algorithm shows human-level performance for the algorithm. The developed algorithm is valid for detecting breathing phases in lung sound recordings.
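The first evaluation method above matches detected breathing-phase intervals ("boxes") to expert annotations by overlap. A minimal sketch of such a count is shown below; the interval data and the exact overlap criterion (overlap relative to the shorter interval) are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the discrete agreement count: an annotated
# breathing-phase interval counts as agreed when some detected interval
# overlaps it by at least 50%.

def overlap_fraction(a, b):
    """Overlap length divided by the length of the shorter interval."""
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    return inter / min(a[1] - a[0], b[1] - b[0])

def agreement_rate(annotated, detected, min_overlap=0.5):
    """Share of annotated phases matched by a detection at >= min_overlap."""
    matched = sum(
        1 for ann in annotated
        if any(overlap_fraction(ann, det) >= min_overlap for det in detected)
    )
    return matched / len(annotated)

# Toy inspiration intervals (start, end) in seconds: expert vs. algorithm.
annotated = [(0.0, 1.2), (2.5, 3.6), (5.0, 6.1)]
detected = [(0.1, 1.1), (2.9, 3.7), (5.9, 7.0)]
rate = agreement_rate(annotated, detected)
```

Here the first two annotated phases are matched, while the third overlaps its nearest detection by well under 50%, giving an agreement rate of 2/3 on this toy data.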
