Results 1 - 4 of 4
1.
Ann Fam Med ; 21(6): 517-525, 2023.
Article in English | MEDLINE | ID: mdl-38012028

ABSTRACT

PURPOSE: The advent of new medical devices allows patients with asthma to self-monitor at home, providing a more complete picture of their disease than occasional in-person clinic visits. This raises a pertinent question: which devices and parameters perform best in exacerbation detection?

METHODS: A total of 149 patients with asthma (90 children, 59 adults) participated in a 6-month observational study. Participants (or parents) regularly (daily for the first 2 weeks and weekly for the next 5.5 months, with increased frequency during exacerbations) performed self-examinations using 3 devices (an artificial intelligence (AI)-aided home stethoscope, providing wheezes, rhonchi, and coarse and fine crackles intensity, respiratory and heart rate, and inspiration-to-expiration ratio; a peripheral capillary oxygen saturation (SpO2) meter; and a peak expiratory flow (PEF) meter) and filled out a health state survey. The resulting 6,029 examinations were evaluated by physicians for the presence of exacerbations. For each registered parameter, a machine learning model was trained, and the area under the receiver operating characteristic curve (AUC) was calculated to assess its utility in exacerbation detection.

RESULTS: The best single-parameter discriminators of exacerbations were wheezes intensity for young children (AUC 84% [95% CI, 82%-85%]), rhonchi intensity for older children (AUC 81% [95% CI, 79%-84%]), and survey answers for adults (AUC 92% [95% CI, 89%-95%]). The greatest efficacy (in terms of AUC) was observed for a combination of several parameters.

CONCLUSIONS: The AI-aided home stethoscope provides reliable information on asthma exacerbations. The parameters provided are effective for children, especially those younger than 5 years of age. The introduction of this tool to the health care system might enhance asthma exacerbation detection substantially and make remote monitoring of patients easier.
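The AUC values reported above have a simple rank-based interpretation: the probability that a randomly chosen exacerbation examination scores higher on a given parameter than a randomly chosen non-exacerbation examination. A minimal pure-Python sketch of that computation (the intensity scores below are hypothetical, for illustration only; this is not the study's evaluation code):

```python
def auc(pos_scores, neg_scores):
    """Rank-based AUC (Mann-Whitney statistic): the fraction of
    positive/negative pairs where the positive scores higher.
    Ties count as half a win."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical wheeze-intensity scores for a handful of examinations.
exacerbation = [0.8, 0.7, 0.9, 0.6]
stable = [0.2, 0.4, 0.5, 0.3]
print(auc(exacerbation, stable))  # 1.0: perfect separation in this toy data
```

An AUC of 0.5 would mean the parameter carries no discriminative information; the study's reported 81-92% values sit well above that baseline.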


Subjects
Asthma; Stethoscopes; Humans; Child; Adult; Adolescent; Child, Preschool; Artificial Intelligence; Respiratory Sounds; Asthma/diagnosis; Machine Learning
2.
Sensors (Basel) ; 22(7)2022 Mar 22.
Article in English | MEDLINE | ID: mdl-35408056

ABSTRACT

Monaural speech enhancement aims to remove background noise from an audio recording containing speech in order to improve its clarity and intelligibility. Currently, the most successful solutions for speech enhancement use deep neural networks. In a typical setting, such neural networks process the noisy input signal once and produce a single enhanced signal. However, it was recently shown that a U-Net-based network can be trained in a way that allows it to process the same input signal multiple times in order to enhance the speech even further. Unfortunately, this was tested only for two-iteration enhancement. In the current research, we extend previous efforts and demonstrate how multi-forward-pass speech enhancement can be successfully applied to other architectures, namely ResBLSTM and Transformer-Net. Moreover, we test the three architectures with up to five iterations, thus identifying the method's limit in terms of performance gain. In our experiments, we used audio samples from the WSJ0, Noisex-92, and DCASE datasets and measured speech enhancement quality using SI-SDR, STOI, and PESQ. The results show that performing speech enhancement up to five times still brings improvements to speech intelligibility, but the gain becomes smaller with each iteration. Nevertheless, performing five iterations instead of two yields an additional 0.6 dB of SI-SDR and a four-percentage-point STOI gain. However, these increments are not equal across architectures: U-Net and Transformer-Net benefit more from multiple forward passes than ResBLSTM does.
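SI-SDR, the main quality metric quoted above, is scale-invariant because the estimate is first projected onto the reference signal, so rescaling the output does not change the score. A textbook-style pure-Python sketch of the metric (signals as lists of floats; this is a generic formulation, not the paper's evaluation code):

```python
import math

def si_sdr(reference, estimate, eps=1e-12):
    """Scale-invariant signal-to-distortion ratio in dB.

    Projects the estimate onto the reference to split it into a
    target component and a distortion component, then returns the
    power ratio in decibels."""
    dot = sum(r * e for r, e in zip(reference, estimate))
    ref_energy = sum(r * r for r in reference)
    alpha = dot / (ref_energy + eps)           # optimal scaling factor
    target = [alpha * r for r in reference]    # signal component
    noise = [e - t for e, t in zip(estimate, target)]  # distortion
    target_pow = sum(t * t for t in target)
    noise_pow = sum(n * n for n in noise)
    return 10.0 * math.log10(target_pow / (noise_pow + eps) + eps)

ref = [1.0, 0.0, -1.0, 0.5]
noisy = [1.1, 0.1, -0.9, 0.4]          # mildly corrupted estimate
print(round(si_sdr(ref, noisy), 1))    # ~17.4 dB on this toy signal
```

A 0.6 dB improvement in SI-SDR, as reported for five iterations versus two, corresponds to roughly a 15% reduction in distortion power.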


Subjects
Speech Perception; Neural Networks, Computer; Noise; Speech Intelligibility
3.
Front Physiol ; 12: 745635, 2021.
Article in English | MEDLINE | ID: mdl-34858203

ABSTRACT

Background: Effective and reliable monitoring of asthma at home is a relevant factor that may reduce the need to consult a doctor in person.

Aim: We analyzed the possibility of determining the intensities of pathological breath phenomena based on artificial intelligence (AI) analysis of sounds recorded during standard stethoscope auscultation.

Methods: The evaluation set, comprising 1,043 auscultation examinations (9,319 recordings), was collected from 899 patients. Examinations were assigned to one of four groups: asthma with and without abnormal sounds (AA and AN, respectively), and no-asthma with and without abnormal sounds (NA and NN, respectively). The presence of abnormal sounds was evaluated by a panel of 3 physicians who were blinded to the AI predictions. The AI was trained on an independent set of 9,847 recordings to determine intensity scores (indexes) of wheezes, rhonchi, and fine and coarse crackles, as well as their combinations: continuous phenomena (wheezes + rhonchi) and all phenomena. Pairwise comparison of examination groups based on the area under the ROC curve (AUC) was used to evaluate the performance of each index in discriminating between groups.

Results: The best separation between AA and AN was observed with the Continuous Phenomena Index (AUC 0.94), while for NN and NA the All Phenomena Index (AUC 0.91) performed best. AA showed a slightly higher prevalence of wheezes compared to NA.

Conclusions: The results showed a high efficiency of the AI in discriminating between asthma patients with normal and abnormal sounds; this approach therefore has great potential and can be used to monitor asthma symptoms at home.
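The grouping and index-combination logic described above is straightforward to express. A minimal sketch, assuming per-phenomenon intensity scores in [0, 1] per examination (the abstract does not specify the scoring scale, so the dictionary layout and values here are hypothetical):

```python
def assign_group(has_asthma, has_abnormal_sounds):
    """Map an examination to AA, AN, NA, or NN as defined in the study:
    first letter = asthma status, second = abnormal-sound status."""
    return ("A" if has_asthma else "N") + ("A" if has_abnormal_sounds else "N")

def continuous_phenomena_index(scores):
    """Continuous phenomena = wheezes + rhonchi."""
    return scores["wheezes"] + scores["rhonchi"]

def all_phenomena_index(scores):
    """All phenomena = wheezes + rhonchi + fine + coarse crackles."""
    return sum(scores.values())

# Hypothetical intensity scores for one examination.
exam = {"wheezes": 0.5, "rhonchi": 0.25,
        "fine_crackles": 0.125, "coarse_crackles": 0.0}
print(assign_group(True, True))          # "AA"
print(continuous_phenomena_index(exam))  # 0.75
print(all_phenomena_index(exam))         # 0.875
```

Each index is then scored by how well it separates a pair of groups (e.g. AA vs AN) under the AUC, which is how the 0.94 and 0.91 figures above were obtained.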

4.
Eur J Pediatr ; 178(6): 883-890, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30927097

ABSTRACT

Lung auscultation is an important part of a physical examination. However, its biggest drawback is its subjectivity: the results depend on the experience and ability of the doctor to perceive and distinguish pathologies in the sounds heard via a stethoscope. This paper investigates a new method of automatic sound analysis based on neural networks (NNs), which has been implemented in a system that uses an electronic stethoscope for capturing respiratory sounds. It allows the detection of auscultatory sounds in four classes: wheezes, rhonchi, and fine and coarse crackles. In a blind test, a group of 522 auscultatory sounds from 50 pediatric patients was presented, and the results provided by a group of doctors were compared with those of an artificial intelligence (AI) algorithm developed by the authors. The gathered data show that machine learning (ML)-based analysis is more efficient in detecting all four types of phenomena, which is reflected in high values of recall (also called sensitivity) and F1-score.

Conclusions: The obtained results suggest that the implementation of automatic sound analysis based on NNs can significantly improve the efficiency of this form of examination, minimizing the number of errors made in the interpretation of auscultation sounds.

What is Known: • The auscultation performance of the average physician is very low. • AI solutions presented in the scientific literature are based on small databases of isolated pathological sounds (which are far from real recordings) and mainly on the leave-one-out validation method, so their reported performance is not reliable.

What is New: • The AI learning process was based on thousands of signals from real patients, and a reliable description of the recordings was obtained through multiple validation by physicians and an acoustician, resulting in practical and statistical proof of the AI's high performance.
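Recall and F1-score, the metrics in which the doctor-vs-AI comparison above is reported, are computed per sound class from confusion counts. A minimal sketch (the counts below are hypothetical, for illustration only; the paper's actual counts are not given in this abstract):

```python
def recall(tp, fn):
    """Recall (sensitivity): fraction of true events that were detected."""
    return tp / (tp + fn)

def precision(tp, fp):
    """Precision: fraction of detections that were true events."""
    return tp / (tp + fp)

def f1_score(tp, fp, fn):
    """F1-score: harmonic mean of precision and recall."""
    p, r = precision(tp, fp), recall(tp, fn)
    return 2 * p * r / (p + r)

# Hypothetical counts for one class (e.g. wheezes) out of the 522 sounds.
print(recall(tp=48, fn=16))           # 0.75
print(f1_score(tp=48, fp=16, fn=16))  # 0.75
```

F1 penalizes both missed detections and false alarms, which is why it is a common headline metric for this kind of class-imbalanced detection task.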


Subjects
Auscultation/instrumentation; Machine Learning; Neural Networks, Computer; Respiratory Sounds/diagnosis; Adolescent; Algorithms; Auscultation/methods; Child; Child, Preschool; Humans; Infant; Respiratory Sounds/classification; Stethoscopes