Results 1-5 of 5
1.
Ann Fam Med; 21(6): 517-525, 2023.
Article in English | MEDLINE | ID: mdl-38012028

ABSTRACT

PURPOSE: The advent of new medical devices allows patients with asthma to self-monitor at home, providing a more complete picture of their disease than occasional in-person clinic visits. This raises a pertinent question: which devices and parameters perform best in exacerbation detection? METHODS: A total of 149 patients with asthma (90 children, 59 adults) participated in a 6-month observational study. Participants (or their parents) regularly (daily for the first 2 weeks and weekly for the next 5.5 months, with increased frequency during exacerbations) performed self-examinations using 3 devices: an artificial intelligence (AI)-aided home stethoscope (providing wheeze, rhonchus, and coarse and fine crackle intensities; respiratory and heart rate; and inspiration-to-expiration ratio), a peripheral capillary oxygen saturation (SpO2) meter, and a peak expiratory flow (PEF) meter, and filled out a health state survey. The resulting 6,029 examinations were evaluated by physicians for the presence of exacerbations. For each registered parameter, a machine learning model was trained, and the area under the receiver operating characteristic curve (AUC) was calculated to assess its utility in exacerbation detection. RESULTS: The best single-parameter discriminators of exacerbations were wheeze intensity for young children (AUC 84% [95% CI, 82%-85%]), rhonchus intensity for older children (AUC 81% [95% CI, 79%-84%]), and survey answers for adults (AUC 92% [95% CI, 89%-95%]). The greatest efficacy (in terms of AUC) was observed for a combination of several parameters. CONCLUSIONS: The AI-aided home stethoscope provides reliable information on asthma exacerbations. The parameters it provides are effective for children, especially those younger than 5 years of age. Introducing this tool into the health care system could substantially enhance asthma exacerbation detection and make remote monitoring of patients easier.
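The per-parameter evaluation described in this abstract, training one classifier per registered home-monitoring parameter and scoring it by AUC with a 95% confidence interval, can be sketched as follows. This is not the study's code: the logistic-regression model, the synthetic data, and the bootstrap interval are illustrative assumptions.

```python
# Minimal sketch of a per-parameter evaluation: train a classifier on one
# home-monitoring parameter (e.g. wheeze intensity) and estimate its AUC
# with a bootstrap 95% CI. Column layout and data are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Placeholder data: one parameter per examination plus a physician label.
X = rng.normal(size=(600, 1))  # e.g. wheeze intensity per examination
y = (X[:, 0] + rng.normal(scale=1.0, size=600) > 0.8).astype(int)  # exacerbation yes/no

# Out-of-fold predicted probabilities avoid an optimistic in-sample AUC.
proba = cross_val_predict(LogisticRegression(), X, y, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(y, proba)

# Nonparametric bootstrap over examinations for a rough 95% CI.
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:
        continue
    boot.append(roc_auc_score(y[idx], proba[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {auc:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```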


Subjects
Asthma; Stethoscopes; Humans; Child; Adult; Adolescent; Child, Preschool; Artificial Intelligence; Respiratory Sounds; Asthma/diagnosis; Machine Learning
2.
Front Physiol; 12: 745635, 2021.
Article in English | MEDLINE | ID: mdl-34858203

ABSTRACT

Background: Effective and reliable monitoring of asthma at home is a relevant factor that may reduce the need to consult a doctor in person. Aim: We analyzed the possibility of determining the intensities of pathological breath phenomena based on artificial intelligence (AI) analysis of sounds recorded during standard stethoscope auscultation. Methods: The evaluation set, comprising 1,043 auscultation examinations (9,319 recordings), was collected from 899 patients. Examinations were assigned to one of four groups: asthma with and without abnormal sounds (AA and AN, respectively) and no-asthma with and without abnormal sounds (NA and NN, respectively). The presence of abnormal sounds was evaluated by a panel of 3 physicians who were blinded to the AI predictions. The AI was trained on an independent set of 9,847 recordings to determine intensity scores (indexes) of wheezes, rhonchi, and fine and coarse crackles, as well as their combinations: continuous phenomena (wheezes + rhonchi) and all phenomena. Pairwise comparison of groups of examinations based on the area under the ROC curve (AUC) was used to evaluate the performance of each index in discriminating between groups. Results: The best performance in separating AA from AN was observed with the Continuous Phenomena Index (AUC 0.94), while for NN versus NA, the All Phenomena Index (AUC 0.91) performed best. AA showed a slightly higher prevalence of wheezes than NA. Conclusions: The results showed a high efficiency of the AI in discriminating between asthma patients with normal and abnormal sounds; this approach therefore has great potential and can be used to monitor asthma symptoms at home.
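The pairwise group comparison described above, using an AI-derived intensity index as a score and the AUC as the separation measure, can be illustrated with a short sketch. The index values and group sizes below are synthetic placeholders, not the study's data.

```python
# Illustrative sketch of a pairwise group comparison: treat an AI-derived
# intensity index (e.g. the continuous-phenomena index, wheezes + rhonchi)
# as a score and measure how well it separates two examination groups
# (e.g. AA vs AN) with the AUC. Values are synthetic placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)

# Synthetic index values for asthma examinations with abnormal sounds (AA)
# and without abnormal sounds (AN).
index_aa = rng.normal(loc=0.6, scale=0.2, size=200)  # higher intensities expected
index_an = rng.normal(loc=0.2, scale=0.2, size=200)

scores = np.concatenate([index_aa, index_an])
labels = np.concatenate([np.ones_like(index_aa), np.zeros_like(index_an)])

# AUC = probability that a randomly chosen AA examination scores higher
# than a randomly chosen AN examination.
print(f"AA vs AN AUC: {roc_auc_score(labels, scores):.2f}")
```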

3.
PLoS One; 14(8): e0220606, 2019.
Article in English | MEDLINE | ID: mdl-31404066

ABSTRACT

BACKGROUND: Auscultation is one of the first examinations a patient undergoes in a GP's office, especially in relation to diseases of the respiratory system. However, it is a highly subjective process that depends on the physician's ability to interpret the sounds, as determined by his or her psychoacoustical characteristics. Here, we present a cross-sectional assessment of the skills of physicians of different specializations and medical students in the classification of respiratory sounds in children. METHODS AND FINDINGS: A total of 185 participants representing different medical specializations took part in the experiment, which comprised 24 respiratory system auscultation sounds. The participants were tasked with listening to the sounds and matching them with provided descriptions of specific sound classes. The results revealed difficulties in both the recognition and the description of respiratory sounds. The pulmonologist group performed significantly better than the other groups in terms of the number of correct answers. We also found that performance improved significantly when similar sound classes were grouped together into wider, more general classes. CONCLUSIONS: These results confirm that ambiguous identification and interpretation of sounds in auscultation is a generic issue that should not be neglected, as it can potentially lead to inaccurate diagnosis and mistreatment. Our results lend further support to the already widespread acknowledgment of the need to standardize the nomenclature of auscultation sounds (according to the European Respiratory Society, the International Lung Sounds Association, and the American Thoracic Society). In particular, our findings point towards important educational challenges in both theory (nomenclature) and practice (training).
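The effect of grouping similar sound classes into wider classes, reported above, can be illustrated by scoring the same set of listener answers twice: once against the detailed classes and once after collapsing them into broader groups. The class names, the grouping, and the toy answers are illustrative assumptions, not the study's taxonomy or data.

```python
# Hedged sketch of the class-grouping effect: score listener answers with
# the detailed sound classes, then again after collapsing similar classes
# into broader groups (continuous vs discontinuous phenomena).
from sklearn.metrics import accuracy_score

GROUPS = {
    "wheeze": "continuous", "rhonchus": "continuous",
    "fine crackle": "discontinuous", "coarse crackle": "discontinuous",
}

# Toy reference labels and listener answers (illustrative only).
reference = ["wheeze", "rhonchus", "fine crackle", "coarse crackle", "wheeze"]
answers   = ["rhonchus", "wheeze", "coarse crackle", "fine crackle", "wheeze"]

detailed = accuracy_score(reference, answers)
grouped = accuracy_score([GROUPS[c] for c in reference], [GROUPS[c] for c in answers])
print(f"detailed-class accuracy: {detailed:.2f}, grouped-class accuracy: {grouped:.2f}")
```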


Subjects
Auscultation; Clinical Competence/statistics & numerical data; Physicians/statistics & numerical data; Respiratory Sounds/diagnosis; Students, Medical/statistics & numerical data; Adolescent; Adult; Child; Child, Preschool; Cross-Sectional Studies; Humans; Infant; Lung/physiopathology
4.
Eur J Pediatr; 178(6): 883-890, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30927097

ABSTRACT

Lung auscultation is an important part of a physical examination. However, its biggest drawback is its subjectivity: the results depend on the experience and ability of the doctor to perceive and distinguish pathologies in the sounds heard through a stethoscope. This paper investigates a new method of automatic sound analysis based on neural networks (NNs), implemented in a system that uses an electronic stethoscope for capturing respiratory sounds. It allows the detection of auscultatory sounds in four classes: wheezes, rhonchi, and fine and coarse crackles. In a blind test, a group of 522 auscultatory sounds from 50 pediatric patients was presented, and the results provided by a group of doctors were compared with those of an artificial intelligence (AI) algorithm developed by the authors. The gathered data show that machine learning (ML)-based analysis is more efficient in detecting all four types of phenomena, which is reflected in high values of recall (also called sensitivity) and F1-score. Conclusions: The obtained results suggest that the implementation of automatic sound analysis based on NNs can significantly improve the efficiency of this form of examination, leading to a minimization of the number of errors made in the interpretation of auscultation sounds. What is Known: • The auscultation performance of the average physician is very low. AI solutions presented in the scientific literature are based on small databases with isolated pathological sounds (which are far from real recordings) and mainly on leave-one-out validation, and are therefore not reliable. What is New: • The AI learning process was based on thousands of signals from real patients, and a reliable description of the recordings was based on multiple validation by physicians and an acoustician, resulting in practical and statistical proof of the AI's high performance.
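The per-class evaluation mentioned above, recall (sensitivity) and F1-score for each of the four detected phenomena, can be sketched as follows. The detection arrays are synthetic placeholders rather than the study's recordings or annotations.

```python
# Minimal sketch of a per-class evaluation: recall (sensitivity) and
# F1-score for detecting each of the four auscultatory phenomena,
# computed from binary detections against reference annotations.
import numpy as np
from sklearn.metrics import recall_score, f1_score

classes = ["wheezes", "rhonchi", "fine crackles", "coarse crackles"]
rng = np.random.default_rng(2)

# Synthetic reference labels and AI detections for 522 recordings,
# one column per class.
y_true = rng.integers(0, 2, size=(522, 4))
y_pred = np.where(rng.random((522, 4)) < 0.85, y_true, 1 - y_true)  # ~85% agreement

for i, name in enumerate(classes):
    r = recall_score(y_true[:, i], y_pred[:, i])
    f = f1_score(y_true[:, i], y_pred[:, i])
    print(f"{name:16s} recall={r:.2f} F1={f:.2f}")
```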


Assuntos
Auscultação/instrumentação , Aprendizado de Máquina , Redes Neurais de Computação , Sons Respiratórios/diagnóstico , Adolescente , Algoritmos , Auscultação/métodos , Criança , Pré-Escolar , Humanos , Lactente , Sons Respiratórios/classificação , Estetoscópios
5.
Sci Total Environ; 523: 191-200, 2015 Aug 01.
Article in English | MEDLINE | ID: mdl-25863510

ABSTRACT

The aim of the study was to examine how visual and audio information influences the assessment of an audio-visual environment. Original audio-visual recordings were made at seven different places in the city of Poznan. Participants in the psychophysical experiments were asked to rate, on a standardized numerical scale, the degree of comfort they would feel if they were in such an environment. The assessments of audio-visual comfort were carried out in a laboratory under four conditions: (a) audio samples only, (b) original audio-visual samples, (c) video samples only, and (d) mixed audio-visual samples. The overall results showed a significant difference between the investigated conditions, but not for all of the investigated samples. When conditions (a) and (b) were compared, adding visual information significantly improved the comfort assessment in only three out of seven cases. On the other hand, the results show that the comfort assessment of audio-visual samples could be changed by manipulating the audio rather than the video part of the sample. Finally, it seems that people differentiate audio-visual representations of a given place in the environment based on the composition of sound sources rather than on the sound level. Object identification is responsible for both landscape and soundscape grouping.
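The comparison between the audio-only condition (a) and the original audio-visual condition (b) can be illustrated with a paired nonparametric test on comfort ratings. The study's exact statistical procedure is not specified here, so the Wilcoxon signed-rank test and the ratings below are assumptions for illustration only.

```python
# Hedged sketch of a paired condition comparison: comfort ratings for the
# same stimuli under the audio-only condition (a) and the original
# audio-visual condition (b), compared with a Wilcoxon signed-rank test.
# The ratings are synthetic placeholders.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)

audio_only = rng.integers(3, 8, size=40)  # ratings on a numeric comfort scale
audio_visual = np.clip(audio_only + rng.integers(-1, 3, size=40), 1, 10)  # video often helps

stat, p = wilcoxon(audio_only, audio_visual)
print(f"Wilcoxon W={stat:.1f}, p={p:.3f}")
```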


Subjects
Acoustics; Environment; Environmental Monitoring/methods; Noise; Cities