Results 1 - 11 of 11
1.
Respir Res; 25(1): 177, 2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38658980

ABSTRACT

BACKGROUND: Computer Aided Lung Sound Analysis (CALSA) aims to overcome limitations associated with standard lung auscultation by removing the subjective component and allowing quantification of sound characteristics. In this proof-of-concept study, a novel automated approach was evaluated on real patient data by comparing lung sound characteristics to structural and functional imaging biomarkers. METHODS: Patients with cystic fibrosis (CF) aged >5 years were recruited in a prospective cross-sectional study. CT scans were analyzed by the CF-CT scoring method and Functional Respiratory Imaging (FRI). A digital stethoscope was used to record lung sounds at six chest locations. The following sound characteristics were determined: expiration-to-inspiration (E/I) signal power ratios within different frequency ranges, the number of crackles per respiratory phase, and wheeze parameters. Linear mixed-effects models were computed to relate CALSA parameters to imaging biomarkers at the lobar level. RESULTS: A total of 222 recordings from 25 CF patients were included. Significant associations were found between E/I ratios and structural abnormalities, of which the ratio between 200 and 400 Hz appeared to be the most clinically relevant due to its relation with bronchiectasis, mucus plugging, bronchial wall thickening and air trapping on CT. The number of crackles was also associated with multiple structural abnormalities as well as with regional airway resistance determined by FRI. Wheeze parameters were not considered in the statistical analysis, since wheezing was detected in only one recording. CONCLUSIONS: The present study is the first to investigate associations between auscultatory findings and imaging biomarkers, which are considered the gold standard for evaluating the respiratory system. Despite the exploratory nature of this study, the results showed various meaningful associations that highlight the potential value of automated CALSA as a novel non-invasive outcome measure in future research and clinical practice.
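For readers who want a concrete sense of the E/I feature named in this abstract, below is a minimal Python sketch of an expiration-to-inspiration power ratio in the 200-400 Hz band. It assumes the respiratory phases have already been segmented; the filter order, band edges, and function names are illustrative and not taken from the authors' CALSA implementation.

```python
# Minimal sketch of one CALSA-style feature from the abstract: the
# expiration-to-inspiration (E/I) signal power ratio within a frequency band
# (here 200-400 Hz). Segment boundaries, band edges, and function names are
# illustrative assumptions, not the authors' implementation.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_power(x, fs, lo=200.0, hi=400.0):
    """Mean signal power after band-pass filtering to [lo, hi] Hz."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    y = sosfiltfilt(sos, x)
    return np.mean(y ** 2)

def e_i_ratio(expiration, inspiration, fs, lo=200.0, hi=400.0):
    """E/I power ratio for one respiratory cycle, given pre-segmented phases."""
    return band_power(expiration, fs, lo, hi) / band_power(inspiration, fs, lo, hi)

# Example with synthetic data standing in for a digital-stethoscope recording:
fs = 4000
rng = np.random.default_rng(0)
insp = rng.standard_normal(fs * 2)        # 2 s of "inspiration"
exp_ = 0.5 * rng.standard_normal(fs * 3)  # 3 s of "expiration"
print(f"E/I ratio (200-400 Hz): {e_i_ratio(exp_, insp, fs):.3f}")
```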


Subject(s)
Biomarkers; Cystic Fibrosis; Respiratory Sounds; Humans; Cross-Sectional Studies; Male; Female; Prospective Studies; Adult; Cystic Fibrosis/physiopathology; Cystic Fibrosis/diagnostic imaging; Young Adult; Adolescent; Auscultation/methods; Tomography, X-Ray Computed/methods; Lung/diagnostic imaging; Lung/physiopathology; Child; Proof of Concept Study; Diagnosis, Computer-Assisted/methods; Middle Aged
2.
J Acoust Soc Am; 152(3): 1932, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36182282

ABSTRACT

Project-based learning engages students in practical activities related to course content and has been demonstrated to improve academic performance. Due to its reported benefits, this form of active learning was incorporated, together with an ongoing research project, into an introductory graduate-level Musical Acoustics course at the Peabody Institute of The Johns Hopkins University. Students applied concepts from the course to characterize a contact sensor with a polymer diaphragm for musical instrument recording. Assignments throughout the semester introduced students to completing a literature review, planning an experiment, collecting and analyzing data, and presenting results. While students were given broad goals to understand the performance of the contact sensor compared to traditional microphones, they were allowed independence in determining the specific methods used. The efficacy of the course framework and research project was assessed with student feedback provided through open-ended prompts and Likert-type survey questions. Overall, the students responded positively to the project-based learning and demonstrated mastery of the course learning objectives. The work provides a possible framework for instructors considering the use of project-based learning through research in their own course designs.


Subject(s)
Acoustics; Problem-Based Learning; Feedback; Humans
3.
Sensors (Basel); 22(23), 2022 Dec 06.
Article in English | MEDLINE | ID: mdl-36502232

ABSTRACT

Coronavirus disease 2019 (COVID-19) has led to countless deaths and widespread global disruptions. Acoustic-based artificial intelligence (AI) tools could provide a simple, scalable, and prompt method to screen for COVID-19 using easily acquirable physiological sounds. These systems have been demonstrated previously and have shown promise but lack robust analysis of their deployment in real-world settings when faced with diverse recording equipment, noise environments, and test subjects. The primary aim of this work is to begin to understand the impacts of these real-world deployment challenges on system performance. Using Mel-Frequency Cepstral Coefficients (MFCC) and RelAtive SpecTrAl-Perceptual Linear Prediction (RASTA-PLP) features extracted from cough, speech, and breathing sounds in a crowdsourced dataset, we present a baseline classification system that obtains an average area under the receiver operating characteristic curve (AUC-ROC) of 0.77 when discriminating between COVID-19 and non-COVID subjects. The classifier performance is then evaluated on four additional datasets, resulting in performance variations between 0.64 and 0.87 AUC-ROC, depending on the sound type. By analyzing subsets of the available recordings, it is noted that the system performance degrades with certain recording devices, with noise contamination, and with symptom status. Furthermore, performance degrades when a uniform classification threshold from the training data is subsequently used across all datasets. However, the system performance is robust to confounding factors, such as gender, age group, and the presence of other respiratory conditions. Finally, when analyzing multiple speech recordings from the same subjects, the system achieves promising performance with an AUC-ROC of 0.78, though the classification does appear to be impacted by natural speech variations. Overall, the proposed system, and by extension other acoustic-based diagnostic aids in the literature, could provide accuracy comparable to rapid antigen testing, but significant deployment challenges need to be understood and addressed prior to clinical use.
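As a rough illustration of the kind of baseline described in this abstract, the sketch below extracts MFCC summaries with librosa and evaluates a simple classifier with AUC-ROC using scikit-learn. The dataset loader (`load_my_dataset`), the logistic-regression model, and all hyperparameters are hypothetical stand-ins, not the paper's configuration.

```python
# Hedged sketch of an MFCC-feature baseline with AUC-ROC evaluation.
# librosa and scikit-learn are assumed; the loader below is hypothetical.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def mfcc_embedding(path, sr=16000, n_mfcc=13):
    """Summarize one recording as the mean and std of its MFCC trajectories."""
    y, sr = librosa.load(path, sr=sr, mono=True)
    m = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([m.mean(axis=1), m.std(axis=1)])

# paths/labels come from a hypothetical loader for a crowdsourced sound dataset
paths, labels = load_my_dataset()  # hypothetical helper, not a real API
X = np.stack([mfcc_embedding(p) for p in paths])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, stratify=labels)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```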


Subject(s)
Artificial Intelligence; COVID-19; Humans; COVID-19/diagnosis; Acoustics; Sound; Respiratory Sounds
4.
Sensors (Basel); 22(23), 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36501787

ABSTRACT

Many commercial and prototype devices are available for capturing body sounds that provide important information on the health of the lungs and heart; however, no standardized method to characterize and compare these devices has been agreed upon. Acoustic phantoms are commonly used because they generate repeatable sounds that couple to devices through a material layer that mimics the characteristics of skin. While multiple acoustic phantoms have been presented in the literature, it is unclear how design elements, such as the driver type and coupling layer, impact the acoustical characteristics of the phantom and, therefore, the device being measured. Here, a design of experiments approach is used to compare the frequency responses of various phantom constructions. An acoustic phantom that uses a loudspeaker to generate sound and excite a gelatin layer supported by a grid is determined to have a flatter and more uniform frequency response than other possible designs with a sound exciter and plate support. When measured on an optimal acoustic phantom, three devices are shown to produce more consistent measurements with added weight and differing positions than when measured on a non-optimal phantom. Overall, the statistical models developed here provide greater insight into acoustic phantom design for improved device characterization.
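The design-of-experiments comparison could, in principle, be analyzed with a factorial linear model such as the hedged sketch below. The factor names (`driver`, `coupling_layer`, `support`), the flatness metric, and the CSV layout are assumptions for illustration only, not the study's actual analysis.

```python
# Illustrative factorial analysis relating phantom design factors to a
# frequency-response flatness metric. Column names and file are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical results table: one row per phantom build and measurement,
# with 'flatness_db' = standard deviation of the response magnitude in dB.
df = pd.read_csv("phantom_doe_results.csv")  # placeholder file name

model = smf.ols(
    "flatness_db ~ C(driver) * C(coupling_layer) + C(support)", data=df
).fit()
print(model.summary())  # main effects and interaction of the design factors
```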


Subject(s)
Acoustics; Sound; Equipment Design; Phantoms, Imaging; Gelatin
5.
ACS Appl Bio Mater; 6(8): 3241-3256, 2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37470762

ABSTRACT

Acoustic sensors are able to capture more incident energy if their acoustic impedance closely matches the acoustic impedance of the medium being probed, such as skin or wood. Controlling the acoustic impedance of polymers can be achieved by selecting materials with appropriate densities and stiffnesses, as well as by adding ceramic nanoparticles. This study follows a statistical methodology to examine the impact of polymer type and nanoparticle addition on the fabrication of acoustic sensors with desired acoustic impedances in the range of 1-2.2 MRayls. Using a design of experiments approach, the proposed method measures sensors with diaphragms of varying impedances when excited with acoustic vibrations traveling through wood, gelatin, and plastic. The sensor diaphragm is subsequently optimized for body sound monitoring, and the sensor's improved body sound coherence and airborne noise rejection are evaluated on an acoustic phantom in simulated noise environments and compared to electronic stethoscopes with onboard noise cancellation. The impedance-matched sensor demonstrates high sensitivity to body sounds, low sensitivity to airborne sound, a frequency response comparable to two state-of-the-art electronic stethoscopes, and the ability to capture lung and heart sounds from a real subject. Due to its small size, use of flexible materials, and rejection of airborne noise, the sensor provides an improved solution for wearable body sound monitoring, as well as for sensing from other media with acoustic impedances in the range of 1-2.2 MRayls, such as water and wood.
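To make the impedance-matching idea concrete, the short sketch below evaluates the characteristic acoustic impedance Z = rho * c for a few example materials using rough textbook values; the specific polymer and nanoparticle formulations studied in the paper are not reproduced here.

```python
# Back-of-the-envelope sketch: characteristic acoustic impedance Z = rho * c.
# The density and sound-speed values are rough, approximate figures used only
# to show why soft materials land near the 1-2.2 MRayl target range.
def acoustic_impedance_mrayl(density_kg_m3: float, sound_speed_m_s: float) -> float:
    """Characteristic impedance in MRayl (1 MRayl = 1e6 kg/(m^2*s))."""
    return density_kg_m3 * sound_speed_m_s / 1e6

materials = {
    "water":                  (1000.0, 1480.0),
    "soft polymer (approx.)": (1100.0, 1000.0),
    "skin (approx.)":         (1100.0, 1600.0),
}
for name, (rho, c) in materials.items():
    print(f"{name:>24s}: {acoustic_impedance_mrayl(rho, c):.2f} MRayl")
```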


Subject(s)
Acoustics; Diaphragm; Electric Impedance; Static Electricity; Vibration
6.
Artif Intell Med; 133: 102417, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36328670

ABSTRACT

Cardiac auscultation is an essential point-of-care method used for the early diagnosis of heart diseases. Automatic analysis of heart sounds for abnormality detection is faced with the challenges of additive noise and sensor-dependent degradation. This paper aims to develop methods to address the cardiac abnormality detection problem when both of these components are present in the cardiac auscultation sound. We first mathematically analyze the effect of additive noise and convolutional distortion on short-term mel-filterbank energy-based features and a Convolutional Neural Network (CNN) layer. Based on the analysis, we propose a combination of linear and logarithmic spectrogram-image features. These 2D features are provided as input to a residual CNN (ResNet) for heart sound abnormality detection. Experimental validation is performed first on an open-access, multiclass heart sound dataset, where we analyzed the effect of additive noise by mixing lung sound noise with the recordings. In noisy conditions, the proposed method outperforms one of the best-performing methods in the literature, achieving an Macc (mean of sensitivity and specificity) of 89.55% and an average F-1 score of 82.96% when averaged over all noise levels. Next, we perform heart sound abnormality detection (binary classification) experiments on the 2016 PhysioNet/CinC Challenge dataset, which involves noisy recordings obtained from multiple stethoscope sensors. The proposed method achieves significantly improved results compared to conventional approaches on this dataset, in the presence of both additive noise and channel distortion, with an area under the receiver operating characteristic (ROC) curve (AUC) of 91.36%, an F-1 score of 84.09%, and an Macc of 85.08%. The proposed method also achieves the best mean accuracy across different source domains, including stethoscope and noise variability, demonstrating its effectiveness in different recording conditions. The proposed combination of linear and logarithmic features along with the ResNet classifier effectively minimizes the impact of background noise and sensor variability when classifying phonocardiogram (PCG) signals. The method thus paves the way toward developing computer-aided cardiac auscultation systems for noisy environments using low-cost stethoscopes.
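A hedged sketch of the two-channel spectrogram input described here is shown below: a linear-magnitude and a log-magnitude (dB) spectrogram of the same segment, stacked along a channel axis for a CNN. The mel front end, window sizes, and sample rate are illustrative assumptions, not the paper's exact feature extraction.

```python
# Minimal sketch of stacked linear- and log-scale spectrogram "images" of a
# phonocardiogram segment. Parameters are illustrative assumptions.
import numpy as np
import librosa

def linear_log_features(pcg, sr=2000, n_fft=512, hop=128, n_mels=32):
    S = librosa.feature.melspectrogram(
        y=pcg, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=n_mels, power=2.0
    )
    linear = S / (S.max() + 1e-12)                     # linear-scale channel
    logarithmic = librosa.power_to_db(S, ref=np.max)   # log-scale (dB) channel
    return np.stack([linear, logarithmic], axis=0)     # shape: (2, n_mels, frames)

# Example with a synthetic segment standing in for a heart-sound recording:
x = np.random.default_rng(1).standard_normal(2000 * 5).astype(np.float32)
print(linear_log_features(x).shape)
```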


Subject(s)
Heart Sounds; Signal Processing, Computer-Assisted; Sound Recordings; Neural Networks, Computer; Auscultation
7.
IEEE J Biomed Health Inform; 26(4): 1847-1860, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34705660

ABSTRACT

Digital auscultation is a well-known method for assessing lung sounds, but it remains a subjective process in typical practice, relying on human interpretation. Several methods have been presented for detecting or analyzing crackles but are limited in their real-world application because few have been integrated into comprehensive systems or validated on non-ideal data. This work details a complete signal analysis methodology for analyzing crackles in challenging recordings. The procedure comprises five sequential processing blocks: (1) motion artifact detection, (2) a deep learning denoising network, (3) respiratory cycle segmentation, (4) separation of discontinuous adventitious sounds from vesicular sounds, and (5) crackle peak detection. This system uses a collection of new methods and robustness-focused improvements on previous methods to analyze respiratory cycles and the crackles therein. To validate its accuracy, the system is tested on a database of 1000 simulated lung sounds with varying levels of motion artifacts, ambient noise, cycle lengths and crackle intensities, in which the ground truths are exactly known. The system performs with an average F-score of 91.07% for detecting motion artifacts and 94.43% for respiratory cycle extraction, and an overall F-score of 94.08% for detecting the locations of individual crackles. The process also correctly identifies healthy recordings. Preliminary validation is also presented on a small set of 20 patient recordings, on which the system performs comparably. These methods provide quantifiable analysis of respiratory sounds to enable clinicians to distinguish between types of crackles, their timing within the respiratory cycle, and their level of occurrence. Crackles are one of the most common abnormal lung sounds, presenting in multiple cardiorespiratory diseases. These features will contribute to a better understanding of disease severity and progression in an objective, simple and non-invasive way.
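The five-block structure of the pipeline can be expressed as a simple processing skeleton, sketched below. Every stage function (`detect_motion_artifacts`, `denoise`, `segment_respiratory_cycles`, `separate_discontinuous_sounds`, `detect_crackle_peaks`) is a hypothetical placeholder for the corresponding block; only the sequential flow mirrors the description above, not the authors' implementation.

```python
# Skeleton of a five-block crackle-analysis pipeline. All stage functions are
# hypothetical placeholders; only the per-recording sequential flow is shown.
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class CrackleAnalysis:
    artifact_free: bool = True
    cycles: List[Tuple[int, int]] = field(default_factory=list)  # (start, end) samples
    crackle_peaks: List[int] = field(default_factory=list)

def analyze_recording(x: np.ndarray, fs: int) -> CrackleAnalysis:
    result = CrackleAnalysis()
    if detect_motion_artifacts(x, fs):                  # block 1 (placeholder)
        result.artifact_free = False
        return result
    x = denoise(x, fs)                                  # block 2: learned denoiser (placeholder)
    result.cycles = segment_respiratory_cycles(x, fs)   # block 3 (placeholder)
    for start, end in result.cycles:
        das = separate_discontinuous_sounds(x[start:end], fs)  # block 4 (placeholder)
        peaks = detect_crackle_peaks(das, fs)                  # block 5 (placeholder)
        result.crackle_peaks.extend(start + p for p in peaks)
    return result
```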


Subject(s)
Respiratory Sounds; Signal Processing, Computer-Assisted; Auscultation/methods; Humans; Lung; Respiratory Rate
8.
J Glob Health; 12: 04033, 2022.
Article in English | MEDLINE | ID: mdl-35493777

ABSTRACT

Background: Frontline health care workers use the World Health Organization Integrated Management of Childhood Illness (IMCI) guidelines for child pneumonia care in low-resource settings. The IMCI pneumonia diagnostic criterion performs with low specificity, resulting in antibiotic overtreatment. Digital auscultation with automated lung sound analysis may improve the diagnostic performance of the IMCI pneumonia guidelines. This systematic review aims to summarize the evidence on detecting adventitious lung sounds by digital auscultation with automated analysis, compared to reference physician acoustic analysis, for child pneumonia diagnosis. Methods: Articles were searched in MEDLINE, Embase, CINAHL Plus, Web of Science, Global Health, IEEE Xplore, Scopus, and ClinicalTrials.gov from the inception of each database to October 27, 2021, and the reference lists of selected studies and relevant review articles were searched manually. Studies reporting the diagnostic performance of digital auscultation and/or computerized lung sound analysis compared against physicians' acoustic analysis for pneumonia diagnosis in children under the age of 5 were eligible for this systematic review. Retrieved citations were screened and eligible studies were included for extraction. Risk of bias was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. All these steps were performed independently by two authors, and disagreements between the reviewers were resolved through discussion with an arbiter. Narrative data synthesis was performed. Results: A total of 3801 citations were screened and 46 full-text articles were assessed; 10 studies met the inclusion criteria. Half of the studies used a publicly available respiratory sound database to evaluate their proposed work. The reported methodologies/approaches and performance metrics for classifying adventitious lung sounds varied widely across the included studies. All included studies except one reported the overall diagnostic performance of digital auscultation/computerized sound analysis in distinguishing adventitious lung sounds, irrespective of the disease condition or age of the participants. The reported accuracies for classifying adventitious lung sounds in the included studies varied from 66.3% to 100%. However, it remained unclear to what extent these results would be applicable to classifying adventitious lung sounds in children with pneumonia. Conclusions: This systematic review found very limited evidence on the diagnostic performance of digital auscultation to diagnose pneumonia in children. Well-designed studies and robust reporting are required to evaluate the accuracy of digital auscultation in the paediatric population.


Subject(s)
Pneumonia; Respiratory Sounds; Auscultation; Child; Humans; Lung; Pneumonia/diagnosis; Respiratory Sounds/diagnosis
9.
BMJ Open; 12(2): e059630, 2022 Feb 09.
Article in English | MEDLINE | ID: mdl-35140164

ABSTRACT

INTRODUCTION: The WHO's Integrated Management of Childhood Illness (IMCI) algorithm relies on counting the respiratory rate and observing respiratory distress to diagnose childhood pneumonia. The IMCI case definition for pneumonia performs with high sensitivity but low specificity, leading to overdiagnosis of child pneumonia and unnecessary antibiotic use. Including lung auscultation in IMCI could improve the specificity of pneumonia diagnosis. Our objectives are: (1) to assess the quality of lung sound recordings made by primary healthcare workers (HCWs) from under-5 children with the Feelix Smart Stethoscope and (2) to determine the reliability and performance of recorded lung sound interpretations by an automated algorithm compared with reference paediatrician interpretations. METHODS AND ANALYSIS: In a cross-sectional design, community HCWs will record the lung sounds of ~1000 under-5-year-old children with suspected pneumonia at first-level facilities in Zakiganj subdistrict, Sylhet, Bangladesh. Enrolled children will be evaluated for pneumonia, including oxygen saturation, and have their lung sounds recorded by the Feelix Smart Stethoscope at four sequential chest locations: two back and two front positions. A novel sound-filtering algorithm will be applied to the recordings to address ambient noise and optimise recording quality. Recorded sounds will be assessed against a predefined quality threshold. A trained paediatric listening panel will classify recordings into one of the following categories: normal, crackles, wheeze, crackles and wheeze, or uninterpretable. All sound files will be classified into the same categories by the automated algorithm and compared with the panel classifications. Sensitivity, specificity and predictive values of the automated algorithm will be assessed, considering the panel's final interpretation as the gold standard. ETHICS AND DISSEMINATION: The study protocol was approved by the National Research Ethics Committee of the Bangladesh Medical Research Council, Bangladesh (registration number: 09630012018) and the Academic and Clinical Central Office for Research and Development Medical Research Ethics Committee, Edinburgh, UK (REC reference: 18-HV-051). Dissemination will be through conference presentations, peer-reviewed journals and stakeholder engagement meetings in Bangladesh. TRIAL REGISTRATION NUMBER: NCT03959956.
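The planned evaluation against the listening panel amounts to standard diagnostic metrics, as in the short sketch below, which treats the panel labels as the gold standard and computes sensitivity, specificity, PPV, and NPV for one category. The label names and the one-vs-rest framing are assumptions for illustration, not the protocol's prespecified analysis.

```python
# Sketch of sensitivity/specificity/predictive values of automated labels
# against panel labels (gold standard), one category vs the rest.
from sklearn.metrics import confusion_matrix

def diagnostic_metrics(panel_labels, algo_labels, positive="crackles"):
    y_true = [lab == positive for lab in panel_labels]
    y_pred = [lab == positive for lab in algo_labels]
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[False, True]).ravel()
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Toy example with made-up labels:
print(diagnostic_metrics(
    ["normal", "crackles", "wheeze", "crackles"],
    ["normal", "crackles", "crackles", "normal"],
))
```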


Subject(s)
Pneumonia; Respiratory Sounds; Auscultation; Bangladesh; Child, Preschool; Clinical Protocols; Cross-Sectional Studies; Humans; Infant; Pneumonia/diagnosis; Reproducibility of Results; Respiratory Sounds/diagnosis
10.
IEEE J Biomed Health Inform; 25(7): 2583-2594, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33534721

ABSTRACT

Chest auscultation is a widely used clinical tool for respiratory disease detection. The stethoscope has undergone a number of transformative enhancements since its invention, including the introduction of electronic systems in the last two decades. Nevertheless, stethoscopes remain riddled with a number of issues that limit their signal quality and diagnostic capability, rendering both traditional and electronic stethoscopes unusable in noisy or non-traditional environments (e.g., emergency rooms, rural clinics, ambulatory vehicles). This work outlines the design and validation of an advanced electronic stethoscope that dramatically reduces external noise contamination through hardware redesign and real-time, dynamic signal processing. The proposed system takes advantage of an acoustic sensor array, an external-facing microphone, and on-board processing to perform adaptive noise suppression. The proposed system is objectively compared to six commercially available acoustic and electronic devices in varying levels of simulated noisy clinical settings and quantified using two metrics that reflect perceptual audibility and statistical similarity: the normalized covariance measure (NCM) and the magnitude squared coherence (MSC). The analyses highlight the major limitations of current stethoscopes and the significant improvements the proposed system makes in challenging settings by minimizing both distortion of lung sounds and contamination by ambient noise.
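Of the two metrics named in this abstract, magnitude squared coherence is straightforward to estimate with standard tools; the sketch below averages scipy's coherence estimate over an assumed 100-1000 Hz band between a reference signal and a noisy recording. The band limits and synthetic example are illustrative, and the NCM metric is not reproduced here.

```python
# Hedged sketch of a mean magnitude-squared-coherence (MSC) score between a
# reference lung-sound signal and a recording made in noise.
import numpy as np
from scipy.signal import coherence

def mean_msc(reference, recorded, fs, fmin=100.0, fmax=1000.0, nperseg=1024):
    f, cxy = coherence(reference, recorded, fs=fs, nperseg=nperseg)
    band = (f >= fmin) & (f <= fmax)
    return float(np.mean(cxy[band]))

# Synthetic example: the "recording" is the reference plus broadband noise.
fs = 4000
rng = np.random.default_rng(2)
ref = rng.standard_normal(fs * 10)
rec = ref + 0.8 * rng.standard_normal(fs * 10)
print(f"Mean MSC (100-1000 Hz): {mean_msc(ref, rec, fs):.2f}")
```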


Subject(s)
Auscultation; Stethoscopes; Humans; Lung; Noise; Respiratory Sounds
11.
IEEE J Biomed Health Inform; 25(5): 1542-1549, 2021 May.
Article in English | MEDLINE | ID: mdl-32870803

ABSTRACT

Electronic stethoscopes offer several advantages over conventional acoustic stethoscopes, including noise reduction, increased amplification, and the ability to store and transmit sounds. However, the acoustical characteristics of electronic and acoustic stethoscopes can differ significantly, introducing a barrier for clinicians transitioning to electronic stethoscopes. This work proposes a method to process lung sounds recorded by an electronic stethoscope such that the sounds are perceived to have been captured by an acoustic stethoscope. The proposed method calculates an electronic-to-acoustic stethoscope filter by measuring the difference between the average frequency responses of an acoustic and an electronic stethoscope to multiple lung sounds. To validate the method, a change detection experiment was conducted with 51 medical professionals to compare filtered electronic, unfiltered electronic, and acoustic stethoscope lung sounds. Participants were asked to detect when transitions occurred in sounds comprising several sections of the three types of recordings. Transitions between the filtered electronic and acoustic stethoscope sections were detected, on average, at chance level (sensitivity index equal to zero) and were detected significantly less often than transitions between the unfiltered electronic and acoustic stethoscope sections, demonstrating the effectiveness of the method in filtering electronic stethoscope sounds to mimic an acoustic stethoscope. This processing could incentivize clinicians to adopt electronic stethoscopes by providing a means to shift between the sound characteristics of acoustic and electronic stethoscopes in a single device, allowing for a faster transition to the new technology and greater appreciation of the electronic sound quality.
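The filter-design idea can be sketched as follows: estimate average spectra for recordings from each stethoscope type, form the amplitude ratio, and fit a linear-phase FIR filter to it. The use of Welch spectra and `firwin2`, the tap count, and the smoothing choices are assumptions for illustration, not the authors' exact procedure.

```python
# Minimal sketch of an electronic-to-acoustic "timbre-matching" FIR filter
# derived from the ratio of average magnitude spectra of the two stethoscopes.
import numpy as np
from scipy.signal import welch, firwin2, lfilter

def electronic_to_acoustic_filter(acoustic_recs, electronic_recs, fs, numtaps=513):
    """FIR filter whose magnitude response maps electronic -> acoustic timbre."""
    def avg_psd(recs):
        psds = [welch(x, fs=fs, nperseg=2048)[1] for x in recs]
        return np.mean(psds, axis=0)

    f = welch(acoustic_recs[0], fs=fs, nperseg=2048)[0]
    gain = np.sqrt(avg_psd(acoustic_recs) / (avg_psd(electronic_recs) + 1e-12))
    gain[0] = gain[1]                      # avoid a spurious spike at DC
    return firwin2(numtaps, f, gain, fs=fs)

# Usage (with lists of 1-D numpy recordings sharing the same sample rate fs):
# taps = electronic_to_acoustic_filter(acoustic_recs, electronic_recs, fs)
# filtered = lfilter(taps, 1.0, electronic_lung_sound)
```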


Subject(s)
Auscultation; Electronics; Stethoscopes; Acoustics; Humans; Respiratory Sounds