Results 1 - 20 of 32
1.
Sensors (Basel) ; 24(12)2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38931497

ABSTRACT

Depression is a major psychological disorder with a growing impact worldwide. Traditional methods for detecting the risk of depression, predominantly reliant on psychiatric evaluations and self-assessment questionnaires, are often criticized for their inefficiency and lack of objectivity. Advancements in deep learning have paved the way for innovations in depression risk detection methods that fuse multimodal data. This paper introduces a novel framework, the Audio, Video, and Text Fusion-Three Branch Network (AVTF-TBN), designed to amalgamate auditory, visual, and textual cues for a comprehensive analysis of depression risk. Our approach encompasses three dedicated branches-Audio Branch, Video Branch, and Text Branch-each responsible for extracting salient features from the corresponding modality. These features are subsequently fused through a multimodal fusion (MMF) module, yielding a robust feature vector that feeds into a predictive modeling layer. To further our research, we devised an emotion elicitation paradigm based on two distinct tasks-reading and interviewing-implemented to gather a rich, sensor-based depression risk detection dataset. The sensory equipment, such as cameras, captures subtle facial expressions and vocal characteristics essential for our analysis. The research thoroughly investigates the data generated by varying emotional stimuli and evaluates the contribution of different tasks to emotion evocation. During the experiment, the AVTF-TBN model has the best performance when the data from the two tasks are simultaneously used for detection, where the F1 Score is 0.78, Precision is 0.76, and Recall is 0.81. Our experimental results confirm the validity of the paradigm and demonstrate the efficacy of the AVTF-TBN model in detecting depression risk, showcasing the crucial role of sensor-based data in mental health detection.
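As a quick sanity check, the reported F1 score is the harmonic mean of the reported precision and recall, and the abstract's figures compose consistently (a minimal sketch; variable names are illustrative):

```python
# F1 is the harmonic mean of precision and recall; the reported
# P = 0.76 and R = 0.81 reproduce the reported F1 = 0.78.
precision, recall = 0.76, 0.81
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 2))  # 0.78
```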


Subjects
Depression , Multimodal Imaging , Depression/diagnosis , Multimodal Imaging/instrumentation , Multimodal Imaging/methods , Risk Factors , Text Messaging , Video Recording , Sound Recording , Humans , Male , Female , Young Adult , Adult , Middle Aged , Datasets as Topic , Emotions , Facial Expression
4.
Infancy ; 29(2): 196-215, 2024.
Article in English | MEDLINE | ID: mdl-38014953

ABSTRACT

There is little systematically collected quantitative empirical data on how much linguistic input children in small-scale societies encounter, with some estimates suggesting low levels of directed speech. We report on an ecologically valid analysis of speech experienced over the course of a day by young children (N = 24, 6-58 months old, 33% female) in a forager-horticulturalist population of lowland Bolivia. A permissive definition of input (i.e., including overlapping, background, and non-linguistic vocalizations) leads to massive changes in input quantity, including a quadrupling of the overall input estimate compared to a restrictive definition (only near and clear speech), while who talked to and around a focal child is relatively stable across input definitions. We discuss implications of these results for theoretical and empirical research into language acquisition.
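The sensitivity of input estimates to the input definition can be sketched as a simple filter over annotated utterances (the annotation categories and durations below are hypothetical, not the study's actual coding scheme):

```python
# Hypothetical utterance annotations for one recording; category names
# are illustrative only.
utterances = [
    {"dur_s": 2.0, "type": "near_clear"},
    {"dur_s": 1.5, "type": "overlapping"},
    {"dur_s": 3.0, "type": "background"},
    {"dur_s": 0.8, "type": "nonlinguistic"},
    {"dur_s": 2.5, "type": "near_clear"},
]

def total_input(utts, permissive):
    # Permissive: count everything; restrictive: near-and-clear speech only.
    allowed = (
        {"near_clear", "overlapping", "background", "nonlinguistic"}
        if permissive else {"near_clear"}
    )
    return sum(u["dur_s"] for u in utts if u["type"] in allowed)

restrictive_total = total_input(utterances, permissive=False)  # 4.5 s
permissive_total = total_input(utterances, permissive=True)    # 9.8 s
```

With made-up numbers like these, the permissive total more than doubles the restrictive one; the study reports roughly a quadrupling on real data.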


Subjects
Farmers , Speech , Child , Humans , Female , Child, Preschool , Infant , Male , Sound Recording , Language Development
5.
Article in English | MEDLINE | ID: mdl-38082867

ABSTRACT

Objective cough sound evaluation is useful in the diagnosis and management of respiratory diseases. However, the performance of cough sound analysis models can degrade in the presence of background noises common in everyday environments. This brings forward the need for cough sound denoising. This work utilizes a method for denoising cough sound recordings using signal processing and machine learning techniques, inspired by research in the field of speech enhancement. It uses supervised learning to find a mapping between the noisy and clean spectra of cough sound signals using a fully connected feed-forward neural network. The method is validated on a dataset of 300 manually annotated cough sound recordings corrupted with babble noise. The effect of various signal processing and neural network parameters on denoising performance is investigated. The method is shown to improve cough sound quality and intelligibility and outperform conventional denoising methods.
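The core idea, mapping noisy short-term spectra to clean spectra with a fully connected network, can be sketched as follows (untrained random weights and illustrative frame sizes; this is not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def stft_mag(x, n_fft=256, hop=128):
    # Magnitude spectrogram via a simple framed FFT with a Hann window.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.asarray(frames), axis=1))  # (T, n_fft//2 + 1)

def denoise_spectra(noisy_logmag, w1, b1, w2, b2):
    # One hidden ReLU layer mapping noisy log-spectra to clean log-spectra.
    h = np.maximum(0.0, noisy_logmag @ w1 + b1)
    return h @ w2 + b2

x = rng.standard_normal(4000)        # stand-in for a noisy cough recording
spec = np.log(stft_mag(x) + 1e-8)    # log-magnitude features, shape (T, 129)
d, hdim = spec.shape[1], 64
w1 = rng.standard_normal((d, hdim)) * 0.1; b1 = np.zeros(hdim)
w2 = rng.standard_normal((hdim, d)) * 0.1; b2 = np.zeros(d)
est_clean = denoise_spectra(spec, w1, b1, w2, b2)
```

In the actual method the weights would be fit by supervised learning on paired noisy/clean cough spectra; here the forward pass only illustrates the shapes involved.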


Subjects
Sound Recording , Speech Intelligibility , Humans , Neural Networks, Computer , Noise , Cough/diagnosis
6.
J Psycholinguist Res ; 52(6): 3001-3017, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37962821

ABSTRACT

This paper studies how different modes of a musical act influence students' psychological state, creative development, and music appreciation. In particular, the research focuses on concert videos, video clips, and audio recordings. Based on Likert-scale responses, the authors determined that video clips had a significant influence on the students' learning process, since they combine visual and sound effects; video concerts were less important. Concerts are mainly staged actions with frequent use of pre-recorded music, which affects the accuracy of singing technique. The authors concluded that the most effective approach is systematic learning that uses the effects of colors and sounds, preceded by an analysis of the musical compositions. The results showed that the largest share of students (87%) significantly improved their knowledge, with an average score of 0.92, and that the elements of a musical act (rhythm, color scheme, text, and performance) influenced their development. The practical significance of the paper lies in the use of learning approaches based on colors and sound effects, with an emphasis on the development of particular elements. Future research will determine how effectively the elements of a musical act influence the psychological state by comparing music genres.


Subjects
Music , Humans , Sound Recording , Students , Learning , Auditory Perception
7.
BMJ Open ; 13(9): e074948, 2023 09 11.
Article in English | MEDLINE | ID: mdl-37696633

ABSTRACT

BACKGROUND: Chronic non-cancer pain (CNCP) treatment's primary goal is to maintain physical and mental functioning while improving quality of life. Opioid use in CNCP patients has increased in recent years, and non-pharmacological interventions such as music listening have been proposed to counter it. Unlike other auditory stimuli, music can activate emotion-regulating and reward-regulating circuits, making it a potential tool to modulate attentional processes and regulate mood. This study's primary objective is to provide the first evidence on the distinct (separate) effects of music listening as a coadjuvant maintenance analgesic treatment in CNCP patients undergoing opioid analgesia. METHODS AND ANALYSIS: This will be a single-centre, phase II, open-label, parallel-group, proof-of-concept randomised clinical trial with CNCP patients under a minimum 4-week regular opioid treatment. We plan to include 70 consecutive patients, who will be randomised (1:1) to either the experimental group (active music listening) or the control group (active audiobook listening). For 28 days, both groups will listen daily (for at least 30 min and up to 1 hour) to preset playlists tailored to individual preferences. Pain intensity scores at each visit, the changes (differences) from baseline and the proportions of responders according to various definitions based on pain intensity differences will be described and compared between study arms. We will apply longitudinal data assessment methods (mixed generalised linear models) taking the patient as a cluster to assess and compare the endpoints' evolution. We will also use the mediation analysis framework to adjust for the effects of additional therapeutic measures and obtain estimates of effect with a causal interpretation.
ETHICS AND DISSEMINATION: The study protocol has been reviewed, and ethics approval has been obtained from the Bellvitge University Hospital Institutional Review Board, L'Hospitalet de Llobregat, Barcelona, Spain. The results from this study will be actively disseminated through manuscript publications and conference presentations. TRIAL REGISTRATION NUMBER: NCT05726266.
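One of the planned endpoints, the proportion of responders under various pain-intensity definitions, can be sketched as follows (the 30% and 50% reduction thresholds are common conventions in pain research, assumed here rather than taken from the protocol, and the scores are made up):

```python
# Hypothetical 0-10 pain scores at baseline and follow-up.
baseline = [8, 6, 7, 5, 9, 6]
week4 = [4, 5, 3, 5, 6, 2]

def responders(base, follow, min_rel_reduction):
    # Count patients whose relative reduction from baseline meets the threshold.
    return sum(
        (b - f) / b >= min_rel_reduction for b, f in zip(base, follow)
    )

r30 = responders(baseline, week4, 0.30)  # moderate responders
r50 = responders(baseline, week4, 0.50)  # substantial responders
```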


Subjects
Cancer Pain , Chronic Pain , Music , Humans , Chronic Pain/drug therapy , Analgesics, Opioid/therapeutic use , Tertiary Care Centers , Quality of Life , Sound Recording , Randomized Controlled Trials as Topic , Clinical Trials, Phase II as Topic
8.
J Am Med Inform Assoc ; 30(10): 1673-1683, 2023 09 25.
Article in English | MEDLINE | ID: mdl-37478477

ABSTRACT

OBJECTIVES: Patient-clinician communication provides valuable explicit and implicit information that may indicate adverse medical conditions and outcomes. However, practical and analytical approaches for audio-recording and analyzing this data stream remain underexplored. This study aimed to (1) analyze patients' and nurses' speech in audio-recorded verbal communication, and (2) develop machine learning (ML) classifiers to effectively differentiate between patient and nurse language. MATERIALS AND METHODS: Pilot studies were conducted at VNS Health, the largest not-for-profit home healthcare agency in the United States, to optimize audio-recording of patient-nurse interactions. We recorded and transcribed 46 interactions, resulting in 3494 "utterances" that were annotated to identify the speaker. We employed natural language processing techniques to generate linguistic features and built various ML classifiers to distinguish between patient and nurse language at both individual and encounter levels. RESULTS: A support vector machine classifier trained on selected linguistic features from term frequency-inverse document frequency, Linguistic Inquiry and Word Count, Word2Vec, and Medical Concepts in the Unified Medical Language System achieved the highest performance, with an AUC-ROC of 99.01 ± 1.97 and an F1-score of 96.82 ± 4.1. The analysis revealed patients' tendency to use informal language and keywords related to "religion," "home," and "money," while nurses used more complex sentences focusing on health-related matters and medical issues and were more likely to ask questions. CONCLUSION: The methods and analytical approach we developed to differentiate patient and nurse language are an important precursor for downstream tasks that aim to analyze patient speech to identify patients at risk of disease and negative health outcomes.
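A minimal sketch of one of the feature families mentioned, term frequency-inverse document frequency, over toy utterances (the tokenization and weighting are simplified; this is not the study's pipeline):

```python
import math
from collections import Counter

# Toy utterances; the content is illustrative only.
docs = [
    "my home and my money",        # informal, patient-like keywords
    "any pain or swelling today",  # nurse-like clinical question
    "the money for the home visit",
]

def tfidf(docs):
    # Smoothed TF-IDF: term frequency times log(N / document frequency) + 1.
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(t for doc in tokenized for t in set(doc))
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) + 1.0 for t in vocab}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append([tf[t] / len(doc) * idf[t] for t in vocab])
    return vocab, vectors

vocab, vectors = tfidf(docs)
```

In the study, vectors like these (alongside LIWC, Word2Vec, and UMLS concept features) would feed a support vector machine for speaker classification.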


Subjects
Language , Sound Recording , Humans , Communication , Linguistics , Machine Learning
9.
J Hosp Palliat Nurs ; 25(5): 271-276, 2023 10 01.
Article in English | MEDLINE | ID: mdl-37347958

ABSTRACT

Stories for Life is a UK charity that provides a free and confidential service for terminally ill patients to audio-record their "life story." Patients are given a copy of the recording and, if they wish, can then pass a copy on to their family and friends. This study explored how a group of terminally ill patients receiving hospice care experienced the process of making a voice recording of their biographies. Interviews were conducted with 5 terminally ill patients and 1 family member. Study participants felt that the trained volunteers who conducted the recordings were neutral, nonjudgmental interviewers. Patients reported a feeling of catharsis while telling their story, as well as being able to reflect on significant life events. However, it was challenging to convey difficult emotions while also being mindful of protecting family who may listen to the recording. Although there was some uncertainty about how the recording would be perceived by listeners, leaving a voice-recorded life account was felt to be beneficial for immediate family members, as well as a way of maintaining a meaningful connection with future generations. Overall, recording an audio biography in terminal illness can allow patients a space for reflection and a meaningful connection with their families.


Subjects
Hospice Care , Humans , Terminally Ill/psychology , Charities , Sound Recording , Family/psychology
10.
J Med Internet Res ; 25: e46216, 2023 06 01.
Article in English | MEDLINE | ID: mdl-37261889

ABSTRACT

BACKGROUND: The growing public interest in and awareness of the significance of sleep is driving demand for sleep monitoring at home. In addition to various commercially available wearable and nearable devices, sound-based sleep staging via deep learning is emerging as a convenient and potentially accurate alternative. However, sound-based sleep staging has only been studied using in-laboratory sound data. Real-world sleep environments (homes) contain abundant background noise, in contrast to quiet, controlled environments such as laboratories. Sound-based sleep staging at home has not been investigated, although it is essential for practical everyday use. The main challenges are the lack of home data annotated with sleep stages and the expected high cost of acquiring enough such data to train a large-scale neural network. OBJECTIVE: This study aims to develop and validate a deep learning method that performs sound-based sleep staging using audio recordings obtained from various uncontrolled home environments. METHODS: To overcome the lack of home data with known sleep stages, we adopted advanced training techniques and combined home data with hospital data. The training of the model consisted of 3 components: (1) the original supervised learning using 812 pairs of hospital polysomnography (PSG) and audio recordings, and the 2 newly adopted components: (2) transfer learning from hospital to home sounds by adding 829 smartphone audio recordings made at home; and (3) consistency training using augmented hospital sound data. Augmented data were created by adding 8255 home noise recordings to hospital audio recordings. In addition, an independent test set was built by collecting 45 pairs of overnight PSG and smartphone audio recordings at home to examine the performance of the trained model. RESULTS: The accuracy of the model on our test set was 76.2% (63.4% for wake, 64.9% for rapid-eye movement [REM], and 83.6% for non-REM). The macro F1-score and mean per-class sensitivity were 0.714 and 0.706, respectively. Performance was robust across demographic groups such as age, gender, BMI, and sleep apnea severity (accuracy 73.4%-79.4%). In an ablation study, we evaluated the contribution of each component. While supervised learning alone achieved an accuracy of 69.2% on home sound data, adding consistency training to supervised learning increased accuracy to a larger degree (+4.3%) than adding transfer learning (+0.1%). The best performance was achieved when both transfer learning and consistency training were adopted (+7.0%). CONCLUSIONS: This study shows that sound-based sleep staging is feasible for home use. By adopting 2 advanced techniques (transfer learning and consistency training), the deep learning model robustly predicts sleep stages from sounds recorded in various uncontrolled home environments, using no special equipment other than a smartphone.
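Two of the reported figures compose consistently, which is worth a quick check (a minimal sketch; the per-class percentages are read here as class-wise sensitivities):

```python
# If the per-class figures are class-wise sensitivities, their mean
# reproduces the reported mean per-class sensitivity (0.706), and the
# combined ablation gain adds up to the reported overall accuracy (76.2%).
wake, rem, nrem = 0.634, 0.649, 0.836
mean_sensitivity = (wake + rem + nrem) / 3  # ~0.706
supervised_only, gain_both = 69.2, 7.0      # percent
combined = supervised_only + gain_both      # 76.2
```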


Subjects
Deep Learning , Smartphone , Humans , Sound Recording , Home Environment , Sleep Stages , Sleep
11.
Ann Behav Med ; 57(9): 753-764, 2023 08 21.
Article in English | MEDLINE | ID: mdl-37178456

ABSTRACT

BACKGROUND: The experience of cancer can create considerable emotional distress for patients and their committed partners. How couples communicate about cancer-related concerns can have important implications for adjustment. However, past research has primarily utilized cross-sectional designs and retrospective self-reports of couple communication. While such work is informative, little is known about how patients and partners express emotion during conversations about cancer, and how these emotional patterns predict individual and relational adjustment. PURPOSE: The current investigation examined how patterns of emotional arousal within couples' communication about cancer were associated with concurrent and prospective individual psychological and relational adjustment. METHODS: At baseline, 133 patients with stage II- breast, lung, or colorectal cancer and their partners completed a conversation about a cancer-related concern. Vocally expressed emotional arousal (f0) was extracted from recorded conversations. Couples completed self-report measures of individual psychological and relational adjustment at baseline and at 4, 8, and 12 months later. RESULTS: Couples who started the conversation higher in f0 (i.e., greater emotional arousal) reported better individual and relational adjustment at baseline. If the non-cancer partner had lower f0 relative to the patient, this predicted worse individual adjustment across follow-up. Additionally, couples who maintained their level of f0, rather than decreasing, later in the conversation reported improvements in individual adjustment across follow-up. CONCLUSIONS: Elevated emotional arousal within a cancer-related conversation may be adaptive for adjustment, as it may reflect greater emotional engagement and processing of an important topic. These results may suggest ways for therapists to guide emotional engagement to enhance resilience in couples experiencing cancer.


Cancer is a stressful experience for patients and their partners. We know that how couples communicate about cancer is important, but we do not know much about how couples express emotion during cancer conversations and how those emotional expressions affect well-being. Our study looked at how couples' emotional arousal within cancer conversations relates to individual and relationship well-being. At the beginning of the study, cancer patients and their partners had a conversation about cancer. Within these conversations, we tracked the emotional arousal expressed in their voices. Couples also completed surveys about their well-being at the beginning of the study and again 4, 8, and 12 months later. We found that couples who started the conversation with higher emotional arousal had better initial well-being. Couples who remained higher in arousal later in the conversation improved in their individual well-being over time. We also found that if the non-cancer partner was low in arousal compared with the patient, this predicted worse well-being over time. More research is needed, but these findings suggest that being emotionally aroused during conversations about important topics like cancer might be helpful for well-being, potentially because couples are discussing concerns and not backing off when it feels challenging.
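Vocally expressed arousal is indexed here by fundamental frequency (f0); a common way to estimate f0 from a short voiced frame is autocorrelation (a minimal sketch, not the study's extraction pipeline):

```python
import numpy as np

def estimate_f0(frame, sr, fmin=75.0, fmax=500.0):
    # Autocorrelation pitch estimate: the strongest lag inside the
    # plausible pitch range corresponds to the fundamental period.
    frame = frame - frame.mean()
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

sr = 8000
t = np.arange(int(0.1 * sr)) / sr
tone = np.sin(2 * np.pi * 220.0 * t)  # synthetic 220 Hz "voiced" frame
f0 = estimate_f0(tone, sr)            # close to 220 Hz
```

Real f0 trackers add voicing detection and smoothing across frames; this sketch only shows the core period-finding step.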


Subjects
Arousal , Communication , Emotional Adjustment , Expressed Emotion , Family Characteristics , Family Relations , Neoplasms , Adult , Aged , Female , Humans , Male , Middle Aged , Family Relations/psychology , Follow-Up Studies , Neoplasms/psychology , Resilience, Psychological , Sound Recording , Voice , Family Support/psychology
12.
Physiol Meas ; 44(4)2023 04 18.
Article in English | MEDLINE | ID: mdl-36975197

ABSTRACT

Objective. Current wearable respiratory monitoring devices provide a basic assessment of the breathing pattern of the examined subjects. More complex monitoring is needed for healthcare applications in patients with lung diseases. A multi-sensor vest allowing continuous lung imaging by electrical impedance tomography (EIT) and auscultation at six chest locations was developed for such advanced application. The aims of our study were to determine the vest's capacity to record the intended bio-signals, its safety, and the comfort of wearing in a first clinical investigation in healthy adult subjects. Approach. Twenty subjects (age range: 23-65 years) were studied while wearing the vests during a 14-step study protocol comprising phases of quiet and deep breathing, slow and forced full expiration manoeuvres, coughing, and breath-holding in seated and three horizontal postures. EIT, chest sound, and accelerometer signals were streamed to a tablet using a dedicated application and uploaded to a back-end server. The subjects filled in a questionnaire on the vest properties using a Likert scale. Main results. All subjects completed the full protocol. Good to excellent EIT waveforms and functional EIT images were obtained in 89% of the subjects. Breathing-pattern- and posture-dependent changes in ventilation distribution were properly detected by EIT. Chest sounds were recorded in all subjects. Detection of audible heart sounds was feasible in 44%-67% of the subjects, depending on the sensor location. Accelerometry correctly identified the posture in all subjects. The vests were safe and their properties were positively rated; thermal and tactile properties achieved the highest scores. Significance. The functionality and safety of the studied wearable multi-sensor vest and the high level of its acceptance by the study participants were confirmed. Availability of personalized vests might further advance performance by improving the sensor-skin contact.
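Posture identification from a chest-worn accelerometer typically reduces to finding the axis dominated by gravity; a minimal heuristic sketch (the axis conventions below are assumed, not those of the actual vest):

```python
# Gravity (~9.81 m/s^2) dominates the static accelerometer reading.
# Assumed convention: y runs along the body's long axis, so gravity on
# y implies an upright trunk; otherwise the subject is horizontal.
def classify_posture(acc_xyz):
    x, y, z = acc_xyz
    if abs(y) > abs(x) and abs(y) > abs(z):
        return "upright"
    return "horizontal"

seated = classify_posture((0.3, 9.7, 0.8))  # gravity along the long axis
supine = classify_posture((0.2, 0.5, 9.6))  # gravity through the chest
```

A deployed system would low-pass filter the signal first and distinguish the three horizontal postures by the sign and distribution of the remaining axes.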


Subjects
Sound Recording , Wearable Electronic Devices , Adult , Humans , Young Adult , Middle Aged , Aged , Healthy Volunteers , Lung/diagnostic imaging , Monitoring, Physiologic , Electric Impedance , Tomography/methods
13.
J Voice ; 37(4): 546-552, 2023 Jul.
Article in English | MEDLINE | ID: mdl-34049760

ABSTRACT

OBJECTIVES: Normative data are important in the clinical setting of Speech and Language Pathology. The purpose of this study was to develop a normative reference dataset of voice range profiles from young females. STUDY DESIGN: Descriptive study including a prospective collection of voice range profile data. METHODS: Voice range profiles were recorded from 39 females with healthy voices aged 18 to 28 years. Seven voice range profile variables were analyzed: minimum and maximum fundamental frequency and intensity, semitone and intensity ranges, and voice range profile area. Descriptive statistical methods were applied. RESULTS: An age-specific voice range profile normative dataset was established. The mean values and standard deviations were as follows: semitone range 34.7 ± 3.9 ST, minimum fundamental frequency 143.6 ± 21.7 Hz, maximum fundamental frequency 1063.5 ± 160 Hz, intensity range 65.6 ± 5.0 dB, minimum intensity 43.2 ± 2.5 dB SPL, maximum intensity 108.9 ± 5.1 dB SPL, and voice range profile area 1346 ± 222 cells. CONCLUSION: A normative dataset usable for the optimization of future voice assessments has been established. It may especially benefit evaluation and treatment planning for younger females suffering from vocal fold nodules.
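The reported mean frequency extremes are roughly consistent with the reported mean semitone range, since the interval in semitones between two frequencies is 12·log2(f_max/f_min) (group means need not compose exactly):

```python
import math

# Interval in semitones between the reported mean minimum and maximum
# fundamental frequencies; lands close to the reported 34.7 ST range.
f_min, f_max = 143.6, 1063.5  # Hz (reported group means)
semitones = 12 * math.log2(f_max / f_min)  # ~34.7
```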


Subjects
Speech Acoustics , Speech-Language Pathology , Voice Quality , Female , Humans , Prospective Studies , Speech-Language Pathology/statistics & numerical data , Adolescent , Young Adult , Adult , Reference Values , Voice Quality/physiology , Datasets as Topic , Sound Recording
14.
IEEE Trans Biomed Eng ; 70(5): 1436-1446, 2023 05.
Article in English | MEDLINE | ID: mdl-36301781

ABSTRACT

OBJECTIVE: Doppler ultrasound (DU) is used to detect venous gas emboli (VGE) post-dive as a marker of decompression stress, both for diving physiology research and for validating new decompression procedures to minimize decompression sickness risk. In this article, we propose the first deep learning model for VGE grading in DU audio recordings. METHODS: A database of real-world data was assembled and labeled for the purpose of developing the algorithm, totaling 274 recordings comprising both subclavian and precordial measurements. Synthetic data was also generated by acquiring baseline DU signals from human volunteers and superimposing laboratory-acquired DU signals of bubbles flowing in a tissue-mimicking material. A novel squeeze-and-excitation deep learning model was designed to effectively classify recordings on the 5-class Spencer scoring system used by trained human raters. RESULTS: On the real-data test set, we show that synthetic-data pretraining achieves an average ordinal accuracy of 84.9% for precordial and 90.4% for subclavian DU, a 24.6% and 26.2% increase, respectively, over training with real data and time-series augmentation only. The weighted kappa coefficients of agreement between the model and human ground truth were 0.74 and 0.69 for precordial and subclavian, respectively, indicating substantial agreement similar to human inter-rater agreement for this type of data. CONCLUSION: The present work demonstrates the first application of deep learning for DU VGE grading using a combination of synthetic and real-world data. SIGNIFICANCE: The proposed method can contribute to accelerating DU analysis for decompression research.
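Agreement on an ordinal scale like the 5-class Spencer grades is commonly summarized with a weighted kappa; a sketch with quadratic weights (the abstract does not state which weighting scheme was used):

```python
import numpy as np

def weighted_kappa(conf):
    # Quadratic-weighted Cohen's kappa from a rater-vs-model confusion
    # matrix: 1 - (observed weighted disagreement / expected by chance).
    n = conf.shape[0]
    total = conf.sum()
    w = np.array([[(i - j) ** 2 / (n - 1) ** 2 for j in range(n)]
                  for i in range(n)])
    expected = np.outer(conf.sum(axis=1), conf.sum(axis=0)) / total
    return 1.0 - (w * conf).sum() / (w * expected).sum()

perfect = np.eye(5) * 10  # 5 Spencer grades, full agreement
kappa_perfect = weighted_kappa(perfect)  # 1.0
```

Quadratic weights penalize a two-grade disagreement four times as heavily as a one-grade disagreement, which suits ordinal bubble grades.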


Subjects
Decompression Sickness , Deep Learning , Embolism, Air , Humans , Sound Recording , Embolism, Air/diagnostic imaging , Ultrasonography, Doppler
15.
J Forensic Sci ; 68(1): 139-153, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36273272

ABSTRACT

The number of smartwatch users has been increasing rapidly in recent years. A smartwatch is a wearable device that collects various types of data using sensors and provides basic functions, such as healthcare-related measurements and audio recording. In this study, we propose a forensic authentication method for audio recordings from the Voice Recorder application in the Samsung Galaxy Watch4 series. First, a total of 240 audio recordings from each of the four different models, paired with four different smartphones for synchronization via Bluetooth, were collected and verified. To analyze the characteristics of smartwatch audio recordings, we examined the differences in audio latency, writable audio bandwidth, timestamps, and file structure between recordings generated on the smartwatches and those edited using the Voice Recorder application of the paired smartphones. In addition, the devices holding the audio recordings were examined via the Android Debug Bridge (ADB) tool and compared with the timestamps stored in the file system. The experimental results showed that the audio latency, writable audio bandwidth, and file structure of audio recordings generated by smartwatches differed from those generated by smartphones. Additionally, by analyzing the file structure, audio recordings can be classified as unmanipulated, manipulation-attempted, or manipulated. Finally, we can forensically authenticate the audio recordings generated by the Voice Recorder application in the Samsung Galaxy Watch4 series by accessing the smartwatches and analyzing the timestamps related to the audio recordings in the file system.


Assuntos
Gravação de Som , Dispositivos Eletrônicos Vestíveis , Smartphone , Medicina Legal
16.
Technol Cult ; 64(1): 172-201, 2023.
Article in English | MEDLINE | ID: mdl-38588171

ABSTRACT

When Thomas Edison handed over his 1877 invention of the phonograph, the new sound recording technology, to a group of investors to market across the U.S., the company lacked the proper expertise, manufacturing, and supply networks to do so. This article traces one company's struggle in dealing with the recurring malfunction of exhibition phonographs, which shaped how audiences came to view the innovation. In doing so, the article revisits the issue of technological determinism as parsed by scholars such as Bruno Latour, Robert Heilbroner, Thomas Hughes, Donald MacKenzie, and Judy Wajcman. Investigating under what circumstances an innovation enjoys more (or less) autonomy vis-à-vis social forces, the case study suggests that scholars should make more granular assessments of how technology and society impact each other.


Subjects
Inventions , Technology , Sound Recording , Commerce
17.
Artif Intell Med ; 133: 102417, 2022 11.
Article in English | MEDLINE | ID: mdl-36328670

ABSTRACT

Cardiac auscultation is an essential point-of-care method used for the early diagnosis of heart diseases. Automatic analysis of heart sounds for abnormality detection faces the challenges of additive noise and sensor-dependent degradation. This paper aims to develop methods that address the cardiac abnormality detection problem when both of these components are present in the cardiac auscultation sound. We first mathematically analyze the effect of additive noise and convolutional distortion on short-term mel-filterbank energy-based features and a Convolutional Neural Network (CNN) layer. Based on the analysis, we propose a combination of linear and logarithmic spectrogram-image features. These 2D features are provided as input to a residual CNN network (ResNet) for heart sound abnormality detection. Experimental validation is performed first on an open-access, multiclass heart sound dataset, where we analyzed the effect of additive noise by mixing lung sound noise with the recordings. In noisy conditions, the proposed method outperforms one of the best-performing methods in the literature, achieving a Macc (mean of sensitivity and specificity) of 89.55% and an average F-1 score of 82.96% when averaged over all noise levels. Next, we perform heart sound abnormality detection (binary classification) experiments on the 2016 PhysioNet/CinC Challenge dataset, which involves noisy recordings obtained from multiple stethoscope sensors. The proposed method achieves significantly improved results compared to conventional approaches on this dataset, in the presence of both additive noise and channel distortion, with an area under the receiver operating characteristic (ROC) curve (AUC) of 91.36%, an F-1 score of 84.09%, and a Macc of 85.08%. We also show that the proposed method achieves the best mean accuracy across different source domains, including stethoscope and noise variability, demonstrating its effectiveness in different recording conditions.
The proposed combination of linear and logarithmic features along with the ResNet classifier effectively minimizes the impact of background noise and sensor variability for classifying phonocardiogram (PCG) signals. The method thus paves the way toward developing computer-aided cardiac auscultation systems in noisy environments using low-cost stethoscopes.
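The combined linear and logarithmic spectrogram input can be sketched as a two-channel 2D feature (the synthetic signal and frame sizes below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)

def two_channel_spectrogram(x, n_fft=128, hop=64):
    # Stack linear- and log-magnitude spectrograms as two image channels,
    # in the spirit of the combined feature described above.
    win = np.hanning(n_fft)
    frames = np.asarray([x[i:i + n_fft] * win
                         for i in range(0, len(x) - n_fft + 1, hop)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.stack([mag, np.log(mag + 1e-6)])  # (2, T, n_fft//2 + 1)

pcg = rng.standard_normal(2000)  # stand-in for a phonocardiogram segment
features = two_channel_spectrogram(pcg)
```

The two channels expose complementary views: additive noise acts roughly linearly on the magnitude channel, while convolutional (sensor) distortion becomes additive in the log channel, which is what motivates feeding both to the ResNet.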


Subjects
Heart Sounds , Signal Processing, Computer-Assisted , Sound Recording , Neural Networks, Computer , Auscultation
18.
Psico USF ; 27(4): 699-710, Oct.-Dec. 2022. graf
Article in English | LILACS, Index Psicologia - Periódicos | ID: biblio-1422344

ABSTRACT

The present study aimed to analyze the conceptions of a health management team about their relationship with professionals in the services offered by a municipal healthcare network. The Focus Group technique was used for data collection: three groups were conducted, with an average of 12 participants each and an approximate duration of two hours. The IRAMUTEQ software, which allows a lexical analysis of the Descending Hierarchical Classification type, was used for data analysis; five distinct classes were found. The theoretical-philosophical framework of Edgar Morin's Theory of Complexity, which proposes the aspiration to non-reductionist knowledge and the recognition of the incompleteness of any type of knowledge, was used to discuss the results. As final considerations, we understand that the phenomenon addressed in this study consists of multiple factors that recursively affect each other. We stress the discussion of how the territory cuts across the work dynamics of management teams. In addition, we highlight the value of the meeting between different actors as a possibility of genuine openness to difference, toward the collective construction of this health system.


Subjects
Humans , Male , Female , Adult , Middle Aged , Unified Health System , Health Services Administration , Interpersonal Relations , Health Centers , Chi-Square Distribution , Health Personnel/psychology , Focus Groups , Qualitative Research , Sound Recording , Health Policy
19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 820-823, 2022 07.
Article in English | MEDLINE | ID: mdl-36086057

ABSTRACT

In view of using abdominal microphones for fetal heart rate (FHR) monitoring, the analysis of the obtained abdominal phonocardiogram (PCG) signals is complicated by many interfering noises, including blood flow sounds. In order to improve the understanding of abdominal phonocardiography, a preliminary study was conducted in one healthy volunteer and designed to characterize the PCG signals all over the abdomen. Acquisitions of PCG signals in different abdominal areas were performed, synchronously with one thoracic PCG signal and one electrocardiogram signal. The analysis was carried out based on the temporal behavior, amplitude, and mean pattern of each signal. The synchronized rhythmic signature of each signal confirms that the PCG signals obtained on the abdominal area result from heart function. However, the abdominal PCG patterns are totally different from the thoracic PCG one, suggesting the recording of vascular blood flow sounds on the abdomen instead of cardiac valve sounds. Moreover, the abdominal signal magnitude depends on the sensor position and therefore on the size of the underlying vessel. Characterizing the sounds in abdominal PCG signals could help improve the processing of such signals for the purpose of FHR monitoring.
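The "mean pattern" analysis described above can be illustrated by averaging PCG segments time-locked to the synchronously recorded ECG. The following NumPy sketch is a hypothetical illustration of that beat-averaging step; the sampling rate, window bounds, R-peak indices, and toy signal are assumptions, not the study's actual parameters.

```python
import numpy as np

def mean_beat_pattern(pcg, r_peaks, fs, pre_s=0.1, post_s=0.5):
    """Average PCG segments time-locked to ECG R-peak sample indices."""
    pre, post = int(pre_s * fs), int(post_s * fs)
    segments = [pcg[p - pre:p + post] for p in r_peaks
                if p - pre >= 0 and p + post <= len(pcg)]
    return np.mean(segments, axis=0)  # averaging attenuates noise uncorrelated with the beat

# Toy 3 s recording at 1 kHz with hypothetical R-peaks at 0.5, 1.5, and 2.5 s.
fs = 1000
t = np.arange(3 * fs) / fs
toy_pcg = np.sin(2 * np.pi * 40 * t)
pattern = mean_beat_pattern(toy_pcg, [500, 1500, 2500], fs)
print(pattern.shape)  # (600,)
```

Comparing such averaged patterns across sensor positions is one simple way to contrast abdominal and thoracic signal shapes.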


Subjects
Heart Sounds , Sound Recording , Abdomen , Female , Heart/physiology , Heart Sounds/physiology , Humans , Phonocardiography/methods , Pregnancy
20.
Proc Natl Acad Sci U S A ; 119(7)2022 02 15.
Article in English | MEDLINE | ID: mdl-35131939

ABSTRACT

Correctly assessing the total impact of predators on prey population growth rates (lambda, λ) is critical to comprehending the importance of predators in species conservation and wildlife management. Experiments over the past decade have demonstrated that the fear (antipredator responses) predators inspire can affect prey fecundity and early offspring survival in free-living wildlife, but recent reviews have highlighted the absence of evidence experimentally linking such effects to significant impacts on prey population growth. We experimentally manipulated fear in free-living wild songbird populations over three annual breeding seasons by intermittently broadcasting playbacks of either predator or nonpredator vocalizations and comprehensively quantified the effects on all the components of population growth, together with evidence of a transgenerational impact on offspring survival as adults. Fear itself significantly reduced the population growth rate (predator playback mean λ = 0.91, 95% CI = 0.80 to 1.04; nonpredator mean λ = 1.06, 95% CI = 0.96 to 1.16) by causing cumulative, compounding adverse effects on fecundity and every component of offspring survival, resulting in predator playback parents producing 53% fewer recruits to the adult breeding population. Fear itself was consequently projected to halve the population size in just 5 years, or just 4 years when the evidence of a transgenerational impact was additionally considered (λ = 0.85). Our results not only demonstrate that fear itself can significantly impact prey population growth rates in free-living wildlife; comparing them with those from hundreds of predator manipulation experiments also indicates that fear may constitute a very considerable part of the total impact of predators.
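The halving projection above follows from simple geometric growth, N_t = N_0 · λ^t. A quick check using the reported rate with the transgenerational impact included (λ = 0.85) shows the population falling to roughly half its starting size after 4 years; the starting population size is an arbitrary assumption for illustration.

```python
# Geometric projection N_t = N_0 * lambda**t with the growth rate reported
# when the transgenerational impact is included (lambda = 0.85).
lam = 0.85
n0 = 100.0                # arbitrary starting population size (assumption)
n4 = n0 * lam ** 4        # population after 4 breeding seasons
print(round(n4, 1))       # 52.2 -- roughly half the starting population
```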


Subjects
Aging/physiology , Fear/physiology , Songbirds/physiology , Animals , Animals, Wild , British Columbia , Population Growth , Predatory Behavior , Sound Recording , Vocalization, Animal