Results 1 - 5 of 5
1.
Thorax; 77(1): 79-81, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34088787

ABSTRACT

Patients with suspected ventilator-associated lower respiratory tract infection (VA-LRTI) commonly receive unnecessary broad-spectrum antimicrobial therapy. We tested whether exhaled breath analysis can discriminate patients with suspected VA-LRTI and confirmed infection from those with negative cultures. Breath from 108 patients with suspected VA-LRTI was analysed by gas chromatography-mass spectrometry. The breath test had a sensitivity of 98% at a specificity of 49%, confirmed with a second analytical method. The breath test had a negative predictive value of 96% and excluded pneumonia in half of the patients with negative cultures. Trial registration number: UKCRN ID number 19086, registered May 2015.


Subjects
Pneumonia, Ventilator-Associated; Respiratory Tract Infections; Breath Tests; Diagnostic Tests, Routine; Exhalation; Humans; Respiratory Tract Infections/diagnosis; Ventilators, Mechanical
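The three metrics reported in this abstract (sensitivity, specificity, negative predictive value) all follow from a 2x2 confusion matrix. A minimal sketch; the counts below are hypothetical, chosen only so the results approximate the reported 98% / 49% / 96%, and are not the study's actual data:

```python
# Illustrative diagnostic-metric calculation from a hypothetical
# 2x2 confusion matrix (counts are assumptions, not study data).

def diagnostic_metrics(tp, fn, tn, fp):
    """Return sensitivity, specificity, and NPV as fractions."""
    sensitivity = tp / (tp + fn)  # test-positive among confirmed infections
    specificity = tn / (tn + fp)  # test-negative among culture-negative patients
    npv = tn / (tn + fn)          # culture-negative among test-negative patients
    return sensitivity, specificity, npv

sens, spec, npv = diagnostic_metrics(tp=49, fn=1, tn=28, fp=30)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} NPV={npv:.2f}")
```

A high NPV with modest specificity is exactly the profile of a rule-out test: a negative result makes infection unlikely, which is how the breath test could exclude pneumonia in half of the culture-negative patients.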
2.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 298-301, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34891295

ABSTRACT

Unobtrusive measurement of respiration and heart rate in home settings is an important goal for health monitoring. In this work, the use of a pressure-sensitive mat was explored. Two methods using body morphology information, based on the shoulder blades and on a weighted centroid, were developed for respiration rate (RR) calculation. Heart rate (HR) was calculated by combining frequency information from different body regions. Experimental data were collected from 15 participants in a supine position via a pressure-sensitive mat placed under the upper torso. RR and HR estimates derived from accelerometer sensors attached to the participants' bodies were used as references to evaluate the accuracy of the proposed methods. All three methods achieved reasonable estimates relative to the reference. The root mean squared errors of the two proposed RR estimation methods were 1.32 and 0.87 breaths/minute, respectively, and the root mean squared error of the HR estimation method was 5.55 bpm.


Subjects
Respiration; Respiratory Rate; Heart Rate; Humans; Motivation; Torso
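The core of rate estimation from a periodic pressure signal is finding its dominant frequency. A minimal sketch of that step on a synthetic signal; the sampling rate and sine-wave input are assumptions, and the paper's morphology-based channel selection is not reproduced:

```python
# Sketch: estimate a breathing-like rate as the dominant FFT frequency
# of a 1-D signal. Synthetic input is an assumption for illustration.
import numpy as np

def dominant_rate_per_min(signal, fs):
    """Estimate the dominant oscillation rate (cycles/min) of a 1-D signal."""
    sig = signal - signal.mean()               # remove the DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the zero-frequency bin
    return peak * 60.0

fs = 50.0                                      # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)
breathing = np.sin(2 * np.pi * 0.25 * t)       # 0.25 Hz = 15 breaths/min
print(dominant_rate_per_min(breathing, fs))
```

With a 60 s window the frequency resolution is 1 breath/minute, which is on the order of the RR errors the abstract reports.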
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 5146-5149, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33019144

ABSTRACT

We introduce a novel monitoring solution for fluid accumulation in the human body (e.g. internal bleeding), based on observation of a selected energy-describing feature of the ballistocardiogram (BCG) signal. We hypothesize that, because of the additional damping generated by the fluid, BCG signal energy decreases relative to its baseline value. Data were collected from 15 human volunteers via accelerometers attached to the participants' bodies and an electromechanical-film (EMFi) sensor-equipped bed. Fluid accumulation along the gastrointestinal (GI) tract was induced by having participants drink water, and the BCG signal was recorded before and after intake. Based on performance evaluation, we selected a suitable energy feature and sensing channel among those investigated. The chosen feature showed a significant decrease in signal energy from baseline to the after-intake condition (p-value < 0.001) and identified the presence of fluid accumulation with high sensitivity (90% in bed-based and 100% in standing-position monitoring).


Subjects
Ballistocardiography; Drinking; Humans
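The damping hypothesis reduces to comparing a signal-energy feature before and after intake. A minimal sketch, assuming mean squared amplitude as the energy feature and a hypothetical 0.7 amplitude damping factor on a synthetic BCG-like signal (the paper's actual feature and channel selection are not reproduced):

```python
# Sketch: compare mean squared amplitude before/after a damping change.
# The synthetic signal and damping factor are assumptions.
import numpy as np

def signal_energy(x):
    """Mean squared amplitude, a simple energy-describing feature."""
    return float(np.mean(np.square(x)))

rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
baseline = np.sin(2 * np.pi * 1.2 * t) + 0.1 * rng.standard_normal(t.size)
damped = 0.7 * baseline                    # hypothetical fluid damping

drop = 1.0 - signal_energy(damped) / signal_energy(baseline)
print(f"relative energy decrease: {drop:.2f}")
```

Because energy is quadratic in amplitude, even a moderate damping of the signal produces a large relative energy drop, which is what makes the feature sensitive.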
4.
J Acoust Soc Am; 123(6): 4547-58, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18537403

ABSTRACT

Emotional information in speech is commonly described in terms of prosodic features such as F0, duration, and energy. In this paper, the focus is on how F0 characteristics can be used to effectively parametrize emotional quality in speech signals. Using an analysis-by-synthesis approach, the F0 mean, range, and shape properties of emotional utterances are systematically modified. The results show which aspects of the F0 parameter can be modified without causing significant changes in the perception of emotion. To model this behavior, the concept of emotional regions is introduced. Emotional regions represent the variability present in emotional speech and provide a new procedure for studying speech cues for judgments of emotion. The method is applied to F0 but can also be used on other aspects of prosody such as duration or loudness. A statistical analysis of the factors affecting the emotional regions and a discussion of the effects of F0 modifications on the perception of emotion and speech quality are also presented. The results show that F0 range is more important than F0 mean for emotion expression.


Subjects
Affect; Emotions; Speech Perception; Female; Humans; Judgment; Language; Male; Phonation; Speech Acoustics; Speech Intelligibility; Verbal Behavior
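The mean and range manipulations described above amount to shifting an F0 contour's mean and scaling its excursion about that mean. A minimal sketch; the contour values are hypothetical, and the paper's full analysis-by-synthesis resynthesis pipeline is not reproduced:

```python
# Sketch: shift the mean of an F0 contour (Hz) and scale its range.
# The contour values are assumptions for illustration.
import numpy as np

def modify_f0(contour, mean_shift=0.0, range_scale=1.0):
    """Shift the contour's mean and scale its excursion about the new mean."""
    old_mean = contour.mean()
    return (old_mean + mean_shift) + range_scale * (contour - old_mean)

f0 = np.array([180.0, 220.0, 260.0, 210.0, 190.0])  # hypothetical contour, Hz
wider = modify_f0(f0, mean_shift=20.0, range_scale=1.5)
print(wider.mean(), wider.max() - wider.min())
```

Separating the two parameters this way is what allows range and mean to be varied independently, the comparison behind the abstract's finding that range matters more than mean.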
5.
IEEE Trans Vis Comput Graph; 12(6): 1523-34, 2006.
Article in English | MEDLINE | ID: mdl-17073374

ABSTRACT

Synthesizing expressive facial animation is a very challenging topic within the graphics community. In this paper, we present an expressive facial animation synthesis system enabled by automated learning from facial motion capture data. Accurate 3D motions of the markers on the face of a human subject are captured while he/she recites a predesigned corpus, with specific spoken and visual expressions. We present a novel motion capture mining technique that "learns" speech coarticulation models for diphones and triphones from the recorded data. A Phoneme-Independent Expression Eigenspace (PIEES) that encloses the dynamic expression signals is constructed by motion signal processing (phoneme-based time-warping and subtraction) and Principal Component Analysis (PCA) reduction. New expressive facial animations are synthesized as follows: First, the learned coarticulation models are concatenated to synthesize neutral visual speech according to novel speech input, then a texture-synthesis-based approach is used to generate a novel dynamic expression signal from the PIEES model, and finally the synthesized expression signal is blended with the synthesized neutral visual speech to create the final expressive facial animation. Our experiments demonstrate that the system can effectively synthesize realistic expressive facial animation.


Subjects
Artificial Intelligence; Face/physiology; Facial Expression; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Speech Production Measurement/methods; Speech/physiology; Face/anatomy & histology; Humans; Models, Biological
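The PCA-reduction step in the PIEES construction projects high-dimensional marker motion onto a low-dimensional expression eigenspace. A minimal sketch of that step via SVD on random stand-in data; the marker dimensions are assumptions, and the phoneme-based time-warping and subtraction preprocessing are not reproduced:

```python
# Sketch: PCA via SVD, projecting motion frames onto leading components.
# Random data stands in for real marker trajectories.
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X (samples x features) onto the top principal components."""
    Xc = X - X.mean(axis=0)                 # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T         # low-dimensional coordinates

rng = np.random.default_rng(1)
frames = rng.standard_normal((200, 90))     # e.g. 30 markers x 3 coordinates
coords = pca_reduce(frames, n_components=5)
print(coords.shape)
```

Each frame of expression motion is thereby summarized by a handful of eigenspace coordinates, which is what makes blending a synthesized expression signal onto neutral visual speech tractable.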