Results 1 - 7 of 7
1.
Blood Press; 31(1): 288-296, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36266938

ABSTRACT

PURPOSE: Obesity is a clear risk factor for hypertension. Blood pressure (BP) measurement in obese patients may be biased by cuff size and upper arm shape, which can compromise measurement accuracy. This study aimed to assess the accuracy of the OptiBP smartphone application across three body mass index (BMI) categories (normal, overweight and obese). MATERIALS AND METHODS: Participants with a wide range of BP and BMI were recruited at Lausanne University Hospital's hypertension clinic in Switzerland. OptiBP estimated BP by recording an optical signal from light reflected off the participants' fingertips with a smartphone camera. Age, sex and BP distribution were collected to fulfil the AAMI/ESH/ISO universal standards. Auscultatory reference BP and OptiBP BP were measured and compared using the simultaneous opposite-arms method described in the ISO 81060-2:2018 standard. Subgroup analyses were performed for each BMI category. RESULTS: We analyzed 414 recordings from 95 patients, of whom 34 were overweight and 15 were obese. The OptiBP application had a performance acceptance rate of 82%. The mean and standard deviation (SD) of the differences between the optical BP estimates and the auscultatory reference (criterion 1) were within limits in all subgroups: mean SBP differences were 2.08 mmHg (SD 7.58), 1.32 (6.44) and -2.29 (5.62) in the obese, overweight and normal-weight subgroups, respectively. For criterion 2, which assesses precision errors at the individual level, the SD for systolic BP in the obese group slightly exceeded the requirement. CONCLUSION: This study demonstrated that the OptiBP application is easily applicable to overweight and obese participants. Differences between the reference measurements and the OptiBP estimates were within ISO limits (criterion 1). In obese participants, the SD of the mean error was outside criterion 2 limits. Whether the increased bias in obese patients stems from the auscultatory measurement (owing to arm morphology) or from OptiBP itself remains to be studied.
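
As a rough illustration of the accuracy criteria cited above, the Python sketch below checks criterion 1 (mean error within ±5 mmHg and SD within 8 mmHg over all readings, per ISO 81060-2) and criterion 2 (applied to per-subject averaged errors). The criterion 2 limit is derived here from the universal standard's underlying rationale — at least an 85% probability that an individual error falls within 10 mmHg under a normal model — rather than from the standard's lookup table, so treat this as an assumption, not the study's actual analysis code.

```python
import numpy as np
from scipy.stats import norm

def criterion1(errors):
    """ISO 81060-2 criterion 1: across all paired readings, the mean
    device-minus-reference error must be within +/-5 mmHg and its SD
    within 8 mmHg."""
    e = np.asarray(errors, dtype=float)
    return abs(e.mean()) <= 5.0 and e.std(ddof=1) <= 8.0

def criterion2(per_subject_means):
    """Criterion 2 works on per-subject averaged errors. The maximum
    permissible SD depends on the observed mean; here it is derived
    from the (assumed) rationale behind the standard's table: the
    probability of an individual error within 10 mmHg, under a normal
    model, must be at least 85%."""
    m = np.asarray(per_subject_means, dtype=float)
    mu, sd = m.mean(), m.std(ddof=1)
    p_within_10 = norm.cdf(10.0, mu, sd) - norm.cdf(-10.0, mu, sd)
    return p_within_10 >= 0.85

# Synthetic example: 85 subjects x 3 paired readings (errors in mmHg).
rng = np.random.default_rng(42)
errors = rng.normal(2.0, 7.0, size=(85, 3))
print(criterion1(errors.ravel()), criterion2(errors.mean(axis=1)))
```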


What is the context?
• Hypertension and obesity have a major impact on population health and costs.
• Obesity is a chronic disease characterized by abnormal or excessive fat accumulation.
• Obesity, in combination with other diseases such as hypertension, is a major risk factor for cardiovascular and all-cause death.
• In Europe, the obesity rate is 21.5% for men and 24.5% for women.
• Hypertension, whose prevalence continues to increase, is a risk factor that can be modified when well managed.
• Blood pressure measurement by the usual cuff method may be complicated in obese patients by fat accumulation and arm shape, which can lead to measurement errors. In addition, non-invasive blood pressure measurement can be restrictive and uncomfortable.
What is new?
• Smartphone apps that measure blood pressure without a pressure cuff, using photoplethysmography, are gradually appearing.
• OptiBP is a smartphone application that provides an estimate of blood pressure and has been evaluated in the general population.
• The objective of this study was to assess whether OptiBP is equally effective in obese and overweight patients.
What is the impact?
• Using smartphones to estimate BP in overweight and obese patients may be a solution to the known bias associated with cuff measurement.
• Acquiring more data from a larger number of patients will allow continuous improvement of the application's algorithm.


Subjects
Hypertension, Mobile Applications, Humans, Blood Pressure/physiology, Body Mass Index, Overweight/complications, Blood Pressure Determination/methods, Obesity/complications
2.
J Deaf Stud Deaf Educ; 24(4): 346-355, 2019 Oct 01.
Article in English | MEDLINE | ID: mdl-31271428

ABSTRACT

We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for intensity and signal use in deafness, and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
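
A neutral-to-expression intensity continuum like the one described above can be approximated with simple linear pixel blending; the sketch below is a minimal illustration under that assumption. Published morphing pipelines typically rely on landmark-based warping rather than raw pixel interpolation, and the array names here are stand-ins, not the study's stimuli.

```python
import numpy as np

def intensity_morphs(neutral, expression, levels=10):
    """Blend a neutral face into a full expression by linear pixel
    interpolation, returning images at increasing expression
    intensity (0% -> 100%). Inputs are float arrays of identical
    shape with values in [0, 1]."""
    alphas = np.linspace(0.0, 1.0, levels)
    return [(1.0 - a) * neutral + a * expression for a in alphas]

# Random stand-ins for actual face photographs:
rng = np.random.default_rng(0)
neutral, disgust = rng.random((2, 128, 128))
continuum = intensity_morphs(neutral, disgust, levels=5)
```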


Subjects
Deafness/psychology, Emotions, Facial Expression, Facial Recognition, Adolescent, Adult, Female, Humans, Male, Sign Language, Young Adult
3.
J Deaf Stud Deaf Educ; 23(1): 62-70, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-28977622

ABSTRACT

Previous research has suggested that early deaf signers process faces differently. However, it remains unclear which aspects of face processing change and what role sign language may have played in that change. Here, we compared face categorization (human/non-human) and human face recognition performance in early profoundly deaf signers, hearing signers, and hearing non-signers. In the face categorization task, the three groups performed similarly in terms of both response time and accuracy. In the face recognition task, however, signers (both deaf and hearing) were slower than hearing non-signers to accurately recognize faces, but achieved a higher accuracy rate. We conclude that sign language experience, but not deafness, drives a speed-accuracy trade-off in face recognition (but not in face categorization). This suggests strategic differences in the processing of facial identity for individuals who use a sign language, regardless of their hearing status.


Subjects
Deafness/psychology, Facial Recognition, Sign Language, Adult, Analysis of Variance, Deafness/rehabilitation, Female, Hearing Aids, Humans, Male, Middle Aged, Reaction Time/physiology, Young Adult
4.
Heliyon; 7(5): e07018, 2021 May.
Article in English | MEDLINE | ID: mdl-34041389

ABSTRACT

During real-life interactions, facial expressions of emotion are perceived dynamically with multimodal sensory information. In the absence of auditory input, it is unclear how facial expressions are recognized and internally represented by deaf individuals. Few studies have investigated facial expression recognition in deaf signers using dynamic stimuli, and none have included all six basic facial expressions of emotion (anger, disgust, fear, happiness, sadness, and surprise) with stimuli fully controlled for their low-level visual properties, leaving unresolved the question of whether a dynamic advantage exists for deaf observers. In line with the enhancement hypothesis, we hypothesized that the absence of auditory sensory information might have forced the visual system to better process visual (unimodal) signals, and predicted that this greater sensitivity to visual stimuli would yield better recognition performance for dynamic compared to static stimuli, and for deaf signers compared to hearing non-signers in the dynamic condition. To this end, we performed a series of psychophysical studies with deaf signers with early-onset severe-to-profound deafness (dB loss >70) and hearing controls to estimate their ability to recognize the six basic facial expressions of emotion. Using static, dynamic, and shuffled (randomly permuted video frames of an expression) stimuli, we found that deaf observers showed categorization profiles and confusions across expressions similar to those of hearing controls (e.g., confusing surprise with fear). Contrary to our hypothesis, we found no recognition advantage for dynamic over static facial expressions for deaf observers. This observation shows that decoding of dynamic emotional facial expression signals is not superior even in the expert visual system of deaf observers, suggesting that static facial expressions of emotion at the apex already carry optimal signals. Deaf individuals match hearing individuals in the recognition of facial expressions of emotion.
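
The shuffled condition described above — randomly permuted video frames of an expression — is straightforward to reproduce. Below is a minimal sketch; the clip array is a synthetic stand-in, not the study's stimuli.

```python
import numpy as np

def shuffle_frames(video, rng=None):
    """Return a copy of a video clip (frames, H, W) with its frames in
    random temporal order, preserving every static image while
    destroying coherent motion -- the 'shuffled' control condition."""
    rng = rng or np.random.default_rng()
    return video[rng.permutation(video.shape[0])]

rng = np.random.default_rng(0)
clip = rng.random((30, 64, 64))   # stand-in for a 30-frame expression video
shuffled = shuffle_frames(clip, rng=rng)
```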

5.
Cognition; 191: 103957, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31255921

ABSTRACT

While a substantial body of work has suggested that deafness brings about an increased allocation of visual attention to the periphery, there has been much less work on how using a signed language may also influence this attentional allocation. Signed languages are visual-gestural: they are produced with the body and perceived through the human visual system. Signers fixate on the face of interlocutors and do not look directly at the hands moving in the inferior visual field. It is therefore reasonable to predict that signed languages require a redistribution of covert visual attention to the inferior visual field. Here we report a prospective and statistically powered assessment of the spatial distribution of attention to the inferior and superior visual fields in signers - both deaf and hearing - in a visual search task. Using a Bayesian Hierarchical Drift Diffusion Model, we estimated decision-making parameters for the superior and inferior visual fields in deaf signers, hearing signers and hearing non-signers. Results indicated a greater attentional redistribution toward the inferior visual field in adult signers (both deaf and hearing) than in hearing sign-naïve adults. The effect was smaller for hearing signers than for deaf signers, suggesting a role either for extent of exposure or for greater plasticity of the visual system in the deaf. The data support a process by which the demands of linguistic processing can influence the human attentional system.
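
To make the model's parameters concrete, here is a minimal simulation of the basic drift diffusion process (drift rate v, boundary separation a, non-decision time t0). The hierarchical Bayesian fitting reported above estimates such parameters from observed response times and choices and is beyond this sketch; the parameter values below are purely illustrative.

```python
import numpy as np

def simulate_ddm(v, a, t0, z=0.5, sigma=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one trial of the basic drift diffusion model.
    v: drift rate, a: boundary separation, t0: non-decision time,
    z: relative starting point in (0, 1). Returns (rt, choice), with
    choice 1 for the upper boundary and 0 for the lower one (trials
    hitting max_t are classified by their final position)."""
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a and t < max_t:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t0 + t, int(x >= a)

# Purely illustrative contrast: faster evidence accumulation (higher
# drift rate) for targets in the inferior visual field.
rng = np.random.default_rng(1)
inferior = [simulate_ddm(v=1.2, a=1.5, t0=0.3, rng=rng) for _ in range(200)]
superior = [simulate_ddm(v=0.8, a=1.5, t0=0.3, rng=rng) for _ in range(200)]
```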


Subjects
Deafness/physiopathology, Space Perception/physiology, Visual Perception/physiology, Adult, Attention, Humans, Prospective Studies, Sign Language, Visual Fields
6.
Vision Res; 153: 105-110, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30165056

ABSTRACT

Studies have observed that deaf signers have a larger visual field (VF) than hearing non-signers, with a particularly large extension in the lower part of the VF. This increment could stem from early deafness or from the extensive use of sign language, since the lower VF is critical to perceive and understand linguistic gestures in sign language communication. The aim of the present study was to explore the potential impact of sign language experience, without deafness, on sensitivity within the lower VF. Using a standard Humphrey Visual Field Analyzer, we compared luminance sensitivity in the fovea and between 3 and 27 degrees of visual eccentricity in the upper and lower VF between hearing users of French Sign Language and age-matched hearing non-signers. Sensitivity in the fovea and in the upper VF was similar in both groups. Hearing signers had, however, higher luminance sensitivity than non-signers in the lower VF, but only between 3 and 15°, the visual location of sign language perception. Sign language experience, even without deafness, may therefore modulate VF sensitivity, restricted to the specific location where signs are perceived.


Subjects
Adaptation, Physiological/physiology, Hearing/physiology, Sign Language, Visual Fields/physiology, Adult, Female, Humans, Male, Middle Aged, Young Adult
7.
Appl Bionics Biomech; 2015: 543492, 2015.
Article in English | MEDLINE | ID: mdl-27019586

ABSTRACT

Background. Commercially available depth sensors generate depth images conveying spatial information that humans normally obtain through their eyes and hands. Various designs converting spatial data into sound have recently been proposed, speculating on their applicability as sensory substitution devices (SSDs). Objective. We tested such a design as a travel aid in a navigation task. Methods. Our portable device (MeloSee) converted the 2D array of a depth image into a melody in real time. Distance from the sensor was translated into sound intensity, lateral position into stereo panning, and vertical position into pitch. Twenty-one blindfolded young adults navigated along four different paths during two sessions separated by a one-week interval. In some instances, a dual task required them to recognize a temporal pattern delivered through a tactile vibrator while they navigated. Results. Participants learned to use the system both on new paths and on paths they had already navigated. Based on travel time and errors, performance improved from one week to the next. The dual task was performed successfully, slightly affecting but not preventing effective navigation. Conclusions. The use of Kinect-type sensors to implement SSDs is promising, but such sensors are restricted to indoor use and are inefficient at very short range.
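
The mapping described in the Methods (distance to loudness, lateral position to stereo panning, verticality to pitch) can be sketched in a few lines. All concrete parameter choices below (frequency range, frame duration, maximum range) are assumptions for illustration, not MeloSee's actual settings.

```python
import numpy as np

def depth_to_stereo(depth, f_min=200.0, f_max=2000.0,
                    sr=44100, dur=0.1, max_range=4.0):
    """Sonify one depth frame. Each row maps to a pitch (top rows ->
    higher pitch), each column to a stereo pan, and nearness to
    loudness; pixels beyond max_range are silent. Returns a
    (samples, 2) stereo buffer."""
    rows, cols = depth.shape
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    freqs = np.geomspace(f_max, f_min, rows)    # top row = highest pitch
    pans = np.linspace(0.0, 1.0, cols)          # 0 = hard left, 1 = hard right
    out = np.zeros((t.size, 2))
    for r in range(rows):
        for c in range(cols):
            amp = max(0.0, 1.0 - depth[r, c] / max_range)
            if amp == 0.0:
                continue
            tone = amp * np.sin(2 * np.pi * freqs[r] * t)
            out[:, 0] += (1.0 - pans[c]) * tone
            out[:, 1] += pans[c] * tone
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out      # peak-normalize

# Example: an 8x8 synthetic depth frame (metres) with a nearby obstacle
# on the left, which should sound loud in the left channel.
frame = np.full((8, 8), 3.5)
frame[2:5, 1:3] = 1.0
stereo = depth_to_stereo(frame)
```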
