Results 1 - 8 of 8
1.
Proc Natl Acad Sci U S A ; 121(25): e2311865121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38861610

ABSTRACT

We experience a life that is full of ups and downs. The ability to bounce back after adverse life events such as the loss of a loved one or serious illness declines with age, and such isolated events can even trigger accelerated aging. How humans respond to common day-to-day perturbations is less clear. Here, we infer the aging status from smartphone behavior by using a decision tree regression model trained to accurately estimate the chronological age based on the dynamics of touchscreen interactions. Individuals (N = 280, 21 to 87 y of age) expressed smartphone behavior that appeared younger on certain days and older on other days through the observation period that lasted up to ~4 y. We captured the essence of these fluctuations by leveraging the mathematical concept of critical transitions and tipping points in complex systems. In most individuals, we find one or more alternative stable aging states separated by tipping points. The older the individual, the lower the resilience to forces that push the behavior across the tipping point into an older state. Traditional accounts of aging based on sparse longitudinal data spanning decades suggest a gradual behavioral decline with age. Taken together with our current results, we propose that the gradual age-related changes are interleaved with more complex dynamics at shorter timescales where the same individual may navigate distinct behavioral aging states from one day to the next. Real-world behavioral data modeled as a complex system can transform how we view and study aging.


Subjects
Aging , Smartphone , Humans , Aged , Middle Aged , Male , Adult , Female , Aging/physiology , Aged, 80 and over , Young Adult , Resilience, Psychological
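The record's core pipeline, a decision tree regression model trained to estimate chronological age from touchscreen-interaction dynamics, can be sketched on synthetic data. The feature set, tree depth, and simulated ages below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Hypothetical per-subject features summarizing touchscreen interaction
# dynamics (e.g., typical inter-touch interval, its variability, tap speed).
n_subjects = 280
features = rng.normal(size=(n_subjects, 5))

# Simulated chronological ages loosely coupled to the first feature
# (an assumption made purely so the model has something to learn).
age = np.clip(50 + 10 * features[:, 0] + rng.normal(scale=5, size=n_subjects), 21, 87)

X_train, X_test, y_train, y_test = train_test_split(features, age, random_state=0)
model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X_train, y_train)

# The model's estimate on held-out subjects plays the role of "behavioral age";
# day-to-day fluctuations of this estimate are what the tipping-point analysis
# would then operate on.
behavioral_age = model.predict(X_test)
```

Tracking this estimate per subject over days would yield the fluctuating aging-state time series that the abstract analyzes for tipping points.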
2.
NPJ Digit Med ; 6(1): 49, 2023 Mar 23.
Article in English | MEDLINE | ID: mdl-36959382

ABSTRACT

The idea that abnormal human activities follow multi-day rhythms appears everywhere from ancient beliefs about the moon to modern clinical observations in epilepsy and mood disorders. To explore multi-day rhythms in healthy human behavior, our analysis includes over 300 million smartphone touchscreen interactions logging up to 2 years of day-to-day activities (N = 401 subjects). At the level of each individual, we find a complex expression of multi-day rhythms, where the rhythms occur scattered across diverse smartphone behaviors. With non-negative matrix factorization, we extract the scattered rhythms to reveal periods ranging from 7 to 52 days - cutting across age and gender. The rhythms are likely free-running - rather than being ubiquitously driven by the moon - as they did not show broad population-level synchronization even though the sampled population lived in northern Europe. We propose that multi-day rhythms are a common trait, but their consequences are uniquely experienced in day-to-day behavior.
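The non-negative matrix factorization step can be illustrated on synthetic data: a nonnegative day-by-behavior activity matrix with a planted ~21-day rhythm is factorized with scikit-learn's NMF. The matrix sizes, rhythm period, and component count here are assumptions for illustration, not the study's parameters:

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
n_days, n_behaviors = 365, 8

# Hypothetical daily activity matrix: a ~21-day rhythm scattered across a
# few behavioral measures, plus nonnegative noise everywhere.
t = np.arange(n_days)
rhythm = 1 + np.sin(2 * np.pi * t / 21)
activity = rng.random((n_days, n_behaviors))
activity[:, :3] += rhythm[:, None]

model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(activity)  # day-by-component time courses
H = model.components_              # component loadings over behaviors

# Inspecting the dominant component's time course (e.g., via spectral
# analysis) would then reveal the multi-day period.
```

The attraction of NMF here is that each component's loadings show *which* behaviors carry a shared rhythm, matching the "scattered across diverse smartphone behaviors" observation.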

3.
iScience ; 25(8): 104791, 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-36039357

ABSTRACT

Smartphone touchscreen interactions may help resolve if and how real-world behavioral dynamics are shaped by aging. Here, in a sample spanning the adult life span (16 to 86 years, N = 598, accumulating 355 million interactions), we clustered the smartphone interactions according to their next inter-touch interval dynamics. There were age-related behavioral losses in the clusters occupying short intervals (∼100 ms, R2 ∼ 0.8) but gains at the long intervals (∼4 s, R2 ∼ 0.4). Our approach revealed a sophisticated form of behavioral aging where individuals simultaneously demonstrated accelerated aging in one behavioral cluster and deceleration in another. Contrary to the common notion of a simple behavioral decline with age based on conventional cognitive tests, we show that the nature of aging systematically varies according to the underlying dynamics. Of all the imaginable factors determining smartphone interactions, age-sensitive cognitive and behavioral processes may dominate in shaping smartphone dynamics.
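Clustering interactions by their inter-touch intervals can be sketched as k-means in log-interval space (intervals span orders of magnitude, so log scaling is the natural choice). The two synthetic clusters at ~100 ms and ~4 s mirror the timescales reported above, but the generator and the cluster count are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Hypothetical inter-touch intervals (seconds): a fast ~100 ms cluster
# and a slow ~4 s cluster, mixed together.
fast = rng.lognormal(mean=np.log(0.1), sigma=0.3, size=5000)
slow = rng.lognormal(mean=np.log(4.0), sigma=0.3, size=5000)
intervals = np.concatenate([fast, slow])

# Cluster in log space so that the widely separated timescales are
# treated symmetrically.
X = np.log(intervals).reshape(-1, 1)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Cluster centers mapped back to seconds, sorted fast-to-slow.
centers = np.sort(np.exp(km.cluster_centers_.ravel()))
```

Per-cluster occupancy could then be regressed against age to reproduce the losses-at-short-intervals versus gains-at-long-intervals contrast.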

4.
iScience ; 25(8): 104792, 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-36039359

ABSTRACT

Smartphones offer unique opportunities to trace the convoluted behavioral patterns accompanying healthy aging. Here we captured smartphone touchscreen interactions from a healthy population (N = 684, ∼309 million interactions) spanning 16 to 86 years of age and trained a decision tree regression model to estimate chronological age based on the interactions. The interactions were clustered according to their next interval dynamics to quantify diverse smartphone behaviors. The regression model estimated chronological age well in healthy individuals (mean absolute error = 6 years, R2 = 0.8). We next deployed this model on a population of stroke survivors (N = 41) and found larger prediction errors, such that the estimated age was advanced by 6 years. A similar pattern was observed in people with epilepsy (N = 51), with prediction errors advanced by 10 years. The smartphone behavioral model trained in health can be used to study altered aging in neurological diseases.
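Deploying a health-trained age model on a patient group and reading off the prediction gap (estimated minus chronological age) can be sketched as follows. The features and the induced "older-looking" behavioral shift are simulated assumptions, not the study's data:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(3)

# Train on hypothetical healthy subjects: one feature carries the age signal.
X_healthy = rng.normal(size=(684, 4))
age_healthy = 45 + 12 * X_healthy[:, 0] + rng.normal(scale=4, size=684)
model = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X_healthy, age_healthy)

# Deploy on a hypothetical patient group whose behavior looks "older":
# shift the age-linked feature relative to their true chronological age.
X_patients = rng.normal(size=(41, 4))
X_patients[:, 0] += 0.8
age_patients = 45 + 12 * (X_patients[:, 0] - 0.8) + rng.normal(scale=4, size=41)

# Estimated minus chronological age: a positive gap means the model reads
# the behavior as older than the person actually is.
gap = model.predict(X_patients) - age_patients
mean_gap = gap.mean()
```

The systematic positive mean gap in the patient group is the toy analogue of the 6- and 10-year advances reported above.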

5.
iScience ; 24(6): 102538, 2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34308281

ABSTRACT

A range of abnormal electrical activity patterns termed epileptiform discharges can occur in the brains of persons with epilepsy. These epileptiform discharges can be monitored and recorded with implanted devices that deliver therapeutic neurostimulation. These continuous recordings provide an opportunity to study the behavioral correlates of epileptiform discharges as the patients go about their daily lives. Here, we captured the smartphone touchscreen interactions in eight patients in conjunction with electrographic recordings (accumulating 35,714 h) and, using an artificial neural network model, addressed whether the behavior reflected the epileptiform discharges. The personalized model outputs based on smartphone behavioral inputs corresponded well with the observed electrographic data (R: 0.2-0.6, median 0.4). The realistic reconstructions of epileptiform activity based on smartphone use demonstrate how day-to-day digital behavior may be converted to personalized markers of disease activity in epilepsy.
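As a toy analogue of the study's approach (a neural network mapping smartphone behavior onto recorded epileptiform activity, scored by correlation R), a small regressor can be fit to simulated features and a simulated discharge rate; every detail of the data and architecture here is an assumption for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

# Hypothetical hourly smartphone-behavior features (interaction rate,
# interval statistics, ...) and a simulated nonnegative hourly
# epileptiform-discharge count loosely tied to them.
X = rng.normal(size=(2000, 6))
discharges = np.maximum(0, 3 + 2 * X[:, 0] - X[:, 1] + rng.normal(scale=2, size=2000))

n_train = 1500
mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:n_train], discharges[:n_train])

# Score as in the abstract: correlation R between model output and the
# held-out observed discharge series.
predicted = mlp.predict(X[n_train:])
r = np.corrcoef(predicted, discharges[n_train:])[0, 1]
```

Fitting one such personalized model per patient would yield the distribution of R values the abstract summarizes.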

6.
Front Neurosci ; 14: 637, 2020.
Article in English | MEDLINE | ID: mdl-32903824

ABSTRACT

Hand gestures are a form of non-verbal communication used by individuals in conjunction with speech to communicate. Nowadays, with the increasing use of technology, hand-gesture recognition is considered an important aspect of Human-Machine Interaction (HMI), allowing the machine to capture and interpret the user's intent and to respond accordingly. The ability to discriminate between human gestures can help in several applications, such as assisted living, healthcare, neuro-rehabilitation, and sports. Recently, multi-sensor data fusion mechanisms have been investigated to improve discrimination accuracy. In this paper, we present a sensor fusion framework that integrates complementary systems: the electromyography (EMG) signal from muscles and visual information. This multi-sensor approach, while improving accuracy and robustness, introduces the disadvantage of high computational cost, which grows exponentially with the number of sensors and the number of measurements. Furthermore, this large amount of data to process can affect the classification latency, which can be crucial in real-world scenarios such as prosthetic control. Neuromorphic technologies can be deployed to overcome these limitations since they allow real-time processing in parallel at low power consumption. In this paper, we present a fully neuromorphic sensor fusion approach for hand-gesture recognition comprising an event-based vision sensor and three different neuromorphic processors. In particular, we used the event-based camera, called DVS, and two neuromorphic platforms, Loihi and ODIN + MorphIC. The EMG signals were recorded using traditional electrodes and then converted into spikes to be fed into the chips. We collected a dataset of five gestures from sign language where visual and electromyography signals are synchronized. We compared a fully neuromorphic approach to a baseline implemented using traditional machine learning approaches on a portable GPU system. According to the chips' constraints, we designed specific spiking neural networks (SNNs) for sensor fusion that showed classification accuracy comparable to the software baseline. These neuromorphic alternatives increase inference time by 20-40% with respect to the GPU system but have a significantly smaller energy-delay product (EDP), which makes them between 30× and 600× more efficient. The proposed work represents a new benchmark that moves neuromorphic computing toward a real-world scenario.
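One concrete piece of this pipeline, converting a continuous EMG trace into spikes for a neuromorphic chip, is commonly done by delta modulation (the same scheme an event camera applies per pixel). A minimal sketch, with a sine wave standing in for a filtered EMG trace and an assumed threshold:

```python
import numpy as np

def delta_encode(signal, threshold):
    """Delta-modulation spike encoding: emit an UP (+1) or DOWN (-1) spike
    whenever the signal moves by `threshold` away from the level at which
    the previous spike was emitted."""
    spikes = np.zeros_like(signal, dtype=int)
    level = signal[0]
    for i in range(1, len(signal)):
        if signal[i] - level >= threshold:
            spikes[i] = 1
            level = signal[i]
        elif level - signal[i] >= threshold:
            spikes[i] = -1
            level = signal[i]
    return spikes

t = np.linspace(0, 1, 1000)
emg = np.sin(2 * np.pi * 5 * t)  # stand-in for a preprocessed EMG channel
spikes = delta_encode(emg, threshold=0.1)
```

The resulting UP/DOWN spike trains are what an SNN on a chip like Loihi or ODIN consumes, in place of the sampled waveform.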

7.
Neuroimage ; 223: 117282, 2020 12.
Article in English | MEDLINE | ID: mdl-32828921

ABSTRACT

Hearing-impaired people often struggle to follow the speech stream of an individual talker in noisy environments. Recent studies show that the brain tracks attended speech and that the attended talker can be decoded from neural data on a single-trial level. This raises the possibility of "neuro-steered" hearing devices in which the brain-decoded intention of a hearing-impaired listener is used to enhance the voice of the attended speaker from a speech separation front-end. So far, methods that use this paradigm have focused on optimizing the brain decoding and the acoustic speech separation independently. In this work, we propose a novel framework called brain-informed speech separation (BISS) in which the information about the attended speech, as decoded from the subject's brain, is directly used to perform speech separation in the front-end. We present a deep learning model that uses neural data to extract the clean audio signal that a listener is attending to from a multi-talker speech mixture. We show that the framework can be applied successfully to the decoded output from either invasive intracranial electroencephalography (iEEG) or non-invasive electroencephalography (EEG) recordings from hearing-impaired subjects. It also results in improved speech separation, even in scenes with background noise. The generalization capability of the system renders it a strong candidate for neuro-steered hearing-assistive devices.


Subjects
Brain/physiology , Electroencephalography , Signal Processing, Computer-Assisted , Speech Acoustics , Speech Perception/physiology , Acoustic Stimulation , Adult , Algorithms , Deep Learning , Hearing Loss/physiopathology , Humans , Middle Aged
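BISS itself injects the decoded attended-speech information into the separation front-end; a simpler way to illustrate the ingredients is the baseline it improves on, where a brain-decoded envelope is compared against already-separated sources to pick the attended one. Everything below (envelope model, noise level, two-talker scene) is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical envelopes of two separated talkers, and a noisy
# brain-decoded estimate of the attended talker's envelope.
n = 1000
talker_a = np.abs(rng.normal(size=n))
talker_b = np.abs(rng.normal(size=n))
decoded = talker_a + rng.normal(scale=0.5, size=n)  # listener attends talker A

def pick_attended(decoded_env, candidate_envs):
    """Select the separated source whose envelope correlates best with the
    brain-decoded envelope estimate."""
    scores = [np.corrcoef(decoded_env, c)[0, 1] for c in candidate_envs]
    return int(np.argmax(scores))

attended = pick_attended(decoded, [talker_a, talker_b])
```

In BISS this selection-after-separation step is replaced by conditioning the separation network directly on the decoded envelope, which is what yields the reported gains in noisy scenes.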
8.
Front Neurosci ; 12: 531, 2018.
Article in English | MEDLINE | ID: mdl-30131670

ABSTRACT

The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain-computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models depends on how the model parameters are estimated. A number of model estimation methods have been published, along with a variety of datasets. It is currently unclear whether any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
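A backward (stimulus-reconstruction) model with ridge regularization, the setting where the abstract finds regularization helps, can be sketched on simulated data. The channel count, mixing weights, noise level, and regularization strength are all assumptions for illustration:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(6)
n_samples, n_channels = 3000, 32

# Hypothetical attended-speech envelope, and EEG that mixes it across
# channels under heavy noise -- the backward-model setting.
envelope = np.abs(rng.normal(size=n_samples))
mixing = rng.normal(size=n_channels)
eeg = np.outer(envelope, mixing) + rng.normal(scale=2.0, size=(n_samples, n_channels))

# Backward model: regularized linear map from multi-channel EEG back to
# the attended envelope (time lags omitted for brevity).
n_train = 2000
backward = Ridge(alpha=10.0).fit(eeg[:n_train], envelope[:n_train])

# Reconstruction accuracy on held-out data, scored as a correlation.
reconstructed = backward.predict(eeg[n_train:])
r = np.corrcoef(reconstructed, envelope[n_train:])[0, 1]
```

Attention classification then amounts to comparing this correlation against the one obtained with the unattended talker's envelope; sweeping `alpha` is where the regularization benefit for backward models shows up.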
