Results 1 - 20 of 40
1.
Article in German | MEDLINE | ID: mdl-38197925

ABSTRACT

Digital public health has received a significant boost in recent years, especially due to the demands associated with the COVID-19 pandemic. In this report, we provide an overview of the developments in digitalization in the field of public health in Germany since 2020 and illustrate these with examples from the Leibniz ScienceCampus Digital Public Health Bremen (LSC DiPH). The following topics are central: How do digital survey methods, digital biomarkers, and artificial intelligence methods shape modern epidemiology and prevention research? What is the status of digitalization in public health offices? Which approaches to the health economics evaluation of digital public health interventions have been utilized so far? What is the status of training and further education in digital public health? The first years of the LSC DiPH were also strongly influenced by the COVID-19 pandemic. Repeated population-based digital surveys of the LSC indicated an increase in the use of health apps in the population, for example, applications to support physical activity. The COVID-19 pandemic has also shown that the digitalization of public health increases the risk of misinformation and disinformation.


Subjects
COVID-19, Public Health, Humans, Artificial Intelligence, Pandemics/prevention & control, Germany, COVID-19/epidemiology, COVID-19/prevention & control, Surveys and Questionnaires
2.
Neuroimage ; 269: 119913, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36731812

ABSTRACT

Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate the existence of a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels of the higher behavioral output modes. This provides important insights toward the elusive goal of developing imagined speech decoding models that approach the effectiveness of their better-established overt speech counterparts.
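The nested-subset hierarchy described above can be made concrete with a small illustrative check. The channel sets below are hypothetical examples, not the study's data:

```python
# Illustrative sketch (not the authors' code): verify that each lower
# behavioral-output mode's relevant channels are a subset of the next
# higher mode's channels, i.e. the nested hierarchy described in the abstract.
def is_nested_hierarchy(channel_sets):
    """True if each set is a subset of the next one in the list.
    Order: lowest behavioral output (imagined) first, highest (overt) last."""
    return all(lo <= hi for lo, hi in zip(channel_sets, channel_sets[1:]))

# Hypothetical relevant-channel sets per speech mode
imagined = {"ch3", "ch7"}
mouthed = {"ch3", "ch7", "ch12"}
overt = {"ch1", "ch3", "ch7", "ch12", "ch20"}

print(is_nested_hierarchy([imagined, mouthed, overt]))
```

For sets, Python's `<=` operator tests the subset relation, which matches the hierarchy claim directly.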


Subjects
Brain-Computer Interfaces, Speech, Humans, Speech/physiology, Brain/physiology, Mouth, Face, Electroencephalography/methods
3.
Article in German | MEDLINE | ID: mdl-36650296

ABSTRACT

Artificial intelligence (AI) is becoming increasingly important in healthcare. This development triggers serious concerns that can be summarized by six major "worst-case scenarios". From AI spreading disinformation and propaganda, to a potential new arms race between major powers, to a possible rule of algorithms ("algocracy") based on biased gatekeeper intelligence, the real dangers of an uncontrolled development of AI are by no means to be underestimated, especially in the health sector. However, fear of AI could cause humanity to miss the opportunity to positively shape the development of our society together with an AI that is friendly to us. Use cases in healthcare play a primary role in this discussion, as both the risks and the opportunities of new AI-based systems become particularly clear here. For example, should older people with dementia (PWD) be allowed to entrust aspects of their autonomy to AI-based assistance systems so that they may continue to independently manage other aspects of their daily lives? In this paper, we argue that the classic balancing act between the dangers and opportunities of AI in healthcare can be at least partially overcome by taking a long-term ethical approach toward a symbiotic relationship between humans and AI. We exemplify this approach by showcasing our I-CARE system, an AI-based recommendation system for the tertiary prevention of dementia. This system has been in development since 2015 as the I-CARE project at the University of Bremen, where it is still being researched today.


Subjects
Artificial Intelligence, Dementia, Humans, Aged, Symbiosis, Germany, Delivery of Health Care
4.
Mov Disord ; 37(9): 1798-1802, 2022 09.
Article in English | MEDLINE | ID: mdl-35947366

ABSTRACT

Task-specificity in isolated focal dystonias is a powerful feature that may successfully be targeted with therapeutic brain-computer interfaces. While performing a symptomatic task, the patient actively modulates momentary brain activity (disorder signature) to match activity during an asymptomatic task (target signature), which is expected to translate into symptom reduction.


Subjects
Brain-Computer Interfaces, Dystonic Disorders, Dystonic Disorders/diagnosis, Dystonic Disorders/therapy, Humans
5.
Sensors (Basel) ; 23(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36616723

ABSTRACT

Human activity recognition (HAR) and human behavior recognition (HBR) have been playing increasingly important roles in the digital age [...].


Subjects
Human Activities, Recognition (Psychology), Humans, Technology
6.
Alzheimers Dement ; 17 Suppl 11: e050637, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34971048

ABSTRACT

BACKGROUND: Due to the ongoing pandemic and the resulting community lockdowns, people with dementia and their families might be at risk of social deprivation and increased relationship strain. Technological means have the potential to engage participants in meaningful positive interactions. The tablet-based activation system I-CARE offers social activities specifically designed for people with dementia and their caregivers, offering user-specific contents adapted to their needs and sensitivities. Little is known about the impact of COVID-19 on social health in this population. The ongoing study, presented as part of the Marie-Curie Innovative-Training-Network action (H2020-MSCA-ITN, grant agreement 813196), assesses how COVID-19 has impacted community-dwelling dementia caregiving dyads. Contextual factors of technology use and motivations for inviting technology into social interactions are explored. METHOD: As part of an ongoing pre-post mixed-methods feasibility study, baseline assessments through semi-structured interviews were conducted and subjected to inductive thematic statement analysis by two independent researchers. RESULT: Participants differed in how COVID-19 restrictions impacted their lives and how they coped with dementia, revealing different motivations for inviting technology into their lives. Dyads who were socially active pre-COVID-19, and who managed to use technology to maintain social participation during COVID-19, reported being less negatively impacted by the restrictions. Four subthemes within "Social technology during COVID-19" were identified. CONCLUSION: During and beyond this pandemic, social technology is a valuable tool to promote social participation in this population. Successful uptake of technology depends on customization to individuals' needs and conditions.

7.
Neuroimage ; 125: 172-181, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26458517

ABSTRACT

The retrieval of motor memory requires previous memory encoding and subsequent consolidation of the specific motor memory. Previous work showed that motor memory seems to rely on different memory components (e.g., implicit, explicit). However, it is still unknown whether explicit components contribute to the retrieval of motor memories formed by dynamic adaptation tasks and which neural correlates are linked to memory retrieval. We investigated the lower and higher gamma bands of subjects' electroencephalography during encoding and retrieval of a dynamic adaptation task. A total of 24 subjects were randomly assigned to a treatment group and a control group. Both groups adapted to a force field A on day 1 and were re-exposed to the same force field A on day 3 of the experiment. On day 2, the treatment group learned an interfering force field B, whereas the control group had a day of rest. Kinematic analyses showed that the control group improved their initial motor performance from day 1 to day 3 but the treatment group did not. This behavioral result coincided with increased higher-gamma-band power in the electrodes over prefrontal areas on the initial trials of day 3 for the control but not the treatment group. Intriguingly, this effect vanished with the subsequent re-adaptation on day 3. We suggest that improved re-test performance in a dynamic motor adaptation task is supported by explicit memory and that gamma bands in the electrodes over the prefrontal cortex are linked to these explicit components. Furthermore, we suggest that the contribution of explicit memory vanishes as task automaticity increases with subsequent re-adaptation.


Subjects
Learning/physiology, Movement/physiology, Prefrontal Cortex/physiology, Brain Mapping, Electroencephalography, Humans, Male, Memory/physiology, Memory Consolidation/physiology, Young Adult
8.
IEEE Trans Biomed Eng ; 71(1): 171-182, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37432835

ABSTRACT

OBJECTIVE: Despite recent advances, the decoding of auditory attention from brain signals remains a challenge. A key solution is the extraction of discriminative features from high-dimensional data, such as multi-channel electroencephalography (EEG). However, to our knowledge, topological relationships between individual channels have not yet been considered in any study. In this work, we introduced a novel architecture that exploits the topology of the human brain to perform auditory spatial attention detection (ASAD) from EEG signals. METHODS: We propose EEG-Graph Net, an EEG-graph convolutional network, which employs a neural attention mechanism. This mechanism models the topology of the human brain in terms of the spatial pattern of EEG signals as a graph. In the EEG-Graph, each EEG channel is represented by a node, while the relationship between two EEG channels is represented by an edge between the respective nodes. The convolutional network takes the multi-channel EEG signals as a time series of EEG-graphs and learns the node and edge weights from the contribution of the EEG signals to the ASAD task. The proposed architecture supports the interpretation of the experimental results by data visualization. RESULTS: We conducted experiments on two publicly available databases. The experimental results showed that EEG-Graph Net significantly outperforms the state-of-the-art methods in terms of decoding performance. In addition, the analysis of the learned weight patterns provides insights into the processing of continuous speech in the brain and confirms findings from neuroscientific studies. CONCLUSION: We showed that modeling brain topology with EEG-graphs yields highly competitive results for auditory spatial attention detection. SIGNIFICANCE: The proposed EEG-Graph Net is more lightweight and accurate than competing baselines and provides explanations for the results. Also, the architecture can be easily transferred to other brain-computer interface (BCI) tasks.
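A minimal sketch may help to picture the EEG-graph idea: channels as nodes, edge weights as an adjacency structure, and one message-passing step that mixes each channel's signal with its neighbors'. The channel names, edge weights, and the simple normalized aggregation below are illustrative assumptions, not the EEG-Graph Net architecture itself:

```python
# Sketch of one graph-convolution-like step over an EEG channel graph.
# Each node's output is the weighted average of its own signal and its
# neighbors' signals, using the edge weights as mixing coefficients.
def graph_conv_step(signals, adjacency):
    """signals: dict node -> value; adjacency: dict (u, v) -> edge weight.
    Missing edges have weight 0; self-loops default to weight 1."""
    out = {}
    for u in signals:
        total, norm = 0.0, 0.0
        for v, x in signals.items():
            w = adjacency.get((u, v), 1.0 if u == v else 0.0)
            total += w * x
            norm += w
        out[u] = total / norm if norm else 0.0
    return out

# Hypothetical channel values and symmetric edge weights
signals = {"Fz": 1.0, "Cz": 0.5, "Pz": -0.5}
adjacency = {("Fz", "Cz"): 0.8, ("Cz", "Fz"): 0.8,
             ("Cz", "Pz"): 0.3, ("Pz", "Cz"): 0.3}
print(graph_conv_step(signals, adjacency))
```

In EEG-Graph Net the edge weights are learned from the ASAD task rather than fixed as here, which is what makes the learned adjacency interpretable.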


Subjects
Brain-Computer Interfaces, Neural Networks (Computer), Humans, Algorithms, Electroencephalography/methods, Brain
9.
JMIR Ment Health ; 11: e49222, 2024 Jan 18.
Article in English | MEDLINE | ID: mdl-38236637

ABSTRACT

BACKGROUND: The use of mobile devices to continuously monitor objectively extracted parameters of depressive symptomatology is seen as an important step in the understanding and prevention of upcoming depressive episodes. Speech features such as pitch variability, speech pauses, and speech rate are promising indicators, but empirical evidence is limited, given the variability of study designs. OBJECTIVE: Previous research has found different speech patterns when comparing single speech recordings between patients and healthy controls, but only a few studies have used repeated assessments to compare depressive and nondepressive episodes within the same patient. To our knowledge, no study has used a series of measurements within patients with depression (eg, intensive longitudinal data) to model the dynamic ebb and flow of subjectively reported depression and concomitant speech samples. However, such data are indispensable for detecting and ultimately preventing upcoming episodes. METHODS: In this study, we captured voice samples and momentary affect ratings over the course of 3 weeks in a sample of patients (N=30) with an acute depressive episode receiving inpatient care. Patients underwent sleep deprivation therapy, a chronotherapeutic intervention that can rapidly improve depression symptomatology. We hypothesized that within-person variability in depressive and affective momentary states would be reflected in the following 3 speech features: pitch variability, speech pauses, and speech rate. We parametrized them using the extended Geneva Minimalistic Acoustic Parameter Set (eGeMAPS) from the open-source Speech and Music Interpretation by Large-Space Extraction (openSMILE; audEERING GmbH) toolkit and extracted them from a transcript. We analyzed the speech features along with self-reported momentary affect ratings, using multilevel linear regression analysis, with an average of 32 (SD 19.83) assessments per patient. RESULTS: Analyses revealed that pitch variability, speech pauses, and speech rate were associated with depression severity, positive affect, valence, and energetic arousal; furthermore, speech pauses and speech rate were associated with negative affect, and speech pauses were additionally associated with calmness. Specifically, pitch variability was negatively associated with improved momentary states (ie, lower pitch variability was linked to lower depression severity as well as higher positive affect, valence, and energetic arousal). Speech pauses were negatively associated with improved momentary states, whereas speech rate was positively associated with improved momentary states. CONCLUSIONS: Pitch variability, speech pauses, and speech rate are promising features for the development of clinical prediction technologies to improve patient care as well as timely diagnosis and monitoring of treatment response. Our research is a step forward on the path to developing an automated depression monitoring system, facilitating individually tailored treatments and increased patient empowerment.
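The within-person logic of such intensive longitudinal analyses can be sketched in a few lines. The patient IDs and pause durations below are invented, and the simple per-patient mean-centering stands in for the full multilevel regression model reported above:

```python
# Hedged sketch: isolate within-person variability by centering each
# speech feature on the patient's own mean. Multilevel models typically
# separate such within-person effects from between-person differences.
def within_person_center(values_by_patient):
    """For each patient, subtract that patient's mean from their values."""
    centered = {}
    for patient, values in values_by_patient.items():
        mean = sum(values) / len(values)
        centered[patient] = [v - mean for v in values]
    return centered

# Hypothetical speech-pause durations (s) across repeated assessments
pauses = {"p01": [1.2, 0.8, 0.4], "p02": [2.0, 1.5, 1.0]}
print(within_person_center(pauses))
```

After centering, both hypothetical patients show the same downward within-person trend even though their absolute pause levels differ, which is exactly the distinction a multilevel model exploits.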


Subjects
Depressive Disorder, Speech, Humans, Pilot Projects, Depression/therapy, Sleep Deprivation
10.
Front Physiol ; 14: 1233341, 2023.
Article in English | MEDLINE | ID: mdl-37900945

ABSTRACT

As an important technique for data pre-processing, outlier detection plays a crucial role in various real applications and has gained substantial attention, especially in medical fields. Despite the importance of outlier detection, many existing methods are vulnerable to the distribution of outliers and require prior knowledge, such as the outlier proportion. To address this problem to some extent, this article proposes an adaptive mini-minimum spanning tree-based outlier detection (MMOD) method, which utilizes a novel distance measure by scaling the Euclidean distance. For datasets containing different densities and taking on different shapes, our method can identify outliers without prior knowledge of outlier percentages. The results on both real-world medical data corpora and intuitive synthetic datasets demonstrate the effectiveness of the proposed method compared to state-of-the-art methods.
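The general idea of MST-based outlier detection can be sketched compactly. This is an illustrative baseline using the plain Euclidean distance and a hypothetical threshold factor, not the proposed MMOD method with its scaled distance measure:

```python
# Sketch of minimum-spanning-tree outlier detection: build the MST over
# the complete Euclidean graph, then flag points whose every incident MST
# edge is unusually long relative to the mean edge length.
import math

def prim_mst_edges(points):
    """Prim's algorithm; returns list of (length, u, v) MST edges."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v in in_tree:
                    continue
                d = math.dist(points[u], points[v])
                if best is None or d < best[0]:
                    best = (d, u, v)
        edges.append(best)
        in_tree.add(best[2])
    return edges

def mst_outliers(points, factor=2.0):
    """Indices of points attached to the tree only by long edges."""
    edges = prim_mst_edges(points)
    mean_len = sum(d for d, _, _ in edges) / len(edges)
    incident = {i: [] for i in range(len(points))}
    for d, u, v in edges:
        incident[u].append(d)
        incident[v].append(d)
    return {i for i, ds in incident.items() if min(ds) > factor * mean_len}

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
print(mst_outliers(pts))
```

On this toy data the distant point is the only one whose connection to the tree is far longer than the cluster's internal edges, so it is flagged without specifying an outlier proportion in advance.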

11.
IEEE Trans Biomed Eng ; 69(7): 2233-2242, 2022 07.
Article in English | MEDLINE | ID: mdl-34982671

ABSTRACT

OBJECTIVE: Humans are able to localize the source of a sound, which enables them to direct attention to a particular speaker at a cocktail party. Psycho-acoustic studies show that the sensory cortices of the human brain respond differently to the location of sound sources, and that auditory attention itself is a dynamic, temporally evolving brain activity. In this work, we seek to build a computational model that uses both the spatial and temporal information manifested in EEG signals for auditory spatial attention detection (ASAD). METHODS: We propose an end-to-end spatiotemporal attention network, denoted STAnet, to detect auditory spatial attention from EEG. STAnet is designed to dynamically assign differentiated weights to EEG channels through a spatial attention mechanism, and to temporal patterns in EEG signals through a temporal attention mechanism. RESULTS: We report ASAD experiments on two publicly available datasets. STAnet outperforms competing models by a large margin under various experimental conditions. Its attention decision for a 1-second decision window outperforms that of state-of-the-art techniques for a 10-second decision window. Experimental results also demonstrate that STAnet achieves competitive performance on EEG signals ranging from 64 down to as few as 16 channels. CONCLUSION: This study provides evidence suggesting that efficient low-density EEG online decoding is within reach. SIGNIFICANCE: This study also marks an important step toward the practical implementation of ASAD in real-life applications.
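The spatial-attention mechanism can be illustrated with a minimal sketch: score each EEG channel, softmax the scores into weights, and form a weighted combination of the channel signals. The channel names, signal values, and scores below are illustrative; in STAnet the scores are learned from data:

```python
# Sketch of a spatial attention step over EEG channels: softmax turns
# per-channel scores into weights that sum to 1, emphasizing channels
# deemed most informative for the detection task.
import math

def spatial_attention(channel_signals, scores):
    """Return (weights, weighted sum of signals) for the given scores."""
    exps = {ch: math.exp(s) for ch, s in scores.items()}
    z = sum(exps.values())
    weights = {ch: e / z for ch, e in exps.items()}
    mixed = sum(weights[ch] * x for ch, x in channel_signals.items())
    return weights, mixed

# Hypothetical channel values and attention scores
signals = {"T7": 0.9, "T8": -0.2, "Cz": 0.1}
scores = {"T7": 2.0, "T8": 0.5, "Cz": 0.0}
weights, mixed = spatial_attention(signals, scores)
print(weights)
```

A temporal attention mechanism follows the same pattern, but the softmax runs over time steps instead of channels.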


Subjects
Brain, Electroencephalography, Acoustics, Electroencephalography/methods, Head, Humans, Sound
12.
Biosensors (Basel) ; 12(12)2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36551149

ABSTRACT

Biosignal-based technology has become increasingly available in our daily lives and is a critical source of information. Wearable biosensors have been widely applied in, among other areas, biometrics, sports, health care, rehabilitation assistance, and edutainment. Continuous data collection from biodevices provides a valuable volume of information, which needs to be curated and prepared before serving machine learning applications. One of the universal preparation steps is data segmentation and labelling/annotation. This work proposes a practical and manageable way to automatically segment and label single-channel or multimodal biosignal data using a self-similarity matrix (SSM) computed from a feature-based representation of the signals. Applied to public biosignal datasets and a benchmark for change point detection, the proposed approach provided clear visual support for interpreting the biosignals via the SSM, performed accurate automatic segmentation with the help of the novelty function, and associated segments based on their similarity measures and similarity profiles. The proposed method outperformed other algorithms in most cases of a series of automatic biosignal segmentation tasks; of equal appeal, it provides an intuitive visualization for information retrieval from multimodal biosignals.
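The SSM-plus-novelty pipeline can be illustrated with a toy sketch: compute pairwise similarity between per-window feature vectors, then score novelty along the diagonal; peaks suggest segment boundaries. The feature values and the size-1 checkerboard kernel below are simplifying assumptions, not the paper's implementation:

```python
# Sketch of self-similarity-matrix segmentation: similar windows score
# high (here: negative Euclidean distance), and the novelty function
# peaks where similarity drops across the diagonal, i.e. at boundaries.
def ssm(features):
    """Self-similarity matrix over a list of feature vectors."""
    def sim(a, b):
        return -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return [[sim(a, b) for b in features] for a in features]

def novelty(matrix):
    """Checkerboard-style novelty score with a minimal size-1 kernel."""
    n = len(matrix)
    scores = [0.0] * n
    for i in range(1, n - 1):
        within = matrix[i - 1][i - 1] + matrix[i + 1][i + 1]
        across = 2 * matrix[i - 1][i + 1]
        scores[i] = within - across
    return scores

# Toy 1-D features with an abrupt change mid-sequence
features = [[0.0], [0.1], [0.0], [2.0], [2.1], [2.0]]
scores = novelty(ssm(features))
print(scores.index(max(scores)))
```

The novelty peak lands at the window adjacent to the feature jump; in practice larger checkerboard kernels smooth the score before peak picking.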


Subjects
Algorithms, Medicine, Machine Learning, Information Storage and Retrieval
13.
Article in English | MEDLINE | ID: mdl-36121939

ABSTRACT

Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources for training and execution. A deep learning architecture is presented that learns input bandpass filters capturing task-relevant spectral features directly from the data. Incorporating such explainable feature extraction into the model furthers the goal of creating end-to-end architectures that enable automated subject-specific parameter tuning while yielding an interpretable result. The model is evaluated on intracranial brain data collected during a speech task. Using raw, unprocessed time samples, the model detects the presence of speech at every time sample in a causal manner, suitable for online application. Model performance is comparable or superior to existing approaches that require substantial signal preprocessing, and the learned frequency bands were found to converge to ranges supported by previous studies.
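One common way to make a bandpass filter learnable is to parameterize its FIR kernel by just two cutoff frequencies, in the spirit of sinc-based convolutional layers. The sketch below is an assumption about the general technique, not the paper's implementation, and the cutoffs (70-170 Hz) and sampling rate (1000 Hz) are arbitrary example values:

```python
# Sketch: a band-pass FIR kernel built as the difference of two windowed
# sinc low-pass impulse responses. Only f_low and f_high would be the
# learnable parameters in a sinc-style layer; everything else is fixed.
import math

def sinc_bandpass_kernel(f_low, f_high, length=101, fs=1000.0):
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    mid = length // 2
    kernel = []
    for n in range(length):
        t = (n - mid) / fs
        # difference of two ideal low-pass responses = band-pass
        h = 2 * f_high * sinc(2 * f_high * t) - 2 * f_low * sinc(2 * f_low * t)
        # Hamming window to limit spectral ripple from truncation
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (length - 1))
        kernel.append(h * w / fs)
    return kernel

k = sinc_bandpass_kernel(70.0, 170.0)  # hypothetical high-gamma-like band
print(len(k))
```

Because the kernel depends smoothly on the two cutoffs, gradients can flow through them during training, which is what lets the bands converge to physiologically plausible ranges.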


Subjects
Brain-Computer Interfaces, Deep Learning, Brain, Electrocorticography, Humans, Speech
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 5812-5815, 2021 11.
Article in English | MEDLINE | ID: mdl-34892441

ABSTRACT

Detecting auditory attention based on brain signals enables many everyday applications and serves as part of the solution to the cocktail party effect in speech processing. Several studies leverage the correlation between brain signals and auditory stimuli to detect the auditory attention of listeners. Recent studies show that alpha-band (8-13 Hz) EEG signals enable the localization of auditory stimuli. We believe that it is possible to detect auditory spatial attention without the need for auditory stimuli as references. In this work, we first propose a spectro-spatial feature extraction technique to detect auditory spatial attention (left/right) based on the topographic specificity of alpha power. Experiments show that the proposed neural approach achieves 81.7% and 94.6% accuracy for 1-second and 10-second decision windows, respectively. Our comparative results show that this neural approach outperforms other competitive models by a large margin in all test cases.
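The alpha-power feature underlying this approach can be illustrated with a plain DFT band-power estimate. The synthetic signals and the left/right framing below are toy assumptions, not the study's data or feature pipeline:

```python
# Sketch: estimate 8-13 Hz (alpha) band power of a signal by summing the
# power of DFT bins inside the band, then compare two channels to get a
# lateralization-style feature.
import math

def band_power(signal, fs, f_lo=8.0, f_hi=13.0):
    """Power summed over DFT bins whose frequency lies in [f_lo, f_hi] Hz."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(signal))
            power += (re * re + im * im) / n
    return power

fs, n = 100.0, 200
# Toy "left" channel with a strong 10 Hz alpha rhythm vs. a "right"
# channel carrying only out-of-band 30 Hz activity
alpha = [math.sin(2 * math.pi * 10 * i / fs) for i in range(n)]
other = [0.1 * math.sin(2 * math.pi * 30 * i / fs) for i in range(n)]
print(band_power(alpha, fs) > band_power(other, fs))
```

The sign of the left-minus-right alpha power difference is the kind of topographic cue the proposed spectro-spatial features exploit.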


Subjects
Speech Perception, Electroencephalography, Speech
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 6045-6048, 2021 11.
Article in English | MEDLINE | ID: mdl-34892495

ABSTRACT

Neurological disorders can lead to significant impairments in speech communication and, in severe cases, cause the complete loss of the ability to speak. Brain-computer interfaces have shown promise as an alternative communication modality by directly transforming the neural activity of speech processes into textual or audible representations. Previous studies investigating such speech neuroprostheses relied on electrocorticography (ECoG) or microelectrode arrays that acquire neural signals from superficial areas of the cortex. While both measurement methods have demonstrated successful speech decoding, they do not capture activity from deeper brain structures, and this activity has therefore not been harnessed for speech-related BCIs. In this study, we bridge this gap by adapting a previously presented decoding pipeline for speech synthesis based on ECoG signals to implanted stereotactic EEG (sEEG) depth electrodes. For this purpose, we propose a multi-input convolutional neural network that extracts speech-related activity separately for each electrode shaft and estimates spectral coefficients to reconstruct an audible waveform. We evaluate our approach on open-loop data from 5 patients who performed a recitation task of Dutch utterances. We achieve correlations of up to 0.80 between original and reconstructed speech spectrograms, significantly above chance level for all patients (p < 0.001). Our results indicate that sEEG can yield speech decoding performance similar to prior ECoG studies and is a promising modality for speech BCIs.


Subjects
Brain-Computer Interfaces, Speech, Electrocorticography, Implanted Electrodes, Humans, Neural Networks (Computer)
16.
Geriatrics (Basel) ; 6(2)2021 May 13.
Article in English | MEDLINE | ID: mdl-34068284

ABSTRACT

I-CARE is a hand-held activation system that allows professional and informal caregivers to cognitively and socially activate people with dementia in joint activation sessions without special training or expertise. I-CARE consists of an easy-to-use tablet application that presents activation content and a server-based backend system that securely manages the contents and events of activation sessions. It tracks various sources of explicit and implicit feedback from user interactions and different sensors to estimate which content is successful in activating individual users. Over the course of use, I-CARE's recommendation system learns about the individual needs and resources of its users and automatically personalizes the activation content. In addition, information about past sessions can be retrieved so that activations seamlessly build on previous sessions, while eligible stakeholders are informed about the current state of care and the daily condition of their protégés. Caregivers can also connect with supervisors and professionals through the I-CARE remote calling feature, to have activation sessions tracked in real time via audio and video support. In this way, I-CARE provides technical support for the decentralized and spontaneous formation of ad hoc activation groups and fosters tight engagement of the social network and caring community. By these means, I-CARE promotes new care infrastructures in the community and the neighborhood and relieves professional and informal caregivers.

17.
Commun Biol ; 4(1): 1055, 2021 09 23.
Article in English | MEDLINE | ID: mdl-34556793

ABSTRACT

Speech neuroprosthetics aims to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals with severely limited means of communication. Recent advances in decoding approaches have led to high-quality reconstructions of acoustic speech from invasively measured neural activity. However, most prior research utilizes data collected during open-loop experiments of articulated speech, which might not directly translate to imagined speech processes. Here, we present an approach that synthesizes audible speech in real time for both imagined and whispered speech conditions. Using a participant implanted with stereotactic depth electrodes, we were able to reliably generate audible speech in real time. The decoding models rely predominantly on frontal activity, suggesting that speech processes have similar representations when vocalized, whispered, or imagined. While the reconstructed audio is not yet intelligible, our real-time synthesis approach represents an essential step toward investigating how patients will learn to operate a closed-loop speech neuroprosthesis based on imagined speech.


Subjects
Brain-Computer Interfaces, Implanted Electrodes/statistics & numerical data, Neural Prostheses/statistics & numerical data, Quality of Life, Speech, Female, Humans, Young Adult
18.
Front Neurosci ; 14: 400, 2020.
Article in English | MEDLINE | ID: mdl-32410956

ABSTRACT

The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement are measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis on posed over spontaneous expressions, as well as more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with a stronger emphasis on understanding naturally occurring spontaneous expressions in naturalistic choice settings. We posit that applied consumer research might be better situated to examine facial behavior in socio-emotional contexts rather than in decontextualized laboratory studies, and we highlight how AHAA can be successfully employed in this context. Facial activity should also be considered less as a single outcome variable and more as a starting point for further analyses. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.

19.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3103-3106, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946544

ABSTRACT

Virtual Reality (VR) has emerged as a novel paradigm for immersive applications in training, entertainment, rehabilitation, and other domains. In this paper, we investigate the automatic classification of mental workload from brain activity measured through functional near-infrared spectroscopy (fNIRS) in VR. We present results from a study which implements the established n-back task in an immersive visual scene, including physical interaction. Our results show that user workload can be detected from fNIRS signals in immersive VR tasks both person-dependently and -adaptively.


Subjects
Brain/physiology, Near-Infrared Spectroscopy, Virtual Reality, Workload, Humans, Mental Processes
20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3111-3114, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946546

ABSTRACT

Millions of individuals suffer from impairments that significantly disrupt or completely eliminate their ability to speak. An ideal intervention would restore one's natural ability to physically produce speech. Recent progress has been made in decoding speech-related brain activity to generate synthesized speech. Our vision is to extend these recent advances toward the goal of restoring physical speech production, using decoded speech-related brain activity to modulate the electrical stimulation of the orofacial musculature involved in speech. In this pilot study, we take a step toward this vision by investigating the feasibility of stimulating orofacial muscles during vocalization in order to alter acoustic production. The results of our study provide the necessary foundation for eventual orofacial stimulation controlled directly by decoded speech-related brain activity.


Subjects
Electric Stimulation, Facial Muscles/physiology, Movement, Speech, Brain/physiology, Humans, Pilot Projects