Results 1 - 20 of 88
1.
Mult Scler ; 30(1): 103-112, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38084497

ABSTRACT

INTRODUCTION: Multiple sclerosis (MS) is a leading cause of disability among young adults, but standard clinical scales may not accurately detect subtle changes in disability occurring between visits. This study aims to explore whether wearable device data provides more granular and objective measures of disability progression in MS. METHODS: Remote Assessment of Disease and Relapse in Central Nervous System Disorders (RADAR-CNS) is a longitudinal multicenter observational study in which 400 MS patients, recruited from June 2018 onward, were prospectively followed up for 24 months. Monitoring of patients included standard clinical visits with assessment of disability through use of the Expanded Disability Status Scale (EDSS), 6-minute walking test (6MWT) and timed 25-foot walk (T25FW), as well as remote monitoring through the use of a Fitbit. RESULTS: Among the 306 patients who completed the study (mean age, 45.6 years; 67% female), confirmed disability progression defined by the EDSS was observed in 74 patients, who had approximately 1392 fewer daily steps than patients without disability progression. However, the decline in step counts over time did not differ significantly between patients with EDSS progression and stable patients. Similar results were obtained with disability progression defined by the 6MWT and the T25FW. CONCLUSION: The use of continuous activity monitoring holds great promise as a sensitive and ecologically valid measure of disability progression in MS.


Subject(s)
Persons with Disabilities, Multiple Sclerosis, Wearable Electronic Devices, Female, Humans, Male, Middle Aged, Disability Evaluation, Multiple Sclerosis/diagnosis, Walk Test, Walking/physiology, Adult
2.
Pattern Recognit ; 122: 108289, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34483372

ABSTRACT

The Coronavirus (COVID-19) pandemic spurred numerous research efforts, from collecting COVID-19 patients' data to screening them for virus detection. Some COVID-19 symptoms are related to the functioning of the respiratory system that influences speech production; this motivates research on identifying markers of COVID-19 in speech and other human-generated audio signals. In this article, we give an overview of research on human audio signals using 'Artificial Intelligence' techniques to screen, diagnose, and monitor COVID-19 and to raise awareness about it. This overview will be useful for developing automated systems that can help in the context of COVID-19, using non-obtrusive, easy-to-use bio-signals conveyed in human speech and non-speech audio productions.

3.
Pattern Recognit ; 122: 108361, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34629550

ABSTRACT

The sudden outbreak of COVID-19 has resulted in tough challenges for the field of biometrics due to its spread via physical contact, and the regulations of wearing face masks. Given these constraints, voice biometrics can offer a suitable contact-less biometric solution; they can benefit from models that classify whether a speaker is wearing a mask or not. This article reviews the Mask Sub-Challenge (MSC) of the INTERSPEECH 2020 COMputational PARalinguistics challengE (ComParE), which focused on the following classification task: Given an audio chunk of a speaker, classify whether the speaker is wearing a mask or not. First, we report the collection of the Mask Augsburg Speech Corpus (MASC) and the baseline approaches used to solve the problem, achieving a performance of 71.8% Unweighted Average Recall (UAR). We then summarise the methodologies explored in the submitted and accepted papers that mainly used two common patterns: (i) phonetic-based audio features, or (ii) spectrogram representations of audio combined with Convolutional Neural Networks (CNNs) typically used in image processing. Most approaches enhance their models by adopting ensembles of different models and attempting to increase the size of the training data using various techniques. We review and discuss the results of the participants of this sub-challenge, where the winner scored a UAR of 80.1%. Moreover, we present the results of fusing the approaches, leading to a UAR of 82.6%. Finally, we present a smartphone app that can be used as a proof of concept demonstration to detect in real-time whether users are wearing a face mask; we also benchmark the run-time of the best models.
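For readers unfamiliar with the challenge metric: UAR is simply macro-averaged recall, the mean of per-class recalls, which makes it insensitive to class imbalance. A minimal Python sketch, assuming scikit-learn and toy labels:

```python
# Minimal sketch: Unweighted Average Recall (UAR), the ComParE metric.
# UAR equals macro-averaged recall, i.e. the unweighted mean of the
# per-class recalls. The labels below are illustrative toy data.
from sklearn.metrics import recall_score

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # 1 = mask, 0 = no mask
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

uar = recall_score(y_true, y_pred, average="macro")
print(f"UAR: {uar:.3f}")
```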

4.
J Acoust Soc Am ; 149(6): 4377, 2021 06.
Article in English | MEDLINE | ID: mdl-34241490

ABSTRACT

COVID-19 is a global health crisis that has been affecting our daily lives throughout the past year. The symptomatology of COVID-19 is heterogeneous with a severity continuum. Many symptoms are related to pathological changes in the vocal system, leading to the assumption that COVID-19 may also affect voice production. For the first time, the present study investigates voice acoustic correlates of a COVID-19 infection based on a comprehensive acoustic parameter set. We compare 88 acoustic features extracted from recordings of the vowels /i:/, /e:/, /u:/, /o:/, and /a:/ produced by 11 symptomatic COVID-19 positive and 11 COVID-19 negative German-speaking participants. We employ the Mann-Whitney U test and calculate effect sizes to identify features with prominent group differences. The mean voiced segment length and the number of voiced segments per second yield the most important differences across all vowels, indicating discontinuities in the pulmonic airstream during phonation in COVID-19 positive participants. Group differences in front vowels are additionally reflected in fundamental frequency variation and the harmonics-to-noise ratio; group differences in back vowels appear in statistics of the Mel-frequency cepstral coefficients and the spectral slope. Our findings represent an important proof-of-concept contribution for a potential voice-based identification of individuals infected with COVID-19.
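The statistical procedure described above is straightforward to reproduce. A minimal Python sketch, assuming SciPy and synthetic values in place of the study's actual acoustic features; the rank-biserial correlation is one common effect size for the Mann-Whitney U test, though the abstract does not specify which effect size the authors used:

```python
# Hedged sketch: Mann-Whitney U test per acoustic feature plus a
# rank-biserial effect size. The feature values are synthetic toy data,
# not the study's measurements.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
pos = rng.normal(0.40, 0.1, size=11)   # e.g. mean voiced segment length, COVID-positive
neg = rng.normal(0.55, 0.1, size=11)   # same feature, COVID-negative

u_stat, p_value = mannwhitneyu(pos, neg, alternative="two-sided")
# Rank-biserial correlation: r = 1 - 2U / (n1 * n2), bounded in [-1, 1]
effect_size = 1 - 2 * u_stat / (len(pos) * len(neg))
print(f"U = {u_stat:.1f}, p = {p_value:.3f}, rank-biserial r = {effect_size:.2f}")
```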


Subject(s)
COVID-19, Voice, Acoustics, Humans, Phonation, SARS-CoV-2, Speech Acoustics, Voice Quality
5.
Methods ; 151: 41-54, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30099083

ABSTRACT

Due to the complex and intricate nature associated with their production, the acoustic-prosodic properties of a speech signal are modulated by a range of health-related effects. There is an active and growing area of machine learning research in this speech and health domain, focused on developing paradigms to objectively extract and measure such effects. Concurrently, deep learning is transforming intelligent signal analysis, such that machines are now reaching near-human capabilities in a range of recognition and analysis tasks. Herein, we review current state-of-the-art approaches to speech-based health detection, placing a particular focus on the impact of deep learning within this domain. Based on this overview, it is evident that while deep learning based solutions are becoming more present in the literature, they have not had the same overall dominating effect seen in other related fields. In this regard, we suggest some possible research directions aimed at fully leveraging the advantages that deep learning can offer speech-based health detection.


Subject(s)
Deep Learning/trends, Speech, Acoustics, Humans, Neural Networks, Computer
6.
Patterns (N Y) ; 5(3): 100952, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38487807

ABSTRACT

In their recent publication in Patterns, the authors proposed a methodology based on sample-free Bayesian neural networks and label smoothing to improve both predictive and calibration performance on animal call detection. Such approaches have the potential to foster trust in algorithmic decision making and to enhance policy making in conservation applications that use recordings made by on-site passive acoustic monitoring equipment. This interview is a companion to the authors' recent paper, "Propagating Variational Model Uncertainty for Bioacoustic Call Label Smoothing".

7.
Article in English | MEDLINE | ID: mdl-38696290

ABSTRACT

Due to the objectivity of emotional expression in the central nervous system, EEG-based emotion recognition can effectively reflect humans' internal emotional states. In recent years, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) have made significant strides in extracting local features and temporal dependencies from EEG signals. However, CNNs ignore spatial distribution information from EEG electrodes; moreover, RNNs may encounter issues such as exploding/vanishing gradients and high time consumption. To address these limitations, we propose an attention-based temporal graph representation network (ATGRNet) for EEG-based emotion recognition. First, a hierarchical attention mechanism is introduced to integrate feature representations from both frequency bands and channels ordered by priority in EEG signals. Second, a graph convolutional neural network with a top-k operation is utilized to capture internal relationships between EEG electrodes under different emotion patterns. Next, a residual-based graph readout mechanism is applied to accumulate the EEG feature node-level representations into graph-level representations. Finally, the obtained graph-level representations are fed into a temporal convolutional network (TCN) to extract the temporal dependencies between EEG frames. We evaluated our proposed ATGRNet on the SEED, DEAP and FACED datasets. The experimental findings show that the proposed ATGRNet surpasses the state-of-the-art graph-based methods for EEG-based emotion recognition.
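The top-k graph operation mentioned above can be illustrated compactly. The following is a hedged PyTorch sketch of a learnable electrode adjacency sparsified by a top-k selection, an illustration of the general idea rather than ATGRNet's actual layer; electrode count, k, and feature sizes are assumptions:

```python
# Hedged sketch: a learnable adjacency over EEG electrodes, sparsified with
# a top-k operation so each electrode keeps only its k strongest connections.
import torch
import torch.nn as nn

class TopKGraphLayer(nn.Module):
    def __init__(self, num_electrodes: int, k: int):
        super().__init__()
        self.adj = nn.Parameter(torch.randn(num_electrodes, num_electrodes))
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, electrodes, features)
        weights = torch.softmax(self.adj, dim=-1)
        # Keep only the k largest weights per row; zero out the rest.
        topk_vals, topk_idx = weights.topk(self.k, dim=-1)
        sparse = torch.zeros_like(weights).scatter(-1, topk_idx, topk_vals)
        return sparse @ x   # aggregate neighbour features

layer = TopKGraphLayer(num_electrodes=62, k=8)   # 62 electrodes is an assumption
out = layer(torch.randn(4, 62, 16))              # toy EEG feature batch
print(out.shape)                                 # torch.Size([4, 62, 16])
```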

8.
IEEE Trans Pattern Anal Mach Intell ; 46(2): 805-822, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37851557

ABSTRACT

Automatically recognising apparent emotions from face and voice is hard, in part because of various sources of uncertainty, including in the input data and the labels used in a machine learning framework. This paper introduces an uncertainty-aware multimodal fusion approach that quantifies modality-wise aleatoric or data uncertainty towards emotion prediction. We propose a novel fusion framework, in which latent distributions over unimodal temporal context are learned by constraining their variance. These variance constraints, Calibration and Ordinal Ranking, are designed such that the variance estimated for a modality can represent how informative the temporal context of that modality is w.r.t. emotion recognition. When well-calibrated, modality-wise uncertainty scores indicate how much their corresponding predictions are likely to differ from the ground truth labels. Well-ranked uncertainty scores allow the ordinal ranking of different frames across different modalities. To jointly impose both these constraints, we propose a softmax distributional matching loss. Our evaluation on AVEC 2019 CES, CMU-MOSEI, and IEMOCAP datasets shows that the proposed multimodal fusion method not only improves the generalisation performance of emotion recognition models and their predictive uncertainty estimates, but also makes the models robust to novel noise patterns encountered at test time.
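As a rough illustration of modality-wise aleatoric uncertainty in fusion, the following PyTorch sketch uses standard inverse-variance (precision-weighted) fusion. This is a generic construction under assumed modality dimensions, not the paper's Calibration and Ordinal Ranking constraints or its softmax distributional matching loss:

```python
# Hedged sketch: each modality predicts an emotion estimate plus an
# aleatoric (data) uncertainty; fusion downweights the noisier modality
# via precision (inverse-variance) weighting.
import torch
import torch.nn as nn

class UncertaintyFusion(nn.Module):
    def __init__(self, dim_audio: int, dim_video: int):
        super().__init__()
        self.audio_head = nn.Linear(dim_audio, 2)   # outputs (mean, log-variance)
        self.video_head = nn.Linear(dim_video, 2)

    def forward(self, a, v):
        mu_a, logvar_a = self.audio_head(a).unbind(-1)
        mu_v, logvar_v = self.video_head(v).unbind(-1)
        prec_a, prec_v = torch.exp(-logvar_a), torch.exp(-logvar_v)
        # Precision-weighted fusion: the more uncertain modality counts less.
        mu = (prec_a * mu_a + prec_v * mu_v) / (prec_a + prec_v)
        return mu, logvar_a, logvar_v

fusion = UncertaintyFusion(dim_audio=128, dim_video=256)   # dims are assumptions
mu, la, lv = fusion(torch.randn(8, 128), torch.randn(8, 256))
```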

9.
Article in English | MEDLINE | ID: mdl-38809724

ABSTRACT

This scoping review paper redefines the Artificial Intelligence-based Internet of Things (AIoT)-driven Human Activity Recognition (HAR) field by systematically extrapolating from various application domains to deduce potential techniques and algorithms. We distill a general model with adaptive learning and optimization mechanisms by conducting a detailed analysis of human activity types and utilizing contact or non-contact devices. The paper presents various mathematical paradigms for system integration driven by multimodal data fusion, covering predictions of complex behaviors and redefining valuable methods, devices, and systems for HAR. Additionally, it establishes benchmarks for behavior recognition across different application requirements, from simple localized actions to group activities. It summarizes open research directions, including data diversity and volume, computational limitations, interoperability, real-time recognition, data security, and privacy concerns. Finally, this review aims to serve as a comprehensive and foundational resource for researchers delving into the complex and burgeoning realm of AIoT-enhanced HAR, providing insights and guidance for future innovations and developments.

10.
Heliyon ; 10(1): e23142, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38163154

ABSTRACT

Among the 17 Sustainable Development Goals (SDGs) proposed within the 2030 Agenda and adopted by all United Nations member states, the 13th SDG is a call for action to combat climate change. Moreover, SDGs 14 and 15 call for the protection and conservation of life below water and life on land, respectively. In this work, we provide a literature-founded overview of application areas in which computer audition, a powerful technology combining audio signal processing and machine intelligence that has so far hardly been considered in this context, is employed to monitor our ecosystem, with the potential to identify ecologically critical processes or states. We distinguish between applications related to organisms, such as species richness analysis and plant health monitoring, and applications related to the environment, such as melting ice monitoring or wildfire detection. This work positions computer audition in relation to alternative approaches by discussing methodological strengths and limitations, as well as ethical aspects. We conclude with an urgent call to action to the research community for a greater involvement of audio intelligence methodology in future ecosystem monitoring approaches.

11.
Patterns (N Y) ; 5(3): 100932, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38487806

ABSTRACT

Along with propagating the input toward making a prediction, Bayesian neural networks also propagate uncertainty. This has the potential to guide the training process by rejecting predictions of low confidence, and recent variational Bayesian methods can do so without Monte Carlo sampling of weights. Here, we apply sample-free methods for wildlife call detection on recordings made via passive acoustic monitoring equipment in the animals' natural habitats. We further propose uncertainty-aware label smoothing, where the smoothing probability is dependent on sample-free predictive uncertainty, in order to down-weight data samples that should contribute less to the loss value. We introduce a bioacoustic dataset recorded in Malaysian Borneo, containing overlapping calls from 30 species. On that dataset, our proposed method achieves an absolute percentage improvement of around 1.5 points in area under the receiver operating characteristic (AU-ROC), 13 points in F1, and 19.5 points in expected calibration error (ECE) compared to the point-estimate network baseline, averaged across all target classes.
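The core idea of uncertainty-aware label smoothing can be written in a few lines. A hedged PyTorch sketch, where the linear scaling of the per-sample smoothing probability with predictive uncertainty is an assumption for illustration, not the paper's exact rule:

```python
# Hedged sketch: label smoothing whose strength grows with each sample's
# predictive uncertainty, so uncertain samples get a softer target and
# contribute less sharply to the loss.
import torch
import torch.nn.functional as F

def uncertainty_smoothed_loss(logits, targets, uncertainty, max_eps=0.2):
    # logits: (batch, classes); targets: (batch,) class indices
    # uncertainty: (batch,) predictive uncertainty, assumed scaled to [0, 1]
    n_classes = logits.size(-1)
    eps = max_eps * uncertainty.clamp(0.0, 1.0)          # per-sample smoothing
    one_hot = F.one_hot(targets, n_classes).float()
    soft = one_hot * (1 - eps).unsqueeze(-1) + (eps / n_classes).unsqueeze(-1)
    return -(soft * F.log_softmax(logits, dim=-1)).sum(-1).mean()

loss = uncertainty_smoothed_loss(
    torch.randn(4, 30), torch.tensor([0, 5, 12, 29]), torch.rand(4))
```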

12.
Digit Health ; 10: 20552076241258276, 2024.
Article in English | MEDLINE | ID: mdl-38894942

ABSTRACT

Objective: Millions of people in the UK have asthma, yet 70% do not access basic care, leading to the largest number of asthma-related deaths in Europe. Chatbots may extend the reach of asthma support and provide a bridge to traditional healthcare. This study evaluates 'Brisa', a chatbot designed to improve asthma patients' self-assessment and self-management. Methods: We recruited 150 adults with an asthma diagnosis to test our chatbot. Participants were recruited over three waves through social media and a research recruitment platform. Eligible participants had access to 'Brisa' via a WhatsApp or website version for 28 days and completed entry and exit questionnaires to evaluate user experience and asthma control. Weekly symptom tracking, user interaction metrics, satisfaction measures, and qualitative feedback were utilised to evaluate the chatbot's usability and potential effectiveness, focusing on changes in asthma control and self-reported behavioural improvements. Results: 74% of participants engaged with 'Brisa' at least once. High task completion rates were observed: asthma attack risk assessment (86%), voice recording submission (83%) and asthma control tracking (95.5%). Post-use, an 8% improvement in asthma control was reported. User satisfaction surveys indicated positive feedback on helpfulness (80%), privacy (87%), trustworthiness (80%) and functionality (84%) but highlighted a need for improved conversational depth and personalisation. Conclusions: The study indicates that chatbots are effective for asthma support, as demonstrated by the high usage of features like risk assessment and control tracking, as well as a statistically significant improvement in asthma control. However, lower satisfaction with conversational flexibility highlights rising expectations for chatbot fluency, influenced by advanced models like ChatGPT. Future health-focused chatbots must balance conversational capability with accuracy and safety to maintain engagement and effectiveness.

13.
IEEE Trans Biomed Eng ; PP, 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700959

ABSTRACT

OBJECTIVE: Early diagnosis of cardiovascular diseases is a crucial task in medical practice. With the application of computer audition in the healthcare field, artificial intelligence (AI) has been applied to clinical non-invasive intelligent auscultation of heart sounds to provide rapid and effective pre-screening. However, AI models generally require large amounts of data, which may cause privacy issues. Unfortunately, it is difficult to collect large amounts of healthcare data from a single centre. METHODS: In this study, we propose federated learning (FL) optimisation strategies for practical application to multi-centre institutional heart sound databases. Horizontal FL is mainly employed to tackle the privacy problem by aligning the feature spaces of FL participating institutions without information leakage. In addition, techniques based on deep learning have poor interpretability due to their "black-box" property, which limits the feasibility of AI on real medical data. To this end, vertical FL is utilised to address the issues of model interpretability and data scarcity. CONCLUSION: Experimental results demonstrate that the proposed FL framework can achieve good performance for heart sound abnormality detection while taking personal privacy protection into account. Moreover, using the federated feature space is beneficial for balancing the interpretability of vertical FL and the privacy of the data. SIGNIFICANCE: This work realises the potential of FL from research to clinical practice, and is expected to have extensive application in federated smart medical systems.
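Horizontal FL of the kind described can be illustrated with the standard FedAvg update, in which sites exchange model weights rather than recordings. A minimal PyTorch sketch, assuming a toy classifier head and hypothetical client sizes; the paper's specific optimisation strategies may differ:

```python
# Hedged sketch: FedAvg-style horizontal federated learning. Each
# institution trains locally on its own heart-sound data; only weights
# are averaged centrally, so raw recordings never leave the site.
import copy
import torch

def fed_avg(global_model, client_models, client_sizes):
    """Weighted average of client weights, proportional to local data size."""
    total = sum(client_sizes)
    new_state = copy.deepcopy(global_model.state_dict())
    for key in new_state:
        new_state[key] = sum(
            (n / total) * cm.state_dict()[key]
            for cm, n in zip(client_models, client_sizes)
        )
    global_model.load_state_dict(new_state)
    return global_model

global_model = torch.nn.Linear(64, 2)            # toy heart-sound classifier head
clients = [copy.deepcopy(global_model) for _ in range(3)]
# ... each client trains locally on its own data here ...
global_model = fed_avg(global_model, clients, client_sizes=[120, 80, 200])
```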

14.
Cyborg Bionic Syst ; 5: 0075, 2024.
Article in English | MEDLINE | ID: mdl-38440319

ABSTRACT

Leveraging the power of artificial intelligence to facilitate automatic analysis and monitoring of heart sounds has attracted tremendous effort in the past decade. Nevertheless, the lack of a standard open-access database made it difficult to maintain sustainable and comparable research before the first release of the PhysioNet CinC Challenge Dataset. However, inconsistent standards for data collection, annotation, and partitioning still restrain a fair and efficient comparison between different works. To this end, we introduced and benchmarked a first version of the Heart Sounds Shenzhen (HSS) corpus. Motivated and inspired by previous works based on HSS, we redefined the tasks and conducted a comprehensive investigation of shallow and deep models in this study. First, we segmented the heart sound recordings into shorter recordings (10 s), which makes the setting more similar to the human auscultation case. Second, we redefined the classification tasks. Besides using the 3 class categories (normal, moderate, and mild/severe) adopted in HSS, we added a binary classification task in this study, i.e., normal and abnormal. In this work, we provided detailed benchmarks based on both classic machine learning and state-of-the-art deep learning technologies, which are reproducible using open-source toolkits. Last but not least, we analyzed the feature contributions behind the best benchmark performance to make the results more convincing and interpretable.
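The 10 s segmentation step is simple to reproduce. A minimal Python sketch, assuming librosa and a hypothetical file path:

```python
# Minimal sketch: cut a heart sound recording into fixed 10 s segments,
# as in the preprocessing described above. librosa and the file path are
# illustrative assumptions.
import librosa

def segment_recording(path, segment_s=10.0):
    audio, sr = librosa.load(path, sr=None)          # keep native sample rate
    hop = int(segment_s * sr)
    return [audio[i:i + hop] for i in range(0, len(audio) - hop + 1, hop)]

segments = segment_recording("heart_sound.wav")      # hypothetical file
```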

15.
Sci Data ; 11(1): 700, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937483

ABSTRACT

The UK COVID-19 Vocal Audio Dataset is designed for the training and evaluation of machine learning models that classify SARS-CoV-2 infection status or associated respiratory symptoms using vocal audio. The UK Health Security Agency recruited voluntary participants through the national Test and Trace programme and the REACT-1 survey in England from March 2021 to March 2022, during dominant transmission of the Alpha and Delta SARS-CoV-2 variants and some Omicron variant sublineages. Audio recordings of volitional coughs, exhalations, and speech were collected in the 'Speak up and help beat coronavirus' digital survey alongside demographic, symptom and self-reported respiratory condition data. Digital survey submissions were linked to SARS-CoV-2 test results. The UK COVID-19 Vocal Audio Dataset represents the largest collection of SARS-CoV-2 PCR-referenced audio recordings to date. PCR results were linked to 70,565 of 72,999 participants and 24,105 of 25,706 positive cases. Respiratory symptoms were reported by 45.6% of participants. This dataset has additional potential uses for bioacoustics research, with 11.3% of participants self-reporting asthma, and 27.2% with linked influenza PCR test results.


Subject(s)
COVID-19, Humans, Cough, COVID-19/diagnosis, Exhalation, Machine Learning, Polymerase Chain Reaction, Speech, United Kingdom
16.
J Affect Disord ; 355: 40-49, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38552911

ABSTRACT

BACKGROUND: Prior research has associated spoken language use with depression, yet studies often involve small or non-clinical samples and face challenges in the manual transcription of speech. This paper aimed to automatically identify depression-related topics in speech recordings collected from clinical samples. METHODS: The data included 3919 English free-response speech recordings collected via smartphones from 265 participants with a depression history. We transcribed speech recordings via automatic speech recognition (Whisper tool, OpenAI) and identified principal topics from transcriptions using a deep learning topic model (BERTopic). To identify depression risk topics and understand the context, we compared participants' depression severity and behavioral (extracted from wearable devices) and linguistic (extracted from transcribed texts) characteristics across identified topics. RESULTS: From the 29 topics identified, we identified 6 risk topics for depression: 'No Expectations', 'Sleep', 'Mental Therapy', 'Haircut', 'Studying', and 'Coursework'. Participants mentioning depression risk topics exhibited higher sleep variability, later sleep onset, and fewer daily steps and used fewer words, more negative language, and fewer leisure-related words in their speech recordings. LIMITATIONS: Our findings were derived from a depressed cohort with a specific speech task, potentially limiting the generalizability to non-clinical populations or other speech tasks. Additionally, some topics had small sample sizes, necessitating further validation in larger datasets. CONCLUSION: This study demonstrates that specific speech topics can indicate depression severity. The employed data-driven workflow provides a practical approach for analyzing large-scale speech data collected from real-world settings.
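The transcription-plus-topic-modelling pipeline maps directly onto the public APIs of the tools named above. A minimal sketch, with the model size and file names as illustrative assumptions; note that in practice BERTopic needs far more documents than shown here:

```python
# Hedged sketch: transcribe speech with OpenAI's Whisper, then extract
# topics with BERTopic, mirroring the workflow described above.
import whisper
from bertopic import BERTopic

asr = whisper.load_model("base")                     # model size is an assumption
transcripts = [asr.transcribe(f)["text"]
               for f in ["rec1.wav", "rec2.wav"]]    # hypothetical recordings

# BERTopic clusters document embeddings into interpretable topics;
# it requires a large corpus (thousands of transcripts) to work well.
topic_model = BERTopic()
topics, probs = topic_model.fit_transform(transcripts)
print(topic_model.get_topic_info())
```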


Subject(s)
Deep Learning, Speech, Humans, Smartphone, Depression/diagnosis, Speech Recognition Software
17.
Cyborg Bionic Syst ; 4: 0005, 2023.
Article in English | MEDLINE | ID: mdl-37040282

ABSTRACT

The sounds generated by the body carry important information about our physical and psychological health status. In the past decades, we have witnessed a plethora of successes achieved in the field of body sound analysis. Nevertheless, the fundamentals of this young field are still not well established. In particular, publicly accessible databases are rarely developed, which dramatically restrains sustainable research. To this end, we are launching the Voice of the Body (VoB) archive and continuously calling for participation from the global scientific community to contribute to it. We aim to build an open-access platform that collects well-established body sound databases in a standardized way. Moreover, we hope to organize a series of challenges to promote the development of audio-driven methods for healthcare via the proposed VoB. We believe that VoB can help break down the walls between different disciplines toward an era of Medicine 4.0 enriched by audio intelligence.

18.
Article in English | MEDLINE | ID: mdl-38082715

ABSTRACT

Deep neural networks with attention mechanisms have shown promising results in many computer vision and medical image processing applications. Attention mechanisms help to capture long-range interactions. Recently, more sophisticated attention mechanisms such as criss-cross attention have been proposed for efficient computation of attention blocks. In this paper, we introduce a simple and low-overhead approach of adding noise to the attention block, which we find to be very effective when using an attention mechanism. Our proposed methodology of introducing regularisation in the attention block by adding noise makes the network more robust and resilient, especially in scenarios with limited training data. We incorporate this regularisation mechanism in the criss-cross attention block. This criss-cross attention block enhanced with regularisation is integrated in the bottleneck layer of a U-Net for the task of medical image segmentation. We evaluate our proposed framework on a challenging subset of the NIH dataset for segmenting lung lobes. Our proposed methodology improves Dice scores by 2.5% in this medical image segmentation context.
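The noise-injection idea can be shown in a plain dot-product attention block. A hedged PyTorch sketch; the noise placement and scale are assumptions, and the paper applies the mechanism inside criss-cross attention rather than the vanilla attention shown here:

```python
# Hedged sketch: Gaussian noise added to attention scores during training
# only, acting as a regulariser for the attention block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyAttention(nn.Module):
    def __init__(self, dim: int, noise_std: float = 0.1):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.noise_std = noise_std

    def forward(self, x):                      # x: (batch, tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
        if self.training:                      # regularise only during training
            scores = scores + torch.randn_like(scores) * self.noise_std
        return F.softmax(scores, dim=-1) @ v

attn = NoisyAttention(dim=64)
out = attn(torch.randn(2, 49, 64))             # toy 7x7 feature map, flattened
```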


Subject(s)
Image Processing, Computer-Assisted, Neural Networks, Computer
19.
Article in English | MEDLINE | ID: mdl-38083410

ABSTRACT

Human behavioral expressions, such as confidence, are time-varying. Both the vocal and facial cues that convey confidence vary throughout the duration of analysis. Although the cues from these two modalities are not always in synchrony, they impact each other and the fused outcome as well. In this paper, we present a deep fusion technique to combine the two modalities and derive a single outcome to infer human confidence. The fused outcome improves classification performance by capturing the temporal information from both modalities. An analysis of the time-varying nature of expressions in conversations captured in an interview setup is also presented. We collected data from 51 speakers who participated in interview sessions. The average area under the curve (AUC) of the uni-modal models using speech and facial expressions is 70.6% and 69.4%, respectively, for classifying confident videos from non-confident ones in a 5-fold cross-validation analysis. Our deep fusion model improves the performance, giving an average AUC of 76.8%.


Subject(s)
Speech Perception, Voice, Humans, Speech, Communication, Mental Processes
20.
Pilot Feasibility Stud ; 9(1): 155, 2023 Sep 07.
Article in English | MEDLINE | ID: mdl-37679797

ABSTRACT

BACKGROUND: Stress levels and thus the risk of developing related physical and mental health conditions are rising worldwide. Dysfunctional beliefs contribute to the development of stress. Potentially, such beliefs can be modified with approach-avoidance modification trainings (AAMT). As previous research indicates that effects of AAMTs are small, there is a need for innovative ways of increasing the efficacy of these interventions. For this purpose, we aim to evaluate the feasibility of the intervention and study design and explore the efficacy of an innovative emotion-based AAMT version (eAAMT) that uses the display of emotions to move stress-inducing beliefs away from and draw stress-reducing beliefs towards oneself. METHODS: We will conduct a parallel randomized controlled pilot study at the Friedrich-Alexander-Universität Erlangen-Nürnberg, Germany. Individuals with elevated stress levels will be randomized to one of eight study conditions (n = 10 per condition): one of six variants of the eAAMT, an active control intervention (swipe-based AAMT), or an inactive control condition. Participants in the intervention groups will engage in four sessions of 20-30 min of (e)AAMT training on consecutive days. Participants in the inactive control condition will complete the assessments via an online tool. Non-blinded assessments will be taken directly before and after the training and 1 week after training completion. The primary outcome will be perceived stress. Secondary outcomes will be dysfunctional beliefs, symptoms of depression, emotion regulation skills, and physiological stress measures. We will compute effect sizes and conduct mixed ANOVAs to explore differences in change in outcomes between the eAAMT and control conditions. DISCUSSION: The study will provide valuable information to improve the intervention and study design. Moreover, if shown to be effective, the approach can be used as an automated smartphone-based intervention. Future research needs to identify target groups benefitting from this intervention utilized either as a stand-alone treatment or an add-on intervention combined with other evidence-based treatments. TRIAL REGISTRATION: The trial has been registered in the German Clinical Trials Register (Deutsches Register Klinischer Studien; DRKS00023007; September 7, 2020).
