1.
PeerJ Comput Sci; 8: e1073, 2022.
Article in English | MEDLINE | ID: mdl-36426239

ABSTRACT

In this article, we describe a reproduction of the Relational Graph Convolutional Network (RGCN). Using our reproduction, we explain the intuition behind the model. Our reproduction results empirically validate the correctness of our implementations using benchmark Knowledge Graph datasets on node classification and link prediction tasks. Our explanation provides a friendly understanding of the different components of the RGCN for both users and researchers extending the RGCN approach. Furthermore, we introduce two new configurations of the RGCN that are more parameter efficient. The code and datasets are available at https://github.com/thiviyanT/torch-rgcn.
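As a rough illustration of the layer the article reproduces, the sketch below implements the basic R-GCN propagation rule (one weight matrix per relation plus a self-loop transform, with normalized neighbor aggregation) in plain PyTorch. It is a minimal reading of the published model, not the torch-rgcn code, and the dense adjacency representation is chosen only for brevity.

```python
# Minimal R-GCN layer sketch (not the torch-rgcn implementation).
import torch
import torch.nn as nn

class SimpleRGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        # one weight matrix per relation type
        self.rel_weights = nn.Parameter(torch.randn(num_relations, in_dim, out_dim) * 0.01)
        # separate transform for the node's own (self-loop) representation
        self.self_weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, h, adj_per_relation):
        # h: (num_nodes, in_dim)
        # adj_per_relation: list of row-normalized (num_nodes, num_nodes) adjacency
        # matrices, one per relation type
        out = self.self_weight(h)
        for r, adj in enumerate(adj_per_relation):
            # aggregate transformed neighbor messages for relation r
            out = out + adj @ (h @ self.rel_weights[r])
        return torch.relu(out)
```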

2.
eNeuro; 9(5), 2022.
Article in English | MEDLINE | ID: mdl-36104277

ABSTRACT

The development of validated algorithms for automated handling of artifacts is essential for reliable and fast processing of EEG signals. Recently, there have been methodological advances in designing machine-learning algorithms to improve on the artifact detection of trained professionals, who usually meticulously inspect and manually annotate EEG signals. However, validation of these methods is hindered by the lack of a gold standard, as data are mostly private and data annotation is time consuming and error prone. To circumvent these issues, we propose an iterative learning model to speed up and reduce errors in the manual annotation of EEG. We train a convolutional neural network (CNN) on expert-annotated eyes-open and eyes-closed resting-state EEG data from typically developing children (n = 30) and children with neurodevelopmental disorders (n = 141). To overcome the circular reasoning of developing a new algorithm while benchmarking against a manually annotated gold standard, we instead aim to improve the gold standard by revising the portion of the data that was incorrectly learned by the network. When blindly presented with the selected signals for re-assessment (23% of the data), the two independent expert annotators changed the annotation in 25% of the cases. Subsequently, the network was trained on the expert-revised gold standard, which resulted in improved separation between artifacts and non-artifacts as well as an increase in balanced accuracy from 74% to 80% and precision from 59% to 76%. These results show that CNNs are promising tools to enhance the manual annotation of EEG artifacts and can be improved further with better gold-standard data.
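A minimal sketch of the iterative refinement loop described above, under the assumption that it can be summarized as: train on the current annotations, flag segments where the network confidently disagrees with the labels, have experts blindly re-assess those segments, and retrain on the revised gold standard. The callables train_model, predict_proba, and expert_review are hypothetical placeholders, not the authors' code.

```python
# Sketch of one iteration of annotation refinement (hypothetical helper names).
import numpy as np

def refine_gold_standard(segments, labels, train_model, predict_proba,
                         expert_review, disagreement=0.5):
    """Train, flag confidently misclassified segments, re-annotate, retrain."""
    model = train_model(segments, labels)
    probs = predict_proba(model, segments)            # artifact probability per segment
    # segments where the network confidently disagrees with the current annotation
    flagged = np.where(np.abs(probs - labels) > disagreement)[0]
    revised = labels.copy()
    for i in flagged:                                  # blind expert re-assessment
        revised[i] = expert_review(segments[i])
    # retrain on the expert-revised gold standard
    return train_model(segments, revised), revised
```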


Subjects
Electroencephalography, Neural Networks (Computer), Algorithms, Artifacts, Child, Electroencephalography/methods, Humans, Machine Learning
3.
Sci Rep; 12(1): 16047, 2022 Sep 26.
Article in English | MEDLINE | ID: mdl-36163232

ABSTRACT

Self-supervised language modeling is a rapidly developing approach for the analysis of protein sequence data. However, work in this area is heterogeneous and diverse, making comparison of models and methods difficult. Moreover, models are often evaluated only on one or two downstream tasks, making it unclear whether they capture generally useful properties. We introduce the ProteinGLUE benchmark for the evaluation of protein representations: a set of seven per-amino-acid tasks for evaluating learned protein representations. We also offer reference code, and we provide two baseline models with hyperparameters specifically tuned for these benchmarks. Pre-training was done on two tasks: masked symbol prediction and next sentence prediction. We show that pre-training yields higher performance on a variety of downstream tasks such as secondary structure and protein interaction interface prediction, compared to no pre-training. However, the larger base model does not outperform the smaller medium model. We expect the ProteinGLUE benchmark dataset introduced here, together with the two baseline pre-trained models and their performance evaluations, to be of great value to the field of protein sequence-based property prediction. Availability: code and datasets are available at https://github.com/ibivu/protein-glue.
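To make the masked symbol prediction objective concrete, here is a minimal sketch of BERT-style masking over amino-acid tokens. The 20-letter vocabulary and the 15% masking rate are standard-practice assumptions, not details taken from the paper or the ProteinGLUE code.

```python
# Sketch of masked-symbol pre-training inputs for a protein sequence.
import random

AMINO_ACIDS = list("ACDEFGHIKLMNPQRSTVWY")   # standard 20-residue alphabet (assumed)
MASK_TOKEN = "<mask>"

def mask_sequence(seq, mask_prob=0.15):
    """Return (masked_tokens, targets); targets is None at unmasked positions."""
    masked, targets = [], []
    for aa in seq:
        if random.random() < mask_prob:
            masked.append(MASK_TOKEN)
            targets.append(aa)               # the model must reconstruct this residue
        else:
            masked.append(aa)
            targets.append(None)
    return masked, targets

tokens, targets = mask_sequence("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```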


Subjects
Benchmarking, Proteins, Amino Acid Sequence, Amino Acids/chemistry, Natural Language Processing
4.
Front Neuroinform; 16: 1025847, 2022.
Article in English | MEDLINE | ID: mdl-36844437

ABSTRACT

Machine learning techniques such as deep learning have been increasingly used to assist EEG annotation by automating artifact recognition, sleep staging, and seizure detection. Without automation, the annotation process is prone to bias, even for trained annotators. On the other hand, completely automated processes do not offer users the opportunity to inspect the models' output and re-evaluate potential false predictions. As a first step toward addressing these challenges, we developed Robin's Viewer (RV), a Python-based EEG viewer for annotating time-series EEG data. The key feature distinguishing RV from existing EEG viewers is the visualization of output predictions of deep-learning models trained to recognize patterns in EEG data. RV was developed on top of the plotting library Plotly, the app-building framework Dash, and the popular M/EEG analysis toolbox MNE. It is an open-source, platform-independent, interactive web application that supports common EEG file formats to facilitate easy integration with other EEG toolboxes. RV includes common features of other EEG viewers, e.g., a view slider, tools for marking bad channels and transient artifacts, and customizable preprocessing. Altogether, RV is an EEG viewer that combines the predictive power of deep-learning models with the knowledge of scientists and clinicians to optimize EEG annotation. With the training of new deep-learning models, RV could be extended to detect clinical patterns other than artifacts, for example sleep stages and EEG abnormalities.
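As an illustration of the kind of overlay RV provides, the sketch below plots a synthetic EEG trace together with a per-sample "artifact probability" on a secondary axis, using Plotly and Dash, the stack RV is built on. It is not RV's code: the signal and the model output are simulated, and a real workflow would load recordings through MNE and a trained model instead.

```python
# Toy Plotly/Dash viewer overlaying a (simulated) model output on an EEG trace.
import numpy as np
import plotly.graph_objects as go
from dash import Dash, dcc, html

t = np.arange(0, 10, 1 / 250)                       # 10 s of signal at 250 Hz
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
# stand-in for a deep-learning model's per-sample artifact probability
prob = np.clip(np.convolve(np.abs(eeg), np.ones(50) / 50, mode="same"), 0, 1)

fig = go.Figure()
fig.add_trace(go.Scatter(x=t, y=eeg, name="EEG channel"))
fig.add_trace(go.Scatter(x=t, y=prob, name="artifact probability", yaxis="y2"))
fig.update_layout(yaxis2=dict(overlaying="y", side="right", range=[0, 1]),
                  xaxis_title="time (s)")

app = Dash(__name__)
app.layout = html.Div([dcc.Graph(figure=fig)])

if __name__ == "__main__":
    app.run(debug=True)
```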
