ABSTRACT
The novel coronavirus disease (COVID-19) is highly contagious and has spread all over the world, posing an extremely serious threat to all countries. Automatic lung infection segmentation from computed tomography (CT) plays an important role in the quantitative analysis of COVID-19. However, the major challenge lies in the inadequacy of annotated COVID-19 datasets. Currently, there are several public non-COVID lung lesion segmentation datasets, providing the potential for generalizing useful information to the related COVID-19 segmentation task. In this paper, we propose a novel relation-driven collaborative learning model that exploits shared knowledge from non-COVID lesions for annotation-efficient COVID-19 CT lung infection segmentation. The model consists of a general encoder that captures general lung lesion features from multiple non-COVID lesion datasets, and a target encoder that focuses on task-specific features of COVID-19 infections. We develop a collaborative learning scheme that regularizes feature-level relation consistency for a given input and encourages the model to learn more general and discriminative representations of COVID-19 infections. Extensive experiments demonstrate that, when trained with limited COVID-19 data, exploiting shared knowledge from non-COVID lesions improves state-of-the-art performance by up to 3.0% in Dice similarity coefficient and 4.2% in normalized surface Dice. In addition, experimental results on a large-scale 2D dataset of CT slices show that our method significantly outperforms cutting-edge segmentation methods. Our method offers new insights into annotation-efficient deep learning and shows strong potential for real-world applications in the global fight against COVID-19, where sufficient high-quality annotations are scarce.
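To make the dual-encoder design concrete, here is a minimal PyTorch sketch of one plausible reading: both encoders see the same batch, and a relation-consistency loss aligns their sample-to-sample cosine-similarity matrices. The encoder architecture, feature dimensions, and the exact form of the loss are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a dual-encoder model with a feature-level
# relation-consistency loss. All layer sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy CNN encoder producing one flat feature vector per image."""
    def __init__(self, in_ch=1, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, feat_dim)

    def forward(self, x):
        return self.fc(self.net(x).flatten(1))

def relation_matrix(feats):
    """Pairwise cosine-similarity matrix over a batch of feature vectors."""
    f = F.normalize(feats, dim=1)
    return f @ f.t()

def relation_consistency_loss(general_feats, target_feats):
    """Encourage both encoders to agree on sample-to-sample relations."""
    return F.mse_loss(relation_matrix(general_feats),
                      relation_matrix(target_feats))

# Usage: run the same batch of CT slices through both encoders.
general_enc = SmallEncoder()  # trained on multiple non-COVID lesion datasets
target_enc = SmallEncoder()   # trained on the limited COVID-19 data
x = torch.randn(8, 1, 128, 128)  # dummy CT slice batch
loss_rc = relation_consistency_loss(general_enc(x), target_enc(x))
```

This regularizer constrains only the relations between samples, not the features themselves, so the target encoder can keep task-specific detail while inheriting the general encoder's structure.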
Subjects
COVID-19 , Lung , Benchmarking , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Tomography, X-Ray Computed
ABSTRACT
Brain-computer interfaces (BCIs) provide a way for disabled people to interact with the outside world. The steady-state visual evoked potential (SSVEP), a potential evoked by visual stimulation, underlies one of the most important BCI paradigms. In laboratory conditions, the classification accuracy of SSVEPs is excellent; in motion, however, it degrades considerably. In this paper, to improve the classification accuracy of SSVEP signals recorded in motion, we collected SSVEP data for five targets at three walking speeds: 0 km/h, 2.5 km/h, and 5 km/h. We propose a compare network based on a convolutional neural network (CNN) that learns the relationship between an EEG signal and the template corresponding to each stimulus frequency, and classifies accordingly. Compared with traditional methods (CCA, FBCCA, and SVM) and a state-of-the-art method (CNN) on the collected SSVEP datasets of 20 subjects, the proposed method performed best at every speed, validating its effectiveness. In addition, compared with the 0 km/h condition, the accuracy of the compare network at a high walking rate (5 km/h) decreased only slightly and still maintained good performance.
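The following PyTorch sketch illustrates the general shape of such a compare network under stated assumptions: a shared 1-D CNN embeds both the EEG segment and a stimulus-frequency template, a small head scores each pair, and the stimulus with the highest score is the predicted target. Layer sizes, the one-second segment length, and the fusion head are hypothetical, not the authors' exact design.

```python
# Illustrative compare network: a shared CNN branch embeds EEG and
# template, a linear head scores the pair. Dimensions are assumptions.
import torch
import torch.nn as nn

class CompareNet(nn.Module):
    def __init__(self, n_channels=8, emb_dim=64):
        super().__init__()
        self.embed = nn.Sequential(              # shared 1-D CNN branch
            nn.Conv1d(n_channels, 16, 7, padding=3), nn.ReLU(),
            nn.Conv1d(16, 32, 7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )
        self.score = nn.Linear(2 * emb_dim, 1)   # pairwise similarity head

    def forward(self, eeg, template):
        pair = torch.cat([self.embed(eeg), self.embed(template)], dim=1)
        return self.score(pair).squeeze(1)

# Classification: score the EEG segment against every frequency's
# template and pick the best-matching stimulus.
net = CompareNet()
eeg = torch.randn(1, 8, 250)                         # 1 s at 250 Hz (assumed)
templates = [torch.randn(1, 8, 250) for _ in range(5)]  # 5 target stimuli
scores = torch.stack([net(eeg, t) for t in templates])
predicted_target = scores.argmax().item()
```

Scoring EEG against learned templates, rather than classifying the EEG alone, is what lets the network exploit the per-frequency structure of SSVEP responses.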
Subjects
Brain-Computer Interfaces , Walking , Electroencephalography , Evoked Potentials, Visual , Humans , Neural Networks, Computer
ABSTRACT
Motor imagery (MI) based brain-computer interfaces (BCIs) are an important active BCI paradigm for recognizing the movement intention of severely disabled persons. There are extensive studies on MI-based intention recognition, most of which rely heavily on staged, handcrafted EEG feature extraction and classifier design. In end-to-end deep learning methods, researchers encode spatial information with convolutional neural networks (CNNs) from raw EEG data. Compared with CNNs, recurrent neural networks (RNNs) allow long-range lateral interactions between features. In this paper, we propose a purely RNN-based parallel method that encodes spatial and temporal sequential raw data with a bidirectional Long Short-Term Memory (bi-LSTM) network and a standard LSTM, respectively. First, we rearranged the indices of the EEG electrodes according to their spatial relationships. Second, we applied a sliding window to the raw EEG data to obtain more samples and split them into training and testing sets according to their original trial indices. Third, we fed the samples and their transposed matrices into the proposed parallel method, which encodes spatial and temporal information simultaneously. Finally, the proposed method was evaluated on the public MI-based eegmmidb dataset and compared with three other methods (CSP+LDA, FBCSP+LDA, and a CNN-RNN method). The experimental results demonstrate the superior performance of our purely RNN-based parallel method: in the multi-class trial-wise movement intention classification scenario, our approach obtained an average accuracy of 68.20%, significantly outperforming the other three methods with an average relative accuracy improvement of 8.25%, which demonstrates the feasibility of our approach for real-world BCI systems.
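As a rough illustration of such a parallel design (not the authors' exact architecture), the PyTorch sketch below runs a standard LSTM over the (time x channels) sample and a bi-LSTM over its transpose, then concatenates the two summaries for classification. Hidden sizes and the fusion scheme are assumptions; the 64 channels and 160 samples per second reflect the eegmmidb recording format.

```python
# Sketch of a parallel RNN: temporal LSTM over the raw window, spatial
# bi-LSTM over its transpose, concatenated summaries for classification.
import torch
import torch.nn as nn

class ParallelRNN(nn.Module):
    def __init__(self, n_channels=64, n_times=160, hidden=64, n_classes=4):
        super().__init__()
        # Temporal branch: sequence of time steps, features = channels.
        self.temporal = nn.LSTM(n_channels, hidden, batch_first=True)
        # Spatial branch: sequence of electrodes (pre-ordered by their
        # physical locations), features = time samples, read both ways.
        self.spatial = nn.LSTM(n_times, hidden, batch_first=True,
                               bidirectional=True)
        self.classifier = nn.Linear(hidden + 2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, time, channels)
        _, (h_t, _) = self.temporal(x)     # h_t: (1, batch, hidden)
        _, (h_s, _) = self.spatial(x.transpose(1, 2))  # (2, batch, hidden)
        feats = torch.cat([h_t[-1],
                           h_s.transpose(0, 1).flatten(1)], dim=1)
        return self.classifier(feats)

model = ParallelRNN()
window = torch.randn(16, 160, 64)  # 16 sliding windows, 1 s each, 64 ch
logits = model(window)             # (16, n_classes)
```

Feeding the transpose to the spatial branch is what turns the electrode axis into a sequence, so the bi-LSTM can model lateral interactions across the spatially reordered channels.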