Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38150344

ABSTRACT

This article proposes a semi-supervised contrastive capsule transformer method with feature-based knowledge distillation (KD), called CapMatch, that simplifies existing semi-supervised learning (SSL) techniques for wearable human activity recognition (HAR). CapMatch gracefully hybridizes supervised and unsupervised learning to extract rich representations from input data. In the unsupervised part, CapMatch leverages pseudolabeling, contrastive learning (CL), and feature-based KD to construct similarity learning on lower- and higher-level semantic information extracted from two augmented versions of the data, "weak" and "timecut", so as to recognize the relationships among the features of classes in the unlabeled data. CapMatch combines the outputs of the weak- and timecut-augmented models to form pseudolabels and thus perform CL. Meanwhile, CapMatch uses feature-based KD to transfer knowledge from the intermediate layers of the weak-augmented model to those of the timecut-augmented model. To effectively capture both local and global patterns of HAR data, we design a capsule transformer network consisting of four capsule-based transformer blocks and one routing layer. Experimental results show that, compared with a number of state-of-the-art semi-supervised and supervised algorithms, the proposed CapMatch achieves competitive performance on three commonly used HAR datasets, namely HAPT, WISDM, and UCI_HAR. With only 10% of the data labeled, CapMatch achieves F1 values higher than 85.00% on these datasets, outperforming 14 semi-supervised algorithms. When the proportion of labeled data reaches 30%, CapMatch obtains F1 values no lower than 88.00% on the datasets above, which is better than several classical supervised algorithms, e.g., decision tree and k-nearest neighbor (KNN).
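
To make the training objective above concrete, below is a minimal PyTorch sketch of a CapMatch-style loss: confidence-thresholded pseudolabels from the "weak" view supervise the "timecut" view, and a feature-based KD term matches intermediate features between the two branches. The tiny 1D-CNN backbone, the threshold tau, and the loss weights are illustrative assumptions standing in for the capsule transformer and the authors' actual settings.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoder(nn.Module):
    """Stand-in backbone (not the capsule transformer); returns intermediate
    features for KD and class logits."""
    def __init__(self, in_ch=3, n_classes=6):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv1d(in_ch, 32, 5, padding=2), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, channels, time)
        feat = self.block(x)                 # intermediate features
        return feat, self.head(feat.mean(dim=-1))

def capmatch_style_loss(model_weak, model_cut, x_lab, y_lab,
                        x_weak, x_cut, tau=0.95, lam_u=1.0, lam_kd=0.1):
    # 1) supervised loss on the small labeled subset
    _, logits_lab = model_weak(x_lab)
    loss_sup = F.cross_entropy(logits_lab, y_lab)

    # 2) pseudolabels from the weak view, applied to the timecut view
    with torch.no_grad():
        feat_w, logits_w = model_weak(x_weak)
        conf, pseudo = logits_w.softmax(dim=-1).max(dim=-1)
        mask = (conf >= tau).float()         # keep only confident pseudolabels
    feat_c, logits_c = model_cut(x_cut)      # both views assumed same length
    loss_unsup = (F.cross_entropy(logits_c, pseudo, reduction="none") * mask).mean()

    # 3) feature-based KD: weak-branch features teach the timecut branch
    loss_kd = F.mse_loss(feat_c, feat_w)

    return loss_sup + lam_u * loss_unsup + lam_kd * loss_kd

# usage with random tensors: 3-channel signals of length 128, 6 classes
mw, mc = TinyEncoder(), TinyEncoder()
loss = capmatch_style_loss(mw, mc,
                           torch.randn(8, 3, 128), torch.randint(0, 6, (8,)),
                           torch.randn(16, 3, 128), torch.randn(16, 3, 128))
loss.backward()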

2.
Biomed Signal Process Control ; 83: 104642, 2023 May.
Article in English | MEDLINE | ID: mdl-36818992

ABSTRACT

In light of the constantly changing landscape of the COVID-19 outbreak, medical specialists have implemented proactive schemes for vaccine production. Despite remarkable COVID-19 vaccine development, the virus has mutated into new variants, including Delta and Omicron. Currently, the situation is critical in many parts of the world, and precautions are being taken to stop the virus from spreading and mutating. Early identification and diagnosis of COVID-19 are the main challenges faced by emerging technologies during the outbreak. In these circumstances, emerging technologies for tackling the coronavirus have proven valuable. Artificial intelligence (AI), big data, the internet of medical things (IoMT), robotics, blockchain technology, telemedicine, smart applications, and additive manufacturing are promising for detecting, classifying, monitoring, and locating COVID-19. Hence, this research surveys these COVID-19-combating technologies, focusing on their strengths and limitations. A CiteSpace-based bibliometric analysis of these emerging technologies was conducted, and the most impactful keywords and the ongoing research frontiers were compiled. Emerging technologies were found to be hampered by data inconsistency, redundant and noisy datasets, and the inability to aggregate data held in disparate formats. Moreover, the privacy and confidentiality of patient medical records are not guaranteed. Hence, significant data analysis is required to develop an intelligent computational model for effective and quick clinical diagnosis of COVID-19. In summary, this article outlines how emerging technologies have been used to counteract the virus disaster and identifies ongoing research frontiers, directing readers to the real challenges and thus facilitating further work to strengthen these technologies.
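
As an illustration of the kind of keyword tally a bibliometric analysis builds on (CiteSpace itself is a standalone tool, so this is not its API), the short Python sketch below counts keyword frequencies and pairwise co-occurrences across a hypothetical list of bibliographic records.

from collections import Counter
from itertools import combinations

# hypothetical records; real input would come from exported bibliographic data
records = [
    {"keywords": ["artificial intelligence", "COVID-19", "IoMT"]},
    {"keywords": ["blockchain", "COVID-19", "telemedicine"]},
    {"keywords": ["artificial intelligence", "big data", "COVID-19"]},
]

keyword_freq = Counter(k for r in records for k in r["keywords"])
cooccurrence = Counter(
    tuple(sorted(pair))
    for r in records
    for pair in combinations(set(r["keywords"]), 2)
)

print(keyword_freq.most_common(5))    # most impactful (frequent) keywords
print(cooccurrence.most_common(5))    # strongest keyword pairings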

3.
Sensors (Basel) ; 19(16)2019 Aug 14.
Article in English | MEDLINE | ID: mdl-31416248

ABSTRACT

Traffic sensing is a promising application for guaranteeing safe and efficient traffic systems in vehicular networks. However, due to the unique characteristics of vehicular networks, such as limited wireless bandwidth and the dynamic mobility of vehicles, traffic sensing suffers from high estimation error caused by missing elements in the collected traffic data and from excessive communication cost between terminal users and the central server. Hence, this paper investigates the traffic sensing system in vehicular networks with mobile edge computing (MEC), where each MEC server enables traffic data collection and recovery in its local server. On this basis, we formulate the bandwidth-constrained traffic sensing (BCTS) problem, aiming to minimize the estimation error of the collected traffic data. To tackle the BCTS problem, we first propose the bandwidth-aware data collection (BDC) algorithm to select the optimal uploaded traffic data by evaluating the priority of each road segment covered by the MEC server. Then, we propose the convex-based data recovery (CDR) algorithm to minimize the estimation error by transforming the BCTS problem into an ℓ2-norm minimization problem. Finally, we implement the simulation model and conduct a performance evaluation. The comprehensive simulation results verify the superiority of the proposed algorithms.
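
The NumPy sketch below illustrates the general flavor of ℓ2-norm-based recovery of a traffic matrix with missing entries (rows as road segments, columns as time slots): a data-fit term on observed entries plus a temporal-smoothness regularizer, minimized by gradient descent. The regularizer, step size, and toy data are assumptions for illustration; the paper's CDR algorithm solves its own convex formulation, which is not reproduced here.

import numpy as np

def recover(observed, mask, lam=0.1, lr=0.05, iters=500):
    """Minimize ||mask * (X - observed)||_2^2
               + lam * ||X[:, 1:] - X[:, :-1]||_2^2 by gradient descent."""
    X = np.where(mask, observed, observed[mask].mean())  # init missing entries
    for _ in range(iters):
        grad = 2 * mask * (X - observed)          # fit the observed entries
        diff = X[:, 1:] - X[:, :-1]               # temporal smoothness term
        grad[:, 1:] += 2 * lam * diff
        grad[:, :-1] -= 2 * lam * diff
        X -= lr * grad
    return X

# toy example: 4 road segments x 6 time slots, roughly 30% of entries missing
rng = np.random.default_rng(0)
truth = rng.uniform(20, 60, size=(4, 6))
mask = rng.random(truth.shape) > 0.3
estimate = recover(np.where(mask, truth, 0.0), mask)
print(np.abs(estimate - truth)[~mask].mean())     # error on the missing entries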

4.
PLoS One ; 8(12): e81683, 2013.
Article in English | MEDLINE | ID: mdl-24349110

ABSTRACT

One of the most important applications of microarray data is the class prediction of biological samples. For this purpose, statistical tests have often been applied to identify the differentially expressed genes (DEGs), followed by the employment of state-of-the-art learning machines, the Support Vector Machine (SVM) in particular. The SVM is a typical sample-based classifier whose performance depends on how discriminant the samples are. However, DEGs identified by statistical tests are not guaranteed to result in a training dataset composed of discriminant samples. To tackle this problem, a novel gene ranking method, Kernel Matrix Gene Selection (KMGS), is proposed. The rationale of the method, which is rooted in the fundamental ideas of the SVM algorithm, is described. The notion of "the separability of a sample", which is estimated by computing [Formula: see text]-like statistics on each column of the kernel matrix, is first introduced. The separability of a classification problem is then measured, from which the significance of a specific gene is deduced. Also described is the Kernel Matrix Sequential Forward Selection (KMSFS) method, which shares the KMGS method's essential ideas but proceeds in a greedy manner. On three public microarray datasets, our proposed algorithms achieved noticeably competitive performance in terms of the B.632+ error rate.
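
The Python sketch below illustrates the kernel-matrix separability idea in miniature: a t-like statistic is computed on each column of a linear kernel matrix, the column scores are averaged into a problem-level separability, and each gene is ranked by how much the separability drops when that gene is removed. The leave-one-gene-out scoring and the linear kernel are illustrative assumptions, not the exact KMGS procedure.

import numpy as np

def separability(X, y):
    """Mean t-like statistic over the columns of the linear kernel matrix."""
    K = X @ X.T
    scores = []
    for col in K.T:                               # one column per sample
        a, b = col[y == 0], col[y == 1]
        spread = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        scores.append(abs(a.mean() - b.mean()) / (spread + 1e-12))
    return float(np.mean(scores))

def rank_genes(X, y):
    """Rank genes by the drop in separability when each gene is left out."""
    base = separability(X, y)
    drops = [base - separability(np.delete(X, g, axis=1), y)
             for g in range(X.shape[1])]
    return np.argsort(drops)[::-1]                # most significant first

# toy data: 20 samples x 5 genes, where gene 0 carries the class signal
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 10)
X = rng.normal(size=(20, 5))
X[:, 0] += 3 * y
print(rank_genes(X, y))                           # gene 0 expected to rank first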


Subject(s)
Algorithms , Colonic Neoplasms/genetics , Gene Expression Regulation, Neoplastic , Leukemia/genetics , Prostatic Neoplasms/genetics , Support Vector Machine , Databases, Genetic , Gene Expression Profiling , Humans , Male , Normal Distribution , Oligonucleotide Array Sequence Analysis