ABSTRACT
People throughout the world have suffered from the COVID-19 pandemic. Because infection can follow even brief contact, effectively assessing everyone's infection risk is a difficult challenge. In view of this challenge, combining wireless networks with edge computing offers new possibilities for COVID-19 prevention. With this observation, this paper proposes a game theory-based COVID-19 close contact detection method with edge computing collaboration, named GCDM. GCDM efficiently detects COVID-19 close-contact infection from users' location information. By exploiting the features of edge computing, GCDM can meet the computing and storage requirements of detection while mitigating the user privacy problem. Technically, as the game reaches equilibrium, GCDM maximizes the close contact detection completion rate while minimizing the latency and cost of the evaluation process in a decentralized manner. GCDM is described in detail, and its performance is analyzed theoretically. Extensive experiments were conducted, and the results demonstrate the superior performance of GCDM over three other representative methods through comprehensive analysis.
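The abstract does not give GCDM's utility function, but the decentralized equilibrium idea can be illustrated with a minimal sketch of best-response dynamics for an offloading game: each user repeatedly switches to the edge server that minimizes its own latency-plus-cost given everyone else's choices, until no one wants to deviate (a Nash equilibrium). The congestion-style cost model, server counts, and parameters below are illustrative assumptions, not the paper's method.

```python
import random

# Hypothetical best-response dynamics for a singleton congestion game:
# each user picks the edge server minimizing its own latency + cost,
# given the others' current choices. Cost model is an assumption.

N_USERS, N_SERVERS = 20, 4
BASE_LATENCY = [1.0, 1.5, 2.0, 2.5]   # assumed per-server base latency
UNIT_COST = [0.5, 0.4, 0.3, 0.2]      # assumed per-server usage cost

def cost(server, load):
    # Latency grows with load (congestion); usage cost is fixed per server.
    return BASE_LATENCY[server] * load + UNIT_COST[server]

random.seed(0)
choice = [random.randrange(N_SERVERS) for _ in range(N_USERS)]

changed = True
while changed:                         # loop until Nash equilibrium
    changed = False
    for u in range(N_USERS):
        load = [choice.count(s) for s in range(N_SERVERS)]
        load[choice[u]] -= 1           # exclude u's own contribution
        best = min(range(N_SERVERS), key=lambda s: cost(s, load[s] + 1))
        if best != choice[u]:
            choice[u], changed = best, True

print("equilibrium assignment:", choice)
```

Best-response dynamics are guaranteed to converge here because singleton congestion games admit a Rosenthal potential function.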
ABSTRACT
Unsupervised cluster detection in social network analysis groups social actors into distinct clusters, such that users in the same cluster are semantically very similar to each other and dissimilar to users in different clusters. Social network clustering reveals a wide range of useful information about users and has many applications in daily life. Various approaches have been developed to find clusters of social network users, using either links alone or both attributes and links. This work proposes a method for detecting clusters of social network users based solely on their attributes, which are treated as categorical values. The most popular clustering algorithm for categorical data is the K-mode algorithm; however, it may get stuck in a local optimum because of its random initialization of centroids. To overcome this issue, this manuscript proposes a methodology named the Quantum PSO (QPSO) approach based on user similarity maximization. In the proposed approach, dimensionality reduction is first conducted by selecting the relevant attribute set and then removing redundant attributes. Second, the QPSO technique is used to maximize the similarity score between users to obtain clusters. Three different similarity measures are used separately to perform the dimensionality reduction and similarity maximization processes. Experiments are conducted on two popular social network datasets, ego-Twitter and ego-Facebook. The results show that the proposed approach produces better clustering results than the K-mode and K-means algorithms in terms of three different performance metrics.
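To make the criticized baseline concrete, here is a minimal K-modes sketch (Hamming dissimilarity between categorical rows, column-wise majority as the cluster "mode"). The random centroid initialization on the first line of the loop is exactly the weakness the abstract attributes to K-modes; the QPSO-based similarity maximization is the paper's proposed remedy and is not reproduced here. The toy data is invented for illustration.

```python
import random
from collections import Counter

def hamming(a, b):
    # Dissimilarity between two categorical rows: count of mismatched attributes.
    return sum(x != y for x, y in zip(a, b))

def k_modes(data, k, iters=20, seed=0):
    random.seed(seed)
    modes = random.sample(data, k)               # random init: the local-optimum risk
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for row in data:                         # assign each row to nearest mode
            clusters[min(range(k), key=lambda j: hamming(row, modes[j]))].append(row)
        for j, members in enumerate(clusters):
            if members:                          # new mode: column-wise majority value
                modes[j] = tuple(Counter(col).most_common(1)[0][0]
                                 for col in zip(*members))
    return modes, clusters

data = [("a","x","p"), ("a","x","q"), ("b","y","q"), ("b","y","p"), ("a","y","p")]
modes, clusters = k_modes(data, k=2)
print(modes, clusters, sep="\n")
```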
ABSTRACT
Estimating the frequency of people's physical symptoms is the most direct way to analyze and predict infectious diseases. In the Internet of Medical Things (IoMT), it is efficient and convenient for users to report their physical symptoms to hospitals or disease prevention departments via various mobile devices. Unfortunately, this usually creates a leakage risk for these symptoms, since data receivers may be untrusted. As a strong metric for health privacy, local differential privacy (LDP) requires that users perturb their symptoms to prevent this risk. However, the widely used data structure for frequency estimation, the sketch, does not satisfy this requirement. In this paper, we first define the problem of frequency estimation of physical symptoms under LDP. Then, we propose four different protocols, i.e., CMS-LDP, FCS-LDP, CS-LDP and FAS-LDP, to solve this problem. Next, we demonstrate that the designed protocols satisfy LDP and provide unbiased estimation. We also present two approaches to implementing the key component of the protocols, namely universal hash functions. Finally, we conduct experiments to evaluate the four protocols on two real-world datasets representing two different distributions of physical symptoms. The results show that CMS-LDP and CS-LDP achieve relatively optimal utility for frequency estimation of physical symptoms in IoMT.
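The exact CMS-LDP/FCS-LDP constructions are not given in the abstract, so the sketch below is a simplified stand-in combining the two named ingredients: a universal hash (of the form ((a*x + b) mod P) mod M) compresses the symptom domain into M buckets, and each client one-hot encodes its bucket and flips every bit symmetrically, which satisfies epsilon-LDP; the server then debiases the aggregated counts. The hash coefficients, budget, and data are assumptions.

```python
import math, random

EPS, M = 2.0, 64
P = 2_147_483_647                          # prime modulus for the universal hash
A, B = 48_271, 12_345                      # assumed hash coefficients
p = math.exp(EPS / 2) / (math.exp(EPS / 2) + 1)   # keep-bit probability
q = 1 - p                                  # flip-bit probability; (p/q)^2 = e^EPS

def bucket(symptom):
    # Universal hash h(x) = ((a*x + b) mod P) mod M; Python's built-in
    # hash() stands in for a proper string hash within one process.
    return ((A * hash(symptom) + B) % P) % M

def client_report(symptom):
    # One-hot encode the bucket, then perturb every bit independently.
    v = bucket(symptom)
    return [int((random.random() < p) if i == v else (random.random() < q))
            for i in range(M)]

def estimate_count(reports, symptom):
    # Unbiased estimate: E[c] = n_v * p + (n - n_v) * q, so solve for n_v.
    n, i = len(reports), bucket(symptom)
    c = sum(r[i] for r in reports)
    return (c - n * q) / (p - q)

random.seed(7)
data = ["fever"] * 600 + ["cough"] * 300 + ["fatigue"] * 100
reports = [client_report(s) for s in data]
print(round(estimate_count(reports, "fever")))   # close to 600 (plus hash-collision error)
```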
ABSTRACT
Service recommendation has become an effective way to quickly extract insightful information from massive data. In the cloud environment, however, the quality of service (QoS) data used to make recommendation decisions are often monitored by distributed sensors and stored in different cloud platforms. In this situation, integrating these distributed data (monitored by remote sensors) across different platforms while guaranteeing user privacy is an important but challenging task for successful service recommendation in the cloud environment. Locality-Sensitive Hashing (LSH) is a promising way to achieve both data integration and privacy preservation, but current LSH-based recommendation studies seldom consider possible recommendation failures, which significantly reduces the robustness of recommender systems. In view of this challenge, we develop a new LSH variant named converse LSH, and then propose an exception handling approach for recommendation failures based on the converse LSH technique. Finally, we conduct several simulated experiments on the well-known MovieLens dataset to prove the effectiveness and efficiency of our approach.
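The paper's converse LSH variant is not specified in the abstract, so the sketch below only illustrates the base technique it builds on: standard random-hyperplane LSH over user rating vectors, where users whose bit signatures match fall into the same bucket and become candidate neighbors without exchanging raw QoS data. Dimensions and data are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_planes = 100, 50, 8

# Stand-in user-item rating matrix (0 = unrated, 1-5 = rating).
ratings = rng.integers(0, 6, size=(n_users, n_items)).astype(float)
planes = rng.standard_normal((n_planes, n_items))     # random hyperplanes

def signature(v):
    # One bit per hyperplane: which side of the plane the vector falls on.
    # Similar vectors (small angle) agree on most bits with high probability.
    return tuple((planes @ v > 0).astype(int))

buckets = {}
for u in range(n_users):
    buckets.setdefault(signature(ratings[u]), []).append(u)

# Candidate neighbors of user 0 are the users sharing its bucket;
# only signatures, not raw ratings, need to cross platform boundaries.
print(buckets[signature(ratings[0])])
```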
ABSTRACT
With the development of Internet of Things (IoT) technology, a vast amount of IoT data is generated by mobile applications on mobile devices. Cloudlets provide a paradigm that allows mobile applications and the generated IoT data to be offloaded from mobile devices to cloudlets for processing and storage through the access points (APs) of a Wireless Metropolitan Area Network (WMAN). Since most IoT data concerns personal privacy, data transmission security requires careful attention. However, it remains a challenge to optimize data transmission time, energy consumption and resource utilization while accounting for privacy preservation in a cloudlet-enabled WMAN. In this paper, an IoT-oriented offloading method with privacy preservation, named IOM, is proposed to solve this problem. The privacy-preserving task-offloading strategy in WMANs is analyzed and modeled as a constrained multi-objective optimization problem. Then, Dijkstra's algorithm is employed to evaluate the shortest path between APs in the WMAN, and the nondominated sorting differential evolution algorithm (NSDE) is adopted to optimize the proposed multi-objective problem. Finally, experimental results demonstrate that the proposed method is both effective and efficient.
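The abstract explicitly names Dijkstra's algorithm for the AP-to-AP shortest path, so here is a standard implementation over an assumed toy WMAN topology (edge weights standing in for link latency). The NSDE multi-objective step is too involved for a short sketch and is omitted.

```python
import heapq

def dijkstra(graph, src):
    # Standard Dijkstra with a binary heap: O((V + E) log V).
    dist = {v: float("inf") for v in graph}
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                       # stale queue entry, skip
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

# Toy access-point graph: AP -> [(neighbor, link latency)]; values assumed.
wman = {"AP1": [("AP2", 2.0), ("AP3", 5.0)],
        "AP2": [("AP1", 2.0), ("AP3", 1.5), ("AP4", 4.0)],
        "AP3": [("AP1", 5.0), ("AP2", 1.5), ("AP4", 1.0)],
        "AP4": [("AP2", 4.0), ("AP3", 1.0)]}

print(dijkstra(wman, "AP1"))    # shortest latency from AP1 to every AP
```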
ABSTRACT
In the present era of the pandemic, vaccination is necessary to prevent severe infectious diseases such as COVID-19. Vaccine safety, in particular, is strongly linked to global health and security. However, vaccine record forgery and vaccine counterfeiting remain common concerns in traditional vaccine supply chains, which lack proper authentication among all supply chain entities. Blockchain technology is an excellent contender for resolving these issues. Although blockchain-based vaccine supply chains can potentially satisfy the objectives and functions of the next-generation supply chain model, their integration with the supply chain model is still constrained by substantial scalability and security issues. Consequently, current blockchain technology with the traditional Proof-of-Work (PoW) consensus is incompatible with the next-generation vaccine supply chain framework. This paper introduces "VaccineChain", a novel checkpoint-assisted, scalable, blockchain-based secure vaccine supply chain. VaccineChain guarantees the complete integrity and immutability of vaccine supply records to combat counterfeit vaccines in the supply chain. A dynamic consensus algorithm with various validating difficulty levels supports the efficient scalability of VaccineChain. Moreover, VaccineChain includes anonymous authentication among entities to provide selective revocation. This work also presents a use case of a secure vaccine supply chain using checkpoint-assisted scalable blockchain with customized transaction generation rules and smart contracts to demonstrate the application of VaccineChain. A comprehensive security analysis with standard theoretical proofs establishes the computational infeasibility of attacking VaccineChain, and a detailed performance analysis with test simulations shows its practicability.
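VaccineChain's actual consensus, checkpointing, and smart contracts are far richer than the abstract reveals; the toy sketch below only makes the "various validating difficulty levels" idea concrete: each block records a vaccine supply event, is hash-linked to its predecessor, and is mined under an adjustable proof-of-work difficulty. All fields and transactions are illustrative assumptions.

```python
import hashlib, json, time

def mine(block, difficulty):
    # Find a nonce whose SHA-256 block hash starts with `difficulty`
    # zero hex digits; higher difficulty means more work per block.
    target = "0" * difficulty
    while True:
        payload = json.dumps(block, sort_keys=True).encode()
        digest = hashlib.sha256(payload).hexdigest()
        if digest.startswith(target):
            return digest
        block["nonce"] += 1

genesis = {"index": 0, "prev": "0" * 64, "nonce": 0,
           "tx": [{"batch": "VAX-001", "event": "manufactured"}],
           "time": time.time()}
genesis_hash = mine(genesis, difficulty=3)

block1 = {"index": 1, "prev": genesis_hash, "nonce": 0,
          "tx": [{"batch": "VAX-001", "event": "shipped"}],
          "time": time.time()}
block1_hash = mine(block1, difficulty=4)     # stricter validating level

print(genesis_hash, block1_hash, sep="\n")
```

Because each block embeds its predecessor's hash, altering any historical supply record invalidates every later block, which is the integrity property the abstract relies on.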
ABSTRACT
Recently, machine/deep learning techniques have achieved remarkable success in a variety of intelligent control and management systems, promising to change the future of artificial intelligence (AI) scenarios. However, they still suffer from intractable difficulties and limitations in model training, such as the out-of-distribution (OOD) issue, in modern smart manufacturing and intelligent transportation systems (ITSs). In this study, we design and introduce a new deep generative model framework that seamlessly incorporates information theoretic learning (ITL) and causal representation learning (CRL) in a dual generative adversarial network (Dual-GAN) architecture, aiming to enhance robust OOD generalization in modern machine learning (ML) paradigms. In particular, an ITL- and CRL-enhanced Dual-GAN (ITCRL-DGAN) model is presented, which includes an autoencoder with CRL (AE-CRL) structure to aid the dual-adversarial training with causality-inspired feature representations and a Dual-GAN structure to improve data augmentation at both the feature and data levels. Following a newly designed feature separation strategy, a causal graph is built and refined based on information theory, which strengthens the causally related factors among the separated core features and further enriches the feature representation with counterfactual features generated via interventions on the refined causal relationships. ITL is incorporated to improve the extraction of low-dimensional feature representations and to learn optimized causal representations based on the idea of "information flow." A dual-adversarial training mechanism is then developed, which not only enables the generator to expand the boundary of the feature distribution in accordance with the optimized feature representation from the AE-CRL, but also allows the discriminator to further verify and improve the quality of the augmented data for OOD generalization. Experimental and evaluation results on an open-source dataset demonstrate the outstanding learning efficiency and classification performance of our proposed model for robust OOD generalization in modern smart applications, compared with three baseline methods.
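The full ITCRL-DGAN (dual GANs plus a causal autoencoder) is well beyond a short sketch; the PyTorch fragment below shows only a minimal single-GAN adversarial step for feature-level augmentation, to make the dual-adversarial training idea concrete. All network shapes, losses, and the stand-in "real features" are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

LATENT, FEAT = 16, 32

# Generator maps noise to synthetic feature vectors; discriminator
# scores whether a feature vector looks real. Both are assumed MLPs.
G = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, FEAT))
D = nn.Sequential(nn.Linear(FEAT, 64), nn.ReLU(), nn.Linear(64, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(128, FEAT)   # stand-in for encoded real features (e.g. from AE-CRL)

for step in range(200):
    # Discriminator step: separate real features from generated ones.
    fake = G(torch.randn(128, LATENT)).detach()
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push generated features toward the real manifold,
    # i.e. expand the augmented-feature distribution the abstract describes.
    fake = G(torch.randn(128, LATENT))
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

print(float(loss_d), float(loss_g))
```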
ABSTRACT
Healthcare uses state-of-the-art technologies (such as wearable devices, blood glucose meters, and electrocardiographs), which results in the generation of large amounts of data. Healthcare data are essential in patient management and play a critical role in transforming healthcare services, medical scheme design, and scientific research. Missing data is a challenging problem in healthcare: system failures and untimely filing can lead to inaccurate diagnoses and treatment anomalies. There is therefore a need to accurately predict and impute missing data, as only complete data can provide a scientific and comprehensive basis for patients, doctors, and researchers. However, traditional approaches in this paradigm often neglect the effect of the time factor on forecasting results. This paper proposes a time-aware missing healthcare data prediction approach based on the autoregressive integrated moving average (ARIMA) model. We combine truncated singular value decomposition (SVD) with the ARIMA model to improve the prediction efficiency of ARIMA and to remove data redundancy and noise. Through the improved ARIMA model, our proposed approach (named MHDP SVD_ARIMA) can capture the underlying patterns of healthcare data changes over time and accurately predict missing data. Experiments conducted on the WISDM dataset show that the MHDP SVD_ARIMA approach is effective and efficient in predicting missing healthcare data.
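A hedged sketch of the SVD-then-ARIMA idea follows: truncated SVD removes redundancy and noise from a matrix of sensor time series, and ARIMA then forecasts the missing tail of one denoised series. The paper's exact pipeline, truncation rank, and ARIMA orders are not stated in the abstract; the synthetic data and (2, 0, 1) order below are assumptions.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
t = np.arange(200)
clean = np.sin(0.1 * t)                          # shared latent pattern
X = np.stack([clean + 0.3 * rng.standard_normal(200) for _ in range(10)])

# Truncated SVD: keep the top-r singular components to denoise the matrix.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 2
X_denoised = U[:, :r] @ np.diag(s[:r]) @ Vt[:r]

# Fit ARIMA on the first 180 points of one denoised series, then
# "impute" the missing last 20 points by forecasting them.
series = X_denoised[0]
model = ARIMA(series[:180], order=(2, 0, 1)).fit()
imputed = model.forecast(steps=20)

print(np.mean(np.abs(imputed - clean[180:])))    # error vs. ground truth
```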
ABSTRACT
Non-negative Matrix Factorization (NMF) is a dimensionality reduction approach for learning a parts-based, linear representation of non-negative data, and it has attracted considerable attention for this reason. In practice, however, standard NMF neglects the manifold structure of data samples and overlooks the prior label information of different classes. In this paper, a novel matrix decomposition method called Hyper-graph regularized Constrained Non-negative Matrix Factorization (HCNMF) is proposed for selecting differentially expressed genes and classifying tumor samples. The advantage of hyper-graph learning is its ability to capture local spatial information in high-dimensional data; HCNMF incorporates a hyper-graph regularization constraint to account for higher-order relationships among data samples. Applying hyper-graph theory in this way can effectively identify pathogenic genes in cancer datasets. In addition, label information is incorporated into the objective function to improve the discriminative ability of the decomposition matrix; this supervised use of label information greatly improves the classification effect. We also provide the iterative update rules and convergence proofs for the optimization problems of HCNMF. Experiments on The Cancer Genome Atlas (TCGA) datasets confirm the superiority of the HCNMF algorithm over other representative algorithms across a set of evaluations.
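For reference, here are the standard Lee-Seung multiplicative updates for plain NMF (minimizing ||X - WH||_F^2 with W, H >= 0), the baseline HCNMF extends. HCNMF's hyper-graph regularization and label constraints would add extra terms to these update rules; those terms are not given in the abstract and are not shown. The matrix sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples, k = 100, 40, 5

X = rng.random((n_genes, n_samples))             # non-negative data matrix
W = rng.random((n_genes, k))                     # basis (parts) matrix
H = rng.random((k, n_samples))                   # coefficient matrix

for it in range(200):
    # Multiplicative updates keep W, H non-negative and monotonically
    # decrease the Frobenius reconstruction error (Lee & Seung, 2001).
    H *= (W.T @ X) / (W.T @ W @ H + 1e-10)
    W *= (X @ H.T) / (W @ H @ H.T + 1e-10)

print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative error
```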