ABSTRACT
Cryptography is essential in daily life, not only for the confidentiality of information but also for integrity verification, non-repudiation, authentication, and other purposes. In modern society, cryptography is used everywhere; everything from personal life to national security depends on it. With the emergence of quantum computing, however, traditional encryption methods are at risk of being broken, and researchers have begun exploring defenses against quantum-computer attacks. Among the approaches developed so far, quantum key distribution uses the principles of quantum mechanics to distribute keys, while post-quantum encryption algorithms rely on mathematical problems that quantum computers cannot solve quickly to ensure security. In this study, an integrated review of post-quantum encryption algorithms is conducted from the perspective of traditional cryptography. First, the concept and development background of post-quantum encryption are introduced. Then, the post-quantum encryption algorithm Kyber is studied. Finally, the achievements, difficulties, and open problems in this emerging field are summarized, and some predictions for the future are made.
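To ground the idea, the sketch below is a toy Regev-style LWE bit encryption: it illustrates the lattice-with-noise principle underlying Kyber's Module-LWE security but is not Kyber itself, and every parameter here is an illustrative assumption, far too small to be secure.

```python
import numpy as np

# Toy Regev-style LWE encryption of a single bit.
# Illustrative only: Kyber uses Module-LWE, much larger parameters,
# and carefully specified noise sampling.
q, n = 3329, 16                      # modulus and dimension (toy values)
rng = np.random.default_rng(0)

def keygen():
    A = rng.integers(0, q, (n, n))          # public random matrix
    s = rng.integers(-2, 3, n)              # small secret vector
    e = rng.integers(-2, 3, n)              # small error vector
    b = (A @ s + e) % q                     # public vector hides s behind noise
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = rng.integers(-2, 3, n)              # small ephemeral vector
    u = (A.T @ r) % q
    v = (b @ r + bit * (q // 2)) % q        # embed the bit at q/2 scale
    return u, v

def decrypt(s, ct):
    u, v = ct
    d = (v - s @ u) % q                     # only bit * q/2 + small noise remains
    return int(abs(d - q // 2) < q // 4)    # round to nearest multiple of q/2

pk, sk = keygen()
assert decrypt(sk, encrypt(pk, 1)) == 1
assert decrypt(sk, encrypt(pk, 0)) == 0
```

Decryption works because v - s·u collapses to bit·(q/2) plus the small term e·r, which rounding removes; recovering s from (A, b) is the hard lattice problem.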
ABSTRACT
With the rapid growth of wireless communication and IoT technologies, Radio Frequency Identification (RFID) is being applied to the Internet of Vehicles (IoV) to secure private data and ensure accurate identification and tracking. In traffic congestion scenarios, however, frequent mutual authentication increases the overall computation and communication overhead of the network. For this reason, we propose a lightweight RFID fast authentication protocol for traffic congestion scenarios, together with an ownership transfer protocol that transfers access rights to vehicle tags in non-congestion scenarios. An edge server performs authentication, and elliptic curve cryptography (ECC) is combined with a hash function to protect vehicles' private data. The Scyther tool is used for formal analysis, which shows that the proposed scheme resists the typical attacks found in IoV mobile communication. Experimental results show that, compared to other RFID authentication protocols, the tag's computation and communication overheads are reduced by up to 66.35% in congested scenarios and 66.67% in non-congested scenarios, and by at least 32.71% and 50%, respectively. These results demonstrate a significant reduction in tag computation and communication overhead without sacrificing security.
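The paper's exact protocol is not reproduced here; as a hedged illustration of the general pattern, the sketch below shows a generic nonce-based hash challenge-response between an edge server and a tag, with all names (tag_secret, etc.) hypothetical.

```python
import hashlib, hmac, os

# Generic hash-based challenge-response sketch (not the paper's protocol):
# the edge server and the tag share a secret; a fresh nonce prevents replay.
def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

tag_secret = os.urandom(16)          # shared at registration (assumption)

# Server side: issue a fresh challenge.
nonce = os.urandom(16)

# Tag side: prove knowledge of the secret without revealing it.
tag_response = h(tag_secret, nonce)

# Server side: recompute and compare in constant time.
expected = h(tag_secret, nonce)
assert hmac.compare_digest(tag_response, expected)
```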
Subjects
Radio Frequency Identification Device , Radio Frequency Identification Device/methods , Computer Security , Internet , Algorithms , Communication
ABSTRACT
Federated learning is a machine learning method that can break down data islands. Its inherent privacy-preserving property plays an important role in training medical image models. However, federated learning requires frequent communication, which incurs high communication costs. Moreover, the data is heterogeneous because of differing user preferences, which may degrade model performance. To address this statistical heterogeneity, we propose FedUC, an algorithm that controls the uploaded updates in federated learning through a client scheduling method based on weight divergence, update increment, and loss. We also balance the clients' local data through image augmentation to mitigate the impact of non-independent and identically distributed (non-IID) data. The server assigns gradient-compression thresholds to the clients based on the weight divergence and update increment of their models, reducing wireless communication costs. Finally, based on weight divergence, update increment, and accuracy, the server dynamically assigns weights to the model parameters during aggregation. Simulations on a publicly available chest disease dataset containing COVID-19 cases compare the proposed strategy with existing federated learning methods. Experimental results show that it achieves better training performance, improving model accuracy while reducing wireless communication costs.
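The compression step can be pictured as threshold sparsification: update entries whose magnitude falls below the server-assigned threshold are dropped before transmission. A minimal numpy sketch of that idea follows; the paper's threshold-assignment rule (driven by weight divergence and update increment) is richer than this.

```python
import numpy as np

def compress_update(update: np.ndarray, threshold: float):
    """Keep only entries whose magnitude exceeds the server-assigned
    threshold; transmit (indices, values) instead of the dense update."""
    idx = np.nonzero(np.abs(update) > threshold)[0]
    return idx, update[idx]

def decompress_update(idx, values, size: int) -> np.ndarray:
    """Server side: rebuild a dense (sparse-filled) update for aggregation."""
    sparse = np.zeros(size)
    sparse[idx] = values
    return sparse

rng = np.random.default_rng(1)
u = rng.normal(0, 1, 1000)                  # a client's model update
idx, vals = compress_update(u, threshold=1.5)
print(f"transmitted {len(idx)} of {u.size} entries")
restored = decompress_update(idx, vals, u.size)
```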
ABSTRACT
BACKGROUND: Isaacs' syndrome is a peripheral nerve hyperexcitability (PNH) syndrome caused by peripheral motor nerve instability. Acquired Isaacs' syndrome is recognized as a paraneoplastic autoimmune disease with possibly pathogenic voltage-gated potassium channel (VGKC) complex antibodies. However, the longitudinal correlation between clinical symptoms, VGKC antibody levels, and drug response is still unclear. CASE PRESENTATION: A 45-year-old man had progressive soreness, muscle twitching, cramps, and pain in all four limbs for 4 months before admission. Electromyography (EMG) studies showed myokymic discharges, neuromyotonia, and an incremental response in the high-rate (50 Hz) repetitive nerve stimulation (RNS) test. Isaacs' syndrome was diagnosed based on the clinical presentation and EMG findings. Serum studies were positive for VGKC complex antibodies, including leucine-rich glioma-inactivated 1 (LGI1) and contactin-associated protein-like 2 (CASPR2) antibodies; the acetylcholine receptor antibody was negative. Whole-body computed tomography (CT) and positron emission tomography revealed a mediastinal tumor encasing the great vessels, with seeding to the right pleura and diaphragm. Biopsy confirmed a World Health Organization type B2 thymoma, Masaoka stage IVa. His symptoms gradually improved, and both LGI1 and CASPR2 antibody titers became undetectable after concurrent chemoradiotherapy (CCRT) and high-dose steroid treatment. However, his Isaacs' syndrome recurred after the steroid dose was reduced 5 months later. Follow-up chest CT showed probable thymoma progression; the LGI1 antibody turned positive again while the CASPR2 antibody remained undetectable. CONCLUSIONS: Our patient demonstrates that Isaacs' syndrome can be the initial and only neuromuscular manifestation of malignant thymoma. His Isaacs' syndrome correlated well with the LGI1 antibody level. With an unresectable thymoma, long-term immunosuppressant therapy may be necessary to manage Isaacs' syndrome in addition to CCRT for the thymoma.
Subjects
Isaacs Syndrome , Potassium Channels, Voltage-Gated , Thymoma , Thymus Neoplasms , Autoantibodies , Humans , Isaacs Syndrome/complications , Isaacs Syndrome/diagnosis , Male , Middle Aged , Neoplasm Recurrence, Local , Potassium Channels, Voltage-Gated/therapeutic use , Thymoma/complications , Thymoma/diagnosis , Thymoma/therapy , Thymus Neoplasms/complications , Thymus Neoplasms/diagnosis
ABSTRACT
Traditional pattern recognition and classification of crop diseases requires collecting a large amount of data in the field and then sending it over the network to a server for recognition and classification. This usually takes a long time, is expensive, and makes timely monitoring of crop diseases difficult, delaying diagnosis and treatment. With the emergence of edge computing, the pattern recognition algorithm can instead be deployed in the farmland environment to monitor crop growth promptly. However, because edge devices have limited resources, the original deep recognition models are challenging to apply there. Therefore, this article proposes a recognition model based on a depthwise separable convolutional neural network (DSCNN), whose structure significantly reduces both the number of parameters and the amount of computation, making the design well suited for the edge. To show its effectiveness, simulation results are compared with those of the mainstream convolutional neural network (CNN) models LeNet and Visual Geometry Group Network (VGGNet); while maintaining high recognition accuracy, the proposed model reduces recognition time by 80.9% and 94.4%, respectively. Given its fast recognition speed and high accuracy, the model is suitable for real-time monitoring and recognition of crop diseases by provisioning remote embedded equipment and deploying the model with edge computing.
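A depthwise separable convolution factors a standard convolution into a per-channel (depthwise) filter followed by a 1x1 (pointwise) channel-mixing step, cutting weights from k²·C_in·C_out to k²·C_in + C_in·C_out. The PyTorch sketch below shows one such block; the paper's full DSCNN architecture is not specified here.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise (per-channel) convolution followed by a 1x1 pointwise
    convolution -- the factorization underlying a DSCNN model."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, k, padding=k // 2,
                                   groups=in_ch)     # one filter per channel
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1)  # mix channels

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# A 3x3 standard conv from 32 to 64 channels uses 3*3*32*64 = 18,432 weights;
# the separable version uses 3*3*32 + 32*64 = 2,336 (biases excluded).
block = DepthwiseSeparableConv(32, 64)
y = block(torch.randn(1, 32, 56, 56))        # -> shape (1, 64, 56, 56)
```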
Subjects
Algorithms , Neural Networks, Computer , Computer Simulation
ABSTRACT
The Distance Vector-Hop (DV-Hop) algorithm is the best-known range-free localization algorithm based on the distance vector routing protocol in wireless sensor networks; however, its localization accuracy is known to be limited. In this paper, DEIDV-Hop is proposed, an enhanced wireless sensor node localization algorithm based on differential evolution (DE) and an improved DV-Hop algorithm, which mitigates the error in the estimated average distance per hop. Random individuals are introduced into the mutation operation to increase population diversity, alleviating the search stagnation and premature convergence of the DE algorithm. On the basis of the generated individual, the social learning component of Particle Swarm Optimization (PSO) is embedded into the crossover operation, which accelerates convergence and improves the optimization result. The improved DE algorithm is applied to obtain the global optimal solution corresponding to the estimated location of the unknown node. Across four different network environments, simulation results show that the proposed algorithm achieves smaller localization errors and better stability than previous ones, making it promising for application scenarios with high localization accuracy and stability requirements.
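One generation of such a hybrid can be sketched as a DE/rand/1 mutation followed by a binomial crossover into which the PSO social-learning pull toward the global best is blended. The parameter names and the exact blending below are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(2)
F, CR, c = 0.5, 0.9, 1.5      # DE scale, crossover rate, PSO social weight

def de_pso_step(pop: np.ndarray, fitness, gbest: np.ndarray) -> np.ndarray:
    """One generation of a DE variant whose crossover blends in the
    PSO social-learning term c * r * (gbest - x). Illustrative sketch."""
    n, d = pop.shape
    new_pop = pop.copy()
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3,
                                replace=False)
        mutant = pop[r1] + F * (pop[r2] - pop[r3])        # DE/rand/1 mutation
        social = c * rng.random(d) * (gbest - pop[i])     # PSO social pull
        cross = rng.random(d) < CR
        trial = np.where(cross, mutant + social, pop[i])  # hybrid crossover
        if fitness(trial) < fitness(pop[i]):              # greedy selection
            new_pop[i] = trial
    return new_pop

# Toy usage: minimize squared distance to a "true node position".
true_pos = np.array([3.0, 4.0])
fit = lambda x: np.sum((x - true_pos) ** 2)
pop = rng.uniform(0, 10, (20, 2))
for _ in range(50):
    gbest = pop[np.argmin([fit(x) for x in pop])]
    pop = de_pso_step(pop, fit, gbest)
print(pop[np.argmin([fit(x) for x in pop])])   # converges toward [3, 4]
```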
ABSTRACT
The Time-based One-Time Password (TOTP) algorithm is commonly used for two-factor authentication. In this algorithm, a shared secret is used to derive a One-Time Password (OTP). However, because the client and the server must agree on a shared secret (i.e., a key), an adversary who hacks the server can construct valid OTPs from the compromised key. To solve this problem, Kogan et al. proposed T/Key, an OTP algorithm based on a hash chain; however, T/Key generates and verifies OTPs inefficiently. In this article, we propose a novel and efficient Merkle tree-based One-Time Password (MOTP) algorithm to overcome these limitations. Compared to T/Key, this proposal reduces the number of hash operations needed to generate and verify an OTP, at the cost of small server-side storage and tolerable client-side storage. Experimental analysis and security evaluation show that MOTP resists leakage attacks against the server and adds only a tiny delay to two-factor authentication and verification.
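The core idea can be pictured as follows: the client's one-time secrets are the leaves of a Merkle tree, the server stores only the root, and each OTP is presented together with its authentication path so the server verifies it with O(log n) hashes. The sketch below shows this mechanism only; MOTP's exact construction and storage trade-offs are richer.

```python
import hashlib, os

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Client: derive 8 one-time secrets and build a Merkle tree over them.
secrets = [os.urandom(16) for _ in range(8)]
level = [H(s) for s in secrets]             # leaves
tree = [level]
while len(level) > 1:
    level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    tree.append(level)
root = tree[-1][0]                          # the only value the server stores

def auth_path(index: int):
    """Sibling hashes from leaf to root for the given leaf index."""
    path = []
    for lvl in tree[:-1]:
        path.append(lvl[index ^ 1])         # sibling at this level
        index //= 2
    return path

def verify(secret: bytes, index: int, path, root: bytes) -> bool:
    """Server side: recompute the root from the revealed secret."""
    node = H(secret)
    for sibling in path:
        node = H(node + sibling if index % 2 == 0 else sibling + node)
        index //= 2
    return node == root

# One login: client reveals secret #5 plus its path; server checks the root.
assert verify(secrets[5], 5, auth_path(5), root)
```

A leaked root is useless to an attacker, since the one-way hash hides the leaf secrets, which addresses the server-compromise weakness of plain TOTP.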
ABSTRACT
The trustworthiness of data is vital to data analysis in the age of big data. In cyber-physical systems, most data is collected by sensors. As the number of sensors acting as Internet of Things (IoT) nodes in the network increases, the security risks of data tampering, unauthorized access, identity forgery, and the like grow rapidly because of vulnerable nodes, leading to great economic and social losses. This paper proposes a security scheme, Securing Nodes in IoT Perception Layer (SNPL), for protecting nodes in the perception layer. SNPL is built from novel lightweight algorithms that ensure security while satisfying performance requirements, together with safety technologies that provide security isolation for sensitive operations. A series of experiments with different types and numbers of nodes is presented. Experimental results and performance analysis show that SNPL is efficient and effective at protecting the IoT from faulty or malicious nodes. Some potential practical application scenarios are also discussed to motivate real-world implementation of the proposed scheme.
ABSTRACT
Rapid advances in the Internet of Things (IoT) have exposed the underlying hardware devices to security threats. As the major component of a hardware device, the integrated circuit (IC) chip likewise faces the threat of illegal, malicious attacks. To protect a chip against attacks and vulnerabilities, credible authentication is of fundamental importance. In this paper, we propose a Hausdorff distance-based method to authenticate the identity of IC chips in IoT environments: the design structure is analyzed, and the lookup table (LUT) resources are treated as a set of reconfigurable nodes in field-programmable gate array (FPGA)-based IC design. Unused LUT resources are selected for inserting the copyright information using a depth-first search, and the random positions are then reordered with the Hausdorff distance matching function so that they map onto the specific constraints of the optimal watermark positions. When the authentication process is activated, virtual positions are mapped to the initial key file, and the identity of the IC design can be authenticated through the mapping relationship of the Hausdorff distance function. Experimental results show that the proposed method achieves good randomness and secrecy in watermark embedding, and the extra resource overhead caused by the watermark is acceptably low.
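The Hausdorff distance between point sets A and B is max(sup_{a in A} inf_{b in B} d(a,b), sup_{b in B} inf_{a in A} d(a,b)). The numpy sketch below computes the metric itself; the watermark-position mapping built on top of it is specific to the paper and not reproduced.

```python
import numpy as np

def directed_hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Max over points of A of the distance to the nearest point of B."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))

# Toy usage: compare candidate watermark positions against target positions.
candidates = np.array([[0, 0], [2, 1], [5, 5]], dtype=float)
targets    = np.array([[0, 1], [2, 2], [5, 4]], dtype=float)
print(hausdorff(candidates, targets))       # 1.0 for these toy sets
```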
ABSTRACT
Device-to-device (D2D) communication is a promising technique for direct communication that enhances the performance of cellular networks. To improve system throughput and spectrum utilization, this paper proposes a resource allocation mechanism for underlaid D2D communication in which D2D pairs reuse the resource blocks (RBs) of cellular uplink users, with a matching matrix recording the allocation results. The mechanism works as follows: each D2D pair determines its own transmit power through a distributed power control method, and D2D pairs are assigned to clusters, the intended user sets of the RBs, according to a signal-to-interference-plus-noise ratio (SINR) threshold. A weighted efficiency interference-aware (WE-I-A) algorithm is then proposed and applied to raise system throughput by optimizing the matching between D2D pairs and RBs, where each D2D pair is weighted by its SINR so that it competes fairly for RB priority. Simulation results demonstrate that the proposed algorithm achieves good system throughput even when the uplink state is limited.
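The SINR driving both the power control and the clustering is SINR = P·g / (I + N0). The toy sketch below evaluates it for one D2D pair reusing a cellular uplink RB and gates admission by a threshold; all gains, powers, and the threshold are illustrative assumptions.

```python
# Toy SINR check for a D2D pair reusing a cellular uplink RB.
# All link gains and powers below are illustrative assumptions.
p_d2d, g_d2d = 0.1, 0.8        # D2D transmit power (W) and direct-link gain
p_cell, g_int = 0.2, 0.05      # cellular user's power and interference gain
noise = 1e-3                   # noise power N0

def sinr(p_tx: float, g_direct: float, interference: float) -> float:
    """Signal-to-interference-plus-noise ratio on the shared RB."""
    return p_tx * g_direct / (interference + noise)

gamma = sinr(p_d2d, g_d2d, p_cell * g_int)
threshold = 5.0                # admission threshold for this RB cluster
print(f"SINR = {gamma:.1f}, admitted: {gamma >= threshold}")
# 0.1*0.8 / (0.2*0.05 + 0.001) = 0.08 / 0.011 ~ 7.3 -> admitted
```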
ABSTRACT
Task assignment is a crucial problem in wireless sensor networks (WSNs) that can affect the completion quality of sensing tasks. From the perspective of global optimization, a transmission-oriented reliable and energy-efficient task allocation (TRETA) is proposed, based on a comprehensive multi-level view of the network and an evaluation model for transmission in WSNs. To deliver better fault tolerance, TRETA adjusts dynamically in an event-driven mode. To solve the reliable and efficient distributed task allocation problem in WSNs, two distributed task assignment schemes based on TRETA are proposed. In the first, the sink assigns reliability targets to all cluster heads according to the overall reliability requirements, and each cluster head performs local task allocation subject to its assigned phase target reliability constraints; simulation results show reduced communication cost and latency compared to centralized task assignment. In the second, the global view is obtained by fetching local views from multiple sink nodes, so that multiple sinks share a consistent comprehensive view for global optimization. Responding to local task allocation requirements without communicating with remote nodes overcomes the disadvantages of centralized task allocation in large-scale sensor networks, namely significant communication overhead and considerable delay, and scales better.
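One way to read the reliability assignment: if an end-to-end task must succeed with probability R and its execution decomposes into k independent serial phases (one per cluster head), each phase can be given the target R^(1/k). This serial-independence reading is an assumption for illustration, not the paper's exact model.

```python
# Splitting a global reliability target across k independent serial phases:
# each phase must meet R ** (1/k) so that the product of all phases is R.
# The serial-independence model here is an illustrative assumption.
def phase_reliability(global_target: float, k: int) -> float:
    return global_target ** (1.0 / k)

R, k = 0.95, 4
r_phase = phase_reliability(R, k)
print(f"per-phase target: {r_phase:.4f}")   # ~0.9873
assert abs(r_phase ** k - R) < 1e-9
```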
ABSTRACT
Node position information is critical in wireless sensor networks (WSNs). However, existing positioning algorithms commonly suffer from low accuracy due to noise interference in communication. This paper therefore proposes an iterative positioning algorithm based on distance correction to improve the positioning accuracy of target nodes in WSNs. Its contributions are: (1) a log-distance distribution model of received signal strength indication (RSSI) ranging is built, from which a noise impact factor is derived; (2) the initial position coordinates of the target node are obtained with a triangle centroid localization algorithm, from which the distance deviation coefficient under the influence of noise is calculated; and (3) the ratio of the distance measured by the log-distance model to the median distance deviation coefficient is taken as the new distance between the target node and the anchor node. Based on this new distance, the triangle centroid positioning algorithm recalculates the coordinates of the target node, after which the iterative positioning model repeatedly updates the distance deviation coefficient and the positioning result until the iteration criteria are met. Experimental results show that the proposed iterative positioning algorithm is promising and effectively improves positioning accuracy.
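The log-distance model behind step (1) is commonly written RSSI(d) = RSSI(d0) - 10·n·log10(d/d0) + X_sigma, so range is recovered by inverting it. The sketch below shows that inversion and a simple distance-weighted centroid; the parameters are illustrative, and the paper's deviation-coefficient correction loop is not reproduced.

```python
import numpy as np

# Log-distance path-loss model: rssi(d) = rssi0 - 10*n*log10(d/d0) + noise.
rssi0, d0, n_exp = -40.0, 1.0, 2.7          # illustrative parameters

def rssi_to_distance(rssi: float) -> float:
    """Invert the log-distance model to estimate range from RSSI."""
    return d0 * 10 ** ((rssi0 - rssi) / (10 * n_exp))

def centroid_estimate(anchors: np.ndarray, dists: np.ndarray) -> np.ndarray:
    """Weight anchors by inverse estimated distance: nearer anchors
    (stronger signal, less noise) pull the estimate harder."""
    w = 1.0 / dists
    return (anchors * w[:, None]).sum(axis=0) / w.sum()

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
rssi = np.array([-62.0, -58.0, -65.0])      # measured at the target node
dists = np.array([rssi_to_distance(r) for r in rssi])
print(centroid_estimate(anchors, dists))    # initial position estimate
```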
ABSTRACT
Drug-target interaction prediction is a crucial stage in drug discovery, as brute-force search over a compound database is financially infeasible. The number of measured drug-target interaction records has grown in recent years, and the rich drug- and protein-related information enables the use of graph machine learning. Despite advances in deep learning-enabled drug-target interaction prediction, open challenges remain: (1) the rich and complex relationships between drugs and proteins are still to be fully explored; and (2) intermediate nodes are not calibrated in the heterogeneous graph. To tackle these issues, this paper proposes a framework named DSG-DTI. Specifically, DSG-DTI combines a heterogeneous graph autoencoder with heterogeneous attention network-based matrix completion. Our framework ensures that the known node types (e.g., drugs, targets, side effects, diseases) are precisely embedded into a high-dimensional space through pretraining. The attention-based heterogeneous graph matrix completion then achieves highly competitive results via effective extraction of long-range dependencies. We verify our model on two public benchmarks: the results show that the proposed scheme effectively predicts drug-target interactions and generalizes to newly registered drugs and targets with only slight performance degradation, outperforming the other baselines in accuracy.
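The matrix-completion stage can be pictured as scoring every drug-target pair by the inner product of learned embeddings. The toy sketch below shows only that scoring step; the random vectors stand in for embeddings that DSG-DTI's heterogeneous autoencoder and attention layers would actually learn.

```python
import numpy as np

rng = np.random.default_rng(3)
n_drugs, n_targets, dim = 5, 4, 8

# Stand-ins for embeddings the heterogeneous graph autoencoder would
# learn; random here for illustration only.
drug_emb = rng.normal(0, 1, (n_drugs, dim))
target_emb = rng.normal(0, 1, (n_targets, dim))

def interaction_scores(D: np.ndarray, T: np.ndarray) -> np.ndarray:
    """Complete the interaction matrix: sigmoid of embedding inner
    products gives a probability-like score per (drug, target) pair."""
    return 1.0 / (1.0 + np.exp(-(D @ T.T)))

scores = interaction_scores(drug_emb, target_emb)   # shape (5, 4)
best_target = scores[0].argmax()                    # top target for drug 0
print(scores.round(2))
```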
ABSTRACT
Federated learning (FL) is a promising decentralized deep learning technology that allows users to update models cooperatively without sharing their data. FL is reshaping existing industry paradigms for mathematical modeling and analysis, enabling an increasing number of industries to build privacy-preserving, secure distributed machine learning models. However, the inherent characteristics of FL lead to problems in practice, such as privacy protection, communication cost, systems heterogeneity, and unreliable model uploads. Interestingly, integration with Blockchain technology provides an opportunity to further improve FL security and performance, besides widening its scope of applications. We denote this integration of Blockchain and FL as the Blockchain-based federated learning (BCFL) framework. This paper presents an in-depth survey of BCFL and discusses the insights of this new paradigm. In particular, we first briefly introduce FL technology and the challenges it faces. Then, we summarize the Blockchain ecosystem. Next, we highlight the structural design and platforms of BCFL. Furthermore, we present attempts at improving FL performance with Blockchain, along with several combined applications of incentive mechanisms in FL. Finally, we summarize the industrial application scenarios of BCFL.
ABSTRACT
Brain-machine interfaces, which link humans and artificial devices through brain signals, have recently become very popular, giving rise to supplementary mobile applications. The Android platform has developed rapidly thanks to its good user experience and openness. These same characteristics, however, also drive the remarkable pace of Android malware, which poses a great threat to the platform and to data integrity during brain-machine interface signal transmission. Many previous works employ various behavioral characteristics to analyze Android applications (apps) and detect Android malware so as to keep signal data secure. However, as Android apps have developed, their categories have diversified and malware behavior has grown more complex, making existing Android malware detection complicated and inefficient. In this paper, we propose a broad analysis that gathers as many behavioral characteristics of an app as possible and compares them across several metrics. First, we extract static and dynamic behavioral characteristics from Android apps automatically. Second, we explain the decisions made in choosing each kind of behavioral characteristic for Android app analysis and malware detection. Third, we design a detailed experiment comparing the efficiency of each kind of behavioral characteristic in different aspects. The experimental results also show the Android malware detection performance of these behavioral characteristics combined with well-known machine learning algorithms.
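A common baseline in this line of work encodes each app's static behavioral characteristics (e.g., requested permissions) as a binary vector and feeds them to an off-the-shelf classifier. The scikit-learn sketch below uses made-up toy data and is not the paper's pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Toy data: each row is one app, each column a static behavioral
# characteristic (e.g., a requested permission); labels mark malware.
rng = np.random.default_rng(4)
X = rng.integers(0, 2, (200, 30))           # 200 apps, 30 binary features
y = (X[:, :5].sum(axis=1) > 2).astype(int)  # synthetic labeling rule

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"toy accuracy: {clf.score(X_te, y_te):.2f}")
```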