Results 1 - 9 of 9
1.
Sensors (Basel) ; 20(9)2020 Apr 26.
Article in English | MEDLINE | ID: mdl-32357404

ABSTRACT

Log anomaly detection is an efficient method to manage modern large-scale Internet of Things (IoT) systems. A growing number of works apply natural language processing (NLP) methods, in particular word2vec, to log feature extraction. Word2vec can capture the relevance between words and vectorize them. However, the computational cost of training word2vec is high. Moreover, anomalies in logs depend not only on an individual log message but also on the log message sequence. Therefore, the word vectors produced by word2vec cannot be used directly: they must first be transformed into vectors of log events and then into vectors of log sequences. To reduce computational cost and avoid these multiple transformations, this paper proposes an offline feature extraction model, named LogEvent2vec, which takes log events as the input of word2vec, extracting the relevance between log events and vectorizing them directly. LogEvent2vec can work with any coordinate transformation method and anomaly detection model. After obtaining the log event vectors, we transform them into log sequence vectors by bary or tf-idf, and train three kinds of supervised models (Random Forests, Naive Bayes, and Neural Networks) to detect anomalies. We have conducted extensive experiments on a real public log dataset from BlueGene/L (BGL). The experimental results demonstrate that LogEvent2vec reduces computational time by a factor of 30 and improves accuracy compared with word2vec. LogEvent2vec with bary and Random Forests achieves the best F1-score, while LogEvent2vec with tf-idf and Naive Bayes requires the least computational time.
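
A minimal sketch of the LogEvent2vec idea, assuming gensim and scikit-learn are available and taking "bary" to mean the barycenter (mean) of a sequence's event vectors; the toy data, function names, and hyperparameters below are illustrative, not from the paper's code.

```python
# Sketch: word2vec over log events, barycenter sequence vectors, Random Forest detector.
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier
import numpy as np

# Each log sequence is a list of log-event (template) IDs, e.g. parsed from BGL.
sequences = [["E1", "E5", "E2"], ["E3", "E3", "E7"], ["E1", "E2", "E2"]]
labels = [0, 1, 0]  # toy labels: 1 = anomalous sequence

# Train word2vec directly on log events instead of individual words.
w2v = Word2Vec(sentences=sequences, vector_size=32, window=5, min_count=1, sg=1, epochs=50)

# "bary" coordinate transformation: a sequence vector is the mean of its event vectors
# (tf-idf weighting would be the alternative transformation).
def sequence_vector(seq):
    return np.mean([w2v.wv[e] for e in seq], axis=0)

X = np.stack([sequence_vector(s) for s in sequences])

# Any supervised detector can follow; Random Forests gave the best F1 in the paper.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X))
```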

2.
Sensors (Basel) ; 20(3)2020 Jan 30.
Article in English | MEDLINE | ID: mdl-32019128

ABSTRACT

The development of the Internet of Things (IoT) plays a very important role in processing data at the edge of a network. It is therefore very important to protect the privacy of IoT devices when they process and transfer data. A mesh signature (MS) is a useful cryptographic tool that allows a signer to sign any message anonymously: the signer's identifying information (such as a personal public key) can be hidden within a list of tuples consisting of public keys and messages. Therefore, we propose an improved mesh signature scheme for IoT devices in this paper. IoT devices, acting as signers, may sign the data they publish with the proposed mesh signature scheme, and their specific identities are hidden within a list of possible signers. Additionally, a mesh signature consists of atomic signatures, which are reusable. Therefore, for the large amount of data published by IoT devices, atomic signatures on the same data can be reused, decreasing the number of signatures the devices must generate in our proposed scheme. Compared with the original mesh signature scheme, the proposed scheme has lower computational costs for generating the final mesh signature and for signature verification. Since atomic signatures are reusable, the proposed scheme is more efficient at generating the final mesh signature by reconstructing atomic signatures. Furthermore, in our experiments, generating a mesh signature on a 10 MB message consumes only about 200 KB of memory. It is therefore feasible to use the proposed scheme to protect the identity privacy of IoT devices.
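
A toy sketch of the atomic-signature reuse idea only, assuming the `cryptography` package is installed; Ed25519 stands in for the atomic signature primitive, and this is not the mesh signature construction itself (no signer anonymity is provided here).

```python
# Toy illustration of reusing cached atomic signatures over repeated data items.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

atomic_cache = {}  # message -> cached atomic signature

def atomic_sign(message: bytes) -> bytes:
    # Reuse the atomic signature if this exact data item was signed before.
    if message not in atomic_cache:
        atomic_cache[message] = private_key.sign(message)
    return atomic_cache[message]

published = [b"sensor reading 42", b"sensor reading 42", b"sensor reading 17"]
signatures = [atomic_sign(m) for m in published]

# Only two signing operations were needed for three published items.
print(len(atomic_cache))                       # -> 2
public_key.verify(signatures[0], published[0])  # raises InvalidSignature on failure
```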

3.
Sensors (Basel) ; 19(20)2019 Oct 19.
Article in English | MEDLINE | ID: mdl-31635107

ABSTRACT

Groundwater is an important resource for human activities, agriculture, and industry. Underwater Acoustic Sensor Networks (UASNs) are one of the key technologies for marine environmental monitoring. It is therefore of great significance to study node self-localization for underwater acoustic sensor networks. This paper studies range-free node localization algorithms. To save cost and energy, usually only a small number of sensing nodes in a sensor network know their own location; how to locate all nodes accurately from these few nodes is the focus of our research. In this paper, combined with a compressive sensing algorithm, we propose a range-free node localization algorithm based on node hop information. Because the connectivity information collected by the algorithm consists of integer hop counts, the hop counts are refined to further improve localization performance. Simulation analysis shows that the improved algorithm effectively improves localization accuracy without additional cost or energy consumption compared with the traditional method.
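
A sketch of hop-based range-free localization in the DV-Hop style (hop counting from anchors, hop-size estimation, least-squares multilateration), not the paper's compressive-sensing formulation or its hop-count refinement; the toy topology and names are illustrative.

```python
# DV-Hop-style sketch: hop counts from anchors -> distance estimates -> least squares.
import numpy as np
from collections import deque

def hop_counts(adj, source):
    """BFS hop count from `source` to every reachable node in adjacency list `adj`."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                queue.append(v)
    return hops

# Toy network: nodes 0-2 are anchors with known positions, node 3 is unknown.
positions = {0: np.array([0.0, 0.0]), 1: np.array([10.0, 0.0]), 2: np.array([0.0, 10.0])}
adj = {0: [3], 1: [3], 2: [3], 3: [0, 1, 2]}
anchors = [0, 1, 2]

hops = {a: hop_counts(adj, a) for a in anchors}
# Average hop size: mean anchor-to-anchor distance per hop.
hop_size = np.mean([np.linalg.norm(positions[a] - positions[b]) / hops[a][b]
                    for a in anchors for b in anchors if a != b])

# Estimated anchor distances for node 3, then linearized least-squares multilateration.
d = {a: hop_size * hops[a][3] for a in anchors}
a0 = anchors[0]
A = np.array([2 * (positions[a] - positions[a0]) for a in anchors[1:]])
b = np.array([d[a0]**2 - d[a]**2 - positions[a0] @ positions[a0] + positions[a] @ positions[a]
              for a in anchors[1:]])
estimate, *_ = np.linalg.lstsq(A, b, rcond=None)
print(estimate)  # estimated (x, y) of node 3
```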

4.
Sensors (Basel) ; 20(1)2019 Dec 30.
Article in English | MEDLINE | ID: mdl-31905910

ABSTRACT

In the Internet of Things (IoT) environment, smart homes, smart grids, and telematics constantly generate data with complex attributes. These data have low heterogeneity and poor interoperability, which makes data management and value mining difficult. The promising combination of blockchain and the Internet of Things, known as BCoT (Blockchain of Things), can address these problems. This paper introduces DCOMB (dual combination Bloom filter), an innovative method that converts the computational power of Bitcoin mining into query computational power, and uses it to build a blockchain-based IoT data query model. DCOMB can implement queries using only the hash calculations performed for mining. The model combines the data stream of the IoT with the timestamp of the blockchain, improving data interoperability and the versatility of the IoT database system. The experimental results show that DCOMB achieves higher random read performance and a lower error rate than COMB (combination Bloom filter), while both DCOMB and COMB outperform MySQL in query performance.
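
A plain Bloom filter sketch using double hashing over SHA-256 digests, to illustrate how hash computation can double as a membership/index query; the paper's DCOMB construction (two combined filters coupled to mining hash power) is not reproduced, and the parameters and record format are illustrative.

```python
# Basic Bloom filter: k probe positions derived from one SHA-256 digest per item.
import hashlib

class BloomFilter:
    def __init__(self, m: int = 1 << 16, k: int = 5):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item: bytes):
        digest = hashlib.sha256(item).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big") | 1  # odd stride
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] >> (p % 8) & 1 for p in self._positions(item))

bf = BloomFilter()
bf.add(b"device-17|2020-01-30T12:00:00|23.5C")
print(b"device-17|2020-01-30T12:00:00|23.5C" in bf)  # True
print(b"device-99|2020-01-30T12:00:00|0.0C" in bf)   # almost certainly False
```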

5.
Article in English | MEDLINE | ID: mdl-38060354

ABSTRACT

With the rapid development of the Internet of Medical Things (IoMT) in recent years, it has emerged as a promising way to alleviate the workload of medical staff, particularly in the field of Medical Image Quality Assessment (MIQA). MIQA deployed on the IoMT proves highly valuable in assisting the diagnosis and treatment of various types of medical images, such as fundus images, ultrasound images, and dermoscopic images. However, traditional MIQA models require a substantial number of labeled medical images to be effective, and acquiring a sufficient training dataset is a challenge. To address this issue, we present a label-free MIQA model developed through a zero-shot learning approach. This paper introduces a Semantics-Aware Contrastive Learning (SCL) model that can effectively generalise quality assessment to diverse medical image types. The proposed method integrates features extracted from zero-shot learning, the spatial domain, and the frequency domain. Zero-shot learning is achieved through a tailored Contrastive Language-Image Pre-training (CLIP) model. Natural Scene Statistics (NSS) and patch-based features are extracted in the spatial domain, while frequency features are hierarchically extracted at both local and global levels. All of this information is utilised to derive a final quality score for a medical image. To ensure a comprehensive evaluation, we not only utilise two existing datasets, EyeQ and LiverQ, but also create a dataset specifically for skin image quality assessment. Our SCL method therefore undergoes extensive evaluation on all three medical image quality datasets, demonstrating its superiority over advanced models.
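
A sketch of the zero-shot branch only, using the stock openai/clip-vit-base-patch32 checkpoint from Hugging Face transformers rather than the paper's tailored CLIP; the prompts, the example file name, and the use of the "good quality" probability as the quality proxy are illustrative, and the NSS spatial and hierarchical frequency branches are omitted.

```python
# Zero-shot quality proxy: compare an image against "good" vs "poor" quality text prompts.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("fundus_example.png").convert("RGB")  # hypothetical example image
prompts = ["a good quality fundus photograph", "a poor quality fundus photograph"]

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, 2)
probs = logits.softmax(dim=-1)

# Probability assigned to the "good quality" prompt serves as a label-free quality score.
quality_score = probs[0, 0].item()
print(f"zero-shot quality score: {quality_score:.3f}")
```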

6.
Article in English | MEDLINE | ID: mdl-37695962

ABSTRACT

Biomedical image segmentation plays an important role in Diabetic Retinopathy (DR)-related biomarker detection. DR is an ocular disease that affects the retina in people with diabetes and can lead to visual impairment if management measures are not taken in a timely manner. In DR screening programs, the presence and severity of DR are identified and classified based on various microvascular lesions detected by qualified ophthalmic screeners. Such a detection process is time-consuming and error-prone, given the small size of the microvascular lesions and the volume of images, especially with the increasing prevalence of diabetes. Automated image processing using deep learning methods is recognized as a promising approach to support diabetic retinopathy screening. In this paper, we propose a novel compound scaling encoder-decoder network architecture to improve the accuracy and running efficiency of microvascular lesion segmentation. In the encoder phase, we develop a lightweight encoder to speed up the training process, where the encoder network is scaled up in depth, width, and resolution. In the decoder phase, an attention mechanism is introduced to yield higher accuracy. Specifically, we employ Concurrent Spatial and Channel Squeeze and Channel Excitation (scSE) blocks to fully utilise both spatial and channel-wise information. Additionally, a compound loss function is incorporated with transfer learning to handle the problem of imbalanced data and further improve performance. To assess performance, our method is evaluated on two large-scale lesion segmentation datasets, DDR and FGADR. Experimental results demonstrate the superiority of our method compared to other competing methods. Our code is available at https://github.com/DeweiYi/CoSED-Net.
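
A standard PyTorch sketch of an scSE block as commonly implemented; the reduction ratio and the element-wise combination (sum here, max in some variants) are design choices, and the compound-scaled encoder-decoder around it is not reproduced.

```python
# scSE: concurrent channel (cSE) and spatial (sSE) squeeze-and-excitation gating.
import torch
import torch.nn as nn

class SCSEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel squeeze-and-excitation: global pooling -> bottleneck MLP -> channel gate.
        self.cse = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial squeeze-and-excitation: 1x1 conv -> per-pixel gate.
        self.sse = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.cse(x) + x * self.sse(x)

x = torch.randn(2, 64, 32, 32)
print(SCSEBlock(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```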

7.
Sci Rep ; 13(1): 2746, 2023 Feb 16.
Article in English | MEDLINE | ID: mdl-36797342

ABSTRACT

With the rapid development of Industry 4.0, the data security of the Industrial Internet of Things in the Industry 4.0 environment has received widespread attention. Blockchain is decentralized and tamper-proof, so it has a natural advantage in addressing the data security problems of the Industrial Internet of Things. However, current blockchain technologies struggle to provide consistency, scalability, and data security at the same time in the Industrial Internet of Things. To address the scalability and data security problems of the Industrial Internet of Things, this paper constructs a highly scalable data storage mechanism for the Industrial Internet of Things based on a coded sharding blockchain. The mechanism processes data with coded sharding to improve the blockchain's fault tolerance and distribute its storage load, addressing the scalability problem. A cryptographic accumulator-based data storage scheme is then designed, which connects the cryptographic accumulator with the shard nodes to save storage overhead and address the security of data storage and verification. Finally, the scheme is proved secure and its performance is evaluated.
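
A toy cryptographic-accumulator sketch (an RSA accumulator: elements become prime exponents, each element gets a membership witness, and verification is one modular exponentiation); the modulus and element primes below are tiny and insecure, the mapping of data blocks to primes is omitted, and the coded-sharding storage layer is not shown.

```python
# Toy RSA accumulator: accumulate prime-mapped elements, prove and verify membership.
from math import prod

N = 3233              # toy RSA modulus (61 * 53); real deployments need a large trusted modulus
g = 2                 # public base
elements = [3, 5, 7]  # data blocks must first be mapped to distinct primes (mapping omitted)

# Accumulator over all elements.
acc = pow(g, prod(elements), N)

def witness(x):
    # Membership witness: the accumulator computed without element x.
    return pow(g, prod(e for e in elements if e != x), N)

def verify(x, w):
    # Raising the witness to the element must reproduce the accumulator.
    return pow(w, x, N) == acc

w5 = witness(5)
print(verify(5, w5))   # True  -> 5 is accumulated
print(verify(11, w5))  # False -> 11 is not proven by this witness
```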

8.
Article in English | MEDLINE | ID: mdl-37594867

ABSTRACT

The issue of data privacy protection must be considered in distributed federated learning (FL) to ensure that sensitive information is not leaked. In this paper, we propose a two-stage differential privacy (DP) framework for FL based on edge intelligence, in which different levels of privacy preservation can be provided according to the sensitivity of the data. In the first stage, the user terminal applies a randomized response mechanism to perturb the original feature data for desensitization, and the user can self-regulate the level of privacy preservation. In the second stage, the edge server adds noise to the local models to further guarantee model privacy. Finally, the model updates are aggregated in the cloud. To evaluate the training accuracy and convergence of the proposed end-edge-cloud FL framework, extensive experiments are conducted on a real electrocardiogram (ECG) signal dataset. A bidirectional long short-term memory (BiLSTM) neural network is adopted to train the classification model. The effect of different combinations of feature perturbation and noise addition on model accuracy is analyzed under different privacy budgets and parameters. The experimental results demonstrate that the proposed privacy-preserving framework provides good accuracy and convergence while ensuring privacy.
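
A numpy sketch of the two perturbation stages only (randomized response on binarized features at the user terminal, then clipped Gaussian noise on the local model update at the edge server); the privacy parameters, feature binarization, and toy data are illustrative, and the BiLSTM model and cloud aggregation are omitted.

```python
# Stage 1: randomized response on features. Stage 2: Gaussian noise on the model update.
import numpy as np

rng = np.random.default_rng(0)

def randomized_response(bits: np.ndarray, epsilon: float) -> np.ndarray:
    """Keep each binary feature truthful with probability e^eps / (e^eps + 1), else flip it."""
    p_keep = np.exp(epsilon) / (np.exp(epsilon) + 1.0)
    keep = rng.random(bits.shape) < p_keep
    return np.where(keep, bits, 1 - bits)

def gaussian_mechanism(update: np.ndarray, clip_norm: float, sigma: float) -> np.ndarray:
    """Clip the local model update to bound sensitivity, then add calibrated Gaussian noise."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=update.shape)

features = rng.integers(0, 2, size=(4, 8))     # toy binarized ECG features
noisy_features = randomized_response(features, epsilon=2.0)

local_update = rng.normal(size=100)            # toy local model-update vector
noisy_update = gaussian_mechanism(local_update, clip_norm=1.0, sigma=1.1)
print(noisy_features.shape, noisy_update.shape)
```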

9.
IEEE J Biomed Health Inform ; 25(10): 3794-3803, 2021 10.
Article in English | MEDLINE | ID: mdl-34111016

ABSTRACT

The rapid development of the Internet of Things (IoT), 5G, and artificial intelligence (AI) technology has dramatically accelerated the advancement of the Internet of Medical Things (IoMT) in recent years. Profile matching technology can be used to share medical information between patients by matching similar symptom attributes. However, the symptom attributes are associated with patients' sensitive information, such as gender, age, physiological data, and other personal health information, so patients' privacy can be revealed during the matching process in the IoMT. To solve this problem, this paper proposes a verifiable private set intersection scheme to achieve fine-grained profile matching. On the one hand, patients' private data can be partitioned with multiple tags to enable fine-grained operations. On the other hand, a re-encryption technique is used to protect patient privacy. In addition, because the cloud server may deviate from the scheme, a verification mechanism is employed to check the correctness of the computation. The security analysis indicates that our proposed scheme can resist an untrusted cloud server, and the performance simulation demonstrates that our scheme improves efficiency by reducing the use of bilinear pairings.
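
A toy Diffie-Hellman-style private set intersection sketch, in which each party blinds hashed attributes with a secret exponent and only doubly-blinded values are compared; this illustrates the PSI idea only, not the paper's verifiable, multi-tag, re-encryption-based scheme, and the modulus and attribute strings are illustrative and far too small to be secure.

```python
# Toy DH-PSI: intersect symptom attributes without exchanging them in the clear.
import hashlib
import secrets

P = 2**127 - 1  # toy prime modulus; a real deployment uses a proper large group

def hash_to_group(attr: str) -> int:
    return int.from_bytes(hashlib.sha256(attr.encode()).digest(), "big") % P

patient_a = {"fever", "cough", "headache"}
patient_b = {"cough", "nausea", "headache"}

a = secrets.randbelow(P - 2) + 1  # A's secret exponent
b = secrets.randbelow(P - 2) + 1  # B's secret exponent

a_blinded = {x: pow(hash_to_group(x), a, P) for x in patient_a}     # A -> B
ab_from_a = {x: pow(v, b, P) for x, v in a_blinded.items()}         # B -> A (A keeps the mapping)
b_blinded = [pow(hash_to_group(y), b, P) for y in patient_b]        # B -> A
ab_from_b = {pow(v, a, P) for v in b_blinded}                       # computed locally by A

intersection = {x for x, v in ab_from_a.items() if v in ab_from_b}
print(intersection)  # {'cough', 'headache'}
```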


Subjects
Personal Health Records, Internet of Things, Artificial Intelligence, Computer Security, Humans, Privacy