Results 1 - 8 of 8
1.
Sci Rep ; 13(1): 14183, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-37648738

ABSTRACT

In recent years, with the accelerated growth of the internet, organizations such as government offices, the military, and private companies use various transfer methods to exchange their information. The internet offers many benefits but also some drawbacks, chief among them the security of information transmitted over an unreliable network, much of it in the form of images. Steganography is the art of embedding a message in a cover object so that no one can suspect or detect it. In the field of cover steganography, it is therefore critical to find a mechanism for concealing data using different combinations of compression strategies. Maximizing payload capacity and robustness while improving visual quality are the key goals of this research toward a reliable mechanism. Many cover steganography strategies have been proposed, each with its own advantages and limitations, but better cover steganography techniques are still needed to balance the essential requirements of the cover steganography model. To address these issues, this paper proposes a method based on Huffman coding and Least Significant Bit (LSB) cover steganography using Multi-Level Encryption (MLE) and the achromatic component of the image (HC-LSBIS-MLE-AC). It also applies substitution and flipping concepts, MLE, a magic matrix, and achromatic processing to demonstrate the proficiency and significance of the method. The algorithm was evaluated statistically from several perspectives using Statistical Assessment Metrics (SAM) such as Mean Square Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Normalized Cross-Correlation (NCC), and the Structural Similarity Index Metric (SSIM). The experimental results demonstrate the feasibility of the proposed algorithm and its ability to balance security, payload, imperceptibility, computation, and tamper protection.
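The LSB idea at the core of this method can be sketched in a few lines. This is a minimal illustration only: the paper's full HC-LSBIS-MLE-AC pipeline additionally applies Huffman coding, multi-level encryption, a magic matrix, and achromatic-channel selection, none of which is reproduced here.

```python
# Minimal LSB embedding sketch: hide each message bit in the least
# significant bit of one cover byte (e.g. one pixel channel value).

def embed_lsb(cover: bytearray, message: bytes) -> bytearray:
    """Hide each bit of `message` in the LSB of successive cover bytes."""
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    if len(bits) > len(cover):
        raise ValueError("cover too small for message")
    stego = bytearray(cover)
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & 0xFE) | bit  # overwrite only the LSB
    return stego

def extract_lsb(stego: bytearray, n_bytes: int) -> bytes:
    """Recover `n_bytes` hidden bytes by reading LSBs back in order."""
    out = bytearray()
    for i in range(n_bytes):
        byte = 0
        for b in stego[8 * i:8 * i + 8]:
            byte = (byte << 1) | (b & 1)
        out.append(byte)
    return bytes(out)
```

Because only the lowest bit of each byte changes, every stego byte differs from its cover byte by at most 1, which is what keeps distortion metrics such as MSE and PSNR favorable.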

3.
Sci Rep ; 12(1): 21177, 2022 12 07.
Article in English | MEDLINE | ID: mdl-36477447

ABSTRACT

In image segmentation, and in image processing generally, noise and outliers distort the information an image contains, posing a great challenge for accurate segmentation. To segment an image correctly in the presence of noise and outliers, the outliers must either be identified and isolated in a denoising pre-processing step or handled by suitable constraints within the segmentation framework. In this paper, we impose outlier-removal constraints, supported by a well-designed theory, in a variational framework for accurate image segmentation. We investigate a novel approach based on the power mean function, equipped with a well-established theoretical basis. The power mean function can distinguish between true image pixels and outliers and is therefore robust against outliers. To deploy the novel image data term and guarantee unique segmentation results, a fuzzy membership function is employed in the proposed energy functional. Extensive qualitative and quantitative analysis on various standard data sets shows that the proposed model works well on images containing multiple objects with high noise, and on images with intensity inhomogeneity, in comparison with the latest state-of-the-art models.
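The robustness claim rests on the behavior of the power (generalized) mean. The paper's exact data term and energy functional are not given in the abstract, so the sketch below only illustrates the building block itself: for exponents below 1, and especially negative exponents, the mean is far less sensitive to a gross outlier than the arithmetic mean.

```python
# Power (generalized) mean: M_p = (sum(x_i**p) / n) ** (1/p), for x_i > 0.
# The exponent p controls how strongly large outliers pull the estimate.

def power_mean(values, p):
    """Generalized mean of positive values with exponent p (p != 0)."""
    n = len(values)
    return (sum(v ** p for v in values) / n) ** (1.0 / p)

pixels = [10.0, 11.0, 9.0, 10.0, 200.0]   # one gross outlier among inliers
arithmetic = power_mean(pixels, 1.0)       # pulled strongly toward 200
damped = power_mean(pixels, -2.0)          # negative p damps the outlier
```

Here `arithmetic` is 48.0, far from the inlier level of about 10, while `damped` stays near 11, which is the property that makes a power-mean-based data term robust to outlier pixels.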


Subjects
Image Processing, Computer-Assisted
4.
Sci Rep ; 12(1): 15949, 2022 09 24.
Article in English | MEDLINE | ID: mdl-36153339

ABSTRACT

Segmentation of noisy images with bright light in the background is a challenging task for existing segmentation approaches and methods. In this paper, we propose a novel variational method for joint restoration and segmentation of noisy images with intensity inhomogeneity in the presence of high-contrast background light. The proposed model combines statistical local region information from circular regions centered at each pixel with a multi-phase segmentation technique, enabling inhomogeneous image restoration. The model is formulated in a fuzzy set framework and solved via the alternating direction method of multipliers (ADMM). Through experiments on diverse synthetic and real images with intensity inhomogeneity, we evaluate the precision as well as the robustness of the proposed model. The outcomes are then compared with other state-of-the-art models, including two-phase and multi-phase approaches, and show that our method is superior for images with noise and inhomogeneity. Our empirical evaluation on real images assesses the efficiency of the proposed model against several of its closest rivals. We observed that the model can precisely segment images exhibiting brightness, diffuse edges, high-contrast background light, and inhomogeneity.


Subjects
Biological Phenomena, Image Processing, Computer-Assisted, Algorithms, Brain, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods
5.
Sensors (Basel) ; 22(2)2022 Jan 12.
Article in English | MEDLINE | ID: mdl-35062530

ABSTRACT

The Internet of Things (IoT) refers to the interconnection of things over a physical network, embedded with software, sensors, and other devices, to exchange information between devices. This interconnection raises challenges such as security, trustworthiness, reliability, and confidentiality. To address these issues, we propose a novel group theory (GT)-based binary spring search (BSS) algorithm incorporating a hybrid deep neural network approach, which effectively detects intrusions within the IoT network. Privacy preservation is then implemented using a blockchain-based methodology. Securing patient health records (PHR) is one of the most critical applications of cryptography over the internet, given their value and importance, particularly in the Internet of Medical Things (IoMT). Keyword-based search is a typical mechanism for accessing PHR in a database, but it is susceptible to various security vulnerabilities. Although blockchain-enabled healthcare systems provide security, loopholes remain in the existing state of the art. Blockchain-enabled frameworks have been presented in the literature to resolve these issues, but they focus primarily on data storage and use the blockchain merely as a database. In this paper, the blockchain is used as a distributed database together with a homomorphic encryption technique to ensure secure, keyword-based access to the database. The proposed approach also provides a secure key revocation mechanism and updates the various policies accordingly. The result is a secure patient healthcare data access scheme that integrates blockchain and trust chain to address the efficiency and security shortcomings of current schemes for sharing both types of digital healthcare data. Our approach thus offers greater security, efficiency, and transparency with cost-effectiveness.
We performed simulations with the blockchain tool Hyperledger Fabric and used OriginLab for analysis and evaluation, comparing the proposed results against benchmark models. The comparative analysis shows that the proposed framework provides better security and a better searchable mechanism for the healthcare system.
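The abstract does not specify which homomorphic scheme is used, so the following is only a toy illustration of the general idea: with an additively homomorphic scheme in the style of Paillier, a server can combine ciphertexts so that the plaintexts are added without ever being decrypted. The key sizes here are deliberately tiny and completely insecure.

```python
# Toy additively homomorphic encryption (Paillier-style). Parameters are
# illustrative only; real deployments use ~1024-bit or larger primes.
import math
import random

p, q = 293, 433                 # toy primes (insecure by design)
n = p * q
n_sq = n * n
g = n + 1                       # standard Paillier choice of generator
lam = math.lcm(p - 1, q - 1)    # private key
mu = pow(lam, -1, n)            # valid because g = n + 1

def encrypt(m: int) -> int:
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:  # randomness must be a unit mod n
        r = random.randrange(2, n)
    return (pow(g, m, n_sq) * pow(r, n, n_sq)) % n_sq

def decrypt(c: int) -> int:
    x = pow(c, lam, n_sq)
    return ((x - 1) // n * mu) % n

# Homomorphic property: multiplying ciphertexts adds the plaintexts,
# so a server can aggregate encrypted records without seeing them.
c_sum = (encrypt(20) * encrypt(22)) % n_sq   # decrypts to 42
```

Searchable-encryption schemes built on such primitives let a database match encrypted keywords without learning the underlying PHR contents, which is the property the paper's access mechanism relies on.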


Subjects
Blockchain, Health Records, Personal, Delivery of Health Care, Humans, Neural Networks, Computer, Reproducibility of Results
6.
Neural Netw ; 145: 233-247, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34773899

ABSTRACT

The prediction of crowd flows is an important urban computing problem whose purpose is to forecast the future numbers of people entering and leaving each region of a city. Modeling the complicated spatial-temporal dependencies together with external factors, such as weather conditions and the surrounding point-of-interest (POI) distribution, is the most difficult aspect of predicting crowd flow movement. To overcome this, this paper proposes a unified dynamic deep spatio-temporal neural network based on convolutional neural networks and long short-term memory, termed DHSTNet, to simultaneously predict crowd flows in every region of a city. The DHSTNet model is made up of four separate components: a recent branch, a daily branch, a weekly branch, and an external branch. Our approach assigns different weights to the branches and integrates the four outputs to generate the final predictions. Moreover, to verify the generalization and scalability of the proposed model, we combine a Graph Convolutional Network (GCN) based on Long Short-Term Memory (LSTM) with the previously published model, termed GCN-DHSTNet, to capture the spatial patterns and short-term temporal features and to demonstrate its exceptional accuracy in predicting traffic crowd flows. The GCN-DHSTNet model not only captures the spatio-temporal dependencies but also reveals the influence of different time granularities: recent, daily, and weekly periodicity, plus external properties. Finally, a fully connected neural network fuses the spatio-temporal features and external properties. On two different real-world traffic datasets, our evaluation suggests that the proposed GCN-DHSTNet method is approximately 7.9-27.2% and 11.2-11.9% better than the AAtt-DHSTNet method in terms of RMSE and MAPE, respectively; AAtt-DHSTNet in turn outperforms other state-of-the-art methods.
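The branch-fusion step the abstract describes can be sketched as a weighted combination of the recent, daily, and weekly branch outputs plus the external component. The weight values and the final activation below are illustrative assumptions; the abstract does not give the exact fusion equation.

```python
# Sketch of per-region fusion of the four DHSTNet-style branches. Each
# input list holds one predicted value per city region; w_r, w_d, w_w are
# the (normally learned) branch weights.
import math

def fuse_branches(recent, daily, weekly, external,
                  w_r=0.5, w_d=0.3, w_w=0.2):
    """Weighted fusion of branch predictions, one value per region."""
    fused = []
    for xr, xd, xw, xe in zip(recent, daily, weekly, external):
        z = w_r * xr + w_d * xd + w_w * xw + xe  # parametric fusion
        fused.append(math.tanh(z))               # squash to (-1, 1)
    return fused

flows = fuse_branches([0.4, 0.1], [0.2, 0.3], [0.1, 0.2], [0.0, 0.05])
```

In the full model the weights are learned per branch during training rather than fixed, and the fused features pass through a fully connected layer before the final prediction.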


Subjects
Neural Networks, Computer, Weather, Humans, Spatial Analysis
7.
Future Gener Comput Syst ; 122: 40-51, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34393306

ABSTRACT

In densely populated Internet of Things (IoT) applications, the sensing ranges of nodes may overlap frequently, so the nodes gather highly correlated and redundant data from their vicinity. Processing these data depletes the nodes' energy, and their upstream transmission towards remote datacentres in the fog infrastructure may result in an unbalanced load at the network gateways and edge servers. Because the edge servers are heterogeneous, a few of them may be overwhelmed while others remain under-utilized. As a result, time-critical and delay-sensitive applications may experience excessive delays, packet loss, and degradation of their Quality of Service (QoS). To ensure the QoS of IoT applications, in this paper we eliminate correlation in the gathered data via a lightweight data fusion approach. The buffer of each node is partitioned into strata that forward only non-correlated data to the edge servers via the network gateways. Furthermore, we propose a dynamic service migration technique to redistribute the load across the edge servers. We formulate this as an optimization problem and use two meta-heuristic algorithms, along with a migration approach, to maintain an optimal gateway-edge configuration in the network. These algorithms monitor the load at each server, and once it surpasses a threshold value (computed dynamically with a simple machine learning method), an exhaustive search is performed for an optimal, balanced periodic reconfiguration. Experimental results justify the efficiency of our approach for large-scale, densely populated IoT applications.
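The lightweight fusion idea can be sketched as follows: a node forwards a buffered reading vector only when it is not strongly correlated with the last vector it forwarded. The Pearson test and the 0.9 threshold below are illustrative assumptions; the paper's exact stratification rule is not given in the abstract.

```python
# Correlation-based buffer filtering sketch: drop redundant strata before
# upstream transmission to the edge servers.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def filter_buffer(strata, threshold=0.9):
    """Keep only strata not strongly correlated with the last kept one."""
    kept = [strata[0]]
    for reading in strata[1:]:
        if abs(pearson(kept[-1], reading)) < threshold:
            kept.append(reading)
    return kept

buffer = [[1.0, 2.0, 3.0], [1.1, 2.1, 3.1], [3.0, 1.0, 2.0]]
reduced = filter_buffer(buffer)   # the near-duplicate stratum is dropped
```

Only the non-redundant strata reach the gateway, which is what reduces both node energy consumption and the load imbalance downstream.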

8.
Sensors (Basel) ; 19(1)2019 Jan 04.
Article in English | MEDLINE | ID: mdl-30621241

ABSTRACT

Multivariate data sets are common in various application areas, such as wireless sensor networks (WSNs) and DNA analysis. A robust mechanism is required to compute their similarity indexes regardless of the environment and problem domain. This study describes the usefulness of a non-metric-based approach (i.e., longest common subsequence) in computing similarity indexes. Several non-metric-based algorithms are available in the literature, of which the most robust and reliable are the dynamic programming-based techniques. However, dynamic programming-based techniques are considered inefficient, particularly for multivariate data sets, and the classical approaches are not powerful enough for scenarios with multivariate data, sensor data, or extremely high or low similarity indexes. To address this, we propose an efficient algorithm for measuring the similarity indexes of multivariate data sets using a non-metric-based methodology. The proposed algorithm performs exceptionally well on numerous multivariate data sets compared with classical dynamic programming-based algorithms. Performance is evaluated on several benchmark data sets and on a dynamic multivariate data set obtained from a WSN deployed at the Ghulam Ishaq Khan (GIK) Institute of Engineering Sciences and Technology. Our evaluation suggests that the proposed algorithm can be approximately 39.9% more efficient than its counterparts on various data sets in terms of computation time.
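For context, the classical dynamic programming LCS baseline the abstract compares against can be sketched as below; the paper's own, faster algorithm is not reproduced here. Each sequence element is a multivariate sample, and two samples "match" when they agree within a tolerance in every dimension, which is an illustrative matching rule.

```python
# Classical O(len(a)*len(b)) DP for longest common subsequence over
# multivariate samples, plus a normalized similarity index in [0, 1].

def lcs_length(a, b, tol=0.0):
    """Length of the longest common subsequence of sample sequences a, b."""
    match = lambda u, v: all(abs(x - y) <= tol for x, y in zip(u, v))
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, u in enumerate(a, 1):
        for j, v in enumerate(b, 1):
            dp[i][j] = (dp[i - 1][j - 1] + 1 if match(u, v)
                        else max(dp[i - 1][j], dp[i][j - 1]))
    return dp[-1][-1]

def similarity_index(a, b, tol=0.0):
    """Normalized LCS similarity: shared subsequence over longer length."""
    return lcs_length(a, b, tol) / max(len(a), len(b))

s1 = [(1.0, 2.0), (2.0, 3.0), (3.0, 4.0)]
s2 = [(1.0, 2.0), (9.0, 9.0), (3.0, 4.0)]
sim = similarity_index(s1, s2)   # two of three samples are shared in order
```

The quadratic table is exactly the cost the paper targets: it grows with the product of the sequence lengths, which becomes prohibitive for long multivariate sensor streams.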
