Results 1-20 of 44
1.
Sensors (Basel) ; 23(3)2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36772143

ABSTRACT

Graph neural networks have been widely used by multivariate time series-based anomaly detection algorithms to model the dependencies of system sensors. Previous studies have focused on learning fixed dependency patterns between sensors. However, they ignore that the inter-sensor and temporal dependencies of time series are highly nonlinear and dynamic, leading to inevitable false alarms. In this paper, we propose a novel disentangled dynamic deviation transformer network (D3TN) for anomaly detection of multivariate time series, which jointly exploits multiscale dynamic inter-sensor dependencies and long-term temporal dependencies to improve the accuracy of multivariate time series prediction. Specifically, we design a novel disentangled multiscale aggregation scheme that disentangles the multiscale graph convolution and better represents the hidden dependencies between sensors, learning fixed inter-sensor dependencies based on the static topology. To capture dynamic inter-sensor dependencies determined by real-time monitoring situations and unexpected anomalies, we introduce a self-attention mechanism to model dynamic directed interactions in various potential subspaces influenced by various factors. In addition, complex temporal correlations across multiple time steps are modeled by processing the time series in parallel. Experiments on three real datasets show that the proposed D3TN significantly outperforms state-of-the-art methods.
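
As a rough illustration of the dynamic inter-sensor dependency idea, the sketch below computes scaled dot-product self-attention over per-sensor embeddings with NumPy; the projection matrices, sizes, and random inputs are illustrative assumptions, not the D3TN implementation.

import numpy as np

def sensor_self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention across sensors.

    x: array of shape (num_sensors, d), one embedding per sensor.
    w_q, w_k, w_v: projection matrices of shape (d, d_k).
    Each sensor's output is a weighted mix of all sensors, i.e. a dynamic,
    data-dependent inter-sensor dependency graph.
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1])          # pairwise sensor affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax per sensor
    return weights @ v

rng = np.random.default_rng(0)
num_sensors, d, d_k = 8, 16, 16
x = rng.normal(size=(num_sensors, d))
out = sensor_self_attention(x, *(rng.normal(size=(d, d_k)) for _ in range(3)))
print(out.shape)  # (8, 16)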

2.
Sensors (Basel) ; 23(21)2023 Oct 26.
Article in English | MEDLINE | ID: mdl-37960442

ABSTRACT

Cryptography is essential in our daily life, not only for the confidentiality of information, but also for information integrity verification, non-repudiation, authentication, and other purposes. In modern society, cryptography is widely used; everything from personal life to national security is inseparable from it. With the emergence of quantum computing, traditional encryption methods are at risk of being cracked, and people are beginning to explore methods for defending against quantum computer attacks. Among the methods currently developed, quantum key distribution is a technology that uses the principles of quantum mechanics to distribute keys, while post-quantum encryption algorithms are encryption methods that rely on mathematical problems that quantum computers cannot solve quickly in order to ensure security. In this study, an integrated review of post-quantum encryption algorithms is conducted from the perspective of traditional cryptography. First, the concept and development background of post-quantum encryption are introduced. Then, the post-quantum encryption algorithm Kyber is studied. Finally, the achievements, difficulties and outstanding problems in this emerging field are summarized, and some predictions for the future are made.
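
Kyber itself builds on module lattices, but the underlying hardness assumption can be illustrated with a toy Regev-style learning-with-errors (LWE) scheme. The sketch below encrypts a single bit; all parameters are illustrative (only the modulus 3329 is borrowed from Kyber for flavor), and the code is in no way a secure or faithful Kyber implementation.

import numpy as np

rng = np.random.default_rng(1)
n, m, q = 16, 32, 3329            # toy sizes; 3329 happens to be Kyber's modulus

# Key generation: b = A s + e (mod q)
A = rng.integers(0, q, size=(m, n))
s = rng.integers(0, q, size=n)                    # secret key
e = rng.integers(-2, 3, size=m)                   # small noise
b = (A @ s + e) % q                               # public key is (A, b)

def encrypt(bit):
    r = rng.integers(0, 2, size=m)                # random binary selector
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, v

def decrypt(u, v):
    d = (v - u @ s) % q                           # equals r.e + bit*q/2 (mod q)
    return int(min(d, q - d) > q // 4)            # closer to q/2 -> bit 1

u, v = encrypt(1)
print(decrypt(u, v))  # 1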

3.
IEEE Trans Industr Inform ; 17(9): 6510-6518, 2021 Sep.
Article in English | MEDLINE | ID: mdl-37981910

ABSTRACT

Due to its fast transmission speed and severe health damage, COVID-19 has attracted global attention. Early diagnosis and isolation are effective and imperative strategies for epidemic prevention and control. Most diagnostic methods for COVID-19 are based on nucleic acid testing (NAT), which is expensive and time-consuming. To build an efficient and valid alternative to NAT, this article investigates the feasibility of employing computed tomography images of lungs as the diagnostic signals. Unlike normal lungs, lungs infected with COVID-19 develop lesions in which ground-glass opacity and bronchiectasis become apparent. Using a public dataset, we propose an advanced residual learning diagnosis detection (RLDD) scheme for COVID-19, which is designed to distinguish positive COVID-19 cases from heterogeneous lung images. Besides the advantage of high diagnostic effectiveness, the designed residual-based COVID-19 detection network can efficiently extract lung features from small COVID-19 samples, which removes the pretraining requirement on other medical datasets. On the test set, we achieve an accuracy of 91.33%, a precision of 91.30%, and a recall of 90%. For a batch of 150 samples, the assessment time is only 4.7 s. Therefore, RLDD can be integrated into an application programming interface and embedded into medical instruments to improve the detection efficiency of COVID-19.
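
For readers unfamiliar with residual learning, the following minimal PyTorch residual block shows the identity-shortcut idea the abstract relies on; it is a generic sketch, not the RLDD architecture.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """A basic residual block: the skip connection lets gradients flow
    directly, which helps training succeed on small datasets."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)      # identity shortcut

block = ResidualBlock(32)
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])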

4.
Sensors (Basel) ; 19(9)2019 May 02.
Article in English | MEDLINE | ID: mdl-31052585

ABSTRACT

With the development of intelligent transportation systems (ITS) and vehicle-to-everything (V2X) communication, connected vehicles are capable of sensing a great deal of useful traffic information, such as queue length at intersections. Aiming to solve the problems of existing models' complexity and information redundancy, this paper proposes a queue length sensing model based on V2X technology, which consists of two sub-models based on shockwave sensing and back propagation (BP) neural network sensing. First, the model obtains state information of the connected vehicles and analyzes the formation process of the queue, and then it calculates the velocity of the shockwave to predict the queue length of the subsequent unconnected vehicles. Then, the neural network is trained with historical connected-vehicle data, and a sub-model based on the BP neural network is established to predict the real-time queue length. Finally, the final queue length at the intersection is determined by combining the sub-models with variable weights. Simulation results show that the sensing accuracy of the combined model is proportional to the penetration rate of connected vehicles, and queue length sensing can be achieved even in low-penetration-rate environments. In mixed traffic of connected and unconnected vehicles, the proposed queue length sensing model outperforms the probability distribution (PD) model when the penetration rate is low, and it performs almost equivalently at higher penetration rates while not requiring the penetration rate as an input. The proposed sensing model is therefore more applicable to mixed traffic scenarios with much looser conditions.
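
A minimal sketch of the two ingredients, assuming standard traffic-flow relations: the shockwave speed follows the classic flow/density difference quotient, and the final estimate is a variable-weight blend of the shockwave and BP sub-model outputs. All numbers and the weight below are made up for illustration.

def shockwave_speed(q_up, k_up, q_down, k_down):
    """Shockwave speed from the classic traffic-flow relation
    w = (q2 - q1) / (k2 - k1), with flows in veh/h and densities in veh/km."""
    return (q_down - q_up) / (k_down - k_up)

def combined_queue_length(l_shockwave, l_bp, w):
    """Variable-weight fusion of the two sub-model estimates (w in [0, 1])."""
    return w * l_shockwave + (1.0 - w) * l_bp

# Illustrative numbers only: upstream arrival flow vs. a jammed queue state.
w_speed = shockwave_speed(q_up=900, k_up=25, q_down=0, k_down=140)   # km/h, negative = backward-moving
red_time_h = 30 / 3600.0                                             # 30 s red phase
queue_from_shockwave = abs(w_speed) * red_time_h * 1000              # metres of queue
print(round(queue_from_shockwave, 1), "m")
print(combined_queue_length(queue_from_shockwave, 58.0, w=0.6), "m")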

5.
Sensors (Basel) ; 19(9)2019 Apr 29.
Article in English | MEDLINE | ID: mdl-31035697

ABSTRACT

Crowd counting, which is widely used in disaster management, traffic monitoring, and other fields of urban security, is a challenging task that is attracting increasing interest from researchers. For better accuracy, most methods have attempted to handle the scale variation explicitly, which results in huge changes of the object size. However, earlier methods based on convolutional neural networks (CNN) have focused primarily on improving accuracy while ignoring the complexity of the model. This paper proposes a novel method based on a lightweight CNN for estimating crowd counts and generating density maps under resource constraints. The network is composed of three components: a basic feature extractor (BFE), a stacked à trous convolution module (SACM), and a context fusion module (CFM). The BFE encodes basic feature information with reduced spatial resolution for further refining. Various pieces of contextual information are generated through a short pipeline in the SACM. To generate a context fusion density map, the CFM distills feature maps from the above components. The whole network is trained in an end-to-end fashion and uses a compression factor to restrict its size. Experiments on three highly challenging datasets demonstrate that the proposed method delivers attractive performance.
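
The sketch below shows what a stacked à trous (dilated) convolution module can look like in PyTorch: parallel branches with different dilation rates whose outputs are concatenated and fused. Channel counts and dilation rates are assumptions, not the paper's configuration.

import torch
import torch.nn as nn

class StackedAtrousModule(nn.Module):
    """Parallel dilated convolutions gather context at several scales
    without shrinking the feature map; outputs are concatenated and fused."""
    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(out_ch * len(dilations), out_ch, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

m = StackedAtrousModule(16, 8)
print(m(torch.randn(1, 16, 48, 48)).shape)  # torch.Size([1, 8, 48, 48])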

6.
Sensors (Basel) ; 19(19)2019 Sep 27.
Article in English | MEDLINE | ID: mdl-31569737

ABSTRACT

Car-following is an essential trajectory control strategy for autonomous vehicles, which not only improves traffic efficiency but also reduces fuel consumption and emissions. However, predicting lane change intentions in adjacent lanes is problematic, and such lane changes significantly affect the car-following control of the autonomous vehicle, especially when the vehicle changing lanes is only a connected unintelligent vehicle without expensive and accurate sensors. Autonomous vehicles suffer from adjacent vehicles' abrupt lane changes, which may reduce ride comfort, increase energy consumption, and even lead to a collision. A machine learning-based lane change intention prediction and real-time autonomous vehicle controller is proposed to respond to this problem. First, an interval-based support vector machine is designed to predict a vehicle's lane change intention using limited low-level vehicle status obtained through vehicle-to-vehicle communication. Then, a conditional artificial potential field method is used to design the car-following controller by incorporating the lane-change intentions of the vehicle. Experimental results reveal that the proposed method can estimate a vehicle's lane change intention more accurately. The autonomous vehicle avoids collisions with a lane-changing connected unintelligent vehicle with reliable safety and favorable dynamic performance.
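
A hedged sketch of the intention-prediction step using scikit-learn's SVC on hypothetical low-level features (lateral offset, lateral speed, yaw rate); the interval-based formulation and the potential-field controller from the paper are not reproduced here.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)

# Hypothetical features per time window: [lateral offset, lateral speed, yaw rate]
keep_lane = rng.normal([0.0, 0.0, 0.0], 0.1, size=(200, 3))
change_lane = rng.normal([0.6, 0.4, 0.05], 0.1, size=(200, 3))
X = np.vstack([keep_lane, change_lane])
y = np.array([0] * 200 + [1] * 200)          # 0 = keep lane, 1 = lane change

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)

# Predict the intention of a vehicle that starts drifting laterally.
print(clf.predict([[0.55, 0.35, 0.04]]))     # expected: [1]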

7.
Sensors (Basel) ; 19(12)2019 Jun 21.
Article in English | MEDLINE | ID: mdl-31234375

ABSTRACT

Cellular networks keep large buffers at base stations to smooth out bursty data traffic, which has a negative impact on the user's Quality of Experience (QoE). With the boom of smart vehicles and phones, this problem has drawn growing attention. In this paper, we first conducted experiments to reveal the large delays, and thus long flow completion times (FCT), caused by the large buffers in cellular networks. Then, a receiver-side transmission control protocol (TCP) countermeasure named Delay-based Flow Control algorithm with Service Differentiation (DFCSD) is proposed to target interactive applications requiring high throughput and low delay in cellular networks by limiting the standing queue size and decreasing the number of packets dropped at the eNodeB in Long Term Evolution (LTE). DFCSD stems from delay-based congestion control algorithms but works at the receiver side to avoid the performance degradation that delay-based algorithms suffer when competing with loss-based mechanisms. In addition, it is derived from the TCP fluid model to maximize network utility. Furthermore, DFCSD takes service differentiation into consideration based on the size of competing flows to shorten their completion time, thus improving user QoE. Simulation results confirm that DFCSD is compatible with existing TCP algorithms, significantly reduces the latency of TCP flows, and increases network throughput.
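
As a loose illustration of receiver-side, delay-based flow control (not the DFCSD algorithm itself), the sketch below shrinks an advertised window when the measured queueing delay exceeds a target and grows it otherwise; all thresholds and step sizes are assumptions.

def adjust_receive_window(rwnd, rtt_ms, base_rtt_ms,
                          target_queue_ms=5.0, step=1448):
    """Vegas-style receiver-side heuristic (an illustration, not DFCSD):
    back off when queueing delay exceeds a target, probe otherwise."""
    queueing_delay = rtt_ms - base_rtt_ms
    if queueing_delay > target_queue_ms:
        return max(2 * step, rwnd - step)    # shrink to drain the standing queue
    return rwnd + step                       # grow to use spare bandwidth

rwnd = 64 * 1024
for rtt in (40, 42, 55, 70, 48, 41):         # RTT samples in ms; base RTT is 40 ms
    rwnd = adjust_receive_window(rwnd, rtt, base_rtt_ms=40)
    print(rtt, rwnd)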

8.
Sensors (Basel) ; 18(2)2018 Feb 03.
Article in English | MEDLINE | ID: mdl-29401668

ABSTRACT

One of the remarkable challenges in Wireless Sensor Networks (WSN) is how to transfer the collected data efficiently given the energy limitations of sensor nodes. Network coding can increase the network throughput of WSN dramatically due to the broadcast nature of WSN. However, network coding usually propagates a single original error over the whole network. Due to this error-propagation property, most error correction methods cannot correct more than C/2 corrupted errors, where C is the max-flow min-cut of the network. To maximize the effectiveness of network coding applied in WSN, a new error-correcting mechanism to confront the propagated errors is urgently needed. Based on the social network characteristics inherent in WSN and L1 optimization, we propose a novel scheme that successfully corrects more than C/2 corrupted errors. What is more, even if errors occur on all the links of the network, our scheme can still correct them successfully. By introducing a secret channel and a specially designed matrix that can trap some errors, we improve John and Yi's model so that it can correct the propagated errors in network coding, which usually pollute exactly 100% of the received messages. Taking advantage of the social characteristics inherent in WSN, we propose a new distributed approach that establishes reputation-based trust among sensor nodes in order to identify the informative upstream sensor nodes. Drawing on social network theory, the informative relay nodes are selected and marked with high trust values. The two methods, L1 optimization and the use of social characteristics, coordinate with each other and can correct propagated errors even when the corrupted fraction is exactly 100% in WSN where network coding is performed. The effectiveness of the error correction scheme is validated through simulation experiments.
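
The L1-optimization ingredient can be illustrated with basis pursuit: recover a sparse error vector from a small set of linear "syndrome" equations by minimizing its L1 norm, here via scipy's linprog. The matrix and sizes are illustrative, not the paper's network-coding construction.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
m, n, k = 25, 40, 3                      # parity equations, links, corrupted links

B = rng.normal(size=(m, n))              # a random "parity" matrix (illustrative)
e_true = np.zeros(n)
e_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k) * 5
syndrome = B @ e_true                    # what the sink can observe

# min ||e||_1  s.t.  B e = syndrome, rewritten as an LP over e = e_plus - e_minus.
c = np.ones(2 * n)
A_eq = np.hstack([B, -B])
res = linprog(c, A_eq=A_eq, b_eq=syndrome, bounds=[(0, None)] * (2 * n))
e_hat = res.x[:n] - res.x[n:]
print(np.allclose(e_hat, e_true, atol=1e-6))   # True with high probability for these sizes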

9.
Sensors (Basel) ; 18(6)2018 May 29.
Article in English | MEDLINE | ID: mdl-29844285

ABSTRACT

Hundreds of thousands of ubiquitous sensing (US) devices have provided an enormous amount of data for Information-Centric Networking (ICN), an emerging network architecture that has the potential to solve a great variety of issues faced by the traditional network. A Caching Joint Shortcut Routing (CJSR) scheme is proposed in this paper to improve the Quality of Service (QoS) of ICN. The CJSR scheme has two main innovations that distinguish it from other in-network caching schemes: (1) Two routing shortcuts are set up to reduce the length of routing paths. Because of some inconvenient transmission processes, the routing paths of previous schemes are prolonged, and users cannot request data from Data Centers (DCs) until the data have been uploaded from Data Producers (DPs) to DCs. Hence, the first kind of shortcut is built from DPs to users directly. This shortcut relieves the burden on the whole network and reduces delay. Moreover, in the second shortcut routing method, a Content Router (CR) that yields a shorter uploading routing path from DPs to DCs is chosen, and data packets are then uploaded through this chosen CR. In this method, the uploading path shares some segments with the pre-caching path, so the overall length of routing paths is reduced. (2) The second innovation of the CJSR scheme is a cooperative pre-caching mechanism that further increases QoS. Besides being used in downloading routing, the pre-caching mechanism can also be used when data packets are uploaded towards DCs. Combining uploading and downloading pre-caching, the cooperative pre-caching mechanism exhibits high performance in different situations. Furthermore, to address the scarcity of storage, an algorithm that makes use of storage on idle CRs is proposed. After comparing the proposed scheme with five existing schemes via simulations, experimental results reveal that, compared with the traditional NDN scheme, the CJSR scheme can reduce the total number of processed interest packets by 54.8%, enhance the cache hits of each CR, reduce the total hop count by 51.6%, and cut down the length of the routing path over which users obtain their requested data by 28.6-85.7%. Moreover, the length of the uploading routing path can be decreased by 8.3-33.3%.
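
A toy illustration of the second shortcut idea, assuming a small hand-made topology: among candidate Content Routers, pick the one minimizing the hop count of the DP -> CR -> DC upload route (networkx is used only for shortest-path lengths; node names are made up).

import networkx as nx

# A small illustrative topology: DP is the data producer, DC is the data
# center, and CR1-CR3 are candidate content routers.
G = nx.Graph()
G.add_edges_from([
    ("DP", "CR1"), ("CR1", "CR2"), ("CR2", "DC"),
    ("DP", "CR3"), ("CR3", "CR2"), ("CR3", "DC"),
])

candidates = ["CR1", "CR2", "CR3"]

def upload_cost(cr):
    return (nx.shortest_path_length(G, "DP", cr)
            + nx.shortest_path_length(G, cr, "DC"))

best = min(candidates, key=upload_cost)
print(best, upload_cost(best))   # CR3 gives the shortest DP -> CR -> DC route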

10.
Sensors (Basel) ; 18(5)2018 May 10.
Article in English | MEDLINE | ID: mdl-29748525

ABSTRACT

Thanks to Software Defined Network (SDN) technology, Wireless Sensor Networks (WSNs) are gaining wider application prospects, since sensor nodes can obtain new functions after updating their program code. The issue of disseminating program code to every node in the network with minimum delay and energy consumption has been formulated and investigated in the literature. The minimum-transmission broadcast (MTB) problem, which aims to reduce broadcast redundancy, has been well studied in WSNs where the broadcast radius is assumed to be fixed across the whole network. In this paper, an Adaption Broadcast Radius-based Code Dissemination (ABRCD) scheme is proposed to reduce delay and improve energy efficiency in duty cycle-based WSNs. In the ABRCD scheme, a larger broadcast radius is set in areas with more energy left, yielding better performance than previous schemes. Thus: (1) with a larger broadcast radius, program code can reach the edge of the network from the source in fewer hops, decreasing the number of broadcasts and, at the same time, the delay; (2) as the ABRCD scheme adopts a larger broadcast radius for some nodes, program code can be transmitted to more nodes in one broadcast transmission, further diminishing the number of broadcasts; (3) the larger radius in the ABRCD scheme causes more energy consumption at some transmitting nodes, but the radius is only enlarged in areas with an energy surplus, and energy consumption in the hot-spots can instead be reduced because some nodes transmit data directly to the sink without forwarding by nodes in the original hot-spot; thus energy consumption is almost balanced and the network lifetime can be prolonged. The proposed ABRCD scheme first assigns a broadcast radius, which does not affect the network lifetime, to nodes at different distances from the code source, and then provides an algorithm to construct a broadcast backbone. Finally, a comprehensive performance analysis and simulation results show that the proposed ABRCD scheme achieves better performance in different broadcast situations. Compared to previous schemes, the transmission delay is reduced by 41.11-78.42%, the number of broadcasts is reduced by 36.18-94.27%, the energy utilization ratio is improved by up to 583.42%, and the network lifetime can be prolonged by up to 274.99%.
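
A minimal sketch of the radius-adaptation idea under assumed inputs: nodes with residual energy above the hot-spot level get a proportionally larger broadcast radius, capped at a maximum. The formula and constants are illustrative, not the ABRCD assignment rule.

def assign_broadcast_radius(residual_energy, hotspot_energy,
                            r_base=1.0, r_max=2.0):
    """Give nodes with surplus energy (relative to the hot-spot nodes near the
    sink) a larger broadcast radius, so code packets cover the network in
    fewer hops without shortening the network lifetime."""
    if residual_energy <= hotspot_energy:
        return r_base                          # no surplus: keep the default radius
    surplus = (residual_energy - hotspot_energy) / residual_energy
    return min(r_max, r_base * (1.0 + surplus))

for energy in (0.9, 1.5, 3.0):                 # residual energy at three example nodes
    print(energy, round(assign_broadcast_radius(energy, hotspot_energy=1.0), 2))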

11.
Sensors (Basel) ; 18(7)2018 Jul 23.
Article in English | MEDLINE | ID: mdl-30041441

ABSTRACT

In this paper, a novel imperceptible, fragile and blind watermarking scheme is proposed for speech tampering detection and self-recovery. The embedded watermark data for content recovery are calculated from the original discrete cosine transform (DCT) coefficients of the host speech. The watermark information is shared within a group of frames instead of being stored in one frame, so the scheme trades off between the data waste problem and the tampering coincidence problem. When a part of a watermarked speech signal is tampered with, one can accurately localize the tampered area, and the watermark data in the unmodified area can still be extracted. Then, a compressive sensing technique is employed to retrieve the coefficients by exploiting the sparseness in the DCT domain. The smaller the tampered area, the better the quality of the recovered signal. Experimental results show that the watermarked signal is imperceptible, and the recovered signal is intelligible for tampering rates of up to 47.6%. A deep learning-based enhancement method is also proposed and implemented to increase the SNR of the recovered speech signal.
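
DCT-domain embedding can be illustrated with simple quantization index modulation (QIM): one bit per frame is forced into the parity of a quantized mid-band DCT coefficient. This is a generic sketch, not the paper's frames-group sharing or compressive-sensing recovery; frame length, coefficient index, and step size are arbitrary.

import numpy as np
from scipy.fft import dct, idct

def embed_bits(signal, bits, frame_len=160, coeff=20, step=0.05):
    """Embed one bit per frame into the parity of a quantized DCT coefficient."""
    out = signal.copy()
    for i, bit in enumerate(bits):
        frame = out[i * frame_len:(i + 1) * frame_len]
        c = dct(frame, norm="ortho")
        q = np.round(c[coeff] / step)          # quantize the chosen coefficient
        if int(q) % 2 != bit:
            q += 1                             # force the parity to carry the bit
        c[coeff] = q * step
        out[i * frame_len:(i + 1) * frame_len] = idct(c, norm="ortho")
    return out

def extract_bits(signal, n_bits, frame_len=160, coeff=20, step=0.05):
    bits = []
    for i in range(n_bits):
        c = dct(signal[i * frame_len:(i + 1) * frame_len], norm="ortho")
        bits.append(int(np.round(c[coeff] / step)) % 2)
    return bits

rng = np.random.default_rng(4)
speech = rng.normal(scale=0.1, size=16000)        # stand-in for a 1 s speech clip
watermark = [1, 0, 1, 1, 0, 0, 1, 0]
marked = embed_bits(speech, watermark)
print(extract_bits(marked, len(watermark)) == watermark)   # True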

12.
Sensors (Basel) ; 17(10)2017 Sep 21.
Article in English | MEDLINE | ID: mdl-28934171

ABSTRACT

Because mobile ad hoc networks have characteristics such as the lack of center nodes, multi-hop routing and changeable topology, existing checkpoint technologies for normal mobile networks cannot be applied well to mobile ad hoc networks. Considering the multi-frequency hierarchy structure of ad hoc networks, this paper proposes a hybrid checkpointing strategy which combines synchronous with asynchronous checkpointing: the checkpoints of mobile terminals in the same cluster remain synchronous, while the checkpoints in different clusters remain asynchronous. This strategy not only avoids cascading rollback among the processes in the same cluster, but also avoids excessive message transmissions among the processes in different clusters, and it can reduce the communication delay. In order to ensure the consistency of the global states, this paper discusses the correctness criteria of hybrid checkpointing, which include the criteria of checkpoint taking, rollback recovery and indelibility. Based on the designed Intra-Cluster Checkpoint Dependence Graph and Inter-Cluster Checkpoint Dependence Graph, the elimination rules for different kinds of checkpoints are discussed, and the algorithms for same-cluster checkpoints, different-cluster checkpoints, and rollback recovery are also given. Experimental results demonstrate that the proposed hybrid checkpointing strategy is a preferable trade-off: it takes the various resource constraints of ad hoc networks into account, reduces the dependence on cluster heads, and shortens recovery time relative to purely synchronous checkpointing while retaining the advantages of purely asynchronous checkpointing.

13.
Sensors (Basel) ; 17(11)2017 Nov 03.
Article in English | MEDLINE | ID: mdl-29099793

ABSTRACT

Sensors are increasingly used in mobile environments with wireless network connections. Multiple sensor types measure distinct aspects of the same event, and their measurements are then combined to produce integrated, reliable results. As the number of sensors in networks increases, low energy budgets and changing network connections complicate event detection and measurement. We present a data fusion scheme for mobile wireless sensor networks that offers high energy efficiency and low network delays while still producing reliable results. In the first phase, we used a network simulation in which mobile agents dynamically select the next-hop migration node based on the stability parameter of the link and perform data fusion at the migration node. Agents use the fusion results to decide whether to return the fusion results to the processing center or continue to collect more data. In the second phase, the feasibility of data fusion at the node level is confirmed by an experimental design in which fused data from color sensors show near-identical results to actual physical temperatures. These results are potentially important for new large-scale sensor network applications.
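
A tiny sketch of the first-phase behaviour under assumed data structures: the agent migrates to the neighbour with the best link-stability score and fuses readings by simple averaging; both the score and the fusion rule are placeholders for the paper's actual definitions.

def select_next_hop(neighbors):
    """Pick the neighbour with the highest link-stability score (a stand-in
    for the paper's stability parameter), so the migrating agent is less
    likely to lose its connection mid-fusion."""
    return max(neighbors, key=lambda n: n["stability"])

def fuse(readings):
    """Simple average fusion of the readings gathered so far."""
    return sum(readings) / len(readings)

neighbors = [
    {"id": "n3", "stability": 0.62},
    {"id": "n7", "stability": 0.91},
    {"id": "n9", "stability": 0.45},
]
print(select_next_hop(neighbors)["id"])      # n7
print(round(fuse([21.2, 21.6, 20.9]), 2))    # 21.23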

14.
Sensors (Basel) ; 17(7)2017 Jul 15.
Article in English | MEDLINE | ID: mdl-28714886

ABSTRACT

Parsimony, including sparsity and low rank, has shown great importance for data mining in social networks, particularly in tasks such as segmentation and recognition. Traditionally, such modeling approaches rely on an iterative algorithm that minimizes an objective function with convex l1-norm or nuclear-norm constraints. However, the results obtained by convex optimization are usually suboptimal compared to the solutions of the original sparse or low-rank problems. In this paper, a novel robust subspace segmentation algorithm is proposed by integrating lp-norm and Schatten p-norm constraints. The affinity graph obtained in this way can better capture the local geometric structure and the global information of the data. As a consequence, our algorithm is more generative, discriminative and robust. An efficient linearized alternating direction method is derived to realize our model. Extensive segmentation experiments are conducted on public datasets. The proposed algorithm is shown to be more effective and robust than five existing algorithms.

15.
Sensors (Basel) ; 17(3)2017 Mar 09.
Article in English | MEDLINE | ID: mdl-28282962

ABSTRACT

With the rapid development of virtual machine technology and cloud computing, distributed denial of service (DDoS) attacks, or sudden peak traffic, pose a great threat to the security of the network. In this paper, a novel topology link control technique for mitigating attacks in real-time environments is proposed. Firstly, a non-invasive method of deploying virtual sensors in the nodes is built, which uses the resource manager of each monitored node as a sensor. Secondly, a general topology-controlling approach for resisting intrusion is proposed. In the proposed approach, a prediction model is constructed by using copula functions to predict the peak of one resource from another resource; the result of the prediction determines whether or not to initiate the active defense. Finally, a minority game with incomplete strategies is employed to suppress attack flows and improve the permeability of the normal flows. The simulation results show that the proposed approach is very effective in protecting nodes.

16.
Sensors (Basel) ; 17(8)2017 Aug 04.
Article in English | MEDLINE | ID: mdl-28777353

ABSTRACT

The Shipboard Automatic Identification System (AIS) is crucial for navigation safety and maritime surveillance, and data mining and pattern analysis of AIS information have attracted considerable attention in terms of both basic research and practical applications. Clustering of spatio-temporal AIS trajectories can be used to identify abnormal patterns and mine customary route data for transportation safety; the capacities for navigation safety and maritime traffic monitoring could thus be enhanced correspondingly. However, trajectory clustering is often sensitive to undesirable outliers and is essentially more complex than traditional point clustering. To overcome this limitation, a multi-step trajectory clustering method is proposed in this paper for robust AIS trajectory clustering. In particular, Dynamic Time Warping (DTW), a similarity measurement method, is introduced in the first step to measure the distances between different trajectories. The calculated distances, inversely proportional to the similarities, constitute a distance matrix in the second step. Furthermore, Principal Component Analysis (PCA), a widely used dimensionality reduction method, is exploited to decompose the obtained distance matrix. In particular, the top k principal components with a cumulative contribution rate above 95% are extracted by PCA, and the number of centers k is chosen accordingly. The k centers are found by the improved automatic center selection algorithm. In the last step, the improved center clustering algorithm with k clusters is applied to the distance matrix to achieve the final AIS trajectory clustering results. In order to improve the accuracy of the proposed multi-step clustering algorithm, an automatic algorithm for choosing the k clusters is developed according to the similarity distance. Numerous experiments on realistic AIS trajectory datasets from a bridge-area waterway and the Mississippi River have been carried out to compare the proposed method with traditional spectral clustering and fast affinity propagation clustering. Experimental results illustrate its superior performance in terms of quantitative and qualitative evaluations.
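
The first three steps can be sketched directly: a textbook DTW distance, a pairwise distance matrix, and PCA on that matrix. The toy trajectories and scikit-learn usage are illustrative; the improved center-selection and clustering steps of the paper are not reproduced.

import numpy as np
from sklearn.decomposition import PCA

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between two 1-D tracks
    (e.g. resampled AIS positions projected onto one coordinate)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Three toy trajectories: two similar routes and one outlier.
trajs = [np.linspace(0, 10, 50),
         np.linspace(0, 10, 50) + 0.2,
         np.linspace(10, 0, 50)]

dist = np.array([[dtw_distance(a, b) for b in trajs] for a in trajs])
components = PCA(n_components=2).fit_transform(dist)   # decompose the distance matrix
print(np.round(dist, 1))
print(components.shape)   # (3, 2)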

17.
Sensors (Basel) ; 17(3)2017 Mar 03.
Article in English | MEDLINE | ID: mdl-28273827

ABSTRACT

Dynamic magnetic resonance imaging (MRI) has been extensively utilized for enhancing medical living environment visualization; however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t)-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints are combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to solve directly with commonly used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on the Alternating Direction Method of Multipliers (ADMM) is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems, each of which has a closed-form solution or can be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrate that the proposed method guarantees superior imaging performance in terms of quantitative and visual image quality assessments.
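
As a simplified stand-in for the paper's non-convex model, the sketch below runs ADMM on the convex low-rank plus sparse decomposition (nuclear norm plus L1), alternating singular value thresholding, soft thresholding, and a dual update. The parameters, iteration count, and synthetic data are assumptions.

import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def svt(x, t):
    """Singular value thresholding: the proximal operator of the nuclear norm."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    return (u * np.maximum(s - t, 0.0)) @ vt

def low_rank_plus_sparse(M, lam=None, mu=1.0, iters=500):
    """ADMM for  min ||L||_* + lam*||S||_1  s.t.  M = L + S  -- the convex
    analogue of the low-rank + sparse model used for dynamic imaging."""
    lam = lam or 1.0 / np.sqrt(max(M.shape))
    L = np.zeros_like(M); S = np.zeros_like(M); Y = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = soft(M - L + Y / mu, lam / mu)
        Y += mu * (M - L - S)          # dual ascent on the constraint M = L + S
    return L, S

rng = np.random.default_rng(5)
truth_L = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))   # rank-2 "background"
truth_S = np.zeros((60, 40))
truth_S[rng.random((60, 40)) < 0.05] = 8.0                      # sparse "dynamics"
L, S = low_rank_plus_sparse(truth_L + truth_S)
rel_err = np.linalg.norm(L - truth_L) / np.linalg.norm(truth_L)
print(round(rel_err, 3))   # close to 0 when the decomposition succeeds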


Subject(s)
Magnetic Resonance Imaging, Algorithms, Heart, Humans
18.
Sensors (Basel) ; 17(1)2017 Jan 18.
Article in English | MEDLINE | ID: mdl-28106764

ABSTRACT

Single-image blind deblurring for imaging sensors in the Internet of Things (IoT) is a challenging ill-conditioned inverse problem, which requires regularization techniques to stabilize the image restoration process. The purpose is to recover the underlying blur kernel and latent sharp image from only one blurred image. Under many degraded imaging conditions, the blur kernel could be considered not only spatially sparse, but also piecewise smooth with the support of a continuous curve. By taking advantage of the hybrid sparse properties of the blur kernel, a hybrid regularization method is proposed in this paper to robustly and accurately estimate the blur kernel. The effectiveness of the proposed blur kernel estimation method is enhanced by incorporating both the L1-norm of kernel intensity and the squared L2-norm of the intensity derivative. Once the accurate estimation of the blur kernel is obtained, the original blind deblurring can be simplified to the direct deconvolution of blurred images. To guarantee robust non-blind deconvolution, a variational image restoration model is presented based on the L1-norm data-fidelity term and the second-order total generalized variation (TGV) regularizer. All non-smooth optimization problems related to blur kernel estimation and non-blind deconvolution are effectively handled by using the alternating direction method of multipliers (ADMM)-based numerical methods. Comprehensive experiments on both synthetic and realistic datasets have been implemented to compare the proposed method with several state-of-the-art methods. The experimental comparisons have illustrated the satisfactory imaging performance of the proposed method in terms of quantitative and qualitative evaluations.
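
Once a kernel estimate is available, non-blind deconvolution can be illustrated with a simple frequency-domain Wiener filter; this is a stand-in for the paper's L1/TGV ADMM model, with a toy box-blur kernel and an arbitrarily chosen regularization weight.

import numpy as np

def wiener_deconvolve(blurred, kernel, reg=1e-2):
    """Frequency-domain Wiener deconvolution: a simple non-blind stand-in for
    the paper's L1/TGV model, usable once the blur kernel has been estimated."""
    H = np.fft.fft2(kernel, s=blurred.shape)      # zero-padded kernel spectrum
    Y = np.fft.fft2(blurred)
    X = np.conj(H) * Y / (np.abs(H) ** 2 + reg)   # regularized inverse filter
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(6)
sharp = rng.random((64, 64))
kernel = np.ones((5, 5)) / 25.0                   # a 5x5 box blur as a toy kernel
blurred = np.real(np.fft.ifft2(np.fft.fft2(kernel, s=sharp.shape) * np.fft.fft2(sharp)))
restored = wiener_deconvolve(blurred, kernel)
print(np.abs(restored - sharp).mean() < np.abs(blurred - sharp).mean())   # True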

19.
Sensors (Basel) ; 17(4)2017 Apr 06.
Article in English | MEDLINE | ID: mdl-28383496

ABSTRACT

Measurement of time series complexity and predictability is sometimes the cornerstone for proposing solutions to topology and congestion control problems in sensor networks. As a method of measuring time series complexity and predictability, multiscale entropy (MSE) has been widely applied in many fields. However, sample entropy, which is the fundamental component of MSE, scores the similarity of two subsequences of a time series as either zero or one, with no in-between values, which causes sudden changes of entropy values even if the time series contains only small changes. This problem becomes especially severe when the time series is short. To solve this problem, we propose flexible multiscale entropy (FMSE), which introduces a novel similarity function measuring the similarity of two subsequences with full-range values from zero to one, and thus increases the reliability and stability of measuring time series complexity. The proposed method is evaluated on both synthetic and real time series, including white noise, 1/f noise and real vibration signals. The evaluation results demonstrate that FMSE significantly improves the reliability and stability of measuring the complexity of time series, especially when the time series is short, compared to MSE and composite multiscale entropy (CMSE). FMSE is thus capable of improving the performance of topology and traffic congestion control techniques based on time series analysis.
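
The idea of replacing the hard 0/1 template match with a full-range similarity can be sketched as below, using a Gaussian membership function as an assumed example (the actual FMSE similarity function is defined in the paper), together with the usual coarse-graining step of multiscale entropy.

import numpy as np

def soft_sample_entropy(x, m=2, r=0.2):
    """Sample-entropy variant in which the hard 0/1 template match is replaced
    by a smooth similarity in [0, 1]; the Gaussian membership used here only
    illustrates the idea of a full-range similarity function."""
    x = np.asarray(x, dtype=float)
    r = r * x.std()
    def phi(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m)])
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        sim = np.exp(-(d / r) ** 2)                 # full-range similarity
        np.fill_diagonal(sim, 0.0)                  # exclude self-matches
        return sim.sum() / (len(templates) * (len(templates) - 1))
    return -np.log(phi(m + 1) / phi(m))

def coarse_grain(x, scale):
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(7)
white_noise = rng.normal(size=1000)
for scale in (1, 2, 4):
    print(scale, round(soft_sample_entropy(coarse_grain(white_noise, scale)), 3))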

20.
Sensors (Basel) ; 17(6)2017 Jun 06.
Article in English | MEDLINE | ID: mdl-28587304

ABSTRACT

Unlike conventional scalar sensors, camera sensors at different positions can capture a variety of views of an object. Based on this intrinsic property, a novel model called full-view coverage was proposed. We study the problem of how to select the minimum number of sensors to guarantee full-view coverage for a given region of interest (ROI). To tackle this issue, we derive the constraint condition on the sensor positions for full-view neighborhood coverage with the minimum number of nodes around a point. Next, we prove that full-view area coverage can be approximately guaranteed as long as the regular hexagons determined by the virtual grid are seamlessly stitched. We then present two solutions for camera sensor networks under two different deployment strategies. By computing the theoretically optimal length of the virtual grids, we put forward the deployment pattern algorithm (DPA) for the deterministic implementation. To reduce the redundancy in random deployment, we propose a local neighboring-optimal selection algorithm (LNSA) for achieving full-view coverage. Finally, extensive simulation results show the feasibility of our proposed solutions.
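
A small sketch of the deterministic deployment ingredient: generating hexagonal-lattice candidate positions over a rectangular ROI for an assumed edge length; deriving the theoretically optimal edge length and the full-view check themselves are not reproduced here.

import math

def hexagonal_grid(width, height, side):
    """Centres of a regular hexagonal lattice covering a width x height ROI.
    `side` stands in for the theoretically optimal hexagon edge length derived
    for full-view coverage; here it is just an input parameter."""
    points = []
    dx = 1.5 * side                     # horizontal spacing between columns
    dy = math.sqrt(3) * side            # vertical spacing between rows
    col = 0
    x = 0.0
    while x <= width:
        offset = dy / 2 if col % 2 else 0.0
        y = offset
        while y <= height:
            points.append((round(x, 2), round(y, 2)))
            y += dy
        x += dx
        col += 1
    return points

grid = hexagonal_grid(width=10.0, height=10.0, side=2.0)
print(len(grid), grid[:4])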
