Results 1 - 20 of 50
1.
Stat Med ; 43(9): 1726-1742, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38381059

ABSTRACT

Current status data are a type of failure time data that arise when the failure time of a study subject cannot be determined precisely but is known only to occur before or after a random monitoring time. Variable selection methods for failure time data have been discussed extensively in the literature. However, statistical inference based on the selected model ignores the uncertainty introduced by model selection. To enhance prediction accuracy for risk quantities such as the survival probability, we propose two optimal model averaging methods under semiparametric additive hazards models. Specifically, based on martingale residual processes, a delete-one cross-validation (CV) process is defined, and two new CV functional criteria are derived for choosing model weights. Furthermore, we present a greedy algorithm for implementing the techniques, and the asymptotic optimality of the proposed model averaging approaches is established, along with the convergence of the greedy averaging algorithms. A series of simulation experiments demonstrates the effectiveness and superiority of the proposed methods. Finally, a real-data example is provided as an illustration.
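The greedy weight search can be pictured as repeatedly shifting one unit of weight mass to whichever candidate model most reduces the cross-validation criterion. Below is a minimal Python sketch of that step, assuming a generic `cv_loss` black box in place of the paper's martingale-residual CV functional; all names are illustrative.

```python
import numpy as np

def greedy_model_average(cv_loss, n_models, n_steps=100):
    """Greedy weight selection for model averaging.

    cv_loss: maps a weight vector on the simplex to a scalar
             cross-validation criterion (lower is better).
    Returns a weight vector summing to one.
    """
    weights = np.zeros(n_models)
    for step in range(1, n_steps + 1):
        best_j, best_val = 0, np.inf
        for j in range(n_models):
            trial = weights.copy()
            trial[j] += 1.0                  # tentatively add mass to model j
            val = cv_loss(trial / step)      # renormalize and score
            if val < best_val:
                best_j, best_val = j, val
        weights[best_j] += 1.0               # commit the best move
    return weights / n_steps

# Toy usage: the CV criterion is a quadratic bowl around (0.3, 0.7).
target = np.array([0.3, 0.7])
print(greedy_model_average(lambda w: float(np.sum((w - target) ** 2)),
                           n_models=2, n_steps=50))
```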


Subject(s)
Algorithms, Statistical Models, Humans, Proportional Hazards Models, Computer Simulation, Probability
2.
Risk Anal ; 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39166706

ABSTRACT

As urbanization accelerates worldwide, urban flooding is becoming increasingly destructive, making it important to improve emergency scheduling capabilities. Compared with other scheduling problems, urban flood emergency rescue scheduling is more complicated: because a disaster affects road network passability, a single vehicle type cannot complete all rescue tasks, whereas a reasonable combination of multiple vehicle types in cooperative rescue can improve efficiency. This study addresses the urban flood emergency rescue scheduling problem under actual road network inundation. First, the progress and shortcomings of related research are analyzed. Then, a four-level emergency transportation network is established based on a collaborative water-ground multimodal transshipment mode, in which the locations and number of transshipment points vary with the actual inundation. Subsequently, an interactive model based on hierarchical optimization is constructed, with travel length, travel time, and waiting time as the hierarchical objectives. Next, an improved A* algorithm based on the number of specific extension nodes is proposed, together with a scheduling scheme decision-making algorithm that combines the improved A* and greedy algorithms. Finally, the decision-making algorithm is applied to a practical example; the results show that the improved A* algorithm is faster and more accurate, verify the effectiveness of the scheduling model and decision-making algorithm, and yield a scheduling scheme with the shortest travel time for the proposed problem.
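As a point of reference for the path-search component, a baseline A* on a 4-connected grid with inundated (impassable) cells takes only a few lines of Python. This is a generic sketch: the paper's improvement, which limits the number of specific extension nodes, is not reproduced, and the grid encoding is an assumption.

```python
import heapq

def a_star(grid, start, goal):
    """Baseline A*; grid[r][c] == 1 marks an inundated cell.
    Manhattan distance serves as the admissible heuristic."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]   # (f, g, node, parent)
    parent, g_best = {}, {start: 0}
    while frontier:
        f, g, node, par = heapq.heappop(frontier)
        if node in parent:                    # already expanded
            continue
        parent[node] = par
        if node == goal:                      # walk back to recover the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, node))
    return None                               # goal unreachable

print(a_star([[0, 0, 0],
              [1, 1, 0],
              [0, 0, 0]], (0, 0), (2, 0)))
```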

3.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544141

ABSTRACT

Last-mile logistics have become an indispensable part of urban logistics systems. This study explores the effective selection of last-mile logistics nodes to enhance distribution efficiency, strengthen the corporate distribution image, reduce operating costs, and alleviate urban traffic congestion. We propose a clustering-based approach that identifies urban logistics nodes from the perspective of geographic information fusion, comprehensively considering several key indicators, including the coverage, balance, and urban traffic conditions of logistics distribution. Additionally, we employ a greedy algorithm to identify secondary nodes around primary nodes, thus constructing an effective nodal network. To verify the practicality of the model, we conducted an empirical simulation study using the logistics demand and traffic conditions of the Xianlin District of Nanjing. This research not only identifies the locations of primary and secondary logistics nodes but also provides a new perspective for constructing urban last-mile logistics systems, enriching the academic research on logistics node construction. The results are of significant theoretical and practical importance for optimizing urban logistics networks, enhancing logistics efficiency, and improving urban traffic conditions.
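The greedy secondary-node step admits a compact illustration: repeatedly pick the candidate site that covers the most still-uncovered demand points. A hypothetical Python sketch, with the paper's coverage, balance, and traffic indicators folded into a single `covers` map:

```python
def pick_secondary_nodes(candidates, demands, covers, k):
    """Greedy max-coverage selection of up to k secondary nodes.

    covers: dict mapping a candidate site to the set of demand
            points it can serve (illustrative stand-in for the
            paper's composite indicators).
    """
    chosen, uncovered = [], set(demands)
    for _ in range(k):
        best = max(candidates, key=lambda s: len(covers[s] & uncovered))
        if not covers[best] & uncovered:
            break                              # no further gain possible
        chosen.append(best)
        uncovered -= covers[best]
    return chosen

covers = {"A": {1, 2, 3}, "B": {3, 4}, "C": {5}}
print(pick_secondary_nodes(["A", "B", "C"], {1, 2, 3, 4, 5}, covers, k=2))
# -> ['A', 'B']
```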

4.
Sensors (Basel) ; 24(9)2024 May 01.
Article in English | MEDLINE | ID: mdl-38733003

ABSTRACT

In the context of the rapid development of the Internet of Vehicles, virtual reality, automatic driving, and the industrial Internet, the number of terminal devices in the network is growing explosively. As a result, more and more information is generated at the network edge, dramatically increasing the data throughput of the mobile communication network. As a key technology of the fifth-generation mobile network, mobile edge caching, which caches popular data on edge servers deployed at the network edge, avoids the transmission delay of the backhaul link and the occurrence of network congestion. With the growing scale of the network, however, distributing hot data from cloud servers to edge servers incurs huge energy consumption. To support the green and sustainable development of the communication industry and reduce the energy consumed in distributing the data to be cached, we make the first attempt to propose and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC) in this paper. First, we formulate the problem as a constrained optimization problem and prove its NP-hardness. Subsequently, we design a greedy algorithm with O(n²) computational complexity to solve the problem approximately. Experimental results show that, compared with having each edge server request the data directly from the cloud server, the strategy obtained by the algorithm significantly reduces the energy consumption of data distribution.
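The abstract does not spell the algorithm out, but one plausible O(n²) greedy scheme is to grow the set of served nodes one edge server at a time, always choosing the cheapest transfer from any already-served node (cloud or previously cached edge server); structurally this is Prim's algorithm. The sketch below is a guess at such a scheme, not the paper's published method.

```python
def greedy_distribution(cost, n):
    """Node 0 is the cloud; nodes 1..n-1 are edge servers.
    cost[i][j] is the energy to send the cached data from i to j.
    Returns (total_energy, list of (source, target) transfers)."""
    served = {0}
    best_src = {j: 0 for j in range(1, n)}   # cheapest known source so far
    total, plan = 0.0, []
    while len(served) < n:
        j = min((v for v in range(n) if v not in served),
                key=lambda v: cost[best_src[v]][v])
        i = best_src[j]
        total += cost[i][j]
        plan.append((i, j))
        served.add(j)
        for v in range(n):                   # newly served j may be cheaper
            if v not in served and cost[j][v] < cost[best_src[v]][v]:
                best_src[v] = j
    return total, plan

cost = [[0, 4, 9],
        [4, 0, 2],
        [9, 2, 0]]
print(greedy_distribution(cost, 3))   # -> (6.0, [(0, 1), (1, 2)])
```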

5.
Sensors (Basel) ; 23(3)2023 Jan 29.
Article in English | MEDLINE | ID: mdl-36772527

ABSTRACT

In the Information Age, the widespread use of black-box algorithms makes it difficult to understand how data are used. The practice of sensor fusion is widespread, as there are many tools to further improve the robustness and performance of a model. In this study, we demonstrate a Long Short-Term Memory canonical correlation analysis (LSTM-CCA) model for the fusion of passive RF (P-RF) and electro-optical (EO) data, in order to gain insights into how the P-RF data are utilized. The P-RF data are constructed from in-phase and quadrature (I/Q) component data processed via histograms and are combined with EO data enhanced via dense optical flow (DOF). The preprocessed data are then used to train the LSTM-CCA model for object detection and tracking. To determine the impact of the different data inputs, a greedy algorithm (explainX.ai) is implemented to determine the weight and impact of the canonical variates provided to the fusion model on a scenario-by-scenario basis. This research introduces an explainable LSTM-CCA framework for P-RF and EO sensor fusion, providing novel insights into the sensor fusion process that can assist in the detection and differentiation of targets and help decision-makers determine the weights for each input.

6.
Philos Trans A Math Phys Eng Sci ; 380(2214): 20210128, 2022 Jan 10.
Article in English | MEDLINE | ID: mdl-34802269

ABSTRACT

Human immunodeficiency virus self-testing (HIVST) is an innovative and effective strategy important to the expansion of HIV testing coverage. Several innovative implementations of HIVST have been developed and piloted among HIV high-risk populations, such as men who have sex with men (MSM), to meet the global testing target. One such strategy is the secondary distribution of HIVST, in which individuals (defined as indexes) are given multiple testing kits both for self-use (i.e., self-testing) and for distribution to other people in their MSM social network (defined as alters). Studies of secondary HIVST distribution have mainly concentrated on developing new intervention approaches to further increase the effectiveness of this relatively new strategy from the perspective of the traditional public health discipline, yet there are many aspects of secondary HIVST distribution in which mathematical modelling can play an important role. In this study, we considered secondary HIVST kit distribution in a resource-constrained setting and proposed two data-driven integer linear programming models to maximize the overall economic benefits of secondary HIVST kit distribution, based on our implementation data from Chinese MSM. The objective function takes into account the expansion of normal alters and the detection of positive and newly-tested alters. Based on solutions from solvers, we developed greedy algorithms to find final solutions for our linear programming models. Results showed that the proposed data-driven approach can improve the total health economic benefit of secondary HIVST distribution. This article is part of the theme issue 'Data science approaches to infectious disease surveillance'.


Subject(s)
HIV Infections, Sexual and Gender Minorities, China, HIV Infections/diagnosis, HIV Infections/epidemiology, Homosexuality, Male, Humans, Male, Resource Allocation, Self-Testing
7.
Sensors (Basel) ; 22(6)2022 Mar 21.
Article in English | MEDLINE | ID: mdl-35336578

ABSTRACT

In on-grid microgrids, electric vehicles (EVs) have to be efficiently scheduled for cost-effective electricity consumption and network operation. The stochastic nature of the involved parameters, along with their large number and correlations, makes such scheduling a challenging task. This paper aims to identify innovative solutions for reducing the total costs of on-grid EVs within hybrid microgrids. To optimally schedule the EVs, a heuristic greedy approach is considered. Unlike most existing scheduling methodologies in the literature, the proposed greedy scheduler is model-free, training-free, and yet efficient. The approach considers factors such as the electricity price, the arrival and departure states of the on-grid EVs, and the total revenue needed to meet the load demands. The greedy approach behaves satisfactorily for the hybrid microgrid system, which comprises photovoltaic generation, a wind turbine, and a local utility grid, while the on-grid EVs are utilized as an energy-storage exchange location. Real-time hardware-in-the-loop experiments are conducted comprehensively to maximize the earned profit. Through different uncertainty scenarios, the ability of the proposed greedy approach to obtain a globally optimal solution is assessed. A data simulator was developed to generate evaluation datasets that capture the uncertainties in the system parameters' behavior. The greedy strategy proves applicable, scalable, and efficient in terms of total operating expenditures, and total expenses decrease significantly as EV penetration grows. Using simulated data covering an effective operational duration of 500 years, the proposed approach cut energy consumption costs by about 50-85%, beating existing state-of-the-art results, and proved tolerant to the large amount of uncertainty in the system's operational data.
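A toy single-vehicle version of such a greedy scheduler just charges in the cheapest hours while the EV is plugged in. Vehicle-to-grid discharge, renewables, and uncertainty handling are omitted, and every name below is illustrative.

```python
def greedy_ev_schedule(prices, arrive, depart, need_kwh, max_kw):
    """Fill the required energy in ascending order of hourly price.
    Returns {hour index: kWh charged in that hour}."""
    hours = sorted(range(arrive, depart), key=lambda h: prices[h])
    plan, remaining = {}, need_kwh
    for h in hours:
        if remaining <= 0:
            break
        amount = min(max_kw, remaining)   # respect the charger limit
        plan[h] = amount
        remaining -= amount
    return plan

prices = [0.30, 0.12, 0.10, 0.25, 0.40]   # $/kWh for five hours
print(greedy_ev_schedule(prices, arrive=0, depart=5, need_kwh=14, max_kw=7))
# -> {2: 7, 1: 7}: charge in the two cheapest hours
```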


Subject(s)
Electricity, Heuristics, Costs and Cost Analysis
8.
Entropy (Basel) ; 24(12)2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36554158

ABSTRACT

In this study, the performance of intelligent reflecting surfaces (IRSs) with a discrete phase shift strategy is examined in multiple-antenna systems. Accounting for the IRS network overhead, a new achievable rate model is designed to evaluate practical IRS system performance. Finding the optimal resolution of the IRS discrete phase shifts and a corresponding phase shift vector is an NP-hard combinatorial problem with extremely large search complexity. Recognizing the performance trade-off between the IRS passive beamforming gain and the IRS signaling overhead, an incremental search method is proposed to find the optimal resolution of the IRS discrete phase shift. Moreover, two low-complexity sub-algorithms are suggested for obtaining the IRS discrete phase shift vector within the incremental search. The proposed incremental search-based method efficiently finds the resolution that maximizes the overhead-aware achievable rate. Simulation results show that the discrete phase shift with the incremental search method outperforms the conventional analog phase shift by choosing the optimal resolution, and a comparison of cumulative distribution functions shows the superiority of the proposed method over the entire coverage area. Specifically, a coverage extension of more than 20% can be accomplished by deploying an IRS with the proposed method.
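The incremental search idea can be illustrated by sweeping the resolution b = 1, 2, ... bits and stopping at the first drop in the overhead-aware rate: beamforming gain saturates in b while signaling overhead keeps growing. The rate model below is a toy stand-in, not the paper's.

```python
import math

def incremental_resolution_search(rate, max_bits=8):
    """Return (best resolution in bits, its overhead-aware rate),
    stopping as soon as increasing the resolution stops helping."""
    best_b, best_rate = 1, rate(1)
    for b in range(2, max_bits + 1):
        r = rate(b)
        if r <= best_rate:        # overhead now outweighs the gain
            break
        best_b, best_rate = b, r
    return best_b, best_rate

# Toy model: gain saturates like 1 - 2^-b, overhead grows linearly in b.
toy = lambda b: math.log2(1 + 100 * (1 - 2 ** (-b))) * (1 - 0.05 * b)
print(incremental_resolution_search(toy))   # -> (2, ...)
```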

9.
Molecules ; 26(23)2021 Nov 27.
Article in English | MEDLINE | ID: mdl-34885781

ABSTRACT

Chemical features of small molecules can be abstracted to 3D pharmacophore models, which are easy to generate, interpret, and adapt by medicinal chemists. Three-dimensional pharmacophores can be used to efficiently match and align molecules according to their chemical feature pattern, which facilitates the virtual screening of even large compound databases. Existing alignment methods used in computational drug discovery and bio-activity prediction are often unable to find accurate matches between pharmacophores because they aim purely to minimize RMSD or maximize volume overlap, whereas the actual goal is to match as many features as possible within the positional tolerances of the pharmacophore features. As a consequence, the obtained alignments are often suboptimal in terms of the number of geometrically matched feature pairs, which increases the false-negative rate and thus negatively affects the outcome of virtual screening experiments. We address this issue with a new alignment algorithm, Greedy 3-Point Search (G3PS), which finds optimal alignments by using a matching-feature-pair-maximizing search strategy while at the same time being faster than competing methods.


Subject(s)
Algorithms, Pharmaceutical Preparations/chemistry, Databases as Topic, Molecular Models, Time Factors
10.
Entropy (Basel) ; 23(7)2021 Jun 25.
Article in English | MEDLINE | ID: mdl-34201971

ABSTRACT

In this paper, we consider decision trees that use both conventional queries, each based on one attribute, and queries based on hypotheses about the values of all attributes. Such decision trees are similar to those studied in exact learning, where membership and equivalence queries are allowed. We present a greedy algorithm based on entropy for the construction of such decision trees and discuss the results of computer experiments on various data sets and randomly generated Boolean functions.
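The conventional entropy-greedy step, choosing the attribute whose split maximizes information gain, is shown below; the paper's hypothesis-based queries are not modeled in this sketch.

```python
from collections import Counter
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def best_attribute(rows, labels, n_attrs):
    """Return the index of the attribute with maximal information gain."""
    base, n = entropy(labels), len(rows)
    gains = []
    for a in range(n_attrs):
        parts = {}                              # group labels by attribute value
        for row, y in zip(rows, labels):
            parts.setdefault(row[a], []).append(y)
        remainder = sum(len(p) / n * entropy(p) for p in parts.values())
        gains.append(base - remainder)
    return max(range(n_attrs), key=gains.__getitem__)

rows   = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 1, 1]                           # label equals attribute 0
print(best_attribute(rows, labels, 2))          # -> 0
```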

11.
Entropy (Basel) ; 23(11)2021 Nov 18.
Article in English | MEDLINE | ID: mdl-34828234

ABSTRACT

When confronted with massive data streams, summarizing data with dimension reduction methods such as PCA raises theoretical and algorithmic pitfalls. A principal curve acts as a nonlinear generalization of PCA, and the present paper proposes a novel algorithm to automatically and sequentially learn principal curves from data streams. We show that our procedure is supported by regret bounds with optimal sublinear remainder terms. A greedy local search implementation (called slpc, for sequential learning principal curves) that incorporates both sleeping experts and multi-armed bandit ingredients is presented, along with its regret computation and performance on synthetic and real-life data.

12.
Sensors (Basel) ; 20(24)2020 Dec 11.
Article in English | MEDLINE | ID: mdl-33322537

ABSTRACT

Global Navigation Satellite System (GNSS)-based bistatic synthetic aperture radar (SAR) is attracting growing attention in remote sensing for its all-weather, real-time global observation capability. However, its low range resolution, caused by the narrow signal bandwidth, limits its development. Differences in configuration arising from the illumination angles and movement directions of different satellites make it possible to improve resolution by multi-satellite fusion, but fusion also raises the problems of resolution-enhancement efficiency and increased computation. In this paper, we aim to effectively improve the resolution of the multi-satellite fusion system. First, the point spread function (PSF) of the multi-satellite fusion system is analyzed, focusing on the relationship between the fused resolution, the geometric configuration, and the number of satellites. Numerical simulation shows that, compared with multi-satellite fusion, dual-satellite fusion offers higher resolution-enhancement efficiency. Second, a dual-satellite fusion imaging method based on optimized satellite selection is proposed. With a greedy algorithm, the selection proceeds in two steps: first, according to the geometric configuration, the single satellite with the optimal 2-D resolution is selected as the reference satellite; second, the angles between the azimuthal vector of the reference satellite and those of the other satellites are computed by traversal, and the satellite whose intersection angle is closest to 90° is selected as the auxiliary satellite. The fused image is obtained by non-coherent addition of the images generated by the reference and auxiliary satellites. Finally, GPS L1 real-orbit multi-target simulations and experimental validation were conducted. In the simulations, the 2-D resolution of the images produced by the proposed method was globally optimal in 15 and suboptimal in 8 of 24 data sets. In the experiment, the 2-D resolution of the proposed method was optimal in the scene, the area of the resolution unit was reduced by 70.1% compared with the single-satellite images, and, with three navigation satellites available for imaging, the proposed method took 66.6% of the time required by the traversal method. The simulations and experiments fully demonstrate the feasibility of the method.
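The two-step selection itself is simple to sketch: take the satellite with the best 2-D resolution as the reference, then pair it with the satellite whose azimuth vector is closest to perpendicular to the reference's. The field names below are illustrative assumptions, not the paper's data structures.

```python
import math

def select_pair(satellites):
    """satellites: dicts with 'id', 'resolution' (smaller is better),
    and 'azimuth' (2-D direction vector). Returns (reference, auxiliary)."""
    ref = min(satellites, key=lambda s: s["resolution"])   # step 1

    def deviation_from_90(s):                              # step 2 criterion
        ax, ay = ref["azimuth"]
        bx, by = s["azimuth"]
        cosang = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
        ang = math.degrees(math.acos(max(-1.0, min(1.0, cosang))))
        return abs(90.0 - ang)

    aux = min((s for s in satellites if s is not ref), key=deviation_from_90)
    return ref["id"], aux["id"]

sats = [{"id": "G1", "resolution": 3.0, "azimuth": (1, 0)},
        {"id": "G2", "resolution": 4.5, "azimuth": (1, 1)},
        {"id": "G3", "resolution": 5.0, "azimuth": (0, 1)}]
print(select_pair(sats))   # -> ('G1', 'G3')
```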

13.
Sensors (Basel) ; 19(18)2019 Sep 09.
Article in English | MEDLINE | ID: mdl-31505866

ABSTRACT

The privacy and security of the Internet of Things (IoT) are emerging as major issues. Several studies have applied network analysis to IoT networks, and malicious network analysis may threaten the privacy and security of the leader in an IoT network. With this in mind, we focus on how to evade malicious network analysis by modifying the topology of the IoT network, choosing closeness centrality as the network analysis tool. This paper makes three key contributions: (1) an optimization problem of removing k edges to minimize (maximize) the closeness value (rank) of the leader; (2) a greedy (greedy and simulated annealing) algorithm that solves the closeness value (rank) case of the proposed optimization problem in polynomial time; and (3) an UpdateCloseness (FastTopRank) algorithm for computing the closeness value (rank) efficiently. Experimental results prove the efficiency of our pruning algorithms and show that our heuristic algorithms obtain accurate solutions compared with the optimal solution (the worst-case approximation ratio is 0.85) and outperform other baseline algorithms (e.g., choosing the k edges with the highest degree sum).
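A direct greedy baseline for the value-minimization case can be written with networkx: in each of k rounds, tentatively delete every edge and keep the deletion that most lowers the leader's closeness. The paper's UpdateCloseness routine avoids this full recomputation; the brute-force version below is only a sketch.

```python
import networkx as nx

def greedy_edge_removal(G, leader, k):
    """Remove k edges, greedily minimizing the leader's closeness value."""
    H = G.copy()
    removed = []
    for _ in range(k):
        best_edge, best_val = None, float("inf")
        for e in list(H.edges()):
            H.remove_edge(*e)                          # try deleting e
            val = nx.closeness_centrality(H, u=leader)
            if val < best_val:
                best_edge, best_val = e, val
            H.add_edge(*e)                             # undo the trial
        H.remove_edge(*best_edge)                      # commit the best one
        removed.append(best_edge)
    return removed

G = nx.karate_club_graph()
print(greedy_edge_removal(G, leader=0, k=2))
```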

14.
Genet Epidemiol ; 41(8): 756-768, 2017 12.
Article in English | MEDLINE | ID: mdl-28875524

ABSTRACT

A genome-wide association study (GWAS) correlates marker and trait variation in a study sample. Each subject is genotyped at a multitude of SNPs (single nucleotide polymorphisms) spanning the genome. Here, we assume that subjects are randomly collected unrelateds and that trait values are normally distributed or can be transformed to normality. Over the past decade, geneticists have been remarkably successful in applying GWAS analysis to hundreds of traits. The massive amount of data produced in these studies presents unique computational challenges. Penalized regression with the ℓ1 (LASSO) or minimax concave (MCP) penalty is capable of selecting a handful of associated SNPs from millions of potential SNPs. Unfortunately, model selection can be corrupted by false positives and false negatives, obscuring the genetic underpinnings of a trait. Here, we compare LASSO and MCP penalized regression to iterative hard thresholding (IHT). On GWAS regression data, IHT is better at model selection and comparable in speed to both methods of penalized regression. This conclusion holds for both simulated and real GWAS data. IHT fosters parallelization and scales well in problems with large numbers of causal markers. Our parallel implementation of IHT accommodates SNP genotype compression and exploits multiple CPU cores and graphics processing units (GPUs). This allows statistical geneticists to leverage commodity desktop computers in GWAS analysis and to avoid supercomputing. AVAILABILITY: Source code is freely available at https://github.com/klkeys/IHT.jl.
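A textbook IHT iteration alternates a gradient step on the least-squares loss with hard thresholding to the k largest coefficients. The sketch below omits the paper's GPU, parallelization, and genotype-compression machinery.

```python
import numpy as np

def iht(X, y, k, n_iter=200, step=None):
    """Iterative hard thresholding for k-sparse linear regression."""
    n, p = X.shape
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2   # safe step from spectral norm
    b = np.zeros(p)
    for _ in range(n_iter):
        b = b + step * X.T @ (y - X @ b)         # gradient step
        small = np.argsort(np.abs(b))[:-k]       # all but the k largest
        b[small] = 0.0                           # hard threshold
    return b

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
beta = np.zeros(50)
beta[[3, 17, 40]] = [2.0, -1.5, 1.0]
y = X @ beta + 0.1 * rng.standard_normal(200)
print(np.nonzero(iht(X, y, k=3))[0])             # typically [ 3 17 40 ]
```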


Subject(s)
Genome-Wide Association Study, Models, Genetic, Algorithms, Body Mass Index, Cholesterol, HDL/genetics, Cholesterol, LDL/genetics, Humans, Phenotype, Polymorphism, Single Nucleotide, Triglycerides/genetics
15.
Int J Equity Health ; 17(1): 183, 2018 12 12.
Article in English | MEDLINE | ID: mdl-30541553

ABSTRACT

BACKGROUND: As the proportion of elderly residents living in large-scale affordable housing communities (LAHCs) increases in China, serious problems related to the spatial allocation of elderly healthcare facilities (EHFs) have become apparent, e.g., insufficient provision and inaccessibility. To address these issues, this study develops a location allocation model for EHFs to ensure equitable and efficient access to healthcare services for the elderly in LAHCs. METHODS: Based on discrete location theory, this paper develops a two-stage optimization model for the spatial allocation of EHFs in LAHCs. In the first stage, the candidate locations of EHFs are specified using geographic information system (GIS) techniques. In the second stage, the optimal location and size of each EHF are determined using a greedy algorithm (GA). Finally, the proposed two-stage optimization model is tested on the Daishan LAHC in Nanjing, Eastern China. RESULTS: The elderly residents' accessibility to EHFs is in line with Nanjing's planning standards. GIS techniques reveal deep insights into the spatial data and yield the candidate locations of EHFs. In addition, the model helps EHF planners achieve equity and efficiency simultaneously. Two optimal locations for EHFs in the Daishan LAHC are identified, which in turn verifies the validity of the model. CONCLUSIONS: As a strategy for allocating EHFs, this two-stage model improves the equity and efficiency of access to healthcare services for the elderly by optimizing the potential sites for EHFs. It can also be used to assist policymakers in providing adequate healthcare services for the low-income elderly. Furthermore, the model can be extended to the allocation of other public-service facilities in different countries or regions.
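The second-stage greedy can be pictured as repeatedly opening the candidate facility that serves the most still-unserved elderly residents within the access-distance standard. The capacity handling and all names in this Python sketch are illustrative assumptions.

```python
def locate_ehfs(candidates, residents, dist, capacity, max_dist, budget):
    """Open up to `budget` facilities; dist[c][e] is the distance from
    candidate site c to resident e. Returns the opened sites."""
    unserved, opened = set(residents), []
    while len(opened) < budget:
        def reachable(c):          # residents within the access standard
            near = [e for e in unserved if dist[c][e] <= max_dist]
            return near[:capacity] # crude facility-size cap
        best = max((c for c in candidates if c not in opened),
                   key=lambda c: len(reachable(c)), default=None)
        if best is None or not reachable(best):
            break
        unserved -= set(reachable(best))
        opened.append(best)
    return opened

dist = {"c1": {1: 300, 2: 450, 3: 2000}, "c2": {1: 900, 2: 800, 3: 400}}
print(locate_ehfs(["c1", "c2"], {1, 2, 3}, dist,
                  capacity=2, max_dist=1000, budget=2))   # -> ['c1', 'c2']
```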


Subject(s)
Health Facilities, Housing/economics, Poverty, Aged, Algorithms, China, Geographic Information Systems, Health Services Accessibility, Humans, Models, Theoretical, Organizational Case Studies, Spatial Analysis
16.
Sensors (Basel) ; 18(7)2018 Jul 14.
Article in English | MEDLINE | ID: mdl-30011938

ABSTRACT

Utilizing the data obtained from both scanning and counting sensors is critical for efficiently managing traffic flow on roadways. Past studies have mainly focused on the optimal layout of a single sensor type; how to optimize the arrangement of more than one type of sensor has not been fully researched. This paper develops a methodology that optimizes the deployment of different types of sensors to solve the well-recognized network sensor location problem (NSLP). To answer how many, where, and what types of sensors should be deployed on each link of the network, a novel bi-level programming model for full route observability is presented to strategically locate scanning and counting sensors. The methodology works in two steps. First, a mathematical program is formulated to determine the minimum number of scanning sensors; to solve it, a new 'differentiating matrix' is introduced and a corresponding 'differentiating first' greedy algorithm is put forward. In the second step, a scanning map and an incidence matrix are incorporated into the program, which extends the model to the deployment of multiple sensor types and provides a replacement method that reduces the total sensor cost without loss of observability. The second-step algorithm uses the two coefficient matrices, from the scanning map and the incidence relations, to enumerate all possible replacement schemes so that the costs of different combinations can be compared. Finally, the proposed approach is demonstrated on the Nguyen-Dupuis network and a real network, indicating that it can evaluate the trade-off between cost and full route observability.
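One plausible reading of the 'differentiating first' idea is a greedy set-cover over route pairs: keep adding the link that distinguishes the most still-confusable pairs of routes until every route has a unique scanning signature. The sketch below is a hypothetical reconstruction, not the paper's exact algorithm.

```python
from itertools import combinations

def place_scanning_sensors(routes, links):
    """routes: dict route_id -> set of links the route traverses.
    Returns a list of links whose scan records distinguish all routes."""
    pairs = set(combinations(sorted(routes), 2))   # pairs still confusable
    chosen = []
    while pairs:
        def separated(link):                       # pairs this link resolves
            return {(a, b) for a, b in pairs
                    if (link in routes[a]) != (link in routes[b])}
        best = max(links, key=lambda l: len(separated(l)))
        gain = separated(best)
        if not gain:
            break                                  # remaining pairs unresolvable
        chosen.append(best)
        pairs -= gain
    return chosen

routes = {"r1": {"e1", "e2"}, "r2": {"e2", "e3"}, "r3": {"e3", "e4"}}
print(place_scanning_sensors(routes, ["e1", "e2", "e3", "e4"]))
# -> e.g. ['e1', 'e2']: signatures (1,1), (0,1), (0,0) are all distinct
```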

17.
Biomed Eng Online ; 16(1): 49, 2017 Apr 24.
Article in English | MEDLINE | ID: mdl-28438178

ABSTRACT

BACKGROUND: In the active shape model framework, principal component analysis (PCA) based statistical shape models (SSMs) are widely employed to incorporate high-level a priori shape knowledge of the structure to be segmented, in order to achieve robustness. A crucial component of building SSMs is establishing shape correspondence between all training shapes, which is a very challenging task, especially in three dimensions. METHODS: We propose a novel mesh-to-volume registration based shape correspondence establishment method to improve accuracy and reduce computational cost. Specifically, we present a greedy-algorithm-based deformable simplex mesh that uses vector field convolution as the external energy. Furthermore, we develop an automatic shape initialization method using a Gaussian mixture model based registration algorithm to derive an initial shape that has high overlap with the object of interest, so that the deformable model can then evolve more locally. We apply the proposed deformable surface model to femur statistical shape model construction to illustrate its accuracy and efficiency. RESULTS: Extensive experiments on ten femur CT scans show that the quality of the femur shape models constructed via the proposed method is much better than that of the classical spherical harmonics (SPHARM) method. Moreover, the proposed method achieves much higher computational efficiency than the SPHARM method. CONCLUSIONS: The experimental results suggest that our method can be employed for effective statistical shape model construction.


Subject(s)
Algorithms, Femur/anatomy & histology, Femur/diagnostic imaging, Models, Statistical, Pattern Recognition, Automated/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Tomography, X-Ray Computed/methods, Adult, Aged, Computer Simulation, Female, Humans, Image Enhancement/methods, Imaging, Three-Dimensional/methods, Male, Middle Aged, Models, Anatomic, Models, Biological, Reproducibility of Results, Sensitivity and Specificity
18.
Acta Biotheor ; 64(4): 359-374, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27761675

ABSTRACT

This document presents a non-rigid registration algorithm for comparing brain magnetic resonance (MR) images. More precisely, we compare pre-operative and post-operative MR images in order to assess the deformation due to a surgical removal. The proposed algorithm was studied in Chesseboeuf et al. (Non-rigid registration of magnetic resonance imaging of brain. IEEE, 385-390. doi: 10.1109/IPTA.2015.7367172, 2015), following ideas of Trouvé (An infinite dimensional group approach for physics based models in patterns recognition. Technical Report, DMI, Ecole Normale Supérieure, Cachan, 1995), in which the author introduces the algorithm within a very general framework. Here we recall this theory from a practical point of view, with emphasis on illustrations and the description of the numerical procedure. Our version of the algorithm is associated with a particular matching criterion, and a section is devoted to the description of this object. The last section focuses on the construction of a statistical evaluation method.


Subject(s)
Algorithms, Brain/anatomy & histology, Brain/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Models, Theoretical, Computer Simulation, Humans
19.
Sensors (Basel) ; 16(7)2016 Jun 24.
Article in English | MEDLINE | ID: mdl-27347967

ABSTRACT

The conventional preamble-based channel estimation methods for filter bank multicarrier with offset quadrature amplitude modulation (FBMC/OQAM) systems in mobile-to-mobile sensor networks are inefficient. By exploiting the intrinsic sparsity of wireless channels, channel estimation is treated as a compressive sensing (CS) problem to improve estimation performance. In this paper, an Adaptive Regularized Compressive Sampling Matching Pursuit (ARCoSaMP) algorithm is proposed. Unlike earlier greedy algorithms, the new algorithm achieves accurate reconstruction by choosing the support set adaptively and by exploiting a regularization step that performs a second selection of atoms within the support set, even though the sparsity of the channel is unknown. Simulation results show that CS-based methods obtain significant channel estimation performance improvements over conventional preamble-based methods. The proposed ARCoSaMP algorithm outperforms the conventional sparsity adaptive matching pursuit (SAMP) algorithm, and yields even better results than the more advanced greedy compressive sampling matching pursuit (CoSaMP) algorithm without prior knowledge of the channel's sparsity.
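For orientation, the fixed-sparsity CoSaMP baseline that ARCoSaMP adapts can be sketched as follows; the adaptive support sizing and the regularized second selection of atoms that define ARCoSaMP are not shown.

```python
import numpy as np

def cosamp(A, y, k, n_iter=30):
    """CoSaMP: merge the 2k strongest proxy atoms with the current
    support, solve least squares on the merged set, prune to k."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        proxy = A.T @ (y - A @ x)                        # residual correlations
        omega = np.argsort(np.abs(proxy))[-2 * k:]       # 2k candidate atoms
        support = np.union1d(omega, np.nonzero(x)[0]).astype(int)
        ls, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x = np.zeros(n)
        keep = np.argsort(np.abs(ls))[-k:]               # prune back to k
        x[support[keep]] = ls[keep]
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100)) / np.sqrt(40)         # sensing matrix
h = np.zeros(100)
h[[5, 32, 77]] = [1.0, -0.7, 0.4]                        # 3-sparse channel
y = A @ h
print(np.nonzero(cosamp(A, y, k=3))[0])                  # typically [ 5 32 77 ]
```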

20.
Sensors (Basel) ; 16(9)2016 Sep 20.
Article in English | MEDLINE | ID: mdl-27657075

ABSTRACT

In this paper, a simple and flexible method for increasing the lifetime of fixed or mobile wireless sensor networks is proposed. Based on past residual energy information reported by the sensor nodes, the sink node or another central node dynamically optimizes the communication activity levels of the sensor nodes to save energy without sacrificing data throughput. The activity levels represent the portions of time or time-frequency slots in a frame during which the sensor nodes are scheduled to communicate with the sink node to report sensory measurements. Besides node mobility, it is assumed that the sensors' batteries may be recharged via wireless power transmission or an equivalent energy harvesting scheme, giving the optimization problem an even more dynamic character. We report greatly increased lifetimes over the non-optimized network, and comparable or even larger lifetime improvements with respect to an idealized greedy algorithm that uses both real-time channel state and residual energy information.
