Results 1 - 20 of 22

1.
Entropy (Basel) ; 26(4)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38667860

ABSTRACT

The main focus of this paper is the derivation of the structural properties of the test channels of Wyner's operational information rate distortion function (RDF), $\overline{R}(\Delta_X)$, for arbitrary abstract sources and, subsequently, the derivation of additional properties for a tuple of multivariate, correlated, jointly independent and identically distributed Gaussian random variables $\{X_t, Y_t\}_{t=1}^{\infty}$, $X_t:\Omega\to\mathbb{R}^{n_x}$, $Y_t:\Omega\to\mathbb{R}^{n_y}$, with average mean-square error at the decoder and the side information $\{Y_t\}_{t=1}^{\infty}$ available only at the decoder. For the tuple of multivariate correlated Gaussian sources, we construct optimal test channel realizations which achieve the informational RDF $\overline{R}(\Delta_X) \triangleq \inf_{\mathcal{M}(\Delta_X)} I(X;Z|Y)$, where $\mathcal{M}(\Delta_X)$ is the set of auxiliary random variables $Z$ such that $P_{Z|X,Y} = P_{Z|X}$, $\widehat{X} = f(Y,Z)$, and $\mathbf{E}\{\|X-\widehat{X}\|^2\} \le \Delta_X$. We show the following fundamental structural properties: (1) optimal test channel realizations that achieve the RDF satisfy the conditional independence $P_{X|\widehat{X},Y,Z} = P_{X|\widehat{X},Y} = P_{X|\widehat{X}}$ and $\mathbf{E}\{X|\widehat{X},Y,Z\} = \mathbf{E}\{X|\widehat{X}\} = \widehat{X}$; (2) for the conditional RDF $R_{X|Y}(\Delta_X)$, in which the side information is available to both the encoder and the decoder, the equality $\overline{R}(\Delta_X) = R_{X|Y}(\Delta_X)$ holds; and (3) we derive the water-filling solution for $R_{X|Y}(\Delta_X)$.
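For orientation, the scalar jointly Gaussian case of the conditional RDF admits a well-known closed form, and the vector case is solved by reverse water-filling; the expressions below are the standard textbook versions, stated as a reference point rather than as the paper's multivariate construction.

```latex
% Scalar jointly Gaussian (X, Y) with conditional variance \sigma^2_{X|Y}:
R_{X|Y}(\Delta_X) = \max\Bigl\{0,\ \tfrac{1}{2}\log_2\frac{\sigma^2_{X|Y}}{\Delta_X}\Bigr\}
% Vector case: reverse water-filling over the eigenvalues
% \lambda_1,\dots,\lambda_{n_x} of cov(X | Y), with water level \theta:
R_{X|Y}(\Delta_X) = \sum_{i=1}^{n_x} \tfrac{1}{2}\log_2\frac{\lambda_i}{\delta_i},
\qquad \delta_i = \min\{\lambda_i,\theta\},
\qquad \sum_{i=1}^{n_x} \delta_i = \Delta_X
```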

2.
Sensors (Basel) ; 23(20)2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37896453

ABSTRACT

Backscatter communication (BC) systems are a promising technology for Internet of Things (IoT) applications that allow devices to transmit information by modulating ambient radio signals without the need for a dedicated power source. However, the security of BC systems is a critical concern due to the vulnerability of the wireless channel. This paper investigates the impact of side information (SI) on the secrecy performance of BC systems. SI refers to additional knowledge available to the communicating parties beyond the transmitted data, which can be used to enhance reliability, efficiency, security, and quality of service in various communication systems. In particular, by considering SI non-causally known at the transmitter, we derive compact analytical expressions for the average secrecy capacity (ASC) and the secrecy outage probability (SOP) of the proposed system model, to analyze how SI affects the secrecy performance of BC systems. Moreover, a Monte Carlo simulation validates the accuracy of our analytical results and reveals that exploiting such knowledge at the transmitter improves system performance and ensures reliable communication at higher rates than conventional BC systems without SI; namely, a lower SOP and a higher ASC are achievable.
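As a rough illustration of how the SOP and ASC metrics are typically estimated, the following is a minimal Monte Carlo sketch under assumed Rayleigh fading; the average SNRs, target secrecy rate, and channel model are illustrative placeholders, not the paper's backscatter/side-information setup.

```python
# Hedged sketch: Monte Carlo estimate of secrecy outage probability (SOP)
# and average secrecy capacity (ASC) for a toy wiretap link. Rayleigh
# fading and the SNR values below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
snr_main, snr_eve = 10.0, 3.0        # assumed average SNRs (linear scale)
g_m = rng.exponential(1.0, n)        # Rayleigh fading -> exponential power gains
g_e = rng.exponential(1.0, n)

c_s = np.maximum(0.0, np.log2(1 + snr_main * g_m) - np.log2(1 + snr_eve * g_e))
r_s = 0.5                            # target secrecy rate (bits/channel use)
print("ASC ≈", c_s.mean())
print("SOP ≈", (c_s < r_s).mean())
```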

3.
Entropy (Basel) ; 25(5)2023 Apr 27.
Article in English | MEDLINE | ID: mdl-37238483

ABSTRACT

Generalized mutual information (GMI) is used to compute achievable rates for fading channels with various types of channel state information at the transmitter (CSIT) and receiver (CSIR). The GMI is based on variations of auxiliary channel models with additive white Gaussian noise (AWGN) and circularly symmetric complex Gaussian inputs. One variation uses reverse channel models with minimum mean square error (MMSE) estimates that give the largest rates but are challenging to optimize. A second variation uses forward channel models with linear MMSE estimates that are easier to optimize. Both model classes are applied to channels where the receiver is unaware of the CSIT and for which adaptive codewords achieve capacity. The forward model inputs are chosen as linear functions of the adaptive codeword's entries to simplify the analysis. For scalar channels, the maximum GMI is then achieved by a conventional codebook, where the amplitude and phase of each channel symbol are modified based on the CSIT. The GMI increases by partitioning the channel output alphabet and using a different auxiliary model for each partition subset. The partitioning also helps to determine the capacity scaling at high and low signal-to-noise ratios. A class of power control policies is described for partial CSIR, including an MMSE policy for full CSIT. Several examples of fading channels with AWGN illustrate the theory, focusing on on-off fading and Rayleigh fading. The capacity results generalize to block fading channels with in-block feedback, including capacity expressions in terms of mutual and directed information.
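To make the CSIT-adaptation idea concrete, here is a minimal sketch of the classic water-filling power control policy over fading states; this is the textbook ergodic-capacity policy, not the paper's GMI-based construction, and the Rayleigh gains and power budget are assumed values.

```python
# Hedged sketch: water-filling power allocation over fading states,
# illustrating how full CSIT is typically exploited.
import numpy as np

def waterfill(gains, p_avg, tol=1e-9):
    """Powers maximizing E[log2(1 + g*p)] subject to E[p] <= p_avg."""
    lo, hi = 0.0, 1.0 / gains.min() + p_avg + 1.0   # bracket the water level
    while hi - lo > tol:
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / gains)
        if p.mean() > p_avg:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, lo - 1.0 / gains)

gains = np.random.default_rng(1).exponential(1.0, 10_000)  # Rayleigh power gains
p = waterfill(gains, p_avg=1.0)
print("ergodic rate ≈", np.mean(np.log2(1 + gains * p)), "bits/use")
```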

4.
Entropy (Basel) ; 24(12)2022 Nov 24.
Article in English | MEDLINE | ID: mdl-36554121

ABSTRACT

We extend the problem of secure source coding by considering a remote source whose noisy measurements are correlated random variables used for secure source reconstruction. The main additions to the problem are as follows: (1) all terminals noncausally observe a noisy measurement of the remote source; (2) a private key is available to all legitimate terminals; (3) the public communication link between the encoder and decoder is rate-limited; and (4) the secrecy leakage to the eavesdropper is measured with respect to the encoder input, whereas the privacy leakage is measured with respect to the remote source. Exact rate regions are characterized for a lossy source coding problem with a private key, remote source, and decoder side information under security, privacy, communication, and distortion constraints. By replacing the distortion constraint with a reliability constraint, we obtain the exact rate region for the lossless case as well. Furthermore, the lossy rate region for scalar discrete-time Gaussian sources and measurement channels is established. An achievable lossy rate region that can be numerically computed is also provided for binary-input multiple additive discrete-time Gaussian noise measurement channels.

5.
Entropy (Basel) ; 23(9)2021 Sep 08.
Article in English | MEDLINE | ID: mdl-34573809

ABSTRACT

The setting of the measurement number for each block is very important for a block-based compressed sensing system. In practical applications, however, only the initial measurement results of the original signal are available on the sampling side, not the original signal itself; therefore, we cannot directly allocate an appropriate measurement number to each block without knowing the sparsity of the original signal. To solve this problem, we propose an adaptive block-based compressed video sensing scheme based on saliency detection and side information. According to the Johnson-Lindenstrauss lemma, we can use the initial measurement results to perform saliency detection and then obtain the saliency value for each block. Meanwhile, a side information frame, which is an estimate of the current frame, is generated on the reconstruction side by the proposed probability fusion model, and the significant-coefficient proportion of each block is estimated from the side information frame. Both the saliency value and the significant-coefficient proportion reflect the sparsity of the block. Finally, these two estimates of block sparsity are fused, so that intra-frame and inter-frame correlation can be used simultaneously for block sparsity estimation, and the measurement number of each block is allocated according to the fused sparsity (see the sketch below). In addition, we propose a global recovery model based on weighting, which reduces the blocking artifacts of reconstructed frames. The experimental results show that, compared with existing schemes, the proposed scheme achieves a significant improvement in peak signal-to-noise ratio (PSNR) at the same sampling rate.
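A minimal sketch of the final allocation step follows: fusing two per-block sparsity estimates and splitting a fixed measurement budget proportionally. The fusion weight `alpha`, the one-measurement floor, and the rounding repair are illustrative assumptions, not the paper's exact rule.

```python
# Hedged sketch: allocate a fixed measurement budget across blocks in
# proportion to a fused per-block sparsity estimate.
import numpy as np

def allocate(saliency, si_coeff_ratio, total_m, alpha=0.5):
    """Fuse two per-block sparsity estimates and split `total_m` measurements."""
    s = saliency / saliency.sum()
    c = si_coeff_ratio / si_coeff_ratio.sum()
    fused = alpha * s + (1 - alpha) * c            # fused sparsity estimate
    m = np.maximum(1, np.round(fused * total_m).astype(int))
    m[np.argmax(m)] += total_m - m.sum()           # repair rounding drift
    return m

saliency = np.array([0.9, 0.2, 0.5, 0.1])          # from initial measurements
si_ratio = np.array([0.8, 0.3, 0.4, 0.2])          # from the side information frame
print(allocate(saliency, si_ratio, total_m=400))   # per-block measurement counts
```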

6.
Entropy (Basel) ; 23(10)2021 Sep 30.
Article in English | MEDLINE | ID: mdl-34682011

ABSTRACT

We consider the problem of Private Information Retrieval with Private Side Information (PIR-PSI), wherein the privacy of the demand and the side information are jointly preserved. Although the capacity of the PIR-PSI setting is known, the underlying capacity-achieving code construction uses Maximum Distance Separable (MDS) codes, thereby incurring high computational complexity when retrieving the demand. Pointing at this drawback of MDS-based PIR-PSI codes, we propose XOR-based PIR-PSI codes for a simple yet non-trivial setting of two non-colluding databases and two side information files at the user. Although our codes offer a substantial reduction in complexity compared to MDS-based codes, the code rate falls marginally short of the capacity of the PIR-PSI setting. Nevertheless, we show that our code rate is strictly higher than that of XOR-based codes for PIR with no side information. As a result, our codes can be useful when privately downloading a file, particularly after having privately downloaded a few other messages from the same database at an earlier time instant.
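The following toy sketch shows only the core XOR decoding idea behind side-information-aided PIR — requesting a combination of the demand with a locally held file and cancelling the latter — and is not the paper's two-database, jointly demand-and-side-information-private construction.

```python
# Hedged toy: the XOR trick behind side-information-aided PIR. The user
# wants file `theta` and already holds file `s`; requesting the XOR of the
# two hides which of them is the demand (to a server unaware of the user's
# side information). Illustrates only the decoding step.
import secrets

files = [secrets.token_bytes(16) for _ in range(4)]   # database contents
theta, s = 2, 0                                       # demand and side info indices

# User sends the unordered index pair {theta, s}; server returns the XOR.
answer = bytes(a ^ b for a, b in zip(files[theta], files[s]))

# Decoding: XOR the answer with the locally held side information file.
decoded = bytes(a ^ b for a, b in zip(answer, files[s]))
assert decoded == files[theta]
```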

7.
Entropy (Basel) ; 23(12)2021 Dec 17.
Article in English | MEDLINE | ID: mdl-34946000

ABSTRACT

We consider the problem of encoding a deterministic source sequence (i.e., an individual sequence) for the degraded wiretap channel by means of an encoder and decoder that can both be implemented as finite-state machines. Our first main result is a necessary condition for both reliable and secure transmission in terms of the given source sequence, the bandwidth expansion factor, the secrecy capacity, the number of states of the encoder, and the number of states of the decoder. Equivalently, this necessary condition can be presented as a converse bound (i.e., a lower bound) on the smallest achievable bandwidth expansion factor. The bound is asymptotically achievable by Lempel-Ziv compression followed by good channel coding for the wiretap channel. When this lower bound is saturated, we also derive a lower bound on the minimum rate of purely random bits needed for local randomness at the encoder in order to meet the security constraint; this bound, too, is achieved by the same achievability scheme. Finally, we extend the main results to the case where the legitimate decoder has access to a side information sequence, which is another individual sequence that may be related to the source sequence, and a noisy version of the side information sequence leaks to the wiretapper.
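The achievability side leans on Lempel-Ziv compression, whose key statistic is the number of distinct phrases c(x) in the incremental parsing of the individual sequence; a minimal LZ78 phrase counter is sketched below as a generic illustration (the normalization c·log₂c/n is the classical compressibility benchmark, not the paper's exact bound).

```python
# Hedged sketch: LZ78 incremental parsing of an individual sequence.
import math

def lz78_phrase_count(x: str) -> int:
    """Parse x into distinct LZ78 phrases and return their number."""
    phrases, cur, count = set(), "", 0
    for ch in x:
        cur += ch
        if cur not in phrases:       # new phrase ends here
            phrases.add(cur)
            count += 1
            cur = ""
    return count + (1 if cur else 0)  # count a trailing partial phrase

x = "ababababababbbbbabab" * 50
c, n = lz78_phrase_count(x), len(x)
print(f"c(x) = {c}, LZ complexity ≈ {c * math.log2(c) / n:.3f} bits/symbol")
```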

8.
Sensors (Basel) ; 20(14)2020 Jul 10.
Article in English | MEDLINE | ID: mdl-32664398

ABSTRACT

Underwater sensing and remote telemetry tasks necessitate the accurate geo-location of sensor data series, which often requires underwater acoustic arrays. These are ensembles of hydrophones that can be jointly operated in order to, e.g., direct acoustic energy towards a given direction, or to estimate the direction of arrival of a desired signal. When the available equipment does not provide the required level of accuracy, it may be convenient to merge multiple transceivers into a larger acoustic array in order to achieve better processing performance. In this paper, we name such a structure an "array of opportunity" to signify the often inevitable sub-optimality of the resulting array design, e.g., a distance between nearest array elements larger than half the shortest acoustic wavelength that the array would receive. The most immediate consequence is that arrays of opportunity may be affected by spatial ambiguity and may require additional processing to avoid large errors in wideband direction of arrival (DoA) estimation, especially as opposed to narrowband processing. We consider the design of practical algorithms to achieve accurate detections, DoA estimates, and position estimates using wideband arrays of opportunity. For this purpose, we rely jointly on DoA and rough multilateration estimates to eliminate spatial ambiguities arising from the array layout. By means of emulations that realistically reproduce underwater noise and acoustic clutter, we show that our algorithm yields accurate DoA and location estimates, and in some cases allows arrays of opportunity to outperform properly designed arrays. For example, at a signal-to-noise ratio of -20 dB, a 15-element array of opportunity achieves lower average and median localization errors (27 m and 12 m, respectively) than a 30-element array with proper λ/2 element spacing (33 m and 15 m, respectively). We further confirm the accuracy of our approach through a proof-of-concept lake experiment, where our algorithm, applied to a real 10-element array of opportunity, achieves a 90th-percentile DoA estimation error of 4° and a 90th-percentile total location error of 5 m.
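A minimal sketch of the spatial-ambiguity phenomenon follows: a narrowband delay-and-sum spectrum for a uniform linear array, where element spacing beyond λ/2 produces grating lobes that are nearly indistinguishable from the true bearing. All parameters (element count, angles, scan grid) are illustrative, and this is not the paper's wideband algorithm.

```python
# Hedged sketch: grating-lobe ambiguity of a sparse uniform linear array.
import numpy as np

def doa_spectrum(d_over_lambda, n_elem, true_deg, scan_deg):
    n = np.arange(n_elem)
    steer = lambda th: np.exp(-2j * np.pi * d_over_lambda * n
                              * np.sin(np.deg2rad(th)))
    x = steer(true_deg)                              # noiseless plane-wave snapshot
    return np.array([np.abs(np.vdot(steer(th), x)) for th in scan_deg])

scan = np.linspace(-90.0, 90.0, 721)
for d in (0.5, 1.5):                                 # half-wavelength vs. sparse
    p = doa_spectrum(d, n_elem=15, true_deg=20.0, scan_deg=scan)
    ambiguous = scan[p > 0.99 * p.max()]             # near-maximal bearings
    print(f"d = {d} lambda: near-maximal response at {np.round(ambiguous, 2)} deg")
```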

9.
Entropy (Basel) ; 22(12)2020 Dec 17.
Article in English | MEDLINE | ID: mdl-33348883

ABSTRACT

In order to effectively improve the quality of side information in distributed video coding, we propose a side information generation scheme based on a coefficient matrix improvement model. The discrete cosine transform coefficient bands of the Wyner-Ziv frame at the encoder side are divided into entropy coding coefficient bands and distributed video coding coefficient bands; the coefficients of the entropy coding coefficient bands are then sampled, yielding sampled and unsampled coefficients. The sampled coefficients are compressed losslessly with an adaptive arithmetic encoder. For the unsampled coefficients and the coefficients of the distributed video coding coefficient bands, a low-density parity-check accumulate encoder is used to calculate the parity bits, which are stored in the buffer and transmitted in small increments upon decoder request. At the decoder side, the optical flow method is used to generate the initial side information, which is then improved from the sampled coefficients by using the coefficient matrix improvement model. The experimental results demonstrate that the proposed scheme effectively improves the quality of the generated side information by about 0.2-0.4 dB, thereby improving the overall performance of the distributed video coding system.

10.
Entropy (Basel) ; 22(6)2020 Jun 25.
Article in English | MEDLINE | ID: mdl-33286477

ABSTRACT

The problem of determining the best achievable performance of arbitrary lossless compression algorithms is examined, when correlated side information is available at both the encoder and decoder. For arbitrary source-side information pairs, the conditional information density is shown to provide a sharp asymptotic lower bound for the description lengths achieved by an arbitrary sequence of compressors. This implies that for ergodic source-side information pairs, the conditional entropy rate is the best achievable asymptotic lower bound to the rate, not just in expectation but with probability one. Under appropriate mixing conditions, a central limit theorem and a law of the iterated logarithm are proved, describing the inevitable fluctuations of the second-order asymptotically best possible rate. An idealised version of Lempel-Ziv coding with side information is shown to be universally first- and second-order asymptotically optimal, under the same conditions. These results are in part based on a new almost-sure invariance principle for the conditional information density, which may be of independent interest.
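Since the conditional entropy rate is the operative benchmark here, a minimal sketch of the empirical (first-order) conditional entropy of paired sequences may help fix ideas; the toy strings are placeholders.

```python
# Hedged sketch: empirical H(X|Y), the first-order benchmark that
# compression with side information at both terminals can approach.
from collections import Counter
from math import log2

def conditional_entropy(xs, ys):
    """Empirical H(X|Y) in bits per symbol for paired sequences."""
    n = len(xs)
    joint = Counter(zip(xs, ys))
    marg_y = Counter(ys)
    return -sum(c / n * log2(c / marg_y[y]) for (x, y), c in joint.items())

x = "ababbababbababba"
y = "cdcddcdcddcdcdda"   # correlated side information sequence
print(f"H(X|Y) ≈ {conditional_entropy(x, y):.3f} bits/symbol")
```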

11.
Entropy (Basel) ; 21(4)2019 Apr 17.
Article in English | MEDLINE | ID: mdl-33267124

ABSTRACT

We consider the k-user successive refinement problem with causal decoder side information and derive an exponential strong converse theorem. The rate-distortion region for the problem can be derived as a straightforward extension of the two-user case by Maor and Merhav (2008). We show that for any rate-distortion tuple outside the rate-distortion region of the k-user successive refinement problem with causal decoder side information, the joint excess-distortion probability approaches one exponentially fast. Our proof follows by judiciously adapting the recently proposed strong converse technique by Oohama using the information spectrum method, the variational form of the rate-distortion region, and Hölder's inequality. The lossy source coding problem with causal decoder side information considered by El Gamal and Weissman is a special case (k = 1) of the current problem. Therefore, the exponential strong converse theorem for the El Gamal and Weissman problem follows as a corollary of our result.

12.
J Chem Inf Model ; 58(2): 225-233, 2018 02 26.
Article in English | MEDLINE | ID: mdl-29286651

ABSTRACT

Incorporating experimental restraints is a powerful method of increasing accuracy in computational protein-small-molecule docking simulations. Different algorithms integrate distinct forms of biochemical data during the docking and/or scoring stages. These so-called hybrid methods make use of receptor-based information, such as nuclear magnetic resonance (NMR) restraints, or small-molecule-based information, such as structure-activity relationships (SARs). A third class of methods directly interrogates contacts between the protein receptor and the small molecule. This work reviews the current state of using such restraints in docking simulations, evaluates their feasibility across a broad range of systems, and identifies potential areas for algorithm development.


Subjects
Molecular Docking Simulation, Small Molecule Libraries/chemistry, Algorithms, Drug Design, Drug Discovery, Ligands, Magnetic Resonance Spectroscopy, Proteins/chemistry, Structure-Activity Relationship, User-Computer Interface
13.
Sensors (Basel) ; 18(6)2018 May 31.
Article in English | MEDLINE | ID: mdl-29857543

ABSTRACT

This work explores an innovative strategy for increasing the efficiency of compressed sensing applied to mm-wave SAR sensing using multiple weighted side information. The approach is tested on synthetic and real non-destructive testing measurements of a 3D-printed object with defects, taking advantage of multiple previous SAR images of the object with different degrees of similarity. The algorithm autonomously assigns weights to the side information at two levels: (1) among the components within each side information image and (2) across the different side information images. The reconstruction is thereby almost immune to poor-quality side information while exploiting the relevant components hidden inside the added side information. The presented results prove that, in contrast to common compressed sensing, good SAR image reconstruction is achieved at subsampling rates far below the Nyquist rate. Moreover, the algorithm is shown to be much more robust to low-quality side information than coherent background subtraction.
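One simple way to realize "weighted side information" in compressed sensing is a weighted-ℓ1 reconstruction where the penalty is reduced on the support suggested by the side information; the ISTA sketch below illustrates that idea under assumed parameters and is considerably simpler than the paper's autonomous two-level weighting.

```python
# Hedged sketch: ISTA for weighted-L1 compressed sensing, with side
# information lowering the penalty on its (presumed, possibly wrong) support.
import numpy as np

def ista_weighted(A, y, w, lam=0.05, n_iter=500):
    """Minimize 0.5*||Ax - y||^2 + lam * sum_i w_i*|x_i| by ISTA."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(z) * np.maximum(0.0, np.abs(z) - lam * w / L)  # weighted shrink
    return x

rng = np.random.default_rng(2)
n, m = 200, 60
x_true = np.zeros(n); x_true[[5, 50, 120]] = [1.0, -0.8, 0.6]
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true
w = np.ones(n); w[[5, 50, 120, 7]] = 0.1   # side info support (one entry wrong)
print("recovery error:", np.linalg.norm(ista_weighted(A, y, w) - x_true))
```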

14.
Entropy (Basel) ; 20(5)2018 May 08.
Article in English | MEDLINE | ID: mdl-33265442

ABSTRACT

We consider the rate distortion problem with side information at the decoder, as posed and investigated by Wyner and Ziv. Using the side information and the encoded original data, the decoder must reconstruct the original data to within an arbitrary prescribed distortion level. The rate distortion region, indicating the trade-off between the data compression rate R and a prescribed distortion level Δ, was determined by Wyner and Ziv. In this paper, we study the error probability of decoding for pairs (R, Δ) outside the rate distortion region. We evaluate the probability that the decoder's estimate of the source outputs has a distortion not exceeding a prescribed level Δ. We prove that, when (R, Δ) is outside the rate distortion region, this probability goes to zero exponentially, and we derive an explicit lower bound on the exponent. For the Wyner-Ziv source coding problem, the strong converse coding theorem had not previously been established; we prove it as a simple corollary of our result.
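For reference, the Wyner-Ziv rate-distortion function whose complement region the result concerns has the standard single-letter form (reproduced here from the classical literature):

```latex
% Wyner-Ziv rate-distortion function (side information Y at the decoder
% only), over auxiliary Z with Z -- X -- Y a Markov chain:
R_{\mathrm{WZ}}(\Delta) \;=\;
\min_{\substack{P_{Z|X},\; f:\,\mathcal{Y}\times\mathcal{Z}\to\hat{\mathcal{X}} \\
\mathbf{E}\,d(X, f(Y,Z)) \,\le\, \Delta}}
\bigl[\, I(X;Z) - I(Y;Z) \,\bigr]
```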

15.
Entropy (Basel) ; 20(1)2017 Dec 22.
Article in English | MEDLINE | ID: mdl-33265094

ABSTRACT

In this work, we establish a full single-letter characterization of the rate-distortion region of an instance of the Gray-Wyner model with side information at the decoders. Specifically, in this model, an encoder observes a pair of memoryless, arbitrarily correlated sources $(S_1^n, S_2^n)$ and communicates with two receivers over an error-free, rate-limited common link of capacity $R_0$, as well as error-free, rate-limited individual links of capacities $R_1$ to the first receiver and $R_2$ to the second receiver. Both receivers reproduce the source component $S_2^n$ losslessly, and Receiver 1 also reproduces the source component $S_1^n$ lossily, to within some prescribed fidelity level $D_1$. In addition, Receiver 1 and Receiver 2 are equipped, respectively, with memoryless side information sequences $Y_1^n$ and $Y_2^n$. Importantly, in this setup the side information sequences are arbitrarily correlated among themselves and with the source pair $(S_1^n, S_2^n)$, and are not assumed to exhibit any particular ordering. Furthermore, by specializing the main result to two Heegard-Berger models with successive refinement and scalable coding, we shed light on the roles of the common and private descriptions that the encoder should produce and on the role of each of the common and private links. We develop intuition by analyzing the derived single-letter rate-distortion regions of these models and discuss some insightful binary examples.

16.
J Stat Comput Simul ; 83(7): 1191-1209, 2013 Jan 01.
Article in English | MEDLINE | ID: mdl-24532860

ABSTRACT

We propose a semiparametric approach for the analysis of case-control genome-wide association studies. Parametric components are used to model both the conditional distribution of the case status given the covariates and the distribution of the genotype counts, whereas the distribution of the covariates is modeled nonparametrically. This yields a direct and joint modeling of the case status, covariates, and genotype counts, gives a better understanding of the disease mechanism, and results in more reliable conclusions. Side information, such as the disease prevalence, can be conveniently incorporated into the model via an empirical likelihood approach and leads to more efficient estimates and more powerful tests for the detection of disease-associated SNPs. Profiling is used to eliminate a nuisance nonparametric component, and the resulting profile empirical likelihood estimates are shown to be consistent and asymptotically normal. For the hypothesis test of disease association, we apply the approximate Bayes factor (ABF), which is computationally simple and thus highly desirable in genome-wide association studies, where hundreds of thousands to a million genetic markers are tested. We treat the approximate Bayes factor as a hybrid Bayes factor, which replaces the full data by the maximum likelihood estimates of the parameters of interest in the full model, and derive it under a general setting. The deviation from Hardy-Weinberg equilibrium (HWE) is also taken into account, and the ABF for HWE using cases is shown to provide evidence of association between a disease and a genetic marker. Simulation studies and an application are provided to illustrate the utility of the proposed methodology.
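A Wakefield-style ABF for a single marker, computed from an effect estimate and its standard error, is sketched below; the prior variance `w` is an assumed value, and the paper's hybrid ABF under its semiparametric model is more general.

```python
# Hedged sketch: Wakefield-style approximate Bayes factor for one SNP.
# ABF = p(beta_hat | H0) / p(beta_hat | H1); small values favor association.
from math import sqrt, exp

def approx_bayes_factor(beta_hat: float, se: float, w: float = 0.21**2) -> float:
    """beta_hat, se: effect estimate and standard error; w: N(0, w) prior variance."""
    v = se ** 2
    z2 = (beta_hat / se) ** 2
    r = w / (v + w)
    return sqrt((v + w) / v) * exp(-0.5 * z2 * r)

print(approx_bayes_factor(beta_hat=0.25, se=0.06))   # strong evidence -> ABF << 1
```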

17.
Stat Methods Med Res ; 32(11): 2270-2282, 2023 11.
Article in English | MEDLINE | ID: mdl-37823384

ABSTRACT

In this work, we develop a novel Bayesian regression framework that can be used to perform variable selection in high-dimensional settings. Unlike existing techniques, the proposed approach can leverage side information to inform the sparsity structure of the regression coefficients. This is accomplished by replacing the usual inclusion probability in the spike-and-slab prior with a binary regression model that assimilates this extra source of information (see the sketch after the subject terms below). To facilitate model fitting, a computationally efficient and easy-to-implement Markov chain Monte Carlo posterior sampling algorithm is developed via carefully chosen priors and data augmentation steps. The finite-sample performance of our methodology is assessed through numerical simulations, and we further illustrate our approach by using it to identify genetic markers associated with the nicotine metabolite ratio, a key biological marker associated with nicotine dependence and smoking cessation treatment.


Subjects
Algorithms, Bayes Theorem, Genetic Markers, Markov Chains
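The prior modification at the heart of the approach — a per-coefficient inclusion probability driven by side information through a binary regression link — can be sketched as follows; the logistic link, the normal-summary likelihood, and all numbers are illustrative assumptions, and the paper's full MCMC with data augmentation is not reproduced here.

```python
# Hedged sketch: side information drives the spike-and-slab inclusion prior.
import numpy as np
from scipy.stats import norm

def prior_inclusion(S, gamma):
    """pi_j = logistic(s_j' gamma): side information S sets per-variable sparsity."""
    return 1.0 / (1.0 + np.exp(-S @ gamma))

def posterior_inclusion(beta_hat, se, pi, slab_sd=1.0):
    """Posterior inclusion probability for one coefficient, assuming a normal
    summary beta_hat ~ N(beta, se^2) and slab beta ~ N(0, slab_sd^2)."""
    m0 = norm.pdf(beta_hat, 0.0, se)                           # spike (beta = 0)
    m1 = norm.pdf(beta_hat, 0.0, np.sqrt(se**2 + slab_sd**2))  # slab marginal
    return pi * m1 / (pi * m1 + (1 - pi) * m0)

S = np.array([[1.0, 0.8], [1.0, -0.5]])     # intercept + one annotation per variable
pi = prior_inclusion(S, gamma=np.array([-2.0, 3.0]))
for j, (b, s) in enumerate([(0.9, 0.3), (0.1, 0.3)]):
    print(f"var {j}: prior pi = {pi[j]:.3f}, "
          f"posterior = {posterior_inclusion(b, s, pi[j]):.3f}")
```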
18.
Sensors (Basel) ; 11(10): 9717-31, 2011.
Article in English | MEDLINE | ID: mdl-22163722

ABSTRACT

With the widespread use of identification systems, establishing authenticity with sensors has become an important research issue. Among the schemes that make authenticity verification based on information security possible, reversible data hiding has attracted much attention in the past few years. Owing to its reversibility, such a scheme must fulfill goals on two fronts. On the one hand, at the encoder, the secret information needs to be embedded into the original image by some algorithm, such that the output image resembles the input one as much as possible. On the other hand, at the decoder, both the secret information and the original image must be correctly extracted and recovered, and they must be identical to their embedding counterparts. Under the reversibility requirement, the output image quality, termed imperceptibility, and the number of bits that can be embedded, called capacity, are the two key factors for assessing the effectiveness of a data hiding algorithm; the size of the side information needed to make decoding possible should also be evaluated. Here, we exploit the characteristics of the original images to develop a method with better performance (a classical baseline is sketched after the subject terms below). In this paper, we propose an algorithm that provides more capacity than conventional algorithms, with similar output image quality after embedding and a comparable amount of side information. Simulation results demonstrate the applicability and better performance of our algorithm.


Subjects
Algorithms, Biometric Identification/methods, Computer Security, Records, Computer Simulation, Humans, Image Processing, Computer-Assisted
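For context, a classical reversible data hiding baseline — difference expansion on a pixel pair (Tian's method) — is sketched below; it is not the paper's proposed algorithm, and overflow/underflow handling is omitted.

```python
# Hedged sketch: difference-expansion reversible data hiding on a pixel pair.
def embed(x: int, y: int, bit: int):
    """Embed one bit into the pair (x, y); returns the marked pair."""
    l, h = (x + y) // 2, x - y          # integer mean and difference
    h2 = 2 * h + bit                    # expand the difference, append the bit
    return l + (h2 + 1) // 2, l - h2 // 2

def extract(x2: int, y2: int):
    """Recover the bit and the original pair exactly (reversibility)."""
    l, h2 = (x2 + y2) // 2, x2 - y2
    bit, h = h2 & 1, h2 >> 1
    return bit, (l + (h + 1) // 2, l - h // 2)

marked = embed(206, 201, 1)
bit, original = extract(*marked)
assert (bit, original) == (1, (206, 201))
print("marked pair:", marked, "-> recovered bit", bit, "and pair", original)
```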
19.
Math Biosci Eng ; 16(5): 4559-4580, 2019 05 23.
Article in English | MEDLINE | ID: mdl-31499677

ABSTRACT

In this paper, we propose a robust video steganographic method that can efficiently hide confidential messages in video sequences and ensure that these messages are perfectly reconstructed by the recipient. To apply the proposed scheme to video sequences, we must address two nontrivial problems: (a) how to effectively minimize the total steganographic distortion for each video frame, and (b) how to recover the hidden messages if some frames are lost or damaged. We tackle the first problem by designing a new distortion function, which employs two consecutive adjacent frames of the same scene as side information. The second problem is addressed by data sharing: the original data is expanded and split into multiple shares using a multi-ary Vandermonde matrix (see the sketch below). Since these shares contain substantial redundancy, the recipient can recover the hidden data even if some frames are damaged or lost during delivery. Extensive experiments show that the proposed scheme outperforms state-of-the-art methods in terms of robustness against diverse attacks.
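A minimal sketch of Vandermonde-based data expansion over a prime field follows: k data symbols become n > k shares, any k of which recover the data exactly. The field GF(257) and all parameters are illustrative assumptions; the paper's multi-ary construction may differ in detail.

```python
# Hedged sketch: Vandermonde data expansion/recovery over GF(P).
P = 257  # prime field size (fits byte-valued data symbols)

def make_shares(data, n):
    """Share j (j = 1..n) evaluates the polynomial with coefficients `data` at x = j."""
    return [(j, sum(d * pow(j, i, P) for i, d in enumerate(data)) % P)
            for j in range(1, n + 1)]

def recover(shares, k):
    """Solve the k x k Vandermonde system mod P by Gauss-Jordan elimination."""
    A = [[pow(x, i, P) for i in range(k)] + [v % P] for x, v in shares[:k]]
    for c in range(k):
        p = next(r for r in range(c, k) if A[r][c])   # nonzero pivot exists
        A[c], A[p] = A[p], A[c]
        inv = pow(A[c][c], P - 2, P)                  # modular inverse (Fermat)
        A[c] = [a * inv % P for a in A[c]]
        for r in range(k):
            if r != c and A[r][c]:
                A[r] = [(a - A[r][c] * b) % P for a, b in zip(A[r], A[c])]
    return [row[k] for row in A]

data = [72, 105, 33, 7]                      # k = 4 data symbols
shares = make_shares(data, n=7)              # any 4 of the 7 shares suffice
assert recover(shares[2:6], k=4) == data     # recover from shares 3..6
print("recovered:", recover(shares[::2], k=4))
```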

20.
Nucl Med Mol Imaging ; 50(1): 13-23, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26941855

ABSTRACT

PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms have enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images, such as CT and MRI from hybrid scanners, are also important ingredients for further improving the image quality of PET and SPECT. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we review work that uses anatomical information in molecular image reconstruction algorithms to achieve better image quality, describing the mathematical models, discussing sources of anatomical information for different cases, and showing some examples.
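As one concrete instance of anatomy-guided regularization, the sketch below runs a one-step-late (OSL) MAP-EM update with a quadratic prior whose neighbor weights are switched off across anatomical boundaries, on a 1-D toy problem; it is illustrative of the general idea reviewed above, not a specific published algorithm.

```python
# Hedged sketch: OSL MAP-EM with anatomically gated quadratic smoothing.
# Smoothing is applied only within anatomical regions, not across edges.
import numpy as np

rng = np.random.default_rng(3)
n_pix, n_det = 64, 96
A = rng.uniform(0.0, 1.0, (n_det, n_pix))            # toy system matrix
x_true = np.concatenate([np.full(32, 4.0), np.full(32, 1.0)])
y = rng.poisson(A @ x_true)                          # Poisson projection data

anat = np.concatenate([np.zeros(32), np.ones(32)])   # anatomical labels
w = (anat[:-1] == anat[1:]).astype(float)            # zero weight across the edge

x, beta, eps = np.ones(n_pix), 0.05, 1e-12
sens = A.sum(axis=0)
for _ in range(200):
    ratio = A.T @ (y / np.maximum(A @ x, eps))       # EM data-fit term
    dU = np.zeros(n_pix)                             # prior gradient at current x
    dU[:-1] += w * (x[:-1] - x[1:])
    dU[1:]  += w * (x[1:] - x[:-1])
    x = x * ratio / np.maximum(sens + beta * dU, eps)  # one-step-late update
print("region means:", x[:32].mean().round(2), x[32:].mean().round(2))
```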
