Results 1 - 6 of 6
1.
Entropy (Basel); 26(7), 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39056932

ABSTRACT

The capacity of a memoryless state-dependent channel is derived for a setting in which the encoder is provided with rate-limited assistance from a cribbing helper that observes the state sequence causally and the past channel inputs strictly causally. Said cribbing may increase capacity but not to the level achievable by a message-cognizant helper.

2.
Entropy (Basel); 25(9), 2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37761613

ABSTRACT

The gain in the identification capacity afforded by a rate-limited description of the noise sequence corrupting a modulo-additive noise channel is studied. Both the classical Ahlswede-Dueck version and the Ahlswede-Cai-Zhang version, which does not allow for missed identifications, are studied. Irrespective of whether the description is provided to the receiver, to the transmitter, or to both, the two capacities coincide and both equal the helper-assisted Shannon capacity.
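
For a sense of the quantity involved: without help, the Shannon capacity of a modulo-additive noise channel over an alphabet of size q is log2(q) - H(Z). A minimal sketch, assuming (the abstract does not spell this out) that rate-R_h help adds exactly R_h bits up to the log2(q) ceiling, in the spirit of the flash-help results:

```python
import math

def entropy(pmf):
    """Shannon entropy in bits of a probability vector."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

def helper_assisted_capacity(q, noise_pmf, r_h):
    """Hypothetical helper-assisted capacity of Y = X + Z (mod q):
    the no-help baseline log2(q) - H(Z) plus the help rate r_h,
    capped at log2(q). The exactly-additive gain is an assumption
    made for illustration; the abstract only states that the two
    identification capacities equal this helper-assisted capacity."""
    baseline = math.log2(q) - entropy(noise_pmf)
    return min(math.log2(q), baseline + r_h)

noise = [0.5, 0.25, 0.125, 0.125]  # H(Z) = 1.75 bits on a quaternary alphabet
for r_h in (0.0, 0.5, 1.0, 2.0):
    print(f"R_h = {r_h:.1f}: C = {helper_assisted_capacity(4, noise, r_h):.3f} bits")
```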

3.
Entropy (Basel); 24(1), 2021 Dec 24.
Article in English | MEDLINE | ID: mdl-35052055

ABSTRACT

The listsize capacity is computed for the Gaussian channel with a helper that, cognizant of the channel-noise sequence but not of the transmitted message, provides the decoder with a rate-limited description of said sequence. This capacity is shown to equal the sum of the cutoff rate of the Gaussian channel without help and the rate of help. In particular, zero-rate help raises the listsize capacity from zero to the cutoff rate. This is achieved by having the helper provide the decoder with a sufficiently fine quantization of the normalized squared Euclidean norm of the noise sequence.
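
To put a number on the cutoff rate: a minimal sketch that uses Gallager's E_0 at ρ = 1 with a Gaussian input, R_0 = (1/2)log2(1 + SNR/2), as a stand-in for the cutoff rate; the paper's exact expression is not reproduced in the abstract, so treat this as illustrative only:

```python
import math

def gaussian_input_cutoff_rate(snr):
    """Gallager's E_0(rho = 1) for the AWGN channel with a Gaussian
    input, in bits: R_0 = (1/2) * log2(1 + snr / 2),
    where snr = P / sigma^2."""
    return 0.5 * math.log2(1.0 + snr / 2.0)

def listsize_capacity(snr, r_help):
    """Per the abstract: cutoff rate without help plus the rate of
    help. Built on the illustrative Gaussian-input cutoff rate."""
    return gaussian_input_cutoff_rate(snr) + r_help

snr = 10.0  # linear SNR
for r_help in (0.0, 0.25, 1.0):
    print(f"help rate {r_help:.2f} bits: "
          f"listsize capacity = {listsize_capacity(snr, r_help):.3f} bits")
```

The r_help = 0.0 line reflects the abstract's observation that even zero-rate help lifts the listsize capacity from zero to the cutoff rate.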

4.
Entropy (Basel); 22(3), 2020 Mar 11.
Article in English | MEDLINE | ID: mdl-33286090

ABSTRACT

Motivated by a horse betting problem, a new conditional Rényi divergence is introduced. It is compared with the conditional Rényi divergences that appear in the definitions of the dependence measures by Csiszár and Sibson, and the properties of all three are studied with emphasis on their behavior under data processing. In the same way that Csiszár's and Sibson's conditional divergences lead to the respective dependence measures, so does the new conditional divergence lead to the Lapidoth-Pfister mutual information. Moreover, the new conditional divergence is also related to the Arimoto-Rényi conditional entropy and to Arimoto's measure of dependence. In the second part of the paper, the horse betting problem is analyzed where, instead of Kelly's expected log-wealth criterion, a more general family of power-mean utility functions is considered. The key role in the analysis is played by the Rényi divergence; in the setting where the gambler has access to side information, that role is played by the new conditional Rényi divergence. The setting with side information also provides another operational meaning to the Lapidoth-Pfister mutual information. Finally, a universal strategy for independent and identically distributed races is presented that, without knowing the winning probabilities or the parameter of the utility function, asymptotically maximizes the gambler's utility function.
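
To ground the betting setup: with win probabilities p and odds o, a gambler who splits all of her wealth into fractions b attains the doubling rate sum_i p_i log2(b_i o_i), and Kelly's log-wealth criterion is maximized at b = p regardless of the odds. A minimal sketch of this classical baseline together with the (unconditional) Rényi divergence; the new conditional divergence is defined in the paper and is not reproduced here:

```python
import math

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(p || q) in bits, for alpha > 0, alpha != 1."""
    s = sum(pi**alpha * qi**(1.0 - alpha) for pi, qi in zip(p, q) if pi > 0)
    return math.log2(s) / (alpha - 1.0)

def doubling_rate(p, b, odds):
    """Expected log2-wealth growth per race: sum_i p_i * log2(b_i * o_i)."""
    return sum(pi * math.log2(bi * oi) for pi, bi, oi in zip(p, b, odds) if pi > 0)

p = [0.5, 0.3, 0.2]     # winning probabilities
odds = [2.0, 4.0, 8.0]  # o_i-for-1 payout on horse i
kelly = p               # log-optimal proportional betting: b = p
print("Kelly doubling rate:", doubling_rate(p, kelly, odds))
print("Uniform-bet rate:   ", doubling_rate(p, [1/3] * 3, odds))
print("D_2(p || uniform) = ", renyi_divergence(p, [1/3] * 3, 2.0), "bits")
```

The gap between the two doubling rates equals the Kullback-Leibler divergence D(p || b); per the abstract, the Rényi divergence takes over this role under the power-mean utilities.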

5.
Entropy (Basel); 21(8), 2019 Aug 08.
Article in English | MEDLINE | ID: mdl-33267491

ABSTRACT

Two families of dependence measures between random variables are introduced. They are based on the Rényi divergence of order α and the relative α-entropy, respectively, and both dependence measures reduce to Shannon's mutual information when their order α is one. The first measure shares many properties with the mutual information, including the data-processing inequality, and can be related to the optimal error exponents in composite hypothesis testing. The second measure does not satisfy the data-processing inequality, but appears naturally in the context of distributed task encoding.
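
A minimal sketch of the α → 1 behavior underlying both families, using the Rényi divergence between a joint pmf and the product of its marginals (the measures introduced in the paper involve a further minimization over product distributions, which is omitted here):

```python
import math

def renyi_div(p, q, alpha):
    """Rényi divergence D_alpha(p || q) in bits for pmfs given as dicts;
    at alpha = 1 it is the Kullback-Leibler divergence (the limit)."""
    if abs(alpha - 1.0) < 1e-12:
        return sum(pv * math.log2(pv / q[k]) for k, pv in p.items() if pv > 0)
    s = sum(pv**alpha * q[k]**(1.0 - alpha) for k, pv in p.items() if pv > 0)
    return math.log2(s) / (alpha - 1.0)

# A joint pmf of (X, Y) and the product of its marginals:
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
px = {0: 0.5, 1: 0.5}
py = {0: 0.5, 1: 0.5}
prod = {(x, y): px[x] * py[y] for x in px for y in py}

for a in (0.5, 0.9, 0.999, 1.0, 1.001, 1.5):
    print(f"alpha = {a}: D = {renyi_div(joint, prod, a):.4f} bits")
# At alpha = 1 the printed value is exactly Shannon's I(X;Y).
```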

6.
Entropy (Basel); 21(3), 2019 Mar 19.
Article in English | MEDLINE | ID: mdl-33267013

ABSTRACT

Two correlated sources emit a pair of sequences, each of which is observed by a different encoder. Each encoder produces a rate-limited description of the sequence it observes, and the two descriptions are presented to a guessing device that repeatedly produces sequence pairs until correct. The number of guesses until correct is random, and it is required that it have a moment (of some prespecified order) that tends to one as the length of the sequences tends to infinity. The description rate pairs that allow this are characterized in terms of the Rényi entropy and the Arimoto-Rényi conditional entropy of the joint law of the sources. This solves the guessing analog of the Slepian-Wolf distributed source-coding problem. The achievability is based on random binning, which is analyzed using a technique by Rosenthal.
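
The appearance of Rényi quantities mirrors Arikan's single-source guessing result: guessing sequences in decreasing order of probability, the ρ-th moment of the number of guesses grows exponentially at rate ρ·H_{1/(1+ρ)}(X). A minimal sketch of that baseline (the distributed, rate-limited setting solved in the paper is more involved):

```python
import itertools
import math

def renyi_entropy(pmf, alpha):
    """Rényi entropy H_alpha in bits (alpha > 0, alpha != 1)."""
    return math.log2(sum(p**alpha for p in pmf)) / (1.0 - alpha)

def guessing_moment_exponent(pmf, n, rho):
    """(1/n) * log2 E[G^rho] for an optimal guesser of X^n (X i.i.d. ~ pmf)
    that guesses the q^n sequences in decreasing order of probability."""
    probs = sorted(
        (math.prod(pmf[s] for s in seq)
         for seq in itertools.product(range(len(pmf)), repeat=n)),
        reverse=True)
    moment = sum(prob * (g + 1)**rho for g, prob in enumerate(probs))
    return math.log2(moment) / n

pmf, rho = [0.7, 0.2, 0.1], 1.0
limit = rho * renyi_entropy(pmf, 1.0 / (1.0 + rho))
for n in (4, 6, 8):
    print(f"n = {n}: exponent = {guessing_moment_exponent(pmf, n, rho):.4f}"
          f"  (asymptotic limit {limit:.4f})")
```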
