1.
Entropy (Basel); 26(3), 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38539690

ABSTRACT

The celebrated Blahut-Arimoto algorithm computes the capacity of a discrete memoryless point-to-point channel by alternating maximization over the two arguments of its objective function. This algorithm has been applied to degraded broadcast channels, in which the supporting hyperplanes of the capacity region are again cast as maximization problems. In this work, we consider general broadcast channels and extend the algorithm to compute inner and outer bounds on their capacity regions. Our main contributions are as follows: first, we show that the optimization problems are max-min problems and that the exchange of minimum and maximum holds; second, we design Blahut-Arimoto algorithms for the maximization part and gradient descent algorithms for the minimization part; third, we provide a convergence analysis for both parts. Numerical experiments validate the effectiveness of our algorithms.
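The point-to-point case that this work extends is easy to sketch. Below is a minimal Blahut-Arimoto iteration for a discrete memoryless channel given by its transition matrix; the function name, tolerances, and stopping rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def blahut_arimoto(W, tol=1e-10, max_iter=10000):
    """Capacity (in bits) of a DMC with transition matrix W[x, y] = P(y | x)."""
    n_x = W.shape[0]
    p = np.full(n_x, 1.0 / n_x)          # start from the uniform input distribution

    def kl_rows(q):
        # D[x] = KL divergence of row W[x] from q, with 0 * log 0 treated as 0
        mask = W > 0
        return np.where(mask, W * np.log(np.where(mask, W, 1.0) / q), 0.0).sum(axis=1)

    for _ in range(max_iter):
        q = p @ W                        # induced output distribution
        D = kl_rows(q)
        p_new = p * np.exp(D)            # multiplicative (alternating-maximization) update
        p_new /= p_new.sum()
        converged = np.max(np.abs(p_new - p)) < tol
        p = p_new
        if converged:
            break
    D = kl_rows(p @ W)
    return np.dot(p, D) / np.log(2), p   # capacity in bits, capacity-achieving input law
```

For a binary symmetric channel with crossover probability 0.1, the iteration converges to the uniform input and capacity 1 − H₂(0.1) ≈ 0.531 bits.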

2.
Entropy (Basel); 21(5), 2019 May 01.
Article in English | MEDLINE | ID: mdl-33267170

ABSTRACT

Inspired by the pioneering work on the information bottleneck (IB) principle for the analysis of Deep Neural Networks (DNNs), we thoroughly study the relationship among the model accuracy, I(X;T), and I(T;Y), where I(X;T) and I(T;Y) denote the mutual information of the DNN's output T with the input X and the label Y, respectively. We then design an information-plane-based framework to evaluate the capability of DNNs (including CNNs) for image classification. Instead of each hidden layer's output, our framework focuses on the model output T. We successfully apply the framework to many scenarios arising in deep learning and image classification, such as classification with an unbalanced data distribution, model selection, and transfer learning. The experimental results verify the effectiveness of the information-plane-based framework: it can facilitate quick model selection and determine the number of samples needed for each class in the unbalanced classification problem. Furthermore, the framework explains the efficiency of transfer learning in deep learning.
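Computing the two coordinates of the information plane reduces, for a discrete (or discretized) output T, to estimating mutual information from paired samples. The plug-in estimator below is a minimal sketch of that step; the function name and the assumption that T is already discrete are illustrative, not the paper's implementation.

```python
import numpy as np
from collections import Counter

def mutual_information_bits(a, b):
    """Plug-in estimate of I(A; B) in bits from paired samples of discrete variables."""
    n = len(a)
    joint = Counter(zip(a, b))
    pa, pb = Counter(a), Counter(b)
    mi = 0.0
    for (x, y), c in joint.items():
        p_xy = c / n
        # p_xy * log2( p_xy / (p_x * p_y) ), with counts converted to frequencies
        mi += p_xy * np.log2(p_xy * n * n / (pa[x] * pb[y]))
    return mi
```

In the framework's terms, I(X;T) and I(T;Y) come from feeding (input, output) and (output, label) sample pairs to such an estimator; for continuous X or T one must first discretize, which is where most of the practical subtlety lies.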

3.
Entropy (Basel); 20(3), 2018 Mar 09.
Article in English | MEDLINE | ID: mdl-33265273

ABSTRACT

Let Z be a standard Gaussian random variable, let X be independent of Z, and let t be a strictly positive scalar. For the derivatives in t of the differential entropy of X + tZ, McKean observed that a Gaussian X attains the extremum of the first and second derivatives among distributions with a fixed variance, and he conjectured that the same holds for derivatives of every order. This conjecture implies that the signs of the derivatives alternate. Recently, Cheng and Geng proved that this alternation holds for the first four orders. In this work, we employ the technique of linear matrix inequalities to show that, first, Cheng and Geng's method may not generalize to higher orders and, second, when the probability density function of X + tZ is log-concave, McKean's conjecture holds for orders up to at least five. As a corollary, we also recover Toscani's result on the sign of the third derivative of the entropy power of X + tZ, using a much simpler argument.
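The first-order case behind McKean's observation can be made precise. A brief sketch, using standard facts not taken from the abstract, and the parametrization $Y_t = X + \sqrt{t}\,Z$ commonly used alongside this setup:

```latex
% de Bruijn's identity: entropy growth along Gaussian perturbation
\[
  \frac{\partial}{\partial t}\, h\bigl(X + \sqrt{t}\,Z\bigr)
    \;=\; \tfrac{1}{2}\, J\bigl(X + \sqrt{t}\,Z\bigr),
\]
% where J denotes the Fisher information. The Cramer--Rao inequality
% J(Y) >= 1 / Var(Y), with equality iff Y is Gaussian, then gives the
% first-derivative extremality of the Gaussian at fixed variance.
```

Higher orders lack such a clean closed form, which is why the alternating-sign question remains a conjecture beyond the orders treated here.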
