In information theory, the data processing inequality (DPI) is a powerful concept. Informally, it tells us that processing data cannot increase the amount of contained information. In this two-part blog post, we will explore the DPI and its applications to function-space variational inference (FSVI).

In the first part of this blog post, we will provide intuitive explanations and present mathematical proofs of the DPI. Then, in the second part, we will explore the application of the data processing inequality to function-space variational inference and its relationship to variational inference in general.

The goal of this post is to look at the data processing inequality from different angles to better understand it. We will also consider the equality case (which is arguably the best way to understand inequalities).

$$\require{mathtools} \DeclareMathOperator{\opExpectation}{\mathbb{E}} \newcommand{\E}[2]{\opExpectation_{#1} \left [ #2 \right ]} \newcommand{\simpleE}[1]{\opExpectation_{#1}} \newcommand\MidSymbol[1][]{% \:#1\:} \newcommand{\given}{\MidSymbol[\vert]} \DeclareMathOperator{\opmus}{\mu^*} \newcommand{\IMof}[1]{\opmus[#1]} \DeclareMathOperator{\opInformationContent}{H} \newcommand{\ICof}[1]{\opInformationContent[#1]} \newcommand{\xICof}[1]{\opInformationContent(#1)} \DeclareMathOperator{\opEntropy}{H} \newcommand{\Hof}[1]{\opEntropy[#1]} \newcommand{\xHof}[1]{\opEntropy(#1)} \DeclareMathOperator{\opMI}{I} \newcommand{\MIof}[1]{\opMI[#1]} \DeclareMathOperator{\opTC}{TC} \newcommand{\TCof}[1]{\opTC[#1]} \newcommand{\CrossEntropy}[2]{\opEntropy(#1 \MidSymbol[\Vert] #2)} \DeclareMathOperator{\opKale}{D_\mathrm{KL}} \newcommand{\Kale}[2]{\opKale(#1 \MidSymbol[\Vert] #2)} \DeclareMathOperator{\opJSD}{D_\mathrm{JSD}} \newcommand{\JSD}[2]{\opJSD(#1 \MidSymbol[\Vert] #2)} \DeclareMathOperator{\opp}{p} \newcommand{\pof}[1]{\opp(#1)} \newcommand{\pcof}[2]{\opp_{#1}(#2)} \newcommand{\hpcof}[2]{\hat\opp_{#1}(#2)} \DeclareMathOperator{\opq}{q} \newcommand{\qof}[1]{\opq(#1)} \newcommand{\qcof}[2]{\opq_{#1}(#2)} \newcommand{\varHof}[2]{\opEntropy_{#1}[#2]} \newcommand{\xvarHof}[2]{\opEntropy_{#1}(#2)} \newcommand{\varMIof}[2]{\opMI_{#1}[#2]} \newcommand{\w}{\boldsymbol{\omega}} \newcommand{\W}{\boldsymbol{\Omega}} \DeclareMathOperator{\opf}{f} \newcommand{\fof}[1]{\opf(#1)} $$

Background: Information-Theoretic Notation

Details
This blog post uses the same background as the previous blog post, “Bayesian Appropriation: General Likelihood for Loss Functions”, so you can skip this section if you have already read it.

Information theory deals with the communication of information. In this blog post, we use information-theoretic notation to express various quantities related to probability distributions and their relationships. Here are some key concepts we will use:

The information content of an event \(x\) is denoted as \(\Hof{x}\) and is defined as \(-\log \pof{x}\). It represents the amount of information needed to describe the occurrence of \(x\) given an underlying probability distribution. In machine learning, this information content is often used as a minimization objective, represented as the negative log-likelihood or cross-entropy when averaged over a dataset.

The entropy \(\Hof{X}\) of a random variable \(X\) is the expectation of its information content:

\[ \Hof{X} \triangleq \E{\pof{x}}{\Hof{x}} = \E{\pof{x}}{-\log \pof{x}}. \]

The entropy measures the average amount of information needed to describe the random variable \(X\). It provides a measure of uncertainty or randomness associated with \(X\).
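As a tiny illustration (a minimal sketch in NumPy; the distribution is made up for the example), we can compute the information content and entropy of a discrete distribution directly from these definitions:

```python
import numpy as np

# A made-up discrete distribution over four outcomes.
p = np.array([0.5, 0.25, 0.125, 0.125])

# Information content of each outcome, -log2 p(x), in bits.
information_content = -np.log2(p)

# Entropy: the expected information content, H[X] = E_p[-log2 p(x)].
entropy = np.sum(p * information_content)

print(information_content)  # [1. 2. 3. 3.]
print(entropy)              # 1.75 bits
```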

We will also use the Kullback-Leibler divergence (🥬) \(\Kale{\pof{X}}{\qof{X}}\) and the cross-entropy \(\CrossEntropy{\pof{X}}{\qof{X}}\):

\[ \begin{align} \CrossEntropy{\pof{X}}{\qof{X}} & = \E{\pof{x}}{-\log \qof{x}}\\ \Kale{\pof{X}}{\qof{X}} & = \CrossEntropy{\pof{X}}{\qof{X}} - \Hof{X} \end{align} \]

The cross-entropy quantifies the average number of bits needed to encode samples drawn from the true distribution \(\pof{X}\) using a different distribution \(\qof{X}\). The Kullback-Leibler divergence is a measure of the difference between two probability distributions: it captures the additional bits needed to encode samples from \(\pof{X}\) when using \(\qof{X}\) instead of the true distribution \(\pof{X}\).
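Continuing the small sketch from above (the distributions are again made up), we can compute both quantities and check the identity \(\Kale{\pof{X}}{\qof{X}} = \CrossEntropy{\pof{X}}{\qof{X}} - \Hof{X}\) numerically:

```python
import numpy as np

p = np.array([0.5, 0.25, 0.125, 0.125])  # "true" distribution p(x)
q = np.array([0.25, 0.25, 0.25, 0.25])   # model distribution q(x)

entropy = -np.sum(p * np.log2(p))        # H[X] under p
cross_entropy = -np.sum(p * np.log2(q))  # H(p(X) || q(X))
kl = np.sum(p * np.log2(p / q))          # D_KL(p(X) || q(X))

print(cross_entropy)                            # 2.0 bits
print(kl)                                       # 0.25 bits
print(np.isclose(kl, cross_entropy - entropy))  # True
```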

Now that we have refreshed our notation, let’s delve into the data processing inequality.

Data Processing Inequality

The data processing inequality (DPI) is a fundamental inequality in information theory which states that the mutual information between two random variables cannot increase through processing. The original DPI is typically stated for a Markov chain of random variables \(X \rightarrow Y \rightarrow Z\) and relates the mutual information terms as follows:

\[ \MIof{X;Y} \ge \MIof{X;Z}. \]

We can view \(\rightarrow\) as a processing or transition step that maps \(X\) to \(Y\) and \(Y\) to \(Z\), where each mapping can be deterministic or stochastic. The inequality tells us that processing the random variable \(X\) to obtain \(Y\) and further processing \(Y\) to obtain \(Z\) cannot increase the mutual information: \(X\) and \(Z\) cannot share more information than \(X\) and \(Y\) do.
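As a quick numerical sanity check (a sketch with randomly drawn distributions and transition matrices, not part of the derivation), we can build a small discrete Markov chain \(X \rightarrow Y \rightarrow Z\) and verify that \(\MIof{X;Y} \ge \MIof{X;Z}\):

```python
import numpy as np

rng = np.random.default_rng(0)

def mutual_information(joint):
    """I(A;B) in bits, computed from a joint distribution table p(a, b)."""
    pa = joint.sum(axis=1, keepdims=True)
    pb = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return np.sum(joint[mask] * np.log2(joint[mask] / (pa @ pb)[mask]))

def random_transition(n_in, n_out):
    """A random stochastic matrix; each row is a conditional distribution."""
    m = rng.random((n_in, n_out))
    return m / m.sum(axis=1, keepdims=True)

p_x = rng.dirichlet(np.ones(4))        # p(x)
p_y_given_x = random_transition(4, 3)  # p(y | x)
p_z_given_y = random_transition(3, 2)  # p(z | y)

p_xy = p_x[:, None] * p_y_given_x      # p(x, y)
p_xz = p_xy @ p_z_given_y              # p(x, z) = sum_y p(x, y) p(z | y)

print(mutual_information(p_xy) >= mutual_information(p_xz))  # True
```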

Example 1

The following three scenarios illustrate the data processing inequality using different mappings.

Image Processing Pipeline

Consider an image processing pipeline with the following steps. Let:

  • \(X\) be the original image data;
  • \(Y\) be a compressed version of the image; and
  • \(Z\) be \(Y\) after adding blur and pixelation.

In this case, \(X\) has more mutual information with \(Y\) than with \(Z\). The compression reduces information, but the image is still recognizable. However, after the additional processing of blurring and pixelating, the mutual information between \(X\) and \(Z\) is further reduced. This gives an intuitive example of how additional processing on data reduces the mutual information with the original data. Each processing step results in some loss of information.

Supervised Learning

Consider a supervised learning pipeline with the following steps. Let

  • \(X\) be the input features;
  • \(Y\) be the intermediate representations learned by the model; and
  • \(Z\) be the model predictions.

Here, \(X \rightarrow Y \rightarrow Z\) forms a Markov chain. The data processing inequality tells us that the mutual information between the inputs \(X\) and predictions \(Z\) cannot exceed the mutual information between the inputs \(X\) and intermediate representations \(Y\):

\[\MIof{X; Y} \geq \MIof{X; Z}.\]

This makes intuitive sense—the intermediate representations \(Y\) are obtained by processing the raw inputs \(X\), so they cannot contain more information about \(X\) than \(X\) itself. The predictions \(Z\) are obtained by further processing \(Y\), so additional information may be lost, reducing the mutual information with the original inputs \(X\).

As a more concrete example, consider an image classification model. Let:

  • \(X\) be the input images;
  • \(Y\) be the activations of the convolutional layers; and
  • \(Z\) be predicted image labels.

The convolutional layers will extract features from the input images, but they cannot extract more information than is present in the original images. The predicted labels are obtained by further processing these convolutional features, so they may lose some fine-grained information about the original inputs.

Autoencoders

An autoencoder compresses the input \(X\) into a latent code \(Y\) and then tries to reconstruct the original input from the code, producing \(\hat{X}\). Let:

  • \(X\) be the input;
  • \(Y\) be the latent code; and
  • \(\hat{X}\) be the reconstruction.

The data processing inequality tells us again:

\[\MIof{X; Y} \geq \MIof{X; \hat{X}}.\]

The latent code \(Y\) is obtained by compressing \(X\), so it cannot contain more information about \(X\) than \(X\) itself. The reconstruction \(\hat{X}\) tries to recover \(X\) from \(Y\), but some information may be lost, reducing the mutual information with \(X\).

Intuitively, autoencoders try to preserve as much mutual information between inputs \(X\) and reconstructions \(\hat{X}\) as possible by learning latent representations \(Y\) that compress inputs without losing too much information. The data processing inequality quantifies this information bottleneck.

Proof of the Data Processing Inequality

The proof is simple and connects the DPI to another important inequality.

First, we note that the Markov chain implies the following factorization of the joint distribution: \[ \pof{x, y, z} = \pof{x} \pof{y \given x} \pof{z \given y}. \] Using this factorization, we can express the mutual information terms:

\[ \begin{align} \MIof{X;Y} &= \Hof{X} - \Hof{X \given Y} \\ &\ge \Hof{X} - \Hof{X \given Z} \\ &= \MIof{X;Z}. \end{align} \]

This relies on \(\Hof{X \given Y} \le \Hof{X \given Z}\). Why is this true?

We have the following chain of inequalities: \[\Hof{X \given Y} = \underbrace{\MIof{X ; Z \given Y}}_{\overset{(1)}{=}0} + \Hof{X \given Y, Z} \overset{(2)}{\le} \Hof{X \given Z}.\] (1) follows from the Markov chain property: when \(X \rightarrow Y \rightarrow Z\), \(X\) does not depend on \(Z\) at all when conditioned on \(Y\); and (2) follows from the fact that conditioning reduces entropy, i.e. \(\Hof{A \given B} \le \Hof{A}.\)

The equality gap \(\Hof{X \given Z} - \Hof{X \given Y, Z}\) corresponds to the mutual information \(\MIof{X ; Y \given Z}\). This mutual information measures the extra information about \(X\) contained in \(Y\) that is not already conveyed by \(Z\). It is zero if and only if \(X \rightarrow Z \rightarrow Y\) also forms a Markov chain, indicating that \(Z\) is a sufficient statistic of \(Y\) for \(X\).

Proof of (2) “Conditioning Reduces Entropy”: We can easily show that conditioning reduces entropy by using the non-negativity of the 🥬 divergence and, thus, of the mutual information: \[\begin{aligned} 0 &\le \Kale{\pof{X,Y}}{\pof{X}\pof{Y}} \\ &= \MIof{X;Y} \\ &= \Hof{X} - \Hof{X \given Y} \\ \implies \Hof{X \given Y} &\le \Hof{X}. \end{aligned} \]

The fact that conditioning reduces entropy, \(\Hof{X} \ge \Hof{X \given Y}\), is a very important property by itself and can be seen as another form of the data processing inequality. \(\Hof{X \given Y}\) tells us how much uncertainty about \(X\) is left after observing \(Y\). For example, if the mapping from \(X\) to \(Y\) is injective (“1:1”), then \(Y\) contains everything about \(X\) and \(\Hof{X \given Y} = 0\). In the worst case, if \(Y\) is independent of \(X\), we have \(\Hof{X \given Y} = \Hof{X}\). Intuitively, this says that observing the result of processing \(X\) can at best reduce our uncertainty about \(X\); it can never increase it.
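Here is a short numerical illustration of this property (a sketch with an arbitrary made-up joint distribution), checking \(\Hof{X \given Y} \le \Hof{X}\) directly from a joint table:

```python
import numpy as np

# An arbitrary joint distribution p(x, y) over 3 x 2 outcomes (sums to 1).
p_xy = np.array([[0.20, 0.10],
                 [0.05, 0.30],
                 [0.25, 0.10]])

p_x = p_xy.sum(axis=1)  # marginal p(x)
p_y = p_xy.sum(axis=0)  # marginal p(y)

h_x = -np.sum(p_x * np.log2(p_x))                  # H[X]
h_x_given_y = -np.sum(p_xy * np.log2(p_xy / p_y))  # H[X | Y] = E[-log2 p(x | y)]

print(h_x, h_x_given_y)    # H[X] and H[X | Y] in bits
print(h_x_given_y <= h_x)  # True
```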

Let’s move on and consider the 🥬 data processing inequality.

🥬 Data Processing Inequality

A similar DPI can be expressed for different distributions \(\pof{x}\) and \(\qof{x}\) and the KL divergence (🥬) between them. It states that if we evolve two distributions using the same transition function, they cannot become less similar. The KL divergence (🥬) is sometimes also referred to as “relative entropy”, so we could also call this the “relative data processing inequality”.

This can be formalized for distributions \(\pof{x}\) and \(\qof{x}\) and a stochastic transition function \(X \overset{\fof{y \given x}}{\longrightarrow} Y\): \[ \Kale{\pof{X}}{\qof{X}} \ge \Kale{\pof{Y}}{\qof{Y}}, \] where \(\pof{y \given x} = \fof{y \given x} = \qof{y \given x}\). The marginals after the transition are \(\pof{y} = \E{\pof{x}}{\fof{y \given x}}\) and \(\qof{y} = \E{\qof{x}}{\fof{y \given x}}\), so more explicitly: \[ \Kale{\pof{X}}{\qof{X}} \ge \Kale{\E{\pof{x}}{\fof{Y \given x}}}{\E{\qof{x}}{\fof{Y \given x}}}. \]
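Again, a quick numerical sanity check (a sketch with random distributions and a random shared channel): pushing \(\pof{x}\) and \(\qof{x}\) through the same stochastic map \(\fof{y \given x}\) and comparing the 🥬 divergences before and after.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(p, q):
    """D_KL(p || q) in bits for discrete distributions with full support."""
    return np.sum(p * np.log2(p / q))

p_x = rng.dirichlet(np.ones(5))  # p(x)
q_x = rng.dirichlet(np.ones(5))  # q(x)

f = rng.random((5, 3))
f /= f.sum(axis=1, keepdims=True)  # shared channel f(y | x); rows sum to 1

p_y = p_x @ f  # p(y) = E_{p(x)}[f(y | x)]
q_y = q_x @ f  # q(y) = E_{q(x)}[f(y | x)]

print(kl(p_x, q_x) >= kl(p_y, q_y))  # True
```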

Cover and Thomas describe this in their book Elements of Information Theory as “relative entropy never increases” and relate it to the second law of thermodynamics.

Example 2: Comparing Image Distributions

As another example, let:

  • \(\pof{x}\) be the true distribution of images in a dataset;
  • \(\qof{x}\) be a generative model that tries to mimic \(\pof{x}\); and
  • \(\fof{y \given x}\) be a function that thresholds images \(x\) into bilevel black and white images \(y\).

Then \(\pof{y}\) and \(\qof{y}\) will be more difficult to distinguish after the thresholding operation than \(\pof{x}\) and \(\qof{x}\). Converting to black and white images has lost information that could help distinguish the real and generated distributions.

This provides some intuition for why the 🥬 divergence between distributions decreases under a shared stochastic mapping, as formalized by the 🥬 data processing inequality. Processing through \(\fof{y \given x}\) makes the distributions harder to tell apart.

Counter-Example 3: Bayesian Inference

It might be tempting to think that this data processing inequality also applies to Bayesian inference (updating the model parameters based on new evidence). Then one could argue that if two agents start with different prior beliefs but update based on the same evidence, their posterior beliefs will become more similar. However, this intuition is flawed: the data processing inequality does not apply to Bayesian inference. Let’s walk through why.

Let:

  • \(\pof{\w}\) be an agent’s prior belief;
  • \(\qof{\w}\) be another agent’s different prior;
  • \(\pof{\w\given x}\) be the posterior after observing data \(x\); and
  • \(\qof{\w\given x}\) be the other agent’s posterior after observing the same data \(x\).

The priors \(\pof{\w}\) and \(\qof{\w}\) may have large divergence, representing very different initial beliefs. However, when conditioning on the same data \(x\), the KL divergence between \(\pof{\w \given x}\) and \(\qof{\w \given x}\) could increase or decrease—the data processing inequality does not give us any guarantee.

This is because \(\pof{\w}\) and \(\qof{\w}\) are not evolving under the same stochastic mapping. Rather, each prior is mapped to its respective posterior via Bayes’ rule, and this update depends on the prior itself: conditioning is not a fixed transition function applied identically to \(\opp\) and \(\opq\).

The correct intuition is that observing the same data \(x\) does not necessarily bring the posterior beliefs closer together—their divergence depends on the interplay between the specific priors and likelihoods. The data processing inequality does not directly apply to this Bayesian updating scenario. In general, for shared data \(\mathcal{D}\):

\[ \Kale{\qof{\W}}{\pof{\W}} \color{red}{\not\ge} \Kale{\qof{\W \given \mathcal{D}}}{\pof{\W \given \mathcal{D}}}. \]
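To make this concrete, here is a small numerical counterexample (a sketch with made-up priors and likelihood over a binary parameter \(\w\)): both agents update on the same observation with the same likelihood, yet the 🥬 divergence between their beliefs increases.

```python
import numpy as np

def kl(p, q):
    return np.sum(p * np.log2(p / q))

# Made-up priors over a binary parameter omega for the two agents.
p_prior = np.array([0.99, 0.01])
q_prior = np.array([0.999, 0.001])

# Shared likelihood of the observed data x under omega = 0 and omega = 1.
likelihood = np.array([0.001, 0.999])

# Bayes' rule for both agents (same data, same likelihood).
p_post = p_prior * likelihood
p_post /= p_post.sum()
q_post = q_prior * likelihood
q_post /= q_post.sum()

print(kl(q_prior, p_prior))  # ~0.01 bits
print(kl(q_post, p_post))    # ~0.80 bits: the divergence has increased
```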

This counterexample highlights the importance of precisely understanding the assumptions underlying conceptual principles like the DPI. While the DPI provides insight about information dynamics in many cases, it does not universally apply, as exemplified here by Bayesian updating under different priors.

Proofs of the 🥬 Data Processing Inequality

We will prove this inequality in two different ways. First, we will develop a “brute-force” proof, and then we will look at a more elegant proof that follows Cover and Thomas. Importantly, we will also consider the equality case in detail.

Brute-force Proof

If \(\opp\) is not absolutely continuous with respect to \(\opq\) (i.e., \(\opp\) has support where \(\opq\) does not), the inequality is trivially true because then \(\Kale{\pof{X}}{\qof{X}}=\infty\).

Thus, assume now that \(\opp\) has support within \(\opq\). Then, we can brute-force the result starting from the cross-entropy: \[\begin{aligned} \CrossEntropy{\pof{Y}}{\qof{Y}}&=\CrossEntropy{\pof{Y}}{\E{\qof{x}}{\pof{Y \given x}}}\\ &=\CrossEntropy{\pof{Y}}{\E{\qof{x}}{\frac{\pof{x \given Y}\pof{Y}}{\pof{x}}}}\\ &=\CrossEntropy{\pof{Y}}{\E{\pof{x \given Y}}{\frac{\qof{x}}{\pof{x}}}}+\CrossEntropy{\pof{Y}}{\pof{Y}}\\ &\overset{(1)}{=}\CrossEntropy{\pof{Y}}{\E{\pof{x \given Y}}{\frac{\qof{x}}{\pof{x}}}}+\xHof{\pof{Y}}\\ &\overset{(2)}{\le}\CrossEntropy{\pof{X, Y}}{\frac{\qof{X}}{\pof{X}}}+\xHof{\pof{Y}}\\ &\overset{(3)}{=}\CrossEntropy{\pof{X}}{\frac{\qof{X}}{\pof{X}}}+\xHof{\pof{Y}}\\ &\overset{(4)}{=}\Kale{\pof{X}}{\qof{X}}+\xHof{\pof{Y}}\\ \iff \Kale{\pof{Y}}{\qof{Y}}&\le\Kale{\pof{X}}{\qof{X}}, \end{aligned}\] where we have used (1) that the cross-entropy of a distribution with itself is just the entropy, (2) that \(-\log\) is convex, so the cross-entropy is convex in its second argument and we can apply Jensen’s inequality, (3) that the second argument of the cross-entropy does not depend on \(Y\), so we can trivially marginalize \(Y\) out, and (4) the definition of the Kullback-Leibler divergence as the (unnormalized) cross-entropy of a fraction.

This makes it difficult to extract the case for equality, however.

Equality Case

For (2), this is sadly slightly more complex than it might seem at first glance. Let’s unwrap the term: \[ \CrossEntropy{\pof{Y}}{\E{\pof{x \given Y}}{\frac{\qof{x}}{\pof{x}}}} = \E{\pof{y}}{-\log \E{\pof{x \given y}}{\frac{\qof{x}}{\pof{x}}}}. \] We take an outer expectation over \(\pof{y}\), so to characterize equality we have to consider the inner expectation over \(\pof{x \given y}\) separately for (almost) every \(y\) with \(\pof{y} \not= 0\). Since \(-\log x\) is strictly convex, Jensen’s inequality holds with equality exactly when the ratio \(\frac{\qof{x}}{\pof{x}}\) is constant almost everywhere on the support of \(\pof{x \given y}\) for any such fixed \(y\).

In the following, I will limit myself to the discrete case to avoid having to deal with measure theory1. To obtain equality, for all \(y\) with \(\pof{y} \not= 0\) (i.e. where we have support) and for all \(x_1, x_2\) with \(\pof{x_1 \given y}, \pof{x_2 \given y} \not= 0\), we need \(\frac{\qof{x_1}}{\pof{x_1}} = \frac{\qof{x_2}}{\pof{x_2}}\). Equivalently (exercise for the reader: why is \(\pof{x_1} \not= 0\) then?): \[ \begin{aligned} \frac{\qof{x_1}}{\pof{x_1}} &= \frac{\qof{x_2}}{\pof{x_2}} \\ \iff \qof{x_1} &= \frac{\qof{x_2}}{\pof{x_2}} \, \pof{x_1} \\ \end{aligned} \] This means that \(\qof{x} = C_y \pof{x}\) piecewise for all \(x\) for which \(\pof{x \given y} \not= 0\), for some fixed \(y\) with \(\pof{y} \not= 0\). That is, if we keep \(y\) fixed, all the \(x\) for which \(\pof{x \given y} \not= 0\) share the same constant factor \(C_y\). Then we have equality for every such \(y\) and thus overall equality in (2).

If for some \(x\) there are multiple \(y\), e.g. \(y_1\) and \(y_2\), for which \(\pof{x \given y} \not= 0\), then the constants must agree: \(C_{y_1} = C_{y_2}\).

In the simplest case, if this holds across all \(y\), there is a single shared constant \(C\), and if \(\opp\) and \(\opq\) have the same support, normalization forces \(C = 1\), i.e. \(\pof{x} = \qof{x}\).

Simple & Elegant Proof

Cover and Thomas provide a beautifully simple proof:
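In essence (sketching the standard argument from their book), the chain rule of the 🥬 divergence lets us expand the divergence between the joint distributions in two different ways: \[ \begin{aligned} \Kale{\pof{X, Y}}{\qof{X, Y}} &= \Kale{\pof{X}}{\qof{X}} + \underbrace{\Kale{\pof{Y \given X}}{\qof{Y \given X}}}_{=0\text{, since both equal } \fof{y \given x}} \\ &= \Kale{\pof{Y}}{\qof{Y}} + \Kale{\pof{X \given Y}}{\qof{X \given Y}}. \end{aligned} \] Since the conditional 🥬 divergence is non-negative, it follows that \[ \Kale{\pof{X}}{\qof{X}} \ge \Kale{\pof{Y}}{\qof{Y}}, \] with equality if and only if \(\Kale{\pof{X \given Y}}{\qof{X \given Y}} = 0\), that is, \(\pof{x \given y} = \qof{x \given y}\) almost everywhere.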

What does this mean? Whereas \(\fof{y \given x}\) is the ‘forward’ transition function, \(\pof{x \given y}\) and \(\qof{x \given y}\) are the ‘backward’ transition functions. We only have equality when the backward transition functions are equal (almost everywhere).

The statement on equality is not very informative yet though, so we have to put in a bit more work. Again, this is written for the discrete case.

This time we explicitly use Bayes’ rule to connect the forward and backward transition functions. First, we fix \(y\) such that \(\pof{y} \not= 0\) (i.e. \(y\) is in the support of \(\pof{y}\)); absolute continuity then also gives \(\qof{y} \not=0\). We have: \[ \begin{aligned} \pof{x \given y} &= \qof{x \given y} \\ \overset{\text{ass. $\pof{y} \not= 0$}}{\iff} \frac{\fof{y \given x}\pof{x}}{\pof{y}} &= \frac{\fof{y \given x}\qof{x}}{\qof{y}} \\ \overset{\text{ass. $\fof{y \given x}\not= 0$}}{\iff} \frac{\pof{x}}{\pof{y}} &= \frac{\qof{x}}{\qof{y}} \\ \iff \pof{x} &= \frac{\pof{y}}{\qof{y}} \, \qof{x}. \end{aligned} \] For a given \(y\) with \(\pof{y} \not=0\), we see that in the equality case, for all \(x\) with \(\fof{y \given x} \not= 0\), \(\pof{x}\) and \(\qof{x}\) are coupled via the piecewise constant factor \(\frac{\pof{y}}{\qof{y}}\).

As another example, if \(\fof{y \given x} \not=0\) for all possible \(x\) (i.e. the transition function has full support), the above holds for every \(x\), and summing over \(x\) forces \(\pof{y} = \qof{y}\); hence, in the equality case, \(\pof{x} = \qof{x}\).

Compared to the previous equality case, we went a bit deeper and rewrote the conditions in terms of the ratios \(\pof{x}/\qof{x}\) and \(\pof{y}/\qof{y}\). Note that we could have shown the same thing in the “brute-force” proof, too.

Altogether, we have seen that all the \(x\) and \(y\) within a connected block of the support are modulated by the same constant factor between \(\pof{\cdot}\) and \(\qof{\cdot}\). Essentially, this tells us that we can split the support into unconnected sub-domains and examine each individually for the equality case.

Overall Statement

We have the following overall statement: for \(\pof{x} \ll \qof{x}\) and a shared transition function \(\fof{y \given x}\), \[ \Kale{\pof{X}}{\qof{X}} \ge \Kale{\pof{Y}}{\qof{Y}}, \] with equality if and only if the backward transition functions agree, i.e. \(\pof{x \given y} = \qof{x \given y}\) almost everywhere.

(\(\pof{x} \ll \qof{x}\) denotes absolute continuity: \(\pof{x} > 0\) implies \(\qof{x} > 0\), so the 🥬 divergence is not \(\infty\).)

More precisely, for \(\pof{x} \ll \qof{x}\), we have equality exactly when: \[\forall y\, (\pof{y} \not= 0)\; \exists C_y \in \mathbb{R}_{> 0}\; \forall x\, (\fof{y \given x}\not=0)\colon\; \pof{x} = C_y \, \qof{x}.\]
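Here is a small numerical illustration of the equality case (a sketch; the block structure and numbers are made up): within each preimage of \(y\), \(\qof{x}\) is a constant multiple of \(\pof{x}\), and the two 🥬 divergences coincide.

```python
import numpy as np

def kl(p, q):
    return np.sum(p * np.log2(p / q))

# X in {0, 1, 2, 3}; a deterministic channel maps {0, 1} -> y=0 and {2, 3} -> y=1.
f = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0],
              [0.0, 1.0]])

# Within each block, q(x) = C_y * p(x): C_{y=0} = 0.5 and C_{y=1} = 4/3.
p_x = np.array([0.2, 0.2, 0.3, 0.3])
q_x = np.array([0.1, 0.1, 0.4, 0.4])

p_y = p_x @ f  # (0.4, 0.6)
q_y = q_x @ f  # (0.2, 0.8)

print(kl(p_x, q_x), kl(p_y, q_y))              # both ~0.151 bits
print(np.isclose(kl(p_x, q_x), kl(p_y, q_y)))  # True: the equality case of the DPI
```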

Revisiting Data Processing Inequalities

Now, we can use this to derive a few additional results and also to close the circle to the original data processing inequality.

Jensen-Shannon Divergence

The KL (🥬) divergence is not a metric. The triangle inequality does not hold, and it is not symmetric.

We can symmetrize it to obtain the Jensen-Shannon divergence (JSD). The JSD is defined as the mean of the two 🥬 divergences of the two distributions from their average. It essentially makes the 🥬 divergence symmetric: \[ \begin{aligned} \fof{x} &= \frac{\pof{x} + \qof{x}}{2}\\ \JSD{\pof{x}}{\qof{x}} &= \frac{1}{2} \Kale{\pof{x}}{\fof{x}} + \frac{1}{2} \Kale{\qof{x}}{\fof{x}}. \end{aligned} \] Similar approaches can be used to “symmetrize” other concepts; for example, for square matrices: \(\frac{1}{2} A + \frac{1}{2} A^T\) is symmetric by construction for any square matrix \(A\).

The square root of the Jensen-Shannon divergence is symmetric and satisfies the triangle inequality—it is a metric: the Jensen-Shannon distance.
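A quick sketch of these definitions in code (with randomly drawn distributions and base-2 logarithms, so values are in bits): computing the JSD from the two 🥬 divergences and spot-checking symmetry and the triangle inequality for its square root.

```python
import numpy as np

rng = np.random.default_rng(2)

def kl(p, q):
    return np.sum(p * np.log2(p / q))

def jsd(p, q):
    m = (p + q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p, q, r = [rng.dirichlet(np.ones(4)) for _ in range(3)]

print(np.isclose(jsd(p, q), jsd(q, p)))  # True: the JSD is symmetric
# Spot check of the triangle inequality for the Jensen-Shannon *distance*:
print(np.sqrt(jsd(p, r)) <= np.sqrt(jsd(p, q)) + np.sqrt(jsd(q, r)))  # True
```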

The JSD Data Processing Inequality

We can also obtain a data processing inequality for the Jensen-Shannon divergence and, since the square root is monotonic, for the Jensen-Shannon distance: for distributions \(\pof{x}\) and \(\qof{x}\) evolved under a shared transition function \(\fof{y \given x}\), \[ \JSD{\pof{X}}{\qof{X}} \ge \JSD{\pof{Y}}{\qof{Y}}, \] with equality if and only if \(\pof{x \given y} = \qof{x \given y}\) almost everywhere.

The proof uses the 🥬 data processing inequality: \[ \begin{aligned} \JSD{\pof{X}}{\qof{X}} &= \frac{1}{2} \Kale{\pof{X}}{\fof{X}} + \frac{1}{2} \Kale{\qof{X}}{\fof{X}}\\ &\ge \frac{1}{2} \Kale{\pof{Y}}{\fof{Y}} + \frac{1}{2} \Kale{\qof{Y}}{\fof{Y}}\\ &= \JSD{\pof{Y}}{\qof{Y}}. \end{aligned} \] We verify \(\fof{y} = \frac{\pof{y} + \qof{y}}{2}\) is the average of \(\pof{y}\) and \(\qof{y}\): \[ \begin{aligned} \fof{y} &= \E{\fof{x}}{\fof{y \given x}}\\ &= \E{\frac{\pof{x}+\qof{x}}{2}}{\fof{y \given x}}\\ &= \frac{1}{2} \E{\pof{x}}{\fof{y \given x}} + \frac{1}{2} \E{\qof{x}}{\fof{y \given x}}\\ &= \frac{1}{2} \pof{y} + \frac{1}{2} \qof{y}. \end{aligned} \] Finally, \(\pof{x}, \qof{x} \ll \fof{x}\), and the equality condition of the 🥬 data processing inequality gives us: \[ \begin{aligned} &\Kale{\pof{X \given Y}}{\fof{X \given Y}} = 0 &\\ &\Kale{\qof{X \given Y}}{\fof{X \given Y}} = 0 &\\ \implies &\pof{x \given y} = \fof{x \given y} \land \qof{x \given y} = \fof{x \given y}& \forall x,y \\ \implies &\pof{x \given y} = \qof{x \given y}& \forall x,y. \end{aligned} \]

Mutual Information

The JSD can also be expressed as a mutual information. For \[ \begin{aligned} Z &\sim \mathrm{Bernoulli}(\frac{1}{2}) = \fof{Z} \\ X \given Z = 0 &\sim \pof{x}\\ X \given Z = 1 &\sim \qof{x}, \end{aligned} \] we have: \[ \JSD{\pof{X}}{\qof{X}} = \MIof{X;Z}. \] This follows from rewriting the mutual information as a 🥬 divergence: \[ \begin{aligned} \MIof{X;Z} &= \Kale{\fof{X \given Z}}{\fof{X}}\\ &= \E{\fof{z}} {\Kale{\fof{X \given Z = z}}{\fof{X}}}\\ &= \frac{1}{2} \Kale{\pof{x}}{\fof{x}} + \frac{1}{2} \Kale{\qof{x}}{\fof{x}}\\ &= \JSD{\pof{X}}{\qof{X}}. \end{aligned} \]
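We can verify this identity numerically as well (a sketch with made-up \(\pof{x}\) and \(\qof{x}\)): building the joint distribution of \(X\) and the Bernoulli indicator \(Z\) and comparing \(\MIof{X;Z}\) with the JSD.

```python
import numpy as np

def kl(p, q):
    return np.sum(p * np.log2(p / q))

def jsd(p, q):
    m = (p + q) / 2
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.7, 0.2, 0.1])  # X | Z = 0 ~ p(x)
q = np.array([0.2, 0.3, 0.5])  # X | Z = 1 ~ q(x)

# Joint f(x, z) with Z ~ Bernoulli(1/2); columns correspond to z = 0 and z = 1.
joint = np.stack([0.5 * p, 0.5 * q], axis=1)
f_x = joint.sum(axis=1)  # marginal f(x) = (p(x) + q(x)) / 2
f_z = joint.sum(axis=0)  # marginal f(z) = (1/2, 1/2)

mi = np.sum(joint * np.log2(joint / (f_x[:, None] * f_z[None, :])))

print(np.isclose(mi, jsd(p, q)))  # True: I[X; Z] = JSD(p(X) || q(X))
```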

We can generalize this to the Markov chain \(Z \rightarrow X \rightarrow Y\) with \(\fof{z, x, y} = \fof{z} \fof{x \given z} \fof{y \given x}\) for any distribution \(\fof{z}\): \[ \begin{aligned} \MIof{X;Z} &= \Kale{\fof{X \given Z}}{\fof{X}}\\ &= \E{\fof{z}} {\Kale{\fof{X \given z}}{\fof{X}}}\\ &\overset{(1)}{\ge} \E{\fof{z}} {\Kale{\fof{Y \given z}}{\fof{Y}}}\\ &= \Kale{\fof{Y \given Z}}{\fof{Y}}\\ &= \MIof{Y;Z}, \end{aligned} \] where \((1)\) follows from the 🥬 data processing inequality.

This is just the data processing inequality we presented initially (up to renaming the variables).

The equality gap (Jensen gap) is \(\Kale{\fof{X \given Y, Z}}{\fof{X \given Y}}\). We have equality when: \[ \begin{aligned} \Kale{\fof{X \given Y, Z}}{\fof{X \given Y}} &= 0\\ \implies \MIof{X;Z \given Y} &= 0. \end{aligned} \] This is exactly when \(X\) is independent of \(Z\) given \(Y\). (Then \(Y\) is a sufficient statistic of \(X\) for \(Z\).)

Summary. We have shown that we can derive the data processing inequality using the 🥬 data processing inequality.

Conclusion

In this blog post, we explored the data processing inequality and its applications in information theory. Specifically, we discussed both the original data processing inequality and the 🥬 data processing inequality. We provided derivations and explanations for these inequalities, emphasizing their significance and limitations.

While the data processing inequality provides valuable insights into information dynamics in various scenarios, it is crucial to consider the specific assumptions and conditions under which it holds. The examples and counterexample hopefully demonstrate the nuances of applying these inequalities in different contexts.

By understanding the foundations and limitations of the data processing inequality, we can leverage its insights effectively in information theory and related fields.


Acknowledgements. Thanks to Linara Adilova for valuable feedback again.


  1. I currently don’t have a good ‘toolbox’ to express simple ideas cleanly in measure theory. I’m working on it.↩︎