Partial Information Decomposition

The partial information decomposition (PID), put forth by Williams & Beer [WB10], is a framework for decomposing the information shared between a set of variables we will refer to as inputs, \(X_0, X_1, \ldots\), and another random variable we will refer to as the output, \(Y\). This decomposition seeks to partition the information \(\I{X_0,X_1,\ldots : Y}\) among the antichains of the inputs.

Background

It is often desirable to determine how a set of inputs influence the behavior of an output. Consider the exclusive or logic gates, for example:

In [1]: In [1]: from dit.pid.distributions import bivariates, trivariates

We can see from inspection that either input (the first two indexes) is independent of the output (the final index), yet the two inputs together determine the output. One could call this “synergistic” information. Next, consider the giant bit distribution:

In [2]: In [4]: gb = bivariates['redundant']

In [3]: In [5]: print(gb)
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       None

x     p(x)
000   1/2
111   1/2

In [4]: Class:          Distribution

In [5]: Alphabet:       ('0', '1') for all rvs
   ...: Base:           linear
   ...: Outcome Class:  str
   ...: Outcome Length: 3
   ...: RV Names:       None
   ...: 
  File "<ipython-input-5-35057d4d19b8>", line 1
    Alphabet:       ('0', '1') for all rvs
                                 ^
SyntaxError: invalid syntax

Here, we see that either input informs us of exactly what the output is. One could call this “redundant” information. Furthermore, consider the coinformation of these distributions:

In [6]: In [6]: from dit.multivariate import coinformation as I

This could lead one to intuit that negative values of the coinformation correspond to synergistic effects in a distribution, while positive values correspond to redundant effects. This intuition, however, is at best misleading: the coinformation of a 4-variable giant bit and 4-variable parity distribution are both positive:

In [7]: In [9]: I(dit.example_dists.giant_bit(4, 2))
Out[7]: 1.0

In [8]: Out[9]: 1.0

In [9]: In [10]: I(dit.example_dists.n_mod_m(4, 2))
Out[9]: 1.0

In [10]: Out[10]: 1.0

This, as well as other issues, lead Williams & Beer [WB10] to propose the partial information decomposition.

Framework

The goal of the partial information is to assign to each some non-negative portion of \(\I{\{X_i\} : Y}\) to each antichain over the inputs. An antichain over the inputs is a set of sets, where each of those sets is not a subset of any of the others. For example, \(\left\{ \left\{X_0, X_1\right\}, \left\{X_1, X_2\right\} \right\}\) is an antichain, but \(\left\{ \left\{X_0, X_1\right\}, \left\{X_0 X_1, X_2\right\} \right\}\) is not.

The antichains for a lattice based on this partial order:

\[\alpha \leq \beta \iff \forall \mathbf{b} \in \beta, \exists \mathbf{a} \in \alpha, \mathbf{a} \subseteq \mathbf{b}\]

From here, we wish to find a redundancy measure, \(\Icap{\bullet}\) which would assign a fraction of \(\I{\{X_i\} : Y}\) to each antichain intuitively quantifying what portion of the information in the output could be learned by observing any of the sets of variables within the antichain. In order to be a viable measure of redundancy, there are several axioms a redundancy measure must satisfy.

Bivariate Lattice

Let us consider the special case of two inputs. The lattice consists of four elements: \(\left\{\left\{X_0\right\}, \left\{X_1\right\}\right\}\), \(\left\{\left\{X_0\right\}\right\}\), \(\left\{\left\{X_1\right\}\right\}\), and \(\left\{\left\{X_0, X_1\right\}\right\}\). We can interpret these elements as the redundancy provided by both inputs, the information uniquely provided by \(X_0\), the information uniquely provided by \(X_1\), and the information synergistically provided only by both inputs together. Together these for elements decompose the input-output mutual information:

\[\I{X_0, X_1 : Y} = \Ipart{\left\{X_0\right\}, \left\{X_1\right\} : Y} + \Ipart{\left\{X_0\right\} : Y} + \Ipart{\left\{X_1\right\} : Y} + \Ipart{\left\{X_0, X_1\right\} : Y}\]

Furthermore, due to the self-redundancy axiom (described ahead), the single-input mutual informations decomposed in the following way:

\[ \begin{align}\begin{aligned}\I{X_0 : Y} = \Ipart{\left\{X_0\right\}, \left\{X_1\right\} : Y} + \Ipart{\left\{X_0\right\} : Y}\\\I{X_1 : Y} = \Ipart{\left\{X_0\right\}, \left\{X_1\right\} : Y} + \Ipart{\left\{X_1\right\} : Y}\end{aligned}\end{align} \]

Colloquially, from input \(X_0\) one can learn what is redundantly provided by either input, plus what is uniquely provided by \(X_0\), but not what is uniquely provided by \(X_1\) or what can only be learned synergistically from both inputs.

Axioms

The following three axioms were provided by Williams & Beer.

Symmetry

The redundancy \(\Icap{X_{0:n} : Y}\) is invariant under reorderings of \(X_i\).

Self-Redundancy

The redundancy of a single input is its mutual information with the output:

\[\Icap{X_i : Y} = \I{X_i : Y}\]

Monotonicity

The redundancy should only decrease with in inclusion of more inputs:

\[\Icap{\mathcal{A}_1, \ldots, \mathcal{A}_{k-1}, \mathcal{A}_k : Y} \leq \Icap{\mathcal{A}_1, \ldots, \mathcal{A}_{k-1} : Y}\]

with equality if \(\mathcal{A}_{k-1} \subseteq \mathcal{A}_k\).

There have been other axioms proposed following from those of Williams & Beer.

Identity

The identity axiom [HSP13] states that if the output is identical to the inputs, then the redundancy is the mutual information between the inputs:

\[\Icap{X_0, X_1 : \left(X_0, X_1\right)} = \I{X_0 : X_1}\]

Target (output) Monotonicity

This axiom states that redundancy can not increase when replacing the output by a function of itself.

\[\Icap{X_{0:n} : Y} \ge \Icap{X_{0:n} : f(Y)}\]

It first appeared in [BROJ13] and was expanded upon in [RBO+17].

Measures

We now turn our attention a variety of methods proposed to flesh out this partial information decomposition.

In [11]: In [11]: from dit.pid import *

\(\Imin{\bullet}\)

\(\Imin{\bullet}\)[WB10] was Williams & Beer’s initial proposal for a redundancy measure. It is given by:

\[\Imin{\mathcal{A}_1, \mathcal{A}_2, \ldots : Y} = \sum_{y \in Y} p(y) \min_{\mathcal{A}_i} \I{\mathcal{A}_i : Y=y}\]

However, this measure has been criticized for acting in an unintuitive manner [GK14]:

In [12]: In [12]: d = dit.Distribution(['000', '011', '102', '113'], [1/4]*4)

In [13]: In [13]: PID_WB(d)
Out[13]: 
+--------+--------+--------+
| I_min  |  I_r   |   pi   |
+--------+--------+--------+
| {0:1}  | 2.0000 | 1.0000 |
|  {0}   | 1.0000 | 0.0000 |
|  {1}   | 1.0000 | 0.0000 |
| {0}{1} | 1.0000 | 1.0000 |
+--------+--------+--------+

In [14]: ╔════════╤════════╤════════╗
   ....:  I_min    I_r      pi   
   ....: ╟────────┼────────┼────────╢
   ....:  {0:1}   2.0000  1.0000 
   ....:   {0}    1.0000  0.0000 
   ....:   {1}    1.0000  0.0000 
   ....:  {0}{1}  1.0000  1.0000 
   ....: ╚════════╧════════╧════════╝
   ....: 
  File "<ipython-input-14-8a7d3ad228d0>", line 1
    ╔════════╤════════╤════════╗
                               ^
SyntaxError: invalid character in identifier

We have constructed a distribution whose inputs are independent random bits, and whose output is the concatenation of those inputs. Intuitively, the output should then be informed by one bit of unique information from \(X_0\) and one bit of unique information from \(X_1\). However, \(\Imin{\bullet}\) assesses that there is one bit of redundant information, and one bit of synergistic information. This is because \(\Imin{\bullet}\) quantifies redundancy as the least amount of information one can learn about an output given any single input. Here, however, the one bit we learn from \(X_0\) is, in a sense, orthogonal from the one bit we learn from \(X_1\). This observation has lead to much of the follow-on work.

\(\Immi{\bullet}\)

One potential measure of redundancy is the minimum mutual information [BROJ13]:

\[\Immi{X_{0:n} : Y} = \min_{i} \I{X_i : Y}\]

This measure, though crude, is known to be correct for multivariate gaussian variables [OBR15].

\(\Iwedge{\bullet}\)

Redundancy seems to intuitively be related to common information Common Informations. This intuition lead to the development of \(\Iwedge{\bullet}\) [GCJ+14]:

\[\Iwedge{X_{0:n} : Y} = \I{ \meet X_i : Y}\]

That is, redundancy is the information the Gács-Körner Common Information of the inputs shares with the output.

Warning

This measure can result in a negative PID.

\(\Iproj{\bullet}\)

Utilizing information geometry, Harder et al [HSP13] have developed a strictly bivariate measure of redundancy, \(\Iproj{\bullet}\):

\[\Iproj{\left\{X_0\right\}\left\{X_1\right\} : Y} = \min \{ I^\pi_Y[X_0 \mss X_1], I^\pi_Y[X_1 \mss X_0] \}\]

where

\[ \begin{align}\begin{aligned}I^\pi_Y[X_0 \mss X_1] = \sum_{x_0, y} p(x_0, y) \log \frac{p_{(x_0 \mss X_1)}(y)}{p(y)}\\p_{(x_0 \mss X_1)}(Y) = \pi_{C_{cl}(\langle X_1 \rangle_Y)}(p(Y | x_0)\\\pi_B(p) = \arg \min_{r \in B} \DKL{p || r}\\C_{cl}(\langle X_1 \rangle_Y) = C_{cl}(\left\{p(Y | x_1) : x_1 \in X_1 \right\})\end{aligned}\end{align} \]

where \(C_{cl}(\bullet)\) denotes closure. Intuitively, this measures seeks to quantify redundancy as the minimum of how much \(p(Y | X_0)\) can be expressed when \(X_0\) is projected on to \(X_1\), and vice versa.

\(\Ibroja{\bullet}\)

In a very intuitive effort, Bertschinger et al (henceforth BROJA) [BRO+14][GK14] defined unique information as the minimum conditional mutual informations obtainable while holding the input-output marginals fixed:

\[ \begin{align}\begin{aligned}\Delta = \{ Q : \forall i : p(x_i, y) = q(x_i, y) \}\\\Ibroja{X_{0:n} : Y} = \min_{Q \in \Delta} \I{X_i : Y | X_\overline{\{i\}}}\end{aligned}\end{align} \]

Note

In the bivariate case, Griffith independently suggested the same decomposition but from the viewpoint of synergy [GK14].

The BROJA measure has recently been criticized for behaving in an unintuitive manner on some examples. Consider the reduced or distribution:

In [15]: In [16]: bivariates['reduced or']
Out[15]: 
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       None

x     p(x)
000   1/2
011   1/4
101   1/4

In [16]: Out[16]:
   ....: Class:          Distribution
   ....: Alphabet:       ('0', '1') for all rvs
   ....: Base:           linear
   ....: Outcome Class:  str
   ....: Outcome Length: 3
   ....: RV Names:       None
   ....: 
  File "<ipython-input-16-79219cbf7045>", line 1
    Out[16]:
            ^
SyntaxError: invalid syntax

We see that in this instance BROJA assigns no partial information to either unique information. However, it is not difficult to argue that in the case that either input is a 1, that input then has unique information regarding the output.

In the BROJA paper [BRO+14] the only example given where their decomposition differs from that of Harder et al. is the dit.example_dists.summed_dice(). We can find a simpler example where they differ using hypothesis:

In [17]: In [17]: from hypothesis import find

\(\Iccs{\bullet}\)

Taking a pointwise point of view, Ince has proposed a measure of redundancy based on the coinformation [Inc17a]:

\[\Iccs{X_{0:n} : Y} = \sum p(x_0, \ldots, x_n, y) \I{x_0 : \ldots : x_n : y}~~\textrm{if}~~\operatorname{sign}(\I{x_i : y}) = \operatorname{sign}(\I{x_0 : \ldots : x_n : y})\]

While this measure behaves intuitively in many examples, it also assigns negative values to some partial information atoms in some instances.

This decomposition also displays an interesting phenomena, that of subadditive redundancy. The gband distribution is an independent mix of a giant bit (redundancy of 1 bit) and the and distribution (redundancy of 0.1038 bits), and yet gband has 0.8113 bits of redundancy:

In [18]: In [20]: PID_CCS(bivariates['gband'])
Out[18]: 
+--------+--------+--------+
| I_ccs  |  I_r   |   pi   |
+--------+--------+--------+
| {0:1}  | 1.8113 | 0.0000 |
|  {0}   | 1.3113 | 0.5000 |
|  {1}   | 1.3113 | 0.5000 |
| {0}{1} | 0.8113 | 0.8113 |
+--------+--------+--------+

In [19]: Out[20]:
   ....: ╔════════╤════════╤════════╗
   ....:  I_ccs    I_r      pi   
   ....: ╟────────┼────────┼────────╢
   ....:  {0:1}   1.8113  0.0000 
   ....:   {0}    1.3113  0.5000 
   ....:   {1}    1.3113  0.5000 
   ....:  {0}{1}  0.8113  0.8113 
   ....: ╚════════╧════════╧════════╝
   ....: 
  File "<ipython-input-19-34edadd4045f>", line 1
    Out[20]:
            ^
SyntaxError: invalid syntax

Warning

This measure can result in a negative PID.

\(\Idep{\bullet}\)

James et al [JEC17] have developed a method of quantifying unique information based on the Dependency Decomposition. Unique information from variable \(X_i\) is evaluated as the least change in sources-target mutual information when adding the constraint \(X_i Y\).

In [20]: In [21]: PID_dep(bivariates['not two'])
Out[20]: 
+--------+--------+--------+
| I_dep  |  I_r   |   pi   |
+--------+--------+--------+
| {0:1}  | 0.5710 | 0.5364 |
|  {0}   | 0.0200 | 0.0146 |
|  {1}   | 0.0200 | 0.0146 |
| {0}{1} | 0.0054 | 0.0054 |
+--------+--------+--------+

In [21]: Out[21]:
   ....: ╔════════╤════════╤════════╗
   ....:  I_dep    I_r      pi   
   ....: ╟────────┼────────┼────────╢
   ....:  {0:1}   0.5710  0.5364 
   ....:   {0}    0.0200  0.0146 
   ....:   {1}    0.0200  0.0146 
   ....:  {0}{1}  0.0054  0.0054 
   ....: ╚════════╧════════╧════════╝
   ....: 
  File "<ipython-input-21-f2b8143ceead>", line 1
    Out[21]:
            ^
SyntaxError: invalid syntax

\(\Ipm{\bullet}\)

Also taking a pointwise view, Finn & Lizier’s \(\Ipm{\bullet}\) [finn2017] instead splits the pointwise mutual information into two components:

\[i(s, t) = h(s) - h(s|t)\]

They then define two partial information lattices, one quantified locally by \(h(s)\) and the other by \(h(s|t)\). By averaging these local lattices and then recombining them, we arrive at a standard Williams & Beer redundancy lattice.

In [22]: In [22]: PID_PM(bivariates['pnt. unq'])
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-22-870c9adeb292> in <module>
----> 1 PID_PM(bivariates['pnt. unq'])

KeyError: 'pnt. unq'

In [23]: Out[22]:
   ....: ╔════════╤════════╤════════╗
   ....:  I_pm     I_r      pi   
   ....: ╟────────┼────────┼────────╢
   ....:  {0:1}   1.0000  0.0000 
   ....:   {0}    0.5000  0.5000 
   ....:   {1}    0.5000  0.5000 
   ....:  {0}{1}  0.0000  0.0000 
   ....: ╚════════╧════════╧════════╝
   ....: 
  File "<ipython-input-23-e2407bb260d8>", line 1
    Out[22]:
            ^
SyntaxError: invalid syntax

Warning

This measure can result in a negative PID.

\(\Irav{\bullet}\)

Taking a functional perspective as in \(\Iwedge\), \(\Irav\) defines bivariate redundancy as the maximum coinformation between the two sources \(X_0, X_1', a target :math:\), and a deterministic function of the inputs \(f(X_0,X_1)\).

\[\Irav{X_{0:2} : Y} = \max_f\left(\I{X_0\!:\!X_1\!:\!Y\!:\!f(X_0,X_1)}\]

This measure is designed to exploit the conflation of synergy and redundancy in the three variable coinformation: \(\I{X_0\!:\!X_1\!:\!Y} = R - S\).

In [24]: In [23]: PID_RAV(bivariates['pnt. unq'])
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-24-423f0d9acce6> in <module>
----> 1 PID_RAV(bivariates['pnt. unq'])

KeyError: 'pnt. unq'

In [25]: Out[23]:
   ....: ╔════════╤════════╤════════╗
   ....:  I_pm     I_r      pi   
   ....: ╟────────┼────────┼────────╢
   ....:  {0:1}   1.0000  0.0000 
   ....:   {0}    0.5000  0.5000 
   ....:   {1}    0.5000  0.5000 
   ....:  {0}{1}  0.0000  0.0000 
   ....: ╚════════╧════════╧════════╝
   ....: 
  File "<ipython-input-25-6501608668d7>", line 1
    Out[23]:
            ^
SyntaxError: invalid syntax

\(\Irr{\bullet}\)

In order to combine \(\Immi{\bullet}\) with the coinformation, Goodwell and Kumar [GK17] have introduced their rescaled redundancy:

\[ \begin{align}\begin{aligned}\Irr{X_0 : X_1} = R_{\text{min}} + I_{S} (\Immi{X_{0:2} : Y} - R_{\text{min}}\\R_{\text{min}} = \max\{ 0, \I{X_0 : X_1 : Y} \}\\I_{S} = \frac{\I{X_0 : X_1}}{\min\{ \H{X_0}, \H{X_1} \}}\end{aligned}\end{align} \]
In [26]: In [24]: PID_RR(bivariates['pnt. unq'])
---------------------------------------------------------------------------
KeyError                                  Traceback (most recent call last)
<ipython-input-26-ae8ee1a87167> in <module>
----> 1 PID_RR(bivariates['pnt. unq'])

KeyError: 'pnt. unq'

In [27]: Out[24]:
   ....: ╔════════╤════════╤════════╗
   ....:  I_rr     I_r      pi   
   ....: ╟────────┼────────┼────────╢
   ....:  {0:1}   1.0000  0.3333 
   ....:   {0}    0.5000  0.1667 
   ....:   {1}    0.5000  0.1667 
   ....:  {0}{1}  0.3333  0.3333 
   ....: ╚════════╧════════╧════════╝
   ....: 
  File "<ipython-input-27-348151318710>", line 1
    Out[24]:
            ^
SyntaxError: invalid syntax

\(\Ira{\bullet}\)

Drawing from the reconstructability analysis work of Zwick [Zwi04], we can define \(Ira{\bullet}\) as a restricted form of \(\Idep{\bullet}\).

Warning

This measure can result in a negative PID.

Secret Key Agreement Rates

One can associate Secret Key Agreement rates with unique informations by considering the rate at which one source and the target can agree upon a secret key while the other source eavesdrops. This results in four possibilities: - neither source nor target communicate - only the source communicates - only the target communicates - both the source and the target communicate

\[\Ipart{X_i \rightarrow Y \setminus X_j} = \operatorname{S}[X_i : Y || X_j]\]

Warning

This measure can result in an inconsistent PID.

Camel

\[\Ipart{X_i \rightarrow Y \setminus X_j} = \operatorname{S}[X_i \rightarrow Y || X_j]\]

Elephant

\[\Ipart{X_i \rightarrow Y \setminus X_j} = \operatorname{S}[X_i \leftarrow Y || X_j]\]

Warning

This measure can result in an inconsistent PID.

Two-Way Communication
\[\Ipart{X_i \rightarrow Y \setminus X_j} = \operatorname{S}[X_i \leftrightarrow Y || X_j]\]

Warning

This measure can result in an inconsistent PID.

Partial Entropy Decomposition

Ince [Inc17b] proposed applying the PID framework to decompose multivariate entropy (without considering information about a separate target variable). This partial entropy decomposition (PED), seeks to partition a mutlivariate entropy \(\H{X_0,X_1,\ldots}\) among the antichains of the variables. The PED perspective shows that bivariate mutual information is equal to the difference between redundant entropy and synergistic entropy.

\[\I{X_0 : X_1} = \Hpart{\left\{X_0\right\}, \left\{X_1\right\}} - \Hpart{\left\{X_0,X_1\right\}}\]

\(\Hcs{\bullet}\)

Taking a pointwise point of view, following \(\Iccs{\bullet}\), Ince has proposed a measure of redundant entropy based on the coinformation [Inc17b]:

\[\Hcs{X_{0:n}} = \sum p(x_0, \ldots, x_n) \I{x_0 : \ldots : x_n}~~\textrm{if}~~(\I{x_0 : \ldots : x_n} > 0)\]

While this measure behaves intuitively in many examples, it also assigns negative values to some partial entropy atoms in some instances. However, Ince [Inc17b] argues that concepts such as mechanistic information redundnacy (non-zero information redundancy between independent predictors, c.f. AND) necessitate negative partial entropy terms.

Like \(\Iccs{\bullet}\), \(\Hcs{\bullet}\) is also subadditive.

In [28]: In [25]: PED_CS(dit.Distribution(['00','01','10','11'],[0.25]*4))
Out[28]: 
+--------+--------+--------+
|  H_cs  |  H_r   |  H_d   |
+--------+--------+--------+
| {0:1}  | 2.0000 | 0.0000 |
|  {0}   | 1.0000 | 1.0000 |
|  {1}   | 1.0000 | 1.0000 |
| {0}{1} | 0.0000 | 0.0000 |
+--------+--------+--------+

In [29]: Out[25]:
   ....: ╔════════╤════════╤════════╗
   ....:   H_cs    H_r     H_d   
   ....: ╟────────┼────────┼────────╢
   ....:  {0:1}   2.0000  0.0000 
   ....:   {0}    1.0000  1.0000 
   ....:   {1}    1.0000  1.0000 
   ....:  {0}{1}  0.0000  0.0000 
   ....: ╚════════╧════════╧════════╝
   ....: 
  File "<ipython-input-29-79e04db5bd23>", line 1
    Out[25]:
            ^
SyntaxError: invalid syntax