# Secret Key Agreement

One of the only methods of encrypting a message from Alice to Bob such that no third party (Eve) can possibly decrypt it is a one-time pad. This technique requires that Alice and Bob share a secret sequence of bits, \(S\), with which Alice encrypts by computing the exclusive-or of it with the plaintext, \(P\), to produce the ciphertext, \(C\): \(C = S \oplus P\). Bob can then decrypt by xoring again: \(P = S \oplus C\).
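As a concrete sketch, the xor encryption and decryption steps can be written in a few lines of stdlib Python (the helper name is illustrative):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Exclusive-or two equal-length byte strings."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"attack at dawn"
# The secret S: uniformly random, as long as the message, never reused.
secret = secrets.token_bytes(len(plaintext))

ciphertext = xor_bytes(secret, plaintext)  # C = S xor P
recovered = xor_bytes(secret, ciphertext)  # P = S xor C
assert recovered == plaintext
```

Because xor is its own inverse, applying the pad twice returns the plaintext; the security rests entirely on \(S\) being secret, uniform, and used only once.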

In order to pull this off, Alice and Bob need to construct \(S\) out of some sort of joint randomness, \(p(x, y, z)\), and public communication, \(V\), which is assumed to have perfect fidelity. The maximum rate at which \(S\) can be constructed is the *secret key agreement rate*.

## Background

Given \(N\) IID copies of a joint distribution governed by \(p(x, y, z)\), let \(X^N\) denote the random variables observed by Alice, \(Y^N\) denote the random variables observed by Bob, and \(Z^N\) denote the random variables observed by Eve. Furthermore, let \(S[X : Y || Z]\) be the maximum rate \(R\) such that, for any \(\epsilon > 0\), there exist an \(N > 0\), public communication \(V\), and functions \(f\) and \(g\) satisfying:

\[\begin{aligned}
S_X &= f(X^N, V) \\
S_Y &= g(Y^N, V) \\
\Pr[S_X = S_Y = S] &\geq 1 - \epsilon \\
\I{S : Z^N, V} &\leq \epsilon \\
\frac{1}{N} \H{S} &\geq R - \epsilon
\end{aligned}\]

Intuitively, this means there exists some procedure such that, for every \(N\) observations, Alice and Bob can publicly converse and then construct \(S\) bits which agree almost surely, and are almost surely independent of everything Eve has access to. \(S\) is then known as a *secret key*.

## Lower Bounds

### Lower Intrinsic Mutual Information

The first lower bound on the secret key agreement rate is known in `dit` as the `lower_intrinsic_mutual_information()`.

### Secrecy Capacity

Next is the secrecy capacity, which gives the secret key agreement rate when public communication is not allowed.

### Necessary Intrinsic Mutual Information

A tighter bound is given by the `necessary_intrinsic_mutual_information()` [GGunluK17]. This quantity is actually equal to the secret key agreement rate when communication is limited to being unidirectional.

## Upper Bounds

### Upper Intrinsic Mutual Information

The secret key agreement rate is trivially upper bounded by:

\[ S[X : Y || Z] \leq \min\left\{ \I{X : Y}, \I{X : Y | Z} \right\} \]

### Intrinsic Mutual Information

The `intrinsic_mutual_information()` [MW97] is defined as:

\[ \I{X : Y \downarrow Z} = \min_{p(\overline{z} | z)} \I{X : Y | \overline{Z}} \]

It is straightforward to see that \(p(\overline{z} | z)\) being a constant achieves \(\I{X : Y}\), and \(p(\overline{z} | z)\) being the identity achieves \(\I{X : Y | Z}\).
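These two extremes can be checked with a small stdlib computation on a toy distribution in which \(X = Y = Z\) is a uniform bit (the helper names are illustrative):

```python
from collections import defaultdict
from math import log2

# Toy joint distribution p(x, y, z) with X = Y = Z a uniform bit.
p_xyz = {(0, 0, 0): 0.5, (1, 1, 1): 0.5}

def apply_channel(p, channel):
    """Pass z through the channel p(zbar | z), giving p(x, y, zbar)."""
    out = defaultdict(float)
    for (x, y, z), pr in p.items():
        for zbar, q in channel[z].items():
            out[(x, y, zbar)] += pr * q
    return dict(out)

def cmi(p):
    """I[X : Y | Z] for a dict mapping (x, y, z) to probabilities."""
    xz, yz, zz = defaultdict(float), defaultdict(float), defaultdict(float)
    for (x, y, z), pr in p.items():
        xz[x, z] += pr
        yz[y, z] += pr
        zz[z] += pr
    return sum(pr * log2(pr * zz[z] / (xz[x, z] * yz[y, z]))
               for (x, y, z), pr in p.items() if pr > 0)

identity = {0: {0: 1.0}, 1: {1: 1.0}}  # zbar = z
constant = {0: {0: 1.0}, 1: {0: 1.0}}  # zbar carries no information

print(cmi(apply_channel(p_xyz, constant)))  # I[X:Y]   = 1.0
print(cmi(apply_channel(p_xyz, identity)))  # I[X:Y|Z] = 0.0
```

The constant channel erases \(Z\) entirely, leaving the unconditional mutual information, while the identity channel leaves \(Z\) untouched, giving the conditional mutual information; the minimization over all channels can only do better.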

### Reduced Intrinsic Mutual Information

This bound can be improved, producing the `reduced_intrinsic_mutual_information()` [RSW03]:

\[ \I{X : Y \downarrow\downarrow Z} = \min_{U} \I{X : Y \downarrow Z U} + \H{U} \]

This bound improves upon the intrinsic mutual information when a small amount of information, \(U\), can result in a larger decrease in the amount of information shared between \(X\) and \(Y\) given \(Z\) and \(U\).

## All Together Now

Taken together, we see the following structure: each of the lower bounds is at most \(S[X : Y || Z]\), while the upper bounds are ordered as

\[ S[X : Y || Z] \leq \I{X : Y \downarrow\downarrow Z} \leq \I{X : Y \downarrow Z} \leq \min\left\{ \I{X : Y}, \I{X : Y | Z} \right\} \]

## Generalizations

Most of the above bounds have straightforward multivariate generalizations. These are not necessarily bounds on the multiparty secret key agreement rate. For example, one could compute the `minimal_intrinsic_dual_total_correlation()`.

## Examples

Let us consider a few examples:

```
In [1]: from dit.multivariate.secret_key_agreement import *
In [2]: from dit.example_dists.intrinsic import intrinsic_1, intrinsic_2, intrinsic_3
```

First, we consider the distribution `intrinsic_1`:

```
In [3]: print(intrinsic_1)
Class: Distribution
Alphabet: ('0', '1', '2', '3') for all rvs
Base: linear
Outcome Class: str
Outcome Length: 3
RV Names: None
x p(x)
000 1/8
011 1/8
101 1/8
110 1/8
222 1/4
333 1/4
```

With upper bounds:

```
In [4]: upper_intrinsic_mutual_information(intrinsic_1, [[0], [1]], [2])
Out[4]: 0.5
```

We see that the trivial upper bound is 0.5, set by the conditional term. Without conditioning on \(Z\), \(X\) and \(Y\) share \(\I{X : Y} = 1.5\) bits. Given \(Z\), a conditional dependence is induced between \(X\) and \(Y\): \(Z\) knows that if she is a \(0\) then \(X\) and \(Y\) agree, and if she is a \(1\) they disagree, while a \(2\) or a \(3\) determines them completely. This results in \(\I{X : Y | Z} = 0.5\). In either case, however, \(X\) and \(Y\) can not agree upon a secret key: the eavesdropper knows their correlation in the \(23\) regime, while in the \(01\) regime they are actually independent.
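Both quantities can be verified directly with a short stdlib computation over the six outcomes of `intrinsic_1` (note that \(\I{X : Y}\) itself is 1.5 bits; the 0.5 bound comes from the conditional term):

```python
from collections import defaultdict
from math import log2

# intrinsic_1 as a dict mapping (x, y, z) to probability.
p = {(0, 0, 0): 1/8, (0, 1, 1): 1/8, (1, 0, 1): 1/8,
     (1, 1, 0): 1/8, (2, 2, 2): 1/4, (3, 3, 3): 1/4}

def marginal(p, axes):
    """Marginalize a joint distribution onto the given axes."""
    out = defaultdict(float)
    for outcome, pr in p.items():
        out[tuple(outcome[i] for i in axes)] += pr
    return out

def H(p):
    """Shannon entropy, in bits."""
    return -sum(pr * log2(pr) for pr in p.values() if pr > 0)

# I[X:Y] = H(X) + H(Y) - H(X,Y)
i_xy = H(marginal(p, [0])) + H(marginal(p, [1])) - H(marginal(p, [0, 1]))
# I[X:Y|Z] = H(X,Z) + H(Y,Z) - H(X,Y,Z) - H(Z)
i_xy_given_z = (H(marginal(p, [0, 2])) + H(marginal(p, [1, 2]))
                - H(marginal(p, [0, 1, 2])) - H(marginal(p, [2])))

print(i_xy)                     # 1.5
print(i_xy_given_z)             # 0.5
print(min(i_xy, i_xy_given_z))  # 0.5, the trivial upper bound
```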

The `intrinsic_mutual_information()`, however, can detect this:

```
In [5]: intrinsic_mutual_information(intrinsic_1, [[0], [1]], [2])
Out[5]: 0.0
```

Next, let’s consider the distribution `intrinsic_2`:

```
In [6]: print(intrinsic_2)
Class: Distribution
Alphabet: (('0', '1', '2', '3'), ('0', '1', '2', '3'), ('0', '1'))
Base: linear
Outcome Class: str
Outcome Length: 3
RV Names: None
x p(x)
000 1/8
011 1/8
101 1/8
110 1/8
220 1/4
331 1/4
```

In this case, \(Z\) can no longer distinguish between the case where \(X\) and \(Y\) can agree on a secret bit and the case where they can not, because she can not determine whether they are in the \(01\) regime or the \(23\) regime:

```
In [7]: intrinsic_mutual_information(intrinsic_2, [[0], [1]], [2])
Out[7]: 1.5
```

This seems to imply that \(X\) and \(Y\) can adopt a scheme such as: if they observe either a \(0\) or a \(1\), write down \(0\), and if they observe either a \(2\) or a \(3\), write down that symbol. This has a weakness, however: what if \(Z\) were able to distinguish the two regimes? Learning the regime costs her only \(1\) bit, but reduces the secrecy of \(X\) and \(Y\) to nil. Thus, the secret key agreement rate is actually only \(1\) bit:

```
In [8]: minimal_intrinsic_mutual_information(intrinsic_2, [[0], [1]], [2], bounds=(3,))
Out[8]: 1.0
```