General Information

- Documentation:
  - http://docs.dit.io
- Downloads:
  - https://pypi.org/project/dit/
- Dependencies:
  - Python 2.7, 3.3, 3.4, 3.5, or 3.6
  - boltons
  - contextlib2
  - debtcollector
  - networkx
  - numpy
  - prettytable
  - scipy
  - six
- Optional dependencies:
  - colorama: colored column heads in PID indicating failure modes
  - cython: faster sampling from distributions
  - hypothesis: random sampling of distributions
  - matplotlib, python-ternary: plotting of various information-theoretic expansions
  - numdifftools: numerical evaluation of gradients and Hessians during optimization
  - pint: add units to informational values
  - scikit-learn: faster nearest-neighbor lookups during entropy/mutual information estimation from samples
- Mailing list:
  - None
- Code and bug tracker:
  - https://github.com/dit/dit
- License:
  - BSD 3-Clause; see LICENSE.txt for details.
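dit is available on PyPI (the Downloads link above); a standard pip invocation is all that is needed:

pip install dit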
Quickstart
Basic usage of dit consists of creating distributions, modifying them if need be, and then computing properties of those distributions. First, we import:
In [1]: import dit
Suppose we have a really thick coin, one so thick that there is a reasonable chance of it landing on its edge. Here is how we might represent the coin in dit.
In [2]: d = dit.Distribution(['H', 'T', 'E'], [.4, .4, .2])
In [3]: print(d)
Class:          Distribution
Alphabet:       ('E', 'H', 'T') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 1
RV Names:       None

x   p(x)
E   1/5
H   2/5
T   2/5
Calculate the probability of \(H\), and also of the compound event \(H~\mathbf{or}~T\).
In [4]: d['H']
Out[4]: 0.4
In [5]: d.event_probability(['H', 'T'])
Out[5]: 0.8
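Since the outcomes are mutually exclusive, the event probability is just the sum of the atomic probabilities: \(P(H~\mathbf{or}~T) = 0.4 + 0.4 = 0.8\).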
Calculate the Shannon entropy and extropy of the distribution.
In [6]: dit.shannon.entropy(d)
Out[6]: 1.5219280948873621
In [7]: dit.other.extropy(d)
Out[7]: 1.1419011889093373
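Both values can be checked by hand from the definitions (entropy in bits; extropy is the dual quantity built from complementary probabilities):

\[\H[X] = -\sum_x p(x) \log_2 p(x) = -2\,(0.4 \log_2 0.4) - 0.2 \log_2 0.2 \approx 1.5219\]

\[\J[X] = -\sum_x \left(1 - p(x)\right) \log_2 \left(1 - p(x)\right) = -2\,(0.6 \log_2 0.6) - 0.8 \log_2 0.8 \approx 1.1419\]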
Create a distribution representing the \(\mathbf{xor}\) logic function. Here, we have two inputs, \(X\) and \(Y\), and then an output \(Z = \mathbf{xor}(X,Y)\).
In [8]: import dit.example_dists
In [9]: d = dit.example_dists.Xor()
In [10]: d.set_rv_names(['X', 'Y', 'Z'])
In [11]: print(d)
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 3
RV Names:       ('X', 'Y', 'Z')

x     p(x)
000   1/4
011   1/4
101   1/4
110   1/4
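Each outcome string is the concatenation \(xyz\) with \(z = x \oplus y\), so only the four input-output combinations consistent with \(\mathbf{xor}\) receive nonzero probability.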
Calculate the Shannon mutual informations \(\I[X:Z]\), \(\I[Y:Z]\), and \(\I[X,Y:Z]\).
In [12]: dit.shannon.mutual_information(d, ['X'], ['Z'])
Out[12]: 0.0
In [13]: dit.shannon.mutual_information(d, ['Y'], ['Z'])
Out[13]: 0.0
In [14]: dit.shannon.mutual_information(d, ['X', 'Y'], ['Z'])
Out[14]: 1.0
Neither input alone carries any information about the output, yet the two inputs together determine it completely; \(\mathbf{xor}\) is the canonical example of this kind of synergy.
Calculate the marginal distribution \(P(X,Z)\). Then print its probabilities as fractions, showing the mask (the * marks the position of the marginalized variable \(Y\)).
In [15]: d2 = d.marginal(['X', 'Z'])
In [16]: print(d2.to_string(show_mask=True, exact=True))
Class:          Distribution
Alphabet:       ('0', '1') for all rvs
Base:           linear
Outcome Class:  str
Outcome Length: 2 (mask: 3)
RV Names:       ('X', 'Z')

x     p(x)
0*0   1/4
0*1   1/4
1*0   1/4
1*1   1/4
Convert the distribution probabilities to log (base 3.5) probabilities, and access its probability mass function.
In [17]: d2.set_base(3.5)
In [18]: d2.pmf
Out[18]: array([-1.10658951, -1.10658951, -1.10658951, -1.10658951])
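As a sanity check: each probability is \(1/4\), and \(\log_{3.5}(1/4) = -\ln 4 / \ln 3.5 \approx -1.1066\), matching each entry of the array.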
Draw 5 random samples from this distribution.
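A minimal way to do this is with dit.math.sample; the draws are random, so the exact outcomes will vary from run to run:

In [19]: dit.math.sample(d2, 5)

The result is a list of five outcome strings (values such as '00' or '11') drawn according to \(P(X,Z)\).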
Enjoy!