Formulas (1948-1994) 🧮

1948

$$
H(S) \leqslant m(S) \leqslant H(S) + 1
$$

The mathematician Claude Shannon introduced entropy into information theory in 1948. The inequality above is his source-coding bound: the expected length m(S) of an optimal binary code for a source S lies between the source's entropy H(S) and H(S) + 1. Entropy can be defined as the expected number of bits of information contained in an event. For instance, tossing a fair coin has an entropy of 1 bit, because heads and tails each have probability 0.5 and identifying the outcome takes exactly one yes-or-no question: "is it heads?". The higher the entropy, the more information is required to represent an event, so entropy increases as uncertainty increases. For example, the outcome of crossing a street takes less information to represent, store, or communicate than the state of a poker game.

Calculating how much information is in a random variable $X=\{x_{1}, x_{2}, \ldots, x_{n}\}$ is the same as calculating the information of the probability distribution over the events of that variable. In that sense, entropy is the average number of bits of information required to represent an event drawn from the distribution. The entropy of a random variable X can be computed with the equation below:

$$
\text{Entropy, } \ H(X)=-\sum_{i=1}^{n} P\left(x_{i}\right) \log_{2}\left(P\left(x_{i} \right) \right)
$$
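As a quick sanity check on the coin example above, here is a minimal Python sketch (the function name `entropy` and the example distributions are illustrative, not from the original text) that evaluates this formula for a fair coin and a biased coin:

```python
import math

def entropy(probabilities):
    """Shannon entropy, in bits, of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin carries exactly 1 bit of information per toss.
print(entropy([0.5, 0.5]))   # 1.0

# A biased coin is more predictable, so it carries less information.
print(entropy([0.9, 0.1]))   # ~0.469
```

The biased coin illustrates the link between entropy and uncertainty: as the outcome becomes more predictable, the entropy falls below 1 bit.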

1971

$$
P \stackrel{?}{=} N P
$$

In 1971, Stephen Cook introduced the precise statement of the P versus NP problem in his article "The complexity of theorem-proving procedures". It is widely regarded today as the most important open problem in computer science.

1976

$$
x_{n+1}=r x_{n}\left(1-x_{n}\right)
$$

The logistic map – also known as the “logistic difference equation” – was made famous by Robert May in 1976 when he used it to model the behaviour of each generation of a biological species, and found to his amazement that from this simplest of mathematical models, complex dynamics can emerge.

The logistic map is a discrete, recursive function: the output of one iteration becomes the input of the next. This makes it a simple mathematical way of examining the effects of feedback on population growth.
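A few lines of Python are enough to see this behaviour; the sketch below (the parameter values and starting point are chosen purely for illustration) iterates $x_{n+1} = r x_{n}(1 - x_{n})$ for a stable value of r and for a chaotic one:

```python
def logistic_orbit(r, x0, steps):
    """Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n) for `steps` generations."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# r = 2.5: the population settles onto a fixed point near 0.6.
print(logistic_orbit(2.5, 0.2, 50)[-3:])

# r = 3.9: the same equation produces erratic, chaotic values.
print(logistic_orbit(3.9, 0.2, 50)[-3:])
```

The contrast between the two runs is exactly what surprised May: a single-parameter quadratic recurrence can move from a stable equilibrium to chaos.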

1994

$$
x^{n}+y^{n}=z^{n}, \quad n>2 \;\Longrightarrow\; x y z=0
$$

In 1994, Andrew Wiles cracked Fermat's Last Theorem, which was put forth by the 17th-century mathematician Pierre de Fermat.

In number theory, Fermat’s Last Theorem (sometimes called Fermat’s conjecture, especially in older texts) states that no three positive integers x, y, and z satisfy the equation $x^{n} + y^{n} = z^{n}$ for any integer value of n greater than 2. The cases n = 1 and n = 2 have been known since antiquity to have infinitely many solutions.
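The statement is easy to probe numerically, even though no finite search can prove it. As a purely illustrative sketch (the bound of 50 is arbitrary, and the helper `solutions` is hypothetical), the brute-force loop below finds plenty of solutions for n = 2 and none for n = 3:

```python
def solutions(n, limit):
    """Positive integer triples (x, y, z) with x^n + y^n = z^n and x, y, z <= limit."""
    powers = {z**n: z for z in range(1, limit + 1)}  # map each n-th power back to its base
    found = []
    for x in range(1, limit + 1):
        for y in range(x, limit + 1):
            z = powers.get(x**n + y**n)
            if z is not None:
                found.append((x, y, z))
    return found

print(solutions(2, 50))  # Pythagorean triples: (3, 4, 5), (5, 12, 13), ...
print(solutions(3, 50))  # [] within this range, as the theorem guarantees for every range
```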

The proposition was first stated as a theorem by Pierre de Fermat around 1637 in the margin of a copy of Arithmetica; Fermat added that he had a proof too large to fit in the margin. Although other statements Fermat claimed without proof were subsequently proven by others and credited to him as theorems (for instance, Fermat's theorem on sums of two squares), Fermat's Last Theorem resisted proof, leading to doubt that Fermat ever had a correct one and to the statement becoming known as a conjecture rather than a theorem. After 358 years of effort by mathematicians, the first successful proof was released in 1994 by Andrew Wiles and formally published in 1995; it was described as a "stunning advance" in the citation for Wiles's Abel Prize in 2016. The proof also established much of the modularity theorem, opened up entirely new approaches to numerous other problems, and introduced the mathematically powerful technique of modularity lifting.