Cross entropy
In information theory, the cross entropy between two probability distributions $p$ and $q$ over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set, if a coding scheme is used that is optimized for an "unnatural" probability distribution $q$, rather than the "true" distribution $p$.
The cross entropy for the distributions $p$ and $q$ over a given set is defined as follows:

$$H(p, q) = H(p) + D_{\mathrm{KL}}(p \| q),$$

where $H(p)$ is the entropy of $p$, and $D_{\mathrm{KL}}(p \| q)$ is the Kullback–Leibler divergence of $q$ from $p$ (also known as the relative entropy of $p$ with respect to $q$; note the reversal of emphasis).

For discrete $p$ and $q$ this means

$$H(p, q) = -\sum_{x} p(x) \log q(x).$$
The situation for continuous distributions is analogous (with densities $p(x)$ and $q(x)$):

$$H(p, q) = -\int_{X} p(x) \log q(x) \, dx.$$
NB: The notation $H(p, q)$ is also used for a different concept, the joint entropy of $p$ and $q$.
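As a concrete illustration, the following minimal Python sketch (the helper functions are invented here, not taken from any particular library) computes the discrete cross entropy directly and via the decomposition $H(p, q) = H(p) + D_{\mathrm{KL}}(p \| q)$:

```python
import math

def entropy(p):
    """Entropy H(p) in bits of a discrete distribution (list of probabilities)."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cross_entropy(p, q):
    """Cross entropy H(p, q) = -sum_x p(x) log2 q(x) in bits."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]   # "true" distribution
q = [0.25, 0.5, 0.25]   # "unnatural" coding distribution

# The direct sum and the decomposition H(p) + D_KL(p || q) agree:
print(cross_entropy(p, q))               # 1.75
print(entropy(p) + kl_divergence(p, q))  # 1.75
```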
Motivation
In information theory, the Kraft–McMillan theorem establishes that any directly decodable coding scheme for coding a message to identify one value $x_i$ out of a set of possibilities $\{x_1, \ldots, x_n\}$ can be seen as representing an implicit probability distribution $q(x_i) = 2^{-\ell_i}$ over $\{x_1, \ldots, x_n\}$, where $\ell_i$ is the length of the code for $x_i$ in bits. Therefore, cross entropy can be interpreted as the expected message-length per datum when a wrong distribution $q$ is assumed while the data actually follows a distribution $p$. That is why the expectation is taken over the true probability distribution $p$ and not $q$.
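For instance (a sketch with a made-up three-symbol alphabet and code lengths), the expected message length per symbol under the true distribution $p$, using a code whose implicit distribution is $q$, equals the cross entropy $H(p, q)$:

```python
import math

p = [0.5, 0.25, 0.25]      # true symbol frequencies
code_lengths = [2, 1, 2]   # code lengths in bits, optimal for a different distribution

# Implicit distribution represented by the code (Kraft-McMillan): q(x_i) = 2^(-l_i)
q = [2 ** -l for l in code_lengths]   # [0.25, 0.5, 0.25]

# Expected message length per symbol under the true distribution p ...
expected_length = sum(pi * l for pi, l in zip(p, code_lengths))

# ... equals the cross entropy H(p, q)
cross_entropy = -sum(pi * math.log2(qi) for pi, qi in zip(p, q))

print(expected_length, cross_entropy)   # 1.75 1.75
```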
Estimation
There are many situations where cross-entropy needs to be measured but the true distribution $p$ is unknown. An example is language modeling, where a model is created based on a training set $T$, and then its cross-entropy is measured on a test set to assess how accurate the model is in predicting the test data. In this example, $p$ is the true distribution of words in any corpus, and $q$ is the distribution of words as predicted by the model. Since the true distribution is unknown, cross-entropy cannot be directly calculated. In these cases, an estimate of cross-entropy is calculated using the following formula:

$$H(T, q) = -\frac{1}{N} \sum_{i=1}^{N} \log_2 q(x_i),$$

where $N$ is the size of the test set, and $q(x)$ is the probability of event $x$ estimated from the training set. The sum is calculated over the $N$ events of the test set. This is a Monte Carlo estimate of the true cross entropy, where the test set is treated as samples from $p(x)$.
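A minimal sketch of this estimator in Python (the model probabilities and test tokens below are invented for illustration):

```python
import math

# Hypothetical model probabilities q(x), estimated from a training set
q = {"the": 0.4, "cat": 0.3, "sat": 0.2, "mat": 0.1}

# Hypothetical test set, treated as samples from the unknown true distribution p
test_set = ["the", "cat", "sat", "the", "mat"]

# Monte Carlo estimate: H(T, q) = -(1/N) * sum_i log2 q(x_i)
N = len(test_set)
cross_entropy_estimate = -sum(math.log2(q[x]) for x in test_set) / N

print(f"{cross_entropy_estimate:.3f} bits per word")
```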
Cross-entropy minimization
Cross-entropy minimization is frequently used in optimization and rare-event probability estimation; see the cross-entropy method.
When comparing a distribution $q$ against a fixed reference distribution $p$, cross entropy and KL divergence are identical up to an additive constant (since $p$ is fixed): both take on their minimal values when $p = q$, which is $0$ for KL divergence and $H(p)$ for cross entropy. In the engineering literature, the principle of minimising KL divergence (Kullback's "Principle of Minimum Discrimination Information") is often called the Principle of Minimum Cross-Entropy (MCE), or Minxent.
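As a quick numerical check (a sketch with an arbitrarily chosen fixed $p$ and a one-parameter family of candidate distributions $q$), both objectives are minimised at $q = p$ and differ only by the constant $H(p)$:

```python
import math

def cross_entropy(p, q):
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.7, 0.3]              # fixed reference distribution
h_p = cross_entropy(p, p)   # entropy H(p), the additive constant

# Sweep q over two-outcome distributions and track both objectives
for q1 in [0.5, 0.6, 0.7, 0.8]:
    q = [q1, 1 - q1]
    h_pq = cross_entropy(p, q)
    kl = h_pq - h_p         # D_KL(p || q) = H(p, q) - H(p)
    print(f"q1={q1}: H(p,q)={h_pq:.4f}  D_KL={kl:.4f}")

# Both are minimal at q = p (q1 = 0.7), where D_KL = 0 and H(p,q) = H(p).
```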
However, as discussed in the article Kullback–Leibler divergence, sometimes the distribution $q$ is the fixed prior reference distribution, and the distribution $p$ is optimised to be as close to $q$ as possible, subject to some constraint. In this case the two minimisations are not equivalent. This has led to some ambiguity in the literature, with some authors attempting to resolve the inconsistency by redefining cross-entropy to be $D_{\mathrm{KL}}(p \| q)$, rather than $H(p, q)$.
Cross-entropy error function and logistic regression
Cross entropy can be used to define the loss function in machine learning and optimization. The true probability $p_i$ is the true label, and the given distribution $q_i$ is the predicted value of the current model.
More specifically, let us consider logistic regression, which (in its most basic form) deals with classifying a given set of data points into two possible classes generically labelled $0$ and $1$. The logistic regression model thus predicts an output $y \in \{0, 1\}$, given an input vector $\mathbf{x}$. The probability is modeled using the logistic function $g(z) = 1/(1 + e^{-z})$. Namely, the probability of finding the output $y = 1$ is given by

$$q_{y=1} = \hat{y} \equiv g(\mathbf{w} \cdot \mathbf{x}) = \frac{1}{1 + e^{-\mathbf{w} \cdot \mathbf{x}}},$$

where the vector of weights $\mathbf{w}$ is learned through some appropriate algorithm such as gradient descent. Similarly, the complementary probability of finding the output $y = 0$ is simply given by

$$q_{y=0} = 1 - \hat{y}.$$

The true (observed) probabilities can be expressed similarly as $p_{y=1} = y$ and $p_{y=0} = 1 - y$.
Having set up our notation, $p \in \{y, 1 - y\}$ and $q \in \{\hat{y}, 1 - \hat{y}\}$, we can use cross entropy to get a measure of dissimilarity between $p$ and $q$:

$$H(p, q) = -\sum_i p_i \log q_i = -y \log \hat{y} - (1 - y) \log(1 - \hat{y}).$$
The typical loss function that one uses in logistic regression is computed by taking the average of all cross-entropies in the sample. For example, suppose we have $N$ samples with each sample labeled by $n = 1, \ldots, N$. The loss function is then given by:

$$J(\mathbf{w}) = \frac{1}{N} \sum_{n=1}^{N} H(p_n, q_n) = -\frac{1}{N} \sum_{n=1}^{N} \left[ y_n \log \hat{y}_n + (1 - y_n) \log(1 - \hat{y}_n) \right],$$

where $\hat{y}_n \equiv g(\mathbf{w} \cdot \mathbf{x}_n) = 1/(1 + e^{-\mathbf{w} \cdot \mathbf{x}_n})$, with $g(z)$ the logistic function as before.
The logistic loss is sometimes called cross-entropy loss. It is also known as log loss (in this case, the binary label is often denoted by $\{-1, +1\}$).[1]
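To tie the pieces together, here is a short, self-contained Python sketch (with an invented toy dataset and weight vector; nothing here comes from a particular library) that evaluates the average cross-entropy loss $J(\mathbf{w})$ for a logistic regression model:

```python
import math

def logistic(z):
    """Logistic function g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def cross_entropy_loss(w, X, y):
    """Average binary cross-entropy J(w) over N samples.

    X is a list of feature vectors, y a list of 0/1 labels,
    w the weight vector (all invented for this example).
    """
    N = len(X)
    total = 0.0
    for x_n, y_n in zip(X, y):
        y_hat = logistic(sum(wi * xi for wi, xi in zip(w, x_n)))
        total += y_n * math.log(y_hat) + (1 - y_n) * math.log(1 - y_hat)
    return -total / N

# Toy data: two features per sample, binary labels
X = [[1.0, 2.0], [2.0, 1.0], [-1.0, -1.5], [-2.0, -0.5]]
y = [1, 1, 0, 0]
w = [0.8, 0.5]

print(cross_entropy_loss(w, X, y))  # a well-separating w gives a small loss
```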