Small-bias sample space

In theoretical computer science, a small-bias sample space (also known as \epsilon-biased sample space, \epsilon-biased generator, or small-bias probability space) is a probability distribution that fools parity functions. In other words, no parity function can distinguish between a small-bias sample space and the uniform distribution with significant advantage, and hence small-bias sample spaces naturally give rise to pseudorandom generators for parity functions.

The main useful property of small-bias sample spaces is that they need far fewer truly random bits than the uniform distribution to fool parities. Efficient constructions of small-bias sample spaces have found many applications in computer science, some of which are derandomization, error-correcting codes, and probabilistically checkable proofs. The connection with error-correcting codes is in fact very strong since \epsilon-biased sample spaces are equivalent to \epsilon-balanced error-correcting codes.

Definition

Bias

Let X be a probability distribution over \{0,1\}^n. The bias of X with respect to a set of indices I \subseteq \{1,\dots,n\} is defined as[1]


\text{bias}_I(X) = \left| \Pr_{x\sim X} \left(\sum_{i\in I} x_i = 0\right) - \Pr_{x\sim X} \left(\sum_{i\in I} x_i = 1\right) \right| = \left| 2 \cdot \Pr_{x\sim X} \left(\sum_{i\in I} x_i = 0\right) - 1 \right| \,,

where the sum is taken over \mathbb F_2, the finite field with two elements. In other words, the sum \sum_{i\in I} x_i equals 0 if the number of ones in the sample x\in\{0,1\}^n at the positions defined by I is even, and otherwise, the sum equals 1. For I=\emptyset, the empty sum is defined to be zero, and hence \text{bias}_{\emptyset} (X) = 1.
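As an illustration, the bias can be computed directly from an explicit description of the distribution. The following minimal sketch (in Python, with 0-based indices and a hypothetical dictionary mapping n-bit tuples to probabilities) implements the definition:

    def bias(dist, I):
        """bias_I(X) = |2 * Pr[sum of x_i over i in I is 0 (mod 2)] - 1|."""
        p_even = sum(p for x, p in dist.items()
                     if sum(x[i] for i in I) % 2 == 0)
        return abs(2 * p_even - 1)

    # The uniform distribution over {0,1}^2 has bias 0 for every non-empty I,
    # while a point mass is maximally biased.
    uniform2 = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
    print(bias(uniform2, [0, 1]))    # 0.0
    print(bias({(1, 1): 1.0}, [0]))  # 1.0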

ϵ-biased sample space

A probability distribution X over \{0,1\}^n is called an \epsilon-biased sample space if 
\text{bias}_I(X) \leq \epsilon
holds for all non-empty subsets I \subseteq \{1,2,\ldots,n\}.
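Whether a distribution is \epsilon-biased can be verified by brute force, taking the maximum bias over all 2^n - 1 non-empty index sets; this is exponential in n and therefore only feasible for tiny examples. A minimal self-contained sketch:

    from itertools import chain, combinations

    def max_bias(dist, n):
        """Maximum bias over all non-empty I; dist is eps-biased iff this
        value is at most eps. Exponential in n -- for tiny examples only."""
        def bias(I):
            p_even = sum(p for x, p in dist.items()
                         if sum(x[i] for i in I) % 2 == 0)
            return abs(2 * p_even - 1)
        nonempty = chain.from_iterable(combinations(range(n), r)
                                       for r in range(1, n + 1))
        return max(bias(I) for I in nonempty)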

ϵ-biased set

An \epsilon-biased sample space that is generated by picking a uniform element from a multiset X\subseteq \{0,1\}^n is called an \epsilon-biased set. The size s of an \epsilon-biased set is the size of the multiset X that generates the sample space.

ϵ-biased generator

An \epsilon-biased generator G:\{0,1\}^\ell \to \{0,1\}^n is a function that maps strings of length \ell to strings of length n such that the multiset X_G=\{G(y) \;\vert\; y\in\{0,1\}^\ell \} is an \epsilon-biased set. The seed length of the generator is the number \ell and is related to the size of the \epsilon-biased set X_G via the equation s=2^\ell.
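As a small illustration, enumerating a generator over all 2^\ell seeds recovers the multiset X_G explicitly (G here is any hypothetical function from \ell-bit tuples to n-bit tuples):

    from itertools import product

    def generator_to_set(G, ell):
        """The multiset X_G = {G(y) : y in {0,1}^ell}; its size is s = 2**ell."""
        return [G(y) for y in product((0, 1), repeat=ell)]

    # Example: a deliberately bad generator that repeats its 3-bit seed twice;
    # the parity over the index pair {0, 3} is always 0, so its bias is 1.
    X_G = generator_to_set(lambda y: y + y, 3)  # 8 strings in {0,1}^6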

Connection with epsilon-balanced error-correcting codes

There is a close connection between \epsilon-biased sets and \epsilon-balanced linear error-correcting codes. A linear code C:\{0,1\}^n\to\{0,1\}^s of message length n and block length s is \epsilon-balanced if the Hamming weight of every nonzero codeword C(x) is between \frac{(1-\epsilon)}{2} s and \frac{(1+\epsilon)}{2} s. Since C is a linear code, its generator matrix is an (n\times s)-matrix A over \mathbb F_2 with C(x)=x \cdot A.

Then it holds that a multiset X\subseteq\{0,1\}^{n} of size s is \epsilon-biased if and only if the linear code C_X whose generator matrix has the elements of X as its columns is \epsilon-balanced.[2]
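This equivalence can be verified by brute force for tiny parameters: if x\in\{0,1\}^n is the indicator vector of an index set I, then the codeword C_X(x) has a 1 exactly at the columns whose parity on I is odd, so \text{bias}_I(X) = |1 - 2w/s| where w is the Hamming weight of C_X(x). A minimal sketch (exponential in n, illustrative only):

    from itertools import product

    def is_eps_balanced(X, n, eps):
        """X is a multiset of n-bit tuples, viewed as the s columns of the
        generator matrix A; checks that every nonzero codeword x * A has
        Hamming weight between (1-eps)/2 * s and (1+eps)/2 * s."""
        s = len(X)
        for x in product((0, 1), repeat=n):
            if any(x):
                # bit j of the codeword is the inner product <x, X[j]> over F_2
                w = sum(sum(xi * cj for xi, cj in zip(x, col)) % 2 for col in X)
                if abs(w - s / 2) > eps * s / 2:
                    return False
        return True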

Constructions of small epsilon-biased sets

Usually the goal is to find \epsilon-biased sets that have a small size s relative to the parameters n and \epsilon. This is because a smaller size s means that the amount of randomness needed to pick a random element from the set is smaller, and so the set can be used to fool parities using few random bits.

Theoretical bounds

The probabilistic method gives a non-explicit construction that achieves size s=O(n/\epsilon^2).[2] The construction is non-explicit in the sense that finding the \epsilon-biased set requires a lot of true randomness, which does not help towards the goal of reducing the overall randomness. However, this non-explicit construction is useful because it shows that such small \epsilon-biased sets exist. On the other hand, the best known lower bound for the size of \epsilon-biased sets is s=\Omega(n/(\epsilon^2 \log (1/\epsilon))), that is, in order for a set to be \epsilon-biased, it must have at least that size.[2]
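The probabilistic argument is easy to simulate: sampling s = O(n/\epsilon^2) uniform strings and measuring the maximum bias over all non-empty subsets typically lands well below \epsilon. A small experiment (the constant 4 is an arbitrary illustrative choice, not the optimal one):

    import random
    from itertools import chain, combinations

    n, eps = 8, 0.25
    s = int(4 * n / eps ** 2)  # 512 samples; the constant 4 is illustrative
    X = [tuple(random.randint(0, 1) for _ in range(n)) for _ in range(s)]

    def bias(I):
        even = sum(1 for x in X if sum(x[i] for i in I) % 2 == 0)
        return abs(2 * even / s - 1)

    subsets = chain.from_iterable(combinations(range(n), r)
                                  for r in range(1, n + 1))
    print(max(bias(I) for I in subsets))  # typically well below eps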

Explicit constructions

There are many explicit, i.e., deterministic, constructions of \epsilon-biased sets with various parameter settings, including those of Naor & Naor (1990) and of Alon, Goldreich, Håstad & Peralta (1992). The resulting bounds are mutually incomparable: none of these constructions yields the smallest \epsilon-biased sets for all settings of \epsilon and n. One such construction is sketched below.
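One classical example that can be sketched is the powering construction of Alon, Goldreich, Håstad & Peralta (1992), as it is commonly presented: pick \alpha,\beta\in\mathbb F_{2^m} and output the n bits \langle \alpha^i,\beta\rangle for i=0,\dots,n-1, where \langle\cdot,\cdot\rangle is the inner product of bit representations. For non-empty I, the parity equals \langle p(\alpha),\beta\rangle with p(z)=\sum_{i\in I}z^i a nonzero polynomial of degree at most n-1, so the bias is \Pr[p(\alpha)=0]\le (n-1)/2^m, while the set has size s=2^{2m}\approx(n/\epsilon)^2. A sketch with m=8 and the AES field polynomial (both illustrative choices):

    M, IRRED = 8, 0x11B  # GF(2^8) with x^8 + x^4 + x^3 + x + 1 (the AES polynomial)

    def gf_mul(a, b):
        """Multiplication in GF(2^8), elements represented as ints 0..255."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & (1 << M):
                a ^= IRRED
            b >>= 1
        return r

    def powering_sample(alpha, beta, n):
        """Bit i is the inner product of the bit vectors of alpha**i and beta."""
        out, power = [], 1  # alpha^0 = 1
        for _ in range(n):
            out.append(bin(power & beta).count("1") % 2)
            power = gf_mul(power, alpha)
        return tuple(out)

    # The eps-biased set: one n-bit string per seed (alpha, beta), s = 2**16
    # strings in total, with bias at most (n - 1) / 2**8.
    X = [powering_sample(a, b, 16) for a in range(256) for b in range(256)]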

Application: almost k-wise independence

An important application of small-bias sets lies in the construction of almost k-wise independent sample spaces.

k-wise independent spaces

A random variable Y over \{0,1\}^n is a k-wise independent space if, for all index sets I\subseteq\{1,\dots,n\} of size k, the marginal distribution Y|_I is exactly equal to the uniform distribution over \{0,1\}^k. That is, for all such I and all strings z\in\{0,1\}^k, the distribution Y satisfies \Pr_Y (Y|_I = z) = 2^{-k}.
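For explicitly given distributions this condition can be checked by brute force over all size-k marginals, which is only feasible for tiny n and k. A minimal sketch (the alphabet size q is a parameter so that the same check applies to the construction over \mathbb F_n below):

    from itertools import combinations, product

    def is_k_wise_independent(dist, n, k, q=2):
        """True iff every size-k marginal of dist (a map from n-tuples to
        probabilities) equals the uniform distribution over q^k strings."""
        for I in combinations(range(n), k):
            marginal = {}
            for x, p in dist.items():
                key = tuple(x[i] for i in I)
                marginal[key] = marginal.get(key, 0.0) + p
            if any(abs(marginal.get(z, 0.0) - q ** -k) > 1e-9
                   for z in product(range(q), repeat=k)):
                return False
        return True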

Constructions and bounds

k-wise independent spaces are fairly well-understood.

Joffe's construction

Joffe (1974) constructs a k-wise independent space Y over \mathbb F_n^n, where n>k is a prime number and \mathbb F_n denotes the finite field with n elements. The initial k marginals of the distribution are drawn independently and uniformly at random:

(Y_0,\dots,Y_{k-1}) \sim\mathbb F_n^k.

For each i with k \leq i < n, the marginal distribution of Y_i is then defined as

Y_i=Y_0 + Y_1 \cdot i + Y_2 \cdot i^2 + \dots + Y_{k-1} \cdot i^{k-1}\,,

where the calculation is done in \mathbb F_n. Joffe (1974) proves that the distribution Y constructed in this way is k-wise independent as a distribution over \mathbb F_n^n. The distribution Y is uniform on its support, and hence, the support of Y forms a k-wise independent set. It contains all n^k strings in \mathbb F_n^k that have been extended to strings of length n using the deterministic rule above.
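A minimal sketch of Joffe's construction, with the illustrative parameters n = 7 and k = 3: the first k coordinates are the uniformly random coefficients Y_0, \dots, Y_{k-1}, and each later coordinate is the corresponding polynomial evaluated at i modulo n.

    import random

    def joffe_sample(n, k):
        """One sample from Joffe's k-wise independent space over F_n^n (n prime)."""
        coeffs = [random.randrange(n) for _ in range(k)]  # Y_0, ..., Y_{k-1}
        tail = [sum(c * pow(i, j, n) for j, c in enumerate(coeffs)) % n
                for i in range(k, n)]
        return tuple(coeffs + tail)

    print(joffe_sample(7, 3))  # one draw from a 3-wise independent space over F_7^7

Enumerating all n^k seeds and weighting each output by n^{-k} recovers the full distribution, which the brute-force check above confirms to be k-wise independent for these small parameters.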

Almost k-wise independent spaces

A random variable Y over \{0,1\}^n is a \delta-almost k-wise independent space if, for all index sets I\subseteq\{1,\dots,n\} of size k, the restricted distribution Y|_I and the uniform distribution U_k on \{0,1\}^k are \delta-close in 1-norm, i.e., \Big\|Y|_I - U_k\Big\|_1 \leq \delta.
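For small explicit distributions, this condition can again be checked directly. A minimal sketch that computes the 1-norm distance between one size-k marginal and the uniform distribution (the distribution is a hypothetical dictionary from n-bit tuples to probabilities):

    from itertools import product

    def l1_from_uniform(dist, I):
        """||Y|_I - U_k||_1 for the marginal of dist on index set I."""
        k = len(I)
        marginal = {}
        for x, p in dist.items():
            key = tuple(x[i] for i in I)
            marginal[key] = marginal.get(key, 0.0) + p
        return sum(abs(marginal.get(z, 0.0) - 2 ** -k)
                   for z in product((0, 1), repeat=k))

The distribution is \delta-almost k-wise independent exactly when this quantity is at most \delta for every index set I of size k.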

Constructions

Naor & Naor (1990) give a general framework for combining small k-wise independent spaces with small \epsilon-biased spaces to obtain \delta-almost k-wise independent spaces of even smaller size. In particular, let G_1:\{0,1\}^h\to\{0,1\}^n be a linear mapping that generates a k-wise independent space and let G_2:\{0,1\}^\ell \to \{0,1\}^h be a generator of an \epsilon-biased set over \{0,1\}^h. That is, when given a uniformly random input, the output of G_1 is a k-wise independent space, and the output of G_2 is \epsilon-biased. Then G : \{0,1\}^\ell \to \{0,1\}^n with G(x) = G_1(G_2(x)) is a generator of a \delta-almost k-wise independent space, where \delta=2^{k/2} \epsilon.[3]
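As a minimal illustration of this composition for k=2 (so \delta = 2\epsilon), the linear map G_1 below has as matrix columns all nonzero vectors c\in\{0,1\}^h; any two such columns are linearly independent, so a uniformly random h-bit seed gives an exactly 2-wise independent output of length n=2^h-1, and replacing the uniform seed with the output of any \epsilon-biased generator g2 (a hypothetical function here, returning the seed packed into an h-bit integer) gives a \delta-almost 2-wise independent space:

    def g1(y, h):
        """Linear 2-wise independent generator: one output bit per non-zero
        c in {0,1}^h, namely the inner product <y, c> over F_2 (y as an int)."""
        return tuple(bin(y & c).count("1") % 2 for c in range(1, 2 ** h))

    def compose(g2, x, h):
        """G(x) = G1(G2(x)), where g2 maps a seed x to an h-bit integer."""
        return g1(g2(x), h)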

Alon, Babai & Itai (1986) construct a generator G_1 with h=\tfrac{k}{2} \log n, and Naor & Naor (1990) construct a generator G_2 with \ell=\log s=\log h + O(\log(\epsilon^{-1})). Hence, the concatenation G of G_1 and G_2 has seed length \ell = \log k + \log \log n + O(\log(\epsilon^{-1})). In order for G to yield a \delta-almost k-wise independent space, we need to set \epsilon = \delta 2^{-k/2}, which leads to a seed length of \ell = \log \log n + O(k+\log(\delta^{-1})) and a sample space of total size 2^\ell \leq \log n \cdot \text{poly}(2^k \cdot\delta^{-1}).
