Guruswami–Sudan list decoding algorithm

In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. Using a unique decoder one can correct up to a  \delta / 2 fraction of errors. But when the error rate is greater than  \delta / 2 , a unique decoder is not able to output the correct result. List decoding overcomes that issue: it can correct more than a  \delta / 2 fraction of errors.

There are many efficient algorithms that can perform list decoding. The list decoding algorithm for Reed–Solomon (RS) codes by Sudan, which can correct up to a  1 - \sqrt{2R} fraction of errors, is given first. Later on the more efficient Guruswami–Sudan list decoding algorithm, which can correct up to a  1 - \sqrt{R} fraction of errors, is discussed.

[Figure: plot of the rate R versus the error fraction \delta achieved by the different algorithms.]

Algorithm 1 (Sudan's list decoding algorithm)

Problem statement

Input: A field \mathbb{F}; n distinct pairs of elements \{(x_{i},y_{i})\}_{i=1}^n  from \mathbb{F} \times \mathbb{F}; and integers d and t.

Output: A list of all functions f: \mathbb{F} \to \mathbb{F} satisfying

 f(x) is a polynomial in  x of degree at most d with |\{\, i \mid f(x_{i}) = y_{i} \,\}|  \ge  t -- (1)

To understand Sudan's algorithm better, one may want to first know another algorithm which can be considered the earlier, or fundamental, version of the algorithms for list decoding RS codes: the Berlekamp–Welch algorithm. Welch and Berlekamp initially came up with an algorithm which can solve the problem in polynomial time, with the best threshold on t being  t \ge (n+d+1)/2. The mechanism of Sudan's algorithm is almost the same as that of the Berlekamp–Welch algorithm, except that in step 1 one computes a bivariate polynomial of bounded (1, d)-weighted degree. Sudan's list decoding algorithm for Reed–Solomon codes, which is an improvement on the Berlekamp–Welch algorithm, can solve the problem with  t \approx \sqrt{2nd}. This bound is better than the unique decoding bound  \frac{1-R}{2} for R < 0.07.

Algorithm

Definition 1 (weighted degree)

For weights w_x, w_y \in \mathbb{Z}^+, the (w_x,w_y) – weighted degree of a monomial  q_{ij}x^i y^j is iw_x + jw_y. The (w_x,w_y) – weighted degree of a polynomial Q(x,y) = \sum_{i,j} q_{ij}x^iy^j is the maximum, over the monomials with non-zero coefficients, of the (w_x,w_y) – weighted degree of the monomial.

E.g.: 3xy is a monomial in the variables x, y with a coefficient of 3; its (w_x,w_y) – weighted degree is w_x + w_y.
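To make the definition concrete, here is a minimal Python sketch (purely illustrative; the dictionary representation and the function name weighted_degree are choices made for this example) that computes the (w_x,w_y)-weighted degree of a polynomial given as a map from exponent pairs (i,j) to coefficients q_{ij}:

def weighted_degree(poly, wx, wy):
    # (wx, wy)-weighted degree: maximum of i*wx + j*wy over monomials
    # with non-zero coefficients.
    return max(i * wx + j * wy for (i, j), c in poly.items() if c != 0)

# Example: Q(x, y) = 3*x*y + x^3 has (1, 2)-weighted degree max(1 + 2, 3) = 3.
Q = {(1, 1): 3, (3, 0): 1}
assert weighted_degree(Q, 1, 2) == 3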

Algorithm:

Inputs: n, d, t; \{(x_1,y_1),\ldots,(x_n,y_n)\} /* Parameters l, m to be set later. */

Step 1: Find any function Q : \mathbb{F}^2 \to \mathbb{F} satisfying (2): Q(x,y) has (1,d)-weighted degree at most m+ld; Q(x_i,y_i) = 0 for every  i \in [n]; and Q is not identically zero.

Step 2. Factor the polynomial Q into irreducible factors.

Step 3. Output all the polynomials  f such that  (y- f(x)) is a factor of Q and f(x_i) = y_i for at least t values of i \in [n].

Analysis

One has to prove that the above algorithm runs in polynomial time and outputs the correct result. That can be done by proving the following set of claims.

Claim 1:

If a function Q : \mathbb{F}^2 \to \mathbb{F} satisfying (2) exists, then one can find it in polynomial time.

Proof:

Note that a bivariate polynomial Q(x,y) of (1,d)-weighted degree at most m+ld can be represented as Q(x,y) = \sum_{j=0}^l \sum_{k=0}^{m+(l-j)d} q_{kj} x^k y^j. Then one has to find coefficients q_{kj} satisfying the constraints \sum_{j=0}^l \sum_{k=0}^{m+(l-j)d} q_{kj} x_i^k y_i^j = 0 for every  i \in [n]. This is a homogeneous linear system of equations in the unknowns \{q_{kj}\}, and one can find a solution using Gaussian elimination in polynomial time.
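The following is a minimal Python sketch of this step, assuming a prime field GF(p); the function name find_Q, the example points, and the parameter values are choices made for this illustration rather than anything from the original presentation. It builds the homogeneous system Q(x_i, y_i) = 0 over the unknown coefficients q_{kj} and extracts a non-zero nullspace vector by Gaussian elimination mod p.

def find_Q(points, d, m, l, p):
    # Monomials x^k y^j with j <= l and k <= m + (l - j) * d.
    monomials = [(k, j) for j in range(l + 1) for k in range(m + (l - j) * d + 1)]
    # One homogeneous constraint Q(x_i, y_i) = 0 per interpolation point.
    rows = [[pow(x, k, p) * pow(y, j, p) % p for (k, j) in monomials]
            for (x, y) in points]
    # Gaussian elimination mod p, keeping pivot rows in reduced form.
    pivots = {}
    for row in rows:
        row = row[:]
        for col, prow in pivots.items():
            if row[col]:
                f = row[col]
                row = [(a - f * b) % p for a, b in zip(row, prow)]
        for col, a in enumerate(row):
            if a:
                inv = pow(a, p - 2, p)
                row = [(x * inv) % p for x in row]
                for pc in list(pivots):
                    if pivots[pc][col]:
                        f = pivots[pc][col]
                        pivots[pc] = [(a2 - f * b2) % p
                                      for a2, b2 in zip(pivots[pc], row)]
                pivots[col] = row
                break
    # A free column exists when there are more monomials than points (Claim 2).
    free = next(c for c in range(len(monomials)) if c not in pivots)
    sol = [0] * len(monomials)
    sol[free] = 1
    for col, prow in pivots.items():
        sol[col] = (-prow[free]) % p
    # Return the non-zero coefficients q_kj, keyed by (k, j).
    return {mon: c for mon, c in zip(monomials, sol) if c}

# Example over GF(13): five points, message-degree bound d = 1, m = 1, l = 2.
pts = [(1, 2), (2, 4), (3, 6), (4, 9), (5, 10)]
print(find_Q(pts, d=1, m=1, l=2, p=13))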

Claim 2:

If (m+1)(l+1)+d \begin{pmatrix}l + 1\\2\end{pmatrix} > n, then there exists a function  Q(x,y) satisfying (2).

Proof:

To ensure a non-zero solution exists, the number of unknowns in Q(x,y) should be greater than the number of constraints. Let the maximum degree \deg_x(Q) of  x in Q(x,y) be m+ld and the maximum degree \deg_y(Q) of  y in Q(x,y) be l, so that the (1,d)-weighted degree of Q(x,y) is at most  m+ld . One has to see that the linear system is homogeneous: setting all q_{kj} = 0 satisfies every linear constraint, but it does not satisfy (2), since that solution is identically zero. Counting the monomials x^k y^j with j \le l and k \le m+(l-j)d, the number of unknowns in the linear system is (m+1)(l+1)+d \begin{pmatrix}l + 1\\2\end{pmatrix}. Since this value is greater than n, there are more unknowns than constraints and therefore a non-zero solution  Q(x,y) exists.
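As a quick numerical sanity check of this count (the values of n, d, m, l below are chosen arbitrarily for illustration):

from math import comb

n, d, m, l = 20, 2, 5, 3                      # illustrative values
num_monomials = sum(m + (l - j) * d + 1 for j in range(l + 1))
assert num_monomials == (m + 1) * (l + 1) + d * comb(l + 1, 2)
print(num_monomials, "unknowns >", n, "constraints:", num_monomials > n)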

Claim 3:

If Q(x,y) is a function satisfying (2), f(x) is a function satisfying (1), and t>m+ld, then (y-f(x)) divides Q(x,y).

Proof:

Consider the function p(x) = Q(x,f(x)). This is a polynomial in x, and one can argue that it has degree at most m+ld. Consider any monomial q_{kj}x^k y^j of Q(x,y). Since Q has (1,d)-weighted degree at most m+ld, one can say that k+jd \le m+ld. Thus the term q_{kj}x^kf(x)^j is a polynomial in x of degree at most k+jd \le m+ld. Thus p(x) has degree at most m+ld.

Next, one can argue that p(x) is identically zero. Since Q(x_i,f(x_i)) is zero whenever  y_i = f(x_i) , and f satisfies (1) with t > m+ld, one can say that p(x_i) is zero for strictly more than  m+ld points. Thus p has more zeroes than its degree and hence is identically zero, implying  Q(x,f(x)) \equiv 0. Dividing Q(x,y) by (y - f(x)) therefore leaves remainder zero, so (y-f(x)) divides Q(x,y).

Finding optimal values for m and l: Note that  t > m+ld and (m+1)(l+1)+d \begin{pmatrix}l + 1\\2\end{pmatrix} > n. For a given value of l, one can compute the smallest m for which the second condition holds. By rearranging the second condition, one can take m to be at most

\left(n+1-d \begin{pmatrix}l + 1\\2\end{pmatrix}\right)/(l+1) - 1

Substituting this value of m into the first condition, one gets t to be at least

\frac{n+1}{l+1} + \frac{dl}{2}

Next, minimize this expression over the unknown parameter l. One can do that by taking the derivative with respect to l and equating it to zero, which gives

 l = \sqrt{\frac{2(n+1)}{d}} -1

Substituting this value of l back into m and t, one gets

 m \ge \sqrt{\frac{(n+1)d}{2}} - \sqrt{\frac {(n+1)d}{2}} + \frac{d}{2} - 1 = \frac{d}{2} -1

 t > \sqrt{\frac{2(n+1)d^2}{d}} - \frac {d}{2} -1

 t > \sqrt{2(n+1)d} - \frac {d}{2} -1
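The following Python snippet (an illustration with arbitrarily chosen n and d, not values from the text) evaluates this parameter setting and compares the resulting agreement threshold with the Berlekamp–Welch threshold t \ge (n+d+1)/2:

import math

n, d = 250, 10                                            # illustrative parameters
l = round(math.sqrt(2 * (n + 1) / d) - 1)                 # l ~ sqrt(2(n+1)/d) - 1
m = math.ceil((n + 1 - d * l * (l + 1) / 2) / (l + 1)) - 1
assert (m + 1) * (l + 1) + d * l * (l + 1) // 2 > n       # counting condition of Claim 2
t_needed = m + l * d + 1                                  # smallest t with t > m + l*d
print(f"l = {l}, m = {m}, Sudan needs agreement t >= {t_needed} of n = {n}")
print(f"Berlekamp-Welch unique decoding needs t >= {(n + d + 1) / 2}")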

Algorithm 2 (Guruswami–Sudan list decoding algorithm)

Definition

Consider an (n,k) Reed–Solomon code over the finite field \mathbb{F} = GF(q) with evaluation set (\alpha_1,\alpha_2,\ldots,\alpha_n) and a positive integer r. The Guruswami–Sudan list decoder accepts a vector \beta = (\beta_1,\beta_2,\ldots,\beta_n) \in \mathbb{F}^n as input and outputs a list of polynomials of degree \le k which are in 1 to 1 correspondence with codewords.

The idea is to place more restrictions on the bi-variate polynomial Q(x,y), which increases the number of constraints but also increases the number of roots guaranteed at each point.

Multiplicity

A bi-variate polynomial Q(x,y) has a zero of multiplicity r at (0,0) if Q(x,y) has no term of total degree less than r, where the total degree of a monomial x^i y^j is i + j. (The x-degree of f(x) is defined as the maximum degree of any x term in f(x): \deg_x f(x) = \max_{i \in I} \{i\}.)

For example: Let Q(x,y) = y - 4x^2. Its lowest-degree term, y, has total degree 1. Hence, Q(x,y) has a zero of multiplicity 1 at (0,0).

Let Q(x,y) = y +  6x^2. Again the lowest-degree term has total degree 1. Hence, Q(x,y) has a zero of multiplicity 1 at (0,0).

Let Q(x,y) = (y -  4x^2) (y +  6x^2) = y^2 + 6x^2y - 4x^2y -24x^4 = y^2 + 2x^2y - 24x^4. Every term has total degree at least 2. Hence, Q(x,y) has a zero of multiplicity 2 at (0,0).

Similarly, if Q(x,y) = [(y - \beta) -  4(x - \alpha)^2] [(y - \beta) +  6(x - \alpha)^2], then Q(x,y) has a zero of multiplicity 2 at (\alpha,\beta).
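One can verify the multiplicity computations above with sympy (assuming the library is available); the multiplicity at (0,0) is the smallest total degree among monomials with non-zero coefficients:

import sympy as sp

x, y = sp.symbols('x y')
Q = sp.expand((y - 4*x**2) * (y + 6*x**2))
# Multiplicity at (0, 0): minimum total degree over the monomials of Q.
multiplicity = min(i + j for (i, j) in sp.Poly(Q, x, y).monoms())
print(Q, "-> multiplicity at (0, 0):", multiplicity)   # prints 2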

General definition of multiplicity

Q(x,y) has a zero of multiplicity r at a general point (\alpha,\beta) \ne (0,0) if the shifted polynomial Q(x + \alpha, y + \beta) has a zero of multiplicity r at (0,0).

Algorithm

Let the transmitted codeword be ( f(\alpha_1), f(\alpha_2),\ldots,f(\alpha_n)), let (\alpha_1,\alpha_2,\ldots,\alpha_n) be the support set of the transmitted codeword, and let the received word be (\beta_1,\beta_2,\ldots,\beta_n).

The algorithm is as follows:

Interpolation step

For a received vector (\beta_1,\beta_2,\ldots,\beta_n), construct a non-zero bi-variate polynomial Q(x,y) with (1,k)-weighted degree at most D such that Q has a zero of multiplicity r at each of the points (\alpha_i,\beta_i), where  1 \le i \le n. In particular, for each i,

Q(\alpha_i,\beta_i) = 0 \,

Factorization step

Find all the factors of Q(x,y) of the form y - p(x) such that p(\alpha_i) = \beta_i for at least t values of i,

where 1 \le i \le n and p(x) is a polynomial of degree \le k.

Recall that polynomials of degree \le k are in 1 to 1 correspondence with codewords. Hence, this step outputs the list of codewords.
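Here is a small sympy sketch of this step (sympy availability, the particular Q, and the bound k are assumptions of this example); it factors Q(x,y) and keeps the factors that are linear in y, reading off the candidate polynomials p(x). In the actual decoder one would additionally keep only those p with p(\alpha_i) = \beta_i for at least t values of i.

import sympy as sp

x, y = sp.symbols('x y')
k = 2
Q = (y - (x**2 + 1)) * (y + 3*x) * (x - 5)       # an illustrative interpolation polynomial
candidates = []
for factor, _ in sp.factor_list(Q)[1]:
    if sp.degree(factor, y) == 1:
        p = sp.solve(factor, y)[0]               # factor is (a multiple of) y - p(x)
        if sp.degree(p, x) <= k:
            candidates.append(p)
print(candidates)                                 # the candidate message polynomials, here x**2 + 1 and -3*x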

Analysis

Interpolation step

Lemma: Each interpolation point imposes \begin{pmatrix}r + 1\\2\end{pmatrix} constraints on the coefficients a_{i,j}

Let Q(x,y) = \sum_{i = 0, j = 0} ^{i = m, j = p} a_{i,j} x^i y^j where \deg_x Q(x,y) = m and \deg_y Q(x,y) = p

Then, Q(x + \alpha, y + \beta) = \sum_{u, v} Q_{u,v} (\alpha, \beta) x^{u} y^{v} \qquad (Equation 1)

where Q_{u,v} (x, y) = \sum_{i = 0, j = 0} ^{i = m, j = p} \begin{pmatrix}i\\u\end{pmatrix} \begin{pmatrix}j\\v\end{pmatrix} a_{i,j} x^{i-u} y^{j-v}

Proof of Equation 1:

Q(x + \alpha,y + \beta) = \sum_{i,j} a_{i,j} (x + \alpha)^i (y + \beta)^j
Q(x + \alpha,y + \beta)=\sum_{i,j} a_{i,j} \Bigg ( \sum_u \begin{pmatrix}i\\u\end{pmatrix} x^u \alpha^{i-u} \Bigg ) \Bigg ( \sum_v \begin{pmatrix}j\\v\end{pmatrix} y^v \beta^{j-v} \Bigg ) (using the binomial expansion)
Q(x + \alpha,y + \beta) = \sum_{u,v} x^u y^v \Bigg ( \sum_{i,j} \begin{pmatrix}i\\u\end{pmatrix} \begin{pmatrix}j\\v \end{pmatrix} a_{i,j} \alpha^{i-u} \beta^{j-v} \Bigg )
Q(x + \alpha,y + \beta) = \sum_{u,v} Q_{u,v} (\alpha, \beta) x^u y^v
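Equation 1 can be sanity-checked on a small example with sympy (assumed available; the polynomial coefficients and the pair (u,v) below are arbitrary choices for this check):

import sympy as sp
from math import comb

x, y, a, b = sp.symbols('x y a b')
coeffs = {(0, 0): 3, (2, 1): 5, (1, 2): -7}      # an arbitrary small Q(x, y)
Q = sum(c * x**i * y**j for (i, j), c in coeffs.items())
shifted = sp.expand(Q.subs({x: x + a, y: y + b}, simultaneous=True))

u, v = 1, 1
lhs = shifted.coeff(x, u).coeff(y, v)             # coefficient of x^u y^v in Q(x+a, y+b)
rhs = sum(comb(i, u) * comb(j, v) * c * a**(i - u) * b**(j - v)
          for (i, j), c in coeffs.items() if i >= u and j >= v)
assert sp.simplify(lhs - rhs) == 0                # matches Q_{u,v}(a, b)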

Proof of Lemma:

The polynomial Q(x, y) has a zero of multiplicity r at (\alpha,\beta) if

Q_{u,v} (\alpha,\beta) = 0 for all u, v such that 0 \le u + v \le r - 1.

For each v with 0 \le v \le r-1, u can take r - v values. Thus, the total number of constraints is

\sum_{v = 0}^{r-1} (r - v) = \begin{pmatrix}r + 1\\2\end{pmatrix}

Thus, \begin{pmatrix}r + 1\\2\end{pmatrix} selections can be made for (u,v), and each selection imposes one constraint on the coefficients a_{i,j}
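A short, purely illustrative Python check that the number of pairs (u,v) with u + v \le r - 1 equals r(r+1)/2:

from math import comb

for r in range(1, 10):
    count = sum(1 for u in range(r) for v in range(r) if u + v <= r - 1)
    assert count == comb(r + 1, 2)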

Factorization step

Proposition:

(y - p(x)) is a factor of Q(x,y) if and only if Q(x, p(x)) \equiv 0

Proof:

Dividing Q(x,y) by (y - p(x)), Q(x,y) can be represented as

Q(x,y) = L(x,y) (y - p(x)) + R(x)

where L(x,y) is the quotient obtained when Q(x,y) is divided by (y - p(x)) and R(x) is the remainder; since (y - p(x)) has degree 1 in y, the remainder R(x) does not involve y.

Now, if y is replaced by p(x), then Q(x, p(x)) = R(x). Hence Q(x, p(x)) \equiv 0 if and only if R(x) \equiv 0, i.e., if and only if (y - p(x)) is a factor of Q(x,y).

Theorem:

If p(\alpha) = \beta and Q(x,y) has a zero of multiplicity r at (\alpha, \beta), then (x - \alpha) ^r is a factor of Q(x,p(x))

Proof:

Q(x, y) = \sum_{u,v} Q_{u,v} (\alpha, \beta) (x - \alpha)^{u} (y - \beta)^{v} (from Equation 1, with x - \alpha and y - \beta in place of x and y)

Since Q(x,y) has a zero of multiplicity r at (\alpha,\beta), Q_{u,v}(\alpha,\beta) = 0 for all u, v with u + v \le r - 1, so only terms with u + v \ge r appear in the sum. Substituting y = p(x),

Q(x, p(x)) = \sum_{u,v} Q_{u,v} (\alpha, \beta) (x - \alpha)^{u} (p(x) - \beta)^{v}

Given p(\alpha) = \beta, (p(x) - \beta) \bmod (x - \alpha) = 0.

Hence, (x - \alpha)^{u} (p(x) - \beta)^{v} \bmod (x - \alpha) ^{u+v} = 0.

Since every surviving term has u + v \ge r, (x - \alpha) ^r is a factor of Q(x,p(x)).
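The theorem can be checked with sympy on the multiplicity-2 example from the earlier section (library availability and the concrete choices \alpha = 1, \beta = 2, p(x) = x + 1 are assumptions of this sketch):

import sympy as sp

x, y = sp.symbols('x y')
alpha, beta = 1, 2
Q = ((y - beta) - 4*(x - alpha)**2) * ((y - beta) + 6*(x - alpha)**2)  # multiplicity r = 2 at (alpha, beta)
p = x + 1                                                              # p(alpha) = beta
composed = sp.expand(Q.subs(y, p))
quotient, remainder = sp.div(composed, (x - alpha)**2, x)
assert remainder == 0                                                  # (x - alpha)^r divides Q(x, p(x))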

As proved above, if p(x) agrees with the received word in at least t positions, then (x - \alpha_i)^r divides Q(x,p(x)) for each of those positions, so Q(x,p(x)) has at least t \cdot r roots counted with multiplicity, while its degree is at most D. Hence, if

t \cdot r > D

then Q(x,p(x)) \equiv 0 and, by the Proposition, (y - p(x)) is a factor of Q(x,y). That is, it suffices that

t > \frac{D} {r}

For the interpolation step to succeed, the number of coefficients of Q(x,y) must exceed the total number of constraints:

\frac{D(D+2)} {2(k-1)} > n\begin{pmatrix}r + 1\\2\end{pmatrix} where the LHS is a lower bound on the number of coefficients of Q(x,y) and the RHS is the total number of constraints from the earlier proved Lemma.

D = \sqrt{knr(r-1)} \,

Therefore, t = \left \lceil{\sqrt{kn (1 - \frac{1}{r})}} \right \rceil

Substituting r = 2kn,

t = \left \lceil {\sqrt{kn - \frac{1}{2}}} \right \rceil = \left \lceil {\sqrt{kn}} \right \rceil

Since t \approx \sqrt{kn} = n\sqrt{R}, decoding succeeds as long as the received word agrees with a codeword in about n\sqrt{R} positions, i.e., as long as the fraction of errors is at most 1 - \sqrt{R}. Hence it is proved that the Guruswami–Sudan list decoding algorithm can list decode Reed–Solomon (RS) codes up to a 1 - \sqrt{R} fraction of errors.
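As a closing numerical illustration (the code length n and dimension k are chosen here, not taken from the text), the approximate agreement thresholds of unique decoding, Sudan's algorithm, and the Guruswami–Sudan algorithm can be compared directly:

import math

n, k = 256, 16                                        # rate R = k/n = 1/16
print("unique decoding   t ~", (n + k) / 2)
print("Sudan             t ~", math.sqrt(2 * k * n))
print("Guruswami-Sudan   t ~", math.sqrt(k * n))
print("GS error fraction ~", 1 - math.sqrt(k / n))    # = 1 - sqrt(R)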
