Random coordinate descent

Randomized (Block) Coordinate Descent Method is an optimization algorithm popularized by Nesterov (2010) and Richtárik and Takáč (2011). The first analysis of this method, when applied to the problem of minimizing a smooth convex function, was performed by Nesterov (2010).[1] In Nesterov's analysis the method needs to be applied to a quadratic perturbation of the original function, with an unknown scaling factor. Richtárik and Takáč (2011) give iteration complexity bounds which do not require this, i.e., the method is applied to the objective function directly. Furthermore, they generalize the setting to the problem of minimizing a composite function, i.e., the sum of a smooth convex function and a (possibly nonsmooth) convex block-separable function:

 F(x) = f(x) + \Psi(x),

where  \Psi(x) = \sum_{i=1}^n \Psi_i(x^{(i)}), the vector  x\in R^N is decomposed into  n blocks of variables/coordinates  x = (x^{(1)},\dots,x^{(n)}), and  \Psi_1,\dots, \Psi_n are (simple) convex functions.

Example (block decomposition): If  x = (x_1,x_2,\dots,x_5) \in R^5 and  n = 3 , one may choose  x^{(1)} = (x_1,x_3), x^{(2)} = (x_2,x_5) and  x^{(3)} = x_4 .

Example (block-separable regularizers):

  1.  n=N; \Psi(x) = \|x\|_1 = \sum_{i=1}^n |x_i|
  2.  N = N_1 + N_2 + \dots + N_n; \Psi(x) = \sum_{i=1}^n \|x^{(i)}\|_2 , where  x^{(i)}\in R^{N_i} and \|\cdot\|_2 is the standard Euclidean norm.
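
As a minimal illustration, the following Python sketch evaluates both regularizers; the vector x and the block partition below are hypothetical, chosen to match the block decomposition example above.

  import numpy as np

  def psi_l1(x):
      # Example 1: n = N, Psi(x) = ||x||_1
      return np.sum(np.abs(x))

  def psi_block(x, blocks):
      # Example 2: Psi(x) = sum_i ||x^(i)||_2 over a block partition of the coordinates
      return sum(np.linalg.norm(x[idx]) for idx in blocks)

  x = np.array([1.0, -2.0, 3.0, 0.5, -1.0])
  blocks = [[0, 2], [1, 4], [3]]   # x^(1) = (x_1, x_3), x^(2) = (x_2, x_5), x^(3) = x_4
  print(psi_l1(x), psi_block(x, blocks))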

Algorithm

Consider the optimization problem

 \min_{x \in R^n} f(x),

where f is a convex and smooth function.

Smoothness: By smoothness we mean that the gradient of f is coordinate-wise Lipschitz continuous with constants L_1, L_2, \dots , L_n. That is, we assume that

 |\nabla_i f(x + h e_i) - \nabla_i f(x)| \leq L_i |h|,

for all  x \in R^n and  h \in R , where  \nabla_i denotes the partial derivative with respect to the i-th coordinate and  e_i is the i-th standard basis vector.
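
For instance, for a quadratic  f(x) = \tfrac{1}{2} x^T A x - b^T x one has  \nabla_i f(x + h e_i) - \nabla_i f(x) = A_{ii} h, so  L_i = A_{ii}. A minimal check in Python (the matrix is the one used in the example further below):

  import numpy as np

  A = np.array([[1.0, 0.5],
                [0.5, 1.0]])    # f(x) = 0.5 x^T A x - b^T x
  L = np.diag(A)                # coordinate-wise Lipschitz constants L_i = A_ii
  print(L)                      # [1. 1.]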

Nesterov,[1] and Richtárik and Takáč,[2] showed that the following algorithm converges to the optimal point:

Algorithm Random Coordinate Descent Method
  Input: x_0 \in R^n //starting point
  Output: x
  set x ← x_0
  for k=1,... do
     choose coordinate i\in \{1,2,\dots,n\} uniformly at random
     update x^{(i)} ← x^{(i)} - \frac1{L_i} \nabla_i f(x)
  endfor
  • "" is a shorthand for "changes to". For instance, "largest item" means that the value of largest changes to the value of item.
  • "return" terminates the algorithm and outputs the value that follows.

Convergence rate

Since the iterates of this algorithm are random vectors, a complexity result gives a bound on the number of iterations needed for the method to output an approximate solution with high probability. It was shown in [2] that if

 k\geq \frac{2n R_L^2(x_0)}{\epsilon} \log \left(\frac{f(x_0)-f^*}{\epsilon \rho}\right),

where  R_L(x)=\max_{y} \max_{x^* \in X^*} \{ \|y-x^*\|_L : f(y)\leq f(x) \},  f^* = \min_{x\in R^n} f(x) is the optimal value,  \rho\in(0,1) is a confidence level and  \epsilon>0 is the target accuracy, then

 Prob(f(x_k)-f^*> \epsilon) \leq \rho.
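
To get a rough sense of the numbers involved, one can evaluate the bound directly; all quantities below (n, R_L^2(x_0), the initial gap, \epsilon, \rho) are assumed values for illustration only:

  import math

  n, R2, eps, rho = 100, 1.0, 1e-3, 0.01    # assumed size, R_L^2(x_0), accuracy, confidence
  gap = 1.0                                 # assumed initial gap f(x_0) - f*
  k = 2 * n * R2 / eps * math.log(gap / (eps * rho))
  print(math.ceil(k))                       # iterations so that Prob(f(x_k) - f* > eps) <= rho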

Example on particular function

The following figure shows how x_k develops during the iterations, in principle. The problem is

 f(x) = \tfrac{1}{2} x^T \begin{pmatrix} 1 & 0.5 \\ 0.5 & 1 \end{pmatrix} x - \begin{pmatrix} 1.5 & 1.5 \end{pmatrix} x, \quad x_0 = \begin{pmatrix} 0 & 0 \end{pmatrix}^T

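A short Python sketch reproducing this experiment; since A has unit diagonal, L_1 = L_2 = 1, and the minimizer is x^* = (1,1)^T (it solves Ax = b):

  import numpy as np

  A = np.array([[1.0, 0.5],
                [0.5, 1.0]])
  b = np.array([1.5, 1.5])
  L = np.diag(A)                  # L_i = A_ii = 1 for both coordinates
  x = np.zeros(2)                 # x_0 = (0, 0)^T
  rng = np.random.default_rng(0)
  for k in range(50):
      i = rng.integers(2)
      x[i] -= (A[i] @ x - b[i]) / L[i]    # nabla_i f(x) = (A x - b)_i
  print(x)                                # approaches the minimizer (1, 1)^T
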
Extension to Block Coordinate Setting

Blocking coordinate directions into Block coordinate directions

One can naturally extend this algorithm from single coordinates to blocks of coordinates. Assume that we have the space R^5. This space has 5 coordinate directions, namely  e_1 = (1,0,0,0,0)^T, e_2 = (0,1,0,0,0)^T, e_3 = (0,0,1,0,0)^T, e_4 = (0,0,0,1,0)^T, e_5 = (0,0,0,0,1)^T, in which the Random Coordinate Descent Method can move. However, one can group some coordinate directions into blocks, and instead of those 5 coordinate directions one then has 3 block coordinate directions (for example, the blocks \{e_1,e_3\}, \{e_2,e_5\} and \{e_4\}, matching the block decomposition example above).
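
A minimal sketch of the block variant, assuming the objective is supplied through a hypothetical block gradient grad_block and per-block Lipschitz constants L:

  import numpy as np

  def block_rcdm(grad_block, L, blocks, x0, iters, rng=None):
      # Same scheme as above, but each step updates a whole block x^(i)
      # along a block coordinate direction rather than a single coordinate.
      rng = np.random.default_rng() if rng is None else rng
      x = np.array(x0, dtype=float)
      for _ in range(iters):
          i = rng.integers(len(blocks))        # pick a block uniformly at random
          idx = blocks[i]
          x[idx] -= grad_block(x, idx) / L[i]  # x^(i) <- x^(i) - (1/L_i) nabla_i f(x)
      return x

  # e.g. blocks = [[0, 2], [1, 4], [3]] turns the 5 coordinate directions of R^5
  # into 3 block coordinate directions, as in the decomposition example above.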

References

  1. Nesterov, Yurii (2010), "Efficiency of coordinate descent methods on huge-scale optimization problems", CORE Discussion Paper #2010/2.
  2. Richtárik, Peter; Takáč, Martin (2011), "Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function", Mathematical Programming, Ser. A.