SPIKE algorithm

The SPIKE algorithm is a hybrid parallel solver for banded linear systems developed by Eric Polizzi and Ahmed Sameh.

Overview

The SPIKE algorithm deals with a linear system AX = F, where A is a banded n × n matrix of bandwidth much less than n, and F is an n × s matrix containing s right-hand sides. The algorithm is divided into a preprocessing stage and a postprocessing stage.

Preprocessing stage

In the preprocessing stage, the linear system AX = F is partitioned into a block tridiagonal form


\begin{bmatrix}
\boldsymbol{A}_1 & \boldsymbol{B}_1\\
\boldsymbol{C}_2 & \boldsymbol{A}_2 & \boldsymbol{B}_2\\
& \ddots & \ddots & \ddots\\
& & \boldsymbol{C}_{p-1} & \boldsymbol{A}_{p-1} & \boldsymbol{B}_{p-1}\\
& & & \boldsymbol{C}_p & \boldsymbol{A}_p
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{X}_1\\
\boldsymbol{X}_2\\
\vdots\\
\boldsymbol{X}_{p-1}\\
\boldsymbol{X}_p
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{F}_1\\
\boldsymbol{F}_2\\
\vdots\\
\boldsymbol{F}_{p-1}\\
\boldsymbol{F}_p
\end{bmatrix}.

Assume, for the time being, that the diagonal blocks Aj (j = 1,…,p with p ≥ 2) are nonsingular. Define a block diagonal matrix

D = diag(A1,…,Ap),

then D is also nonsingular. Left-multiplying both sides of the system by D−1 gives


\begin{bmatrix}
\boldsymbol{I} & \boldsymbol{V}_1\\
\boldsymbol{W}_2 & \boldsymbol{I} & \boldsymbol{V}_2\\
& \ddots & \ddots & \ddots\\
& & \boldsymbol{W}_{p-1} & \boldsymbol{I} & \boldsymbol{V}_{p-1}\\
& & & \boldsymbol{W}_p & \boldsymbol{I}
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{X}_1\\
\boldsymbol{X}_2\\
\vdots\\
\boldsymbol{X}_{p-1}\\
\boldsymbol{X}_p
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{G}_1\\
\boldsymbol{G}_2\\
\vdots\\
\boldsymbol{G}_{p-1}\\
\boldsymbol{G}_p
\end{bmatrix},

which is to be solved in the postprocessing stage. Left-multiplication by D−1 is equivalent to solving p systems of the form

Aj[Vj Wj Gj] = [Bj Cj Fj]

(omitting W1 and C1 for j=1, and Vp and Bp for j=p), which can be carried out in parallel.

Due to the banded nature of A, only a few leftmost columns of each Vj and a few rightmost columns of each Wj can be nonzero. These columns are called the spikes.
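
As an illustration, the p independent solves above can be carried out by factorizing each Aj once and reusing the factorization for all right-hand sides (a NumPy/SciPy sketch with made-up sizes; a real implementation would use a banded factorization of Aj):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(0)
n_j, m = 8, 2  # partition size and coupling-block size (made-up)

# A_j: a well-conditioned diagonal block; B_j/C_j couple to the
# next/previous partition, so only their bottom/top m rows are nonzero.
A_j = rng.standard_normal((n_j, n_j)) + n_j * np.eye(n_j)
B_j = np.vstack([np.zeros((n_j - m, m)), rng.standard_normal((m, m))])
C_j = np.vstack([rng.standard_normal((m, m)), np.zeros((n_j - m, m))])
F_j = rng.standard_normal((n_j, 1))

lu = lu_factor(A_j)      # factorize A_j once
V_j = lu_solve(lu, B_j)  # right spike  V_j = A_j^{-1} B_j
W_j = lu_solve(lu, C_j)  # left spike   W_j = A_j^{-1} C_j
G_j = lu_solve(lu, F_j)  # modified right-hand side
```

Each partition performs these solves independently, which is the source of the algorithm's parallelism.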

Postprocessing stage

Without loss of generality, assume that each spike contains exactly m columns, where m is much less than n (padding the spike with columns of zeroes if necessary). Partition the spikes in all Vj and Wj into


\begin{bmatrix}
\boldsymbol{V}_j^{(t)}\\
\boldsymbol{V}_j'\\
\boldsymbol{V}_j^{(b)}
\end{bmatrix}
and 
\begin{bmatrix}
\boldsymbol{W}_j^{(t)}\\
\boldsymbol{W}_j'\\
\boldsymbol{W}_j^{(b)}\\
\end{bmatrix}

where Vj(t), Vj(b), Wj(t) and Wj(b) are of dimensions m × m. Partition similarly all Xj and Gj into


\begin{bmatrix}
\boldsymbol{X}_j^{(t)}\\
\boldsymbol{X}_j'\\
\boldsymbol{X}_j^{(b)}
\end{bmatrix}
and 
\begin{bmatrix}
\boldsymbol{G}_j^{(t)}\\
\boldsymbol{G}_j'\\
\boldsymbol{G}_j^{(b)}\\
\end{bmatrix}.

Notice that the system produced by the preprocessing stage can be reduced to a block pentadiagonal system of much smaller size (recall that m is much less than n)


\begin{bmatrix}
\boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\
\boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{(t)}\\
& \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)} & \boldsymbol{0} \\
& & \ddots & \ddots & \ddots & \ddots & \ddots\\
& & & \boldsymbol{0} & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{p-1}^{(t)}\\
& & & & \boldsymbol{W}_{p-1}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)} & \boldsymbol{0}\\
& & & & & \boldsymbol{0} & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\
& & & & & & \boldsymbol{W}_p^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{X}_1^{(t)}\\
\boldsymbol{X}_1^{(b)}\\
\boldsymbol{X}_2^{(t)}\\
\boldsymbol{X}_2^{(b)}\\
\vdots\\
\boldsymbol{X}_{p-1}^{(t)}\\
\boldsymbol{X}_{p-1}^{(b)}\\
\boldsymbol{X}_p^{(t)}\\
\boldsymbol{X}_p^{(b)}
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{G}_1^{(t)}\\
\boldsymbol{G}_1^{(b)}\\
\boldsymbol{G}_2^{(t)}\\
\boldsymbol{G}_2^{(b)}\\
\vdots\\
\boldsymbol{G}_{p-1}^{(t)}\\
\boldsymbol{G}_{p-1}^{(b)}\\
\boldsymbol{G}_p^{(t)}\\
\boldsymbol{G}_p^{(b)}
\end{bmatrix}\text{,}

which we call the reduced system and denote by S̃X̃ = G̃.

Once all Xj(t) and Xj(b) are found, all Xj can be recovered with perfect parallelism via


\begin{cases}
\boldsymbol{X}_1'=\boldsymbol{G}_1'-\boldsymbol{V}_1'\boldsymbol{X}_2^{(t)}\text{,}\\
\boldsymbol{X}_j'=\boldsymbol{G}_j'-\boldsymbol{V}_j'\boldsymbol{X}_{j+1}^{(t)}-\boldsymbol{W}_j'\boldsymbol{X}_{j-1}^{(b)}\text{,} & j=2,\ldots,p-1\text{,}\\
\boldsymbol{X}_p'=\boldsymbol{G}_p'-\boldsymbol{W}_p'\boldsymbol{X}_{p-1}^{(b)}\text{.}
\end{cases}
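
This recovery step can be sketched as follows (a NumPy sketch; the helper name and the list-of-arrays layout are assumptions, with 0-based partition indices):

```python
import numpy as np

def recover_interior(Vp, Wp, Gp, Xt, Xb):
    """Recover the interior parts X'_j from the interface solutions.

    Vp[j], Wp[j]: interior rows of the spikes V'_j, W'_j for partition j
    Gp[j]:        interior rows of the modified right-hand side G'_j
    Xt[j], Xb[j]: interface blocks X_j^(t), X_j^(b)
    """
    p = len(Gp)
    X = [None] * p
    X[0] = Gp[0] - Vp[0] @ Xt[1]                  # first partition: no W term
    for j in range(1, p - 1):                     # each iteration is independent
        X[j] = Gp[j] - Vp[j] @ Xt[j + 1] - Wp[j] @ Xb[j - 1]
    X[p - 1] = Gp[p - 1] - Wp[p - 1] @ Xb[p - 2]  # last partition: no V term
    return X
```

Since each X'_j depends only on already-known interface values, the loop can be distributed across processes with no communication.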

SPIKE as a polyalgorithmic banded linear system solver

Despite being logically divided into two stages, computationally, the SPIKE algorithm comprises three stages:

  1. factorizing the diagonal blocks,
  2. computing the spikes,
  3. solving the reduced system.

Each of these stages can be accomplished in several ways, allowing a multitude of variants. Two notable variants are the recursive SPIKE algorithm for non-diagonally-dominant cases and the truncated SPIKE algorithm for diagonally-dominant cases. Depending on the variant, a system can be solved either exactly or approximately. In the latter case, SPIKE is used as a preconditioner for iterative schemes like Krylov subspace methods and iterative refinement.

Recursive SPIKE

Preprocessing stage

The first step of the preprocessing stage is to factorize the diagonal blocks Aj. For numerical stability, one can use LAPACK's XGBTRF routines to LU factorize them with partial pivoting. Alternatively, one can also factorize them without partial pivoting but with a "diagonal boosting" strategy. The latter method tackles the issue of singular diagonal blocks.

In concrete terms, the diagonal boosting strategy is as follows. Let ε denote a configurable "machine zero". In each step of the LU factorization, we require that the pivot satisfy the condition

|\mathrm{pivot}|>\epsilon\lVert\boldsymbol{A}_j\rVert_1\text{.}

If the pivot does not satisfy the condition, it is then boosted by


\mathrm{pivot}=
\begin{cases}
\mathrm{pivot}+\epsilon\lVert\boldsymbol{A}_j\rVert_1 & \text{if }\mathrm{pivot}\geq 0\text{,}\\
\mathrm{pivot}-\epsilon\lVert\boldsymbol{A}_j\rVert_1 & \text{if }\mathrm{pivot}<0
\end{cases}

where ε is a positive parameter depending on the machine's unit roundoff, and the factorization continues with the boosted pivot. This can be achieved by modified versions of ScaLAPACK's XDBTRF routines. After the diagonal blocks are factorized, the spikes are computed and passed on to the postprocessing stage.
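
The boosting test can be illustrated on a dense LU factorization without pivoting (a toy sketch only; real implementations modify banded factorization routines, and eps stands for the parameter ε above):

```python
import numpy as np

def lu_boosted(A, eps=1e-10):
    """LU without pivoting; pivots too close to zero are boosted away from 0."""
    U = A.astype(float).copy()
    n = U.shape[0]
    norm1 = np.linalg.norm(A, 1)  # ||A_j||_1, fixed before factorization
    L = np.eye(n)
    boosted = 0
    for k in range(n):
        if abs(U[k, k]) <= eps * norm1:  # pivot fails the condition: boost it
            U[k, k] += eps * norm1 if U[k, k] >= 0 else -eps * norm1
            boosted += 1
        L[k+1:, k] = U[k+1:, k] / U[k, k]
        U[k+1:, k:] -= np.outer(L[k+1:, k], U[k, k:])
    return L, np.triu(U), boosted
```

The factors then satisfy LU ≈ A up to the small diagonal perturbations, which is why SPIKE pairs this strategy with an outer iterative scheme when high accuracy is needed.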

Postprocessing stage

The two-partition case

In the two-partition case, i.e., when p = 2, the reduced system S̃X̃ = G̃ has the form


\begin{bmatrix}
\boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\
\boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\
& \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{X}_1^{(t)}\\
\boldsymbol{X}_1^{(b)}\\
\boldsymbol{X}_2^{(t)}\\
\boldsymbol{X}_2^{(b)}
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{G}_1^{(t)}\\
\boldsymbol{G}_1^{(b)}\\
\boldsymbol{G}_2^{(t)}\\
\boldsymbol{G}_2^{(b)}
\end{bmatrix}\text{.}

An even smaller system can be extracted from the center:


\begin{bmatrix}
\boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\
\boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{X}_1^{(b)}\\
\boldsymbol{X}_2^{(t)}
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{G}_1^{(b)}\\
\boldsymbol{G}_2^{(t)}
\end{bmatrix}\text{,}

which can be solved using the block LU factorization


\begin{bmatrix}
\boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\
\boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{I}_m\\
\boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\
& \boldsymbol{I}_m-\boldsymbol{W}_2^{(t)}\boldsymbol{V}_1^{(b)}
\end{bmatrix}\text{.}
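
Using this factorization, the center system reduces to one solve with the Schur complement Im − W2(t)V1(b) (a NumPy sketch; the function name is an assumption):

```python
import numpy as np

def solve_center(V1b, W2t, G1b, G2t):
    """Solve [[I, V1b], [W2t, I]] [X1b; X2t] = [G1b; G2t] via block LU."""
    m = V1b.shape[0]
    # Forward substitution with the unit lower block factor:
    #   Y1 = G1b,  Y2 = G2t - W2t @ Y1
    Y2 = G2t - W2t @ G1b
    # Back substitution with the upper factor, whose trailing block is
    # the Schur complement I - W2t @ V1b:
    X2t = np.linalg.solve(np.eye(m) - W2t @ V1b, Y2)
    X1b = G1b - V1b @ X2t
    return X1b, X2t
```

Only one m × m system per right-hand side needs a direct solve; the rest is matrix multiplication.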

Once X1(b) and X2(t) are found, X1(t) and X2(b) can be computed via


\begin{cases}
\boldsymbol{X}_1^{(t)}=\boldsymbol{G}_1^{(t)}-\boldsymbol{V}_1^{(t)}\boldsymbol{X}_2^{(t)}\text{,}\\
\boldsymbol{X}_2^{(b)}=\boldsymbol{G}_2^{(b)}-\boldsymbol{W}_2^{(b)}\boldsymbol{X}_1^{(b)}\text{.}
\end{cases}

The multiple-partition case

Assume that p is a power of two, i.e., p = 2d. Consider a block diagonal matrix

\boldsymbol{\tilde{D}}_1=\mathrm{diag}(\boldsymbol{\tilde{D}}_1^{[1]},\ldots,\boldsymbol{\tilde{D}}_{p/2}^{[1]})

where


\boldsymbol{\tilde{D}}_k^{[1]}=
\begin{bmatrix}
\boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{2k-1}^{(t)}\\
\boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{2k-1}^{(b)} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{W}_{2k}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\
& \boldsymbol{W}_{2k}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m
\end{bmatrix}

for k = 1,…,p/2. Notice that D̃1 essentially consists of diagonal blocks of order 4m extracted from S̃. Now we factorize S̃ as

S̃ = D̃1S̃2.

The new matrix S̃2 has the form


\begin{bmatrix}
\boldsymbol{I}_{3m} & \boldsymbol{0} & \boldsymbol{V}_1^{[2](t)}\\
\boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{[2](b)} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{W}_2^{[2](t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{[2](t)}\\
& \boldsymbol{W}_2^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_{3m} & \boldsymbol{V}_2^{[2](b)} & \boldsymbol{0} \\
& & \ddots & \ddots & \ddots & \ddots & \ddots\\
& & & \boldsymbol{0} & \boldsymbol{W}_{p/2-1}^{[2](t)} & \boldsymbol{I}_{3m} & \boldsymbol{0} & \boldsymbol{V}_{p/2-1}^{[2](t)}\\
& & & & \boldsymbol{W}_{p/2-1}^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p/2-1}^{[2](b)} & \boldsymbol{0}\\
& & & & & \boldsymbol{0} & \boldsymbol{W}_{p/2}^{[2](t)} & \boldsymbol{I}_m & \boldsymbol{0}\\
& & & & & & \boldsymbol{W}_{p/2}^{[2](b)} & \boldsymbol{0} & \boldsymbol{I}_{3m}
\end{bmatrix}\text{.}

Its structure is very similar to that of S̃, differing only in the number of spikes and their height (their width stays the same at m). Thus, a similar factorization step can be performed on S̃2 to produce

S̃2 = D̃2S̃3

and

S̃ = D̃1D̃2S̃3.

Such factorization steps can be performed recursively. After d − 1 steps, we obtain the factorization

S̃ = D̃1⋯D̃d−1S̃d,

where S̃d has only two spikes. The reduced system is then solved via


\boldsymbol{\tilde{X}}=\boldsymbol{\tilde{S}}_d^{-1}\boldsymbol{\tilde{D}}_{d-1}^{-1}\cdots\boldsymbol{\tilde{D}}_1^{-1}\boldsymbol{\tilde{G}}\text{.}

The block LU factorization technique in the two-partition case can be used to handle the solving steps involving D̃1, …, D̃d−1 and S̃d, since they amount to solving multiple independent systems in generalized two-partition form.

Generalization to cases where p is not a power of two is almost trivial.

Truncated SPIKE

When A is diagonally-dominant, in the reduced system


\begin{bmatrix}
\boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_1^{(t)}\\
\boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)} & \boldsymbol{0}\\
\boldsymbol{0} & \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_2^{(t)}\\
& \boldsymbol{W}_2^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)} & \boldsymbol{0} \\
& & \ddots & \ddots & \ddots & \ddots & \ddots\\
& & & \boldsymbol{0} & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m & \boldsymbol{0} & \boldsymbol{V}_{p-1}^{(t)}\\
& & & & \boldsymbol{W}_{p-1}^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)} & \boldsymbol{0}\\
& & & & & \boldsymbol{0} & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m & \boldsymbol{0}\\
& & & & & & \boldsymbol{W}_p^{(b)} & \boldsymbol{0} & \boldsymbol{I}_m
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{X}_1^{(t)}\\
\boldsymbol{X}_1^{(b)}\\
\boldsymbol{X}_2^{(t)}\\
\boldsymbol{X}_2^{(b)}\\
\vdots\\
\boldsymbol{X}_{p-1}^{(t)}\\
\boldsymbol{X}_{p-1}^{(b)}\\
\boldsymbol{X}_p^{(t)}\\
\boldsymbol{X}_p^{(b)}
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{G}_1^{(t)}\\
\boldsymbol{G}_1^{(b)}\\
\boldsymbol{G}_2^{(t)}\\
\boldsymbol{G}_2^{(b)}\\
\vdots\\
\boldsymbol{G}_{p-1}^{(t)}\\
\boldsymbol{G}_{p-1}^{(b)}\\
\boldsymbol{G}_p^{(t)}\\
\boldsymbol{G}_p^{(b)}
\end{bmatrix}\text{,}

the blocks Vj(t) and Wj(b) are often negligible. With them omitted, the reduced system becomes block diagonal


\begin{bmatrix}
\boldsymbol{I}_m\\
& \boldsymbol{I}_m & \boldsymbol{V}_1^{(b)}\\
& \boldsymbol{W}_2^{(t)} & \boldsymbol{I}_m\\
& & & \boldsymbol{I}_m & \boldsymbol{V}_2^{(b)}\\
& & & \ddots & \ddots & \ddots\\
& & & & \boldsymbol{W}_{p-1}^{(t)} & \boldsymbol{I}_m\\
& & & & & & \boldsymbol{I}_m & \boldsymbol{V}_{p-1}^{(b)}\\
& & & & & & \boldsymbol{W}_p^{(t)} & \boldsymbol{I}_m\\
& & & & & & & & \boldsymbol{I}_m
\end{bmatrix}
\begin{bmatrix}
\boldsymbol{X}_1^{(t)}\\
\boldsymbol{X}_1^{(b)}\\
\boldsymbol{X}_2^{(t)}\\
\boldsymbol{X}_2^{(b)}\\
\vdots\\
\boldsymbol{X}_{p-1}^{(t)}\\
\boldsymbol{X}_{p-1}^{(b)}\\
\boldsymbol{X}_p^{(t)}\\
\boldsymbol{X}_p^{(b)}
\end{bmatrix}
=
\begin{bmatrix}
\boldsymbol{G}_1^{(t)}\\
\boldsymbol{G}_1^{(b)}\\
\boldsymbol{G}_2^{(t)}\\
\boldsymbol{G}_2^{(b)}\\
\vdots\\
\boldsymbol{G}_{p-1}^{(t)}\\
\boldsymbol{G}_{p-1}^{(b)}\\
\boldsymbol{G}_p^{(t)}\\
\boldsymbol{G}_p^{(b)}
\end{bmatrix}

and can easily be solved in parallel.
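
Concretely, the truncated reduced system decouples: X1(t) = G1(t), Xp(b) = Gp(b), and each partition interface yields an independent 2m × 2m system (a NumPy sketch; the function name and list layout are assumptions):

```python
import numpy as np

def solve_truncated(Vb, Wt, Gb, Gt):
    """Solve the truncated reduced system, one 2m-by-2m solve per interface.

    For each interface i between partitions i and i+1 (0-based lists):
        [[I,     Vb[i]],   [X_i^(b)    ]   [Gb[i]]
         [Wt[i], I    ]] @ [X_{i+1}^(t)] = [Gt[i]]
    """
    Xb, Xt = [], []
    for V, W, gb, gt in zip(Vb, Wt, Gb, Gt):
        m = V.shape[0]
        M = np.block([[np.eye(m), V], [W, np.eye(m)]])
        x = np.linalg.solve(M, np.concatenate([gb, gt]))
        Xb.append(x[:m])
        Xt.append(x[m:])
    return Xb, Xt  # each interface is solved independently (parallel-friendly)
```

No communication between interfaces is needed, which is what makes the truncated variant attractive for diagonally dominant systems.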

The truncated SPIKE algorithm can be wrapped inside some outer iterative scheme (e.g., BiCGSTAB or iterative refinement) to improve the accuracy of the solution.

SPIKE for tridiagonal systems

The first SPIKE partitioning and algorithm was designed as a means to improve the stability properties of a parallel Givens rotations-based solver for tridiagonal systems. A version of the algorithm, termed g-Spike, based on serial Givens rotations applied independently on each block, was designed for the NVIDIA GPU. A SPIKE-based algorithm for the GPU that is based on a special block diagonal pivoting strategy has also been described.

SPIKE as a preconditioner

The SPIKE algorithm can also function as a preconditioner for iterative methods for solving linear systems. To solve a linear system Ax = b using a SPIKE-preconditioned iterative solver, one extracts center bands from A to form a banded preconditioner M and solves linear systems involving M in each iteration with the SPIKE algorithm.
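
A minimal single-process illustration of this setup with SciPy, where a direct banded solve stands in for a parallel SPIKE solve of M (the matrix, bandwidth k, and density are made-up):

```python
import numpy as np
from scipy.sparse import eye, random as sparse_random
from scipy.sparse.linalg import LinearOperator, gmres
from scipy.linalg import solve_banded

rng = np.random.default_rng(1)
n, k = 200, 2  # problem size and preconditioner bandwidth (assumed)

# A test matrix whose heavy entries sit on the diagonal
A = (4.0 * eye(n) + 0.5 * sparse_random(n, n, density=0.02,
                                        random_state=1)).tocsr()

# Extract the center band of A into LAPACK banded storage to form M
ab = np.zeros((2 * k + 1, n))
for d in range(-k, k + 1):
    ab[k - d, max(d, 0):n + min(d, 0)] = A.diagonal(d)

# Applying the preconditioner = solving a banded system with M
M = LinearOperator((n, n), matvec=lambda r: solve_banded((k, k), ab, r))

b = rng.standard_normal(n)
x, info = gmres(A, b, M=M)  # info == 0 signals convergence
```

In a full SPIKE-preconditioned solver, the `solve_banded` call would be replaced by the parallel SPIKE stages applied to M.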

In order for the preconditioner to be effective, row and/or column permutation is usually necessary to move “heavy” elements of A close to the diagonal so that they are covered by the preconditioner. This can be accomplished by computing the weighted spectral reordering of A.

The SPIKE algorithm can be generalized by not restricting the preconditioner to be strictly banded. In particular, the diagonal block in each partition can be a general matrix and thus handled by a direct general linear system solver rather than a banded solver. This enhances the preconditioner, and hence allows better chance of convergence and reduces the number of iterations.

Implementations

Intel offers an implementation of the SPIKE algorithm under the name Intel Adaptive Spike-Based Solver. Tridiagonal solvers have also been developed for the NVIDIA GPU and the Xeon Phi co-processors. One of these methods is the basis for a tridiagonal solver in the cuSPARSE library.[1] The Givens rotations-based solver was also implemented for the GPU and the Intel Xeon Phi.[2]

References

  1. ^ Polizzi, E.; Sameh, A. H. (2006). "A parallel hybrid banded system solver: the SPIKE algorithm". Parallel Computing 32 (2): 177–194. doi:10.1016/j.parco.2005.07.005. 
  2. ^ Polizzi, E.; Sameh, A. H. (2007). "SPIKE: A parallel environment for solving banded linear systems". Computers & Fluids 36: 113–141. doi:10.1016/j.compfluid.2005.07.005. 
  3. ^ Mikkelsen, C. C. K.; Manguoglu, M. (2008). "Analysis of the Truncated SPIKE Algorithm". SIAM. J. Matrix Anal. & Appl. 30 (4): 1500–1519. doi:10.1137/080719571. 
  4. ^ Manguoglu, M.; Sameh, A. H.; Schenk, O. (2009). "PSPIKE: A parallel hybrid sparse linear system solver". Lecture Notes in Computer Science 5704: 797–808. Bibcode:2009LNCS.5704..797M. doi:10.1007/978-3-642-03869-3_74. 
  5. ^ "Intel Adaptive Spike-Based Solver - Intel Software Network". Retrieved 2009-03-23. 
  6. ^ Sameh, A. H.; Kuck, D. J. (1978). "On Stable Parallel Linear System Solvers". Journal of the ACM 25: 81. doi:10.1145/322047.322054. 
  7. ^ Venetis, I.E.; Kouris, A.; Sobczyk, A.; Gallopoulos, E.; Sameh, A. H. (2015). "A direct tridiagonal solver based on Givens rotations for GPU architectures". Parallel Computing 25: 101–116. doi:10.1016/j.parco.2015.03.008. 
  8. ^ Chang, L.-W.; Stratton, J.; Kim, H.; Hwu, W.-M. (2012). "A scalable, numerically stable, high-performance tridiagonal solver using GPUs". Proc. Int’l. Conf. High Performance Computing, Networking Storage and Analysis (SC'12) (Los Alamitos, CA, USA: IEEE Computer Soc. Press): 27:1–27:11. ISBN 978-1-4673-0804-5. 

See also

  1. Gallopoulos, E.; Philippe, B.; Sameh, A.H. (2015). Parallelism in Matrix Computations. Springer. ISBN 978-94-017-7188-7. 
This article is issued from Wikipedia - version of the Monday, February 08, 2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.