

5.3.3 Parallel algorithm

We usually store the stiffness matrix $ K$ in distributed form, but the division by the entries of the main diagonal in algorithm 5.5 requires their accumulated values. Therefore, we have to store the main diagonal separately and accumulate it. The parallelization strategy follows that of the cg method in 5.1.2.
•
Store the accumulated and inverted main diagonal as the vector $ \underline{\mathfrak{d}} = \mathrm{diag}^{-1}\Bigl(\sum_{s=1}^P A_s^T \mathsf{K}_s A_s\Bigr)$.
•
Matrix $ K$ is stored as a distributed (type-II) matrix $ \mathsf{K}$ (4.1).
$ \Longrightarrow$
Vector $ \underline{u}$ $ \rightarrow$ $ \underline{\mathfrak{u}}$ is stored accumulated (type-I); vectors $ \underline{r}$, $ \underline{f}$ $ \rightarrow$ $ \underline{\mathsf{r}}$, $ \underline{\mathsf{f}}$ are stored distributed.
$ \Longrightarrow$ No communication in the matrix-times-vector operation (4.8).
!!
The inner product requires vectors of the same type
$ \Longrightarrow$ accumulation of $ \underline{\mathsf{r}}$ $ \longrightarrow$ $ \underline{\mathfrak{w}}$.
!!
Using the accumulated vector $ \underline{\mathfrak{w}}$, we can perform the remaining DAXPY operations without any communication.
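
The storage scheme above can be illustrated by a tiny sequential mock-up. The following sketch (all names, the 1D example with two subdomains sharing one interface node, and the numerical values are illustrative assumptions, not from the text) shows that the local residuals $\underline{\mathsf{r}}_s = \underline{\mathsf{f}}_s - \mathsf{K}_s \underline{\mathfrak{u}}_s$ need no communication, and that the only communication step is the accumulation $\underline{\mathfrak{w}} = \sum_s A_s^T \underline{\mathsf{r}}_s$:

```python
# Hypothetical 1D example: 5 nodes, two subdomains sharing interface node 2.
# A_s is represented by the local-to-global index list loc[s].

def matvec(K, u):
    return [sum(K[i][j] * u[j] for j in range(len(u))) for i in range(len(K))]

# local stiffness matrices K_s -- distributed (type-II) storage of K
K_loc = [
    [[1, -1, 0], [-1, 2, -1], [0, -1, 1]],   # subdomain 1: nodes 0,1,2
    [[1, -1, 0], [-1, 2, -1], [0, -1, 1]],   # subdomain 2: nodes 2,3,4
]
loc = [[0, 1, 2], [2, 3, 4]]                 # columns of A_s
n = 5

u = [0.0, 1.0, 2.0, 3.0, 4.0]                # accumulated (type-I) vector
f_loc = [[0.5, 1.0, 0.5], [0.5, 1.0, 0.5]]   # distributed (type-II) load

# local residuals: no communication, since u is known on every subdomain
# and K_s, f_s are stored locally
r_loc = [[f_loc[s][i] - matvec(K_loc[s], [u[g] for g in loc[s]])[i]
          for i in range(3)] for s in range(2)]

def accumulate(v_loc):
    """w = sum_s A_s^T v_s -- the only communication step."""
    w = [0.0] * n
    for s in range(2):
        for i, g in enumerate(loc[s]):
            w[g] += v_loc[s][i]
    return w

w = accumulate(r_loc)   # accumulated residual, equals f - K*u globally
```

Note that only the entry at the interface node differs between the local residuals and the global one; accumulation adds the two contributions there.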

Algorithm (Parallel Jacobi iteration):

Choose an initial guess $ \underline{\mathfrak{u}}^0$ and set $ \underline{\mathsf{r}}^0 := \underline{\mathsf{f}} - \mathsf{K}\,\underline{\mathfrak{u}}^0$.
for $ k := 0, 1, 2, \ldots$ until termination
  $ \underline{\mathfrak{w}}^k := \sum_{s=1}^P A_s^T \underline{\mathsf{r}}^k$   [accumulation]
  $ \sigma^k := \langle \underline{\mathfrak{w}}^k, \underline{\mathsf{r}}^k \rangle$   [ALL_REDUCE]
  $ \underline{\mathfrak{u}}^{k+1} := \underline{\mathfrak{u}}^k + \omega\,(\underline{\mathfrak{d}} \circledast \underline{\mathfrak{w}}^k)$
  $ \underline{\mathsf{r}}^{k+1} := \underline{\mathsf{f}} - \mathsf{K}\,\underline{\mathfrak{u}}^{k+1}$
end
Here, we denote by $ \underline{\mathfrak{d}} \circledast \underline{\mathfrak{w}}$ the component-wise multiplication of two vectors, i.e., $ \{\mathfrak{d}_i \cdot \mathfrak{w}_i\}_{i=\overline{1,n}}$. This is nothing other than the multiplication of a vector by a diagonal matrix. If the relaxation parameter $ \omega$ remains constant during the iteration ($ \omega \neq \omega(k)$), then the vector $ \underline{\mathfrak{d}}$ can be scaled by $ \omega$ once in the initial step.
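
A minimal numerical check of both remarks, with made-up values for $\underline{\mathfrak{d}}$, $\underline{\mathfrak{w}}$ and $\omega$: the component-wise product equals multiplication by the diagonal matrix $\mathrm{diag}(\mathfrak{d})$, and pre-scaling $\underline{\mathfrak{d}}$ by a constant $\omega$ gives the same update while saving one multiplication per component and iteration:

```python
# d (*) w versus diag(d) * w, and pre-scaling d by a constant omega
d = [0.5, 0.25, 0.2]      # accumulated, inverted main diagonal (made up)
w = [2.0, 4.0, 10.0]
omega = 0.8

dw = [di * wi for di, wi in zip(d, w)]                           # d (*) w
D = [[d[i] if i == j else 0.0 for j in range(3)] for i in range(3)]
Dw = [sum(D[i][j] * w[j] for j in range(3)) for i in range(3)]   # diag(d) * w

d_scaled = [omega * di for di in d]   # done once, before the iteration starts
```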
Each step of the Jacobi iteration requires one vector accumulation and one ALL_REDUCE of a real number.

When the Jacobi iteration is used as a smoother with a fixed number of iterations, the inner product no longer has to be computed. This saves the ALL_REDUCE operation in the parallel code, and the vectors $ \underline{\mathsf{r}}$ and $ \underline{\mathfrak{w}}$ can be stored in the same place.
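
The smoother variant can be sketched as follows (the damping factor $\omega = 0.5$, the sweep count $\nu$ and the test system are illustrative assumptions): the loop runs a fixed number of sweeps, computes no inner product, and uses a single vector for the residual, mirroring the shared storage of $\underline{\mathsf{r}}$ and $\underline{\mathfrak{w}}$:

```python
# Fixed-sweep damped Jacobi smoother: no inner product, hence no ALL_REDUCE.

def jacobi_smoother(K, f, u, d, omega=0.5, nu=2):
    """Perform nu damped Jacobi sweeps; d holds the inverted main diagonal."""
    n = len(f)
    for _ in range(nu):
        # in the parallel code r is overwritten in place by the accumulated w,
        # so one vector suffices for both
        r = [f[i] - sum(K[i][j] * u[j] for j in range(n)) for i in range(n)]
        u = [u[i] + omega * d[i] * r[i] for i in range(n)]
    return u

K = [[2, -1, 0], [-1, 2, -1], [0, -1, 2]]
f = [1.0, 1.0, 1.0]
d = [1 / K[i][i] for i in range(3)]
u = jacobi_smoother(K, f, [0.0, 0.0, 0.0], d)
```

Two sweeps suffice to damp the high-frequency error components, which is the only role the smoother plays inside a multigrid cycle.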


Gundolf Haase 2000-03-20