5.4.4.2 Variant 2 : Gauß-Seidel-$ \omega $-Jacobi

The large number of communication steps in variant 1 was caused by the coupling of unknowns on the interface within the same iteration. By using only the old $ (k-1)^{\text{th}}$ iterates one can omit this coupling and also drop the additional requirements on the mesh.
Between the blocks "$ V,E,I$" and inside the subdomains (block "$ I$") a Gauß-Seidel iteration is performed. Due to the partial use of the Jacobi iteration, the convergence rate is worse than that of a pure Gauß-Seidel iteration.
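As a minimal sequential sketch of this decoupling (assuming a hypothetical 1D model problem $ -u''=f$ with two subdomains meeting at a single interface node; the names `N`, `IFACE`, and `hybrid_sweep` are illustrative and not from the text), every coupling across the interface reads only the old $ (k-1)^{\text{th}}$ iterates, so both subdomains could sweep independently in parallel:

```python
# Hypothetical 1D sketch: -u'' = f on (0,1), zero boundary values,
# N interior unknowns split into two "subdomains" at one interface node.
# Inside each subdomain the sweep is Gauss-Seidel (new values); every
# coupling across the interface uses only old (k-1)th iterates.
N = 9                 # interior grid points (illustrative choice)
IFACE = N // 2        # index of the hypothetical interface unknown
h = 1.0 / (N + 1)

def hybrid_sweep(u, f):
    """One hybrid sweep for the (-1, 2, -1)/h^2 stencil."""
    old = u[:]                            # keep the (k-1)th iterates
    for i in range(N):
        if i - 1 < 0:
            left = 0.0
        elif i == IFACE or i - 1 == IFACE:
            left = old[i - 1]             # Jacobi coupling across the interface
        else:
            left = u[i - 1]               # Gauss-Seidel inside the subdomain
        right = old[i + 1] if i + 1 < N else 0.0
        u[i] = 0.5 * (h * h * f[i] + left + right)
    return u

u, f = [0.0] * N, [1.0] * N
for _ in range(1000):
    hybrid_sweep(u, f)
x = (IFACE + 1) * h                       # exact solution: u(x) = x(1-x)/2
print(abs(u[IFACE] - 0.5 * x * (1 - x)) < 1e-6)   # -> True
```

Because the interface update never waits for new values from the other subdomain, the two halves could be assigned to different processes with a single data exchange per iteration.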
[Algorithm 5.10: Parallel Gauß-Seidel-$ \omega $-Jacobi iteration]
The operation $ \underline{{\ensuremath{\color{red}\mathfrak{d}}}} \circledast \underline{{\ensuremath{\color{red}\mathfrak{w}}}}$ in Alg. 5.10 denotes the component-wise multiplication of two vectors. We additionally took into account that in the interior of the domains ("$ I$") accumulated and distributed vectors/matrices are identical.
The accumulation of cross points ("$ V$") and interface data ("$ E$") is usually performed separately, so that we need exactly the same amount of communication as in the $ \omega $-Jacobi iteration (Alg. 5.6)!
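As a small illustration (the name `hadamard` is ours, not the book's), the $ \circledast $ operation amounts to one multiplication per unknown and involves no communication at all, since both factors are stored on the same process:

```python
# Hypothetical illustration of the component-wise product in Alg. 5.10:
# applying an (accumulated) inverse-diagonal vector d to a vector w is a
# Hadamard product -- one local multiply per unknown, no communication.
def hadamard(d, w):
    return [di * wi for di, wi in zip(d, w)]

print(hadamard([0.5, 0.25, 0.5], [2.0, 4.0, 2.0]))   # -> [1.0, 1.0, 1.0]
```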
If the iteration is used as a smoother with a fixed number of iterations, the calculation of the inner product is no longer necessary. This saves the ALL_REDUCE operation in the parallel code, and the vectors $ \underline{{\ensuremath{\color{green}{\sf r}}}}$ and $ \underline{{\ensuremath{\color{red}\mathfrak{w}}}}$ can be stored in one place.
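A minimal sketch of that point (assuming a damped-Jacobi sweep on the same hypothetical 1D stencil; `jacobi_sweep`, `smooth`, and `nu` are illustrative names): with a fixed sweep count there is no stopping test, hence no inner product and no global reduction anywhere in the loop:

```python
# With a fixed number nu of sweeps there is no convergence test, so the
# inner product -- and with it the ALL_REDUCE of a parallel code -- vanishes.
# The sweep here is a damped Jacobi step on the 1D (-1, 2, -1)/h^2 stencil.
N, h = 9, 0.1         # illustrative 1D model problem

def jacobi_sweep(u, f, omega=0.8):
    old = u[:]
    for i in range(N):
        left = old[i - 1] if i > 0 else 0.0
        right = old[i + 1] if i < N - 1 else 0.0
        u[i] = (1 - omega) * old[i] + omega * 0.5 * (h * h * f[i] + left + right)

def smooth(u, f, nu=2):
    for _ in range(nu):      # fixed nu: no residual norm, no reduction needed
        jacobi_sweep(u, f)
    return u
```

In a multigrid context such a `smooth` would typically be called with a small `nu` before and after each coarse-grid correction.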
Gundolf Haase 2000-03-20