A miscellaneous blog by two graduate students

# 7-3 Iterative Methods (Jacobi Method, Gauss-Seidel Method) [python]

In Part 7-3, I would like to explain how to solve the matrix equation (in effect, how to apply the inverse matrix) in numerical analysis.

To begin with, in Part 7 we have been proceeding with the idea of solving by the implicit method, and the flow up to the previous article showed that the implicit method can be rearranged into the following form.
$$\bf A \bf x = \bf b$$

We already know $$\bf A$$ and $$\bf b$$, and want to figure out $$\bf x$$.

There are three families of methods for solving such a system: the “direct solution method”, the “stationary iterative method”, and the “nonstationary iterative method”.

①direct solution method
A method that obtains the solution in a finite number of operations. The “Gaussian elimination method” (Part 2-2) and the “TDMA method” (Part 3-2) are direct solution methods.
The answer is guaranteed, but it requires a lot of memory and takes a long time to compute.
It may be used when the nonzero positions are fixed and the calculation can be simplified, as in the TDMA method, but such cases are rare.

②stationary iterative method
A method that performs iterative calculations to converge from an initial vector $$\bf x ^ {(0)}$$ to the true solution $$\bf x ^ *$$. Among iterative methods, this is the family whose convergence rate stays constant over the iterations.

③nonstationary iterative method
A method that performs iterative calculations in the same way as the stationary iterative method, but whose convergence rate is not constant over the iterations.

An iterative method may fail to satisfy its convergence condition (in numerical analysis this is basically not a problem), but when it converges, the amount of computation is smaller than for a direct solution, so the mainstream of numerical analysis is here.

In this article, we will introduce the Jacobi method and the Gauss-Seidel method, which are stationary iterative methods.

It may be hard to see where this is going at first, but follow along.

We use the implicit one-dimensional diffusion equation from the previous article. The discretized form is as follows.

$$\rho C \frac{\Delta T^n_i}{\Delta t}=\frac{1}{\Delta x^2}(\lambda_{i+1/2} (\Delta T_{i+1}^n-\Delta T_i^n )-\lambda_{i-1/2}(\Delta T_{i}^n -\Delta T_{i-1}^n))+R^n_i$$

Attention! $$\Delta T^n_i =T^{n+1}_i-T^n_i, \ \ \ R^n_i=\frac{1}{\Delta x^2}(\lambda_{i+1/2} (T_{i+1}^n-T_i^n)-\lambda_{i-1/2}(T_{i}^n -T_{i-1}^n))$$

Collect the terms containing $$\Delta T_i ^ n$$ on the left side.

$$\left ( \frac{\rho C }{\Delta t}+ \frac{ \lambda_{i+1/2} }{\Delta x^2} + \frac{ \lambda_{i-1/2} }{\Delta x^2} \right ) \Delta T^n_i =\frac{1}{\Delta x^2} \left ( \lambda_{i+1/2} \Delta T_{i+1}^n+\lambda_{i-1/2} \Delta T_{i-1}^n \right )+R^n_i$$

$$\Delta T^n_i =\left [ \frac{1}{\Delta x^2} \left ( \lambda_{i+1/2} \Delta T_{i+1}^n+\lambda_{i-1/2} \Delta T_{i-1}^n \right )+R^n_i \right ] / \left ( \frac{\rho C }{\Delta t}+ \frac{ \lambda_{i+1/2} }{\Delta x^2} + \frac{ \lambda_{i-1/2} }{\Delta x^2} \right )$$

Here, introducing the iteration count $$k$$, we write it as follows.

$$\Delta T^{k+1}_i =\left [ \frac{1}{\Delta x^2} \left ( \lambda_{i+1/2} \Delta T_{i+1}^k+\lambda_{i-1/2} \Delta T_{i-1}^k \right )+R^n_i \right ] / \left ( \frac{\rho C }{\Delta t}+ \frac{ \lambda_{i+1/2} }{\Delta x^2} + \frac{ \lambda_{i-1/2} }{\Delta x^2} \right )$$

Start the calculation of the above equation from a suitable initial value, for example $$\Delta T_i = 0$$ at $$k = 0$$, and keep increasing $$k$$. The calculation is complete when it has converged sufficiently.
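
As a minimal sketch of this iteration, the following assumes constant $$\rho$$, $$C$$, $$\lambda$$, fixed boundary values ($$\Delta T = 0$$ at both ends), and purely illustrative grid parameters:

```python
import numpy as np

# A minimal sketch of one implicit time step solved by Jacobi iteration.
# Constant properties, fixed boundaries (dT = 0 at both ends); all
# parameter values here are illustrative, not from the article.
rho, C, lam = 1.0, 1.0, 1.0
dx, dt = 0.1, 0.01
n_cells = 12

T = np.sin(np.linspace(0.0, np.pi, n_cells))  # temperature field at time level n

# Explicit right-hand side R^n_i (kept at zero on the fixed boundaries)
R = np.zeros(n_cells)
R[1:-1] = (lam * (T[2:] - T[1:-1]) - lam * (T[1:-1] - T[:-2])) / dx**2

diag = rho * C / dt + 2.0 * lam / dx**2   # left-side coefficient of dT_i
dT = np.zeros(n_cells)                    # initial guess dT_i = 0 (k = 0)

for k in range(500):
    dT_new = np.zeros(n_cells)
    # Jacobi sweep: every dT_i^{k+1} is built only from sweep-k values
    dT_new[1:-1] = ((lam * dT[2:] + lam * dT[:-2]) / dx**2 + R[1:-1]) / diag
    converged = np.max(np.abs(dT_new - dT)) < 1e-12
    dT = dT_new
    if converged:
        break

T_next = T + dT    # the converged increment gives the next time step
```

The diagonal coefficient contains the extra $$\rho C / \Delta t$$ term, which is what makes each sweep a contraction here.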

Don’t get confused: the converged result is the solution at the next time step.

In other words, if the solution at a certain time $$n$$ is known and the calculation is performed from $$k=0$$ until it converges at $$k = k^{*}$$, then $$\Delta T^{n}_{i} = \Delta T^{k^{*}}_{i}$$, and the next step is obtained as $$T^{n+1}_{i} = T^{n}_{i} + \Delta T^{k^{*}}_{i}$$.

In addition, $$R ^ n_i$$ in the above equation keeps the superscript $$n$$ because the only quantity being iterated is the desired solution $$\bf x$$ of $$\bf A \bf x =\bf b$$.

This is the Jacobi method.

I will sketch the proof of convergence briefly. Split $$\bf A$$ into its strictly lower triangular part $$\bf L$$, diagonal part $$\bf D$$, and strictly upper triangular part $$\bf U$$.

$$\bf A \bf x=\bf b$$
$$⇔(\bf L+\bf D+\bf U) \bf x=\bf b$$
$$⇔\bf D \bf x=-(\bf L+\bf U)\bf x+\bf b$$
$$⇔\bf x=-\bf D^{-1}(\bf L+\bf U)\bf x+\bf D^{-1} \bf b$$

Replace with $$\bf M = -\bf D^{-1}(\bf L+\bf U)$$ and $$\bf N = \bf D^{-1}\bf b$$ as follows. Also, add $$*$$ to the true solution. $$⇔\bf x^{*}=\bf M \bf x^{*}+\bf N \ \ \ \ \ \ ①$$

The Jacobi method can be written as follows.
$$⇔\bf x^{k+1}=\bf M \bf x^{k}+\bf N \ \ \ \ \ \ ②$$

Then, (①-②)
$$⇔\bf x^{*}-\bf x^{k+1}=\bf M (\bf x^{*}-\bf x^{k})$$
$$⇔\bf x^{*}-\bf x^{k+1}=\bf M \left( (\bf M \bf x^{*}+\bf N)-(\bf M \bf x^{k-1}+\bf N) \right)$$
$$⇔\bf x^{*}-\bf x^{k+1}=\bf M ^2 (\bf x^{*}-\bf x^{k-1})$$
$$\ \ \ \ \ \ \ \ \ \ \ \ \ \vdots$$
$$⇔\bf x^{*}-\bf x^{k+1}=\bf M ^{k+1} (\bf x^{*}-\bf x^{0})$$

Take the 2-norm on both sides.

Think of the 2-norm as a generalization of the distance from the origin in coordinate space.

As a concrete example, it looks like this.
$$\bf x = [1 \ \ 2] ⇔ ||\bf x||=\sqrt{1^2+2^2}=\sqrt{5}$$
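
For instance, the same value computed with NumPy:

```python
import numpy as np

# The 2-norm example above, computed with NumPy.
x = np.array([1.0, 2.0])
norm = np.linalg.norm(x)   # sqrt(1**2 + 2**2) = sqrt(5)
```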

Applying it to both sides (the norm of a product is at most the product of the norms, hence the inequality):
$$⇔||\bf x^{*}-\bf x^{k+1}|| \leq ||\bf M ||^{k+1} \, ||\bf x^{*}-\bf x^{0}||$$

As long as $$||\bf M ||<1$$, then as $$k→\infty$$ we have $$||\bf M ||^{k+1}→0$$, so
$$\bf x^{k+1} → \bf x^{*}$$

$$||\bf M ||<1$$ holds when $$\bf A$$ is diagonally dominant.
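
As a numerical sketch, the fixed-point form $$\bf x^{k+1}=\bf M \bf x^{k}+\bf N$$ and the norm condition can be checked on a small diagonally dominant system (the matrix and right-hand side below are purely illustrative):

```python
import numpy as np

# Illustrative diagonally dominant system (not from the diffusion equation).
A = np.array([[4.0, 1.0, 1.0],
              [1.0, 5.0, 2.0],
              [1.0, 2.0, 6.0]])
b = np.array([6.0, 8.0, 9.0])

D = np.diag(np.diag(A))          # diagonal part D
LU = A - D                       # strictly lower + upper parts L + U
M = -np.linalg.solve(D, LU)      # M = -D^{-1}(L + U)
N_vec = np.linalg.solve(D, b)    # N = D^{-1} b

# Diagonal dominance keeps the infinity norm (max absolute row sum) below 1.
norm_M = np.linalg.norm(M, np.inf)

# Iterate x^{k+1} = M x^k + N from x^0 = 0; the error shrinks each sweep.
x = np.zeros(3)
for k in range(200):
    x = M @ x + N_vec
```

Because $$||\bf M ||_{\infty} < 1$$ here, each sweep shrinks the error by at least that factor, so the iterate approaches the true solution of $$\bf A \bf x = \bf b$$.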

Although we won’t go into detail, in the case of the diffusion equation, the information in cell $$i$$ at the current time contributes strongly to cell $$i$$ at the next time. The matrix therefore automatically becomes diagonally dominant, so this iterative method can be used.

When updating the iteration count, the Jacobi method computes every value of sweep $$k+1$$ using only values from sweep $$k$$.

However, the calculation proceeds in order, so by the time cell $$i+1$$ is updated, the calculation for cell $$i$$ has already been completed.

So we can use the already-updated value of cell $$(i, k+1)$$ to calculate cell $$(i+1, k+1)$$.
This is the Gauss-Seidel method, which converges faster than the Jacobi method.
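
The difference between the two sweeps can be sketched on a small diagonally dominant system (matrix, right-hand side, and tolerance are purely illustrative); the only change is whether a sweep reads already-updated values:

```python
import numpy as np

# Illustrative diagonally dominant system.
A = np.array([[ 4.0, -1.0, -1.0],
              [-1.0,  5.0, -2.0],
              [-1.0, -2.0,  6.0]])
b = np.array([2.0, 1.0, 3.0])

def sweeps_to_converge(gauss_seidel):
    """Count sweeps until the update falls below a small tolerance."""
    x = np.zeros(len(b))
    for k in range(1, 1000):
        x_old = x.copy()
        # Gauss-Seidel reads from x itself (already-updated values);
        # Jacobi reads only from the previous sweep x_old.
        src = x if gauss_seidel else x_old
        for i in range(len(b)):
            s = sum(A[i, j] * src[j] for j in range(len(b)) if j != i)
            x[i] = (b[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) < 1e-10:
            return k
    return -1

jacobi_sweeps = sweeps_to_converge(gauss_seidel=False)
gs_sweeps = sweeps_to_converge(gauss_seidel=True)
```

On this example the Gauss-Seidel count comes out smaller, matching the faster convergence noted above.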

The convergence condition is diagonal dominance here as well.

Jacobi method: easy to parallelize, poor convergence

Gauss-Seidel method: easy to implement, good convergence

Even today, methods derived from the Gauss-Seidel method are still used in the numerical analysis community.

Next time, we will organize the equation for the two-dimensional diffusion equation and put it in a form that can be implemented.