A miscellaneous blog by two graduate students

# 7-4 2D Diffusion Equation (Theory for Coding) [python]

In Part 7-4, we transform the 2D diffusion equation into a form that can be implemented in code.

$$\rho C \frac{\partial T}{\partial t}=\frac{\partial}{\partial x} \left( \lambda \frac{\partial T}{\partial x} \right) + \frac{\partial}{\partial y} \left( \lambda \frac{\partial T}{\partial y} \right)$$

We discretize in time with first-order accuracy using the implicit method, following the procedure from Part 7-2.

The left side is the same as before.

$$\rho C \frac{\partial T}{\partial t} \simeq \rho C \frac{\Delta T^n_{i,j}}{\Delta t}$$

$$\Delta T^n_{i,j}= T^{n+1}_{i,j}- T^{n}_{i,j}$$

The first term on the right side is written as follows. For more details, please refer to Part 7-2.

$$\frac{\partial}{\partial x} \left( \lambda \frac{\partial T}{\partial x} \right) \simeq \frac{\partial}{\partial x} \left( \lambda \frac{\partial T^{n+1}}{\partial x} \right)$$

$$\simeq \frac{ \lambda_{i+1/2,j} }{\Delta x^2} (\Delta T_{i+1,j}^n +T_{i+1,j}^n-\Delta T_{i,j}^n -T_{i,j}^n )$$

$$-\frac{ \lambda_{i-1/2,j} }{\Delta x^2} (\Delta T_{i,j}^n +T_{i,j}^n -\Delta T_{i-1,j}^n -T_{i-1,j}^n)$$

Do the same for the second term on the right side.

$$\frac{\partial}{\partial y} \left( \lambda \frac{\partial T}{\partial y} \right)$$

$$\simeq \frac{ \lambda_{i,j+1/2} }{\Delta y^2} (\Delta T_{i,j+1}^n +T_{i,j+1}^n-\Delta T_{i,j}^n -T_{i,j}^n )$$

$$- \frac{ \lambda_{i,j-1/2} }{\Delta y^2} (\Delta T_{i,j}^n +T_{i,j}^n -\Delta T_{i,j-1}^n -T_{i,j-1}^n)$$
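The half-index conductivities such as $\lambda_{i+1/2,j}$ live on the interfaces between grid points. As a minimal sketch, here they are taken as arithmetic means of the two neighboring node values (an assumption for illustration; Part 7-2 may use a different average, e.g. a harmonic mean):

```python
import numpy as np

# Hypothetical 2D conductivity field lam[j, i] on a small grid.
ny, nx = 5, 5
lam = np.ones((ny, nx))  # uniform conductivity, placeholder values

# Interface values as arithmetic means of the adjacent nodes.
lam_xp = 0.5 * (lam[:, 1:] + lam[:, :-1])   # lambda_{i+1/2, j}, on x-interfaces
lam_yp = 0.5 * (lam[1:, :] + lam[:-1, :])   # lambda_{i, j+1/2}, on y-interfaces

print(lam_xp.shape, lam_yp.shape)  # (5, 4) (4, 5)
```

There is one fewer interface than nodes in each direction, which is why the interface arrays are one element shorter along their respective axis.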

Putting these together, we obtain the following.

$$\rho C \frac{\Delta T^n_{i,j}}{\Delta t}=\frac{1}{\Delta x^2}(\lambda_{i+1/2,j} (\Delta T_{i+1,j}^n -\Delta T_{i,j}^n)-\lambda_{i-1/2,j}(\Delta T_{i,j}^n -\Delta T_{i-1,j}^n))$$

$$+ \frac{1}{\Delta y^2}(\lambda_{i,j+1/2} (\Delta T_{i,j+1}^n -\Delta T_{i,j}^n )-\lambda_{i,j-1/2}(\Delta T_{i,j}^n -\Delta T_{i,j-1}^n))+R_{i,j}^n$$

$$R_{i,j}^n =\frac{1}{\Delta x^2}(\lambda_{i+1/2,j} (T_{i+1,j}^n - T_{i,j}^n)-\lambda_{i-1/2,j}( T_{i,j}^n - T_{i-1,j}^n))$$

$$+ \frac{1}{\Delta y^2}(\lambda_{i,j+1/2} (T_{i,j+1}^n - T_{i,j}^n )-\lambda_{i,j-1/2}( T_{i,j}^n - T_{i,j-1}^n))$$
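The explicit term $R_{i,j}^n$ can be evaluated on all interior nodes at once with array slicing. A minimal sketch, assuming arithmetic-mean interface conductivities and hypothetical grid values (the boundary nodes are left at zero here; in a real solver they come from the boundary conditions):

```python
import numpy as np

# Placeholder grid: T^n varies linearly, lambda is uniform.
ny, nx = 6, 6
dx = dy = 0.1
T = np.linspace(0.0, 1.0, ny * nx).reshape(ny, nx)  # hypothetical T^n
lam = np.ones((ny, nx))                              # hypothetical lambda field

# Interface conductivities around each interior node (arithmetic means).
lam_xp = 0.5 * (lam[1:-1, 2:] + lam[1:-1, 1:-1])   # lambda_{i+1/2, j}
lam_xm = 0.5 * (lam[1:-1, 1:-1] + lam[1:-1, :-2])  # lambda_{i-1/2, j}
lam_yp = 0.5 * (lam[2:, 1:-1] + lam[1:-1, 1:-1])   # lambda_{i, j+1/2}
lam_ym = 0.5 * (lam[1:-1, 1:-1] + lam[:-2, 1:-1])  # lambda_{i, j-1/2}

# R_{i,j}^n exactly as in the formula above, on interior nodes.
R = np.zeros_like(T)
R[1:-1, 1:-1] = (
    (lam_xp * (T[1:-1, 2:] - T[1:-1, 1:-1])
     - lam_xm * (T[1:-1, 1:-1] - T[1:-1, :-2])) / dx**2
    + (lam_yp * (T[2:, 1:-1] - T[1:-1, 1:-1])
       - lam_ym * (T[1:-1, 1:-1] - T[:-2, 1:-1])) / dy**2
)
```

Because the placeholder $T^n$ is linear in both directions, the second differences cancel and $R$ comes out (numerically) zero — a quick sanity check that the stencil is wired up correctly.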

Next, we rewrite this as a Gauss-Seidel iteration with iteration index $k$. For details, please refer to Part 7-3.

$$\left( \frac{\rho C}{\Delta t}+ \frac{ \lambda_{i+1/2,j} }{\Delta x^2} + \frac{ \lambda_{i-1/2,j} }{\Delta x^2}+ \frac{ \lambda_{i,j+1/2} }{\Delta y^2} + \frac{ \lambda_{i,j-1/2} }{\Delta y^2} \right) \Delta T^{k+1}_{i,j} =\frac{1}{\Delta x^2} \left[\lambda_{i+1/2,j} \Delta T_{i+1,j}^k +\lambda_{i-1/2,j} \Delta T_{i-1,j}^k \right]$$ $$+ \frac{1}{\Delta y^2}\left[\lambda_{i,j+1/2} \Delta T_{i,j+1}^k +\lambda_{i,j-1/2} \Delta T_{i,j-1}^k \right]+R_{i,j}^n$$

$$\Delta T^{k+1}_{i,j} =\left( \frac{1}{\Delta x^2} \left[\lambda_{i+1/2,j} \Delta T_{i+1,j}^k +\lambda_{i-1/2,j} \Delta T_{i-1,j}^k \right]+ \frac{1}{\Delta y^2} \left[ \lambda_{i,j+1/2} \Delta T_{i,j+1}^k +\lambda_{i,j-1/2} \Delta T_{i,j-1}^k \right]+R_{i,j}^n \right)$$ $$/ \left( \frac{\rho C}{\Delta t}+ \frac{ \lambda_{i+1/2,j} }{\Delta x^2} + \frac{ \lambda_{i-1/2,j} }{\Delta x^2}+ \frac{ \lambda_{i,j+1/2} }{\Delta y^2} + \frac{ \lambda_{i,j-1/2} }{\Delta y^2} \right)$$

The expression is now in an implementable form. (Strictly speaking, with the Gauss-Seidel method some neighbor terms carry $k+1$ instead of $k$ depending on the sweep order over the grid, but we leave them all as $k$ here for readability.)
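One Gauss-Seidel sweep of this update can be sketched as follows. This is a simplified version assuming uniform $\lambda$ and $\rho C$ (hypothetical values), with $\Delta T$ held at zero on the boundary; because `dT` is updated in place, the newest neighbor values are reused within the same sweep, which is what makes it Gauss-Seidel rather than Jacobi:

```python
import numpy as np

# Hypothetical material and grid parameters.
ny, nx = 6, 6
dx = dy = 0.1
dt = 0.01
rho_C = 1.0
lam = 1.0

rng = np.random.default_rng(0)
T = rng.random((ny, nx))    # placeholder T^n field
dT = np.zeros_like(T)       # Delta T, initial guess 0 (boundary stays 0)

# Explicit term R^n (constant-lambda version of the formula above).
R = np.zeros_like(T)
R[1:-1, 1:-1] = (
    lam * (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2
    + lam * (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2
)

# Denominator: rho*C/dt plus the four conductivity terms.
denom = rho_C / dt + 2 * lam / dx**2 + 2 * lam / dy**2

# One sweep; dT[j, i] immediately overwrites the old value,
# so later nodes see the k+1 values of already-visited neighbors.
for j in range(1, ny - 1):
    for i in range(1, nx - 1):
        num = (lam * (dT[j, i + 1] + dT[j, i - 1]) / dx**2
               + lam * (dT[j + 1, i] + dT[j - 1, i]) / dy**2
               + R[j, i])
        dT[j, i] = num / denom
```

In the full solver this sweep is repeated until the convergence criterion below is met, and then $T^{n+1} = T^n + \Delta T$.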

However, we have not yet established a criterion for assessing the convergence of the iterative method.

This time, we stop the calculation once the 2-norm drops to $10^{-4}$ or less.

A stationary iterative method converges at a roughly constant rate, so once the residual is sufficiently small, the iterate can be regarded as having converged to the true solution. Here we take a 2-norm of $10^{-4}$ or less as that standard.

Consider the following system, where $\bf A$ is an $i \times i$ matrix and $\bf x$ and $\bf b$ are column vectors of length $i$.

$$\bf A \bf x =\bf b$$

At a finite iteration count $k$, the iterate $\bf x^{k}$ still contains error, so the right side of the following equation will not be 0.

$$\bf A \bf x^{k} -\bf b=\bf e$$

So we take the 2-norm of this residual.

$$\text{2-norm}=\sqrt{\sum_{k=1}^{i}\left(\sum_{j=1}^{i} a_{k,j}x_{j}-b_{k}\right)^2}$$

We normalize by $\bf b$ so that the criterion can be applied regardless of the scale of the problem.

$$\text{2-norm}=\sqrt{\frac{\sum_{k=1}^{i}\left(\sum_{j=1}^{i} a_{k,j}x_{j}-b_{k}\right)^2}{\sum_{k=1}^{i} b_{k}^2}}$$

We stop the calculation when the following condition is satisfied.

$$\text{2-norm} \le 10^{-4}$$
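The criterion is straightforward to implement. A minimal sketch on a small hypothetical system (the matrix and right-hand side here are placeholders, not the diffusion system itself):

```python
import numpy as np

def converged(A, x, b, tol=1e-4):
    """Normalized residual 2-norm test: ||Ax - b||_2 / ||b||_2 <= tol."""
    return np.sqrt(np.sum((A @ x - b) ** 2) / np.sum(b ** 2)) <= tol

# Hypothetical 2x2 system for illustration.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_exact = np.linalg.solve(A, b)

print(converged(A, np.zeros(2), b))  # False: initial guess is far off
print(converged(A, x_exact, b))      # True: residual is essentially zero
```

In the diffusion solver, `x` plays the role of $\Delta T^{k}$ flattened into a vector, and the sweep from the Gauss-Seidel section is repeated until this function returns `True`.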

All the transformations needed for implementation are now done, so next time we will actually put this into code.