Linear systems - iterative methods

Iterative methods

Jacobi method

$Ax=b$

$$\left[\begin{array}{cccc} a_{1,1} & a_{1,2} & a_{1,3} & a_{1,4} \\ a_{2,1} & a_{2,2} & a_{2,3} & a_{2,4} \\ a_{3,1} & a_{3,2} & a_{3,3} & a_{3,4} \\ a_{4,1} & a_{4,2} & a_{4,3} & a_{4,4} \end{array}\right] \left[\begin{array}{c} x_{1} \\ x_{2} \\ x_{3} \\ x_{4} \end{array}\right]= \left[\begin{array}{c} b_{1} \\ b_{2} \\ b_{3} \\ b_{4} \end{array}\right]$$

$$\rightarrow \sum_j a_{i,j}x_j = b_i,\,\, i=1\ldots n.$$

$$a_{i,1}x_1 + a_{i,2}x_2 + a_{i,3}x_3 + a_{i,4}x_4 = b_i,\,\, i=1,2,3,4.$$

Solve each equation for its diagonal unknown $x_i$:

$$x_i = \frac{1}{a_{i,i}}\left(b_i - \sum_{j\neq i}a_{i,j}x_j\right).$$

Define the residual at iteration $k$ as

$$R_i^k = b_i - \sum_{j=1}^na_{i,j}x_j^k,$$

and we can write the Jacobi update as

$$x_i^{k+1} = x_i^k + \frac{1}{a_{i,i}}R_i^k.$$
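To make the indexing concrete, here is a minimal element-wise sketch of one Jacobi sweep (a vectorized working version appears under Example Code below; `jacobi_sweep` is an illustrative name, not from the original notes):

import numpy as np

def jacobi_sweep(A, b, x):
    '''Illustrative single Jacobi sweep: every component of the new iterate
    is computed from the previous iterate x only.'''
    x_new = np.empty_like(x, dtype=float)
    for i in range(len(b)):
        R_i = b[i] - np.dot(A[i, :], x)     # residual R_i^k for row i
        x_new[i] = x[i] + R_i / A[i, i]     # x_i^{k+1} = x_i^k + R_i^k / a_{i,i}
    return x_new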

Gauss-Seidel method
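Gauss-Seidel is identical to Jacobi except that each updated component $x_i^{k+1}$ is used immediately within the same sweep, so the residual for row $i$ is evaluated with the newest available values. A minimal sketch of one sweep, under the same conventions as the Jacobi code below (`gauss_seidel_sweep` is an illustrative name, not from the original notes):

import numpy as np

def gauss_seidel_sweep(A, b, x):
    '''Illustrative single Gauss-Seidel sweep: x is a float array updated in
    place, so rows i+1, i+2, ... see the already-updated components x[0..i].'''
    for i in range(len(b)):
        R_i = b[i] - np.dot(A[i, :], x)   # residual uses newest values of x
        x[i] += R_i / A[i, i]
    return x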

Successive over-relaxation (SOR)

Since the iterates converge smoothly from our initial guess toward the actual solution $x$, we can take a bigger step toward the solution than a single Jacobi or Gauss-Seidel (G.S.) step provides:

$$x_i^{k+1} = x_i^k + \frac{R_i^k}{a_{i,i}} \rightarrow x_i^{k+1} = x_i^k + \Delta x_i^k,$$

with $x_i^{k+1}-x_i^k = \Delta x_i^k = R_i^k/a_{i,i}.$

The SOR method multiplies the $\Delta x_i^k$ by a factor $\omega > 1$ to take a larger step towards $x$:

$$x_i^{k+1} = x_i^k + \omega\frac{R_i^k}{a_{i,i}}.$$
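In practice SOR is usually applied to the Gauss-Seidel sweep, and the iteration is stable only for $0 < \omega < 2$ ($\omega = 1$ recovers the unrelaxed method). A sketch under the same conventions as above (`sor_sweep` is again an illustrative name):

import numpy as np

def sor_sweep(A, b, x, omega=1.5):
    '''Illustrative SOR sweep: a Gauss-Seidel sweep with the update step
    scaled by the relaxation factor omega.'''
    for i in range(len(b)):
        R_i = b[i] - np.dot(A[i, :], x)
        x[i] += omega * R_i / A[i, i]     # omega = 1 recovers Gauss-Seidel
    return x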

Convergence

Now, the iteration converges whenever the spectral radius of its iteration matrix is less than one. A convenient sufficient condition for both Jacobi and Gauss-Seidel is strict diagonal dominance: $|a_{i,i}| > \sum_{j\neq i}|a_{i,j}|$ for every row $i$.
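As an illustrative check (not from the original notes), the Jacobi iteration matrix can be formed explicitly as $M = I - D^{-1}A$, where $D$ holds the diagonal of $A$, and its spectral radius evaluated numerically:

import numpy as np

def jacobi_spectral_radius(A):
    '''Spectral radius of the Jacobi iteration matrix M = I - D^{-1} A;
    the Jacobi iteration converges for any initial guess iff this is < 1.'''
    M = np.eye(len(A)) - A / np.diag(A)[:, None]   # divide each row i by a_{i,i}
    return max(abs(np.linalg.eigvals(M)))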

Convergence criteria
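The code below stops once the residual norm, scaled by the norm of the current solution iterate, falls below the tolerance `tol`:

$$\frac{\|R\|_2}{\|x\|_2} = \frac{\|b - Ax\|_2}{\|x\|_2} \leq \text{tol}.$$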

Example Code

import numpy as np

def jacobi(A,b,x, tol=1E-5, maxit=10000):
    ''' 
    Program solves Ax=b linear systems for x using the Jacobi iteration method
    A is a 2-D, square input matrix (n x n)
    b is a 1-D input array (length n)
    x is a 1-D input initial guess of the solution (length n)
    Function returns the solution x and the number of iterations required.
    '''
    
    for k in range(1,maxit+1):                            # iteration loop
        xold = x.copy()                                   # save the previous iterate x^k
        R = b - np.dot(A,xold)                            # compute the Residual (error)
        x = xold + R/np.diag(A)                           # update the solution
        #RE = np.sqrt(np.sum(R**2))/np.sqrt(np.sum(x**2))  # relative error
        RE = np.linalg.norm(R)/np.linalg.norm(x)          # relative error
        if RE <= tol:                                     # check for convergence
            return x, k                                   # return solution and number of iterations
    
    print(f'Warning: reached {maxit} iterations without converging to tolerance = {tol}')
    return x, maxit

Test the function

Take $A$ as a penta-diagonal matrix with the following form, shown here for $n=7$. We’ll solve the problem for $n=60$.

$$A = \left[\begin{array}{ccccccc} 4 & -1 & 0 & 1 & 0 & 0 & 0 \\ -1 & 4 & -1 & 0 & 1 & 0 & 0 \\ 0 & -1 & 4 & -1 & 0 & 1 & 0 \\ 1 & 0 & -1 & 4 & -1 & 0 & 1 \\ 0 & 1 & 0 & -1 & 4 & -1 & 0 \\ 0 & 0 & 1 & 0 & -1 & 4 & -1 \\ 0 & 0 & 0 & 1 & 0 & -1 & 4 \end{array}\right]$$

We’ll let $b$ be a vector of ones, and use a vector of zeros for the initial guess for $x$.

n = 60

A = np.diag(np.ones(n-3),-3)  + np.diag(-1*np.ones(n-1),-1) + \
    np.diag(4*np.ones(n),0) + np.diag(-1.*np.ones(n-1),1)   + \
    np.diag(np.ones(n-3),3)
b  = np.ones(n)
x0 = np.zeros(n)
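Note that this $A$ is only weakly diagonally dominant (each interior row's off-diagonal magnitudes sum to exactly 4), so as an illustrative aside we can reuse the jacobi_spectral_radius sketch from the Convergence section to check that Jacobi should still converge here:

rho = jacobi_spectral_radius(A)   # expected to be < 1 for this A
print(f"Jacobi iteration matrix spectral radius: {rho:.3f}")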

x, k_iter = jacobi(A,b,x0)

print(f"The problem required {k_iter} iterations")

x_py = np.linalg.solve(A,b)
reAvg = np.mean(np.abs((x_py-x)/x_py))
print(f"\nThe average relative error between the Jacobi function and np.linalg.solve is {reAvg:.3e}")
print(f"\nx =\n{x}")
The problem required 31 iterations

The average relative error between the Jacobi function and np.linalg.solve is 4.824e-06

x =
[0.27184421 0.35220014 0.35220014 0.2648233  0.21524253 0.20822162
 0.23630615 0.25987558 0.26646384 0.25805109 0.24802253 0.24384705
 0.24597541 0.24993196 0.25214914 0.25182954 0.25037258 0.2493163
 0.24922614 0.24972154 0.25018695 0.25030629 0.25015446 0.24996385
 0.24988824 0.2499289  0.25000129 0.25003442 0.25001996 0.2499948
 0.2499948  0.25001996 0.25003442 0.25000129 0.2499289  0.24988824
 0.24996385 0.25015446 0.25030629 0.25018695 0.24972154 0.24922614
 0.2493163  0.25037258 0.25182954 0.25214914 0.24993196 0.24597541
 0.24384705 0.24802253 0.25805109 0.26646384 0.25987558 0.23630615
 0.20822162 0.21524253 0.2648233  0.35220014 0.35220014 0.27184421]