The simple iteration method for SLAEs: how it works and how many steps it takes. Solving a SLAE by the simple iteration method

Topic 3. Solution of systems of linear algebraic equations by iterative methods.

The direct methods for solving SLAEs described above are not very effective for systems of large dimension (i.e., when n is large enough). In such cases iterative methods are better suited for solving SLAEs.

Iterative methods for solving SLAEs (their other name is methods of successive approximation to the solution) do not give the exact solution of the SLAE, but only an approximation to it; each subsequent approximation is obtained from the previous one and is more accurate than the previous one (provided the iterations converge). The initial (so-called zero) approximation is chosen close to the expected solution or arbitrarily (the vector of right-hand sides of the system can be taken as it). The exact solution is the limit of these approximations as their number tends to infinity. As a rule, this limit is not reached in a finite number of steps (i.e., iterations). Therefore, in practice the concept of solution accuracy is introduced: a positive, sufficiently small number ε is specified, and the computations (iterations) are carried out until the relation ||x^(n) − x*|| < ε is satisfied.

Here x^(n) is the approximation to the solution obtained after iteration number n, and x* is the exact solution of the SLAE (which is not known in advance). The number of iterations n = n(ε) necessary to achieve a given accuracy can, for specific methods, be obtained from theoretical considerations (i.e., there are formulas for calculating it). The quality of different iterative methods can be compared by the number of iterations required to achieve the same accuracy.

To study the convergence of iterative methods one must be able to calculate matrix norms. A matrix norm is a numerical value that characterizes the size of the matrix elements in absolute value. In higher mathematics there are several different types of matrix norms, which are, as a rule, equivalent. In our course we will use only one of them: by the norm of a matrix we will understand the maximum among the sums of absolute values of the elements of the individual rows of the matrix. To denote the norm of a matrix, its name is enclosed in two pairs of vertical bars. Thus, for a matrix A its norm is the quantity

||A|| = max_i Σ_j |a_ij| ,   i, j = 1, 2, …, n. (3.1)
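For readers who want to check such norms numerically, here is a minimal sketch of the row-sum norm (3.1). It assumes NumPy is available, and the matrix in the demonstration is purely illustrative, not one of the matrices from the examples below.

```python
import numpy as np

def row_sum_norm(A):
    """Norm (3.1): the maximum over the rows of the sum of absolute values."""
    A = np.asarray(A, dtype=float)
    return float(max(np.sum(np.abs(row)) for row in A))

# Purely illustrative matrix (not taken from the examples in the text):
A = np.array([[0.1, -0.2, 0.3],
              [0.0,  0.4, -0.1],
              [0.2,  0.2,  0.2]])
print(row_sum_norm(A))            # 0.6, the largest row sum of absolute values
print(np.linalg.norm(A, np.inf))  # the same value via NumPy's infinity norm
```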

So, for example, the norm of matrix A from Example 1 is found as follows:

The three iterative methods most widely used for solving SLAEs are:

Simple iteration method

Jacobi method

Gauss-Seidel method.

The simple iteration method involves a transition from the SLAE written in its original form (2.1) to the form

x_i = c_i1·x_1 + c_i2·x_2 + … + c_in·x_n + d_i ,   i = 1, 2, …, n, (3.2)

or, which is the same thing, in matrix form,

x = C × x + D, (3.3)

where

C is the matrix of coefficients of the transformed system, of dimension n × n;

x is the vector of unknowns, consisting of n components;

D is the vector of right-hand sides of the transformed system, consisting of n components.

The system in the form (3.2) can be written compactly as

x_i = Σ_{j=1..n} c_ij·x_j + d_i ,   i = 1, 2, …, n.

Based on this, the formula of simple iteration has the form

x_i^(m+1) = Σ_{j=1..n} c_ij·x_j^(m) + d_i ,   i = 1, 2, …, n, (3.4)

where m is the iteration number and x_j^(m) is the value of x_j at the m-th iteration step. Then, if the iteration process converges, as the number of iterations grows we will observe

x_i^(m) → x_i*  as m → ∞,  i = 1, 2, …, n.

It has been proven that the iteration process converges if the norm of the matrix C is less than unity, i.e. ||C|| < 1.

If the vector of free terms is taken as the initial (zero) approximation, i.e. x^(0) = D, then the magnitude of the error is

||x^(m) − x*|| ≤ ||C||^(m+1) · ||D|| / (1 − ||C||), (3.5)

where x* denotes the exact solution of the system. Hence,

if ||C|| < 1, then for a specified accuracy ε one can calculate in advance the required number of iterations. Namely, from the relation

||C||^(m+1) · ||D|| / (1 − ||C||) ≤ ε

after small transformations we get

m ≥ ln( ε (1 − ||C||) / ||D|| ) / ln ||C|| − 1. (3.6)

When performing such a number of iterations, the specified accuracy of finding the solution to the system is guaranteed. This theoretical estimate of the required number of iteration steps is somewhat overestimated. In practice, the required accuracy can be achieved in fewer iterations.
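As a quick sanity check of estimate (3.6) as reconstructed above, the following sketch computes the theoretical number of iterations for given ||C||, ||D|| and ε; the numeric values in the call are illustrative only, since the matrix of Example 1 is not reproduced in the text.

```python
import math

def iterations_needed(norm_C, norm_D, eps):
    """A priori estimate (3.6): number of iterations for accuracy eps,
    assuming ||C|| < 1 and the zero approximation x(0) = D."""
    m = math.log(eps * (1.0 - norm_C) / norm_D) / math.log(norm_C) - 1.0
    return math.ceil(m)

# Illustrative values (not the data of Example 1):
print(iterations_needed(norm_C=0.7, norm_D=2.0, eps=1e-3))
```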

It is convenient to search for solutions to a given SLAE using a simple iteration method by entering the results obtained in a table of the following form:

m   x 1   x 2   …   x n

It should be especially noted that when solving SLAEs by this method, the most complex and time-consuming part is transforming the system from the form (2.1) to the form (3.2). These transformations must be equivalent (i.e., they must not change the solution of the original system) and must ensure that, after they are performed, the norm of the matrix C is less than unity. There is no single recipe for such transformations: in each specific case one has to be creative. Let us consider examples illustrating some ways of transforming a system to the required form.
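Once the system has been brought to the form x = Cx + D with ||C|| < 1, the iteration itself is mechanical. Below is a minimal sketch of the simple iteration method; the stopping rule compares two successive approximations, and the 2×2 system at the bottom is illustrative, not one of the text's examples.

```python
import numpy as np

def simple_iteration(C, D, eps=1e-3, max_iter=1000):
    """Iterate x(m+1) = C·x(m) + D, starting from x(0) = D, until the largest
    change of a component between two successive steps is below eps."""
    C, D = np.asarray(C, dtype=float), np.asarray(D, dtype=float)
    if np.linalg.norm(C, np.inf) >= 1:
        raise ValueError("||C|| >= 1: convergence is not guaranteed")
    x = D.copy()
    for m in range(1, max_iter + 1):
        x_new = C @ x + D
        if np.max(np.abs(x_new - x)) < eps:
            return x_new, m
        x = x_new
    return x, max_iter

# Illustrative system x = Cx + D with ||C|| = 0.4 < 1:
C = np.array([[0.1, 0.2],
              [0.3, 0.1]])
D = np.array([1.0, 2.0])
x, m = simple_iteration(C, D)
print(x, m)   # approximate solution of (E - C)x = D and the number of steps
```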

Example 1. Let us find the solution of a system of linear algebraic equations by the simple iteration method (with accuracy ε = 0.001)

This system is brought to the required form in the simplest way. Let us move all the terms from the left-hand side to the right-hand side and then add x_i (i = 1, 2, 3, 4) to both sides of the i-th equation. We obtain a transformed system of the following form

.

Matrix C and vector D in this case will be as follows

C = , D = .

Let's calculate the norm of the matrix C . We get

Since the norm turned out to be less than unity, the convergence of the simple iteration method is ensured. As an initial (zero) approximation, we take the components of the vector D . We get

, , , .

Using formula (3.6), we calculate the required number of iteration steps. Let us first determine the norm of the vector D . We get

.

Therefore, to achieve the specified accuracy, it is necessary to perform at least 17 iterations. Let's do the first iteration. We get

Having performed all arithmetic operations, we get

.

Continuing in the same way, we perform the further iteration steps. Their results are summarized in the following table (Δ is the largest change of the solution components between the current and the previous step)

m   x 1   x 2   x 3   x 4   Δ

Since after the tenth step the difference between the values ​​at the last two iterations became less than the specified accuracy, we will stop the iteration process. As the solution found, we will take the values ​​obtained at the last step.

Example 2.

Let us first proceed similarly to the previous example. We get

The matrix C of this system is

C =.

Let's calculate its norm. We get

Obviously, the iteration process for such a matrix will not be convergent. It is necessary to find another way to transform the given system of equations.

Let us rearrange the individual equations of the original system so that the third equation becomes the first, the first becomes the second, and the second becomes the third. Then, transforming the system in the same way, we obtain

The matrix C of this system is

C =.

Let's calculate its norm. We get

Since the norm of the matrix C turned out to be less than unity, the system transformed in this way is suitable for solution by the simple iteration method.

Example 3. Let's transform the system of equations

to a form that would allow the simple iteration method to be used in solving it.

Let us first proceed similarly to example 1. We obtain

The matrix C of this system is

C =.

Let's calculate its norm. We get

Obviously, the iteration process for such a matrix will not be convergent.

To transform the original matrix to a form convenient for applying the simple iteration method, we proceed as follows. First, we form an “intermediate” system of equations in which

- the first equation is the sum of the first and second equations of the original system;

- the second equation is the sum of twice the third equation and the second equation, minus the first;

- the third equation is the difference between the third and second equations of the original system.

As a result, we obtain an “intermediate” system of equations equivalent to the original one

From it, it is easy to obtain one more "intermediate" system

,

and from that, the transformed system

.

The matrix C of this system is

C =.

Let's calculate its norm. We get

The iteration process for such a matrix will be convergent.

The Jacobi method assumes that all diagonal elements of the matrix A of the original system (2.2) are nonzero. Then the original system can be rewritten as

x_i = ( b_i − Σ_{j≠i} a_ij·x_j ) / a_ii ,   i = 1, 2, …, n. (3.7)

From this form of the system, the iteration formula of the Jacobi method is obtained:

x_i^(m+1) = ( b_i − Σ_{j≠i} a_ij·x_j^(m) ) / a_ii ,   i = 1, 2, …, n. (3.8)

The condition for convergence of the iterative process of the Jacobi method is the so-called diagonal dominance condition in the original system (of the form (2.1)). Analytically, this condition is written as

|a_ii| > Σ_{j≠i} |a_ij| ,   i = 1, 2, …, n. (3.9)

It should be noted that if the convergence condition of the Jacobi method (i.e., the diagonal dominance condition) is not satisfied for a given system of equations, it is often possible, by means of equivalent transformations of the original SLAE, to reduce its solution to the solution of an equivalent SLAE in which this condition is satisfied.
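A minimal sketch of the Jacobi iteration (3.8) follows; it checks nothing beyond a nonzero diagonal, so the caller is expected to ensure diagonal dominance (3.9). The 3×3 system in the demonstration is illustrative and is not one of the text's examples.

```python
import numpy as np

def jacobi(A, b, eps=1e-3, max_iter=1000):
    """Jacobi method (3.8): every new component is computed only from the
    values of the previous iteration."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    n = len(b)
    x = b / np.diag(A)                    # a convenient initial approximation
    for m in range(1, max_iter + 1):
        x_new = np.empty(n)
        for i in range(n):
            s = b[i] - A[i, :] @ x + A[i, i] * x[i]   # drop the diagonal term
            x_new[i] = s / A[i, i]
        if np.max(np.abs(x_new - x)) < eps:
            return x_new, m
        x = x_new
    return x, max_iter

# Illustrative diagonally dominant system with exact solution (1, 1, 1):
A = np.array([[10.0,  1.0,  1.0],
              [ 2.0, 10.0,  1.0],
              [ 2.0,  2.0, 10.0]])
b = np.array([12.0, 13.0, 14.0])
print(jacobi(A, b))
```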

Example 4. Let's transform the system of equations

to a form that would allow the Jacobi method to be used in solving it.

We have already considered this system in Example 3, so let’s move on from it to the “intermediate” system of equations obtained there. It is easy to establish that its diagonal dominance condition is satisfied, so let us transform it to the form necessary to apply the Jacobi method. We get

From it we obtain a formula for performing calculations using the Jacobi method for a given SLAE

Taking the vector of free terms as the initial (i.e., zero) approximation, we perform all the necessary calculations and summarize the results in a table.

m   x 1   x 2   x 3   Δ

Quite high accuracy of the obtained solution was achieved in six iterations.

The Gauss-Seidel method is an improvement on the Jacobi method and also assumes that all diagonal elements of the matrix A of the original system (2.2) are nonzero. Then the original system can be rewritten in a form similar to that of the Jacobi method, but slightly different from it:

It is important to remember here that if in the summation sign the upper index is less than the lower index, then no summation is performed.

The idea of the Gauss-Seidel method is that its authors saw a way to speed up the computations relative to the Jacobi method: during the next iteration, once a new value of x_1 has been found, it can be used immediately, within the same iteration, to compute the remaining variables. Similarly, once a new value of x_2 has been found, it too can be used immediately within the same iteration, and so on.

Based on this, the iteration formula of the Gauss-Seidel method has the following form:

x_i^(m+1) = ( b_i − Σ_{j<i} a_ij·x_j^(m+1) − Σ_{j>i} a_ij·x_j^(m) ) / a_ii ,   i = 1, 2, …, n.

A sufficient condition for convergence of the iterative process of the Gauss-Seidel method is the same diagonal dominance condition (3.9). The convergence rate of this method is somewhat higher than that of the Jacobi method.
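For comparison with the Jacobi sketch above, here is a minimal sketch of the Gauss-Seidel sweep, in which each freshly computed component is used immediately; the demonstration system is the same illustrative one.

```python
import numpy as np

def gauss_seidel(A, b, eps=1e-3, max_iter=1000):
    """Gauss-Seidel method: within one sweep, components already updated in
    this iteration are used for the remaining equations."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    n = len(b)
    x = b / np.diag(A)
    for m in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]
            x[i] = s / A[i, i]
        if np.max(np.abs(x - x_old)) < eps:
            return x, m
    return x, max_iter

# The same illustrative diagonally dominant system as in the Jacobi sketch:
A = np.array([[10.0,  1.0,  1.0],
              [ 2.0, 10.0,  1.0],
              [ 2.0,  2.0, 10.0]])
b = np.array([12.0, 13.0, 14.0])
print(gauss_seidel(A, b))   # usually needs fewer sweeps than the Jacobi sketch
```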

Example 5. Let us solve the system of equations using the Gauss-Seidel method

We have already considered this system in examples 3 and 4, so we will immediately move from it to the transformed system of equations (see example 4), in which the condition of diagonal dominance is satisfied. From it we obtain a formula for performing calculations using the Gauss-Seidel method

Taking the vector of free terms as the initial (i.e. zero) approximation, we perform all the necessary calculations. Let's summarize the results in a table.

m   x 1   x 2   x 3

Quite high accuracy of the obtained solution was achieved in five iterations.

The advantage of iterative methods is their applicability to ill-conditioned systems and high-order systems, their self-correction and ease of implementation on a PC. To begin calculations, iterative methods require specifying some initial approximation to the desired solution.

It should be noted that the conditions and rate of convergence of the iterative process significantly depend on the properties of the matrix A system and on the choice of initial approximations.

To apply the iteration method, the original system (2.1) or (2.2) must be reduced to the form

x = G·x + f, (2.26)

after which the iterative process is performed according to recurrent formulas

x^(k+1) = G·x^(k) + f,   k = 0, 1, 2, … . (2.26A)

The matrix G and the vector f are obtained as a result of transforming system (2.1).

For convergence of (2.26A) it is necessary and sufficient that |λ_i(G)| < 1, where λ_i(G) are all the eigenvalues of the matrix G. Convergence is also guaranteed if ||G|| < 1, since |λ_i(G)| ≤ ||G|| for any matrix norm.

The symbol || … || denotes the norm of a matrix. When checking the convergence condition, one of the following two norms is most often used:

||G|| = max_i Σ_j |g_ij|   or   ||G|| = max_j Σ_i |g_ij| , (2.27)

where g_ij are the elements of the matrix G. Convergence is also guaranteed if the original matrix A has diagonal dominance, i.e.

|a_ii| > Σ_{j≠i} |a_ij| ,   i = 1, 2, …, n. (2.28)

If (2.27) or (2.28) is satisfied, the iteration method converges for any initial approximation. Most often, the initial approximation is taken to be the zero vector or the unit vector, or the vector f from (2.26) itself.
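Condition (2.28) is easy to check programmatically. The sketch below does exactly that; the matrices used in the demonstration are illustrative and are not taken verbatim from the text.

```python
import numpy as np

def has_diagonal_dominance(A):
    """Condition (2.28): in every row, |a_ii| exceeds the sum of the absolute
    values of the off-diagonal elements."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    return bool(np.all(diag > A.sum(axis=1) - diag))

# Illustrative matrices:
print(has_diagonal_dominance([[4.0, -1.0, -1.0],
                              [1.0,  5.0, -2.0],
                              [1.0,  1.0,  4.0]]))   # True
print(has_diagonal_dominance([[2.0, -1.8,  0.4],
                              [3.0,  2.0, -1.1],
                              [1.0,  1.0,  4.0]]))   # False: rows 1 and 2 fail
```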

There are many approaches to transforming the original system (2.2) with the matrix A to the form (2.26) so that the convergence conditions (2.27) or (2.28) are satisfied.

For example, (2.26) can be obtained as follows.

Let A = B + C, det B ≠ 0. Then (B + C)x = b ⇒ Bx = −Cx + b ⇒ B⁻¹Bx = −B⁻¹Cx + B⁻¹b, whence x = −B⁻¹Cx + B⁻¹b.

Setting −B⁻¹C = G and B⁻¹b = f, we obtain (2.26).

From the convergence conditions (2.27) and (2.28) it is clear that the representation A = B + C cannot be arbitrary.

If the matrix A satisfies conditions (2.28), then as the matrix B one can take the lower triangular matrix:

B = [ a_11  0  …  0 ;  a_21  a_22  …  0 ;  … ;  a_n1  a_n2  …  a_nn ],   a_ii ≠ 0.

Another simple transformation is obtained directly from the system Ax = b:

Ax = b ⇒ αAx = αb ⇒ x + αAx = x + αb ⇒ x = (E + αA)x − αb.

By choosing the parameter α, one can ensure that ||G|| = ||E + αA|| < 1.
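Assuming the iteration form x^(k+1) = (E + αA)x^(k) − αb sketched above, a suitable α can simply be scanned for; the following sketch and its matrix are illustrative only.

```python
import numpy as np

def best_alpha(A, alphas):
    """Scan candidate values of alpha and return the one minimising the
    row-sum norm ||E + alpha*A||; the iteration may be used only if this
    minimum is below 1."""
    A = np.asarray(A, dtype=float)
    E = np.eye(A.shape[0])
    norms = [np.linalg.norm(E + a * A, np.inf) for a in alphas]
    i = int(np.argmin(norms))
    return alphas[i], norms[i]

# Illustrative symmetric matrix; small negative alphas are natural candidates,
# since they shrink the diagonal of E + alpha*A towards zero.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
alpha, g_norm = best_alpha(A, np.linspace(-0.5, -0.01, 50))
print(alpha, g_norm)   # a value of alpha for which ||E + alpha*A|| < 1
```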

If the diagonal dominance (2.28) holds, then the transformation to (2.26) can be performed by solving each i-th equation of system (2.1) with respect to x_i, which gives the following recurrent formulas:

x_i^(k+1) = ( b_i − Σ_{j≠i} a_ij·x_j^(k) ) / a_ii ,   i = 1, 2, …, n. (2.28A)

If the matrix A has no diagonal dominance, it must be achieved by means of linear transformations of the equations that do not violate their equivalence.

As an example, consider the system

(2.29)

As you can see, in equations (1) and (2) there is no diagonal dominance, but in (3) there is, so we leave it unchanged.

Let us achieve diagonal dominance in equation (1). Multiply (1) by α and (2) by β, add the two equations, and choose α and β in the resulting equation so that there is diagonal dominance:

(2α + 3β)x_1 + (−1.8α + 2β)x_2 + (0.4α − 1.1β)x_3 = α.

Taking α = β = 5, we get 25x_1 + x_2 − 3.5x_3 = 5.

To obtain dominance in equation (2), multiply (1) by γ, multiply (2) by δ, and subtract (1) from (2). We get

(3δ − 2γ)x_1 + (2δ + 1.8γ)x_2 + (−1.1δ − 0.4γ)x_3 = −γ.

Setting δ = 2, γ = 3, we get 0·x_1 + 9.4x_2 − 3.4x_3 = −3. As a result, we obtain the system

(2.30)

This technique can be used for systems with a wide class of matrices.

or

Taking the vector x^(0) = (0.2; −0.32; 0)^T as the initial approximation, we solve this system using the scheme (2.26A):

x^(k+1) = G·x^(k) + f,   k = 0, 1, 2, … .

The calculation process stops when two neighboring approximations of the solution vector coincide to within the specified accuracy, i.e.

||x^(k+1) − x^(k)|| ≤ ε.

The iterative solution scheme (2.26A) is called the simple iteration method.

The estimate of the absolute error of the simple iteration method (for the initial approximation x^(0) = f) is

||x^(k) − x*|| ≤ ||G||^k · ||f|| / (1 − ||G||),

where the symbol || … || denotes the norm.

Example 2.1. Using the simple iteration method, solve the system of linear equations with accuracy ε = 0.001:

The number of steps giving an answer accurate to ε = 0.001 can be determined from the relation

||G||^k · ||f|| / (1 − ||G||) ≤ 0.001.

Let us estimate the convergence using formula (2.27). Here ||G|| = max(0.56; 0.61; 0.35; 0.61) = 0.61 < 1 and ||f|| = 2.15. Hence, convergence is ensured.

As the initial approximation we take the vector of free terms, i.e. x^(0) = (2.15; −0.83; 1.16; 0.44)^T. Substituting these values into (2.26A), we obtain:

Continuing the calculations, we enter the results into the table:

k     x 1      x 2      x 3      x 4
0     2.15     -0.83    1.16     0.44
1     2.9719   -1.0775  1.5093   -0.4326
2     3.3555   -1.0721  1.5075   -0.7317
3     3.5017   -1.0106  1.5015   -0.8111
4     3.5511   -0.9277  1.4944   -0.8321
5     3.5637   -0.9563  1.4834   -0.8298
6     3.5678   -0.9566  1.4890   -0.8332
7     3.5760   -0.9575  1.4889   -0.8356
8     3.5709   -0.9573  1.4890   -0.8362
9     3.5712   -0.9571  1.4889   -0.8364
10    3.5713   -0.9570  1.4890   -0.8364

Convergence to within thousandths is achieved already at the 10th step.

Answer: x_1 ≈ 3.571; x_2 ≈ −0.957; x_3 ≈ 1.489; x_4 ≈ −0.836.

This solution can also be obtained using formulas (2.28 A).

Example 2.2. To illustrate the algorithm based on formulas (2.28A), consider the solution of the following system (performing only two iterations):

4x_1 − x_2 − x_3 = 2,
x_1 + 5x_2 − 2x_3 = 4,
x_1 + x_2 + 4x_3 = 6. (2.31)

Let us transform the system to the form (2.26) according to (2.28 A):

x_1 = 0.5 + 0.25x_2 + 0.25x_3,
x_2 = 0.8 − 0.2x_1 + 0.4x_3,
x_3 = 1.5 − 0.25x_1 − 0.25x_2. (2.32)

Let us take the initial approximation x^(0) = (0; 0; 0)^T. Then for k = 0 we obviously obtain x^(1) = (0.5; 0.8; 1.5)^T. Substituting these values into (2.32), i.e., for k = 1, we get x^(2) = (1.075; 1.3; 1.175)^T.

The error is ε_2 = max_i |x_i^(2) − x_i^(1)| = max(0.575; 0.5; 0.325) = 0.575.

A block diagram of the algorithm for finding the solution of a SLAE by the simple iteration method using the working formulas (2.28A) is shown in Fig. 2.4.

A special feature of the block diagram is the presence of the following blocks:

– block 13 – its purpose is discussed below;

– block 21 – displaying results on the screen;

– block 22 – check (indicator) of convergence.

Let us trace the proposed scheme using the example of system (2.31) (n = 3, w = 1, ε = 0.001):

A = [ 4  −1  −1 ;  1  5  −2 ;  1  1  4 ] ;   b = (2; 4; 6)^T.

Block 1. Enter the initial data A, b, w, ε, n: n = 3, w = 1, ε = 0.001.

Cycle I. Set the initial values of the vectors x0_i and x_i (i = 1, 2, 3).

Block 5. Reset the iteration counter.

Block 6. Reset the current error counter to zero.

In cycle II, the row numbers of the matrix A and of the vector b are stepped through.

Cycle II:i = 1: s = b 1 = 2 (block 8).

Go to nested loop III, block 9 – matrix column number counter A: j = 1.

Block 10: j = i, therefore we return to block 9 and increase j by one: j = 2.

In block 10, j ≠ i (2 ≠ 1), so we move to block 11.

Block 11: s = 2 − (−1)·x0_2 = 2 − (−1)·0 = 2; go to block 9, in which j is increased by one: j = 3.

In block 10 the condition j ≠ i is fulfilled, so we move to block 11.

Block 11: s = 2 − (−1)·x0_3 = 2 − (−1)·0 = 2, after which we move to block 9, in which j is increased by one (j = 4). The value of j is now greater than n (n = 3), so we finish the cycle and move to block 12.

Block 12: s = s / a_11 = 2 / 4 = 0.5.

Block 13: w = 1; s = s + 0 = 0.5.

Block 14: d = |x_i − s| = |1 − 0.5| = 0.5.

Block 15: x_i = 0.5 (i = 1).

Block 16. Check the condition d > de: 0.5 > 0, therefore go to block 17, in which we set de = 0.5 and return via the reference "A" to the next step of cycle II, to block 7, in which i is increased by one.

Cycle II: i = 2: s = b_2 = 4 (block 8).

j = 1.

In block 10, j ≠ i (1 ≠ 2), so we move to block 11.

Block 11: s = 4 − 1·0 = 4; go to block 9, in which j is increased by one: j = 2.

In block 10 the condition is not met (j = i), so we return to block 9, in which j is increased by one: j = 3. By analogy, we move to block 11.

Block 11: s = 4 − (−2)·0 = 4, after which we finish cycle III and move to block 12.

Block 12: s = s / a_22 = 4 / 5 = 0.8.

Block 13: w = 1; s = s + 0 = 0.8.

Block 14: d = |1 − 0.8| = 0.2.

Block 15: x_i = 0.8 (i = 2).

Block 16. Check the condition d > de: 0.2 < 0.5; therefore, we return via the reference "A" to the next step of cycle II, to block 7.

Cycle II: i = 3: s = b_3 = 6 (block 8).

Go to nested loop III, block 9: j = 1.

Block 11: s = 6 − 1·0 = 6; go to block 9: j = 2.

Via block 10 we move to block 11.

Block 11: s = 6 − 1·0 = 6. We finish cycle III and move to block 12.

Block 12: s = s / a_33 = 6 / 4 = 1.5.

Block 13: s = 1.5.

Block 14: d = |1 − 1.5| = 0.5.

Block 15: x_i = 1.5 (i = 3).

Via block 16 (and the references "A" and "C") we leave cycle II and move to block 18.

Block 18. Increase the number of iterations: it = it + 1 = 0 + 1 = 1.

In blocks 19 and 20 of cycle IV, we replace the initial values x0_i by the obtained values x_i (i = 1, 2, 3).

Block 21. We print the intermediate values of the current iteration, in this case: x = (0.5; 0.8; 1.5)^T, it = 1; de = 0.5.

We return to cycle II, to block 7, and perform the same calculations with the new initial values x0_i (i = 1, 2, 3).

After that we obtain x_1 = 1.075; x_2 = 1.3; x_3 = 1.175.

Here, then, Seidel's method converges.

According to formulas (2.33)

k    x 1      x 2      x 3
0    0.19     0.97     -0.14
1    0.2207   1.0703   -0.1915
2    0.2354   1.0988   -0.2118
3    0.2424   1.1088   -0.2196
4    0.2454   1.1124   -0.2226
5    0.2467   1.1135   -0.2237
6    0.2472   1.1143   -0.2241
7    0.2474   1.1145   -0.2243
8    0.2475   1.1145   -0.2243

Answer: x_1 = 0.248; x_2 = 1.115; x_3 = −0.224.

Comment. If the simple iteration method and the Seidel method both converge for the same system, then the Seidel method is preferable. However, in practice the convergence domains of these methods may differ: the simple iteration method may converge while the Seidel method diverges, and vice versa. For both methods, if ||G|| is close to unity, the convergence rate is very low.

To speed up convergence, an artificial technique is used, the so-called relaxation method. Its essence is that the next value x_i^(k) obtained by the iteration method is recalculated using the formula

x_i^(k) := w·x_i^(k) + (1 − w)·x_i^(k−1),

where w is usually varied in the range 0 < w ≤ 2 with some step (h = 0.1 or 0.2). The parameter w is chosen so that convergence of the method is achieved in the minimum number of iterations.
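A minimal sketch of a Seidel sweep combined with the relaxation step as reconstructed above follows; the system and the scanned values of w are illustrative only.

```python
import numpy as np

def seidel_with_relaxation(A, b, w=1.0, eps=1e-3, max_iter=1000):
    """Seidel sweep with the relaxation step x_i := w*s + (1 - w)*x_i_old,
    where s is the freshly computed Seidel value (w = 1 is plain Seidel)."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    n = len(b)
    x = np.zeros(n)
    for k in range(1, max_iter + 1):
        x_old = x.copy()
        for i in range(n):
            s = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = w * s + (1 - w) * x_old[i]
        if np.max(np.abs(x - x_old)) < eps:
            return x, k
    return x, max_iter

# Illustrative diagonally dominant system; in practice w is scanned over (0, 2]
# with a step of 0.1-0.2 and the value giving the fewest iterations is kept.
A = np.array([[4.0, -1.0, -1.0],
              [1.0,  5.0, -2.0],
              [1.0,  1.0,  4.0]])
b = np.array([2.0, 4.0, 6.0])
for w in (1.0, 1.1, 1.2):
    print(w, seidel_with_relaxation(A, b, w=w))
```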

Relaxation is a gradual weakening of a given state of a body after the factors that caused this state cease to act (a term from engineering physics).

Example 2.4. Let us recompute the result of the fifth iteration using the relaxation formula, taking w = 1.5:

As can be seen, the result is almost that of the seventh iteration.

Lecture. Iterative methods for solving a system of linear algebraic equations.

Condition for convergence of the iterative process. Jacobi method. Seidel method

Simple iteration method

A system of linear algebraic equations Ax = b is considered.

To apply iterative methods, the system must be reduced to the equivalent form

x = B·x + d.

Then an initial approximation to the solution of the system of equations is selected and a sequence of approximations to the root is found.

For the iterative process to converge, it is sufficient that the condition ||B|| < 1 be satisfied (matrix norm). The criterion for terminating the iterations depends on the iterative method used.

Jacobi method .

The simplest way to bring the system into a form convenient for iteration is as follows:

From the first equation of the system we express the unknown x 1, from the second equation of the system we express x 2, etc.

As a result, we obtain a system of equations with a matrix B that has zero elements on the main diagonal, while the remaining elements are calculated by the formulas

b_ij = −a_ij / a_ii   (i ≠ j),   b_ii = 0.

The components of the vector d are calculated by the formulas

d_i = b_i / a_ii ,

where b_i are the right-hand sides of the original system.

The calculation formula for the simple iteration method is

x^(n+1) = B·x^(n) + d,

or, in coordinate notation, it looks as follows:

x_i^(n+1) = Σ_{j≠i} b_ij·x_j^(n) + d_i   for each i.

The criterion for terminating the iterations in the Jacobi method has the form

||x^(n+1) − x^(n)|| ≤ ε₁ ,   where ε₁ = (1 − ||B||)·ε / ||B||.

If ||B|| ≤ 1/2, then one can apply the simpler termination criterion

||x^(n+1) − x^(n)|| ≤ ε.

Example 1. Solving a system of linear equations using the Jacobi method.

Let the system of equations be given:

It is required to find a solution to the system with accuracy

Let us reduce the system to a form convenient for iteration:

Let us choose as the initial approximation, for example, the vector of the right-hand side d.

Then the first iteration looks like this:

The following approximations to the solution are obtained similarly.

Let us find the norm of the matrix B.

We will use the norm

||B|| = max_i Σ_j |b_ij| .

Since the sum of the absolute values of the elements in each row is 0.2, we have ||B|| = 0.2 ≤ 1/2, so in this problem we may use the simpler criterion for ending the iterations:

||x^(n+1) − x^(n)|| ≤ ε.

Let's calculate the norms of vector differences:

Since this value is smaller than the specified accuracy, the required accuracy was achieved at the fourth iteration.

Answer: x_1 = 1.102, x_2 = 0.991, x_3 = 1.011.

Seidel method .

The method can be considered a modification of the Jacobi method. Its main idea is that when computing the next, (n+1)-th, approximation to the unknown x_i for i > 1, the already found (n+1)-th approximations to the unknowns x_1, x_2, …, x_{i−1} are used, rather than the n-th approximations as in the Jacobi method.

The calculation formula of the method in coordinate notation looks as follows:

x_i^(n+1) = ( b_i − Σ_{j<i} a_ij·x_j^(n+1) − Σ_{j>i} a_ij·x_j^(n) ) / a_ii .

The convergence conditions and the criterion for ending iterations can be taken the same as in the Jacobi method.

Example 2. Solving systems of linear equations using the Seidel method.

Let us consider in parallel the solution of 3 systems of equations:

Let us reduce the systems to a form convenient for iterations:

Note that the convergence condition ||B|| < 1 is satisfied only for the first system. Let us compute the first 3 approximations to the solution in each case.

1st system:

The exact solution will be the following values: x 1 = 1.4, x 2 = 0.2 . The iterative process converges.

2nd system:

It can be seen that the iteration process diverges.

Exact solution x 1 = 1, x 2 = 0.2 .

3rd system:

It can be seen that the iterative process has entered a cycle.

Exact solution x 1 = 1, x 2 = 2 .

Let the matrix A of the system of equations be symmetric and positive definite. Then the Seidel method converges for any choice of the initial approximation. No additional conditions on the smallness of the norm of any matrix are imposed.

Simple iteration method.

If A is a symmetric and positive definite matrix, then the system of equations is often reduced to the equivalent form:

x = x − τ(Ax − b),   where τ is the iteration parameter.

The calculation formula of the simple iteration method in this case has the form:

x^(n+1) = x^(n) − τ(Ax^(n) − b),

and the parameter τ > 0 is chosen so as to minimize, if possible, the norm ||E − τA||.

Let λ_min and λ_max be the minimum and maximum eigenvalues of the matrix A. The optimal choice of the parameter is

τ = 2 / (λ_min + λ_max).

In this case the norm ||E − τA|| takes its minimum value, equal to

(λ_max − λ_min) / (λ_max + λ_min).
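A minimal sketch of this variant is given below: it computes the eigenvalues with NumPy, sets τ = 2/(λ_min + λ_max), and iterates. The symmetric positive definite system used is illustrative, not the MathCAD example that follows.

```python
import numpy as np

def simple_iteration_tau(A, b, eps=1e-3, max_iter=10000):
    """x(n+1) = x(n) - tau*(A·x(n) - b) with the optimal tau = 2/(lmin + lmax);
    A is assumed to be symmetric and positive definite."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    eigvals = np.linalg.eigvalsh(A)        # eigenvalues of the symmetric matrix A
    lmin, lmax = eigvals[0], eigvals[-1]
    tau = 2.0 / (lmin + lmax)
    rho = (lmax - lmin) / (lmax + lmin)    # value of ||E - tau*A||, must be < 1
    x = np.zeros_like(b)
    for n in range(1, max_iter + 1):
        x_new = x - tau * (A @ x - b)
        if np.max(np.abs(x_new - x)) < eps:
            return x_new, n, tau, rho
        x = x_new
    return x, max_iter, tau, rho

# Illustrative symmetric positive definite system with exact solution (1, 1, 1):
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([5.0, 5.0, 3.0])
print(simple_iteration_tau(A, b))
```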

Example 3. Solving a system of linear equations using the simple iteration method (in MathCAD).

Let the system of equations Ax = b be given

To construct the iterative process, we find the eigenvalues of the matrix A:

– the built-in function for finding eigenvalues is used.

Let us calculate the iteration parameter τ and check the convergence condition:

The convergence condition is satisfied.

Let us take the initial approximation, the vector x0, set the accuracy to 0.001, and find the successive approximations using the program below:

Exact solution

Comment. If the program returns the rez matrix, then you can view all the iterations found.

The simple iteration method, also called the method of successive approximations, is a mathematical algorithm for finding the value of an unknown quantity by gradually refining it. Its essence is that, as the name suggests, by successively expressing new approximations from the initial one, more and more refined results are obtained. This method is used to find the value of a variable of a given function, as well as to solve systems of equations, both linear and nonlinear.

Let us consider how this method is implemented when solving SLAEs. The simple iteration method has the following algorithm:

1. Check that the convergence condition holds for the original matrix. Convergence theorem: if the original matrix of the system has diagonal dominance (i.e., in each row the element on the main diagonal is greater in absolute value than the sum of the absolute values of the remaining elements of that row), then the simple iteration method converges.

2. The matrix of the original system does not always have diagonal dominance. In such cases the system can be transformed: the equations that satisfy the convergence condition are left untouched, and linear combinations are formed with those that do not, i.e., the equations are multiplied, added and subtracted from one another until the desired result is obtained.

If the resulting system still has inconvenient coefficients on the main diagonal, then terms of the form c_i·x_i are added to both sides of such an equation, with signs that coincide with the signs of the diagonal elements.

3. Transformation of the resulting system to normal form:

x̄ = β̄ + α·x̄

This can be done in many ways, for example as follows: from the first equation we express x_1 in terms of the other unknowns, from the second we express x_2, from the third we express x_3, and so on. In this case we use the formulas:

α_ij = −a_ij / a_ii ,

β_i = b_i / a_ii .
One should again make sure that the resulting normal-form system satisfies the convergence condition:

Σ_{j=1..n} |α_ij| < 1   for i = 1, 2, …, n.

4. We now apply the method of successive approximations itself; a short code sketch of steps 3 and 4 is given after this list.

x^(0) is the initial approximation; from it we express x^(1), and then from x^(1) we express x^(2). The general formula in matrix form looks like this:

x^(n) = β̄ + α·x^(n−1)

We calculate until we achieve the required accuracy:

max_i |x_i^(k) − x_i^(k+1)| ≤ ε
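A minimal sketch of steps 3 and 4 of this algorithm, assuming the system already has diagonal dominance; the matrix and the helper names are illustrative.

```python
import numpy as np

def normal_form(A, b):
    """Step 3: alpha_ij = -a_ij/a_ii (alpha_ii = 0), beta_i = b_i/a_ii."""
    A, b = np.asarray(A, dtype=float), np.asarray(b, dtype=float)
    d = np.diag(A)
    alpha = -A / d[:, None]
    np.fill_diagonal(alpha, 0.0)
    return alpha, b / d

def successive_approximations(alpha, beta, eps=1e-3, max_iter=1000):
    """Step 4: x(n) = beta + alpha·x(n-1), starting from x(0) = beta."""
    x = beta.copy()
    for n in range(1, max_iter + 1):
        x_new = beta + alpha @ x
        if np.max(np.abs(x_new - x)) <= eps:
            return x_new, n
        x = x_new
    return x, max_iter

# Illustrative diagonally dominant system with exact solution (1, 1, 1):
A = np.array([[10.0,  1.0,  1.0],
              [ 2.0, 10.0,  1.0],
              [ 2.0,  2.0, 10.0]])
b = np.array([12.0, 13.0, 14.0])
alpha, beta = normal_form(A, b)
assert np.all(np.abs(alpha).sum(axis=1) < 1)   # the convergence check of step 3
print(successive_approximations(alpha, beta))
```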

So, let's put the simple iteration method into practice. Example:
Solve SLAE:

4.5x1-1.7x2+3.5x3=2
3.1x1+2.3x2-1.1x3=1
1.8x1+2.5x2+4.7x3=4,   with accuracy ε = 10^(-3)

Let us check whether the diagonal elements dominate in absolute value.

We see that only the third equation satisfies the convergence condition. Let us transform the first and second equations. Add the second equation to the first:

7.6x1+0.6x2+2.4x3=3

From the third we subtract the first:

-2.7x1+4.2x2+1.2x3=2

We converted the original system into an equivalent one:

7.6x1+0.6x2+2.4x3=3
-2.7x1+4.2x2+1.2x3=2
1.8x1+2.5x2+4.7x3=4

Now let's bring the system to its normal form:

x1=0.3947-0.0789x2-0.3158x3
x2=0.4762+0.6429x1-0.2857x3
x3= 0.8511-0.383x1-0.5319x2

We check the convergence of the iterative process:

0.0789 + 0.3158 = 0.3947 < 1
0.6429 + 0.2857 = 0.9286 < 1
0.383 + 0.5319 = 0.9149 < 1, i.e. the condition is met.

Initial approximation: x^(0) = (0.3947; 0.4762; 0.8511)^T.

We substitute these values ​​into the normal form equation and obtain the following values:

x^(1) = (0.08835; 0.486793; 0.446639)^T.

Substituting new values, we get:

x^(2) = (0.215243; 0.405396; 0.558336)^T.

We continue calculations until we approach values ​​that satisfy the given condition.

x (7) = 0.441091

Let's check the correctness of the results obtained:

4.5·0.1880 − 1.7·0.441 + 3.5·0.544 = 2.0003
3.1·0.1880 + 2.3·0.441 − 1.1·0.544 = 0.9987
1.8·0.1880 + 2.5·0.441 + 4.7·0.544 = 3.9977

The results obtained by substituting the found values into the original equations fully satisfy the equations of the system.

As we can see, the simple iteration method gives fairly accurate results, but solving this system required quite a lot of time and rather cumbersome calculations.


