Engineering Math Numerical Techniques-I

Contents

**Describe Algebraic and Transcendental Equations**

**Apply Bisection Method for finding the Roots of an Equation**

**Apply Regula-Falsi Method for finding the Roots of an Equation**

**Recall Newton-Raphson Method**

**Apply Newton-Raphson Method for finding the Roots of an Equation**

**Apply Secant Method for finding the Roots of an Equation**

**Describe the System of Linear Equations**

**Recall Gauss Elimination Method for solving the Linear Equations**

**Apply the Gauss Elimination Method for solving the Linear Equations**

**Recall Gauss-Jordan Elimination Method for solving the Linear Equations**

**Apply the Gauss-Jordan Elimination Method for solving the Linear Equations**

**Apply LU Decomposition Method for solving the Linear Equations**

**Describe Jacobi Method for solving the Linear Equations**

**Apply Jacobi Method for solving the Linear Equations**

**Describe Gauss-Seidel Method for solving the Linear Equations**

**Apply Gauss-Seidel Method for solving the Linear Equations**

**Recall Eigenvalues and corresponding Eigenvectors**

**Describe the Rayleigh Power Method to evaluate Largest Eigenvalues and corresponding Eigenvectors**

**Describe Algebraic and Transcendental Equations**

Algebraic equations involve only a finite number of algebraic operations (addition, subtraction, multiplication, division, and raising to powers) on the unknown variable. Transcendental equations, on the other hand, involve transcendental functions of the unknown, such as exponential, logarithmic, and trigonometric functions, in addition to algebraic operations.

Here are some examples of algebraic and transcendental equations:

- Algebraic equation:

x^{2} + 3x – 10 = 0

This equation can be solved using algebraic operations to find the values of x that satisfy the equation.

- Transcendental equation:

e^{x} + 3x – 10 = 0

This equation involves the transcendental function of exponential (e^{x}) in addition to algebraic operations. It cannot be solved using algebraic operations alone and requires numerical or graphical methods to find the solutions.

- Algebraic equation:

2x + 5 = 9

This equation involves only algebraic operations and can be solved by isolating the variable (x) on one side of the equation.

- Transcendental equation:

sin(x) + x = 3

This equation involves the transcendental function of trigonometry (sin(x)) and requires numerical or graphical methods to solve for the variable (x) that satisfies the equation.

- Algebraic equation:

2x^{3} + 5x^{2} – 3x + 7 = 0

This equation involves only algebraic operations and can be solved by factoring, using the rational root theorem or numerical methods.

- Transcendental equation:

ln(x) + x^{2} = 5

This equation involves the transcendental function of logarithm (ln(x)) and requires numerical or graphical methods to find the values of x that satisfy the equation.

**Recall Bisection Method**

The bisection method is a numerical method used to find the root of a function. It works by repeatedly bisecting an interval and selecting the subinterval in which the root lies, until the desired accuracy is achieved. Here are the steps of the bisection method:

- Choose two initial points a and b such that f(a) and f(b) have opposite signs.
- Calculate the midpoint c = (a+b)/2.
- Evaluate the function at the midpoint, f(c).
- If f(c) = 0, then c is the root of the function. If not, check the sign of f(c).
- If f(c) has the same sign as f(a), then the root lies in the interval (c,b). Otherwise, the root lies in the interval (a,c).
- Repeat steps 2 to 5 until the desired accuracy is achieved.
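The steps above can be sketched in Python; the function, interval, tolerance, and iteration cap below are illustrative choices, not part of the method itself:

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Find a root of f in [a, b] by repeatedly halving the interval."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        # Keep the half-interval that still brackets the sign change
        if f(a) * fc < 0:
            b = c
        else:
            a = c
    return (a + b) / 2

# Example: root of x^3 - 3x + 1 in [1, 2]
root = bisection(lambda x: x**3 - 3*x + 1, 1, 2)
```

Each pass halves the bracket, so after n iterations the error is at most (b – a) / 2^{n}.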

**Apply Bisection Method for finding the Roots of an Equation**

Let’s apply the bisection method to find a root of the equation f(x) = x^{3} – 3x + 1 in the interval [1, 2].

Step 1: Choose the initial points a = 1 and b = 2. Evaluate f(a) and f(b):

f(a) = (1)^{3} – 3(1) + 1 = -1

f(b) = (2)^{3} – 3(2) + 1 = 3

Since f(a) and f(b) have opposite signs, a root must exist in the interval [1, 2].

Iteration 1: The midpoint is c = (1 + 2)/2 = 1.5, and f(1.5) = (1.5)^{3} – 3(1.5) + 1 = -0.125. Since f(c) has the same sign as f(a), the root must lie in the interval [1.5, 2].

Iteration 2: c = (1.5 + 2)/2 = 1.75, and f(1.75) = 1.109375. Since f(c) has the opposite sign to f(a), the root must lie in the interval [1.5, 1.75].

Iteration 3: c = 1.625, and f(c) = 0.416016. The root must lie in the interval [1.5, 1.625].

Iteration 4: c = 1.5625, and f(c) = 0.127197. The root must lie in the interval [1.5, 1.5625].

Iteration 5: c = 1.53125, and f(c) = -0.003387. The root must lie in the interval [1.53125, 1.5625].

Iteration 6: c = 1.546875, and f(c) = 0.060772. The root must lie in the interval [1.53125, 1.546875].

Iteration 7: c = 1.5390625, and f(c) = 0.028410. The root must lie in the interval [1.53125, 1.5390625].

Iteration 8: c = 1.53515625, and f(c) = 0.012441. The root must lie in the interval [1.53125, 1.53515625].

Each iteration halves the interval, so after n iterations the error is at most (b – a)/2^{n}. After 8 iterations the interval has width 1/2^{8} ≈ 0.0039; continuing in the same way, after 20 iterations the width is below 10^{-6}.

Therefore, the approximate root of the equation f(x) = x^{3} – 3x + 1 in the interval [1, 2] is x ≈ 1.532089.

**Recall Regula-Falsi Method**

The Regula-Falsi Method is a numerical method for finding the roots of a function. It is also known as the method of false position. It is similar to the bisection method, but instead of bisecting the interval, it uses linear interpolation between two points on the function.

The Regula-Falsi Method starts with two initial guesses, a and b, such that f(a) and f(b) have opposite signs, indicating that the root lies between them. The method then finds the point c where the straight line (chord) connecting the points (a, f(a)) and (b, f(b)) intersects the x-axis. If f(c) has the same sign as f(a), the root lies between c and b, so a is updated to c; otherwise the root lies between a and c, so b is updated to c. This process is repeated until the desired level of accuracy is achieved.

The formula for calculating the new guess, c, is:

c = (af(b) – bf(a)) / (f(b) – f(a))

where f(a) and f(b) are the values of the function at a and b, respectively.

The Regula-Falsi Method often converges faster than the bisection method. Because it always keeps the root bracketed, it is guaranteed to converge for a continuous function, but one endpoint can remain fixed for many iterations, in which case convergence is only linear and can be slow.
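A Python sketch of the method, using the interpolation formula above (the test function, tolerance, and iteration cap are illustrative choices):

```python
def regula_falsi(f, a, b, tol=1e-8, max_iter=200):
    """Find a root of f in [a, b] by the method of false position."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        # x-intercept of the chord through (a, fa) and (b, fb)
        c = (a * fb - b * fa) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc  # root lies in [a, c]
        else:
            a, fa = c, fc  # root lies in [c, b]
    return c

# Example: root of x^3 - 3x + 1 in [1, 2]
root = regula_falsi(lambda x: x**3 - 3*x + 1, 1, 2)
```

Note that the stopping test here is on |f(c)| rather than on the interval width, since one endpoint may never move.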

**Apply Regula-Falsi Method for finding the Roots of an Equation**

Let’s apply the Regula-Falsi Method to find a root of the equation f(x) = x^{3} – 3x + 1 in the interval [1, 2].

Step 1: Choose two initial guesses, a = 1 and b = 2.

Step 2: Calculate the function values at the initial guesses: f(a) = -1 and f(b) = 3. Since they have opposite signs, a root lies between a and b.

Step 3: Calculate the first approximation of the root using the formula:

c = (af(b) – bf(a)) / (f(b) – f(a))

= (1 * 3 – 2 * (-1)) / (3 – (-1))

= 1.25

Step 4: Calculate the function value at the new guess, f(c) = (1.25)^{3} – 3(1.25) + 1 = -0.796875.

Step 5: Determine which subinterval contains the root by checking the sign of f(c). Since f(c) has the same sign as f(a), the root must lie between c and b.

Step 6: Update the interval for the next iteration: set a = c = 1.25.

Step 7: Repeat the Regula-Falsi Method using the updated interval [1.25, 2].

Iteration 2: With f(a) = -0.796875 and f(b) = 3:

c = (1.25 * 3 – 2 * (-0.796875)) / (3 – (-0.796875)) = 5.34375 / 3.796875 ≈ 1.407407

f(c) = (1.407407)^{3} – 3(1.407407) + 1 ≈ -0.434436. Since f(c) has the same sign as f(a), set a = 1.407407.

Iteration 3: With f(a) ≈ -0.434436 and f(b) = 3:

c = (1.407407 * 3 – 2 * (-0.434436)) / (3 – (-0.434436)) ≈ 5.091094 / 3.434436 ≈ 1.482367

f(c) ≈ -0.189730, so set a = 1.482367.

Iteration 4: With f(a) ≈ -0.189730 and f(b) = 3:

c = (1.482367 * 3 – 2 * (-0.189730)) / (3 – (-0.189730)) ≈ 4.826561 / 3.189730 ≈ 1.513157

f(c) ≈ -0.074880, so set a = 1.513157.

Notice that the endpoint b = 2 never moves: f is convex on this interval, so the method approaches the root from the left only. Continuing in the same way, the guesses converge to x ≈ 1.532089, the same root found by the bisection method, and we stop once |f(c)| is below the desired tolerance.

The Regula-Falsi Method often converges faster than the Bisection Method, but when one endpoint stalls, as it does here, convergence is only linear and can be slow for highly nonlinear functions. In such cases, other numerical methods may be more appropriate.

**Recall Newton-Raphson Method**

The Newton-Raphson Method, also known as Newton’s Method, is an iterative numerical method used to find the roots of a function. Given an initial guess x_{0}, the method uses the tangent line at that point to approximate the function and find a new guess x_{1}. This process is repeated until the root is found, or until the method fails to converge.

The formula for Newton-Raphson Method is:

x_{n+1} = x_{n} – f(x_{n}) / f'(x_{n})

where x_{n} is the current guess, f(x_{n}) is the function value at x_{n}, and f'(x_{n}) is the derivative of the function at x_{n}. The next guess x_{n+1} is obtained by finding the intersection of the tangent line at x_{n} with the x-axis.

The method works best when the initial guess is close to the root and the function has a continuous derivative. If the derivative is zero or very close to zero at the root, the method may fail to converge or converge very slowly. It is also possible for the method to converge to a local maximum or minimum instead of a root.

The Newton-Raphson Method is a popular and widely used method due to its fast convergence rate, typically quadratic, and relatively simple implementation. However, it requires the computation of the derivative, which can be difficult or computationally expensive for some functions.

**Apply Newton-Raphson Method for finding the Roots of an Equation**

Let’s apply the Newton-Raphson Method to find the roots of the equation f(x) = x^{3} – 3x + 1. We’ll use an initial guess of x_{0} = 1.

Step 1: Define the function f(x) = x^{3} – 3x + 1 and its derivative f'(x) = 3x^{2} – 3.

Step 2: Choose an initial guess x_{0} = 1.

Step 3: Calculate the function value and the derivative at the initial guess:

f(x_{0}) = 1^{3} – 3(1) + 1 = -1

f'(x_{0}) = 3(1)^{2} – 3 = 0

Since f'(x_{0}) is zero, we cannot use the Newton-Raphson Method with this initial guess. We’ll choose a new guess that is slightly different.

Step 4: Choose a new guess x_{1} = 1.5.

Step 5: Calculate the function value and the derivative at the new guess:

f(x_{1}) = 1.5^{3} – 3(1.5) + 1 = -0.125

f'(x_{1}) = 3(1.5)^{2} – 3 = 3.75

Step 6: Calculate the next guess using the formula:

x_{2} = x_{1} – f(x_{1}) / f'(x_{1})

= 1.5 – (-0.125) / 3.75

= 1.533333

Step 7: Calculate the function value and the derivative at the new guess:

f(x_{2}) = 1.533333^{3} – 3(1.533333) + 1 = 0.005037

f'(x_{2}) = 3(1.533333)^{2} – 3 = 4.053333

Step 8: Calculate the next guess using the formula:

x_{3} = x_{2} – f(x_{2}) / f'(x_{2})

= 1.533333 – 0.005037 / 4.053333

= 1.532091

Step 9: Calculate the function value and the derivative at the new guess:

f(x_{3}) = 1.532091^{3} – 3(1.532091) + 1 = 0.000008

f'(x_{3}) = 3(1.532091)^{2} – 3 = 4.041909

Step 10: Calculate the next guess using the formula:

x_{4} = x_{3} – f(x_{3}) / f'(x_{3})

= 1.532091 – 0.000008 / 4.041909

= 1.532089

We can continue this process until we reach the desired level of accuracy. In this case, we can stop after a few iterations, since the root is already known to several decimal places.

The Newton-Raphson Method is a powerful and widely used method, especially for functions that have simple derivatives. It converges very quickly and is computationally efficient. However, it can fail to converge or converge to a local maximum or minimum instead of a root if the initial guess is not close enough to the root or if the function has complex behavior near the root.

**Recall Secant Method**

The Secant Method is a numerical method used to find the root of a nonlinear function. It is similar to the Newton-Raphson method, but instead of using the derivative of the function, it uses an estimate of the derivative based on two points. The method starts with two initial guesses for the root and then iteratively improves the guesses using the formula:

x_{n+1} = x_{n} – f(x_{n})(x_{n} – x_{n-1}) / (f(x_{n}) – f(x_{n-1}))

where f(x) is the function whose root is being found, x_{n} and x_{n-1} are the current and previous guesses for the root, and x_{n+1} is the new guess. The method continues until the difference between consecutive guesses is smaller than a specified tolerance, indicating that the root has been found.

The Secant Method does not always converge and may diverge if the initial guesses are too far apart or if the function has a horizontal tangent or a singularity. Therefore, it is recommended to use a bracketing method, such as the Bisection or Regula-Falsi method, to obtain initial guesses that bracket the root before applying the Secant Method.
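A Python sketch of the iteration; the tolerance and iteration cap are illustrative:

```python
def secant(f, x0, x1, tol=1e-6, max_iter=100):
    """Newton-like iteration with the derivative replaced by a secant slope."""
    f0, f1 = f(x0), f(x1)
    for _ in range(max_iter):
        if f1 == f0:
            raise ZeroDivisionError("secant is horizontal; cannot continue")
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x2 - x1) < tol:
            return x2
        x0, f0 = x1, f1
        x1, f1 = x2, f(x2)
    return x1

# Example: root of x^3 - 2x - 5 with guesses 2 and 3
root = secant(lambda x: x**3 - 2*x - 5, 2, 3)
```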

**Apply Secant Method for finding the Roots of an Equation**

Let’s apply the Secant Method to find the root of the equation f(x) = x^{3} – 2x – 5 with initial guesses x_{0} = 2 and x_{1} = 3. We’ll stop the iteration when the difference between consecutive guesses is less than 10^{-6}.

First evaluate the function at the initial guesses: f(x_{0}) = 2^{3} – 2(2) – 5 = -1 and f(x_{1}) = 3^{3} – 2(3) – 5 = 16.

Using the Secant Method formula:

x_{2} = x_{1} – f(x_{1})(x_{1} – x_{0}) / (f(x_{1}) – f(x_{0}))

= 3 – 16(3 – 2) / (16 – (-1))

= 3 – 16/17

≈ 2.058824

Now, we use x_{1} and x_{2} as the new guesses and repeat the process. With f(x_{2}) ≈ -0.390798:

x_{3} = x_{2} – f(x_{2})(x_{2} – x_{1}) / (f(x_{2}) – f(x_{1}))

= 2.058824 – (-0.390798)(2.058824 – 3) / (-0.390798 – 16)

≈ 2.081264

Continuing in the same way gives x_{4} ≈ 2.094824, x_{5} ≈ 2.094549, and x_{6} ≈ 2.094551. The difference between consecutive guesses falls below 10^{-6} within another iteration, so we stop and conclude that the root of the equation f(x) = x^{3} – 2x – 5 is approximately x ≈ 2.094551.

**Describe the System of Linear Equations**

A system of linear equations is a collection of two or more linear equations involving the same set of variables. In general, a system of linear equations can be written as:

a_{11}x_{1} + a_{12}x_{2} + ….. + a_{1n}x_{n} = b_{1}

a_{21}x_{1} + a_{22}x_{2} + ….. + a_{2n}x_{n} = b_{2}

…….

a_{m1}x_{1} + a_{m2}x_{2} + ….. + a_{mn}x_{n} = b_{m}

where x_{1}, x_{2}, ….., x_{n} are the unknowns or variables, a_{ij} are the coefficients of the variables, and b_{i} are the constants on the right-hand side of each equation.

This system of equations can be represented in matrix form as:

Ax = b

where A is the coefficient matrix, x is the column vector of the unknowns, and b is the column vector of constants. The system of equations is said to be consistent if there exists at least one solution that satisfies all the equations, and inconsistent if there is no solution that satisfies all the equations.

Solving a system of linear equations involves finding the values of the unknowns that satisfy all the equations in the system. There are several methods for solving systems of linear equations, including Gaussian elimination, LU decomposition, and matrix inversion.

**Recall Gauss Elimination Method for solving the Linear Equations**

Gauss elimination is a method for solving a system of linear equations by transforming the augmented matrix into an upper triangular form, and then back-substituting to find the solution. The steps involved in Gauss elimination are as follows:

Step 1: Write the augmented matrix of the system of linear equations.

Step 2: Choose the pivot element, which is the first non-zero element in the first row.

Step 3: Use row operations to eliminate all the elements below the pivot element in the same column. The row operations include multiplying a row by a non-zero constant, adding a multiple of one row to another row, and interchanging two rows.

Step 4: Move to the next row and repeat steps 2 and 3, choosing the next pivot element as the first non-zero element in the row.

Step 5: Continue the process until the augmented matrix is in row echelon form, i.e., all the elements below the diagonal are zero.

Step 6: Back-substitute to find the values of the unknowns.
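These steps can be sketched in Python; partial pivoting (swapping in the largest available pivot) is added here for numerical stability, a common refinement of the basic method, and the 3×3 system is an illustrative choice:

```python
def gauss_eliminate(A, b):
    """Solve Ax = b by forward elimination and back-substitution."""
    n = len(A)
    # Build an augmented copy so the inputs are not modified
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Partial pivoting: move the largest pivot candidate into row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # Back-substitution on the upper triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

# Example 3x3 system
x = gauss_eliminate([[1, 2, 3], [2, 3, 1], [3, 1, 2]], [14, 11, 13])
```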

Let’s take an example to illustrate the Gauss elimination method:

Suppose we have the following system of linear equations:

x_{1} + 2x_{2 }+ 3x_{3} = 14

2x_{1 }+ 3x_{2} + x_{3} = 11

3x_{1} + x_{2} + 2x_{3} = 13

Step 1: Write the augmented matrix of the system:

1 2 3 14

2 3 1 11

3 1 2 13

Step 2: Choose the pivot element as a_{{11}} = 1.

Step 3: Use row operations to eliminate the elements below the pivot element (R_{2} → R_{2} – 2R_{1}, R_{3} → R_{3} – 3R_{1}):

1 2 3 14

0 -1 -5 -17

0 -5 -7 -29

Step 4: Choose the pivot element as a_{22} = -1.

Step 5: Use row operations to eliminate the element below the pivot element (R_{3} → R_{3} – 5R_{2}):

1 2 3 14

0 -1 -5 -17

0 0 18 56

Step 6: The augmented matrix is now in row echelon form. Back-substitute to find the values of the unknowns:

x_{3} = 56/18 = 28/9

x_{2} = 17 – 5x_{3} = 17 – 140/9 = 13/9

x_{1} = 14 – 2x_{2} – 3x_{3} = 14 – 26/9 – 84/9 = 16/9

Therefore, the solution of the system of linear equations is x_{1} = 16/9, x_{2} = 13/9, x_{3} = 28/9.

**Apply the Gauss Elimination Method for solving the Linear Equations**

Let’s solve the following system of linear equations using the Gauss Elimination Method:

x + 2y – z = 3

3x – 4y + 2z = -1

2x + 3y – z = 4

Step 1: Write the augmented matrix for the system of equations:

1 2 -1 | 3

3 -4 2 | -1

2 3 -1 | 4

Step 2: Perform row operations to get zeros below the diagonal elements of the first column:

1 2 -1 | 3

0 -10 5 | -10

0 -1 1 | -2

Step 3: Perform row operations to get zeros below the diagonal element of the second column (R3 → R3 – (1/10)R2):

1 2 -1 | 3

0 -10 5 | -10

0 0 1/2 | -1

Step 4: Solve for z:

(1/2)z = -1

z = -2

Step 5: Substitute the value of z into the second equation and solve for y:

-10y + 5(-2) = -10

-10y – 10 = -10

y = 0

Step 6: Substitute the values of y and z into the first equation and solve for x:

x + 2(0) – (-2) = 3

x + 2 = 3

x = 1

Therefore, the solution of the system of equations is x = 1, y = 0, z = -2.

**Recall Gauss-Jordan Elimination Method for solving the Linear Equations**

The Gauss-Jordan Elimination Method is a variant of the Gauss Elimination Method that involves additional steps to convert the matrix to row-echelon form and then back to reduced row-echelon form. The main steps of the Gauss-Jordan Elimination Method are as follows:

- Write the augmented matrix for the system of linear equations.
- Use elementary row operations to transform the matrix into row-echelon form.
- Use elementary row operations to further transform the matrix into reduced row-echelon form.
- Interpret the resulting matrix to find the solution to the system of linear equations.

In step 2, the goal is to transform the matrix into row-echelon form by performing the following operations:

- Swap two rows.
- Multiply a row by a nonzero constant.
- Add a multiple of one row to another row.

In row-echelon form, the leading coefficient of each row is a nonzero number, and the leading coefficient of each row is to the right of the leading coefficient of the row above it.

In step 3, the goal is to transform the matrix into reduced row-echelon form by performing the following operations:

- Divide each nonzero row by its leading coefficient.
- Add a multiple of one row to another row to create zeros above the leading coefficients.

In reduced row-echelon form, the leading coefficient of each row is 1, and there are zeros above and below each leading coefficient.

Once the matrix is in reduced row-echelon form, the solution to the system of linear equations can be found by interpreting the resulting matrix. Each nonzero row of the matrix corresponds to an equation in the system, and the nonzero entries in the row represent the coefficients of the variables. The rightmost entry of the row represents the constant term in the equation.

The solution to the system can be found by setting each variable corresponding to a nonzero row to the value of the constant term and setting the variables corresponding to the zero rows to arbitrary values.

The Gauss-Jordan Elimination Method is often preferred over the Gauss Elimination Method because it produces the solution directly, without the need for back substitution.
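The full reduction can be sketched in Python; the pivot swap is an illustrative stability refinement, and the 3×3 system is an illustrative choice:

```python
def gauss_jordan(A, b):
    """Reduce [A | b] to reduced row-echelon form; last column is the solution."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        # Bring the largest pivot candidate into row k (stability refinement)
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]  # leading coefficient becomes 1
        for i in range(n):
            if i != k:
                factor = M[i][k]
                M[i] = [mi - factor * mk for mi, mk in zip(M[i], M[k])]
    return [M[i][n] for i in range(n)]

# Example: x + 2y - z = 3,  2x - y + 3z = 9,  3x + y - 2z = 0
x, y, z = gauss_jordan([[1, 2, -1], [2, -1, 3], [3, 1, -2]], [3, 9, 0])
```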

**Apply the Gauss-Jordan Elimination Method for solving the Linear Equations**

Let’s consider the following system of linear equations:

x + 2y – z = 3

2x – y + 3z = 9

3x + y – 2z = 0

To solve this system using the Gauss-Jordan Elimination Method, we start by writing the augmented matrix:

[1 2 -1 | 3]

[2 -1 3 | 9]

[3 1 -2 | 0]

We then use elementary row operations to transform the matrix into row-echelon form. The leading 1 is already in place in the first row, so we first create zeros below it (R2 → R2 – 2R1, R3 → R3 – 3R1):

[1 2 -1 | 3]

[0 -5 5 | 3]

[0 -5 1 | -9]

Next, we eliminate below the leading coefficient of the second row (R3 → R3 – R2):

[1 2 -1 | 3]

[0 -5 5 | 3]

[0 0 -4 | -12]

Now we have the matrix in row-echelon form. We proceed to transform it into reduced row-echelon form. Dividing the second row by -5 and the third row by -4 gives:

[1 2 -1 | 3]

[0 1 -1 | -3/5]

[0 0 1 | 3]

We then create zeros above each leading 1 (R2 → R2 + R3, R1 → R1 + R3, R1 → R1 – 2R2):

[1 0 0 | 6/5]

[0 1 0 | 12/5]

[0 0 1 | 3]

Finally, we interpret the resulting matrix to find the solution to the system of linear equations. Each row now reads off one variable directly: x = 6/5, y = 12/5, z = 3.

We can check the solution in the original equations: 6/5 + 24/5 – 3 = 3, 12/5 – 12/5 + 9 = 9, and 18/5 + 12/5 – 6 = 0.

Therefore, the solution to the system of linear equations is x = 6/5, y = 12/5, z = 3.

**Apply LU Decomposition Method for solving the Linear Equations**

The LU decomposition method is a numerical technique used to solve systems of linear equations. It decomposes a matrix into a lower triangular matrix (L) and an upper triangular matrix (U). The LU decomposition method can be used to solve linear equations in the form of Ax=b, where A is the coefficient matrix, x is the unknown vector, and b is the constant vector.

To apply the LU decomposition method, follow these steps:

- Decompose the coefficient matrix A into an upper triangular matrix U and a lower triangular matrix L, such that A = LU. This can be done using various techniques such as Gaussian elimination or Crout’s method.
- Rewrite the system of linear equations as LUx=b.
- Define a new variable y such that Ly=b. Solve for y by forward substitution.
- Once y is known, solve for x by backward substitution, such that Ux=y.
- The solution vector x obtained from step 4 is the solution to the system of linear equations.
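The procedure above can be sketched in Python using Doolittle’s scheme (unit diagonal in L), which assumes no row interchanges are needed; a production solver would add pivoting. The 3×3 system is an illustrative choice:

```python
def lu_decompose(A):
    """Doolittle decomposition A = LU with a unit diagonal in L (no pivoting)."""
    n = len(A)
    L = [[float(i == j) for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            L[i][k] = U[i][k] / U[k][k]       # multiplier stored in L
            for j in range(k, n):
                U[i][j] -= L[i][k] * U[k][j]  # zero out below the pivot
    return L, U

def lu_solve(L, U, b):
    """Forward-substitute Ly = b, then back-substitute Ux = y."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):
        y[i] = b[i] - sum(L[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

# Example system: 3x + 2y - z = 1, 2x - 2y + 4z = -2, -x + y/2 - z = 0
L, U = lu_decompose([[3, 2, -1], [2, -2, 4], [-1, 0.5, -1]])
x = lu_solve(L, U, [1, -2, 0])
```

Once A is factored, the same L and U can be reused to solve for many different right-hand sides b, which is the main practical advantage of the method.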

Example: To apply LU decomposition to solve the system of linear equations:

3x + 2y – z = 1

2x – 2y + 4z = -2

-x + 1/2y – z = 0

We first write the system in the form Ax = b, where A is the coefficient matrix, x is the vector of unknowns, and b is the right-hand side vector.

A = [3 2 -1; 2 -2 4; -1 1/2 -1]

x = [x; y; z]

b = [1; -2; 0]

Next, we perform the LU decomposition of the matrix A. This can be done using Gaussian elimination; for this matrix, no row interchanges are needed. The LU decomposition gives us:

A = LU

where L is a lower triangular matrix and U is an upper triangular matrix.

We then solve the system by first solving the equation Ly = b for y using forward substitution, and then solving the equation Ux = y for x using backward substitution.

- Perform the LU Decomposition of A (here by Doolittle’s method, without pivoting):

Initialize L as an identity matrix and U as a copy of A.

For each column k, compute the multiplier l_{ik} = u_{ik} / u_{kk} for every row i below the pivot, store it in L, and subtract l_{ik} times row k from row i of U to create zeros below the pivot.

Repeat for the remaining columns to obtain the unit lower triangular matrix L and the upper triangular matrix U.

Carrying this out for our matrix A, we get:

L = [1 0 0; 2/3 1 0; -1/3 -7/20 1]

U = [3 2 -1; 0 -10/3 14/3; 0 0 3/10]

2. Solve Ly = b for y using forward substitution:

Substitute L and b into Ly = b to obtain:

[1 0 0; 2/3 1 0; -1/3 -7/20 1][y_{1}; y_{2}; y_{3}] = [1; -2; 0]

Solve for y using forward substitution:

y_{1} = 1

(2/3)y_{1} + y_{2} = -2 => y_{2} = -2 – 2/3 = -8/3

(-1/3)y_{1} + (-7/20)y_{2} + y_{3} = 0 => y_{3} = 1/3 + (7/20)(-8/3) = 1/3 – 14/15 = -3/5

Thus, y = [1; -8/3; -3/5]

3. Solve Ux = y for x using backward substitution:

Substitute U and y into Ux = y to obtain:

[3 2 -1; 0 -10/3 14/3; 0 0 3/10][x_{1}; x_{2}; x_{3}] = [1; -8/3; -3/5]

Solve for x using backward substitution:

(3/10)x_{3} = -3/5 => x_{3} = -2

(-10/3)x_{2} + (14/3)x_{3} = -8/3 => x_{2} = (-8/3 + 28/3) / (-10/3) = -2

3x_{1} + 2x_{2} – x_{3} = 1 => 3x_{1} – 4 + 2 = 1 => x_{1} = 1

Thus, x = [1; -2; -2]

Therefore, the solution to the system of linear equations is x = 1, y = -2, z = -2, which can be checked in the original equations: 3(1) + 2(-2) – (-2) = 1, 2(1) – 2(-2) + 4(-2) = -2, and -(1) + (1/2)(-2) – (-2) = 0.

**Describe Jacobi Method for solving the Linear Equations**

The Jacobi method is an iterative method for solving a system of linear equations Ax = b. It is an iterative method because it starts with an initial guess for x and repeatedly updates the guess until a sufficiently accurate solution is obtained.

The Jacobi method works by rearranging the system of equations such that each variable is on the left-hand side of one equation and all the other terms are on the right-hand side. We can then update each variable by using the previous guess values of the other variables.

Specifically, we update each variable x_{i} using the equation:

x_{i}^{(k+1)} = (b_{i} – sum_{j=1, j≠i}^{n} a_{ij} x_{j}^{(k)}) / a_{ii}

where k is the iteration number, x_{i}^{(k+1)} is the updated value of the i^{th} variable in the (k+1)^{th} iteration, x_{j}^{(k)} is the j^{th} variable in the k^{th} iteration, a_{ij} is the element in the i^{th} row and j^{th} column of the matrix A, and a_{ii} is the diagonal element in the i^{th} row and i^{th} column of the matrix A.

The Jacobi method continues to update each variable in this way until a certain convergence criterion is met. One common convergence criterion is to check the difference between the previous and current values of x. The method terminates when the difference is below a specified tolerance level.

One advantage of the Jacobi method is that it is relatively easy to implement and can work for large sparse matrices. However, it can be slow to converge for certain types of matrices, such as those with large condition numbers.
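The update rule above translates almost directly into code. The sketch below assumes NumPy; the test system is a made-up, strictly diagonally dominant one (chosen so the iteration is guaranteed to converge), and the function name and tolerance are illustrative choices:

```python
import numpy as np

def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: every component is updated from the
    previous iterate only, so all updates could run in parallel."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    D = np.diag(A)                 # diagonal entries a_ii
    R = A - np.diagflat(D)         # off-diagonal part of A
    for _ in range(max_iter):
        x_new = (b - R @ x) / D    # x_i = (b_i - sum_{j != i} a_ij x_j) / a_ii
        if np.max(np.abs(x_new - x)) < tol:
            return x_new
        x = x_new
    return x

# Hypothetical strictly diagonally dominant system
A = np.array([[10.0, 2.0, 1.0], [2.0, 20.0, -2.0], [-2.0, 3.0, 10.0]])
b = np.array([8.0, -42.0, 12.0])
print(jacobi(A, b))   # converges to approximately [1, -2, 2]
```

Strict diagonal dominance of A is a sufficient condition for this loop to converge from any starting vector.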

**Apply Jacobi Method for solving the Linear Equations**

Let’s apply the Jacobi method to solve the following system of linear equations:

3x + 2y – z = 1

2x – 2y + 4z = -2

-x + 1/2y – z = 0

We can rewrite this system in the form Ax = b, where

A = [[3, 2, -1], [2, -2, 4], [-1, 1/2, -1]]

x = [x, y, z]

b = [1, -2, 0]

To apply the Jacobi method, we first rearrange each equation such that the variable we want to update is on the left-hand side, and all the other terms are on the right-hand side. This gives us:

x^{(k+1)} = (1 – 2y^{(k)} + z^{(k)}) / 3

y^{(k+1)} = (-2 – 2x^{(k)} – 4z^{(k)}) / -2

z^{(k+1)} = (0 + x^{(k)} – (1/2)y^{(k)}) / -1

where x^{(k+1)}, y^{(k+1)}, and z^{(k+1)} are the updated values of x, y, and z in the (k+1)th iteration, and x^{k}, y^{k}, and z^{k} are the previous values of x, y, and z in the kth iteration.

We start by guessing initial values for x, y, and z. Let’s use x^{0} = 0, y^{0} = 0, and z^{0} = 0. We then substitute these values into the above equations to get:

x^{1} = (1 – 2(0) + 0) / 3 = 1/3

y^{1} = (-2 – 2(0) + 0) / -2 = 1

z^{1} = (0 + 0 – 1/2(0)) / -1 = 0

We now have our first updated values. We can repeat this process by plugging in these updated values into the equations to get our next updated values. We can keep doing this until we reach a sufficiently accurate solution.

Let’s perform one more iteration:

x^{2} = (1 – 2(1) + 0) / 3 = -1/3

y^{2} = (-2 – 2(1/3) – 4(0)) / -2 = 4/3

z^{2} = (0 + 1/3 – (1/2)(1)) / -1 = 1/6

We can continue this process until the difference between the previous and current values of x, y, and z is below a specified tolerance level. Note, however, that the coefficient matrix of this system is not diagonally dominant, so the Jacobi iterates here do not settle down to a fixed solution; convergence is only guaranteed for matrices such as strictly diagonally dominant ones. The exact solution of the system, obtainable by a direct method such as Gaussian elimination, is x = 1, y = -2, z = -2.
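The iterates can be reproduced with a short NumPy script (a sketch; the matrix and vector are exactly those of this example). Running a few extra sweeps also shows the iterates growing in magnitude, consistent with the matrix not being diagonally dominant:

```python
import numpy as np

# The example system from this section (not diagonally dominant)
A = np.array([[3.0, 2.0, -1.0], [2.0, -2.0, 4.0], [-1.0, 0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])
D = np.diag(A)              # diagonal entries a_ii
R = A - np.diagflat(D)      # off-diagonal part of A

x = np.zeros(3)
for k in range(1, 7):
    x = (b - R @ x) / D     # one full Jacobi sweep
    print(k, x)
# sweep 1 gives [1/3, 1, 0] and sweep 2 gives [-1/3, 4/3, 1/6];
# later sweeps grow in magnitude rather than settling, so Jacobi
# fails to converge for this particular matrix
```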

**Describe Gauss-Seidel Method for solving the Linear Equations**

The Gauss-Seidel method is another iterative method for solving a system of linear equations Ax = b. Like the Jacobi method, it starts with an initial guess for x and repeatedly updates the guess until a sufficiently accurate solution is obtained. However, the Gauss-Seidel method uses the most recent updated values of the variables in each iteration, rather than the previous guess values as in the Jacobi method.

The Gauss-Seidel method works by updating each variable x_{i} using the most recent values of the other variables. Specifically, we update each variable x_{i} using the equation:

x_{i}^{(k+1)} = (b_{i} – sum_{j=1}^{i-1} a_{ij} x_{j}^{(k+1)} – sum_{j=i+1}^{n} a_{ij} x_{j}^{(k)}) / a_{ii}

where k is the iteration number, x_{i}^{(k+1)} is the updated value of the i^{th} variable in the (k+1)^{th} iteration, x_{j}^{(k+1)} is the j^{th} variable in the (k+1)^{th} iteration, x_{j}^{(k)} is the j^{th} variable in the k^{th} iteration, a_{ij} is the element in the i^{th} row and j^{th} column of the matrix A, and a_{ii} is the diagonal element in the i^{th} row and i^{th} column of the matrix A.

The Gauss-Seidel method continues to update each variable in this way until a certain convergence criterion is met. One common convergence criterion is to check the difference between the previous and current values of x. The method terminates when the difference is below a specified tolerance level.

One advantage of the Gauss-Seidel method over the Jacobi method is that it often converges faster for certain classes of matrices, such as diagonally dominant or symmetric positive definite matrices. However, it may converge slowly, or not at all, for other matrices.
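In code, the only change from a Jacobi sweep is that each component update reads the components already overwritten in the current sweep. A minimal sketch assuming NumPy, with a made-up strictly diagonally dominant (and symmetric positive definite) test system:

```python
import numpy as np

def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=500):
    """Gauss-Seidel iteration: each component update immediately
    uses the components already updated in this sweep."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s1 = A[i, :i] @ x[:i]           # already-updated values x_j^(k+1)
            s2 = A[i, i+1:] @ x_old[i+1:]   # previous values x_j^(k)
            x[i] = (b[i] - s1 - s2) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

# Hypothetical symmetric, strictly diagonally dominant system
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(gauss_seidel(A, b))   # converges to approximately [1, 2, 3]
```

For matrices like this one, Gauss-Seidel typically reaches a given tolerance in fewer sweeps than Jacobi.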

**Apply Gauss-seidel Method for solving the Linear Equations**

Let’s apply the Gauss-Seidel method to solve the following system of linear equations:

3x + 2y – z = 1

2x – 2y + 4z = -2

-x + 1/2y – z = 0

We can rewrite this system in the form Ax = b, where

A = [[3, 2, -1], [2, -2, 4], [-1, 1/2, -1]]

x = [x, y, z]

b = [1, -2, 0]

To apply the Gauss-Seidel method, we start with an initial guess for x, y, and z. Let’s use x^{0} = 0, y^{0} = 0, and z^{0 }= 0. We then update each variable using the most recent values of the other variables, according to the formula:

x_{i}^{(k+1)} = (b_{i} – sum_{j=1}^{i-1} a_{ij} x_{j}^{(k+1)} – sum_{j=i+1}^{n} a_{ij} x_{j}^{(k)}) / a_{ii}

where i is the variable being updated, k is the iteration number, and x_{j}^{(k+1)} is the updated value of the j^{th} variable in the (k+1)^{th} iteration.

For the first iteration, we have:

x^{1} = (1 – 2(0) + 0) / 3 = 1/3

y^{1} = (-2 – 2(1/3) – 4(0)) / -2 = 4/3

z^{1} = (0 + 1/3 – (1/2)(4/3)) / -1 = 1/3

Note that y^{1} already uses the updated value x^{1} = 1/3, and z^{1} uses both x^{1} and y^{1}. Let’s perform one more iteration:

x^{2} = (1 – 2(4/3) + 1/3) / 3 = -4/9

y^{2} = (-2 – 2(-4/9) – 4(1/3)) / -2 = 11/9

z^{2} = (0 + (-4/9) – (1/2)(11/9)) / -1 = 19/18

We can continue this process until the difference between the previous and current values of x, y, and z is below a specified tolerance level. As with the Jacobi method, however, this coefficient matrix is not diagonally dominant, and the Gauss-Seidel iterates for this system do not converge. The exact solution, obtainable by a direct method such as Gaussian elimination, is x = 1, y = -2, z = -2.
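As a check on the hand computation, the sweeps can be scripted directly (a sketch assuming NumPy; each component update uses the freshest values available, which is what distinguishes Gauss-Seidel from Jacobi):

```python
import numpy as np

# The example system from this section (not diagonally dominant)
A = np.array([[3.0, 2.0, -1.0], [2.0, -2.0, 4.0], [-1.0, 0.5, -1.0]])
b = np.array([1.0, -2.0, 0.0])

x = np.zeros(3)
for k in range(1, 5):
    for i in range(3):
        # freshest available values of the other components
        s = A[i, :i] @ x[:i] + A[i, i+1:] @ x[i+1:]
        x[i] = (b[i] - s) / A[i, i]
    print(k, x)
# sweep 1 gives [1/3, 4/3, 1/3]; sweep 2 gives [-4/9, 11/9, 19/18]
```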

**Recall Eigenvalues and corresponding Eigenvectors**

Eigenvalues and eigenvectors are concepts from linear algebra that are used to understand certain properties of matrices.

An eigenvector of a matrix A is a non-zero vector v that satisfies the equation:

A * v = λ * v

where λ is a scalar known as the eigenvalue corresponding to the eigenvector v. In other words, multiplying the matrix A by the eigenvector v results in a new vector that is a scaled version of the original vector, where the scale factor is λ.

Eigenvalues and eigenvectors are important because they provide information about the behavior of the matrix A when it is used to transform vectors. An eigenvector stays on its own line when transformed by A (its direction is preserved, or reversed if λ is negative), while its length is scaled by |λ|. The eigenvalue λ thus tells how much a vector in that direction is stretched or compressed when transformed by A.

Matrices can have multiple eigenvectors and corresponding eigenvalues. An n x n matrix A has n eigenvalues, counted with multiplicity. If A is diagonalizable, it also has n linearly independent eigenvectors; a defective matrix has fewer. A singular matrix has zero among its eigenvalues.

Eigenvalues and eigenvectors have many applications in various fields such as physics, engineering, and computer science, and are used in various techniques such as principal component analysis, image compression, and data analysis.
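The defining relation A v = λ v is easy to verify numerically. A small sketch assuming NumPy, with a made-up symmetric 2×2 matrix whose eigenpairs are easy to check by hand:

```python
import numpy as np

# Symmetric 2x2 example: eigenvalues 3 and 1, eigenvectors
# proportional to [1, 1] and [1, -1]
A = np.array([[2.0, 1.0], [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)          # 3 and 1 (NumPy's ordering may vary)

# Each COLUMN of `eigenvectors` satisfies A v = lambda v
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)
```

Note that `np.linalg.eig` returns eigenvectors as columns, normalized to unit length, so any scalar multiple of a returned column is an equally valid eigenvector.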

**Describe the Rayleigh Power Method to evaluate Largest Eigenvalues and corresponding Eigenvectors**

The Rayleigh power method (power iteration with a Rayleigh-quotient eigenvalue estimate) is a numerical algorithm used to find the eigenvalue of largest magnitude, and the corresponding eigenvector, of a given square matrix. Starting from almost any nonzero vector, the iterates converge toward the dominant eigenvector; the smallest-magnitude eigenvalue can be found by applying the same iteration to A^{-1} instead (the inverse power method).

Here are the steps of the Rayleigh power method:

- Start with an initial nonzero vector x^{(0)}.
- Multiply the matrix A by the current vector to get a new vector: y^{(k)} = Ax^{(k)}.
- Compute the Rayleigh quotient, which is defined as:

r(A, x^{(k)}) = (x^{(k)})^{T} A x^{(k)} / (x^{(k)})^{T} x^{(k)}

where (x^{(k)})^{T} denotes the transpose of x^{(k)}.

- Normalize the vector y^{(k)} by dividing it by its magnitude: x^{(k+1)} = y^{(k)} / ||y^{(k)}||.
- Repeat the multiplication, Rayleigh-quotient, and normalization steps until convergence is reached.
- The final vector x^{(k)} approximates the eigenvector corresponding to the largest-magnitude eigenvalue of A, and the Rayleigh quotient r(A, x^{(k)}) approximates that eigenvalue.

The Rayleigh quotient is used to estimate the eigenvalue, and the normalized vector x^{(k)} is used to approximate the corresponding eigenvector. The method is iterative, and the convergence rate depends on the ratio of the second-largest to the largest eigenvalue magnitude of the matrix A: the smaller this ratio, the faster the convergence.

The smallest-magnitude eigenvalue and its eigenvector can be obtained by applying the same iteration to A^{-1} (in practice, by solving a linear system with A at each step instead of forming the inverse), a variant known as the inverse power method.
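The steps above can be sketched as follows (assuming NumPy; the test matrix, starting vector, and tolerance are illustrative choices):

```python
import numpy as np

def power_method(A, x0, tol=1e-10, max_iter=500):
    """Power iteration with a Rayleigh-quotient eigenvalue estimate."""
    x = x0 / np.linalg.norm(x0)            # normalized starting vector
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x                          # multiply by A
        lam_new = x @ A @ x / (x @ x)      # Rayleigh quotient
        x = y / np.linalg.norm(y)          # normalize for the next step
        if abs(lam_new - lam) < tol:       # eigenvalue estimate has settled
            break
        lam = lam_new
    return lam_new, x

# Symmetric test matrix with dominant eigenvalue 3, eigenvector ~ [1, 1]
A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, v = power_method(A, np.array([1.0, 0.0]))
print(lam, v)   # lam ≈ 3, v ≈ [0.707, 0.707]
```

The ratio of the two eigenvalue magnitudes here is 1/3, so the iteration converges quickly; a ratio close to 1 would make convergence slow, as noted above.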