Numerical Technique-IV

Contents

**Describe Taylor Series method to solve the Differential Equations**

**Apply Taylor Series method to solve the Differential Equations**

**Describe Picard’s Iteration method to solve the given Initial-value problem**

**Apply Picard’s Iteration method to solve the given Initial-value problem**

**Apply Euler’s method to solve the Initial-value problem**

**Apply Euler’s Modified method to solve the Initial-value problem**

**Describe the Euler’s Modified method to solve the Initial-value problem**

**Apply the Euler’s Modified method to solve the Initial-value problem**

**Describe Runge-Kutta method to solve the Initial-value problem for First order, Second order etc.**

**Describe Milne’s Predictor and Corrector method to solve an initial value problem**

**Apply Milne’s Predictor and Corrector method to solve an Initial-value problem**

**Describe Adams-Bashforth Predictor and Corrector method to solve an initial value problem**

**Apply Adams-Bashforth Predictor and Corrector method to solve an Initial-value problem**

**Describe Taylor Series method to solve the Differential Equations**

The Taylor series method is a numerical method used to solve differential equations. The basic idea behind this method is to represent the solution of a differential equation as a power series, which can be written in terms of the derivatives of the solution at a specific point.

The general form of a Taylor series is:

f(x) = f(a) + f'(a)(x-a) + (1/2!)f''(a)(x-a)^2 + (1/3!)f'''(a)(x-a)^3 + …

where f(a) represents the value of the function at some initial point a, and f'(a), f''(a), f'''(a), etc. represent the first, second, third, etc. derivatives of the function evaluated at the same point.

To use the Taylor series method to solve a differential equation, we start by writing the differential equation as a series of derivatives of the unknown function y(x) evaluated at some initial point x0:

y(x) = y(x0) + y'(x0)(x-x0) + (1/2!)y''(x0)(x-x0)^2 + (1/3!)y'''(x0)(x-x0)^3 + …

Next, we substitute this series expansion into the differential equation and equate coefficients of each power of (x-x0). This gives us a system of algebraic equations for the coefficients of the series. Solving these equations yields the coefficients, and hence the series expansion of the solution.

Finally, we can use the series expansion to approximate the value of y at any other point x within the interval of convergence of the series. The accuracy of the approximation depends on the order of the series and the size of the interval of convergence.

In practice, the Taylor series method can be computationally expensive, especially for higher order differential equations. Therefore, it is often used in conjunction with other numerical methods, such as Euler’s method or Runge-Kutta methods, to improve the efficiency and accuracy of the solution.

**Apply Taylor Series method to solve the Differential Equations**

To apply the Taylor series method to solve a differential equation, we start by writing the differential equation as a series expansion of the unknown function y(x) evaluated at some initial point x0. We then substitute this series expansion into the differential equation and equate coefficients of each power of (x-x0). This gives us a system of algebraic equations for the coefficients of the series. Solving these equations yields the coefficients, and hence the series expansion of the solution.

Let’s consider the following differential equation:

y''(x) + y(x) = 0

with initial conditions y(0) = 1 and y'(0) = 0.

To apply the Taylor series method to this differential equation, we start by writing the series expansion of y(x) around x0 = 0:

y(x) = y(0) + y'(0)x + (1/2!)y''(0)x^2 + (1/3!)y'''(0)x^3 + …

Differentiating the series expansion term by term, we get:

y'(x) = y'(0) + y''(0)x + (1/2!)y'''(0)x^2 + …

y''(x) = y''(0) + y'''(0)x + (1/2!)y''''(0)x^2 + …

Substituting these series expansions into the differential equation, we get:

(y''(0) + y'''(0)x + (1/2!)y''''(0)x^2 + …) + (y(0) + y'(0)x + (1/2!)y''(0)x^2 + …) = 0

Equating coefficients of each power of x, we get:

- y''(0) + y(0) = 0 (coefficient of x^0)
- y'''(0) + y'(0) = 0 (coefficient of x^1)
- y''''(0) + y''(0) = 0 (coefficient of x^2, after cancelling the common factor 1/2!)

Using the initial conditions y(0) = 1 and y'(0) = 0, these equations give:

y''(0) = -1, y'''(0) = 0, y''''(0) = 1, …

so every odd derivative vanishes and the even derivatives alternate in sign. Substituting these values back into the series expansion, we get:

y(x) = 1 - (1/2!)x^2 + (1/4!)x^4 - (1/6!)x^6 + …

This is the series expansion of the solution to the differential equation; it is in fact the Taylor series of cos(x), which is the exact solution. We can use this series to approximate the value of y(x) at any other point x within the interval of convergence of the series. The accuracy of the approximation depends on how many terms of the series are retained.
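The derivation above is easy to check numerically. The sketch below (an illustrative addition; the function name `taylor_solution` is ours) uses the recurrence y^(k+2)(0) = -y^(k)(0) implied by the ODE to generate the Taylor coefficients, then sums the truncated series:

```python
from math import cos, factorial

def taylor_solution(x, n_terms=10):
    """Sum the truncated Taylor series for y'' + y = 0, y(0) = 1, y'(0) = 0.

    The ODE gives the recurrence y^(k+2)(0) = -y^(k)(0), so starting from
    y(0) = 1 and y'(0) = 0 every derivative at x = 0 is known.
    """
    derivs = [1.0, 0.0]                    # y(0) and y'(0)
    for _ in range(2 * n_terms):
        derivs.append(-derivs[-2])         # y^(k+2)(0) = -y^(k)(0)
    return sum(d * x**k / factorial(k) for k, d in enumerate(derivs))

print(taylor_solution(1.0))   # close to cos(1.0) ≈ 0.54030
```

With `n_terms = 10` the truncated series reproduces cos(x) essentially to machine precision for |x| ≤ 1; for larger x more terms are needed.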

**Describe Picard’s Iteration method to solve the given Initial-value problem**

Picard’s iteration method is a recursive numerical method used to solve initial-value problems for first-order ordinary differential equations. The method involves approximating the solution to the differential equation by iteratively improving upon an initial guess.

Let’s consider the following initial-value problem:

y’ = f(x,y), y(x0) = y0

where f(x,y) is a known function of x and y, and (x0, y0) is the initial point.

Picard’s iteration method involves the following steps:

- Start with an initial guess for the solution, say y0(x) = y0.
- For each iteration k, compute the next approximation yk(x) as:

y_k(x) = y_0 + ∫ f(s, y_{k-1}(s)) ds from x0 to x

where the integral is taken over the interval [x0, x] and y_{k-1}(s) is the previous approximation.

- Repeat step 2 until the desired level of accuracy is achieved.

In other words, at each iteration we substitute the previous approximation y_{k-1}(s) into the right-hand side f and integrate from x0 to x, to obtain the new approximation y_k(x).

Picard’s iteration method guarantees convergence of the sequence {y_k(x)} to the true solution of the initial-value problem under certain conditions on the function f(x,y): continuity of f in a neighbourhood of (x0, y0), and a Lipschitz condition on f with respect to y (which holds, for example, when the partial derivative of f with respect to y is continuous and bounded there).

In practice, Picard’s iteration method may be computationally expensive, especially when the integrals cannot be evaluated in closed form or when the interval of interest is large. It is therefore mainly of theoretical importance, and numerical methods such as Euler’s method or Runge-Kutta methods are usually preferred for efficiency and accuracy.

**Apply Picard’s Iteration method to solve the given Initial-value problem**

Let’s consider the initial-value problem:

y’ = x + y, y(0) = 1

To apply Picard’s iteration method to solve this problem, we first write the initial guess as y0(x) = 1. Then, we compute the first approximation y1(x) as:

y1(x) = y0 + ∫(s + y0) ds from 0 to x

Integrating the right-hand side, we get:

y1(x) = 1 + ∫(s + 1) ds from 0 to x

y1(x) = 1 + x + (1/2)x^2

Now, we can use y1(x) as the initial condition to compute the second approximation y2(x):

y2(x) = y0 + ∫(s + y1(s)) ds from 0 to x

Substituting y1(s) = 1 + s + (1/2)s^2 and integrating the right-hand side, we get:

y2(x) = 1 + ∫(1 + 2s + (1/2)s^2) ds from 0 to x

y2(x) = 1 + x + x^2 + (1/6)x^3

Continuing this process, we can obtain further approximations y3(x), y4(x), and so on. However, for simplicity, let’s stop at y2(x) and compare it with the true solution.

The true solution of the initial-value problem is given by:

y(x) = 2e^x - x - 1

Comparing y2(x) with the Taylor expansion of the true solution, 1 + x + x^2 + (1/3)x^3 + …, we see that the two agree up to the x^2 term. Each further iteration matches the true solution to one higher power of x, so the approximations become increasingly accurate.

In practice, we may need to perform many iterations to obtain a sufficiently accurate solution. Therefore, Picard’s iteration method may not be the most efficient numerical method for solving initial-value problems. Other numerical methods, such as Euler’s method or Runge-Kutta methods, may be more suitable for larger problems or when a high level of accuracy is required.
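Because each iterate in this example is a polynomial, the integration can be carried out exactly in code. The sketch below (an illustrative addition; `picard_step` is our name for the helper) represents each approximation by its list of coefficients and performs two Picard iterations for y' = x + y, y(0) = 1:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard iteration for y' = x + y, y(0) = 1.

    coeffs[i] is the coefficient of x**i in the current approximation.
    The next approximation is 1 + the integral from 0 to x of (s + y_k(s)) ds.
    """
    g = list(coeffs) + [Fraction(0)]   # room for the extra x term
    g[1] += 1                          # integrand g(s) = s + y_k(s)
    # integrate term by term, then add the initial value y(0) = 1
    new = [Fraction(1)] + [c / (i + 1) for i, c in enumerate(g)]
    while len(new) > 1 and new[-1] == 0:
        new.pop()                      # drop trailing zero coefficients
    return new

y = [Fraction(1)]          # initial guess y0(x) = 1
for _ in range(2):
    y = picard_step(y)

print([str(c) for c in y])   # ['1', '1', '1', '1/6'], i.e. y2 = 1 + x + x^2 + (1/6)x^3
```

Using exact rational arithmetic (`Fraction`) avoids any floating-point error in the coefficients.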

**Apply Euler’s method to solve the Initial-value problem**

Euler’s method is a simple numerical method used to solve initial-value problems for first-order ordinary differential equations. The method involves approximating the solution to the differential equation by using a first-order Taylor series approximation.

Let’s consider the following initial-value problem:

y’ = f(x,y), y(x0) = y0

where f(x,y) is a known function of x and y, and (x0, y0) is the initial point.

Euler’s method involves the following steps:

- Start with an initial point (x0, y0).
- Choose a step size h.
- For each iteration k, compute the next approximation yk as:

yk+1 = yk + h * f(xk, yk)

where xk+1 = xk + h.

In other words, at each iteration, we use the current point (xk, yk) to estimate the slope of the solution at that point, and then use this estimate to update the solution to the next point (xk+1, yk+1).

To apply Euler’s method to solve the initial-value problem y’ = x + y, y(0) = 1, we can choose a step size h and compute the approximations as follows:

Step size h = 0.1

x0 = 0, y0 = 1

Iteration 1:

x1 = 0 + 0.1 = 0.1

y1 = 1 + 0.1 * (0 + 1) = 1.1

Iteration 2:

x2 = 0.1 + 0.1 = 0.2

y2 = 1.1 + 0.1 * (0.1 + 1.1) = 1.22

Continuing this process, we can obtain further approximations y3, y4, and so on. The accuracy of the method depends on the step size h, with smaller step sizes resulting in more accurate approximations.

Comparing the approximations obtained by Euler’s method with the true solution, y(x) = 2e^x - x - 1, we can see that the approximations are not exact, but they approach the true solution as the step size is decreased.
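The stepping loop described above is only a few lines of code. A minimal sketch (the function name `euler` is ours):

```python
def euler(f, x0, y0, h, n):
    """Advance y' = f(x, y) from (x0, y0) by n Euler steps of size h."""
    x, y = x0, y0
    for _ in range(n):
        y += h * f(x, y)   # slope taken at the current point
        x += h
    return y

# y' = x + y, y(0) = 1, step size h = 0.1
y2 = euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 2)
print(y2)   # ≈ 1.22 after two steps
```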

**Apply Euler’s Modified method to solve the Initial-value problem**

Euler’s modified method is a modification of Euler’s method that improves its accuracy. In the form described here (sometimes called the midpoint method; the closely related Heun’s method is known as the improved Euler method), it uses a midpoint approximation to estimate the slope of the solution at each iteration.

Let’s consider the following initial-value problem:

y’ = f(x,y), y(x0) = y0

where f(x,y) is a known function of x and y, and (x0, y0) is the initial point.

The method involves the following steps:

- Start with an initial point (x0, y0).
- Choose a step size h.
- For each iteration k, compute the midpoint approximation y^k as:

y^k = yk + h/2 * f(xk, yk)

- Use the midpoint approximation to estimate the slope of the solution at (xk + h/2, y^k):

f(xk + h/2, y^k)

- Compute the next approximation yk+1 as:

yk+1 = yk + h * f(xk + h/2, y^k)

where xk+1 = xk + h.

In other words, at each iteration, we use the current point (xk, yk) to estimate the slope of the solution at that point, and then use this estimate to update the solution to the midpoint approximation y^k. We then use the midpoint approximation to estimate the slope at the midpoint (xk + h/2, y^k) and use this estimate to update the solution to the next point (xk+1, yk+1).

To apply Euler’s modified method to solve the initial-value problem y’ = x + y, y(0) = 1, we can choose a step size h and compute the approximations as follows:

Step size h = 0.1

x0 = 0, y0 = 1

Iteration 1:

x1 = 0 + 0.1 = 0.1

y^1 = 1 + 0.1/2 * (0 + 1) = 1.05

y1 = 1 + 0.1 * f(0.05, 1.05) = 1 + 0.1 * (0.05 + 1.05) = 1.11

Iteration 2:

x2 = 0.1 + 0.1 = 0.2

y^2 = 1.11 + 0.1/2 * (0.1 + 1.11) = 1.1705

y2 = 1.11 + 0.1 * f(0.15, 1.1705) = 1.11 + 0.1 * (0.15 + 1.1705) = 1.24205

Continuing this process, we can obtain further approximations y3, y4, and so on. The method is more accurate than Euler’s method, with smaller step sizes resulting in even more accurate approximations.

Comparing the approximations obtained by Euler’s modified method with the true solution, y(x) = 2e^x - x - 1, we can see that they are much closer to the true solution than those obtained by Euler’s method. For example, at x = 0.2 the exact value is y(0.2) = 2e^0.2 - 1.2 ≈ 1.2428, compared with the Euler value 1.22 and the modified Euler value 1.24205.
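The two-stage update above translates directly into code. A minimal sketch of the midpoint form (the function name `modified_euler` is ours):

```python
def modified_euler(f, x0, y0, h, n):
    """Midpoint form of Euler's modified method for y' = f(x, y)."""
    x, y = x0, y0
    for _ in range(n):
        y_mid = y + (h / 2) * f(x, y)      # midpoint approximation y^k
        y += h * f(x + h / 2, y_mid)       # slope evaluated at the midpoint
        x += h
    return y

# y' = x + y, y(0) = 1, step size h = 0.1
y2 = modified_euler(lambda x, y: x + y, 0.0, 1.0, 0.1, 2)
print(y2)   # ≈ 1.24205
```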

**Describe the Euler’s Modified method to solve the Initial-value problem**

Euler’s Modified method is also known as Heun’s method or the improved Euler method. It is a numerical method used to solve first-order ordinary differential equations (ODEs) and is an extension of Euler’s method, designed to improve accuracy by using a midpoint approximation to estimate the slope of the solution at each iteration.

Consider the following initial-value problem:

y’ = f(x,y), y(x0) = y0

where f(x,y) is a known function of x and y, and (x0, y0) is the initial point.

The method involves the following steps:

- Start with an initial point (x0, y0).
- Choose a step size h.
- For each iteration k, compute the midpoint approximation y^k as:

y^k = yk + h/2 * f(xk, yk)

- Use the midpoint approximation to estimate the slope of the solution at (xk + h/2, y^k):

f(xk + h/2, y^k)

- Compute the next approximation yk+1 as:

yk+1 = yk + h * f(xk + h/2, y^k)

where xk+1 = xk + h.

In other words, at each iteration, we use the current point (xk, yk) to estimate the slope of the solution at that point, and then use this estimate to update the solution to the midpoint approximation y^k. We then use the midpoint approximation to estimate the slope at the midpoint (xk + h/2, y^k) and use this estimate to update the solution to the next point (xk+1, yk+1).

Euler’s Modified method has a higher order of accuracy than Euler’s method: it is a second-order method with a local truncation error of O(h^3), compared with O(h^2) for Euler’s method.

However, the method can still suffer from numerical instability and may produce inaccurate results for some problems; for example, it may produce oscillations or instability when applied to stiff equations.

Overall, Euler’s Modified method is a useful and simple numerical method for solving first-order ODEs, especially for non-stiff problems.

**Apply the Euler’s Modified method to solve the Initial-value problem**

Consider the initial value problem:

y’ = x + y, y(0) = 1

We will use Euler’s Modified method (in Heun’s predictor-corrector form) with a step size of h = 0.1 to approximate the solution of the initial-value problem at x = 0.5.

Using this method, we have the following iterative formula:

y(k+1) = y(k) + h/2 * [f(x(k), y(k)) + f(x(k)+h, y(k)+hf(x(k), y(k)))]

where f(x, y) = x + y.

Using the initial condition y(0) = 1, we have:

y(1) = y(0) + h/2 * [f(0, 1) + f(0.1, 1 + 0.1*f(0, 1))]

= 1 + 0.1/2 * [(0 + 1) + (0.1 + 1.1)]

= 1.11

y(2) = y(1) + h/2 * [f(0.1, 1.11) + f(0.2, 1.11 + 0.1*f(0.1, 1.11))]

= 1.11 + 0.1/2 * [(0.1 + 1.11) + (0.2 + 1.231)]

= 1.24205

y(3) = y(2) + h/2 * [f(0.2, 1.24205) + f(0.3, 1.24205 + 0.1*f(0.2, 1.24205))]

= 1.24205 + 0.1/2 * [(0.2 + 1.24205) + (0.3 + 1.386255)]

≈ 1.39847

Continuing in this way, y(4) ≈ 1.58180 and y(5) ≈ 1.79489.

Therefore, using Euler’s Modified method with a step size of h = 0.1, we obtain the approximation y(0.5) ≈ 1.7949, which compares well with the exact value y(0.5) = 2e^0.5 - 1.5 ≈ 1.7974.
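The same computation can be scripted. A minimal sketch of Heun’s form of the method (the function name `heun` is ours):

```python
def heun(f, x0, y0, h, n):
    """Heun's (improved Euler) method: average the slopes at both ends."""
    x, y = x0, y0
    for _ in range(n):
        slope0 = f(x, y)
        y_pred = y + h * slope0                      # Euler predictor
        y += (h / 2) * (slope0 + f(x + h, y_pred))   # trapezoidal corrector
        x += h
    return y

# y' = x + y, y(0) = 1, five steps of h = 0.1 reach x = 0.5
y5 = heun(lambda x, y: x + y, 0.0, 1.0, 0.1, 5)
print(y5)   # ≈ 1.79489; exact y(0.5) = 2*e**0.5 - 1.5 ≈ 1.79744
```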

**Describe Runge-Kutta method to solve the Initial-value problem for First order, Second order etc.**

The Runge-Kutta method is a numerical algorithm for solving ordinary differential equations (ODEs) of the form:

y'(t) = f(t, y(t))

where y(t) is the unknown function of time and f(t, y) is a known function. The initial value problem for this ODE is:

y(t0) = y0

where t0 is the initial time and y0 is the initial value of y at that time.

The Runge-Kutta method approximates the solution of this ODE by computing values of y at discrete points in time. The method starts with the initial condition and then iteratively computes new values of y at each time step. The general form of the Runge-Kutta method can be expressed as follows:

k1 = f(tn, yn)

k2 = f(tn + h/2, yn + h/2 * k1)

k3 = f(tn + h/2, yn + h/2 * k2)

k4 = f(tn + h, yn + h * k3)

yn+1 = yn + h/6 * (k1 + 2k2 + 2k3 + k4)

where tn is the current time, yn is the current value of y, h is the time step size, k1, k2, k3, and k4 are intermediate values computed at different points in the time step, and yn+1 is the new value of y at the next time step.

For a first-order ODE, the Runge-Kutta method with the above coefficients is known as the classical fourth-order Runge-Kutta method. For a second-order ODE, the method can be extended by introducing a new variable z(t) = y'(t), which reduces the second-order ODE to a system of two first-order ODEs. The method can then be applied to this system to obtain approximate values of y and z at each time step.

Overall, the Runge-Kutta method is a powerful and widely used numerical technique for solving initial-value problems for ODEs of various orders. It provides a good balance between accuracy and computational efficiency and is especially useful for problems where an analytic solution is not available.
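The four-stage update above translates directly into code. A minimal sketch of one classical RK4 step (the function name `rk4_step` is ours), tested against a problem whose exact solution is known:

```python
from math import exp

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# y' = t + y, y(0) = 1 has the exact solution y(t) = 2*exp(t) - t - 1
y1 = rk4_step(lambda t, y: t + y, 0.0, 1.0, 0.1)
exact = 2 * exp(0.1) - 0.1 - 1
print(y1, exact)   # the two agree to roughly 1e-7 after a single step
```

A single RK4 step is already far more accurate than several Euler steps of the same total width, illustrating the fourth-order convergence.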

**Describe Milne’s Predictor and Corrector method to solve an initial value problem**

Milne’s Predictor and Corrector method is a numerical algorithm for solving initial value problems for first-order ordinary differential equations. It is a fourth-order multistep method: the predictor is based on an open Newton-Cotes integration formula, and the corrector is based on Simpson’s rule, which uses the estimated derivative at the next time step.

The method involves two steps: a predictor step and a corrector step.

- Predictor Step:

In the predictor step, the method uses Milne’s fourth-order predictor formula to predict the value of y at the next time step:

y_n+1_pred = y_n-3 + (4h/3) * (2f_n-2 - f_n-1 + 2f_n)

where f_k = f(t_k, y_k) and h is the step size. The predictor step uses the values of f at the current and two previous time steps, together with the value of y three steps back, to estimate the value of y at the next time step.

- Corrector Step:

In the corrector step, the method uses Milne’s corrector formula (Simpson’s rule) to correct the predicted value of y:

y_n+1 = y_n-1 + (h/3) * (f_n-1 + 4f_n + f_n+1_pred)

where f_n+1_pred = f(t_n+1, y_n+1_pred) is the estimated value of f at the next time step, computed from the predicted value. The corrector step uses f_n+1_pred to improve the estimate of y_n+1.

The method then repeats these two steps, advancing one step at a time with the corrected values, until the desired time is reached.

Overall, Milne’s Predictor and Corrector method is an efficient and accurate numerical technique for solving initial value problems for first-order ordinary differential equations. It provides a good balance between accuracy and computational efficiency, and is especially useful for problems where an analytic solution is not available.

**Apply Milne’s Predictor and Corrector method to solve an Initial-value problem**

Let’s solve the following initial value problem using Milne’s Predictor and Corrector method:

y'(t) = -2ty(t), y(0) = 1, 0 <= t <= 1

- Predictor Step:

For the first predictor step, we use Milne’s predictor formula to estimate y at t = 0.4. Since the formula requires four starting values, we use the initial condition together with Euler’s method for the first three time steps (h = 0.1):

- t0 = 0, y0 = 1, f0 = -2*t0*y0 = 0
- t1 = 0.1, y1 = y0 + h*f0 = 1, f1 = -2*t1*y1 = -0.2
- t2 = 0.2, y2 = y1 + h*f1 = 1 - 0.02 = 0.98, f2 = -2*t2*y2 = -0.392
- t3 = 0.3, y3 = y2 + h*f2 = 0.98 - 0.0392 = 0.9408, f3 = -2*t3*y3 = -0.56448

Now we can use these values to compute the predicted value of y at t = 0.4:

y4_pred = y0 + (4h/3)*(2f1 - f2 + 2f3) = 1 + (0.4/3)*(2*(-0.2) - (-0.392) + 2*(-0.56448)) ≈ 0.84841

- Corrector Step:

For the corrector step, we use Milne’s corrector formula (Simpson’s rule), with f evaluated at the predicted value, f4_pred = -2*(0.4)*(0.84841) ≈ -0.67873:

y4 = y2 + (h/3)*(f2 + 4f3 + f4_pred) = 0.98 + (0.1/3)*(-0.392 + 4*(-0.56448) - 0.67873) ≈ 0.86905

For comparison, the exact solution y(t) = e^(-t^2) gives y(0.4) ≈ 0.85214; most of the remaining error comes from the low-accuracy Euler start-up values.

- Predictor and Corrector Steps:

We can now repeat the predictor and corrector steps, advancing one step at a time with the corrected values, to compute y at the remaining time steps up to t = 1.
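The start-up and predictor-corrector computation above can be sketched in code (an illustrative addition; `milne_step` is our name for the helper):

```python
def milne_step(f, t, ys, fs, h):
    """One Milne predictor-corrector step.

    ys and fs hold the last four values y_{n-3}..y_n and f_{n-3}..f_n.
    """
    # predictor: y_{n+1} = y_{n-3} + (4h/3)(2f_{n-2} - f_{n-1} + 2f_n)
    y_pred = ys[-4] + (4 * h / 3) * (2 * fs[-3] - fs[-2] + 2 * fs[-1])
    f_pred = f(t + h, y_pred)
    # corrector (Simpson): y_{n+1} = y_{n-1} + (h/3)(f_{n-1} + 4f_n + f_{n+1})
    return ys[-2] + (h / 3) * (fs[-2] + 4 * fs[-1] + f_pred)

f = lambda t, y: -2 * t * y
h = 0.1
ts, ys, fs = [0.0], [1.0], [f(0.0, 1.0)]
for _ in range(3):                      # Euler start-up for y1, y2, y3
    ts.append(ts[-1] + h)
    ys.append(ys[-1] + h * fs[-1])
    fs.append(f(ts[-1], ys[-1]))

y4 = milne_step(f, ts[-1], ys, fs, h)
print(y4)   # ≈ 0.86905; exact y(0.4) = exp(-0.16) ≈ 0.85214
```

Replacing the Euler start-up with RK4 start-up values would remove most of the error visible here.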

**Describe Adams-Bashforth Predictor and Corrector method to solve an initial value problem**

The Adams-Bashforth Predictor and Corrector method is a numerical method used to approximate the solution to an initial value problem (IVP) in ordinary differential equations.

The method is an explicit multistep method that uses previous values of the solution and its derivatives to estimate future values. The method uses a predictor-corrector approach to improve the accuracy of the solution.

The predictor step uses the previous solution values and derivatives to estimate the value of the solution at the next time step. The predictor formula is derived from interpolating the previous values using polynomial interpolation. For example, the Adams-Bashforth 2-step method uses the following predictor formula:

y_n+1_pred = y_n + h * ((3/2)f_n - (1/2)f_n-1)

where y_n is the solution at the current time step, h is the step size, f_n is the derivative of the solution at the current time step, and f_n-1 is the derivative at the previous time step.

After the predictor step, a corrector step is used to refine the estimate of the solution at the next time step. The corrector formula comes from an implicit Adams-Moulton method, evaluated using the predicted value. For example, the 2-step Adams-Bashforth predictor is paired with the trapezoidal (1-step Adams-Moulton) corrector:

y_n+1 = y_n + h * ((1/2)f_n+1_pred + (1/2)f_n)

where f_n+1_pred is the derivative of the predicted solution at the next time step.

This process can be repeated to obtain approximations to the solution at subsequent time steps. The Adams-Bashforth method is useful when the derivatives of the solution can be efficiently computed and is typically more accurate than one-step methods like Euler’s method.

**Apply Adams-Bashforth Predictor and Corrector method to solve an Initial-value problem**

To apply the Adams-Bashforth Predictor and Corrector method to solve the initial value problem:

y’ = -2y + 4t

y(0) = 1

over the interval [0,1] using a step size of h=0.1, we need to follow these steps:

Step 1: Approximate the solution at t=0.1 using a one-step method such as Euler’s method.

y1 = y0 + h*f(t0, y0)

= 1 + 0.1*(-2*1 + 4*0)

= 0.8

Step 2: Use the Adams-Bashforth 2-step predictor to estimate y at each subsequent time step, where f(t, y) = -2y + 4t:

yi+1_pred = yi + h/2 * [3*f(ti, yi) - f(ti-1, yi-1)]

For example, with f(0, 1) = -2 and f(0.1, 0.8) = -1.2:

y2_pred = y1 + 0.1/2 * [3*(-1.2) - (-2)] = 0.8 - 0.08 = 0.72

Step 3: Use the trapezoidal (Adams-Moulton) corrector to refine each predicted value:

yi+1 = yi + h/2 * [f(ti+1, yi+1_pred) + f(ti, yi)]

For example, with f(0.2, 0.72) = -0.64:

y2 = y1 + 0.1/2 * [(-0.64) + (-1.2)] = 0.8 - 0.092 = 0.708

Repeating the predict-correct cycle at each step, always predicting from the two most recent corrected values, gives the approximate solution over [0, 1]:

y(0.1) = 0.8000 (Euler start)

y(0.2) = 0.7080

y(0.3) = 0.6696

y(0.4) = 0.6747

y(0.5) = 0.7153

y(0.6) = 0.7850

y(0.7) = 0.8784

y(0.8) = 0.9912

y(0.9) = 1.1199

y(1.0) = 1.2616

For comparison, the exact solution y(t) = 2e^(-2t) + 2t - 1 gives y(1.0) ≈ 1.2707; most of the remaining error is introduced by the first-order Euler start-up step.
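The full predictor-corrector sweep can be scripted. A minimal sketch (the function name `ab2_pece` is ours) that uses one Euler step for start-up, predicts with the 2-step Adams-Bashforth formula, and corrects with the trapezoidal rule:

```python
from math import exp

def ab2_pece(f, t0, y0, h, n):
    """2-step Adams-Bashforth predictor + trapezoidal corrector (PECE).

    The second starting value comes from a single Euler step. Returns the
    list of solution values y(t0), y(t0 + h), ..., y(t0 + n*h).
    """
    ts = [t0, t0 + h]
    ys = [y0, y0 + h * f(t0, y0)]                        # Euler start-up
    for i in range(1, n):
        fi = f(ts[i], ys[i])
        fim1 = f(ts[i - 1], ys[i - 1])
        y_pred = ys[i] + (h / 2) * (3 * fi - fim1)           # AB2 predictor
        t_next = ts[i] + h
        y_corr = ys[i] + (h / 2) * (f(t_next, y_pred) + fi)  # trapezoidal corrector
        ts.append(t_next)
        ys.append(y_corr)
    return ys

# y' = -2y + 4t, y(0) = 1; exact solution y(t) = 2*exp(-2t) + 2t - 1
ys = ab2_pece(lambda t, y: -2 * y + 4 * t, 0.0, 1.0, 0.1, 10)
print(round(ys[2], 4), round(ys[-1], 4))   # y(0.2) ≈ 0.708, y(1.0) ≈ 1.2616
print(round(2 * exp(-2) + 1, 4))           # exact y(1.0) ≈ 1.2707
```

Swapping the Euler start-up for a single RK4 step would bring the whole table noticeably closer to the exact solution.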