6.3 Multistep Methods

INTRODUCTION

Euler’s, the improved Euler’s, and the Runge–Kutta methods are examples of single-step or starting methods. In these methods each successive value y_{n+1} is computed based only on information about the immediately preceding value y_n. On the other hand, multistep or continuing methods use the values from several previously computed steps to obtain the value of y_{n+1}. There are a large number of multistep formulas for approximating solutions of DEs, but since it is not our intention to survey the vast field of numerical procedures, we will consider only one such method here.

Adams–Bashforth–Moulton Method

The multistep method discussed in this section is called the Adams–Bashforth–Moulton method after the English astronomer/mathematician John Couch Adams (1819–1892), who co-discovered the planet Neptune, the English applied mathematician Francis Bashforth (1819–1912), and the American astronomer Forest Ray Moulton (1872–1952). Like the improved Euler’s method it is a predictor–corrector method; that is, one formula is used to predict a value y*_{n+1}, which in turn is used to obtain a corrected value y_{n+1}. The predictor in this method is the Adams–Bashforth formula

y*_{n+1} = y_n + (h/24)(55y′_n − 59y′_{n−1} + 37y′_{n−2} − 9y′_{n−3}),     (1)

where y′_n = f(x_n, y_n), y′_{n−1} = f(x_{n−1}, y_{n−1}), y′_{n−2} = f(x_{n−2}, y_{n−2}), and y′_{n−3} = f(x_{n−3}, y_{n−3}), for n ≥ 3.

The value of y*_{n+1} is then substituted into the Adams–Moulton corrector

y_{n+1} = y_n + (h/24)(9y′_{n+1} + 19y′_n − 5y′_{n−1} + y′_{n−2}),     (2)

where y′_{n+1} = f(x_{n+1}, y*_{n+1}).

Notice that formula (1) requires that we know the values of y0, y1, y2, and y3 in order to obtain y4. The value of y0 is, of course, the given initial condition. Since the local truncation error of the Adams–Bashforth–Moulton method is O(h5), the values of y1, y2, and y3 are generally computed by a method with the same error property, such as the fourth-order Runge–Kutta formula.
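
As an illustration only, the following Python sketch shows one way formulas (1) and (2) can be combined with an RK4 starter. The names rk4_step and abm4 are ours, not notation from the text, and the code is a minimal outline rather than a polished implementation.

```python
# Minimal sketch (not from the text): Adams-Bashforth-Moulton with an RK4 starter.

def rk4_step(f, x, y, h):
    """Advance y' = f(x, y) one step of size h with the classical RK4 method."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h * k1 / 2)
    k3 = f(x + h / 2, y + h * k2 / 2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def abm4(f, x0, y0, h, n):
    """Approximate y' = f(x, y), y(x0) = y0 at the nodes x0, x0 + h, ..., x0 + n*h."""
    xs = [x0 + i * h for i in range(n + 1)]
    ys = [y0]
    # Starting values y1, y2, y3 from RK4, so the local error stays O(h^5).
    for i in range(min(3, n)):
        ys.append(rk4_step(f, xs[i], ys[i], h))
    slopes = [f(x, y) for x, y in zip(xs[:len(ys)], ys)]  # stored values y'_0, ..., y'_3
    for i in range(3, n):
        # Adams-Bashforth predictor, formula (1)
        y_pred = ys[i] + (h / 24) * (55 * slopes[i] - 59 * slopes[i - 1]
                                     + 37 * slopes[i - 2] - 9 * slopes[i - 3])
        # Adams-Moulton corrector, formula (2), applied once
        y_corr = ys[i] + (h / 24) * (9 * f(xs[i + 1], y_pred) + 19 * slopes[i]
                                     - 5 * slopes[i - 1] + slopes[i - 2])
        ys.append(y_corr)
        slopes.append(f(xs[i + 1], y_corr))  # stored for the next step's predictor
    return xs, ys
```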

EXAMPLE 1 Adams–Bashforth–Moulton Method

Use the Adams–Bashforth–Moulton method with h = 0.2 to obtain an approximation to y(0.8) for the solution of

y′ = x + y − 1,     y(0) = 1.

SOLUTION

With a step size of h = 0.2, y(0.8) will be approximated by y4. To get started we use the RK4 method with x0 = 0, y0 = 1, and h = 0.2 to obtain

y1 = 1.02140000,     y2 = 1.09181796,     y3 = 1.22210646.

Now with the identifications x0 = 0, x1 = 0.2, x2 = 0.4, x3 = 0.6, and f(x, y) = x + y − 1, we find

y′0 = f(x0, y0) = 0 + 1 − 1 = 0
y′1 = f(x1, y1) = 0.2 + 1.02140000 − 1 = 0.22140000
y′2 = f(x2, y2) = 0.4 + 1.09181796 − 1 = 0.49181796
y′3 = f(x3, y3) = 0.6 + 1.22210646 − 1 = 0.82210646.

With the foregoing values, the predictor (1) then gives

y*4 = y3 + (0.2/24)(55y′3 − 59y′2 + 37y′1 − 9y′0) = 1.42535975.

To use the corrector (2) we first need

y′4 = f(x4, y*4) = 0.8 + 1.42535975 − 1 = 1.22535975.

Finally, (2) yields

y4 = y3 + (0.2/24)(9y′4 + 19y′3 − 5y′2 + y′1) = 1.42552788.

You should verify that the exact value of y(0.8) in Example 1 is y(0.8) = 1.42554093.
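
The hand computation above can also be checked numerically. Assuming the abm4 sketch given after formulas (1) and (2), a run such as the following should reproduce y4 to the digits shown; the exact form of the call is, again, only illustrative.

```python
# Check of Example 1 with the illustrative abm4 sketch; h = 0.2, four steps.
f = lambda x, y: x + y - 1
xs, ys = abm4(f, x0=0.0, y0=1.0, h=0.2, n=4)
print(ys[4])   # approximately 1.42552788; compare with the exact y(0.8) = 1.42554093
```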

Stability of Numerical Methods

An important consideration in using numerical methods to approximate the solution of an initial-value problem is the stability of the method. Simply stated, a numerical method is stable if small changes in the initial condition result in only small changes in the computed solution. A numerical method is said to be unstable if it is not stable. The reason that stability considerations are important is that in each step after the first step of a numerical technique we are essentially starting over again with a new initial-value problem, where the initial condition is the approximate solution value computed in the preceding step. Because of the presence of round-off error, this value will almost certainly vary at least slightly from the true value of the solution. Besides round-off error, another common source of error occurs in the initial condition itself; in physical applications the data are often obtained by imprecise measurements.

One possible method for detecting instability in the numerical solution of a specific initial-value problem is to compare the approximate solutions obtained when decreasing step sizes are used. If the numerical method is unstable, the error may actually increase with smaller step sizes. Another way of checking stability is to observe what happens to solutions when the initial condition is slightly perturbed (for example, change y(0) = 1 to y(0) = 0.999).
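
The second check is easy to script. Assuming the abm4 sketch from earlier in the section, the following compares the run of Example 1 with one started from the perturbed condition y(0) = 0.999.

```python
# Perturb the initial condition of Example 1 and compare the two computed values.
f = lambda x, y: x + y - 1
_, ys = abm4(f, 0.0, 1.0, 0.2, 4)        # y(0) = 1
_, ys_p = abm4(f, 0.0, 0.999, 0.2, 4)    # y(0) = 0.999
print(abs(ys[4] - ys_p[4]))              # stays small, the same order as the 0.001 perturbation
```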

For a more detailed and precise discussion of stability, consult a numerical analysis text. In general, all of the methods we have discussed in this chapter have good stability characteristics.

Advantages/Disadvantages of Multistep Methods

Many considerations enter into the choice of a method to solve a differential equation numerically. Single-step methods, particularly the Runge–Kutta method, are often chosen because of their accuracy and the fact that they are easy to program. However, a major drawback is that the right-hand side of the differential equation must be evaluated many times at each step. For instance, the RK4 method requires four function evaluations for each step. On the other hand, if the function evaluations in the previous step have been calculated and stored, a multistep method requires only one new function evaluation for each step. This can lead to great savings in time and expense.

As an example, to solve y′ = f(x, y), y(x0) = y0 numerically using n steps by the RK4 method requires 4n function evaluations. The Adams–Bashforth multistep method requires 16 function evaluations for the Runge–Kutta fourth-order starter and n − 4 for the n Adams–Bashforth steps, giving a total of n + 12 function evaluations for this method. In general the Adams–Bashforth multistep method requires slightly more than a quarter of the number of function evaluations required for the RK4 method. If the evaluation of f(x, y) is complicated, the multistep method will be more efficient.
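For example, with n = 100 steps the RK4 method uses 4(100) = 400 function evaluations, while the Adams–Bashforth method uses 100 + 12 = 112, slightly more than a quarter of 400.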

Another issue involved with multistep methods is how many times the Adams–Moulton corrector formula should be repeated in each step. Each time the corrector is used, another function evaluation is done, and so the accuracy is increased at the expense of losing an advantage of the multistep method. In practice, the corrector is calculated once, and if the value of y_{n+1} is changed by a large amount, the entire problem is restarted using a smaller step size. This is often the basis of the variable step size methods, whose discussion is beyond the scope of this text.

6.3 Exercises

Answers to selected odd-numbered problems begin on page ANS-15.

  1. Find the analytic solution of the initial-value problem in Example 1. Compare the actual values of y(0.2), y(0.4), y(0.6), and y(0.8) with the approximations y1, y2, y3, and y4.
  2. Write a computer program to implement the Adams–Bashforth–Moulton method.

In Problems 3 and 4, use the Adams–Bashforth–Moulton method to approximate y(0.8), where y(x) is the solution of the given initial-value problem. Use h = 0.2 and the RK4 method to compute y1, y2, and y3.

  3. y′ = 2x − 3y + 1,     y(0) = 1
  4. y′ = 4x − 2y,     y(0) = 2

In Problems 5–8, use the Adams–Bashforth–Moulton method to approximate y(1.0), where y(x) is the solution of the given initial-value problem. First use h = 0.2 and then use h = 0.1. Use the RK4 method to compute y1, y2, and y3.

  5. y′ = 1 + y²,     y(0) = 0
  6. y′ = y + cos x,     y(0) = 1
  7. y′ = (x − y)²,     y(0) = 0
  8. y′ = xy + √y,     y(0) = 1