Numerical integration

Solving differential equations numerically is a very mature field, with many algorithms carefully explained in many textbooks. However, when it comes to actually solving them in practice, nonspecialists, myself included, will often resort to the Euler method. There are two main reasons for this. The first is that it is trivial to remember and code, and the second is that Moore’s law has increased the speed of computing enough that we can usually get away with it. However, I think it’s time to get out of the eighteenth century and at least move into the nineteenth.

Suppose you have a differential equation \dot{x}=f(x,t), where x can be a scalar or a vector in an arbitrary number of dimensions.  The Euler method simply discretizes the time derivative via

\frac{dx}{dt} \rightarrow \frac{x(t+h)-x(t)}{h}

so that x at time t+h is given by

x(t+h)=x(t)+h*f(x(t),t)
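In Julia, the whole scheme is one update inside a loop. Here is a minimal sketch (the function name and arguments are my own choices, nothing standardized), assuming a scalar right hand side f, initial condition x0 at time zero, step size h, and nsteps total steps:

function euler(f, x0, h, nsteps)
    xout = zeros(nsteps)
    xout[1] = x0
    for n = 1:nsteps-1
        # x(t+h) = x(t) + h*f(x(t), t), with step n sitting at time (n-1)*h
        xout[n+1] = xout[n] + h*f(xout[n], (n-1)*h)
    end
    return xout
end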

Now what could be simpler than that? There are two problems with this algorithm. The first is that it is only accurate to order h, so you need very small steps to simulate accurately. The second is that the Euler method is unstable if the step size is too large; on the test equation \dot{x}=-\lambda x with \lambda>0, for example, it diverges whenever h>2/\lambda. The only way around both limitations is to use very small time steps, which is computationally expensive.

The second most popular algorithm is probably (4th order) Runge-Kutta, which can generally use larger time steps for the same accuracy. However, a higher order algorithm is not necessarily faster, because what really slows down a numerical scheme is the number of function evaluations it needs to make, and 4th order Runge-Kutta makes four evaluations per step versus one for Euler. Runge-Kutta is also plagued by the same stability problems as Euler, which can likewise limit the maximum allowed step size.
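To see where those evaluations go, here is a single classical 4th order Runge-Kutta step in Julia, a sketch of the textbook scheme with my own naming; note the four calls to f:

function rk4_step(f, x, t, h)
    k1 = f(x, t)
    k2 = f(x + h/2*k1, t + h/2)
    k3 = f(x + h/2*k2, t + h/2)
    k4 = f(x + h*k3, t + h)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)
end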

Now, these problems have been known for decades if not centuries, and work-arounds do exist. In fact, the most sophisticated solvers, such as CVODE, tend to use adaptive multi-step methods like Adams-Bashforth (AB) or its implicit cousin Adams-Moulton, and this is what we should all be implementing in our own codes. Below I will give simple Julia code to show that a 2nd order Adams-Bashforth scheme is not much more complicated to code than Euler and uses the same number of function evaluations per step. In short, with not much effort, you can speed up your code immensely by simply switching to AB.

The cleverness of AB is that it uses previous steps to refine the estimate for the next step. In this way, you get a higher order scheme without having to re-evaluate the right-hand side of your ODE. You simply reuse your computations. It’s the ultimate green scheme. For the same ODE example above, 2nd order AB is simply

x(t+2h)=x(t+h)+h\left(\frac{3}{2}f(x(t+h),t+h)-\frac{1}{2}f(x(t),t)\right)

You only need to store the previous two function evaluations to compute the next step. To start the algorithm, you can use a single Euler step to get the second point.

Here is the 2nd order AB algorithm written in Julia for a one dimensional version of the ODE above, wrapped in a function so it can be run as is (taking the initial condition to sit at time t = 0).

function ab2(f, x0, h, Tfinal)
    # x holds just the two most recent time steps
    x = zeros(2)
    # fstore holds f evaluated at the two most recent steps
    fstore = zeros(2)
    # full trajectory for output
    xout = zeros(Tfinal)

    # set initial condition at time t = 0
    x[1] = x0

    # store evaluation of the function f(x,t) for later use
    fstore[1] = f(x[1], 0.0)

    # take one Euler step to get the scheme started
    x[2] = x[1] + h*fstore[1]

    # store function at time h
    fstore[2] = f(x[2], h)

    # store computed values for output
    xout[1] = x[1]
    xout[2] = x[2]

    # take AB steps
    for t = 3:Tfinal
        # set update indices for x based on parity of t
        index2 = t % 2 + 1        # slot holding the most recent step
        index1 = (t - 1) % 2 + 1  # slot holding the older step, overwritten below
        # 2nd order AB step
        x[index1] = x[index2] + h*(1.5*fstore[index2] - 0.5*fstore[index1])
        # update stored function; step t corresponds to time (t-1)*h
        fstore[index1] = f(x[index1], (t - 1)*h)
        xout[t] = x[index1]
    end
    return xout
end
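As a quick sanity check, you can try \dot{x}=-x with x(0)=1, whose exact solution is e^{-t}; the step size and number of steps here are arbitrary choices:

f(x, t) = -x
h = 0.01
Tfinal = 1000
xout = ab2(f, 1.0, h, Tfinal)
# compare against the exact solution at the final time (Tfinal-1)*h
println(xout[end], "  vs  ", exp(-(Tfinal - 1)*h))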


2 thoughts on “Numerical integration”

  1. Very cool! Is there a rule of thumb about what order of Adams-Bashforth you need in order to get good accuracy and stability, based on inspection of the ODE or of the size or path of the deltas from the Euler method?


  2. @RickG 2nd order Adams-Bashforth is basically the logical 2nd order version of Euler. Hence, if Euler is accurate at some given time step, then AB2 can be run at approximately the square root of that step for the same accuracy. It won’t be exact because the constant prefactor will matter. You can estimate that prefactor by linearizing your equations around some point.

