Next: Markov and Stochastic Models Up: Methods for Solving ODEs Previous: Using Matrix Algebra   Index


Numerical Approximations

Finding an analytical solution to a differential equation is not always a practical option. Numerical approximations lead to solutions that are much more readily available; however, there are a number of issues related to the approximation that should be understood. The trick to constructing a viable numerical solution of a differential equation is identifying a reliable approximation of the derivative and then selecting a step size that results in a solution that is both stable and physically meaningful.

THE MAIN IDEA: start from the initial condition, and use the Taylor series to guide the approximation of the solution.

Errors acquired during the construction of a numerical solution arise from two sources: roundoff and truncation. Roundoff error arises from the limited precision of computer arithmetic. The problem is compounded by the fact that the binary representation of many simple fractions is non-terminating, enhancing the effects of the roundoff error. For example, 1/10 has a repeating (non-terminating) binary representation whereas 1/8 is exactly representable. For this reason, we often choose discretization intervals that are powers of 1/2 instead of powers of 1/10.
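This behavior is easy to check directly. The following Python sketch (not part of the original HowTo; any language using IEEE floating point behaves the same way) accumulates steps of size 1/10 and of size 1/8:

```python
# Roundoff: 1/10 has a non-terminating binary expansion, 1/8 (= 2^-3) does not.
tenth_sum = sum(0.1 for _ in range(10))    # ten steps of size 1/10
eighth_sum = sum(0.125 for _ in range(8))  # eight steps of size 1/8

print(tenth_sum == 1.0)   # False: roundoff error has accumulated
print(eighth_sum == 1.0)  # True: each step is exactly representable
```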

The second source of error is called truncation error. This error arises when we make discrete approximations of continuous functions. This error can be, to a certain extent, limited by making the step sizes in the discrete function as small as possible. The Taylor series, which provides a means of creating approximate functions, also allows us to evaluate the truncation error. We often evaluate the quality of a numerical solution by estimating the error incurred with our functional approximations.
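The following Python sketch illustrates the point; the test function $ e^t$ and the step sizes are my own arbitrary choices, not part of the original text. The Taylor series predicts that the truncation error of a one-sided difference approximation of a derivative is proportional to $ h$, so halving $ h$ should roughly halve the error:

```python
import math

def forward_diff(f, t, h):
    """One-sided (forward) difference approximation of f'(t)."""
    return (f(t + h) - f(t)) / h

# The exact derivative of exp at t = 0 is 1; the Taylor series says the
# truncation error is (h/2)*f''(t) + ..., so halving h should halve it.
err_h = abs(forward_diff(math.exp, 0.0, 1e-3) - 1.0)
err_h2 = abs(forward_diff(math.exp, 0.0, 5e-4) - 1.0)
print(err_h / err_h2)  # approximately 2
```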

Forward Euler Method

Example 2.10.5.2 (Forward Euler Method)  

We start with a simple first order initial value problem (IVP)

$\displaystyle \frac{\textrm{d}y}{\textrm{d}t} = f(y,t)\textrm{, and } y(t_0) = y_0$ (2.10.17)

where f(y,t) is some linear or non-linear function of y and t and $ y(t_0)$ is the initial value of y at time $ t_0$.

Finding approximate values for points other than $ t_0$ is simply a matter of choosing what step size, $ h$, of the independent variable, $ t$, to use. Figure 2.10.8 shows the idea of extending the solution using the forward Euler method.

Figure 2.10.8: The general idea for extending a solution using a forward Euler method. [figure: numerical_step]

The strategy is to start at a point and extend the solution with a step size, $ h$. So we start at $ (y_0,t_0)$, compute the increment in $ y$, $ h\,f(y_0,t_0)$, add it to $ y_0$, and then use this new point to repeat the algorithm:

  1. $ t_{i+1} = t_i + h$
  2. $ y_{i+1} = y_i + h\,f(y_i,t_i)$
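The two-step algorithm above translates directly into code. Here is a minimal Python sketch (the function name and the test problem $ dy/dt = -y$ are my own choices, not part of the original HowTo):

```python
import math

def forward_euler(f, y0, t0, h, n_steps):
    """Advance the IVP dy/dt = f(y, t), y(t0) = y0, by n_steps of size h."""
    y, t = y0, t0
    for _ in range(n_steps):
        y = y + h * f(y, t)   # step 2: y_{i+1} = y_i + h*f(y_i, t_i)
        t = t + h             # step 1: t_{i+1} = t_i + h
    return y

# Example: dy/dt = -y, y(0) = 1, whose exact solution is exp(-t).
y_approx = forward_euler(lambda y, t: -y, 1.0, 0.0, 0.1, 10)
print(y_approx, math.exp(-1.0))  # roughly 0.3487 versus 0.3679
```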

This is called the forward Euler method because the right hand side, $ f(y,t)$, is evaluated at the known (left hand) point. Now, is this a good method? We'll look at a Taylor expansion of our method and estimate the error and the stability of the solution. By stability we mean: will the numerical solution reflect the analytic solution for arbitrary conditions (such as the step size and the nature of the function, $ f(y,t)$)?

Now, using the Taylor series, Equation 2.2.1, we will expand $ y(t)$ around $ y(t_i)$ and assess the truncation error:

$\displaystyle y(t_{i+1}) = y(t_i) + hy'(t_i) + \frac{h^2}{2}y''(t_i) + O(h^3).$ (2.10.18)

The first two terms on the right represent the truncated Taylor series used by Euler's method, while the remaining terms represent the local truncation error. We are assuming that $ h < 1$, so the $ O(h^3)$ notation indicates that all terms that follow in the series are of order $ h^3$ or smaller. The error per step is thus $ O(h^2)$; accumulated over the $ O(1/h)$ steps needed to cross a fixed interval, the global error is of order $ h$, so we say that the method is first order accurate.
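First order accuracy is easy to verify numerically: halving the step size should roughly halve the global error. A Python sketch (again using the test problem $ dy/dt = -y$, $ y(0)=1$, integrated to $ t=1$, which is my own choice, not the author's):

```python
import math

def forward_euler(f, y0, t0, h, n_steps):
    """Forward Euler integration of dy/dt = f(y, t)."""
    y, t = y0, t0
    for _ in range(n_steps):
        y, t = y + h * f(y, t), t + h
    return y

exact = math.exp(-1.0)  # exact solution of dy/dt = -y at t = 1

# Global error at t = 1 for two step sizes, one half the other.
err_h = abs(forward_euler(lambda y, t: -y, 1.0, 0.0, 0.01, 100) - exact)
err_h2 = abs(forward_euler(lambda y, t: -y, 1.0, 0.0, 0.005, 200) - exact)
print(err_h / err_h2)  # approximately 2: the global error is O(h)
```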

In order to determine how stable this method is for approximating an ordinary differential equation, we simply determine whether it will converge or diverge for large values of $ i$. For example, we will use a simple test function, $ f(y, t) = \lambda y$, so that

$\displaystyle f(y, t) = \frac{\textrm{d}y}{\textrm{d}t} = \lambda y\textrm{, and } y(0) = y_0.$    

Using this to replace the function, $ f(y,t)$, in the forward Euler update, $ y_{i+1} = y_i + hf(y_i,t_i)$, we have

$\displaystyle y_{i+1} = y_i + h \lambda y_i = (1 + h\lambda )y_i.$    

Now, if you start at the initial value, $ y_0$, and repeatedly apply the above equation, you'll find that

$\displaystyle y_{1}$ $\displaystyle = (1 + h\lambda)y_0$    
$\displaystyle y_{2}$ $\displaystyle = (1 + h\lambda)(1 + h\lambda)y_0$    
$\displaystyle \vdots$ $\displaystyle \quad \quad \quad \vdots$    
$\displaystyle y_{i+1}$ $\displaystyle = (1 + h\lambda)^{i+1} y_0.$    

We call the multiplier, $ 1 + h\lambda$, the amplification factor. The numerical solution decays only when $ \vert 1 + h\lambda \vert < 1$; since the step size, $ h$, can never be negative, this requires $ \lambda < 0$ and, in addition, $ h < 2/\vert\lambda\vert$. If $ h\lambda < -2$, the amplification factor is less than $ -1$ and the solution will blow up (with alternating sign) after a few steps, even though the analytic solution decays. Thus the forward Euler method is only conditionally stable.
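The conditional stability is easy to demonstrate. In this Python sketch (the value of $ \lambda$ and the step sizes are my own choices), $ \lambda = -10$ so the analytic solution decays, but the forward Euler iterate only decays when $ h < 2/\vert\lambda\vert = 0.2$:

```python
def forward_euler_decay(lam, h, n_steps, y0=1.0):
    """Apply y_{i+1} = (1 + h*lam) * y_i repeatedly."""
    y = y0
    for _ in range(n_steps):
        y = (1.0 + h * lam) * y
    return y

lam = -10.0  # analytic solution exp(-10 t) decays to zero

y_stable = forward_euler_decay(lam, 0.15, 20)    # h*lam = -1.5, |1+h*lam| = 0.5
y_unstable = forward_euler_decay(lam, 0.30, 20)  # h*lam = -3.0, |1+h*lam| = 2.0

print(y_stable)    # tiny: (-0.5)^20, decays as it should
print(y_unstable)  # huge: (-2.0)^20, blows up despite a decaying ODE
```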

Backward Euler Method

Example 2.10.5.4 (Backward Euler Method)  

The forward Euler method approximates the points $ y_{i+1}$ by starting from some initial point, $ y_0$, and moving to the right using the derivative as calculated at $ y_{i}$. An alternative is to start from $ y_0$ and move to the right using the derivative as calculated at $ y_{i+1}$. That is, we evaluate the function, $ f(y,t)$, at $ (y_{i+1}, t_{i+1})$. To do this, we rewrite the second formula in the numerical approximation as

$\displaystyle y_{i+1} = y_i + hf(y_{i+1},t_{i+1}).$ (2.10.19)

Now we let $ f(y, t) = \lambda y$ and solve for $ y_{i+1}$ and find that

$\displaystyle y_{i+1} = y_i + h\lambda y_{i+1},$    

and thus,

$\displaystyle y_{i+1} = \left(\frac{1}{1 - h\lambda} \right) y_i.$    

Starting at the initial condition, $ y_0, t_0$ we see that

$\displaystyle y_1$ $\displaystyle = \left(\frac{1}{1 - h\lambda} \right) y_0$    
$\displaystyle y_2$ $\displaystyle = \left(\frac{1}{1 - h\lambda} \right)\left(\frac{1}{1 - h\lambda} \right) y_0$    
$\displaystyle \vdots$ $\displaystyle \quad \quad \quad \vdots$    
$\displaystyle y_{i+1}$ $\displaystyle = \left(\frac{1}{1 - h\lambda} \right)^{i+1} y_0,$    

and the magnitude of the amplification factor, $ 1/(1 - h\lambda)$, is now $ < 1$ for all values of $ \lambda < 0$, no matter how large the step size, $ h$ (which, again, can never be negative). Thus, for $ \lambda < 0$ this method is stable for any step size: it is unconditionally stable. This method is referred to as an implicit method, since the function is evaluated at a solution point yet to be determined. For all linear functions, $ f$, a nice iterative equation can be determined. When $ f$ is non-linear, the function is typically linearized (with a Taylor series, of course) and each step in the solution is iterated until the function is well approximated by the Taylor approximation.
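For the linear test problem the implicit update is just a multiplication, so the unconditional stability can be checked directly. A Python sketch ($ \lambda$ and the deliberately huge step size are my own choices):

```python
def backward_euler_decay(lam, h, n_steps, y0=1.0):
    """Apply the implicit update y_{i+1} = y_i / (1 - h*lam) repeatedly."""
    y = y0
    for _ in range(n_steps):
        y = y / (1.0 - h * lam)
    return y

lam = -10.0
h = 1.0  # far beyond forward Euler's stability limit of 2/|lam| = 0.2

y_back = backward_euler_decay(lam, h, 5)  # amplification (1/11)^5: decays
y_fwd = (1.0 + h * lam) ** 5              # forward Euler: (-9)^5, blows up
print(y_back, y_fwd)
```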

Leap Frog Method

Example 2.10.5.6 (Leap Frog Method)  

The error associated with the simple Euler method can be improved by realizing that for both the forward (explicit) and backward (implicit) Euler methods, there is an asymmetry between approximating the derivative and evaluating the function. For the explicit method, the function (right hand side) is evaluated at the left end of the derivative interval, while for the implicit method, $ f(y,t)$ is evaluated at the right end. We can instead evaluate $ f(y,t)$ at the midpoint of the derivative interval; this is called the Leap Frog method. The price for this symmetry is that we now need to know two values of $ y$, $ y_i$ and $ y_{i-1}$, before we can extend the solution to the next point. The derivative is approximated with the centered difference

$\displaystyle \frac{\textrm{d}y}{\textrm{d}t} \approx \frac{y_{i+1} - y_{i-1}}{2h}$    

so that

$\displaystyle y_{i+1} - y_{i-1} = 2hf(y_i,t_i)$    

As before, we can evaluate the accuracy of this strategy by comparing to the Taylor expansion of the function. We first write Taylor expansions for both $ y_{i+1}$ and $ y_{i-1}$:

$\displaystyle y(t_{i+1}) = y(t_i) + hy'(t_i) + \frac{h^2}{2}y''(t_i) + O(h^3),$    

and

$\displaystyle y(t_{i-1}) = y(t_i) - hy'(t_i) + \frac{h^2}{2}y''(t_i) + O(h^3).$    

Now, subtracting the two series, we see that the second order ($ h^2$) terms cancel, leaving $ y(t_{i+1}) - y(t_{i-1}) = 2hy'(t_i) + O(h^3)$. Thus, this method is second order accurate.
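Because the update needs both $ y_i$ and $ y_{i-1}$, the scheme must be primed with one starting step from another method, here a single forward Euler step. A Python sketch (the test problem $ dy/dt = -y$ is my own choice):

```python
import math

def leap_frog(f, y0, t0, h, n_steps):
    """Leap frog: y_{i+1} = y_{i-1} + 2h*f(y_i, t_i), primed by one Euler step."""
    y_prev = y0
    y = y0 + h * f(y0, t0)   # one forward Euler step supplies y_1
    t = t0 + h
    for _ in range(n_steps - 1):
        y_prev, y = y, y_prev + 2.0 * h * f(y, t)
        t = t + h
    return y

# dy/dt = -y, y(0) = 1, integrated to t = 1 with h = 0.01.
y_approx = leap_frog(lambda y, t: -y, 1.0, 0.0, 0.01, 100)
print(y_approx, math.exp(-1.0))  # agree to roughly four decimal places
```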

Runge-Kutta Method

Example 2.10.5.8 (Runge-Kutta Method)  

The solution accuracy can be further improved by building a strategy that combines the leap frog method and the forward Euler method. This mixture results in a second order Runge-Kutta method: the half step is computed with the forward Euler method and then the full step is computed with a leap-frog-like centered step.

$\displaystyle y_{i+1/2} = y_i + \frac{h}{2}f(y_i,t_i)$ (2.10.20)

$\displaystyle y_{i+1} = y_i + h f(y_{i+1/2},t_{i+1/2}) + O(h^3)$ (2.10.21)
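Equations 2.10.20 and 2.10.21 translate into a short half-step/full-step routine. A Python sketch (the test problem $ dy/dt = -y$ is my own choice, not part of the original text):

```python
import math

def rk2_midpoint(f, y0, t0, h, n_steps):
    """Second order Runge-Kutta: Euler half step, then centered full step."""
    y, t = y0, t0
    for _ in range(n_steps):
        y_half = y + 0.5 * h * f(y, t)       # Eq. 2.10.20: forward Euler half step
        y = y + h * f(y_half, t + 0.5 * h)   # Eq. 2.10.21: full step, midpoint slope
        t = t + h
    return y

# dy/dt = -y, y(0) = 1, integrated to t = 1 with h = 0.1.
y_approx = rk2_midpoint(lambda y, t: -y, 1.0, 0.0, 0.1, 10)
print(abs(y_approx - math.exp(-1.0)))  # error near 7e-4, far below forward Euler's
```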

Like the leap frog method, this method is second order accurate. The name Runge-Kutta applies to a whole class of methods built from intermediate evaluations. For example, we can make four evaluations of $ f$ per step and obtain a procedure that is fourth order accurate:

$\displaystyle k_1 = f(y_i,t_i)$ (2.10.22)

$\displaystyle k_2 = f(y_i+\frac{h k_1}{2},t_i+\frac{h}{2})$ (2.10.23)

$\displaystyle k_3 = f(y_i+\frac{h k_2}{2},t_i+\frac{h}{2})$ (2.10.24)

$\displaystyle k_4 = f(y_i+h k_3,t_i+h)$ (2.10.25)

$\displaystyle y_{i+1} = y_i + \frac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right) + O(h^5)$ (2.10.26)
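The classical fourth order Runge-Kutta scheme can be sketched in Python as follows (the test problem $ dy/dt = -y$ is my own choice):

```python
import math

def rk4(f, y0, t0, h, n_steps):
    """Classical 4th order Runge-Kutta for dy/dt = f(y, t)."""
    y, t = y0, t0
    for _ in range(n_steps):
        k1 = f(y, t)                          # slope at the start
        k2 = f(y + 0.5 * h * k1, t + 0.5 * h) # slope at the midpoint, using k1
        k3 = f(y + 0.5 * h * k2, t + 0.5 * h) # slope at the midpoint, using k2
        k4 = f(y + h * k3, t + h)             # slope at the end
        y = y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        t = t + h
    return y

# dy/dt = -y, y(0) = 1, integrated to t = 1 with a coarse step h = 0.1.
y_approx = rk4(lambda y, t: -y, 1.0, 0.0, 0.1, 10)
print(abs(y_approx - math.exp(-1.0)))  # roughly 3e-7, versus ~2e-2 for Euler
```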

This is a very popular procedure and will usually do you right. It is safe, accurate, and except for really weird models, will provide reliable solutions.

Fortunately, octave (matlab) has some very nice tools for solving ODEs for us. Here are two little scripts just to indicate what is needed. The first segment defines the ODE as a function, in this case $ \textrm{d}u/\textrm{d}t = u(1-u)(u-a) + \textrm{stim}$, where stim is the stimulus that forces the system to switch from one stable state to another:

> more trigger.m

function xdot=trigger(x,t)

% Trigger threshold for stim = 0.25 is 1.039 time units

xdot = zeros(1,1);
a = 0.25;
stim = 0;

if t < 1.039
   stim = 0.25;
end

xdot(1) = x(1)*(1.0-x(1))*(x(1)-a) + stim;

endfunction

To actually solve the ODE, we execute the following piece of code within matlab (octave), which calls the ODE solver, lsode, and then plots the results:

> more fhn_trigger.m

t = linspace(0,50,400);

%x0 = [0.0; 0.0];
x0 = [0.0];

y = lsode("trigger",x0,t);

plot(t,y);

z = [t' y];
save -ascii plot.dat z;



Frank Starmer 2004-05-19