Euler's Method for Numerical ODEs

April 14, 2026

Problem

Approximate the solution of y′ = x + y with y(0) = 1 on 0 ≤ x ≤ 1 using Euler's method with step size h = 0.2. Compare against the exact solution y = 2e^x − x − 1.

Explanation

The simplest numerical ODE method

Most ODEs you meet in the wild — especially non-linear ones — have no closed-form solution. Numerical methods approximate the solution by stepping along the slope field in small increments. Euler's method is the most basic such method, and understanding it well sets you up for every more-sophisticated solver (Runge-Kutta, Adams, adaptive steppers).

The idea: given y′ = f(x, y) and y(x_0) = y_0, you know the slope at every point. Start at (x_0, y_0), take a short step of width h along the tangent line, then recompute the slope at the new point and step again.

The update rule is

  y_{n+1} = y_n + h · f(x_n, y_n),    x_{n+1} = x_n + h

Geometrically: replace each infinitesimal tangent segment of the true solution with a finite chord of horizontal run h.

The given IVP

y′ = x + y,    y(0) = 1,    h = 0.2,    x ∈ [0, 1].

So f(x, y) = x + y. Five steps of size 0.2 reach x = 1.
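As a minimal sketch (my own illustrative Python, not part of the original problem), the update rule becomes a short loop:

```python
def euler(f, x0, y0, h, n_steps):
    """Approximate y' = f(x, y), y(x0) = y0, with n_steps forward-Euler steps."""
    x, y = x0, y0
    points = [(x, y)]
    for _ in range(n_steps):
        y = y + h * f(x, y)   # step along the tangent line at (x_n, y_n)
        x = x + h
        points.append((x, y))
    return points

# This problem's IVP: f(x, y) = x + y, y(0) = 1, h = 0.2, five steps.
for x, y in euler(lambda x, y: x + y, 0.0, 1.0, 0.2, 5):
    print(f"x = {x:.1f}   y = {y:.5f}")
```

Running it reproduces the hand computation below, ending near y(1) ≈ 2.977.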

Step-by-step Euler computation

Start at (x_0, y_0) = (0, 1).

Step 1 (x_0 = 0):

  • slope = f(0, 1) = 0 + 1 = 1.
  • y_1 = y_0 + h · slope = 1 + 0.2 · 1 = 1.200.
  • x_1 = 0.2.

Step 2 (x_1 = 0.2):

  • slope = f(0.2, 1.2) = 0.2 + 1.2 = 1.4.
  • y_2 = 1.2 + 0.2 · 1.4 = 1.480.
  • x_2 = 0.4.

Step 3 (x_2 = 0.4):

  • slope = f(0.4, 1.48) = 1.88.
  • y_3 = 1.48 + 0.2 · 1.88 = 1.856.
  • x_3 = 0.6.

Step 4 (x_3 = 0.6):

  • slope = f(0.6, 1.856) = 2.456.
  • y_4 = 1.856 + 0.2 · 2.456 = 2.347.
  • x_4 = 0.8.

Step 5 (x_4 = 0.8):

  • slope = f(0.8, 2.347) = 3.147.
  • y_5 = 2.347 + 0.2 · 3.147 = 2.977.
  • x_5 = 1.0.

Numerical estimate: y(1) ≈ 2.977.

Exact solution — compare

yy=xy' - y = x is a first-order linear ODE (#174, #175). Integrating factor μ=ex\mu = e^{-x}: (exy)=exx(e^{-x} y)' = e^{-x} \cdot x

Integrate by parts: e^{−x} y = −(x + 1) e^{−x} + C, so y = −(x + 1) + C e^x.

Apply y(0) = 1: 1 = −1 + C, so C = 2. Therefore

  y(x) = 2e^x − x − 1.

Exact value at x = 1: y(1) = 2e − 1 − 1 = 2e − 2 ≈ 3.4366.
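As a quick sanity check on this derivation (an illustrative script of my own), plug y(x) = 2e^x − x − 1 back into the ODE:

```python
import math

# y(x) = 2 e^x - x - 1 and its derivative y'(x) = 2 e^x - 1
y  = lambda x: 2.0 * math.exp(x) - x - 1.0
dy = lambda x: 2.0 * math.exp(x) - 1.0

# y' should equal x + y everywhere, and the initial condition y(0) = 1 should hold
for x in (0.0, 0.3, 0.7, 1.0):
    assert abs(dy(x) - (x + y(x))) < 1e-12
assert y(0.0) == 1.0
print(y(1.0))   # 2e - 2 ≈ 3.43656
```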

Error: 3.4366 − 2.977 = 0.4596, or about 13% relative error. Ouch — Euler's method is crude.

Why the error is so large — local vs global

Each step introduces a local truncation error that scales as O(h^2) (it's the neglected second-derivative term in the Taylor expansion):

  y(x_{n+1}) = y(x_n) + h y′(x_n) + (h^2/2) y″(ξ_n)

We keep the first two terms (that's Euler) and drop the O(h^2) remainder. Accumulated over N = (x_final − x_0)/h steps, the errors compound to a global error of O(h). Euler is first-order accurate: halving h halves the error (roughly).

How to do better

  • RK2 (midpoint / improved Euler): use the slope at the midpoint or an average of start/end slopes. Global error O(h^2).
  • RK4 (classic Runge-Kutta): weighted average of 4 slopes per step. Global error O(h^4). The standard workhorse.
  • Adaptive step size (RK45, Dormand-Prince): adjust h on the fly to keep the estimated error under a target. Used by SciPy's solve_ivp (default method RK45), MATLAB's ode45, etc.
  • Implicit methods (backward Euler, BDF): for stiff ODEs, where explicit methods require extremely small h to stay stable.
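For contrast, here is a sketch of the classic RK4 step applied to the same IVP (my own illustrative code, using the standard RK4 coefficients):

```python
def rk4_step(f, x, y, h):
    """One classic fourth-order Runge-Kutta step: weighted average of 4 slopes."""
    k1 = f(x, y)
    k2 = f(x + h / 2, y + h / 2 * k1)
    k3 = f(x + h / 2, y + h / 2 * k2)
    k4 = f(x + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x, y: x + y
x, y = 0.0, 1.0
for _ in range(5):            # same five steps of h = 0.2 as the Euler run
    y = rk4_step(f, x, y, 0.2)
    x += 0.2
print(y)                      # ≈ 3.4365, vs exact 2e - 2 ≈ 3.43656
```

With the same five steps, RK4's error is around 10^-4 instead of Euler's 0.46.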

Euler is almost never used in production — but it's the conceptual foundation for everything else.

Stability — not just accuracy

For y′ = −λy with λ > 0 (a decaying ODE), Euler gives y_{n+1} = (1 − λh) y_n. If h > 2/λ, then |1 − λh| > 1 and the numerical solution grows without bound — completely wrong. Stability requires h < 2/λ, which can force absurdly small h for stiff ODEs with large λ. This is why implicit methods exist.
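This blow-up is easy to see numerically (an illustrative sketch of my own; λ = 10, so the stability bound is h < 0.2):

```python
# Euler on y' = -lam * y, whose exact solution e^{-lam x} decays to 0.
lam = 10.0

def euler_decay(h, steps=20):
    y = 1.0
    for _ in range(steps):
        y += h * (-lam * y)   # equivalent to y *= (1 - lam * h)
    return y

print(euler_decay(0.05))  # h < 2/lam = 0.2: factor 0.5 per step, decays fine
print(euler_decay(0.25))  # h > 2/lam: factor -1.5 per step, |y| explodes
```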

Error halves when h halves

If you run the same problem with h = 0.1 (10 steps), the error at x = 1 is roughly half what it was with h = 0.2. With h = 0.05, another factor of 2. This linear-in-h decay is the signature of a first-order method.

(Contrast RK4, where halving h cuts the error by a factor of 16.)
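The first-order convergence can be checked with a short experiment (my own sketch, measuring the error at x = 1 for a sequence of step sizes):

```python
import math

def euler_at_1(h):
    """Euler estimate of y(1) for y' = x + y, y(0) = 1 with step size h."""
    x, y = 0.0, 1.0
    for _ in range(round(1.0 / h)):
        y += h * (x + y)
        x += h
    return y

exact = 2 * math.e - 2
for h in (0.2, 0.1, 0.05, 0.025):
    print(f"h = {h:<6} error = {exact - euler_at_1(h):.4f}")
# errors: 0.4599, 0.2491, 0.1300, 0.0664 -- each roughly half the one before
```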

Where Euler's method shows up pedagogically

  • Teaching the concept of numerical integration without the machinery of higher-order methods.
  • Simple game-physics simulations: for objects with small velocities and short time steps, Euler is fine. Most game engines start with Euler and graduate to Verlet or RK2 when stability matters.
  • First-pass sanity checks: "does the ODE behave qualitatively as I expect?" A quick Euler run answers that.
  • Inside more complex methods: multi-step methods like Adams-Bashforth use past Euler-like evaluations.

Common mistakes

  • Using f(x_{n+1}, y_{n+1}) instead of f(x_n, y_n). That would be backward Euler (implicit), a different and more stable method, but not what "Euler's method" normally means.
  • Forgetting to advance x. Each step must update both y_n → y_{n+1} and x_n → x_{n+1}.
  • Treating the numerical y_n as if it were the exact value at x_n. The numerical y_n is an approximation; the exact value y(x_n) differs from it by the accumulated global error.
  • Not matching units. If your ODE is physical (t in seconds, y in metres), your step size h is in seconds and f has units of metres/second.
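To make the backward-Euler contrast concrete (my own sketch): for f(x, y) = x + y the implicit update y_{n+1} = y_n + h · f(x_{n+1}, y_{n+1}) is linear in y_{n+1}, so it can be solved in closed form:

```python
# Backward Euler: y_{n+1} = y_n + h * f(x_{n+1}, y_{n+1}).
# With f(x, y) = x + y this rearranges to y_{n+1} = (y_n + h * x_{n+1}) / (1 - h).
x, y, h = 0.0, 1.0, 0.2
for _ in range(5):
    x += h
    y = (y + h * x) / (1 - h)
print(y)   # ≈ 4.104: also first-order, but it errs above the exact 3.4366
```

For nonlinear f, each backward-Euler step requires a root-find instead of this closed-form rearrangement.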

Try it in the visualization

Draw the slope field for y′ = x + y, overlay the exact solution y = 2e^x − x − 1, and plot the Euler "stairstep" of tangent segments. Slide h down from 0.5 toward 0.02 and watch the stairstep hug the exact curve more tightly. Overlay an error curve |y_exact(x_n) − y_n| to see the global error shrink.
