Numerical methods for ordinary differential equations
Adapted from Wikipedia
Numerical methods for ordinary differential equations are ways to find answers to math problems called ordinary differential equations. These equations describe how things change, like how fast a car accelerates or how a plant grows over time. Often, we can't solve these equations exactly using simple math, so we use special steps, or algorithms, to get really close answers instead.
These methods are very important in real life. Engineers use them to design bridges and machines. Doctors use them to understand how medicines work in our bodies. Even economists use them to predict how money and markets will change.
The process of using these steps to find answers is sometimes called "numerical integration", although that term can also mean something else — finding the area under a curve. Instead of solving the equations perfectly, we use algorithms to calculate good guesses.
Ordinary differential equations show up in many areas of science. Physics uses them to describe motion and forces. Chemistry uses them to study reactions. Biology uses them to model living things. And economics uses them to understand how resources change over time. Sometimes, even harder math problems called numerical partial differential equations are broken down into ordinary differential equations so we can solve them more easily.
The problem
A first-order initial value problem (IVP) pairs a differential equation with a known starting value. This means we know where a problem begins and want to predict what happens next. Many real-world problems, like how things move or grow over time, can be described this way.
Higher-order problems can be broken down into several first-order problems. For example, a second-order problem, which describes how a rate of change itself changes, can be rewritten as two simpler first-order problems. There are also special problems called boundary value problems (BVPs), where we know conditions at different points instead of just the start, and these need different solving methods.
$$y'(t) = f(t, y(t)), \qquad y(t_0) = y_0. \tag{1}$$
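The trick of turning a higher-order problem into a first-order system can be sketched in a few lines of Python. In this minimal example (the function name `f` and the test problem y'' = -y are our own choices, not from the original article), the state carries both the value and its rate of change, and taking many tiny steps in the slope direction tracks the exact solution sin(t):

```python
import math

# A second-order problem y''(t) = -y(t), rewritten as a first-order system:
# the state is u = (y, v) with v = y', so y' = v and v' = -y.
def f(t, u):
    y, v = u
    return (v, -y)

# Starting values y(0) = 0 and y'(0) = 1; the exact solution is y(t) = sin(t).
t, (y, v) = 0.0, (0.0, 1.0)
h = 1e-4                       # take very small steps in the slope direction
while t < 1.0:
    dy, dv = f(t, (y, v))
    y, v = y + h * dy, v + h * dv
    t += h

print(abs(y - math.sin(1.0)))  # a tiny error: the steps track sin(t) closely
```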
Methods
Numerical methods help us find answers to complex math problems called ordinary differential equations, which often cannot be solved exactly. These methods are split into two main types: linear multistep methods and Runge–Kutta methods. Some methods are explicit, meaning the new value is computed directly from known values; others are implicit, meaning an equation must be solved at each step to find the new value.
One simple method is the Euler method. Imagine moving along a curve by taking small steps in the direction of the curve’s slope. This method is easy but not always very accurate. Another method, the backward Euler method, is more complex but can handle certain problems better. There are also more advanced methods like exponential integrators and Runge–Kutta methods, which provide better accuracy by considering more points or past values.
Main article: Euler method
Further information: Backward Euler method
Further information: Exponential integrator
$$y'(t) \approx \frac{y(t+h) - y(t)}{h}, \tag{2}$$
$$y(t+h) \approx y(t) + h\,f(t, y(t)). \tag{3}$$
$$y_{n+1} = y_n + h\,f(t_n, y_n). \tag{4}$$
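The forward Euler update (4) takes only a few lines of code. A minimal sketch (the function name `euler` and the test problem y' = y are our own choices), checked against the exact value e at t = 1:

```python
import math

def euler(f, t0, y0, h, n):
    """Forward Euler: repeatedly apply y_{n+1} = y_n + h * f(t_n, y_n)."""
    t, y = t0, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# Solve y' = y with y(0) = 1 up to t = 1; the exact answer is e = 2.71828...
approx = euler(lambda t, y: y, 0.0, 1.0, 0.001, 1000)
print(approx)  # a little below e, because each step follows a straight line
```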
$$y'(t) \approx \frac{y(t) - y(t-h)}{h}, \tag{5}$$
$$y_{n+1} = y_n + h\,f(t_{n+1}, y_{n+1}). \tag{6}$$
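Because the unknown y_{n+1} appears on both sides of (6), each backward Euler step means solving an equation. A minimal sketch, assuming a simple fixed-point iteration to do that solving (production codes usually use Newton's method) and using y' = -5y as our own test problem:

```python
import math

def backward_euler(f, t0, y0, h, n, iters=50):
    """Implicit Euler: at each step, solve y_new = y_old + h * f(t_new, y_new),
    here by fixed-point iteration (fine when the step h is small)."""
    t, y = t0, y0
    for _ in range(n):
        t_new = t + h
        y_new = y                          # initial guess for the unknown
        for _ in range(iters):
            y_new = y + h * f(t_new, y_new)
        t, y = t_new, y_new
    return y

# Solve y' = -5y with y(0) = 1 up to t = 1; the exact answer is e**(-5).
approx = backward_euler(lambda t, y: -5.0 * y, 0.0, 1.0, 0.01, 100)
```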
$$y'(t) = -A\,y + \mathcal{N}(y), \tag{7}$$
$$y_{n+1} = e^{-Ah} y_n + A^{-1}\left(1 - e^{-Ah}\right)\mathcal{N}\left(y(t_n)\right). \tag{8}$$
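In the scalar case, update (8) is short to code. A minimal sketch (the function name `exp_euler` and the test problem y' = -2y + 1, i.e. A = 2 and N(y) = 1, are our own choices); the decaying linear part is handled exactly by the exponential factor:

```python
import math

def exp_euler(A, N, y0, h, n):
    """Scalar form of update (8):
    y_{n+1} = e^(-A h) y_n + A^(-1) (1 - e^(-A h)) N(y_n)."""
    y = y0
    decay = math.exp(-A * h)
    for _ in range(n):
        y = decay * y + (1.0 - decay) / A * N(y)
    return y

# y' = -2y + 1 (so A = 2, N(y) = 1) settles at the steady state y = 1/2.
approx = exp_euler(2.0, lambda y: 1.0, 0.0, 0.1, 200)
print(approx)  # essentially 0.5; with a constant N the update is even exact
```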
Analysis
Numerical analysis looks at how we design and check numerical methods for solving problems in math. Three big ideas in this area are convergence, order, and stability.
Convergence means that as we make our steps smaller, the answer we get gets closer to the real answer. A convergent method can be made as accurate as we like by taking small enough steps.
Order tells us how fast the error shrinks as the steps get smaller. A higher order means the error shrinks faster, so the method is more accurate for the same step size.
Stability is about whether small errors in the process grow or stay small. A stable method keeps errors under control, which is very important for solving certain tricky problems.
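The idea of order can be checked numerically. In this sketch (the helper name `euler_error` and the test problem y' = y are our own choices), halving the step roughly halves the forward Euler error, the hallmark of an order-1 method:

```python
import math

def euler_error(h):
    """Error of forward Euler for y' = y, y(0) = 1, measured at t = 1 (exact: e)."""
    y, n = 1.0, round(1.0 / h)
    for _ in range(n):
        y += h * y
    return abs(y - math.e)

e1, e2 = euler_error(0.01), euler_error(0.005)
print(e2 / e1)  # close to 0.5: half the step size gives about half the error
```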
History
Here is a timeline of important developments in numerical methods for solving differential equations.
In 1768, Leonhard Euler published his method, a way to approximate solutions. In 1824, Augustin Louis Cauchy proved that Euler’s method works well. In 1895, Carl Runge published the first Runge–Kutta method, and in 1901, Martin Kutta described a popular version of this method. Later advances continued to improve these techniques.
Numerical solutions to second-order one-dimensional boundary value problems
Boundary value problems are puzzles where we know the starting and ending points of a function and want to find the function in between. We solve these problems by turning them into simpler puzzles using a method called the Finite Difference Method. This method uses nearby points to estimate slopes and curves in the function.
For example, we might want to solve a problem where the curve starts at zero and ends at one. By breaking the space into small steps and using simple formulas to estimate the curve’s shape at each step, we can set up a system of equations. Solving this system gives us the values of the function at each point, building the whole curve step by step.
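A minimal sketch of that recipe follows. The function name `solve_bvp`, the test equation y'' = 6t, and the use of the Thomas algorithm to solve the tridiagonal system are our own choices for illustration:

```python
def solve_bvp(f, n):
    """Finite differences for y''(t) = f(t) on (0, 1) with y(0) = 0, y(1) = 1.
    Each interior point gives (y[i-1] - 2*y[i] + y[i+1]) / h**2 = f(t_i),
    a tridiagonal linear system solved here with the Thomas algorithm."""
    h = 1.0 / (n + 1)
    b = [f((i + 1) * h) * h * h for i in range(n)]  # right-hand sides
    b[-1] -= 1.0               # fold the boundary value y(1) = 1 into the RHS
    c = [0.0] * n              # forward sweep of the Thomas algorithm
    for i in range(n):
        denom = -2.0 - (c[i - 1] if i else 0.0)
        c[i] = 1.0 / denom
        b[i] = (b[i] - (b[i - 1] if i else 0.0)) / denom
    y = [0.0] * n              # back substitution
    y[-1] = b[-1]
    for i in range(n - 2, -1, -1):
        y[i] = b[i] - c[i] * y[i + 1]
    return y

# y'' = 6t with y(0) = 0, y(1) = 1 has the exact solution y(t) = t**3.
y = solve_bvp(lambda t: 6.0 * t, 99)
print(y[49])  # the value at t = 0.5; the exact answer there is 0.125
```

Each interior point contributes one linear equation, so finer grids give bigger (but still tridiagonal, hence cheap to solve) systems.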
This article is a child-friendly adaptation of the Wikipedia article on Numerical methods for ordinary differential equations, available under CC BY-SA 4.0.
Safekipedia