Numerical methods for ordinary differential equations
Adapted from Wikipedia
Numerical methods for ordinary differential equations are ways to solve math problems called ordinary differential equations. These equations describe how things change, like how fast a car speeds up or how a plant grows.
We often can't solve these equations exactly with simple math. So, we use special steps, or algorithms, to find answers that are very close.
These methods are important in real life. Engineers use them to design bridges and machines. Doctors use them to learn how medicines work in our bodies. Economists use them to predict changes in money and markets.
The process of using these steps is sometimes called "numerical integration". This term can also mean finding the area under a curve. Instead of solving equations perfectly, we use algorithms to make good guesses.
Ordinary differential equations are used in many areas of science. Physics uses them to describe motion and forces. Chemistry uses them to study reactions. Biology uses them to model living things. And economics uses them to understand how resources change over time. Sometimes, more complex problems called numerical partial differential equations are broken down into ordinary differential equations to make them easier to solve.
The problem
A first-order differential equation together with a known starting value is called an initial value problem (IVP). This means we know where we start and want to know what happens next. Many real-world problems, like how things move or grow over time, can be described this way.
Higher-order problems can be split into several first-order problems. For example, a problem about how something changes twice over time can be split into two simpler problems. There are also special problems called boundary value problems (BVPs), where we know conditions at different points, and these need different solving methods.
y'(t) = f(t, y(t)),    y(t_0) = y_0.   (1)
Methods
Numerical methods give approximate answers to ordinary differential equations, which often cannot be solved exactly. These methods fall into two main families: linear multistep methods and Runge–Kutta methods.
Some methods are explicit: they use already-known values directly to compute the next one. Others are implicit: they require solving an extra equation at each step to find the next value.
One simple method is the Euler method. Imagine moving along a curve by taking small steps in the direction of the curve’s slope. This method is easy but not always very accurate. Another method, the backward Euler method, is more complex but can handle certain problems better. There are also more advanced methods like exponential integrators and Runge–Kutta methods, which give better results by looking at more points or past values.
Euler's method starts from the forward difference approximation of the derivative:

y'(t) ≈ (y(t + h) − y(t)) / h,   (2)

which gives

y(t + h) ≈ y(t) + h f(t, y(t)).   (3)

Stepping from t_n to t_{n+1} = t_n + h, the (forward) Euler method is

y_{n+1} = y_n + h f(t_n, y_n).   (4)

Using the backward difference instead,

y'(t) ≈ (y(t) − y(t − h)) / h,   (5)

gives the backward Euler method:

y_{n+1} = y_n + h f(t_{n+1}, y_{n+1}).   (6)

For an equation of the form

y'(t) = −A y + N(y),   (7)

an exponential integrator treats the linear part exactly:

y_{n+1} = e^{−Ah} y_n + A^{−1}(1 − e^{−Ah}) N(y(t_n)).   (8)
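As a concrete illustration (a minimal Python sketch, not part of the original article), both Euler updates can be tried on the test equation y' = −y with y(0) = 1, whose exact solution is y(t) = e^(−t). For this particular f, the implicit backward Euler step can be solved by hand.

```python
import math

def forward_euler_step(f, t, y, h):
    # explicit: the new value is computed directly from known values
    return y + h * f(t, y)

# Test problem: y' = -y, y(0) = 1, exact solution y(t) = exp(-t).
f = lambda t, y: -y
h, steps = 0.01, 100

y_fwd = 1.0
y_bwd = 1.0
t = 0.0
for _ in range(steps):
    y_fwd = forward_euler_step(f, t, y_fwd, h)
    # backward Euler for this f: y_new = y_old + h * (-y_new),
    # which rearranges by hand to y_new = y_old / (1 + h)
    y_bwd = y_bwd / (1 + h)
    t += h

exact = math.exp(-1.0)  # true value of y(1)
```

After 100 steps of size 0.01, both approximations of y(1) land close to e^(−1) ≈ 0.3679. For general f, the backward Euler equation has no hand-solvable form and must be solved numerically at each step, which is the extra work implicit methods require.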
Analysis
Numerical analysis studies how we create and test methods to solve math problems. Three key ideas are convergence, order, and stability.
Convergence means that as we make our steps smaller, our answer gets closer to the true answer. A convergent method can be trusted to give better and better guesses.
Order measures how quickly the answer improves as the steps get smaller. A higher-order method becomes accurate faster as the step size shrinks.
Stability checks if small mistakes in the process stay small or get bigger. A stable method keeps mistakes under control, which is important for solving difficult problems.
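The idea of order can be checked by experiment. In this Python sketch (an illustration, not from the original article), halving the step size roughly halves the error of Euler's method, which is what "first order" means.

```python
import math

def euler(f, t0, y0, h, steps):
    # forward Euler: y_{n+1} = y_n + h * f(t_n, y_n)
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# Test problem: y' = -y, y(0) = 1; the true answer is y(1) = exp(-1).
exact = math.exp(-1.0)
err_big = abs(euler(lambda t, y: -y, 0.0, 1.0, 0.1, 10) - exact)
err_small = abs(euler(lambda t, y: -y, 0.0, 1.0, 0.05, 20) - exact)
ratio = err_big / err_small  # roughly 2 for a first-order method
```

For a second-order method the same halving of the step size would shrink the error by a factor of about four.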
History
Here is a timeline of important developments in numerical methods for solving differential equations.
In 1768, Leonhard Euler published his method, a simple step-by-step way to approximate solutions. In 1824, Augustin-Louis Cauchy proved that Euler's method converges, meaning it really does approach the true answer. In 1895, Carl Runge published the first Runge–Kutta method, and in 1901, Martin Kutta described the popular fourth-order version of it. Many improvements have been made to these techniques since.
Numerical solutions to second-order one-dimensional boundary value problems
Boundary value problems are like puzzles: we know the values at the two ends of an interval, but we need to find the curve in between. One common approach is the finite difference method, which replaces the slopes and curvatures in the equation with differences between values at nearby grid points, turning the problem into a system of simple equations.
For example, imagine a line that starts at zero and ends at one. We can break the space into small steps. Then we use easy math to guess the shape of the line at each step. This helps us make a set of equations. Solving these equations gives us the values of the line at each point. This builds the whole line step by step.
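The steps above can be sketched in Python. This example (an illustration, not from the original article) solves the boundary value problem y'' = −1 with y(0) = y(1) = 0, whose exact solution is y(x) = x(1 − x)/2, by replacing y'' with a central difference and solving the resulting tridiagonal system with the Thomas algorithm.

```python
def solve_bvp(n):
    """Solve y'' = -1 with y(0) = y(1) = 0 on n interior grid points,
    using central differences and the Thomas (tridiagonal) algorithm."""
    h = 1.0 / (n + 1)
    # central difference: (y[i-1] - 2*y[i] + y[i+1]) / h^2 = -1
    a = [1.0] * n      # sub-diagonal
    b = [-2.0] * n     # diagonal
    c = [1.0] * n      # super-diagonal
    d = [-h * h] * n   # right-hand side
    # forward elimination
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    # back substitution
    y = [0.0] * n
    y[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        y[i] = (d[i] - c[i] * y[i + 1]) / b[i]
    return y

y = solve_bvp(9)  # values at the grid points x = 0.1, 0.2, ..., 0.9
mid = y[4]        # value at x = 0.5; the exact solution gives 0.125
```

Because the exact solution here is a quadratic, the central difference reproduces it essentially exactly; for harder equations the answer improves as the grid is made finer.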
This article is a child-friendly adaptation of the Wikipedia article on Numerical methods for ordinary differential equations, available under CC BY-SA 4.0.