Press "Enter" to skip to content

Finite Differences, Part 1

This is the very first post for my website, so I’ve decided to start with something easy: finite differences. (This article assumes that you know the basics of calculus.)

What is a finite difference?
Finite differences are a method for approximating the derivative of a function using only evaluations of that function at a handful of points.

Why?
Students are typically introduced to derivatives by differentiating known analytical functions like \(x\), \(\sin(x)\), and \(\exp(x)\). In most science and engineering situations, however, the exact analytical form of the function we want to differentiate is not known; what we can do is evaluate the function several times.

A straightforward example is when the function we would like to differentiate is the result of an actual experiment. Suppose we are measuring the drag on an airfoil in a wind tunnel. As the angle-of-attack (AoA) of the airfoil changes, the drag also changes. Since we can repeat the experiment at different angles, which is equivalent to evaluating a function multiple times, we can use finite differences to compute the derivative of drag with respect to AoA.

How?
Now that I’ve given a very basic definition of what finite differences are and one of their possible uses, let’s dive into the details.

Hopefully you have already been introduced to Taylor series. A Taylor series expands a function as an infinite sum built from its derivatives evaluated at a point \(x_0\):
$$f(x) = f(x_0) + \frac{df}{dx}\bigg\vert_{x_0}\frac{(x - x_0)}{1!} + \frac{d^2f}{dx^2}\bigg\vert_{x_0}\frac{(x - x_0)^2}{2!} + \ldots$$
or:
$$f(x) = \sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n.$$
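
To see the series in action, here is a quick Python sketch of my own (the helper name `taylor_partial_sum` is just for illustration). It sums the first few terms for \(f(x) = \exp(x)\), which is convenient because every derivative of \(\exp\) at \(x_0\) is simply \(\exp(x_0)\):

```python
import math

def taylor_partial_sum(x, x0, n_terms):
    """Sum of the first n_terms terms of the Taylor series of exp about x0."""
    return sum(
        math.exp(x0) / math.factorial(n) * (x - x0) ** n
        for n in range(n_terms)
    )

# Watch the partial sums converge to the exact value exp(0.5).
x0, x = 0.0, 0.5
exact = math.exp(x)
for n_terms in range(1, 7):
    approx = taylor_partial_sum(x, x0, n_terms)
    print(f"{n_terms} terms: {approx:.8f}   error = {abs(approx - exact):.2e}")
```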

Note that if the function \(f(x)\) is analytic, this expression is exact. If we truncate the series, it becomes an approximation, and near \(x_0\) the error is proportional to the largest neglected term. For instance, if we retain only the first two terms, the approximation error is \(O\!\left((x-x_0)^2\right)\), and the function approximation can be written as:
$$f(x) = f(x_0) + \frac{df}{dx}\bigg\vert_{x_0}(x - x_0) + O\!\left((x-x_0)^2\right).$$
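
We can check this error estimate numerically. The sketch below (again my own illustration) keeps only the two-term linearization of \(\sin(x)\) about \(x_0\) and shrinks the distance \(h = x - x_0\); if the error is really \(O(h^2)\), halving \(h\) should cut the error by roughly a factor of four:

```python
import math

# Two-term Taylor approximation of sin about x0:
# sin(x0 + h) ~ sin(x0) + cos(x0) * h, since d(sin)/dx = cos.
x0 = 1.0
for h in [0.1, 0.05, 0.025, 0.0125]:
    linear = math.sin(x0) + math.cos(x0) * h
    error = abs(math.sin(x0 + h) - linear)
    # error / h**2 should settle near a constant (|sin''(x0)| / 2)
    print(f"h = {h:<7} error = {error:.3e}   error/h^2 = {error / h**2:.4f}")
```
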
Solving the truncated expansion above for \(\frac{df}{dx}\bigg\vert_{x_0}\) and letting \(x = x_1\), we get a finite difference approximation for the first derivative of \(f(x)\) at \(x_0\):
$$\frac{df}{dx}\bigg\vert_{x_0} = \frac{f(x_1) - f(x_0)}{x_1 - x_0} + O(x_1 - x_0).$$
Since the truncation error is proportional to \(x_1 - x_0\), we say that this finite difference approximation is first-order accurate: halving the spacing halves the error. The good thing about this derivative approximation is that it only requires the function to be evaluated twice. The downside is that we often need a better approximation than this. Future posts will explain how to find higher-order approximations as well as approximations for higher derivatives.
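
To close, here is a small Python sketch of the forward difference itself (my own illustration; `forward_difference` is just a name I chose, not library code). It approximates the derivative of \(\sin(x)\) at \(x_0 = 1\) and confirms the first-order behavior, since the exact derivative \(\cos(x_0)\) is known:

```python
import math

def forward_difference(f, x0, h):
    """First-order forward difference approximation of f'(x0)."""
    return (f(x0 + h) - f(x0)) / h

# The exact derivative of sin is cos, so we can measure the error.
x0 = 1.0
exact = math.cos(x0)
for h in [0.1, 0.05, 0.025, 0.0125]:
    approx = forward_difference(math.sin, x0, h)
    error = abs(approx - exact)
    # error / h should settle near a constant for a first-order method
    print(f"h = {h:<7} error = {error:.3e}   error/h = {error / h:.4f}")
```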
