# Numerical Methods (NM)

# Solutions of equations

**TO DO: Rate of convergence - NMe3; not covered until chapter 6**

**e1-4** Bisection method:

**1)** Find an interval estimate (two $x$-coordinates which straddle the root, one on each side). These can be confirmed by checking for a change of sign
**2)** Find the mid-point and calculate $f(x)$ there
**3)** Out of the three $x$-coordinates now available, find the smallest interval which has a change of sign
**4)** Repeat steps 2 and 3 until the root is approximated to the required accuracy
**5)** Stop once the left and right ends of the interval are equal when rounded to the required number of decimal places/significant figures
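The steps above can be sketched in Python (a minimal sketch; the function name `bisect` and the example equation $x^3 - x - 1 = 0$ are illustrative, not from the notes):

```python
def bisect(f, a, b, dps):
    """Bisection: repeatedly halve the sign-change interval until both
    endpoints agree when rounded to the required decimal places."""
    assert f(a) * f(b) < 0, "interval must straddle the root"
    while round(a, dps) != round(b, dps):
        m = (a + b) / 2          # step 2: mid-point
        if f(a) * f(m) <= 0:     # step 3: sign change is in [a, m]
            b = m
        else:                    # otherwise it is in [m, b]
            a = m
    return round(a, dps)

# Example: the root of x^3 - x - 1 = 0 in [1, 2] is about 1.3247
root = bisect(lambda x: x**3 - x - 1, 1, 2, 4)
```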

## Newton-Raphson method:

- This method draws a tangent at a starting approximation. The $x$-coordinate where this tangent intersects the $x$-axis is used as the approximation for the next iteration. As the iterations progress, the value of $x$ should converge towards the true value
- The initial approximation is written as $x_0$
- Use the following equation (2nd page of the exam formula booklet): $x_{n+1} = x_{n} - \frac{f(x_{n})}{f'(x_{n})}$
- In most cases, if two iterations produce the same approximation when rounded to the required number of decimal places/significant figures, no further iterations will be required. To confirm this, use the change of sign method to check the error bounds (like in the C3 coursework)
- This method has second order convergence
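A minimal sketch of the iteration in Python (the example $f(x) = x^2 - 2$, whose root is $\sqrt{2}$, is an illustrative choice):

```python
def newton_raphson(f, df, x0, iterations):
    """Iterate x_{n+1} = x_n - f(x_n)/f'(x_n) from the initial guess x0."""
    x = x0
    for _ in range(iterations):
        x = x - f(x) / df(x)
    return x

# Example: f(x) = x^2 - 2 has the root sqrt(2) = 1.41421356...
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.5, iterations=5)
```

Because of the second order convergence, the number of correct digits roughly doubles with each iteration, so very few iterations are needed.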

## Secant method:

- This is similar to the Newton-Raphson method, but doesn't require differentiation. Instead, two points are connected to create an approximation for the tangent to the curve. The $x$-intercept of this tangent then creates the next point
**1)** Start with two $x$-estimates, $x_0$ and $x_1$. They can either straddle the root or both lie on the same side of it
**2)** Calculate $x_2$ with $x_2 = \frac{x_{0}f(x_{1})\ -\ x_{1}f(x_{0})}{f(x_{1})\ -\ f(x_{0})}$
- You must memorize this equation. The order of the subscripts is 0110 10
- In terms of $x_r$ and $x_{r-1}$: $x_{r+1} = \frac{x_{r-1}f(x_{r})\ -\ x_{r}f(x_{r-1})}{f(x_{r})\ -\ f(x_{r-1})}$
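The iteration can be sketched in Python as follows (the starting values and iteration count are illustrative; $f(x) = 4x^3 - 5x + 1$ is the function used in the calculator walkthrough below, and one of its roots is $(\sqrt{2}-1)/2$):

```python
def secant(f, x0, x1, iterations):
    """x_{r+1} = (x_{r-1} f(x_r) - x_r f(x_{r-1})) / (f(x_r) - f(x_{r-1}))."""
    for _ in range(iterations):
        if f(x1) == f(x0):
            break  # already converged to machine precision
        x0, x1 = x1, (x0 * f(x1) - x1 * f(x0)) / (f(x1) - f(x0))
    return x1

# Example: find the root of 4x^3 - 5x + 1 near 0.2071
root = secant(lambda x: 4 * x**3 - 5 * x + 1, 0, 0.5, 20)
```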

## Quick secant method on a Casio calculator:

- Enter $x_0$ and save to memory slot A
- Enter $x_1$ and save to memory slot B
- Use the secant formula with As and Bs (e.g. for $f(x) = 4x^3 - 5x + 1$):

$\frac{A(4B^{3}-5B+1)\ -\ B(4A^{3}-5A+1)}{(4B^{3}-5B+1)-(4A^{3}-5A+1)}$
- Press =
- Now save this to slot C
- Copy the contents of slot B to slot A [ALPHA+B, STO+A (note: don't press ALPHA before pressing A on the second step)]
- Copy the contents of slot C to slot B [ALPHA+C, STO+B (same applies here)]
- Use the up arrow (if you followed the above steps exactly you'll need to press it 3 times) to return to the equation in terms of A and B
- Now repeat again from the step 'press ='

**Summary**: save $x_0$ to A and $x_1$ to B. Enter the equation and press =. Save the answer to slot C. Copy slot B to A and then C to B. Return to the original equation, press = and repeat.

## False position (linear interpolation) method:

- This method uses two starting approximations, one on each side of the root
- It then connects them with a straight line (a chord), and uses the intersection of that line with the $x$-axis as the next value. This value is used alongside whichever starting approximation is on the opposite side of the root to it - so a change of sign check is required after each iteration
- $a$ and $b$ are used as the approximations; the root must lie between these two values
- The formula is the same as with the secant method: $c = \frac{af(b)-bf(a)}{f(b)-f(a)}$
- After applying this formula, check the signs of $f(a)$, $f(b)$ and $f(c)$, then continue with $c$ and whichever of $a$ or $b$ gives a change of sign
- This method has first order convergence
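A sketch of the method in Python (the example equation $x^3 - x - 1 = 0$ on $[1, 2]$ and the iteration count are illustrative):

```python
def false_position(f, a, b, iterations):
    """Keep [a, b] straddling the root: after computing c, replace the
    endpoint whose f-value has the same sign as f(c)."""
    assert f(a) * f(b) < 0, "a and b must straddle the root"
    for _ in range(iterations):
        c = (a * f(b) - b * f(a)) / (f(b) - f(a))
        if f(a) * f(c) <= 0:   # sign change in [a, c]
            b = c
        else:                  # sign change in [c, b]
            a = c
    return c

# Example: x^3 - x - 1 = 0 on [1, 2]; the root is 1.32471795...
root = false_position(lambda x: x**3 - x - 1, 1, 2, 30)
```

Note the difference from the secant sketch: here the sign check decides which endpoint is replaced, so the root always stays bracketed.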

## Fixed-point iteration method (rearranging to $x=g(x)$ form):

**1)** Rearrange the equation, e.g. $f(x) = 0$, into the form $x = g(x)$
**2)** Now use this as an iterative formula, $x_{n+1} = g(x_{n})$, repeating for the necessary number of iterations
- If the value of $x$ diverges from the root, try another rearrangement - this method only works when $|g'(x)| < 1$ near the root
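A minimal sketch in Python, using the illustrative example $x^2 - x - 1 = 0$ rearranged as $x = \sqrt{x + 1}$ (here $g'(x) = \frac{1}{2\sqrt{x+1}}$, which is below $1$ near the root, so the iteration converges - to the golden ratio):

```python
def fixed_point(g, x0, iterations):
    """Iterate x_{n+1} = g(x_n); converges when |g'(x)| < 1 near the root."""
    x = x0
    for _ in range(iterations):
        x = g(x)
    return x

# Example: x = sqrt(x + 1), starting from x_0 = 1
root = fixed_point(lambda x: (x + 1) ** 0.5, 1.0, 50)
```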

# Errors

**v1-5** Notation:

- $X$ is the symbol for an approximation of $x$

## Error:

- Error$\ = X - x$
- Absolute error$\ = |\rm{error}| = |X - x|$
- Relative error$\ = \frac{X - x}{x}$
- Absolute relative error$\ = |\frac{X-x}{x}|$
- Percentage error$\ = |\frac{X-x}{x}| \times 100$
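A quick check of these definitions in Python, taking $X = 3.14$ as an approximation to $x = \pi$ (an illustrative choice):

```python
import math

x = math.pi  # true value
X = 3.14     # approximation

error = X - x                          # negative: X underestimates x
absolute_error = abs(X - x)
relative_error = (X - x) / x
absolute_relative_error = abs((X - x) / x)
percentage_error = absolute_relative_error * 100   # about 0.0507%
```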

## Intervals:

- The interval *1.25 ≤ $x$ ≤ 1.35* can be written as [1.25, 1.35]
- The interval *1.25 ≤ $x$ < 1.35* can be written as [1.25, 1.35) - the round bracket represents greater than/less than (endpoint excluded), and the square bracket represents greater than or equal to/less than or equal to (endpoint included)

## Function error propagation:

- If there is an error in $x$, then $f(x)$ will also contain an error. The $\pm$ on $f(x)$ will be approximately the error in $x$ multiplied by $f'(x)$. As a formula this is written as $f(x + h) \approx f(x) + hf'(x)$
- For example, if we know that $x = 3 \pm 2$, then $h = 2$
- So the central value is $f(3)$
- The upper bound is approximately $f(3+2) = f(5)$
- If we don't have the original function definition, $f(3+2)$ can be approximated with the above formula as $f(3) + 2f'(3)$
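A sketch of this in Python (the helper name `propagated_bounds` and the example $f(x) = x^2$ with $x = 3 \pm 0.1$ are illustrative):

```python
def propagated_bounds(f, df, x, h):
    """Approximate bounds of f over x ± h using f(x ± h) ≈ f(x) ± h·f'(x)."""
    centre = f(x)
    spread = abs(h * df(x))
    return centre - spread, centre + spread

# Example: x = 3 ± 0.1 with f(x) = x^2 gives roughly 9 ± 0.6
# (the exact upper value f(3.1) = 9.61 is close to the estimate 9.6)
low, high = propagated_bounds(lambda x: x * x, lambda x: 2 * x, 3, 0.1)
```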

# Numerical differentiation

**c1-2** The forward difference method:

- This method approximates the gradient of the tangent drawn to the curve at the point $x$. To estimate the gradient, it uses a second point at $x + h$ with value $f(x + h)$
- $f'(x) \approx$ $\frac{f(x + h)\ -\ f(x)}{h}$ where $h$ is a small number
- As $h$ decreases, the approximation will become more accurate provided that $f(x)$ can be calculated to a high accuracy

## The central difference method:

- This method instead calculates the gradient of a line drawn through $f(x - h)$ and $f(x + h)$
- $f'(x) \approx$ $\frac{f(x + h)\ -\ f(x - h)}{2h}$
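Both formulas side by side in Python (the test function $\sin x$ at $x = 1$ and the step $h = 0.001$ are illustrative; the exact gradient is $\cos 1 \approx 0.5403$):

```python
import math

def forward_difference(f, x, h):
    """f'(x) ≈ (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

def central_difference(f, x, h):
    """f'(x) ≈ (f(x + h) - f(x - h)) / (2h)."""
    return (f(x + h) - f(x - h)) / (2 * h)

# Gradient of sin(x) at x = 1; central difference is markedly more accurate
fwd = forward_difference(math.sin, 1, 0.001)
cen = central_difference(math.sin, 1, 0.001)
```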

# Numerical integration

**c3-4** The midpoint rule:

- This method splits the curve into equal width strips, makes each strip a rectangle up to the height of the curve, and sums the area of each rectangle
- Use $M_n = h \, (\mathrm{sum\ of\ } f(\mathrm{midpoint}))$ where $n$ is the number of strips and $h$ is the strip width
- $n$ is usually a power of $2$, starting at $n = 2^0 = 1$
- This is a second-order method, because absolute error is proportional to $h^2$. Therefore, doubling $n$ would reduce the error by a factor of $2^2 = 4$
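A sketch of the rule in Python, using the illustrative integral $\int_0^1 x^2 \,dx = \frac{1}{3}$ to show the second-order behaviour (doubling $n$ cuts the error by a factor of about 4):

```python
def midpoint_rule(f, a, b, n):
    """M_n = h * (sum of f at the midpoint of each of the n strips)."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# Errors of M_4 and M_8 against the exact value 1/3
e4 = 1 / 3 - midpoint_rule(lambda x: x * x, 0, 1, 4)
e8 = 1 / 3 - midpoint_rule(lambda x: x * x, 0, 1, 8)
```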

## The trapezium rule:

- This method creates trapezia of equal width touching the curve, and sums their areas
- Use the formula $T_n = \frac{h}{2}(\, f_0 + f_n + 2(f_1 + f_2 + f_3 + ... + f_{n-1})\,)$ where $n$ is the number of strips, $h$ is the width of each strip and $f_k$ is $f(a + kh)$ ($a$ is the lower bound of the integral)
- A shortcut is to use the relationship $T_{2n} = \frac{T_n + M_n}{2}$. With this method, you only need to compute the first trapezium rule estimate directly, provided that you have the corresponding midpoint rule estimates
- Like the midpoint rule, this method is also second-order
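A sketch in Python, again using the illustrative integral $\int_0^1 x^2 \,dx$, which also checks the shortcut $T_{2n} = \frac{T_n + M_n}{2}$:

```python
def trapezium_rule(f, a, b, n):
    """T_n = h/2 * (f_0 + f_n + 2*(f_1 + ... + f_{n-1}))."""
    h = (b - a) / n
    middle = sum(f(a + k * h) for k in range(1, n))
    return h / 2 * (f(a) + f(b) + 2 * middle)

f = lambda x: x * x
t4 = trapezium_rule(f, 0, 1, 4)
t8 = trapezium_rule(f, 0, 1, 8)
m4 = 0.25 * sum(f((k + 0.5) * 0.25) for k in range(4))  # midpoint estimate M_4
# t8 should equal (t4 + m4) / 2
```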

## Simpson's rule:

- The midpoint rule is about twice as accurate as the trapezium rule for $n$ strips
- The midpoint and trapezium rule estimates will straddle the exact answer
- These two properties can be used to derive the Simpson's rule formula to more accurately determine the value of the integral without increasing the strip count
- $S_{2n} =$ $\frac{2M_n + T_n}{3}$
- The following can also be derived: $S_{2n} =$ $\frac{4T_{2n} - T_n}{3}$
- Simpson's rule is fourth-order. In other words, doubling $n$ would reduce the error by a factor of $2^4 = 16$
- With two successive Simpson's estimates, the formula $S_\infty =$ $\frac{16S_{2n} - S_n}{15}$ will extrapolate it to infinity for the absolute best estimate with this strip count [coursework only, not required in the exam]
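The $S_{2n} = \frac{2M_n + T_n}{3}$ formula can be sketched in Python (the example $\int_0^1 x^4 \,dx = 0.2$ is an illustrative choice that shows the fourth-order behaviour: doubling the strip count cuts the error by a factor of about 16):

```python
def simpson(f, a, b, n):
    """S_2n = (2*M_n + T_n)/3, built from the n-strip midpoint and
    trapezium estimates (so the result effectively uses 2n strips)."""
    h = (b - a) / n
    m = h * sum(f(a + (k + 0.5) * h) for k in range(n))
    t = h / 2 * (f(a) + f(b) + 2 * sum(f(a + k * h) for k in range(1, n)))
    return (2 * m + t) / 3

# Errors against the exact value 0.2
e1 = simpson(lambda x: x**4, 0, 1, 1) - 0.2
e2 = simpson(lambda x: x**4, 0, 1, 2) - 0.2
```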

# Approximations to functions

**f1-2** Newton's forward difference interpolation:

- This method is used to create a polynomial equation to fit a set of points which are equally spaced in the $x$ axis
- The $x$ coordinates are called $x_0, x_1, x_2, ...$ and the $y$ coordinates for these $x$ values are called $f_0, f_1, f_2, ...$
**1)** Create a finite difference table as shown below, where $\Delta f_i = f_{i+1} - f_i$ and $\Delta^2 f_i = \Delta f_{i+1} - \Delta f_i$:

| $x_i$ | $f_i$ | $\Delta f_i$ | $\Delta^2 f_i$ |
| --- | --- | --- | --- |
| $x_0$ | $f_0$ | $\Delta f_0 = f_1 - f_0$ | $\Delta^2 f_0 = \Delta f_1 - \Delta f_0$ |
| $x_1$ | $f_1$ | $\Delta f_1 = f_2 - f_1$ | |
| $x_2$ | $f_2$ | | |

Continue this for as many columns as necessary until you reach a value of $0$. If all of the values in the $\Delta^k f_i$ column are equal, then the polynomial is of $k$th order
**2)** Now use Newton's interpolating polynomial:

$f(x) = f_0 + \frac{x - x_0}{h}\Delta f_0 + \frac{(x - x_0)(x - x_1)}{2!\,h^2}\Delta^2 f_0 + \frac{(x - x_0)(x - x_1)(x - x_2)}{3!\,h^3}\Delta^3 f_0 + ...$

- Remember that you may not have to simplify this if you are only asked to find $f(x)$ for a value of $x$ - just substitute this $x$ value into the unsimplified interpolating polynomial
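Both steps can be sketched in Python (the function name `newton_forward` is illustrative; the sample points are taken from $f(x) = x^2$ so the result can be checked):

```python
def newton_forward(xs, fs, x):
    """Evaluate Newton's forward-difference polynomial at x.
    xs must be equally spaced; fs are the corresponding y-values."""
    h = xs[1] - xs[0]
    # Step 1: build the difference table, keeping the top of each column
    # (f_0, Δf_0, Δ²f_0, ...)
    tops = [fs[0]]
    column = list(fs)
    while len(column) > 1:
        column = [column[i + 1] - column[i] for i in range(len(column) - 1)]
        tops.append(column[0])
    # Step 2: sum f_0 + (x-x_0)/h Δf_0 + (x-x_0)(x-x_1)/(2! h^2) Δ²f_0 + ...
    total, product = tops[0], 1.0
    for k in range(1, len(tops)):
        product *= (x - xs[k - 1]) / (k * h)
        total += product * tops[k]
    return total

# Points from f(x) = x^2, so interpolating at 1.5 should give 2.25
value = newton_forward([0, 1, 2, 3], [0, 1, 4, 9], 1.5)
```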

## Lagrange's method:

- This method does not require equally spaced points
- Lowercase letters are used to represent $x$ coordinates and uppercase letters represent $y$ coordinates: e.g. $(a, A), (b, B), (c, C), ...$
- For a quadratic, $f(x) =$ $\frac{A(x-b)(x-c)}{(a-b)(a-c)}$ $+$ $\frac{B(x-c)(x-a)}{(b-c)(b-a)}$ $+$ $\frac{C(x-a)(x-b)}{(c-a)(c-b)}$
- With higher order polynomials, the above can be extended (e.g. for a cubic, the first term would be $\frac{A(x-b)(x-c)(x-d)}{(a-b)(a-c)(a-d)}$)
- In the exam, the required degree will be $n - 1$ where $n$ is the number of points
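Lagrange's formula generalises naturally in code, since each term is the $y$-value times a product over all the other points. A sketch in Python (the function name and the sample points are illustrative; the quadratic through $(0,1)$, $(2,5)$, $(3,10)$ is $x^2 + 1$):

```python
def lagrange(points, x):
    """Evaluate Lagrange's interpolating polynomial through the given
    (x, y) points at x; the points need not be equally spaced."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        term = yi  # e.g. A(x-b)(x-c) / ((a-b)(a-c)) for the first point
        for j, (xj, _) in enumerate(points):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# The quadratic through (0, 1), (2, 5), (3, 10) is x^2 + 1, so f(1) = 2
value = lagrange([(0, 1), (2, 5), (3, 10)], 1)
```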