Final Exam

Due: Fri 5/18

This set of problems constitutes the final exam for the course. The work you submit should be your own. MATLAB should be used for all problems that require computation; however, you may not use MATLAB's Symbolic Math Toolbox.

Please submit this assignment in my office no later than 6pm on Friday 5/18 (last day of exams).

I will be available by email to address questions or mistakes that may arise. Please email me if you feel a problem is unclear.

No late assignments will be accepted.


  1. [20 pts] This problem considers the consequences of rounding using double precision. Assume that when a number is entered into the computer the "round to nearest" rule is used, and if there is a tie then the smaller value is picked (this rule is used to make the problem easier).

    1. For what real numbers $x$ will the computer claim the inequalities $1 < x < 2$ hold?

    2. For what real numbers will the computer claim $x=4$?

    3. Suppose it is stated that there is a floating-point number $x_f$ that is the exact solution of $x^2 - 2 = 0$. Why is this not possible? Also, suppose $\bar{x}_f$ and $\bar{\bar{x}}_f$ are the closest floating-point numbers to the left and right of $\sqrt{2}$, respectively. What does $\bar{\bar{x}}_f - \bar{x}_f$ equal?
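    The spacing of doubles near $\sqrt{2}$ can be probed numerically. A Python sketch (Python doubles are the same IEEE 754 binary64 format as MATLAB's default):

    ```python
    import math

    x = math.sqrt(2.0)        # nearest double to the real number sqrt(2)
    gap = math.ulp(x)         # spacing of consecutive doubles at this magnitude
    print(gap)                # one unit in the last place for numbers in [1, 2)
    print(x * x - 2.0)        # nonzero: x does not satisfy x^2 - 2 = 0 exactly
    ```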

  2. [20 pts] This exercise explores how to use Newton's method to evaluate an inverse function. That is, given $y = g(x)$ for some function $g$, the inverse function satisfies $x = g^{-1}(y)$. The problem is: given $y$, what is the value of $x$?

    1. Assuming $y$ is given, and setting $f(x) = y - g(x)$, show that Newton's method gives

      \[ x_{i+1} = x_i + \frac{y- g(x_i)}{ g^\prime(x_i)}, \quad i=0,1, \ldots \]
    2. Use the result from part (a) to evaluate $e^2$ and $e^{-3}$ using the $\ln(x)$ function.

    3. Use the result from part (a) to evaluate $\cos^{-1}(1/2)$ and $\cos^{-1}(1/3)$.

    4. The error function is defined as

      \[ \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-s^2}\mathrm{d}s. \]

      The inverse error function is denoted as $\mathrm{erf}^{-1}(x)$. Use the result from part (a) to evaluate $\mathrm{erf}^{-1}(1/2)$ and $\mathrm{erf}^{-1}(1/3)$. You can use MATLAB's erf command to evaluate the error function.
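    As a sketch of the iteration in part (a) (written in Python for illustration; the exam itself calls for MATLAB, and the helper name `inverse_newton` is my own):

    ```python
    import math

    def inverse_newton(g, gprime, y, x0, tol=1e-12, maxit=50):
        """Evaluate x = g^{-1}(y) by applying Newton's method to f(x) = y - g(x)."""
        x = x0
        for _ in range(maxit):
            step = (y - g(x)) / gprime(x)   # the update from part (a)
            x += step
            if abs(step) <= tol * max(1.0, abs(x)):
                break
        return x

    # Example in the spirit of part (b): e^2 is the root of ln(x) - 2 = 0.
    x = inverse_newton(math.log, lambda t: 1.0 / t, y=2.0, x0=5.0)
    print(x)   # converges to exp(2)
    ```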

  3. [20 pts] This problem concerns the equation $\mathbf{Ax = b}$, where

    \[ \mathbf{A} = \begin{pmatrix}1 & -1 \\ 0 & \alpha\end{pmatrix}, \quad \alpha>0. \]
    1. For what values of $\alpha$ is this matrix ill-conditioned? Make sure to identify what norm you are using.

    2. Let $\mathbf{x_c}$ be the computed solution. Suppose the residual $\mathbf{r =b - Ax_{c}}$ is small but non-zero. For what values of $\alpha$, if any, will the error $\mathbf{e = x - x_c}$ (where $\mathbf{x}$ is the exact solution) be large?

    3. Suppose the error $\mathbf{e}$ is small but non-zero. For what values of $\alpha$, if any, will the residual $\mathbf{r}$ be large?
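    The conditioning can be probed numerically before doing the analysis by hand. A Python sketch (`np.linalg.cond` with `np.inf` gives the $\infty$-norm condition number):

    ```python
    import numpy as np

    def cond_inf(alpha):
        """Infinity-norm condition number of the matrix A in this problem."""
        A = np.array([[1.0, -1.0], [0.0, alpha]])
        return np.linalg.cond(A, np.inf)

    # Scan several magnitudes of alpha to see where the conditioning degrades.
    for alpha in (1e-6, 1e-3, 1.0, 1e3, 1e6):
        print(alpha, cond_inf(alpha))
    ```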

  4. [30 pts] This problem concerns Runge's function

    \[ f(x) = \frac{1}{1+x^2}, \quad x\in [-4,4]. \]
    1. Find the global polynomial interpolants of $f(x)$ over the interval $[-4,4]$ using $n = 3$, $5$, and $9$ equally spaced interpolation points. Write the exact expressions in standard, Newton, and Lagrange forms.

    2. Plot $f(x)$ and these three interpolants on a single graph. Which approximation might be the best?
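    One way to set up the computation numerically (a Python sketch using `numpy.polyfit` for the standard form; the Newton and Lagrange forms still need to be worked out by hand):

    ```python
    import numpy as np

    f = lambda x: 1.0 / (1.0 + x**2)   # Runge's function

    def interpolant(n):
        """Degree n-1 interpolant of f at n equally spaced nodes on [-4, 4]."""
        xs = np.linspace(-4.0, 4.0, n)
        return np.poly1d(np.polyfit(xs, f(xs), n - 1))

    p9 = interpolant(9)
    print(p9(0.0))   # at the node x = 0 the interpolant reproduces f(0) = 1
    ```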

  5. [30 pts] This problem concerns finding least-squares polynomial approximations for Runge's function

    \[ f(x) = \frac{1}{1+x^2}, \quad x\in[-4,4], \] using the inner product \[ \langle f,g\rangle = \int_{-4}^4 f(x)g(x)\mathrm{d}x. \]
    1. Find the polynomials $P_2(x)$, $P_4(x)$, and $P_8(x)$ of degrees $2$, $4$, and $8$ which minimize

      \[ \int_{-4}^4 (P(x) - f(x))^2\mathrm{d}x. \]
    2. Plot $f(x)$ and these three polynomials $P_2, P_4, P_8$ on a single graph. Which approximation might be the best?
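    Since the Symbolic Math Toolbox is off-limits, the inner products can be computed by numerical quadrature. A Python sketch using Legendre polynomials, which are orthogonal under this inner product after the change of variable $x = 4t$:

    ```python
    import numpy as np

    f = lambda x: 1.0 / (1.0 + x**2)
    t, w = np.polynomial.legendre.leggauss(80)   # Gauss nodes/weights on [-1, 1]

    def coeff(k):
        """c_k = <f, P_k> / <P_k, P_k> with x = 4t (the Jacobian factor 4 cancels)."""
        Pk = np.polynomial.legendre.Legendre.basis(k)(t)
        return np.sum(w * f(4.0 * t) * Pk) / np.sum(w * Pk * Pk)

    # The constant term c_0 is the mean value of f over [-4, 4].
    print(coeff(0))
    ```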

  6. [20 pts] Occasionally you will see someone try to adjust their data in an attempt to use Simpson’s rule. To explain, the goal is to evaluate the integral

    \[ \int_{x_{i-1}}^{x_{i+1}}f(x)\mathrm{d}x. \]

    Assume that the value of $f(x)$ is known at $x_{i-1}$ and $x_{i+1}$ but not at $x_i$. The question is, can you use the data to find an approximation for $f(x_i)$ that will enable you to use Simpson’s rule, and in the process get a better result than you would get using the trapezoidal rule?

    1. What approximation of the integral is obtained using the trapezoidal rule?

    2. One possibility for approximating $f(x_i)$ is to use piecewise linear interpolation using the two data points $(x_{i-1}, f_{i-1})$ and $(x_{i+1}, f_{i+1})$. Doing this, and inserting the resulting approximation for $f(x_i)$ into Simpson’s rule, what results? How does this differ from your answer in part (a)?

    3. Suppose one just assumes that there are constants $A$ and $B$ so that $f_i = Af_{i-1} + Bf_{i+1}$. With this, Simpson’s rule reduces to an integration rule of the form

      \[ \int_{x_{i-1}}^{x_{i+1}} f(x)\mathrm{d}x = w_1 f(x_{i-1}) + w_2 f(x_{i+1}). \]

      What do $w_1$ and $w_2$ have to be to maximize the precision (the degree of the polynomial for which it is exact)? How does this differ from your answer in part (a)?
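    The degree of precision of any candidate two-point rule can be checked by testing it on monomials. A Python sketch (the helper name and the sample interval $[-1,1]$ are my own choices):

    ```python
    def precision(w1, w2, x0=-1.0, x2=1.0, max_deg=8):
        """Largest k such that w1*f(x0) + w2*f(x2) integrates x^0, ..., x^k exactly."""
        deg = -1
        for k in range(max_deg + 1):
            exact = (x2**(k + 1) - x0**(k + 1)) / (k + 1)   # integral of x^k
            approx = w1 * x0**k + w2 * x2**k
            if abs(approx - exact) > 1e-12:
                break
            deg = k
        return deg

    print(precision(1.0, 1.0))   # trapezoidal weights on [-1, 1]
    ```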

  7. [20 pts] To solve the differential equation $y^\prime = f(t, y)$ one can use the finite difference equation

    \[ y_{i+1} = y_i + h\left[\theta f(t_i,y_i) + (1-\theta)f(t_{i+1},y_{i+1})\right] \]

    where $\theta$ is a number that satisfies $0 \leq \theta\leq 1$ (one first chooses $\theta$ from this interval and then uses the above formula to calculate the solution).

    1. For what value(s) of $\theta$ is the method explicit, and for which value(s) of $\theta$ is it implicit?

    2. For what value(s) of $\theta$ is the method A-stable? (This means no condition on $h$!)

    3. For what value(s) of $\theta$ is this method second-order accurate? What is its order of accuracy for the other values of $\theta$?
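    For the linear test equation $y^\prime = \lambda y$ the implicit update can be solved in closed form, which is the starting point of a stability analysis. A Python sketch (note the convention in the formula above: $\theta$ multiplies the explicit term):

    ```python
    def R(z, theta):
        """Amplification factor: y_{i+1} = R(h*lam) * y_i for y' = lam*y."""
        # From y_{i+1} = y_i + z*(theta*y_i + (1 - theta)*y_{i+1}), solved for y_{i+1}.
        return (1.0 + theta * z) / (1.0 - (1.0 - theta) * z)

    print(R(-0.5, 1.0))   # theta = 1: R(z) = 1 + z
    print(R(-0.5, 0.0))   # theta = 0: R(z) = 1/(1 - z)
    ```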

  8. [30 pts] This problem concerns the IVP involving the Bernoulli equation

    \[ y^\prime + y^3 = \frac{y}{a+t}, \quad t>0, \]

    where $y(0) = 1$. You are to solve this problem using both the trapezoidal and RK4 methods.

    1. Verify that the exact solution is

      \[ y(t) = \frac{a+t}{\sqrt{\beta + \tfrac{2}{3}(a+t)^3}} \]

      and determine $\beta$ from the initial condition.

    2. Assuming $a = 0.01$, on the same axes plot the exact and the two numerical solutions for $0 \leq t \leq 3$ in the case of when the number of time-steps is $N = 80$.

    3. Redo (b) for $N = 20$, $N = 40$, and $N = 160$ (there should be one plot for each $N$). If one of the methods is unstable you can exclude it from the plot (for that value of $N$), but make sure to state this in your write-up.
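    A generic RK4 stepper is the core of part (b). It is sketched here in Python and exercised on the neutral test problem $y^\prime = y$ (exact solution $e^t$) rather than on the Bernoulli equation itself:

    ```python
    import math

    def rk4_solve(f, y0, t0, t1, N):
        """Integrate y' = f(t, y) from t0 to t1 with N classical RK4 steps."""
        h = (t1 - t0) / N
        t, y = t0, y0
        for _ in range(N):
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return y

    y = rk4_solve(lambda t, y: y, 1.0, 0.0, 1.0, 80)
    print(abs(y - math.e))   # global error is O(h^4)
    ```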

  9. [10 pts] Suppose you want to compute the solution of

    \[ y^\prime = e^{-y} + t^5 \]

    for $0\leq t \leq 1$ using $100$ time-steps ($h = 0.01$). You have the option to try several methods, specifically: (i) Forward Euler, (ii) Backward Euler, (iii) trapezoidal method, (iv) RK2, and (v) RK4.

    1. Which one would you expect to complete the calculation the fastest? Why?

    2. Which one would you expect to be the most accurate? Why?

    3. If stability is a concern which method would be best? Why?