diff --git a/.buildinfo b/.buildinfo new file mode 100644 index 0000000..f79839b --- /dev/null +++ b/.buildinfo @@ -0,0 +1,4 @@ +# Sphinx build info version 1 +# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. +config: a04813c0986e8ad766c0ad5c80985d25 +tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 0000000..e69de29 diff --git a/README.md b/README.md new file mode 100644 index 0000000..3a28b4f --- /dev/null +++ b/README.md @@ -0,0 +1,3 @@ +#GitHub Pages + +Last update of sphinx html documentation from [731a4bf](https://github.com/rhwhite/numeric_2024/tree/731a4bf692f1979a2e0aab3b9d4012aa60f2a4b9) diff --git a/_images/C_cycle_problem.png b/_images/C_cycle_problem.png new file mode 100644 index 0000000..e73aa73 Binary files /dev/null and b/_images/C_cycle_problem.png differ diff --git a/_images/beta.png b/_images/beta.png new file mode 100644 index 0000000..5f52b21 Binary files /dev/null and b/_images/beta.png differ diff --git a/_images/det-plot.png b/_images/det-plot.png new file mode 100644 index 0000000..ece01d0 Binary files /dev/null and b/_images/det-plot.png differ diff --git a/_images/error.png b/_images/error.png new file mode 100644 index 0000000..f14b550 Binary files /dev/null and b/_images/error.png differ diff --git a/_images/euler.png b/_images/euler.png new file mode 100644 index 0000000..63772f4 Binary files /dev/null and b/_images/euler.png differ diff --git a/_images/float1.png b/_images/float1.png new file mode 100644 index 0000000..12910f8 Binary files /dev/null and b/_images/float1.png differ diff --git a/_images/midpoint.png b/_images/midpoint.png new file mode 100644 index 0000000..661b2d5 Binary files /dev/null and b/_images/midpoint.png differ diff --git a/_images/steady_b.png b/_images/steady_b.png new file mode 100644 index 0000000..f1ab7ff Binary files /dev/null and b/_images/steady_b.png differ diff --git a/_images/steady_bw.png b/_images/steady_bw.png new file mode 100644 index 0000000..fc2eb73 Binary files /dev/null and b/_images/steady_bw.png differ diff --git a/_images/steady_g.png b/_images/steady_g.png new file mode 100644 index 0000000..1205e44 Binary files /dev/null and b/_images/steady_g.png differ diff --git a/_images/steady_w.png b/_images/steady_w.png new file mode 100644 index 0000000..ffd7339 Binary files /dev/null and b/_images/steady_w.png differ diff --git a/_images/taylor.png b/_images/taylor.png new file mode 100644 index 0000000..5d686ec Binary files /dev/null and b/_images/taylor.png differ diff --git a/_images/temp_b.png b/_images/temp_b.png new file mode 100644 index 0000000..a688ba9 Binary files /dev/null and b/_images/temp_b.png differ diff --git a/_images/temp_bw.png b/_images/temp_bw.png new file mode 100644 index 0000000..1b54d76 Binary files /dev/null and b/_images/temp_bw.png differ diff --git a/_images/temp_g.png b/_images/temp_g.png new file mode 100644 index 0000000..a1b71ec Binary files /dev/null and b/_images/temp_g.png differ diff --git a/_images/temp_w.png b/_images/temp_w.png new file mode 100644 index 0000000..56b26d8 Binary files /dev/null and b/_images/temp_w.png differ diff --git a/_sources/getting_started.rst.txt b/_sources/getting_started.rst.txt new file mode 100644 index 0000000..dabc31f --- /dev/null +++ b/_sources/getting_started.rst.txt @@ -0,0 +1,12 @@ +Getting started +=============== + +.. 
toctree:: + + Installation + + Python introduction + + Next steps + + VS code notes diff --git a/_sources/getting_started/installing_jupyter.ipynb.txt b/_sources/getting_started/installing_jupyter.ipynb.txt new file mode 100644 index 0000000..337b31d --- /dev/null +++ b/_sources/getting_started/installing_jupyter.ipynb.txt @@ -0,0 +1,186 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Student installs\n", + "\n", + "**If you already have conda or anaconda installed, skip to `Git install` below**\n", + "\n", + "\n", + "\n", + "## For MacOS new installs\n", + "\n", + "\n", + "1. Download miniconda from https://docs.conda.io/en/latest/miniconda.html -- choose the correct `Miniconda3 MacOSX 64-bit pkg` file for your Mac (Intel chip or new M1/M2 Silicon) from the menu and run it, agreeing to the licences and accepting all defaults. You should install for \"just me\"\n", + "\n", + "2. To test your installation, open a fresh terminal window and at the prompt type `which conda` (unless you are using zsh. In that case use `whence -p conda`). You should see something resembling the following output, with your username instead of `phil`:\n", + "\n", + "```\n", + "% which conda\n", + "/Users/phil/opt/miniconda3/bin/conda\n", + "```\n", + "\n", + "## For Windows new installs\n", + "\n", + "1. Download miniconda from https://docs.conda.io/en/latest/miniconda.html -- choose the `Miniconda3 Windows 64-bit`. download from the menu and run it, agreeing to the licences and accepting all defaults.\n", + "\n", + "The installer should suggest installing in a path that looks like:\n", + "\n", + "```\n", + "C:\\Users\\phil\\Miniconda3\n", + "```\n", + "\n", + "2. Once the install completes hit the windows key and start typing `anaconda`. You should see a shortcut that looks like:\n", + "\n", + "```\n", + "Anaconda Powershell Prompt\n", + "(Miniconda3)\n", + "```\n", + "\n", + "**Note that Windows comes with two different terminals `cmd` (old) and `powershell` (new). Always select the powershell version of the anaconda terminal**\n", + "\n", + "3. Select the short cut. If the install was successful you should see something like:\n", + "\n", + "```\n", + "(base) (Miniconda3):Users/phil>\n", + "```\n", + "with your username substituted for phil.\n", + "\n", + "Some useful troubleshooting websites if you have issues getting conda installed on windows: \n", + "https://stackoverflow.com/questions/54501167/anaconda-and-git-bash-in-windows-conda-command-not-found\n", + "https://stackoverflow.com/questions/44597662/conda-command-is-not-recognized-on-windows-10\n", + "\n", + "## Git install\n", + "\n", + "Inside your powershell or MacOs terminal, install git using conda:\n", + "\n", + "```\n", + "conda install git\n", + "```\n", + "\n", + "and then set it up\n", + "\n", + "```\n", + "git config --global user.name \"Phil\"\n", + "git config --global user.email phil@example.com\n", + "```\n", + "\n", + "## Github account\n", + "\n", + "To use the course materials and to work collaboratively for the project, you will need a github account. Sign up for a free account at https://github.com if you don't already have one - you will need to use the same address you configured git for above. \n", + "\n", + "Once you have your github account, you will need to set up a secure way to connect. If you think you might use github a lot, we recommend setting up an ssh connection - this is a longer set-up process, but then quicker each time you want to connect to git. 
Follow the instructions here: \n", + "\n", + "https://docs.github.com/en/authentication/connecting-to-github-with-ssh\n", + "\n", + "A quicker set-up is to create a Personal Access Token. Follow the instructions here:\n", + "https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic\n", + "\n", + "Once you have a personal access token, you can enter it instead of your password when performing Git operations over HTTPS (see https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#using-a-personal-access-token-on-the-command-line). \n", + "\n", + "## Fork the course repository into your github account\n", + "\n", + "Now go to the course website at https://github.com/rhwhite/numeric_2024 and fork the repository. The 'fork' button is on the upper right. This creates a 'fork' or copy of the current status of the repository in your account. \n", + "\n", + "You now have your own fork of the course repository and should be taken to that page. Its name will be YourGitHubId/numeric_2024\n", + "\n", + "## Setting up the course repository\n", + "\n", + "In the terminal, change directories to your home directory (called `~` for short) and make a new directory\n", + "called `repos` to hold the course notebook repository. Change into `repos` and clone the course (do change YourGitHubId to your actual git hub id):\n", + "\n", + "```\n", + "cd ~\n", + "mkdir repos\n", + "cd repos\n", + "git clone https://github.com/YourGitHubId/numeric_2024.git\n", + "```\n", + "\n", + "## Creating the course environment\n", + "\n", + "In the terminal, execute the following commands:\n", + "\n", + "```\n", + "cd numeric_2024\n", + "conda env create -f envs/environment.yaml\n", + "conda activate numeric_2024\n", + "```\n", + "\n", + "## Opening the notebook folder and working with lab 1\n", + "\n", + "To make it possible to pull down changes to the repository (for example, as I write this section only lab1 and lab2 are available) you need to work in a copy of the notebook. So always copy the notebook to a new name. See below an example for lab1. 
I suggest you use your name rather than phil!\n", + "\n", + "```\n", + "cd ~/repos/numeric_2024/notebooks\n", + "cp lab1/01-lab1.ipynb lab1/phil-lab1.ipynb\n", + "jupyter lab\n", + "```\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.5" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/getting_started/python.ipynb.txt b/_sources/getting_started/python.ipynb.txt new file mode 100644 index 0000000..ea84fe2 --- /dev/null +++ b/_sources/getting_started/python.ipynb.txt @@ -0,0 +1,229 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# The command line shell and git\n", + "\n", + " - The default shell on OSX is bash, which is taught in this set of\n", + " lessons: or in this\n", + " [detailed bash reference](https://programminghistorian.org/en/lessons/intro-to-bash)\n", + " - if you are on Windows, powershell is somewhat similar -- here is\n", + " a table listing commands for both shell side by side taken from\n", + " this in-depth [powershell tutorial](https://programminghistorian.org/en/lessons/intro-to-powershell#quick-reference)\n", + "\n", + "## Using the command line\n", + "\n", + "### Powershell and Bash common commands\n", + "\n", + "* To go to your $HOME folder:\n", + " \n", + "```\n", + "cd ~\n", + "\n", + "or\n", + "\n", + "cd $HOME\n", + "```\n", + "\n", + "* To open explorer or finder for the current folder:\n", + "\n", + "```\n", + "windows explorer do:\n", + "\n", + " start .\n", + "\n", + "MacOs finder do:\n", + "\n", + " open .\n", + " \n", + "```\n", + " \n", + "* To move up one folder:\n", + "\n", + "```\n", + "cd ..\n", + "```\n", + "\n", + "* To save typing, remember that hitting the tab key completes filenames\n", + "\n", + "### To configure powershell on windows\n", + "\n", + "* first start a powershell terminal with admin privileges, then type:\n", + "\n", + " `set-executionpolicy remotesigned`\n", + " \n", + "* then, in your miniconda3 powershel profile, do:\n", + "\n", + " `Test-Path $profile`\n", + " \n", + " to see whether you have an existing profile.\n", + " \n", + "* if you don't have a profile, then do the following (this will overwrite an existing profile, so be aware):\n", + "\n", + " `New-Item –Path $Profile –Type File 
–Force`\n", + " \n", + "* To add to your profile, open with:\n", + "\n", + " `start $profile`\n", + " \n", + "### To configure bash or zsh on MacOS\n", + "\n", + "* open a terminal then type either\n", + "\n", + " `open .bash_profile`\n", + " \n", + " or for Catalina\n", + " \n", + " `open .zshenv`\n", + " \n", + " \n", + "\n", + "### Bash and powershell command reference\n", + "\n", + "| Cmdlet | Alias | Bash Equivalent | Description |\n", + "| ------- | ------- | ------- | ------- |\n", + "| `Get-ChildItem` | `gci` | `ls` | List the directories and files in the current location. | \n", + "| `Set-Location` | `sl` | `cd` | Change to the directory at the given path. Typing `..` rather than a path will move up one directory. |\n", + "| `Push-Location` | `pushd` | `pushd` | Changes to the directory. |\n", + "| `Pop-Location` | `popd` | `popd` | Changes back to the previous directory after using `pushd` |\n", + "| `New-Item` | `ni` | (`touch`) | Creates a new item. Used with no parameter, the item is by default a file. Using `mkdir` is a shortcut for including the parameter `-ItemType dir`. |\n", + "| `mkdir` | none | `mkdir` | Creates a new directory. (See `New-Item`.) |\n", + "| `Explorer` | `start .` | `open .`) | Open something using File Explorer (the GUI) |\n", + "| `Remove-Item` | `rm` | `rm` | Deletes something. Permanently! |\n", + "| `Move-Item` | `mv` | `mv` | Moves something. Takes two arguments - first a filename (i.e. its present path), then a path for its new location (including the name it should have there). By not changing the path, it can be used to rename files. |\n", + "| `Copy-Item` | `cp` | `cp` | Copies a file to a new location. Takes same arguments as move, but keeps the original file in its location. |\n", + "| `Write-Output` | `write` | `echo` | Outputs whatever you type. Use redirection to output to a file. Redirection with `>>` will add to the file, rather than overwriting contents. |\n", + "| `Get-Content` | `gc` | `cat` | Gets the contents of a file and prints it to the screen. Adding the parameter `-TotalCount` followed by a number x prints only the first x lines. Adding the parameter `-Tail` followed by a number x prints only the final x lines. |\n", + "| `Select-String` | `sls` | (`grep`) | Searches for specific content. |\n", + "| `Measure-Object` | `measure` | (`wc`) | Gets statistical information about an object. Use `Get-Content` and pipe the output to `Measure-Object` with the parameters `-line`, `-word`, and `-character` to get word count information. |\n", + "| `>` | none | `>` |Redirection. Puts the output of the command to the left of `>` into a file to the right of `>`. |\n", + "| `\\|` | none | `\\|` |Piping. Takes the output of the command to the left and uses it as the input for the command to the right. |\n", + "| `Get-Help` | none | `man` | Gets the help file for a cmdlet. Adding the parameter `-online` opens the help page on TechNet. |\n", + "| `exit` | none | `exit` | Exits PowerShell |\n", + "\n", + "Remember the keyboard shortcuts of `tab` for auto-completion and the up and down arrows to scroll through recent commands. These shortcuts can save a lot of typing!\n", + "\n", + "## Git\n", + "\n", + "- A good place to go to learn git fundamentals is this lesson\n", + " \n", + "\n", + "## Pulling changes from the github repository\n", + "\n", + "When we commit changes to the master branch and push to our github\n", + "repository, you'll need to fetch those changes to keep current. 
To do\n", + "that:\n", + "\n", + "1) go to your fork of the repository on github. You should see a statement like \"This branch is 1 commit ahead, 2 commits behind rwhite:main\".\n", + "\n", + "2) click 'Fetch upstream' beside that statement and \"Fetch and merge\". Now the statement should be something like \"This branch is 2 commits ahead of rhwhite:main\"\n", + "\n", + "3) pull the changes into your own computer space. Make sure you aren't going to clobber any of your own files:\n", + " \n", + " git status\n", + " \n", + " you can ignore \"untracked files\", but pay attention to any files\n", + " labeled \"modified\". Those will be overwritten when you reset to our\n", + " commit, so copy them to a new name or folder.\n", + "\n", + "5) Get the new commit with\n", + " \n", + " git pull\n", + " \n", + "# Books and tutorials\n", + "\n", + " - We will be referring to Phil Austin's version of David Pine's Introduction to Python:\n", + " http://phaustin.github.io/pyman. The notebooks for each chapter are included\n", + " in the [numeric_students/pyman](https://github.com/phaustin/numeric_students/tree/downloads/pyman) folder.\n", + " - If you are new to python, I would recommend you also go over the\n", + " following short ebook in detail:\n", + " - Jake Vanderplas' [Whirlwind tour of\n", + " Python](https://github.com/jakevdp/WhirlwindTourOfPython/blob/f40b435dea823ad5f094d48d158cc8b8f282e9d5/Index.ipynb)\n", + " is available both as a set of notebooks which you can clone from\n", + " github or as a free ebook:\n", + " \n", + " - to get the notebooks do:\n", + " \n", + " git clone \n", + " - We will be referencing chapters from:\n", + " - A Springer ebook from the UBC library: [Numerical\n", + " Python](https://login.ezproxy.library.ubc.ca/login?qurl=https%3a%2f%2flink.springer.com%2fopenurl%3fgenre%3dbook%26isbn%3d978-1-4842-0554-9)\n", + " - with code on github:\n", + " \n", + " git clone\n", + " \n", + " - Two other texts that are available as a set of notebooks you can\n", + " clone with git:\n", + " - \n", + " - \n", + " - My favorite O'Reilly book is:\n", + " - [Python for Data\n", + " Analysis](http://shop.oreilly.com/product/0636920023784.do)\n", + " - Some other resources:\n", + " - If you know Matlab, there is [Numpy for Maltab\n", + " users](http://wiki.scipy.org/NumPy_for_Matlab_Users)\n", + " - Here is a [python\n", + " translation](http://nbviewer.jupyter.org/gist/phaustin/1af744215e51562d010b9f6a19c0724c)\n", + " by [Don\n", + " MacMillen](http://blogs.siam.org/from-matlab-guide-to-ipython-notebook/)\n", + " of [Chapter 1 of his matlab\n", + " guide](http://clouds.eos.ubc.ca/~phil/courses/atsc301/downloads_pw/matlab_guide_2nd.pdf)\n", + " - [Numpy beginners\n", + " guide](http://www.packtpub.com/numpy-mathematical-2e-beginners-guide/book)\n", + " - [Learning\n", + " Ipython](http://www.packtpub.com/learning-ipython-for-interactive-computing-and-data-visualization/book)\n", + " - [The official Python\n", + " tutorial](http://docs.python.org/tut/tut.html)\n", + " - [Numpy\n", + " cookbook](http://www.packtpub.com/numpy-for-python-cookbook/book)\n", + " - A general computing introduction: [How to think like a computer\n", + " scientist](http://www.openbookproject.net/thinkcs/python/english3e)\n", + " with an [interactive\n", + " version](http://interactivepython.org/courselib/static/thinkcspy/index.html)\n", + " - [Think Stats](http://greenteapress.com/wp/think-stats-2e/)\n", + " - [Think Bayes](http://greenteapress.com/wp/think-bayes/)" + ] + } + ], + "metadata": { + "jupytext": 
{ + "cell_metadata_filter": "-all", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.1" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/_sources/getting_started/vscode.ipynb.txt b/_sources/getting_started/vscode.ipynb.txt new file mode 100644 index 0000000..31e1666 --- /dev/null +++ b/_sources/getting_started/vscode.ipynb.txt @@ -0,0 +1,104 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# VScode notes #\n", + "\n", + "## Install vscode from https://code.visualstudio.com/download\n", + "\n", + "Install the command line version as well. On windows, this should be done as part of the install. On MacOS, you need to open the vscode command pallette (⌘shift-P) and type\n", + "```\n", + "Shell Command: Install code in PATH\n", + "```\n", + "\n", + "once that is done, you should be able to start vscode from the command line in a particular\n", + "folder by typing:\n", + "\n", + "```\n", + "code .\n", + "```\n", + "\n", + "## Suggested Extensions ##\n", + "\n", + "On the left you will see four boxes, one moved up. Here you can add extensions. You will need some just to run notebooks etc. We suggest:\n", + "- Python\n", + "- Pylance (this one installed with Python for me)\n", + "- Jupyter (again came with Python for me)\n", + "- C/C++\n", + "- Clipboard\n", + "- Code Spell Checker\n", + "- Gitlens\n", + "\n", + "## Notebooks ##\n", + "\n", + "In class I will demonstrate using VSCode with notebooks and with python modules. When you open either, you will be asked to choose your kernel (numeric_2022) and an interpreter (the python associated with numeric_2022).\n", + "\n", + "The notebooks are not VSCode ready and you will see non-rendered pieces. 
Technology changes and we are always behind.\n", + "\n", + "I will show you some of the strengths of VSCode for editing notebooks focusing on its real editor powers: spellchecking and multiple corrections\n", + "\n", + "## Python Modules ##\n", + "\n", + "I will show you in class some of the super features of editing in VScode including:\n", + "- code colouring\n", + "- built in information on functions\n", + "- click on variable, see everywhere it is used\n", + "- checks alignment (whitespace)\n", + "- marks changes you've made\n", + "- typo in variable leading to undefined\n", + "- undefined function: colour changes to white\n", + "- making a change, then using the git integration to save, stage and commit\n" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs", + "text_representation": { + "extension": ".py", + "format_name": "percent", + "format_version": "1.3", + "jupytext_version": "1.3.2" + } + }, + "kernelspec": { + "display_name": "Python 3.7.6 64-bit ('numeric': conda)", + "language": "python", + "name": "python37664bitnumericcondabd5c031d404d4597ae8310d0bb6bf5f0" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.6" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/grad_schedule.rst.txt b/_sources/grad_schedule.rst.txt new file mode 100644 index 0000000..2f6990a --- /dev/null +++ b/_sources/grad_schedule.rst.txt @@ -0,0 +1,86 @@ +Dates for Graduate Class (EOSC 511) +===================================== + +January +------- +* Monday Jan 8 : First Class + +* Jan 15, 2 pm: Lab 1 Reading and Objectives Quiz + +* Monday Jan 15: Second Class + +* Jan 19, 6 pm: Lab 1 Assignment + +* January 19: Last date for withdrawal from class without a "W" standing + +* Jan 22, 2 pm: Lab 2 Reading and Objectives Quiz + +* Monday Jan 22: Third Class + +* Jan 26, 6 pm:Lab 2 Assignment + +* Jan 29, 2 pm: Lab 3 Reading and Objectives Quiz + +* Monday Jan 29: Fourth Class + +February +---------- +* Feb 2, 6 pm: Lab 3 Assignment + +* Feb 5, 2 pm: Lab 4 Reading and Objectives Quiz + +* Monday, Feb 5: Fifth Class + +* Feb 9, 6 pm: Lab 4 Assignment + +* Monday Feb 12: Sixth Class + +* Feb 16, 6 pm: Miniproject + +* Feb 19-23: Reading Week, No class + +* Feb 26, 2 pm: Lab 5a Reading and Objectives Quiz + +* Monday Feb 26: Seventh Class + +March +----- + +* Mar 1, 6 pm: Lab 5a Assignment + +* Mar 1, 6 pm: Teams for Projects (a list of names due) + +* March 1: Last date to withdraw from course with 'W' + +* Mar 4, 2 pm: Lab 7a Reading and Objectives Quiz + +* Monday Mar 4: Eighth Class + +* Mar 8, 6 pm: Lab 7a Assignment + +* Monday Mar 11: Ninth Class + +* Mar 15, 6 pm: Project Proposal + +* Mar 15, 6 pm: First iPeer evaluation + +* Mar 18, 2 pm: Optional Labs Reading and Objectives Quiz + +* Monday Mar 18: Tenth Class + +* Mar 22, 6 pm: Optional Labs Assignment + +* Monday Mar 25: Eleventh Class: Project 
Proposal Presentations + +April +----- + +* Monday, Apr 8: Last Class + +* Apr 8, in class, Project Presentation (for teams that did not + present in March) + +* Apr 8, 6 pm, Second iPeer evaluation + +* Apr 12, 6 pm: Project + diff --git a/_sources/gradsyllabus.rst.txt b/_sources/gradsyllabus.rst.txt new file mode 100644 index 0000000..7a19d72 --- /dev/null +++ b/_sources/gradsyllabus.rst.txt @@ -0,0 +1,262 @@ +Graduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: EOSC 511 / ATSC 506 +============================================================================================= + +Course Purpose +-------------- + +The students completing this course will be able to apply standard +numerical solution techniques to problems in Oceanographic, Atmospheric +and Earth Science. + +Meeting Times +------------- +See canvas course page for scheduled class times and location + + +Instructors +----------- + +| Rachel White, rwhite@eoas.ubc.ca +| Susan Allen, sallen@eoas.ubc.ca + +See canvas course page for office hour locations + +Prerequisites +------------- + +The course assumes a mathematics background including vector calculus +and linear algebra. Students weak in either of these areas will be +directed to readings to strengthen their knowledge. Programming +experience is greatly recommended. + +Course Structure +---------------- + +This course is not lecture based. The course is an interactive, computer +based laboratory course. The computer will lead you through the +laboratory (like a set of lab notes) and you will answer problems most +of which use the computer. The course consists of three parts. A set of +required interactive, computer based laboratory exercises, a choice of +elective laboratory exercises and a project. The project will be a +group project determined through consultation between the instructors, the students and their +supervisors. + +During the meeting times, there will be group worksheets to delve +into the material, brief presentations to help with technical +matters, time to ask questions in a group format and also individually +and time to read and work on the laboratories. + +You can use a web-browser to examine the course exercises. Point your +browser to: + +https://rhwhite.github.io/numeric_2024/notebook_toc.html + + +Grades +------ + + - Laboratory Exercises 15% (individual with collaboration, satisfactory/unsatisfactory grading) + - Quizzes 5% (individual) + - Worksheets 5% (group) + - Mini-project 15% (individual with collaboration) + - Project Proposal 10% (group) + - Project 40% (group) + - Project Oral Presentation 10% (group) + + +There will be 7 assigned exercise sets or 'Laboratory Exercises' based on the labs. +Note that these are not necessarily the same as the problems in the +lab and will generally be a much smaller set. Laboratory exercises +can be worked with partners or alone. Each student must upload their +own solution in their own words. + +The laboratory exercise sets are to be uploaded to the course CANVAS page. +Sometimes, rather than a large series of plots, you may wish to +include a summarizing table. If you do not understand the scope of a +problem, please ask. Help with the labs is +available 1) through piazza (see CANVAS) so you can contact your classmates +and ask them 2) during the weekly scheduled lab or 3) directly from the +instructors during the scheduled office hours (see canvas). + +Laboratory exercises will be graded as 'excellent', 'satisfactory' or 'unsatisfactory'. 
+Your grade on canvas will be given as: + +1.0 = excellent + +0.8 = satisfactory + +0 = unsatisfactory + +Grades will be returned within a week of the submission deadline. +If you receive a grade of 'satisfactory' or 'unsatisfactory' on your first submission, +you will be given an opportunity to resubmit the problems you got incorrect to try to +improve your grade. To get a score of 'excellent' on a resubmission, you must include +a full explanation of your understanding of why your initial answer was incorrect, and +what misconception, or mistake, you have corrected to get to your new answer. Resubmissions +will be due exactly 2 weeks after the original submission deadline. It is your responsibility +to manage the timing of the resubmission deadlines with the next laboratory exercise. + +Your final Laboratory Exercise grade will be calculated from the number of excellent, satisfactory +and unsatisfactory grades you have from the 7 exercises: +5 or more submissions at 'Excellent', none 'Unsatisfactory': 100% +3 or more submissions at 'Excellent', none 'Unsatisfactory': 90% +1 or fewer 'Excellent', none 'Unsatisfactory': 80% +1 'Unsatisfactory' submission: 70% +2 'Unsatisfactory' submissions: 60% +3 'Unsatisfactory' submissions: 50% +4 or more 'Unsatisfactory' submissions: 0%: +[1]_ + +Quizzes are done online, reflect the learning objectives of each lab +and are assigned to ensure you do the reading with enough depth to +participate fully in the class worksheets and have the background to +do the Laboratory Exercises. There will be a "grace space" policy +allowing you to miss one quiz. + +The in-class worksheets will be marked for a complete effort. There +will be a “grace space” policy allowing you to miss one class +worksheet. The grace space policy is to accommodate missed classes due +to illness, “away games” for athletes etc. In-class paper worksheets +are done as a group and are to handed in (one worksheet only per +group) at the end of the worksheet time. + +**The project will be done in groups of three to four. The project topic is to be chosen in consultation with your research supervisors and the instructors. The subject of these projects has to be ocean or atmosphere related unless the group has identified an outside supervisor who is willing to provide subject specific advice. Students without ocean/atmosphere expertise can join a ocean/atmopsheric sciences group - it will be up to the group to figure out where and how they can best contribute to the project.** + + +Assignments, quizzes, mini-projects and the project are expected on +time. Late mini-projects and projects will be marked and then the mark will be multiplied by +:math:`(0.9)^{\rm (number\ of\ days\ or\ part\ days\ late)}`. + +Set Laboratories +---------------- + +Recommended timing. Problems to be handed in can be found on the +webpage. + +- Laboratory One: One Week + +- Laboratory Two: One Week + +- Laboratory Three: One Week + +- Laboratory Four: One and a Half Weeks + +- Laboratory Five: Half a Week + +- Laboratory Seven: One Week + +Elective Laboratories +--------------------- + +Choose the one large lab (10 points) or two small labs (5 points). Time scale: one and a half weeks. + +ODE’s +~~~~~ + +- Rest of Lab 5 (5 points) + +- Lab 6 (5 points) + +PDE’s +~~~~~ + +- End of Lab 7 (5 points) + +- Lab 8 (10 points) + +- Lab 10 (5 points) + +FFT's +~~~~~ + +- Lab 9 (5 points) + + +Project +------- + +- Done in groups of three or four. Chosen in consultation with your research supervisors and the + instructors. 
Should be chosen before the elective labs. + +- Time scale three and half weeks. + + +University Statement on Values and Policies +------------------------------------------- + +UBC provides resources to support student learning and to maintain +healthy lifestyles but recognizes that sometimes crises arise and so +there are additional resources to access including those for survivors +of sex- ual violence. UBC values respect for the person and ideas of +all members of the academic community. Harassment and discrimination +are not tolerated nor is suppression of academic freedom. UBC provides +appropriate accommodation for students with disabilities and for +religious and cultural observances. UBC values academic honesty and +students are expected to acknowledge the ideas generated by others and +to uphold the highest academic standards in all of their +actions. Details of the policies and how to access support are +available here + +https://senate.ubc.ca/policies-resources-support-student-success. + + +Supporting Diversity and Inclusion +----------------------------------- + +Atmospheric Science, Oceanography and the Earth Sciences have been +historically dominated by a small subset of +privileged people who are predominantly male and white, missing out on +many influential individuals thoughts and +experiences. In this course, we would like to create an environment +that supports a diversity of thoughts, perspectives +and experiences, and honours your identities. To help accomplish this: + + - Please let us know your preferred name and/or set of pronouns. + - If you feel like your performance in our class is impacted by your experiences outside of class, please don’t hesitate to come and talk with us. We want to be a resource for you and to help you succeed. + - If an approach in class does not work well for you, please talk to any of the teaching team and we will do our best to make adjustments. Your suggestions are encouraged and appreciated. + - We are all still learning about diverse perspectives and identities. If something was said in class (by anyone) that made you feel uncomfortable, please talk to us about it + + +Academic Integrity +------------------ + +Students are expected to learn material with honesty, integrity, and responsibility. + + - Honesty means you should not take credit for the work of others, + and if you work with others you are careful to give them the credit they deserve. + - Integrity means you follow the rules you are given and are respectful towards others + and their attempts to do so as well. + - Responsibility means that you if you are unclear about the rules in a specific case + you should contact the instructor for guidance. + +The course will involve a mixture of individual and group work. We try +to be flexible about this as my priority is for you to learn the +material rather than blindly follow rules, but there are +rules. Plagiarism (i.e. copying of others work) and cheating (not +following the rules) can result in penalties ranging from zero on an +assignment to failing the course. + +**For due dates etc, please see the Detailed Schedule.** + +Not feeling well before class? +------------------------------- +What to do if you’re sick: If you’re sick, it’s important that you stay home, no matter what you think +you may be sick with (e.g., cold, flu, other). If you do miss class because of illness: +• Make a connection early in the term to another student or a group of students in the class. You can +help each other by sharing notes. 
If you don’t yet know anyone in the class, post on Piazza to connect +with other students. +• Consult the class resources on this website and on canvas. We will post the materials for each class day. +• In this class, the marking scheme is intended to provide flexibility so that you can prioritize your health +and are still be able to succeed. As such, there is a “grace space” policy allowing you to miss one in-class worksheet and one +pre-class quiz with no penalty. +• If you are concerned that you will miss a particular key activity due to illness, contact us to discuss. + +If an instructor is sick: we will do our best to stay well, but if either of us is ill, here is what you can +expect: +• The other instructor will substitute +• Your TA may help run a class +• We may have a synchronous online session or two. If this happens, you will receive an email. + +.. [1] + For assignments with a late penalty, we will consider grades of >=85% as Excellent, 60-85\% as Satisfactory, and below 60% as Unsatisfactory. + diff --git a/_sources/index.rst.txt b/_sources/index.rst.txt new file mode 100644 index 0000000..7583939 --- /dev/null +++ b/_sources/index.rst.txt @@ -0,0 +1,23 @@ +Numerical Techniques for Atmosphere, Ocean and Earth Scientists +=============================================================== + +Welcome! Start by reading the appropriate syllabus below. The "Getting Started" page will take you through the setup you will need to participate in this course - we will help you go through this in the first class. + +.. toctree:: + :maxdepth: 1 + + Undergrad Syllabus + + Undergrad Detailed Schedule + + Grad Syllabus + + Grad Detailed Schedule + + Optional texts + + Getting started + + Labs + + Rubrics diff --git a/_sources/notebook_toc.rst.txt b/_sources/notebook_toc.rst.txt new file mode 100644 index 0000000..0fc017d --- /dev/null +++ b/_sources/notebook_toc.rst.txt @@ -0,0 +1,16 @@ +Numeric notebooks +================= + +.. toctree:: + :maxdepth: 1 + + Lab 1 + Lab 2 + Lab 3 + Lab 4 + Lab 5 + Lab 6 + Lab 7 + Lab 8 + Lab 9 + Lab 10 diff --git a/_sources/notebooks/lab1/01-lab1.ipynb.txt b/_sources/notebooks/lab1/01-lab1.ipynb.txt new file mode 100644 index 0000000..74c624c --- /dev/null +++ b/_sources/notebooks/lab1/01-lab1.ipynb.txt @@ -0,0 +1,1662 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "# Laboratory 1: An Introduction to the Numerical Solution of Differential Equations: Discretization (Jan 2024)\n", + "\n", + "John M. Stockie" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems\n", + "\n", + "- [Problem One](#Problem-One)\n", + "- [Problem Two](#Problem-Two)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "\n", + "The examples and exercises in this lab are meant to illustrate the\n", + "limitations of analytical solution techniques, using several\n", + "differential equation models for simple physical systems. This is the\n", + "prime motivation for the use of numerical methods.\n", + "\n", + "After completing this lab, you will understand the process of\n", + "*discretizing* a continuous problem, and be able to derive a simple\n", + "finite difference approximation for an ordinary or partial differential\n", + "equation. 
The examples will also introduce the concepts of *accuracy*\n", + "and *stability*, which will be discussed further in Lab 2.\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- Define the term or identify: Ordinary Differential Equation, Partial\n", + " Differential Equation, Linear equation, Non-linear equation, Initial\n", + " value problem, Boundary value problem, Open Domain, and Closed\n", + " Domain.\n", + "\n", + "- Define the term, identify or perform: Forward difference\n", + " discretization, Backward difference discretization, and Centre\n", + " difference discretization.\n", + "\n", + "- Define the term: Interpolation, Convergence, and Instability.\n", + "\n", + "- Define the term or perform: Linear interpolation.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. However, if you would like additional background on any of\n", + "the following topics, then refer to the sections indicated below:\n", + "\n", + "- **Differential Equations:**\n", + "\n", + " -  [Strang (1986)](#Ref:Strang), Chapter 6 (ODE’s).\n", + "\n", + " -  [Boyce and DiPrima (1986)](#Ref:BoyceDiPrima) (ODE’s and PDE’s).\n", + "\n", + "- **Numerical Methods:**\n", + "\n", + " -  [Strang (1986)](#Ref:Strang), Section 5.1.\n", + "\n", + " -  [Garcia (1994)](#Ref:Garcia), Sections 1.4–1.5, Chapter 2 (a basic introduction to\n", + " numerical methods for problems in physics).\n", + "\n", + " -  [Boyce and DiPrima (1986)](#Ref:BoyceDiPrima), Sections 8.1–8.5, 8.7, 8.8." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Running Code Cells\n", + "\n", + "\n", + "The next cell in this notebook is a code cell. Run it by selecting it and hitting ctrl enter, or by selecting it and hitting the run button (arrow to right) in the notebook controls." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import Image\n", + "# import plotting package and numerical python package for use in examples later\n", + "import matplotlib.pyplot as plt\n", + "# import the numpy array handling library\n", + "import numpy as np\n", + "# import the quiz script\n", + "import context\n", + "from numlabs.lab1 import quiz1 as quiz" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction: Why bother with numerical methods?\n", + "\n", + "\n", + "In introductory courses in ordinary and partial differential equations\n", + "(ODE’s and PDE’s), many analytical techniques are introduced for\n", + "deriving solutions. These include the methods of undetermined\n", + "coefficients, variation of parameters, power series, Laplace transforms,\n", + "separation of variables, Fourier series, and phase plane analysis, to\n", + "name a few. When there are so many analytical tools available, one is\n", + "led to ask:\n", + "\n", + "> *Why bother with numerical methods at all?*\n", + "\n", + "The fact is that the class of problems that can be solved analytically\n", + "is *very small*. 
Most differential equations that model physical\n", + "processes cannot be solved explicitly, and the only recourse available\n", + "is to use a numerical procedure to obtain an approximate solution of the\n", + "problem.\n", + "\n", + "Furthermore, even if the equation can be integrated to obtain a closed\n", + "form expression for the solution, it may sometimes be much easier to\n", + "approximate the solution numerically than to evaluate it analytically.\n", + "\n", + "In the following two sections, we introduce two classical physical\n", + "models, seen in most courses in differential equations. Analytical\n", + "solutions are given for these models, but then seemingly minor\n", + "modifications are made which make it difficult (if not impossible) to\n", + "calculate actual solution values using analytical techniques. The\n", + "obvious alternative is to use numerical methods." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Ordinary Differential Equations\n", + "\n", + "\n", + "[lab1:sec:odes]: <#3.1-Ordinary-Differential-Equations> \"ODES\"\n", + "\n", + "In order to demonstrate the usefulness of numerical methods, let’s start\n", + "by looking at an example of a *first-order initial value problem* (or\n", + "*IVP*). In their most general form, these equations look like\n", + "\n", + "(Model ODE)\n", + "$$\\begin{array}{c}\n", + " {\\displaystyle \\frac{dy}{dt} = f(y,t),} \\\\\n", + " \\; \\\\\n", + " y(0) = y_0, \n", + " \\end{array}$$\n", + "\n", + "where\n", + "\n", + "- $t$ is the *independent variable* (in many physical systems, which\n", + " change in time, $t$ represents time);\n", + "\n", + "- $y(t)$ is the unknown quantity (or *dependent variable*) that we\n", + " want to solve for;\n", + "\n", + "- $f(y,t)$ is a known function that can depend on both $y$ and $t$;\n", + " and\n", + "\n", + "- $y_0$ is called the *initial value* or *initial condition*, since it\n", + " provides a value for the solution at an initial time, $t=0$ (the\n", + " initial value is required so that the problem has a unique\n", + " solution).\n", + "\n", + "This problem involves the first derivative of the solution, and also\n", + "provides an initial value for $y$, and hence the name “first-order\n", + "initial value problem”.\n", + "\n", + "Under certain very general conditions on the right hand side function\n", + "$f$, we know that there will be a unique solution to the problem ([Model ODE](#lab1:eq:modelode)).\n", + "However, only in very special cases can we actually write down a\n", + "closed-form expression for the solution.\n", + "\n", + "In the remainder of this section, we will leave the general equation,\n", + "and investigate a specific example related to heat conduction. It will\n", + "become clear that it is the problems which *do not have exact solutions*\n", + "which are the most interesting or meaningful from a physical standpoint.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### *Example One*\n", + "\n", + "\n", + "> Consider a small rock, surrounded by air or water,\n", + "which gains or loses heat only by conduction with its surroundings\n", + "(there are no radiation effects). 
If the rock is small enough, then we\n", + "can ignore the effects of diffusion of heat within the rock, and\n", + "consider only the flow of heat through its surface, where the rock\n", + "interacts with the surrounding medium.\n", + "\n", + "> It is well known from experimental observations that the rate at which\n", + "the temperature of the rock changes is proportional to the difference\n", + "between the rock’s surface temperature, $T(t),$ and the *ambient\n", + "temperature*, $T_a$ (the ambient temperature is simply the temperature\n", + "of the surrounding material, be it air, water, …). This relationship is\n", + "expressed by the following ordinary differential equation\n", + "
\n", + "(Conduction 1d)\n", + "$$% \\textcolor[named]{Red}{\\frac{dT}{dt}} = -\\lambda \\,\n", + "% \\textcolor[named]{Blue}{(T-T_a)} .\n", + " \\underbrace{\\frac{dT}{dt}}_{\\begin{array}{c} \n", + " \\mbox{rate of change}\\\\\n", + " \\mbox{of temperature}\n", + " \\end{array}}\n", + " = -\\lambda \\underbrace{(T-T_a)}_{\\begin{array}{c} \n", + " \\mbox{temperature}\\\\\n", + " \\mbox{difference}\n", + " \\end{array}} .$$\n", + "
\n", + " \n", + ">and is commonly known as *Newton’s\n", + "Law of Cooling*. (The parameter $\\lambda$ is defined to be\n", + "$\\lambda = \\mu A/cM$, where $A$ is the surface area of the rock, $M$ is\n", + "its mass, $\\mu$ its thermal conductivity, and $c$ its specific heat.)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Quiz on Newton's Law of Cooling \n", + "\n", + "\n", + "$\\lambda$ is positive? True or False?\n", + "\n", + "In the following, replace 'xxxx' by 'True', 'False', 'Hint 1' or 'Hint 2' and run the cell ([how to](#Running-Code-Cells))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.conduction_quiz(answer = 'xxxx'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If we assume that $\\lambda$ is a constant, then the solution to this\n", + "equation is given by \n", + "\n", + "
\n", + "(Conduction solution)\n", + "$$T(t) = T_a + (T(0)-T_a)e^{-\\lambda t},$$\n", + "
\n", + "\n", + "where $T(0)$ is the initial temperature.\n", + "\n", + "**Mathematical Note:** Details of the solution can be found in the [Appendix](#Solution-to-the-Heat-Conduction-Equation)\n", + "\n", + "\n", + "In order to obtain realistic value of the parameter $\\lambda$, let our\n", + "“small” rock be composed of granite, with mass of $1\\;gram$, which\n", + "corresponds to a $\\lambda \\approx 10^{-5}\\;sec^{-1}$.\n", + "\n", + "Sample solution curves are given in Figure [Conduction](#lab1:fig:conduction)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='conduction/conduction.png',width='60%') " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure: Conduction Plot of solution curves $T(t)$ for $T_0=-10,15,20,30$; parameter\n", + "values: $\\lambda=10^{-5}$, $T_a=20$.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Demo: Conduction\n", + "[lab1:demo:conduction]: <#Demo:-Conduction> \"Conduction Demo\"\n", + "\n", + "Here is an interactive example that investigates the behaviour of the solution.\n", + "\n", + "The first we import the function that does the calculation and plotting. You need to run this cell ([how to](#Running-Code-Cells)) to load it. Loading it does not run the function. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from numlabs.lab1 import temperature_conduction as tc" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You need to call the function. Simpliest call is next cell. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# simple call to temperature demo\n", + "tc.temperature()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After running as is try changing To = To (the initial temperature), Ta = Ta (the ambient temperature) or la = λ (the effective conductivity) to investigate changes in the solution." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# setting different values \n", + "# (note this uses the defaults again as written, you should change the values)\n", + "tc.temperature(Ta = 20, To = np.array([-10., 10., 20., 30.]), la = 0.00001)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example Two\n", + "\n", + "\n", + "\n", + "Suppose that the rock in the previous\n", + "example has a $\\lambda$ which is *not* constant. For example, if that\n", + "the rock is made of a material whose specific heat varies with the\n", + "temperature or time, then $\\lambda$ can be a function of $T$ or $t$.\n", + "This might happen if the material composing the rock undergoes a phase\n", + "transition at a certain critical temperature (for example, a melting ice\n", + "pellet). The problem is now a *non-linear* one, for which analytical\n", + "techniques may or may not provide a solution.\n", + "\n", + "If $\\lambda=\\lambda(T)$, a function of temperature only, then the exact\n", + "solution may be written as\n", + "$$T(t) = T_a + \\exp{\\left[-\\int^{t}_{0} \\lambda(T(s))ds \\right]},$$\n", + "which involves an integral that may or may not be evaluated\n", + "analytically, in which case we can only approximate the integral.\n", + "Furthermore, if $\\lambda$ is a function of both $T$ and $t$ which is\n", + "*not separable* (cannot be written as a product of a function of $T$ and\n", + "$t$), then we may not be able to write down a closed form for the\n", + "solution at all, and we must resort to numerical methods to obtain a\n", + "solution.\n", + "\n", + "Even worse, suppose that we don’t know $\\lambda$ explicitly as a\n", + "function of temperature, but rather only from experimental measurements\n", + "of the rock (see Figure [Table](#lab1:fig:table) for an example). 
\n", + "\n", + "| i | Temperature ($T_i$) | Measured $\\lambda_i$ |\n", + "| - | :------------------: | :-------------------: |\n", + "| 0 | -5.0 | 2.92 |\n", + "| 1 | -2.0 | 1.59 |\n", + "| 2 | 1.0 | 1.00 |\n", + "| 3 | 4.0 | 2.52 |\n", + "| 4 | 7.0 | 3.66 | \n", + "| 5 | 10.0 | 4.64 |" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename=\"table/table-interp.png\",width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Table: A rock with $\\lambda$ known only at a sequence of discrete temperature\n", + "values, from experimental measurements. The function $\\lambda(T)$ can be\n", + "represented approximately using linear interpolation (and the resulting\n", + "approximate function can then be used to solve the problem\n", + "numerically.\n", + "
\n", + "\n", + "Then there is\n", + "no way to express the rock’s temperature as a function, and analytical\n", + "methods fail us, since we do not know the values at points between the\n", + "given values. One alternative is to approximate $\\lambda$ at\n", + "intermediate points by joining successive points with straight lines\n", + "(this is called *linear interpolation*), and then use the resulting\n", + "function in a numerical scheme for computing the solution.\n", + "\n", + "As the above example demonstrates, even for a simple ODE such as [1-d conduction](#lab1:eq:conduction1d), there\n", + "are situations where analytical methods are inadequate." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Partial Differential Equations\n", + "\n", + "\n", + "#### Example Three\n", + "\n", + "[lab1:exm:diffusion1d]: <#Example-Three> \"Example 3\"\n", + "\n", + "The rock in [Example One](#Example-One) was\n", + "considered to be small enough that the effects of heat diffusion in the\n", + "interior were negligible in comparison to the heat lost by conduction\n", + "through its surface. In this example, consider a rock that is *not\n", + "small*, and whose temperature changes are dominated by internal\n", + "diffusion effects. Therefore, it is no longer possible to ignore the\n", + "spatial dependence in the problem.\n", + "\n", + "For simplicity, we will add spatial dependence in one direction only,\n", + "which corresponds to a “one-dimensional rock”, or a thin rod. Assume\n", + "that the rod is insulated along its sides, so that heat flows only along\n", + "its length, and possibly out the ends (see Figure [Rod](#lab1:fig:rock-1d))." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='conduction/rod.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Rod: A thin rod can be thought of as a model for a one-dimensional\n", + "rock.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Consequently, the temperature varies only with position, $x$, and time,\n", + "$t$, and can be written as a function $u(x,t)$. The temperature in the\n", + "rod is governed by the following PDE $$u_t = \\alpha^2 u_{xx},$$ for\n", + "which we have to provide an initial temperature $$u(x,0) = u_0(x),$$ and\n", + "boundary values $$u(0,t)=u(1,t)=0,$$ where\n", + "\n", + "- $\\alpha^2$ is the *thermal diffusivity* of the material,\n", + "\n", + "- $u_0(x)$ is the initial temperature distribution in the rod, and\n", + "\n", + "- the boundary conditions indicate that the ends of the rod are held\n", + " at constant temperature, which we’ve assumed is zero.\n", + "\n", + "Thermal diffusivity is a quantity that depends only on the material from\n", + "which the bar is made. It is defined by\n", + "$$\\alpha^2 = \\frac{\\kappa}{\\rho c},$$\n", + "where $\\kappa$ is the thermal\n", + "conductivity, $\\rho$ is the density, and $c$ is the specific heat. A\n", + "typical value of the thermal diffusivity for a granite bar is\n", + "$0.011\\;cm^2/sec$, and $0.0038\\;cm^2/sec$ for a bar made of brick.\n", + "\n", + "Using the method of *separation of variables*, we can look for a\n", + "temperature function of the form $u(x,t)=X(x) \\cdot T(t)$, which leads\n", + "to the infinite series solution\n", + "$$u(x,t) = \\sum_{n=1}^\\infty b_n e^{-n^2\\pi^2\\alpha^2 t}\\sin{(n\\pi x)},$$\n", + "where the series coefficients are\n", + "$$b_n = 2 \\int_0^1 u_0(x) \\sin{(n\\pi x)} dx.$$\n", + "\n", + "**Mathematical Note:** Details of the derivation can be found in any introductory text in PDE’s\n", + "(for example, [Boyce and DiPrima (1986)](#Ref:BoyceDiPrima) [p. 549])." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We do manage to obtain an explicit formula for the solution, which can\n", + "be used to calculate actual values of the solution. However, there are\n", + "two obvious reasons why this formula is not of much practical use:\n", + "\n", + "1. The series involves an infinite number of terms (except for very\n", + " special forms for the initial heat distribution … such as the one\n", + " shown below). We might be able to truncate the series, since each\n", + " term decreases exponentially in size, but it is not trivial to\n", + " decide how many terms to choose in order to get an accurate answer\n", + " and here we are already entering the realm of numerical\n", + " approximation.\n", + "\n", + "2. Each term in the series requires the evaluation of an integral. When\n", + " these cannot be integrated analytically, we must find some way to\n", + " approximate the integrals … numerical analysis rears its head once\n", + " again!\n", + "\n", + "For most physical problems, an analytical expression cannot be obtained,\n", + "and the exact formula is not of much use.\n", + "\n", + "However, consider a very special case, when the initial temperature\n", + "distribution is sinusoidal, $$u_0(x) = \\sin(\\pi x).$$ For this problem,\n", + "the infinite series collapses into a single term\n", + "$$u(x,t) = e^{-\\pi^2\\alpha^2t}\\sin{\\pi x}.$$\n", + "\n", + "Sample solution curves are given in Figure [1d Diffusion](#lab1:fig:diffusion-1d)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='diffusion/diffusion.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure 1d-diffusion Temperature vs. position curves at various times, for heat diffusion\n", + "in a rod with sinusoidal initial temperature distribution and parameter\n", + "value $\\alpha=0.2$.\n", + "
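As a quick illustration (not part of the lab's `numlabs` package), curves like those in the figure above can be generated directly from the single-term solution; the sketch below assumes `numpy` and `matplotlib` are available, uses the caption's value $\alpha = 0.2$, and picks arbitrary plotting times:

```python
import numpy as np
import matplotlib.pyplot as plt

alpha = 0.2                     # thermal diffusivity used in the figure caption
x = np.linspace(0.0, 1.0, 101)  # positions along the rod

# single-term exact solution u(x,t) = exp(-pi^2 alpha^2 t) sin(pi x)
for t in [0.0, 1.0, 2.0, 4.0, 8.0]:   # illustrative times
    u = np.exp(-np.pi**2 * alpha**2 * t) * np.sin(np.pi * x)
    plt.plot(x, u, label=f"t = {t}")

plt.xlabel("x")
plt.ylabel("u(x, t)")
plt.legend()
plt.show()
```

Because only a single Fourier mode is present, no series truncation or numerical integration is needed here; each curve is simply a damped sine.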
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Movie: Diffusion\n", + "Here is a movie of the exact solution to the diffusion problem. Run the cell ([how to](#Running-Code-Cells)), then run the video. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "import IPython.display as display\n", + "\n", + "vid = display.YouTubeVideo(\"b4D2ktTtw7E\", modestbranding=1, rel=0, width=800)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Summary\n", + "\n", + "This section is best summed up by the insightful comment of [Strang (1986)](#Ref:Strang)\n", + "[p. 587]:\n", + "\n", + "**Nature is nonlinear.**\n", + "\n", + "Most problems arising in physics (which are non-linear) cannot be solved\n", + "analytically, or result in expressions that have little practical value,\n", + "and we must turn to numerical solution techniques." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Discretization\n", + "\n", + "\n", + "When computing analytical solutions to differential equations, we are\n", + "dealing with *continuous functions*; i.e. functions that depend continuously\n", + "on the independent variables. A computer, however, has only finite\n", + "storage capacity, and hence there is no way to represent continuous\n", + "data, except approximately as a sequence of *discrete* values.\n", + "\n", + "### Example Four\n", + "\n", + "> We already saw an example of a discrete function in\n", + "Example [Two](#Example-Two) where the rate function $\\lambda$, depended on the temperature. If $\\lambda$ is not known by\n", + "some empirical formula, then it can only be determined by experimental\n", + "measurements at a discrete set of temperature values. In\n", + "Figure [Table](#lab1:fig:table), $\\lambda$ is given at a sequence of six\n", + "temperature points ($(T_i, \\lambda_i)$, for $i = 0, 1, \\dots, 5)$),\n", + "and so is an example of a *discrete function*.\n", + "\n", + "> The process of interpolation, which was introduced in\n", + "Example [Two](#Example-Two), will be considered in more\n", + "detail next.\n", + "\n", + "### Example Five\n", + "\n", + "\n", + "> Consider the two continuous functions\n", + "$$f(x)=x^3-5x \\;\\; {\\rm and} \\;\\; g(x)=x^{2/3} .$$\n", + "(In fact, $g(x)$ was the function used to generate the values $\\lambda(T)$ in\n", + "[Example Two](#Example-Two)).\n", + "\n", + "> The representation of functions using mathematical notation or graphs is\n", + "very convenient for mathematicians, where continuous functions make\n", + "sense. However, a computer has a limited storage capacity, and so it can\n", + "represent a function only at a finite number of discrete points $(x, y)$.\n", + "\n", + "> One question that arises immediately is: *What do we do if we have to\n", + "determine a value of the function which is not at one of the discrete\n", + "points?* The answer to this question is to use some form of\n", + "*interpolation* – namely to use an approximation procedure\n", + "to estimate values of the function at points between the known values.\n", + "\n", + "> For example, linear interpolation approximates the function at\n", + "intermediate points using the straight line segment joining the two\n", + "neighbouring discrete points. 
There are other types of interpolation\n", + "schemes that are more complicated, a few of which are:\n", + "\n", + ">- quadratic interpolation: every two sucessive points are joined by a\n", + " quadratic polynomial.\n", + "\n", + ">- cubic splines: each pair of points is joined by a cubic polynomial\n", + " so that the function values and first derivatives match at each\n", + " point.\n", + "\n", + ">- Fourier series: instead of polynomials, uses a sum of $\\sin nx$ and\n", + " $\\cos nx$ to approximate the function (Fourier series are useful in\n", + " analysis, as well as spectral methods).\n", + "\n", + ">- Chebyshev polynomials: another type of polynomial approximation\n", + " which is useful for spectral methods.\n", + "\n", + ">- …many others …\n", + "\n", + ">For details on any of these interpolation schemes, see a numerical\n", + "analysis text such as that by [Burden and Faires (1981)](#Ref-BurdenFaires)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "An application of linear interpolation to discrete versions of the\n", + "functions $f$ and $g$ is shown in Figure [f and g](#lab1:fig:discrete-f)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='discrete/f.png', width='60%') " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='discrete/g.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + ">
\n", + "Figure f and g: The functions $f$ and $g$ are known only at discrete points. The\n", + "function can be approximated at other values by linear interpolation,\n", + "where straight line segments are used to join successive points.\n", + "
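For readers who want to experiment before running the interactive demo further down, here is a minimal sketch of linear interpolation using `numpy.interp`; the six-point grids and the sample intervals are illustrative choices, not the ones used to produce the figure:

```python
import numpy as np

# the two functions from Example Five
def f(x):
    return x**3 - 5 * x

def g(x):
    return x**(2 / 3)

# sample each function at six discrete points (illustrative intervals)
xf = np.linspace(-2.0, 2.0, 6)
xg = np.linspace(0.0, 1.0, 6)

# evaluate the linear interpolant midway between the known points
# and compare it with the true function values
xf_mid = 0.5 * (xf[:-1] + xf[1:])
xg_mid = 0.5 * (xg[:-1] + xg[1:])

print("f exact :", f(xf_mid))
print("f interp:", np.interp(xf_mid, xf, f(xf)))
print("g exact :", g(xg_mid))
print("g interp:", np.interp(xg_mid, xg, g(xg)))
```

As in the figure, the largest error for $g$ shows up in the interval next to $x=0$, where the function changes most rapidly.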
\n", + "\n", + "> Depending on the function, or number of location of the points chosen,\n", + "the approximation may be more or less accurate. In\n", + "Figure [f and g](#lab1:fig:discrete-f), it is not clear which function is\n", + "approximated more accurately. In the graph of $f(x)$, the error seems to\n", + "be fairly small throughout. However, for the function $g(x)$, the error\n", + "is large near $x=0$, and then very small elsewhere. This problem of\n", + "*accuracy* of discrete approximations will come up again and again in\n", + "this course." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Demo: Interpolation\n", + "[lab1:demo:discrete]: <#Demo:-Interpolation> \"Interpolation Demo\"\n", + "Here is an interactive example demonstrating the use of interpolation (linear and cubic) in approximating functions. \n", + "\n", + "The next cell imports a module containing two python functions that interpolate the two algebraic functions, f and g ([Figure f and g](#lab1:fig:discrete-f)). You need to run this cells ([how to](#Running-Code-Cells)) to load them." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from numlabs.lab1 import interpolate as ip" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once you have loaded the module, you can call the interpolation routines as ip.interpol_f(pn) and ip.interpol_g(pn). pn is the number of points used the interpolation. Watch what changing pn does to the solutions." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ip.interpol_f(6)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ip.interpol_g(6)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Interpolation Quiz \n", + "\n", + "\n", + " The accuracy of an approximation using\n", + "linear or cubic interpolation improves as the number of points is\n", + "increased. True or False?\n", + "\n", + "In the following, replace 'xxxx' by 'True', 'False', or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.interpolation_quiz(answer = 'xxx'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When solving differential equations numerically, it is essential to\n", + "reduce the continuous problem to a discrete one. The basic idea is to\n", + "look for an approximate solution, which is defined at a finite number of\n", + "discrete points. This set of points is called a *grid*. Consider the\n", + "one-dimensional conduction problem of Example [One, Conduction](#Example-One),\n", + "which in its most general form reads\n", + "\n", + "
\n", + "(Conduction Equation)\n", + " $$\\frac{dT}{dt} = -\\lambda(T,t) \\, (T-T_a),$$\n", + "
\n", + "\n", + " with initial temperature $T(0)$.\n", + "\n", + "When we say we want to design a numerical procedure for solving this\n", + "initial value problem, what we want is a procedure for constructing a\n", + "sequence of approximations,\n", + "$$T_0, \\, T_1, \\, \\ldots, \\, T_i, \\, \\ldots,$$\n", + "defined at a set of\n", + "discrete $t$-points,\n", + "$$t_0 < t_1 < t_2 < \\cdots < t_i < \\cdots,$$\n", + "which we take to be equally spaced, with spacing $\\Delta t$, so that $t_i = t_0 + i\\Delta t$.\n", + "\n", + "Figure Grid: A grid of equally-spaced points, $t_i=t_0+i\\Delta t$, for $i=0,1,2,\\ldots$.\n", + "\n", + "\n", + "This process of reducing a continuous problem to one in a finite number\n", + "of discrete unknowns is called *discretization*. The actual mechanics of\n", + "discretizing differential equations are introduced in the following\n", + "section." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Discretization Quiz \n", + "\n", + "\n", + "What phrase best describes \"discretization\"?\n", + "\n", + "**A** The development and analysis of methods for the\n", + "solution of mathematical problems on a computer.\n", + "\n", + "**B** The process of replacing continuous functions by\n", + "discrete values.\n", + "\n", + "**C** Employing the discrete Fourier transform to analyze the\n", + "stability of a numerical scheme.\n", + "\n", + "**D** The method by which one can reduce an initial value\n", + "problem to a set of discrete linear equations that can be solved on a\n", + "computer. \n", + "\n", + "In the following, replace 'x' by 'A', 'B', 'C', 'D' or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.discretization_quiz(answer = 'x'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Summary\n", + "\n", + "The basic idea in this section is that continuous functions can be\n", + "approximated by discrete ones, through the process of\n", + "*discretization*. In the course of looking at discrete\n", + "approximations in the interactive example, we introduced the idea of the\n", + "*accuracy* of an approximation, and showed that increasing the accuracy\n", + "of an approximation is not straightforward.\n", + "\n", + "We introduced the notation for approximate solutions to differential\n", + "equations on a grid of points. The mechanics of discretization as they\n", + "apply to differential equations will be investigated further in the\n", + "remainder of this Lab, as well as in Lab Two." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Difference Approximations to the First Derivative\n", + "\n", + "\n", + "It only remains to write a discrete version of the differential equation ([Conduction Equation](#lab1:eq:conduction))\n", + "involving the approximations $T_i$. The way we do this is to approximate\n", + "the derivatives with *finite differences*. If this term is new to you,\n", + "then you can think of it as just another name for a concept you have\n", + "already seen in calculus. Remember the *definition of the derivative of\n", + "a function $y(t)$*, where $y^\\prime(t)$ is written as a limit of a\n", + "divided difference:\n", + "\n", + "
\n", + "(limit definition of derivative)\n", + "$$y^\\prime(t) = \\lim_{\\Delta t\\rightarrow 0} \\frac{y(t+\\Delta t)-y(t)}{\\Delta t}.\n", + " $$ \n", + "
\n", + "\n", + " We can apply the same idea to approximate\n", + "the derivative $dT/dt=T^\\prime$ in ([Conduction Equation](#lab1:eq:conduction)) by the *forward difference formula*,\n", + "using the discrete approximations, $T_i$:\n", + "\n", + "
\n", + "(Forward Difference Formula)\n", + "$$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_i}{\\Delta t}.$$\n", + "
\n", + "\n", + "### Example Six\n", + "\n", + "In order to understand the ability of the formula ([Forward Difference Formula](#lab1:eq:forward-diff)) to approximate the\n", + "derivative, let’s look at a specific example. Take the function\n", + "$y(x)=x^3-5x$, and apply the forward difference formula at the point\n", + "$x=1$. The function and its tangent line (the short line segment with\n", + "slope $y^\\prime(1)$) are displayed in Figure [Tangents](#lab1:fig:deriv)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='deriv/deriv.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + ">
\n", + "Figure Tangents: Plot of the function $y=x^3-5x$ and the forward difference\n", + "approximations to the derivative for various values of $\\Delta t$\n", + "
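A small numerical experiment makes the same point; the snippet below (illustrative only) evaluates the forward difference quotient at $x=1$ and compares it with the exact slope $y^\prime(1) = 3(1)^2 - 5 = -2$:

```python
def y(x):
    return x**3 - 5 * x

exact = -2.0   # y'(1) for y = x^3 - 5x

for dt in [1.0, 0.5, 0.1, 0.01]:
    approx = (y(1.0 + dt) - y(1.0)) / dt   # forward difference at x = 1
    print(f"dt = {dt:5.2f}   forward difference = {approx:9.4f}   error = {approx - exact:8.4f}")
```

The printed error shrinks roughly in proportion to $\Delta t$.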
\n", + "\n", + "> Each of the remaining line segments represents the forward difference\n", + "approximation to the tangent line for different values of $\\Delta t$, which are\n", + "simply *the secant lines through the points $(t, y(t))$ and\n", + "$(t+\\Delta t, y(t+\\Delta t))$*. Notice that the approximation improves as $\\Delta t$ is\n", + "reduced. This motivates the idea that grid refinement improves the\n", + "accuracy of the discretization …but not always (as we will see in the\n", + "coming sections)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Investigation\n", + "\n", + "Investigate the use of the forward difference approximation of the derivative in the following interactive example. \n", + "\n", + "The next cell loads a python function that plots a function f(x) and approximates its derivative at $x=1$ based on a second x-point that you chose (xb). You need to run this cell ([how to](#Running-Code-Cells)) to load it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from numlabs.lab1 import derivative_approx as da" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once you have loaded the function you can call it as da.plot_secant(xb) where xb the second point used to estimate the derivative (slope) at $x=1$. You can compare the slope of the estimate (straight line) to the slope of the function (blue curve)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "da.plot_secant(2.)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Forward Euler Method\n", + "\n", + "\n", + "\n", + "We can now write down a discrete version of our model ODE problem ([Conduction Equation](#lab1:eq:conduction)) at any\n", + "point $t_i$ by\n", + "\n", + "1. discretizing the derivative on the left hand side (for example,\n", + " using the forward difference approximation ([Forward Difference Formula](#lab1:eq:forward-diff));\n", + "\n", + "2. evaluating the right hand side function at the discrete point $t_i$.\n", + "\n", + "The discrete form of the problem is\n", + "\n", + "$$\\frac{T_{i+1}-T_i}{\\Delta t} = \\lambda(T_i,t_i) \\, (T_i-T_a),$$\n", + "or, after rearranging, \n", + "\n", + "
\n", + "$$T_{i+1} = T_i + \\Delta t \\, \\lambda(T_i,t_i) \\, (T_i-T_a).$$ \n", + "
\n", + "\n", + "This formula is called the\n", + "*Forward Euler method* (since it uses forward differences). Notice that\n", + "this formula relates each discrete solution value to the solution at the\n", + "preceding $t$-point. Consequently, if we are given an initial value\n", + "$T(0)$, then all subsequent values of the solution are easily computed.\n", + "\n", + "(**Note:** The forward Euler formula for the more general\n", + "first-order IVP in ([Model ODE](#lab1:eq:modelode')) is simply $y_{i+1} = y_i + \\Delta t f(y_i,t_i)$.)\n", + "\n", + "#### Example Seven\n", + "\n", + "\n", + "> Let us now turn to another example in atmospheric\n", + "physics to illustrate the use of the forward Euler method. Consider the\n", + "process of condensation and evaporation in a cloud. The *saturation\n", + "ratio*, $S$, is the ratio of the vapour pressure to the vapour pressure\n", + "of a plane surface of water at temperature $T$. $S$ varies in time\n", + "according to the \n", + "\n", + ">
\n", + "(saturation development equation)\n", + "$$\\frac{dS}{dt} = \\alpha S^2 + \\beta S + \\gamma,$$ \n", + "
\n", + "\n", + "> where $\\alpha$, $\\beta$ and $\\gamma$\n", + "are complicated (but constant) expressions involving the physical\n", + "parameters in the problem (and so we won’t reproduce them here).\n", + "\n", + "> What are some physically reasonable values of the parameters (other than\n", + "simply $\\alpha<0$ and $\\gamma>0$)?\n", + "\n", + "> [Chen (1994)](#Ref:Chen) gives a detailed derivation of the equation, which is a\n", + "non-linear, first order ODE (i.e. non-linear in the dependent variable $S$,\n", + "and it contains only a first derivative in the time variable). Chen also\n", + "derives an analytical solution to the problem, but it takes a couple of pages\n", + "of messy algebra to arrive at. Rather than show these details, we would\n", + "like to use the forward Euler method in order to compute the solution\n", + "numerically, and as we will see, this is actually quite simple." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> Using the ([forward difference formula](#lab1:eq:forward-diff)), the discrete form of the ([saturation development equation](#lab1:eq:saturation)) is\n", + "$$S_{i+1} = S_i + \\Delta t \\left( \\alpha S_i^2 + \\beta S_i + \\gamma \\right).$$\n", + "Consider an initial saturation ratio of $0.98$,\n", + "and take parameter values $\\alpha=-1$, $\\beta=1$ and $\\gamma=1$. The\n", + "resulting solution, for various values of the time step $\\Delta t$, is plotted in\n", + "Figure [Saturation Time Series](#lab1:fig:saturation)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='feuler/sat2.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Saturation Time Series: Plot of the saturation ratio as a function of time using the Forward\n", + "Euler method. “nt” is the number of time steps.\n", + "
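The figure can be reproduced qualitatively with a few lines of Python. The sketch below is not the lab's plotting code: the integration time and the two step sizes are arbitrary choices, made only to contrast a smooth case with an oscillatory one, and it assumes `numpy` and `matplotlib` are available:

```python
import numpy as np
import matplotlib.pyplot as plt

alpha, beta, gamma = -1.0, 1.0, 1.0   # parameter values from the text
t_end = 6.0                           # illustrative integration time

for dt in [1.0, 0.1]:                 # a large and a small time step
    nsteps = int(t_end / dt)
    S = np.empty(nsteps + 1)
    S[0] = 0.98                       # initial saturation ratio
    for i in range(nsteps):
        # forward Euler step for dS/dt = alpha*S^2 + beta*S + gamma
        S[i + 1] = S[i] + dt * (alpha * S[i]**2 + beta * S[i] + gamma)
    plt.plot(np.arange(nsteps + 1) * dt, S, marker="o", label=f"dt = {dt}")

plt.xlabel("time")
plt.ylabel("saturation ratio S")
plt.legend()
plt.show()
```

With the larger step the computed saturation ratio oscillates instead of settling smoothly, which is exactly the behaviour discussed next.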
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> There are two things to notice here, both related to the importance of\n", + "the choice of time step $\\Delta t$:\n", + "\n", + "> - As $\\Delta t$ is reduced, the solution appears to *converge* to one solution\n", + " curve, which we would hope is the exact solution to the differential\n", + " equation. An important question to ask is: *When will the numerical\n", + " method converge to the exact solution as $\\Delta t$ is reduced?*\n", + "\n", + "> - If $\\Delta t$ is taken too large, however, the numerical solution breaks down.\n", + " In the above example, the oscillations that occur for the largest\n", + " time step (when $nt=6$) are a sign of *numerical\n", + " instability*. The differential problem is stable and exhibits\n", + " no such behaviour, but the numerical scheme we have used has\n", + " introduced an instability. An obvious question that arises is: *How\n", + " can we avoid introducing instabilities in a numerical scheme?*\n", + "\n", + "> Neither question has an obvious answer, and both issues will be\n", + "investigated further in Lab 2." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Other Approximations\n", + "\n", + "\n", + "Look again at the ([limit definition of derivative](#lab1:eq:defn-deriv)), and notice that an\n", + "equivalent expression for $T^\\prime$ is\n", + "\n", + "
\n", + "$$T^\\prime(t) = \\lim_{\\Delta t\\rightarrow 0} \\frac{T(t)-T(t-\\Delta t)}{\\Delta t}.$$ \n", + "
\n", + " \n", + "From this, we can derive the *backward\n", + "difference formula* for the first derivative,\n", + "\n", + "
\n", + "(Backward Difference Formula)\n", + "$$T^\\prime(t_i) \\approx \\frac{T_i-T_{i-1}}{\\Delta t},$$ \n", + "
\n", + "\n", + "and similarly the *centered difference formula* \n", + "\n", + "
\n", + "(Centered Difference Formula)\n", + "$$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_{i-1}}{2 \\Delta t}.$$\n", + "
\n", + "\n", + "The corresponding limit formulas are equivalent from a mathematical standpoint, **but the discrete formulas are not!** In particular, the accuracy and stability of numerical schemes derived from the three difference formulas: ([Forward Difference Formula](#lab1:eq:forward-diff)), ([Backward Difference Formula](#lab1:eq:backward-diff)) and ([Centered Difference Formula](#lab1:eq:centered-diff))\n", + " are quite different. More will be said on this in the next Lab." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Summary\n", + "\n", + "This section introduces the use of the forward difference formula to\n", + "discretize the derivatives in a first order differential equation. The\n", + "resulting numerical scheme is called the forward Euler method. We also\n", + "introduced the backward and centered difference formulas for the first\n", + "derivative, which were also obtained from the definition of derivative.\n", + "\n", + "You saw how the choice of grid spacing affected the accuracy of the\n", + "solution, and were introduced to the concepts of convergence and\n", + "stability of a numerical scheme. More will be said about these topics in\n", + "the succeeding lab, as well as other methods for discretizing\n", + "derivatives." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Generalizations\n", + "\n", + "\n", + "The idea of discretization introduced in the previous section can be\n", + "generalized in several ways, some of which are:\n", + "\n", + "- problems with higher derivatives,\n", + "\n", + "- systems of ordinary differential equations,\n", + "\n", + "- boundary value problems, and\n", + "\n", + "- partial differential equations.\n", + "\n", + "### Higher Derivatives\n", + "\n", + "\n", + "Many problems in physics involve derivatives of second order and higher.\n", + "Discretization of these derivatives is no more difficult than the first\n", + "derivative in the previous section. The difference formula for the\n", + "second derivative, which will be derived in Lab \\#2, is given by\n", + "\n", + "
\n", + "(Centered Second Derivative)\n", + "$$y^{\\prime\\prime}(t_i) \\approx \n", + " \\frac{y(t_{i+1})-2y(t_i)+y(t_{i-1})}{(\\Delta t)^2} ,$$\n", + "
\n", + "\n", + "and is called the *second-order\n", + "centered difference formula* for the second derivative (“centered”,\n", + "because it involves the three points centered about $t_i$, and\n", + "“second-order” for reasons we will see in the next Lab). We will apply\n", + "this formula in the following example …" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Eight\n", + "\n", + "[lab1:exm:balloon]: <#Example-Eight>\n", + "\n", + "A weather balloon, filled with helium, climbs\n", + "vertically until it reaches its level of neutral buoyancy, at which\n", + "point it begins to oscillate about this equilibrium height. We can\n", + "derive a DE describing the motion of the balloon by applying Newton’s\n", + "second law: \n", + "$$mass \\; \\times \\; acceleration = force$$\n", + "$$m \\frac{d^2 y}{d t^2} = \n", + " \\underbrace{- \\beta \\frac{dy}{dt}}_{\\mbox{air resistance}} \n", + " \\underbrace{- \\gamma y}_{\\mbox{buoyant force}},$$ where\n", + "\n", + "- $y(t)$ is the displacement of the balloon vertically from its\n", + " equilibrium level, $y=0$;\n", + "\n", + "- $m$ is the mass of the balloon and payload;\n", + "\n", + "- the oscillations are assumed small, so that we can assume a linear\n", + " functional form for the buoyant force, $-\\gamma y$.\n", + "\n", + "This problem also requires initial values for both the initial\n", + "displacement and velocity:\n", + "$$y(0) = y_0 \\;\\; \\mbox{and} \\;\\; \\frac{dy}{dt}(0) = v_0.$$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='balloon/balloon.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Weather Balloon: A weather balloon oscillating about its level of neutral\n", + "buoyancy.\n", + "
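For readers who want to see the expected behaviour before attempting the problems below, a reference solution can be computed with a library integrator after rewriting the equation as a first-order system (as in Example 9 further down). This sketch deliberately does **not** use the finite-difference schemes you are asked to derive, and the parameter and initial values are made up for illustration; it assumes `scipy` and `matplotlib` are available:

```python
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

# illustrative values only; the lab does not specify m, beta, or gamma
m, beta, gamma = 1.0, 0.2, 4.0
y0, v0 = 1.0, 0.0                    # assumed initial displacement and velocity

def rhs(t, state):
    y, u = state                     # u = dy/dt
    return [u, -(beta / m) * u - (gamma / m) * y]

sol = solve_ivp(rhs, (0.0, 20.0), [y0, v0], max_step=0.05)

plt.plot(sol.t, sol.y[0])
plt.xlabel("t")
plt.ylabel("displacement y(t)")
plt.show()
```

The result is a slowly decaying oscillation about $y=0$, the level of neutral buoyancy, which is the behaviour a finite-difference scheme for this problem should also reproduce.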
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Problem One\n", + "\n", + "\n", + "a\\) Using the centered difference formula ([Centered Second Derivative](#lab1:eq:centered-diff2)) for the second derivative, and\n", + " the forward difference formula ([Forward Difference Formula](#lab1:eq:forward-diff')) for the first derivative at the point\n", + " $t_i,$ derive a difference scheme for $y_{i+1}$, the vertical\n", + " displacement of the weather balloon.\n", + "\n", + "b\\) What is the difference between this scheme and the forward Euler\n", + " scheme from [Example Seven](#Example-Seven), related to the initial\n", + " conditions? (**Hint:** think about starting values …)\n", + "\n", + "c\\) Given the initial values above, explain how to start the numerical\n", + " integration.\n", + " \n", + "*Note*: There are a number of problems in the text of each lab. See the syllabus for which problems you are assigned as part of your course. That is, you don't have to do them all!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Systems of First-order ODE's\n", + "\n", + "\n", + "Discretization extends in a simple way to first-order systems of ODE’s,\n", + "which arise in many problems, as we will see in some of the later labs.\n", + "For now, though, we can see:\n", + "\n", + "#### Example 9\n", + "\n", + "\n", + "The second order DE for the weather balloon problem from\n", + "Example [Eight](#Example-Eight) can be rewritten by letting $u=dy/dt$. Then,\n", + "\n", + "\\begin{align}\n", + "\\frac{dy}{dt} &= u\\\\\n", + "\\frac{du}{dt} &= -\\frac{\\beta}{m} u - \\frac{\\gamma}{m} y\n", + "\\end{align}\n", + "\n", + "which is a\n", + "system of first order ODE’s in $u$ and $y$. This set of differential\n", + "equations can be discretized to obtain another numerical scheme for the\n", + "weather balloon problem." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Problem Two\n", + "\n", + "\n", + "a\\) Derive a difference scheme for the problem based on the above system\n", + " of two ODE’s using the forward difference formula for the first\n", + " derivative.\n", + "\n", + "b\\) By combining the discretized equations into one equation for y, show\n", + " that the difference between this scheme and the scheme obtained in\n", + " problem one is the difference formula for the second derivative." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Boundary Value Problem\n", + "\n", + "[lab1:sec:bvp]: (#6.3-Boundary-Value-Problems)\n", + "\n", + "\n", + "So far, we’ve been dealing with *initial value problems* or *IVP’s*\n", + "(such as the problem of heat conduction in a rock in\n", + "Example [One](#Example-One)): a differential equation is given for an\n", + "unknown function, along with its initial value. There is another class\n", + "of problems, called *boundary value problems* (or *BVP’s*),\n", + "where the independent variables are restricted to a *closed domain* (as\n", + "opposed to an *open domain*) and the solution (or its derivative) is\n", + "specified at every point along the boundary of the domain. Contrast this\n", + "to initial value problems, where the solution is *not given* at the end\n", + "time.\n", + "\n", + "A simple example of a boundary value problem is the steady state heat\n", + "diffusion equation problem for the rod in\n", + "Example [Three](#Example-Three). 
By *steady state*, we mean simply that\n", + "the rod has reached a state where its temperature no longer changes in\n", + "time; that is, $\\partial u/\\partial t = 0$. The corresponding problem\n", + "has a temperature, $u(x)$, that depends on position only, and obeys the\n", + "following equation and boundary conditions:\n", + "$$u_{xx} = 0,$$\n", + "$$u(0) = u(1) = 0.$$\n", + "This problem is known as an *initial-boundary value problem*(or *IBVP*),\n", + "since it has a mix of both initial and boundary values.\n", + "\n", + "The structure of initial and boundary value problems are quite different\n", + "mathematically: IVP’s involve a time variable which is unknown at the\n", + "end time of the integration (and hence the solution is known on an open\n", + "domain or interval), whereas BVP’s specify the solution value on a\n", + "closed domain or interval. The numerical methods corresponding to these\n", + "problems are also quite different, and this can be best illustrated by\n", + "an example.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 10\n", + "\n", + "[lab1:exm:steady-diffusion]: (#Example-Ten)\n", + "\n", + "We can discretize the steady state diffusion\n", + "equation using the centered difference formula for the second\n", + "derivative to obtain: $$u_{i+1}-2u_i+u_{i-1} = 0$$ where\n", + "$u_i\\approx u(i/N)$ and $i=0,1,\\ldots,N$ (and the factor of\n", + "$(\\Delta x)^2 = {1}/{N^2}$ has been multiplied out). The boundary values\n", + "$u_0$ and $u_N$ are both known to be zero, so the above expression\n", + "represents a system of $N-1$ equations in $N-1$ unknown values $u_i$\n", + "that must be solved for *simultaneously*. The solution of such systems\n", + "of linear equations will be covered in more detail in Lab \\#3 in fact, this\n", + "equation forms the basis for a Problem in the Linear Algebra Lab.\n", + "\n", + "Compare this to the initial value problems discretized using the forward\n", + "Euler method, where the resulting numerical scheme is a step-by-step,\n", + "marching process (that is, the solution at one grid point can be\n", + "computed using an explicit formula using only the value at the previous\n", + "grid point).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Partial Differential Equations\n", + "\n", + "\n", + "So far, the examples have been confined to ordinary differential\n", + "equations, but the procedure we’ve set out for ODE’s extends with only\n", + "minor modifications to problems involving PDE’s.\n", + "\n", + "#### Example 11\n", + "\n", + "To illustrate the process, let us go back to the heat diffusion problem\n", + "from Example [Three](#Example-Three), an initial-boundary value problem\n", + "in the temperature $u(x,t)$: $$u_{t} = \\alpha^2 u_{xx},$$ along with\n", + "initial values $$u(x,0) = u_0(x),$$ and boundary values\n", + "$$u(0,t) = u(1,t) = 0.$$\n", + "\n", + "As for ODE’s, the steps in the process of discretization remain the\n", + "same:\n", + "\n", + "1) First, replace the independent variables by discrete values\n", + " $$x_i = i \\Delta x = \\frac{i}{M}, \\;\\; \\mbox{where $i=0, 1,\n", + " \\ldots, M$, and}$$\n", + " $$t_n = n \\Delta t, \\;\\; \\mbox{where $n=0, 1,\n", + " \\ldots$}$$ In this example, the set of discrete points define\n", + " a two-dimensional grid of points, as pictured in\n", + " Figure [PDE Grid](#lab1:fig:pde-grid).\n", + "\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": 
[], + "source": [ + "Image(filename='pdes/pde-grid.png', width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure PDE Grid: The computational grid for the heat diffusion problem, with\n", + " discrete points $(x_i,t_n)$.\n", + "
\n", + "\n", + "2) Replace the dependent variables (in this example, just the\n", + " temperature $u(x,t)$) with approximations defined at the grid\n", + " points: $$U_i^n \\approx u(x_i,t_n).$$ The boundary and initial\n", + " values for the discrete temperatures can then be written in terms of\n", + " the given information.\n", + "\n", + "3) Approximate all of the derivatives appearing in the problem with\n", + " finite difference approximations. If we use the centered difference\n", + " approximation ([Centered Second Derivative](#lab1:eq:centered-diff2)) for the second derivative in $x$, and\n", + " the forward difference formula ([Forward Difference Formula](#lab1:eq:forward-diff')) for the time derivative (while evaluating the\n", + " terms on the right hand side at the previous time level), we obtain\n", + " the following numerical scheme:\n", + " \\begin{equation} \n", + " U_i^{n+1} = U_i^n + \\frac{\\alpha^2 \\Delta t}{(\\Delta x)^2} \\left(\n", + " U_{i+1}^n - 2 U_i^n + U_{i-1}^n \\right)\n", + " \\end{equation}\n", + "\n", + " Given the initial values, $U_i^0=u_0(x_i)$, and boundary values\n", + " $U_0^n=U_M^n=0$, this difference formula allows us to compute values of\n", + " temperature at any time, based on values at the previous time.\n", + "\n", + " There are, of course, other ways of discretizing this problem, but the\n", + " above is one of the simplest." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes\n", + "\n", + "\n", + "\n", + "### Solution to the Heat Conduction Equation\n", + "\n", + "\n", + "\n", + "In Example [One](#Example-One), we had the equation\n", + "$$\\frac{dT}{dt} = -\\lambda (T-T_a),$$\n", + "subject to the initial condition\n", + "$T(0)$. This equation can be solved by *separation of variables*,\n", + "whereby all expressions involving the independent variable $t$ are moved\n", + "to the right hand side, and all those involving the dependent variable\n", + "$T$ are moved to the left $$\\frac{dT}{T-T_a} = -\\lambda dt.$$\n", + "The resulting expression is integrated from time $0$ to $t$\n", + "$$\\int_{T(0)}^{T(t)} \\frac{dS}{S-T_a} = -\\int_0^t\\lambda ds,$$\n", + "(where $s$ and $S$ are dummy variables of integration), which then leads to the\n", + "relationship \n", + "$$\\ln \\left( T(t)-T_a)-\\ln(T(0)-T_a \\right) = -\\lambda t,$$\n", + "or, after exponentiating both sides and rearranging,\n", + "$$T(t) = T_a + (T(0)-T_a)e^{-\\lambda t},$$\n", + "which is exactly the [Conduction Solution](#lab1:eq:conduction-soln) equation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References\n", + "\n", + "\n", + "
\n", + "Boyce, W. E. and R. C. DiPrima, 1986: Elementary Differential Equations and Boundary Value Problems. John Wiley & Sons, New York, NY, 4th edition.\n", + "
\n", + "
\n", + "Burden, R. L. and J. D. Faires, 1981: Numerical Analysis. PWS-Kent, Boston, 4th edition.\n", + "
\n", + "
\n", + "Chen, J.-P., 1994: Predictions of saturation ratio for cloud microphysical models. Journal of the Atmospheric\n", + "Sciences, 51(10), 1332–1338.\n", + "
\n", + "Garcia, A. L., 1994: Numerical Methods for Physics. Prentice-Hall, Englewood Cliffs, NJ.\n", + "
\n", + "Strang, G., 1986: Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Glossary\n", + "\n", + "\n", + "**backward difference discretization:** used to estimate a derivative – uses the current points and points with smaller independent variable.\n", + "\n", + "**boundary value problem:** a differential equation (or set of differential equations) along with boundary values for the unknown functions. Abbreviated BVP.\n", + "\n", + "**BVP:** see *boundary value problem*\n", + "\n", + "**centre difference discretization:** used to estimate a derivative – uses a discretization symmetric (in\n", + "independent variable) around the current point.\n", + "\n", + "**closed domain:** a domain for which the value of the dependent variables is known on the boundary of the domain.\n", + "\n", + "**converge:** as the discretization step (eg. ∆t) is reduced the solutions generated approach one solution curve.\n", + "\n", + "**DE:** see *differential equation*\n", + "\n", + "**dependent variable:** a variable which is a (possibly unknown) function of the independent variables in a problem; for example, in a fluid the pressure can be thought of as a dependent variable, which depends on the time t and position (x, y, z).\n", + "\n", + "**differential equation:** an equation involving derivatives. Abbreviated DE.\n", + "\n", + "**discretization:** when referring to DE’s, it is the process whereby the independent variables are replaced by a *grid* of discrete points; the dependent variables are replaced by approximations at the grid points; and the derivatives appearing in the problem are replaced by a *finite difference* approximation. The discretization process replaces the DE (or DE’s) with an algebraic equation or finite system of algebraic equations which can be solved on a computer.\n", + "\n", + "**finite difference:** an approximation of the derivative of a function by a difference quotient involving values of the function at discrete points. The simplest method of deriving finite difference formulae is using Taylor series.\n", + "\n", + "**first order differential equation:** a differential equation involving only first derivatives of the unknown functions.\n", + "\n", + "**forward difference discretization:** used to calculate a derivative – uses the current points and points with larger independent variable.\n", + "\n", + "**grid:** when referring to discretization of a DE, a grid is a set of discrete values of the independent variables, defining a *mesh* or array of points, at which the solution is approximated.\n", + "\n", + "**independent variable:** a variable that does not depend on other quantities (typical examples are time, position, etc.)\n", + "\n", + "**initial value problem:** a differential equation (or set of differential equations) along with initial values for the unknown functions. Abbreviated IVP.\n", + "\n", + "**interpolation:** a method for estimating the value of a function at points intermediate to those where its values are known.\n", + "\n", + "**IVP:** initial value problem\n", + "\n", + "**linear:** pertaining to a function or expression in which the quantities appear in a linear combination. 
If $x_i$ are the variable quantities, and $c_i$ are constants, then any linear function of the $x_i$ can be written in the form $c_0 + \\sum_i c_i \\cdot x_i$.\n", + "\n", + "**linear interpolation:** interpolation using straight lines between the known points.\n", + "\n", + "**Navier-Stokes equations:** the system of non-linear PDE’s that describe the time evolution of the flow of\n", + "a fluid.\n", + "\n", + "**non-linear:** pertaining to a function or expression in which the quantities appear in a non-linear combination.\n", + "\n", + "**numerical instability:** although the continuous differential equation has a finite solution, the numerical solution grows without bound as the numerical iteration proceeds.\n", + "\n", + "**ODE:** see *ordinary differential equation*\n", + "\n", + "**open domain:** a domain for which the value of one or more dependent variables is unknown on a portion\n", + "of the boundary of the domain or a domain for which one boundary (say time very large) is not specified.\n", + "\n", + "**ordinary differential equation:** a differential equation where the derivatives appear only with respect to one independent variable. Abbreviated ODE.\n", + "\n", + "**partial differential equation:** a differential equation where derivatives appear with respect to more than one independent variable. Abbreviated PDE.\n", + "\n", + "**PDE:** see *partial differential equation*\n", + "\n", + "**second order differential equation:** a differential equation involving only first and second derivatives of the unknown functions.\n", + "\n", + "**separation of variables:** a technique whereby a function of several independent variables is written as a product of several functions, each of which depends on only one of the independent variables. For example, a function of three variables, u(x, y, t), might be written as u(x, y, t) = X(x) · Y(y) · T(t)."
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": { + "height": "512px", + "width": "252px" + }, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": true, + "toc_position": {}, + "toc_section_display": "block", + "toc_window_display": false + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab10/01-lab10.ipynb.txt b/_sources/notebooks/lab10/01-lab10.ipynb.txt new file mode 100644 index 0000000..afa8c36 --- /dev/null +++ b/_sources/notebooks/lab10/01-lab10.ipynb.txt @@ -0,0 +1,682 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 10: Numerical Advection Schemes #\n", + "\n", + "Carmen Guo" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Learning Goals ##\n", + "\n", + "- Explain why modelling advection is particularly difficult\n", + "- Explain and contrast diffusion and dispersion\n", + "- Define positive definite" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems ##\n", + "\n", + "- [Problem One](#Problem-One)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import context\n", + "from IPython.display import Image\n", + "import matplotlib.pyplot as plt\n", + "# make the plots happen inline\n", + "%matplotlib inline\n", + "# import the numpy array handling library\n", + "import numpy as np\n", + "# import the advection code from the numlabs directory\n", + "import numlabs.lab10.advection_funs as afs" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Advection Process ##\n", + "\n", + "The word advection means ‘transfer of heat through the horizontal motion\n", + "of a flow’. More generally, we will consider a flow of some\n", + "non-diffusive quantity. For example, consider the wind as a flow and the\n", + "water vapour in the air as the non-diffusive quantity. Suppose the wind\n", + "is travelling in the positive x direction, and we are considering the\n", + "vapour concentration from $x=1$ to $x=80$.\n", + "\n", + "Assume that initially the distribution curve of the vapour is Gaussian\n", + "([Figure Initial Distribution](#fig:initial)). 
Ideally, the water droplets move at the\n", + "same speed as that of the air, so the distribution curve retains its\n", + "initial shape as it travels along the x-axis. This process is described\n", + "by the following PDE: \n", + "\n", + "\n", + "(Advection Eqn)\n", + "$$\\frac{\\partial c}{\\partial t} + u \\frac{\\partial c}{\\partial x} = 0$$ \n", + "where $c$ is the concentration of the water\n", + "vapour, and $u$ is the speed of the wind (assuming the wind is blowing\n", + "at constant speed)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/initial.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Figure Initial Distribution:** This is the initial distribution of water vapour concentration.\n", + "\n", + "As you will see in the upcoming examples, it is not easy to obtain a\n", + "satisfactory numerical solution to this PDE." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Simple Solution Using Centred Differencing Scheme ##\n", + "\n", + "Let’s start off simple and solve the PDE ([Advection Eqn](#eqn:advection)) using\n", + "centred differences, *i.e.*, by expanding the time and spatial derivatives\n", + "in the following way:\n", + "\n", + "\n", + "(Centered Difference Scheme)\n", + "$$\\frac{\\partial c}{\\partial t}(x=m dx, t=n dt) =\\frac {c(x=m dx, t=(n+1) dt) - c(x=m dx, t=(n-1) dt)}{2 dt}$$\n", + "$$\\frac{\\partial c}{\\partial x}(x=m dx, t=n dt)=\\frac {c(x=(m+1) dx, t=n dt) - c(x=(m-1) dx, t=n dt)}{2 dx}$$\n", + "\n", + "where $m=2, \\ldots, 79$, and $n=2, \\ldots$. Substitution of the\n", + "equations into the PDE yields the following recurrence relation:\n", + "$$c(m, n+1)= c(m, n-1) - u \\frac{dt}{dx} (c(m+1, n) - c(m-1, n))$$\n", + "\n", + "The boundary conditions are : $$c(x=1 dx)= c(x=79 dx)$$\n", + "$$c(x=80 dx)= c(x=2 dx)$$\n", + "\n", + "The initial conditions are:\n", + "$$c(x=n dx) = \\exp( - \\alpha (n dx - \\hbox{offset})^2)$$ \n", + "where $\\hbox{offset}$ is the location of the peak of the distribution curve.\n", + "We don’t want the peak to be located near $x=0$ due to the boundary\n", + "conditions.\n", + "\n", + "Now we need the values of $c$ at $t= 1 dt$ to calculate $c$ at\n", + "$t= 2 dt$, and we will use the Forward Euler scheme to approximate $c$\n", + "at $t= 1 dt$. \n", + "So \n", + "$$\\frac{\\partial c}{\\partial t}(m, 0)= \\frac {c(m, 1) - c(m, 0)}{dt}$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Substitution of the equations into the PDE yields\n", + "$$c(m, 1) = c(m, 0) - u \\frac{dt}{2 dx}(c(m+1, 0) - c(m-1, 0))$$ \n", + "where $m=2, \\ldots, 79$. \n", + "The end points at $t= 1 dt$ can be found using the boundary conditions.\n", + "\n", + "The function that computes the numerical solution using this scheme is in\n", + "*advection_funs.py*. It is a python function *advection(timesteps)*, which takes in\n", + "the number of time steps as input and plots the distribution curve at 20 time steps.\n", + "\n", + "We can see the problem with this scheme just by running the function\n", + "with 10 time steps ([Figure Distribution with Centered Scheme](#fig:centered)). Following is a plot of\n", + "the distribution curve at the last time step." 
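(Before looking at that plot: for reference, the sketch below is a standalone version of the same centred scheme, written only for illustration and independent of *advection_funs.py*. The grid size, wind speed, step sizes, number of steps, and Gaussian width are assumed values, not the ones hard-wired in the lab code.)

```python
import numpy as np
import matplotlib.pyplot as plt

# assumed, illustrative parameters
nx, u, dx, dt = 80, 1.0, 1.0, 0.5
alpha, offset = 0.1, 20.0
nsteps = 80

x = np.arange(1, nx + 1) * dx
c_old = np.exp(-alpha * (x - offset)**2)      # Gaussian initial distribution

# first step: forward Euler in time, centred in space
c_now = c_old.copy()
c_now[1:-1] = c_old[1:-1] - u * dt / (2 * dx) * (c_old[2:] - c_old[:-2])
c_now[0], c_now[-1] = c_now[-2], c_now[1]     # boundary conditions from the text

# remaining steps: centred (leapfrog) in both time and space
for n in range(nsteps):
    c_new = np.empty_like(c_now)
    c_new[1:-1] = c_old[1:-1] - u * dt / dx * (c_now[2:] - c_now[:-2])
    c_new[0], c_new[-1] = c_new[-2], c_new[1]
    c_old, c_now = c_now, c_new

plt.plot(x, c_now)
plt.xlabel("x")
plt.ylabel("concentration c")
plt.show()
```

Run for enough steps, this produces the trailing ripples, and hence negative concentrations, discussed below.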
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/centered.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Figure Distribution with Centered Scheme:** This is the distribution after 10 time steps approximated using the centred differencing scheme.\n", + "\n", + "Comparing this curve with the initial state\n", + "([Figure Initial Distribution](#fig:initial)), we can see that ripples are produced to\n", + "the left of the curve which means negative values are generated. But\n", + "water vapour does not have negative concentrations. The centred\n", + "differencing scheme does not work well for the advection process because\n", + "it is not **positive definite**, *i.e.*, it generates negative values which\n", + "are impossible in real life." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Another example of using the same scheme" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afs.advection(10, lab_example=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Numerical Solution Using Upstream Method ##\n", + "\n", + "\n", + "Let’s see what is wrong with our simple centred differencing scheme. We\n", + "used centred differences to compute the time and spatial derivatives\n", + "([Centered Difference Scheme](#eqn:centered)). In other words, $c(x=m dx)$ depends on\n", + "$c(x=(m-1) dx)$ and $c(x=(m+1) dx)$, and $c(t=n dt)$ depends on\n", + "$c(t=(n-1) dt)$ and $c(t=(n+1) dt)$. But we know the wind is moving in\n", + "the positive x direction, so $c(x=m dx)$ should not depend on\n", + "$c(x=(m+1) dx)$. Therefore, we will change the centred differencing\n", + "scheme to backward differencing scheme. In other words, we will always\n", + "be looking ‘upstream’ in the approximation.\n", + "\n", + "If we use backward differences to approximate the spatial derivative but\n", + "continue to use centred differences to approximate the time derivative,\n", + "we will end up with an unstable scheme. Thus, we will use backward\n", + "differences for both time and spatial derivatives. Now the time and\n", + "spatial derivatives are given by:\n", + "\n", + "\n", + "(Upstream Scheme)\n", + "$$\\frac{\\partial c}{\\partial t}(x=m dx, t=n dt)=\\frac {c(x=m dx, t=n dt) - c(x=m dx, t=(n-1) dt)}{dt}$$\n", + "$$\\frac{\\partial c}{\\partial x}(x=m dx, t=n dt)=\\frac {c(x=m dx, t=n dt) - c(x=(m-1) dx, t=n dt)}{dx}$$\n", + "\n", + "Substitution of the equations into the PDE yields\n", + "$$c(m, n+1)=c(m, n)- u \\frac{dt}{dx} (c(m, n) - c(m-1, n))$$\n", + "\n", + "The boundary conditions and the initial conditions are the same as in\n", + "the centred differencing scheme. This time we compute $c$ at $t= 1 dt$\n", + "using backward differences just as with all subsequent time steps.\n", + "\n", + "The function that computes the solution using this scheme is in\n", + "*advection_funs.py*. It is a python function *advection2(timesteps)*, which takes\n", + "in the number of time steps as input and plots the distribution curve at 20 time steps.\n", + "\n", + "Although this scheme is positive definite and conservative (the area\n", + "under the curve is the same as in the initial state), it introduces a\n", + "new problem — diffusion. 
As the time step increases, you can see that\n", + "the curve becomes wider and lower, *i.e.*, it diffuses quickly\n", + "([Figure Upstream Distribution](#fig:upstream))." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/upstream.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Figure Upstream Distribution:**\n", + "This is the distribution after 60 time steps approximated using the\n", + "upstream method.\n", + "\n", + "But ideally, the curve should retain its original shape as the wave\n", + "travels along the x-axis, so the upstream method is still not good\n", + "enough for the advection problem. In the next section, we will present\n", + "another method that does a better job." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Another example using the same scheme" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afs.advection2(60, lab_example=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## A Better Solution ##\n", + "\n", + "In previous sections, we were concerned with values at grid points only,\n", + "ie, values of $c$ at $x= 1dx, 2dx, \\ldots$. But in this section, we will\n", + "also consider grid boxes each containing a grid point in the centre. For\n", + "each grid point $j$, the left boundary of the grid box containing $j$ is\n", + "indexed as $j- 1/2$, and the right boundary as $j+ 1/2$. The scheme\n", + "presented here was developed by Andreas Bott ([Bott, 1989](#Ref:Bott)).\n", + "\n", + "The PDE ([Advection Eqn](#eqn:advection)) is rewritten as :\n", + "\n", + "\n", + "(Flux Form Eqn)\n", + "$$\\frac{\\partial c}{\\partial t} + \\frac{\\partial uc}{\\partial x} = 0$$ \n", + "where $F= uc$ gives the flux of water vapour.\n", + "\n", + "Expand the time derivative using forward differences and the spatial\n", + "derivative as follows:\n", + "$$\\frac{\\partial F}{\\partial x}(x=j dx) = \\frac {F(x= (j+1/2) dx) - F(x= (j-1/2) dx)}{dx}$$\n", + "where $F(x= j+1/2)$ gives the flux through the right boundary of the\n", + "grid box $j$. 
For simplicity, we use the notation $F(x= j+1/2)$ for\n", + "$F(x= j+1/2, n)$, ie, the flux through the right boundary of the grid\n", + "box j after $n$ time steps.\n", + "\n", + "Substituting the expanded derivatives into the PDE, we obtain the\n", + "following recurrence formula ($c(m, n)= c(x= m dx, t= n dt)$):\n", + "$$c(m, n+1)= c(m, n) - \\frac {dt} {dx}(F(m+1/2, n)-F(m-1/2, n))$$\n", + "\n", + "Since flux is defined as the amount flowing through per unit time, we\n", + "need to calculate the portion of the distribution curve in each grid box\n", + "that passes the right boundary after $dt$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/flux.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Figure Amount Leaving Grid:**\n", + "After $dt$, the shaded area will be past the right boundary of the\n", + "grid box $j$.\n", + "\n", + "As ([Figure Amount Leaving Grid](#fig:flux)) shows, the distribution curve in grid box $j$\n", + "has travelled a distance of ${u dt}$ after each time step; in other\n", + "words, the curve has moved $u dt/dx$ of a grid space within a time unit.\n", + "The shaded region is the portion of the vapour that passes the right\n", + "boundary of grid box $j$ within $dt$. We can use integration to find out\n", + "the area of the shaded region and then divide the result by $dt$ to get\n", + "$F(j+1/2)$.\n", + "\n", + "Since we only know $c$ at the grid point $j$, we are going to use\n", + "polynomial interpolation techniques to approximate $c$ at other points\n", + "in the grid box. We define $c$ in grid box $j$ with a polynomial of\n", + "order $l$ as follows:\n", + "\n", + "$$c _{j, \\ell}(x^\\prime) = \\sum _{k=0} ^{\\ell} a _{j, k} x ^{\\prime k}$$ \n", + "where $x^\\prime = (x- x_j)/dx$ and $-1/2 \\le x^\\prime \\le \\ell/2$. \n", + "\n", + "The\n", + "coefficients $a _{j, k}$ are obtained by interpolating the curve with\n", + "the aid of neighbouring grid points. We will skip the detail of the\n", + "interpolation process. 
Values of $a _{j, k}$ for $\\ell=0, 1, \\ldots, 4$\n", + "have been computed and are summarised in Tables\n", + "[Table $\\ell=0$](#tab:ell0), [Table $\\ell=1$](#tab:ell1), [Table $\\ell=2$](#tab:ell2), [Table $\\ell=3$](#tab:ell3) and\n", + "[Table $\\ell=4$](#tab:ell4)," + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Table $\\ell = 0$:**\n", + "\n", + "|.............|................|\n", + "| :-------------: | :-----------------: |\n", + "| $k=0$ | $a_{j,0}=c_j$ |\n", + "\n", + "\n", + "**Table $\\ell = 1$:** two representations\n", + "\n", + "|.............|............................|\n", + "| :-------------: | :-----------------: |\n", + "| $k=0$ | $a_{j,0}= c_j$ |\n", + "| $k=1$ | $a_{j, 1}= c_{j+1} - c_j$ |\n", + "\n", + "|.............|............................|\n", + "| :-------------: | :-----------------: |\n", + "| $k=0$ | $a_{j,0}= c_j$ |\n", + "| $k=1$ | $a_{j, 1}= c_j-c_{j-1}$ |\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Table $\\ell = 2$:**\n", + "\n", + "|.............|.................................................|\n", + "| :-------------: | :-----------------: |\n", + "| $k=0$ | $a_{j,0}= c_j$ |\n", + "| $k=1$ | $a_{j, 1}=\\frac {1}{2}(c_{j+1}-c_{j-1})$ |\n", + "| $k=2$ | $a_{j, 2}=\\frac {1}{2}(c_{j+1}-2c_j+c_{j-1})$ |\n", + "\n", + "\n", + "**Table $\\ell = 3$:** two representations\n", + "\n", + "|.............|....................................................................|\n", + "| :-------------: | :-----------------: |\n", + "| $k=0$ | $a_{j,0}= c_j$ |\n", + "| $k=1$ | $a_{j, 1}=\\frac {1}{6}(-c_{j+2}+6c_{j+1}-3c_j-2c_{j-1})$ |\n", + "| $k=2$ | $a_{j, 2}=\\frac {1}{2}(c_{j+1}-2c_j+c_{j-1})$ |\n", + "| $k=3$ | $a_{j, 3}=\\frac {1}{6}(c_{j+2}-3c_{j+1}+3c_j-c_{j-1})$ |\n", + "\n", + "|.............|....................................................................|\n", + "| :-------------: | :-----------------: |\n", + "| $k=0$ | $a_{j,0}= c_j$ |\n", + "| $k=1$ | $a_{j, 1}=\\frac {1}{6}(2c_{j+1}+3c_j-6{j-1}+c_{j-2})$ |\n", + "| $k=2$ | $a_{j, 2}=\\frac {1}{2}(c_{j+1}-2c_j+c_{j-1})$ |\n", + "| $k=3$ | $a_{j, 3}=\\frac {1}{6}(c_{j+1}-3c_j+3c_{j-1}-c_{j-2})$ |" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Table $\\ell = 4$:**\n", + "\n", + "|.............|........................................................................................|\n", + "| :-------------: | :-----------------: |\n", + "| $k=0$ | $a_{j,0}= c_j$ |\n", + "| $k=1$ | $a_{j, 1}=\\frac {1}{12}(-c_{j+2}+8c_{j+1}-8c_{j-1}+c_{j-2})$ |\n", + "| $k=2$ | $a_{j, 2}=\\frac {1}{24}(-c_{j+2}+16c_{j+1}-30c_j+16c_{j-1}-c_{j-2})$ |\n", + " | $k=3$ | $a_{j, 3}=\\frac {1}{12}(c_{j+2}-2c_{j+1}+2c_{j-1}-c_{j-2})$ |\n", + " | $k=4$ | $a_{j, 4}=\\frac {1}{24}(c_{j+2}-4c_{j+1}+6c_j-4c_{j-1}+c_{j-2})$ |" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that for even order polynomials, an odd number of $c$ values\n", + "including the grid point $j$ are needed to calculate the coefficients\n", + "$a_{j, k}$. This means the same number of grid points to the left and\n", + "right of $x_j$ are used in the calculation of $a_{j, k}$. If on the\n", + "other hand, we choose odd order polynomials, there will be one extra\n", + "point used to either side of $x_j$, thus resulting in 2 different\n", + "representations of $a_{j, k}$. This is why there are 2 sets of\n", + "$a_{j, k}$ for $\\ell=1, 3$ in the table. 
Decision as to which set is to be\n", + "used must be made according to specific conditions of the calculation.\n", + "\n", + "If we choose $\\ell=0$, we will end up with the upstream method. In other\n", + "words, the upstream method assumes that $c$ is constant in each grid\n", + "box. This poor representation of $c$ results in strong numerical\n", + "diffusion. Experiments have shown that generally if we use higher order\n", + "polynomials (where $\\ell \\le 4$), we can significantly suppress numerical\n", + "diffusion.\n", + "\n", + "Now we define $I_{j+1/2}$ as the shaded area in grid box $j$ in ([Figure Amount Leaving Grid](#fig:flux)): \n", + "\n", + "\n", + "(Flux Leaving Eqn)\n", + "$$\n", + " I_{j+1/2} = \\int _{1/2 - \\frac{udt}{dx}}^{1/2} c_j(x^\\prime) dx^\\prime $$\n", + " $$ = \\sum _{k=0}^{l} \\frac {a_{j, k}}{(k+1) 2^{k+1}} \\left[1- \\left(1- 2 u \\frac{dt}{dx}\\right)^{k+1} \\right] $$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note that we are integrating over $x^\\prime$ instead of $x$. Thus, to\n", + "get the real shaded area, we need to multiply $I_{j+1/2}$ by $dx$. So\n", + "\n", + "\n", + "(Flux Eqn)\n", + "$$F_{j+1/2}= \\frac {dx} {dt}I_{j+1/2}$$ \n", + "In this form, the scheme is conservative and\n", + "weakly diffusive. But it still lacks positive definiteness. A sufficient\n", + "condition for this is \n", + "\n", + "\n", + "(Positive Definite Condition)\n", + "$$0 \\le I_{j+1/2} dx \\le c_j dx$$ \n", + "That is, the total outflow is never\n", + "negative and never greater than $c_j dx$. In other words, the shaded\n", + "area should be no less than zero but no greater than the area of the\n", + "rectangle with length $c_j$ and width $dx$. \n", + "([Figure Total in Cell](#fig:limit))\n", + "shows why the total outflow should be limited above by $c_j dx$:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/limit.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "**Figure Total in Cell:** The shaded area is equal to $c_j dx$, and it is already greater than\n", + "the total amount of vapour (the area under the curve) in grid box $j$.\n", + "If $I_{j+1/2} dx > c_j dx$, then the total outflow $I_{j+1/2} dx$ would\n", + "be greater than the amount of vapour in the grid box, and the amount of\n", + "vapour will be negative at the next time step, thus violating the\n", + "positive definiteness requirement. \n", + "\n", + "If the total outflow is larger than the shaded area $c_j dx$, we will\n", + "get negative values of $c$ in this grid box at the next time step. We do\n", + "not want this to happen since negative values are meaningless.\n", + "\n", + "To satisfy the condition for positive definiteness\n", + "([Positive Definite Condition](#eq:posdef)) we need to guarantee that\n", + "$I_{j+1/2} \\le c_j$ holds at all time steps. We can achieve this\n", + "condition by multiplying $I_{j+1/2}$ by a weighting factor. 
Define\n", + "$I_{j+1/2}^\\prime$ as \n", + "\n", + "\n", + "(Normalization Eqn)\n", + "$$I_{j+1/2}^\\prime=I_{j+1/2} \\frac {c_j}{I_j}$$ \n", + "where \n", + "$$\n", + " {I_j} = \\int_{-1/2}^{1/2} c_j(x^\\prime) dx^\\prime $$\n", + "$$ = \\sum_{k=0}^{l} \\frac {a_{j, k}} {(k+1) 2^{k+1}} [(-1)^k +1]$$\n", + "\n", + "Since the total flow out of a grid box is always less than the total\n", + "grid volume, $I_{j+1/2}/I_j$ is always less than 1, thus\n", + "$I_{j+1/2} c_j/I_j$ is always less than $c_j$. Thus we can satisfy the\n", + "upper limit of the positive definiteness condition\n", + "([Positive Definite Condition](#eq:posdef)) by multiplying $I_{j+1/2}$ by a weighting\n", + "factor $c_j/I_j$. So now $F$ is defined as:\n", + "$$F_{j+1/2}= \\frac {dx} {dt}\\frac {c_j}{I_j} I_{j+1/2}$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now to satisfy the lower limit of the positive definiteness condition,\n", + "([Positive Definite Condition](#eq:posdef))\n", + "we need to make sure $I_{j+1/2}$ remains non negative at all time steps.\n", + "So we will set $I_{j+1/2}$ to 0 whenever it is negative.\n", + "\n", + "If we are looking at the parts of the curve that are far away from the\n", + "peak, $I_j=0, I_{j+1/2}=0$, and we will be dividing by 0 in ([Normalization Eqn](#eq:normalize))! To avoid this instability, we introduce a small\n", + "term $\\epsilon$ when $I_j=0, I_{j+1/2}=0$, *i.e.*, we set $I_j$ to\n", + "$\\epsilon$.\n", + "\n", + "Combining all the conditions from above, the advection scheme is\n", + "described as follows:\n", + "$$c(j, n+1)= c(j, n) - \\frac {dt} {dx} [F(j+1/2, n)- F(j-1/2, n)]$$\n", + "$$F(j+1/2, n)= \\frac {dx}{dt} \\frac {i_{l, j+1/2}}{i_{l, j}} c_j$$ \n", + "with\n", + "$$i_{l, j+1/2} = \\hbox{max}(0, I_{j+1/2})$$\n", + "$$i_{l, j} = \\hbox{max}(I_{l, j}, i_{l, j+1/2} + \\epsilon)$$ \n", + "where $l$ is the order of the polynomial we use to interpolate $c$ in each grid box.\n", + "\n", + "An example function for this scheme is in *advection_funs.py*. The python\n", + "function *advection3(timesteps, order)* takes in 2 arguments, the first is\n", + "the number of time steps to be computed, the second is the order of the\n", + "polynomial for the approximation of $c$ within each grid box. It plots the curve at 20 time steps.\n", + "Students should try it out and compare this scheme with the previous\n", + "two." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "afs.advection3(60, 4, lab_example=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem One ###\n", + "\n", + "Using the Bott Scheme, modify initialize in *advection_funs.py* to solve the following advection problem: The wind is moving along the x-axis with speed u=20 m/s. The initial distribution curve is 50 km in width. Use your program to approximate the curve during 24 hours.\n", + "\n", + "a\\) Run your program for different orders of approximating polynomials\n", + "(up to 4). Compare the accuracy of approximation for different orders.\n", + "Do you see better results with increasing order? Is this true for all\n", + "orders from 0 to 4? Is there any particularity to odd and even order\n", + "polynomials? 
What if you decrease or increase the Courant number (the value in front of dx/u in the dt calculation)?\n",
+ "\n",
+ "b\\) For odd ordered polynomials, *advection_funs.py* uses the representation\n",
+ "of $a_{j,k}$ that involves an extra point to the right of the centre\n",
+ "grid point. Modify the table of coefficients for odd ordered polynomials\n",
+ "([Table $\ell=1$](#tab:ell1)) and ([Table $\ell=3$](#tab:ell3)) to use the extra point to the left of the\n",
+ "centre grid point. Run your program again and compare the results of the two\n",
+ "different representations of $a_{j,k}$ for order 1 and 3, respectively.\n",
+ "Is one representation better than the other, about the same, or does\n",
+ "each have its own problems? How do you think the different\n",
+ "representations affect the result?\n",
+ "\n",
+ "c\\) What happens if you increase the Courant number to greater than one? Hint: check the speed.\n"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Conclusion ##\n",
+ "\n",
+ "The last scheme presented solves two problems introduced by the previous\n",
+ "two schemes. The centred differencing scheme lacks positive definiteness\n",
+ "because it is of second order accuracy, and thus it introduces additional\n",
+ "oscillations near the peak. The scheme presented here solves this\n",
+ "problem by checking and normalising the relevant values (i.e., $I_{j+1/2}$\n",
+ "and $I_j$) when needed at each time step. The upstream scheme produces\n",
+ "strong diffusion because it is only first order accurate. The scheme\n",
+ "presented here solves this problem by using higher order polynomials to\n",
+ "approximate $c$ in each grid box.\n",
+ "\n",
+ "Experiments have shown that this scheme is numerically stable in most\n",
+ "atmospheric situations. It is only slightly unstable in the\n",
+ "case of strongly deformational flow fields.\n",
+ "\n",
+ "For more detail about this advection scheme, please refer to ([Bott, 1989](#Ref:Bott)).\n",
+ "Since 1989, newer advection schemes, including MUSCL and TVD schemes, have been developed and are now routinely used. "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## References ##\n",
+ "\n",
+ "\n",
+ "Bott, A., 1989: A positive definite advection scheme obtained by nonlinear renormalization of the advective fluxes. *Monthly Weather Review*, 117, 1006–1015."
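Putting all of the pieces above together, the following is a minimal sketch of a single time step of the scheme for $\ell = 2$ on a periodic domain. It is meant only to illustrate the formulas; the grid, initial condition and array names are invented for the example, and the course implementation in *advection_funs.py* may differ in detail.

```python
import numpy as np

# invented setup: periodic domain, u = 20 m/s, Courant number u*dt/dx = 0.5
nx, u, dx = 100, 20.0, 1000.0
dt = 0.5 * dx / u
cr = u * dt / dx
eps = 1.e-30

x = np.arange(nx) * dx
c = np.exp(-((x - 20000.0) / 5000.0) ** 2)   # bell-shaped initial distribution

# ell = 2 polynomial coefficients a_{j,0..2}; np.roll gives periodic neighbours
cm, cp = np.roll(c, 1), np.roll(c, -1)       # c_{j-1}, c_{j+1}
a = [c, 0.5 * (cp - cm), 0.5 * (cp - 2.0 * c + cm)]

# I_{j+1/2}: integral over the part of the box advected out during dt
# I_j:       integral over the whole box
I_half = np.zeros(nx)
I_full = np.zeros(nx)
for k, ak in enumerate(a):
    w = ak / ((k + 1) * 2.0 ** (k + 1))
    I_half += w * (1.0 - (1.0 - 2.0 * cr) ** (k + 1))
    I_full += w * ((-1.0) ** k + 1.0)

# positive-definite limiting and renormalization
i_half = np.maximum(0.0, I_half)
i_full = np.maximum(I_full, i_half + eps)
F = (dx / dt) * c * i_half / i_full          # flux through the right boundary

# conservative update: c(j, n+1) = c(j, n) - dt/dx * (F_{j+1/2} - F_{j-1/2})
c_new = c - (dt / dx) * (F - np.roll(F, 1))
print(c.sum(), c_new.sum())
```

Because the update is written in flux form, the two printed totals agree to round-off, illustrating the conservation property discussed above.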
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "toc_cell": true, + "toc_position": {}, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/_sources/notebooks/lab2/01-lab2.ipynb.txt b/_sources/notebooks/lab2/01-lab2.ipynb.txt new file mode 100644 index 0000000..50353ce --- /dev/null +++ b/_sources/notebooks/lab2/01-lab2.ipynb.txt @@ -0,0 +1,1294 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Lab 2: Stability and accuracy\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important - before you start**\n", + "\n", + "Before starting on a new lab, you should \"fetch\" any changes that we have made to the labs in our repository (we are continually trying to improve them). Follow the instructions here: https://rhwhite.github.io/numeric_2024/getting_started/python.html#Pulling-changes-from-the-github-repository \n", + "\n", + "Caution - if you have made changes to, for example, lab 1, but you didn't duplicate and rename the file first, this can write over your changes! Follow the instructions above carefully to not lose your work, and remember to always create a copy of each lab for you to do your work in. \n", + "\n", + "This step will be smoother if you haven't saved any changes to the default files (sometimes even opening it, and saving it, counts as a change!)\n", + "\n", + "\n", + "**Some notes about navigating in Jupyter Lab**\n", + "\n", + "In Jupyter Lab, once you have opened a lab, if you click on the symbol that looks like three bullet points over on the far left, this brings up a contents page that allows you to jump immediately to any particular section in the lab you are currently looking at.\n", + "\n", + "The jigsaw puzzle icon below this allows you to install extensions, if you want extra functionality in your notebooks (you may need to go to Settings, Enable Extension Manager to access this).\n", + "\n", + "Also, in Jupyter Lab you can click on links in the 'List of Problems' to take you to directly to each problem. Remember to check canvas for which problems you need to submit for the weekly assignments. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems\n", + "\n", + "There are probems throughout this lab, as you might find at the end of a textbook chapter. Some of these problems will be set as an assignment for you to hand in to be graded - see the Lab 2: Assignment on the canvas site for which problems you should hand in. 
However, if you have time, you are encouraged to work through all of these problems to help you better understand the material.\n", + "\n", + "- [Problem Accuracy](#Problem-Accuracy) \n", + "- [Problem Stability](#Problem-Stability) \n", + "- [Problem Backward-Euler](#Problem-Backward-Euler)\n", + "- [Problem Taylor Series](#Problem-Taylor-Series)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "\n", + "In Lab \\#1 you were introduced to the concept of discretization, and saw that\n", + "there were many different ways to approximate a given problem. This Lab\n", + "will delve further into the concepts of accuracy and stability of\n", + "numerical schemes, in order that we can compare the many possible\n", + "discretizations.\n", + "\n", + "At the end of this Lab, you will have seen where the error in a\n", + "numerical scheme comes from, and how to quantify the error in terms of\n", + "*order*. The stability of several examples will be demonstrated, so that\n", + "you can recognize when a scheme is unstable, and how one might go about\n", + "modifying the scheme to eliminate the instability.\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- Define the term and identify: Implicit numerical scheme and Explicit\n", + " numerical scheme.\n", + "\n", + "- Define the term, identify, or write down for a given equation:\n", + " Backward Euler method and Forward Euler method.\n", + "\n", + "- Explain the difference in terminology between: Forward difference\n", + " discretization and Forward Euler method.\n", + "\n", + "- Define: truncation error, local truncation error, global truncation\n", + " error, and stiff equation.\n", + "\n", + "- Explain: a predictor-corrector method.\n", + "\n", + "- Identify from a plot: an unstable numerical solution.\n", + "\n", + "- Be able to: find the order of a scheme, use the test equation to find\n", + " the stability of a scheme, find the local truncation error from a\n", + " graph of the exact solution and the numerical solution.\n", + "\n", + "- Evaluate and compare the accuracy and stability of at least 3 different discretization methods." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "\n", + "This lab is designed to be self-contained. If you would like\n", + "additional background on any of the following topics, I'd recommend this book:\n", + "[Finite difference computing with PDES](https://link-springer-com.ezproxy.library.ubc.ca/book/10.1007/978-3-319-55456-3) by Hans Petter Langtangen and Svein Linge\n", + "The entire book is available on [github](http://hplgit.github.io/fdm-book/doc/pub/book/html/decay-book.html) with the python code [here](https://github.com/hplgit/fdm-book/tree/master/src). Much of the content of this lab is summarized in [Appendix B -- truncation analysis](https://link-springer-com.ezproxy.library.ubc.ca/content/pdf/bbm%3A978-3-319-55456-3%2F1.pdf)\n", + "\n", + "\n", + "### Other recommended books\n", + "\n", + "- **Differential Equations:**\n", + "\n", + " - Strang (1986), Chapter 6 (ODE’s).\n", + "\n", + "- **Numerical Methods:**\n", + "\n", + " - Strang (1986), Section 6.5 (a great overview of difference methods\n", + " for initial value problems)\n", + "\n", + " - Burden and Faires (1981), Chapter 5 (a more in-depth analysis of the\n", + " numerical methods and their accuracy and stability).\n", + " \n", + " - Newman (2013) Derivatives, round-off and truncation errors, Section 5.10 pp. 
188-198.\n", + " Forward Euler, mid-point and leap-frog methods, Chapter 8 pp. 327-335.\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction \n", + "\n", + "\n", + "Remember from Lab \\#1  that you were introduced to three approximations\n", + "to the first derivative of a function, $T^\\prime(t)$. If the independent\n", + "variable, $t$, is discretized at a sequence of N points,\n", + "$t_i=t_0+i \\Delta t$, where $i\n", + "= 0,1,\\ldots, N$ and $\\Delta t= 1/N$, then we can write the three\n", + "approximations as follows:\n", + "\n", + " **Forward difference formula:**\n", + "\n", + " $$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_i}{\\Delta t}$$\n", + "\n", + " **Backward difference formula:**\n", + "\n", + " $$T^\\prime(t_i) \\approx \\frac{T_{i}-T_{i-1}}{\\Delta t}$$\n", + "\n", + " **Centered difference formula** (add together the forwards and backwards formula):\n", + "\n", + " $$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_{i-1}}{2 \\Delta t}$$\n", + "\n", + "In fact, there are many other possible methods to approximate the\n", + "derivative (some of which we will see later in this Lab). With this\n", + "large choice we have in the choice of approximation scheme, it is not at\n", + "all clear at this point which, if any, of the schemes is the “best”. It\n", + "is the purpose of this Lab to present you with some basic tools that\n", + "will help you to decide on an appropriate discretization for a given\n", + "problem. There is no generic “best” method, and the choice of\n", + "discretization will always depend on the problem that is being dealt\n", + "with.\n", + "\n", + "In an example from Lab \\#1, the forward difference formula was used to\n", + "compute solutions to the saturation development equation, and you saw\n", + "two important results:\n", + "\n", + "- reducing the grid spacing, $\\Delta t$, seemed to improve the\n", + " accuracy of the approximate solution; and\n", + "\n", + "- if $\\Delta t$ was taken too large (that is, the grid was not fine\n", + " enough), then the approximate solution exhibited non-physical\n", + " oscillations, or a *numerical instability*.\n", + "\n", + "There are several questions that arise from this example:\n", + "\n", + "1. Is it always true that reducing $\\Delta t$ will improve the discrete\n", + " solution?\n", + "\n", + "2. Is it possible to improve the accuracy by using another\n", + " approximation scheme (such as one based on the backward or centered\n", + " difference formulas)?\n", + "\n", + "3. Are these numerical instabilities something that always appear when\n", + " the grid spacing is too large?\n", + "\n", + "4. By using another difference formula for the first derivative, is it\n", + " possible to improve the stability of the approximate solution, or to\n", + " eliminate the stability altogether?\n", + "\n", + "The first two questions, related to *accuracy*, will be dealt with in\n", + "the next section, [Section 5 (1.5)](#Accuracy), and the last two will have to wait\n", + "until [Section 6 (1.6)](#Stability) when *stability* is discussed." 
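Before moving on, it may help to see the three formulas in action. The short sketch below (not part of the lab code) applies each of them to a function whose derivative is known exactly, $T(t) = \sin t$, and prints how the error shrinks as $\Delta t$ is halved:

```python
import numpy as np

def forward(f, t, dt):
    return (f(t + dt) - f(t)) / dt

def backward(f, t, dt):
    return (f(t) - f(t - dt)) / dt

def centered(f, t, dt):
    return (f(t + dt) - f(t - dt)) / (2.0 * dt)

t0 = 1.0
exact = np.cos(t0)              # d/dt sin(t) = cos(t)
for dt in [0.1, 0.05, 0.025]:
    errs = [abs(scheme(np.sin, t0, dt) - exact)
            for scheme in (forward, backward, centered)]
    print(f"dt={dt:<6} forward={errs[0]:.2e} backward={errs[1]:.2e} centered={errs[2]:.2e}")
```

Halving $\Delta t$ roughly halves the forward and backward errors but cuts the centered error by about a factor of four; the next sections make this idea of *order* precise.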
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Accuracy of Difference Approximations \n", + "\n", + "Before moving on to the details of how to measure the error in a scheme,\n", + "let’s take a closer look at another example which we’ve seen already …\n", + "\n", + "\n", + "\n", + "### Accuracy Example\n", + "\n", + "Let’s go back to the heat conduction equation from\n", + "Lab \\#1, where the temperature, $T(t)$, of a rock immersed in water or\n", + "air, evolves in time according to the first order ODE:\n", + "$$\\frac{dT}{dt} = \\lambda(T,t) \\, (T-T_a) $$ with initial condition $T(0)$. We saw\n", + "in the section on the **forward Euler method** that one way to discretize\n", + "this equation was using the forward difference formula  for the\n", + "derivative, leading to\n", + "\n", + "$T_{i+1} = T_i + \\Delta t \\, \\lambda(T_i,t_i) \\, (T_i-T_a).$ (**eq: euler**)\n", + "\n", + "Similarly, we could apply either of the other two difference formulae to obtain other difference schemes, namely what we called the\n", + "**backward Euler method**\n", + "\n", + "$T_{i+1} = T_i + \\Delta t \\, \\lambda(T_{i+1},t_{i+1}) \\, (T_{i+1}-T_a),$ (**eq: beuler**)\n", + "\n", + "and the **mid-point** or **leap-frog** centered method\n", + "\n", + "$T_{i+1} = T_{i-1} + 2 \\Delta t \\, \\lambda(T_{i},t_{i}) \\, (T_{i}-T_a).$ (**eq: midpoint**)\n", + "\n", + "The forward Euler and mid-point schemes are called *explicit methods*,\n", + "since they allow the temperature at any new time to be computed in terms\n", + "of the solution values at *previous* time steps only, i.e. it does not require \n", + "any information from current or future time steps. The backward Euler\n", + "scheme, on the other hand, is called an *implicit scheme*, since it\n", + "gives an equation defining $T_{i+1}$ implicitly, that is, the function \n", + "$\\lambda$ takes the value $T_{i+1}$ as an input, in order to calculate $T_{i+1}$. \n", + "If $\\lambda$ depends non-linearly on $T$, then this equation may require an additional step,\n", + "involving the iterative solution of a non-linear equation. We will pass\n", + "over this case for now, and refer you to a reference such as\n", + "Burden and Faires (1981) for the details on non-linear solvers such as *Newton’s\n", + "method*." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important point**: Note that **eq: midpoint** requires the value of the temperature at two points: $T_{i-1}$ and \n", + "$T_{i}$ to calculate the temperature $T_{i+1}$. This requires an approximate guess for $T_i$, which we will discuss\n", + "in more detail below." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For now, let’s assume that the function $\\lambda$ is a constant, and thus it is independent of $T$\n", + "and $t$. 
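The plotting code below imports `euler`, `beuler` and `leapfrog` from *lab2_functions.py*. As a rough guide to what those names mean, here is a stripped-down sketch of the three update rules for constant $\lambda$ (the names and details here are illustrative; the module itself is the reference implementation):

```python
def euler_step(T, Ta, lam, dt):
    # forward Euler: T_{i+1} = T_i + dt * lam * (T_i - Ta)
    return T + dt * lam * (T - Ta)

def beuler_step(T, Ta, lam, dt):
    # backward Euler: T_{i+1} = T_i + dt * lam * (T_{i+1} - Ta);
    # with constant lam it can be solved explicitly for T_{i+1}
    return (T - dt * lam * Ta) / (1.0 - dt * lam)

def leapfrog_step(T_old, T, Ta, lam, dt):
    # mid-point / leap-frog: T_{i+1} = T_{i-1} + 2 * dt * lam * (T_i - Ta);
    # the very first step needs a separate starting value for T_i,
    # for example one forward Euler step from T_0
    return T_old + 2.0 * dt * lam * (T - Ta)
```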
Plots of the numerical results from each of these schemes,\n", + "along with the exact solution, are given in Figure 1\n", + "(with the “unphysical” parameter value $\\lambda=0.8$ chosen to enhance\n", + "the show the growth of numerical errors, even though in a real material\n", + "this would violate conservation of energy).\n", + "\n", + "The functions used in make the following figure are imported from [lab2_functions.py](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/lab2_functions.py)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2023-08-15T20:57:12.556777Z", + "start_time": "2023-08-15T20:57:09.233103Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# import and define functions\n", + "%matplotlib inline\n", + "import context\n", + "import matplotlib.pyplot as plt\n", + "from numlabs.lab2.lab2_functions import euler,beuler,leapfrog\n", + "import numpy as np\n", + "plt.style.use('ggplot')\n", + "\n", + "#\n", + "# save our three functions to a dictionary, keyed by their names\n", + "#\n", + "theFuncs={'euler':euler,'beuler':beuler,'leapfrog':leapfrog}\n", + "#\n", + "# store the results in another dictionary\n", + "#\n", + "output={}\n", + "#\n", + "#end time = 10 seconds\n", + "#\n", + "tend=10.\n", + "#\n", + "# start at 30 degC, air temp of 20 deg C\n", + "#\n", + "Ta=20.\n", + "To=30.\n", + "#\n", + "# note that lambda is a reserved keyword in python so call this\n", + "# thelambda\n", + "#\n", + "theLambda=0.8 #units have to be per minute if time in seconds\n", + "#\n", + "# dt = 10/npts = 10/30 = 1/3\n", + "#\n", + "npts=30\n", + "for name,the_fun in theFuncs.items():\n", + " output[name]=the_fun(npts,tend,To,Ta,theLambda)\n", + "#\n", + "# calculate the exact solution for comparison\n", + "#\n", + "exactTime=np.linspace(0,tend,npts)\n", + "exactTemp=Ta + (To-Ta)*np.exp(theLambda*exactTime)\n", + "#\n", + "# now plot all four curves\n", + "#\n", + "fig,ax=plt.subplots(1,1,figsize=(8,8))\n", + "ax.plot(exactTime,exactTemp,label='exact',lw=2)\n", + "for fun_name in output.keys():\n", + " the_time,the_temp=output[fun_name]\n", + " ax.plot(the_time,the_temp,label=fun_name,lw=2)\n", + "ax.set_xlim([0,2.])\n", + "ax.set_ylim([30.,90.])\n", + "ax.grid(True)\n", + "ax.set(xlabel='time (seconds)',ylabel='bar temp (deg C)')\n", + "out=ax.legend(loc='upper left')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure 1** A plot of the exact and computed solutions for the temperature of a\n", + "rock, with parameters: $T_a=20$, $T(0)=30$, $\\lambda= +0.8$,\n", + "$\\Delta t=\\frac{1}{3}$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice from these results that the mid-point/leap-frog scheme is the most accurate, and backward Euler the least accurate.\n", + "\n", + "The next section explains why some schemes are more accurate than others, and introduces a means to quantify the accuracy of a numerical approximation.\n", + "\n", + "### Round-off Error and Discretization Error \n", + "\n", + "From [Accuracy Example](#ex_accuracy) and the example in the Forward Euler\n", + "section of the previous lab,  it is obvious that a numerical\n", + "approximation is exactly that - **an approximation**. The process of\n", + "discretizing a differential equation inevitably leads to errors. 
In this\n", + "section, we will tackle two fundamental questions related to the\n", + "accuracy of a numerical approximation:\n", + "\n", + "- Where does the error come from (and how can we measure it)?\n", + "\n", + "- How can the error be controlled?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Where does the error come from? \n", + "\n", + "#### Round-off error:\n", + "\n", + "When attempting to solve differential equations on a computer, there are\n", + "two main sources of error. The first, *round-off error*, derives from\n", + "the fact that a computer can only represent real numbers by *floating\n", + "point* approximations, which have only a finite number of digits of\n", + "accuracy.\n", + "\n", + "* Mathematical note [floating point notation](#floating-point)\n", + "\n", + "\n", + "For example, we all know that the number $\\pi$ is a non-repeating\n", + "decimal, which to the first twenty significant digits is\n", + "$3.1415926535897932385\\dots$ Imagine a computer which stores only eight\n", + "significant digits, so that the value of $\\pi$ is rounded to\n", + "$3.1415927$.\n", + "\n", + "In many situations, these five digits of accuracy may be sufficient.\n", + "However, in some cases, the results can be catastrophic, as shown in the\n", + "following example: $$\\frac{\\pi}{(\\pi + 0.00000001)-\\pi}.$$ Since the\n", + "computer can only “see” 8 significant digits, the addition\n", + "$\\pi+0.00000001$ is simply equal to $\\pi$ as far as the computer is\n", + "concerned. Hence, the computed result is $\\frac{1}{0}$ - an undefined\n", + "expression! The exact answer $100000000\\pi$, however, is a very\n", + "well-defined non-zero value.\n", + "\n", + "A side note: round-off errors played a key role in Edward Lorenz's exploration of chaos theory in physics, see https://www.aps.org/publications/apsnews/200301/history.cfm" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Truncation error:\n", + "\n", + "The second source of error stems from the discretization of the problem,\n", + "and hence is called *discretization error* or *truncation error*. In\n", + "comparison, round-off error is always present, and is independent of the\n", + "discretization being used. The simplest and most common way to analyse\n", + "the truncation error in a scheme is using *Taylor series expansions*.\n", + "\n", + "Let us begin with the forward difference formula for the first\n", + "derivative, , which involves the discrete solution at times $t_{i+1}$\n", + "and $t_{i}$. Since only continuous functions can be written as Taylor\n", + "series, we expand the exact solution (instead of the discrete values\n", + "$T_i$) at the discrete point $t_{i+1}$:\n", + "\n", + "$$T(t_{i+1}) = T(t_i+\\Delta t) = T(t_i) + (\\Delta t) T^\\prime(t_i) + \n", + " \\frac{1}{2}(\\Delta t)^2 T^{\\prime\\prime}(t_i) +\\cdots$$\n", + "\n", + "\n", + "Rewriting to clean this up slightly gives **eq: feuler**\n", + "\n", + "$$\\begin{aligned}\n", + "T(t_{i+1}) &= T(t_i) + \\Delta t T^{\\prime}(t_i,T(t_i)) +\n", + " \\underbrace{\\frac{1}{2}(\\Delta t)^2T^{\\prime\\prime}(t_i) + \\cdots}\n", + "_{\\mbox{ truncation error}} \\\\ \\; \n", + " &= T(t_i) + \\Delta t T^{\\prime}(t_i) + {\\cal O}(\\Delta t^2).\n", + " \\end{aligned}$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This second expression writes the truncation error term in terms of\n", + "*order notation*. 
If we write $y = {\\cal O}(\\Delta t)$, then we mean\n", + "simply that $y < c \\cdot \\Delta t$ for some constant $c$, and we say that\n", + "“ $y$ is first order in $\\Delta t$ ” (since it depends on $\\Delta t$ to\n", + "the first power) or “ $y$ is big-oh of $\\Delta t$.” As $\\Delta t$ is\n", + "assumed small, the next term in the series, $\\Delta t^2$ is small\n", + "compared to the $\\Delta t$ term. In words, we say that forward euler is\n", + "*first order accurate* with errors of second order.\n", + "\n", + "It is clear from this that as $\\Delta t$ is reduced in size (as the\n", + "computational grid is refined), the error is also reduced. If you\n", + "remember that we derived the approximation from the limit definition of\n", + "derivative, then this should make sense. This dependence of the error on\n", + "powers of the grid spacing $\\Delta t$ is an underlying characteristic of\n", + "difference approximations, and we will see approximations with higher\n", + "orders in the coming sections …\n", + "\n", + "There is one more important distinction to be made here. The “truncation\n", + "error” we have been discussing so far is actually what is called *local\n", + "truncation error*. It is “local” in the sense that we have expanded the\n", + "Taylor series *locally* about the exact solution at the point $t_i$.\n", + "\n", + "There is also a *global truncation error* (or, simply, *global error*),\n", + "which is the error made during the course of the entire computation,\n", + "from time $t_0$ to time $t_n$. The difference between local and global\n", + "truncation error is illustrated in Figure 2. If the local error stays approximately\n", + "constant, then the global error will be approximately the local error times\n", + "the number of timesteps. For a fixed simulation length of $t$, the number of\n", + "timesteps required is $t/\\Delta t$, thus the global truncation error will be\n", + "approximately of the order of $1/\\Delta t$ times the local error, or about one \n", + "order of $\\Delta t$ *worse* (lower order) than the local error." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure Error:** Local and global truncation error. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It is easy to get a handle on the order of the local truncation error\n", + "using Taylor series, regardless of whether the exact solution is known,\n", + "but no similar analysis is available for the global error. We can write\n", + "\n", + "$$\\text{global error} = |T(t_n)-T_n|$$\n", + "\n", + "but this expression can only be\n", + "evaluated if the exact solution is known ahead of time (which is not the\n", + "case in most problems we want to compute, since otherwise we wouldn’t be\n", + "computing it in the first place!). Therefore, when we refer to\n", + "truncation error, we will always be referring to the local truncation\n", + "error." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "#### Second order accuracy\n", + "\n", + "Above we mentioned a problem with evaluating the mid-point method. 
If we start with three points $(t_0,t_1,t_2)$, \n", + "each separated by $\\Delta t/2$ so that $t_2 - t_0=\\Delta t$\n", + "\n", + "\\begin{align}\n", + "y(t_2)&=y(t_1) + y^\\prime (t_1,y(t_1))(t_2 - t_1) + \\frac{y^{\\prime \\prime}(t_1,y(t_1))}{2} (t_2 - t_1)^2 + \\frac{y^{\\prime \\prime \\prime}(t_1,y(t_1))}{6} (t_2 - t_1)^3 + h.o.t. \\ (eq.\\ a)\\\\\n", + "y(t_0)&=y(t_1) + y^\\prime (t_1,y(t_1))(t_0 - t_1) + \\frac{y^{\\prime \\prime}(t_1)}{2} (t_0 - t_1)^2 + \\frac{y^{\\prime \\prime \\prime}(t_1)}{6} (t_0 - t_1)^3 + h.o.t. \\ (eq.\\ b)\n", + "\\end{align}\n", + "\n", + "\n", + "where h.o.t. stands for \"higher order terms\". Rewriting in terms of $\\Delta t$:\n", + "\n", + "\n", + "\\begin{align}\n", + "y(t_2)&=y(t_1) + \\frac{\\Delta t}{2}y^\\prime (t_1,y(t_1)) + \\frac{\\Delta t^2}{8} y^{\\prime \\prime}(t_1,y(t_1)) + \\frac{\\Delta t^3}{48} y^{\\prime \\prime \\prime}(t_1,y(t_1)) + h.o.t. \\ (eq.\\ a)\\\\\n", + "y(t_0)&=y(t_1) - \\frac{\\Delta t}{2}y^\\prime (t_1,y(t_1)) + \\frac{\\Delta t^2}{8} y^{\\prime \\prime}(t_1,y(t_1)) - \\frac{\\Delta t^3}{48} y^{\\prime \\prime \\prime}(t_1,y(t_1)) + h.o.t. \\ (eq.\\ b)\n", + "\\end{align}\n", + "\n", + "\n", + "and subtracting:\n", + "\n", + "\\begin{align}\n", + "y(t_2)&=y(t_0) + \\Delta t y^\\prime (t_1,y(t_1)) + \\frac{\\Delta t^3}{24} y^{\\prime \\prime \\prime}(t_1,y(t_1)) + h.o.t. \\ (eq.\\ c)\n", + "\\end{align}\n", + "\n", + "where $t_1=t_0 + \\Delta t/2$\n", + "\n", + "Comparing with [eq: feuler](#eq:feuler) we can see that we've canceled the $\\Delta t^2$ terms, so that\n", + "if we drop the $\\frac{\\Delta t^3}{24} y^{\\prime \\prime \\prime}(t_1,y(t_1))$\n", + "and higher order terms we're doing one order better that foward euler, as long as we can solve the problem of\n", + "estimating y at the midpoint: $y(t_1) = y(t_0 + \\Delta t/2)$\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Mid-point and leap-frog\n", + "\n", + "The mid-point and leap-frog methods take two slightly different approaches to estimating $y(t_0 + \\Delta t/2)$. \n", + "For the [explicit mid-point method](https://en.wikipedia.org/wiki/Midpoint_method), we estimate $y$ at\n", + "the midpoint by taking a half-step:\n", + "\n", + "\n", + "\\begin{align}\n", + "k_1 & = \\Delta t y^\\prime(t_0,y(t_0)) \\\\\n", + "k_2 & = \\Delta t y^\\prime(t_0 + \\Delta t/2,y(t_0) + k_1/2) \\\\\n", + "y(t_0 + \\Delta t) &= y(t_0) + k_2\n", + "\\end{align}\n", + "\n", + "\n", + "Compare this to the [leap-frog method](https://en.wikipedia.org/wiki/Leapfrog_integration), which uses the results\n", + "from one half-interval to calculate the results for the next half-interval:\n", + "\n", + "\n", + "\\begin{align}\n", + "y(t_0 + \\Delta t/2) & = y(t_0) + \\frac{\\Delta t}{2} y^\\prime(t_0,y(t_0))\\ (i) \\\\\n", + "y(t_0 + \\Delta t) & = y(t_0) + \\Delta t y^\\prime(t_0 + \\Delta t/2,y(t_0 + \\Delta t/2)\\ (ii)\\\\\n", + "y(t_0 + 3 \\Delta t/2) & = y(t_0 + \\Delta t/2) + \\Delta t y^\\prime(t_0 + \\Delta t,y(t_0 + \\Delta t))\\ (iii) \\\\\n", + "y(t_0 + 2 \\Delta t) & = y(t_0 + \\Delta t) + \\Delta t y^\\prime(t_0 + 3\\Delta t/2,y(t_0 + 3 \\Delta t/2))\\ (iv) \\\\\n", + "\\end{align}\n", + "\n", + "\n", + "Comparing (iii) and (iv) shows how the method gets its name: the half-interval and whole interval values\n", + "are calculated by leaping over each other. 
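A compact sketch of the two bookkeeping strategies, written for a generic right-hand side $y^\prime = f(t, y)$ (the function and variable names are illustrative, not the lab's own code):

```python
def midpoint_step(f, t, y, dt):
    # explicit mid-point: estimate y at t + dt/2 with a half Euler step,
    # then use the slope there to take the full step
    k1 = dt * f(t, y)
    k2 = dt * f(t + 0.5 * dt, y + 0.5 * k1)
    return y + k2

def leapfrog(f, t0, y0, dt, nsteps):
    # leap-frog: whole-step and half-step values advance by leaping over
    # each other; the very first half step (i) uses forward Euler
    y_whole = y0
    y_half = y0 + 0.5 * dt * f(t0, y0)                     # step (i)
    t = t0
    for _ in range(nsteps):
        y_whole = y_whole + dt * f(t + 0.5 * dt, y_half)   # steps (ii), (iv)
        y_half = y_half + dt * f(t + dt, y_whole)          # step (iii)
        t += dt
    return y_whole

# one mid-point step and ten leap-frog steps for y' = -y, y(0) = 1
print(midpoint_step(lambda t, y: -y, 0.0, 1.0, 0.1),
      leapfrog(lambda t, y: -y, 0.0, 1.0, 0.1, 10))
```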
Once the first half and whole steps are done, the rest of the\n", + "integration is completed by repeating (iii) and (iv) as until the endpoint is reached.\n", + "\n", + "The leap-frog scheme has the advantage that it is *time reversible* or as the Wikipedia article says *sympletic*. \n", + "This means that estimating $y(t_1)$ and then using that value to go backwards by $-\\Delta t$ yields $y(t_0)$\n", + "exactly, which the mid-point method does not. The mid-point method, however, is one member (the 2nd order member)\n", + "of a family of *Runge Kutta* integrators, which will be covered in more detail in Lab 4.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### How can we control the error? \n", + "\n", + "Now that we’ve determined the source of the error in numerical methods,\n", + "we would like to find a way to control it; that is, we would like to be\n", + "able to compute and be confident that our approximate solution is\n", + "“close” to the exact solution. Round-off error is intrinsic to all\n", + "numerical computations, and cannot be controlled (except to develop\n", + "methods that do not magnify the error unduly … more on this later).\n", + "Truncation error, on the other hand, *is* under our control.\n", + "\n", + "In the simple ODE examples that we’re dealing with in this lab, the\n", + "round-off error in a calculation is much smaller than the truncation\n", + "error. Furthermore, the schemes being used are *stable with respect to\n", + "round-off error* in the sense that round-off errors are not magnified in\n", + "the course of a computation. So, we will restrict our discussion of\n", + "error control in what follows to the truncation error.\n", + "\n", + "However, there are many numerical algorithms in which the round-off\n", + "error can dominate the the result of a computation (Gaussian elimination\n", + "is one example, which you will see in Lab \\#3 ), and so we must always\n", + "keep it in mind when doing numerical computations.\n", + "\n", + "There are two fundamental ways in which the truncation error in an\n", + "approximation  can be reduced:\n", + "\n", + "1. **Decrease the grid spacing**. Provided that the second derivative of\n", + " the solution is bounded, it is clear from the error term in **eq: feuler** that as\n", + " $\\Delta t$ is reduced, the error will also get smaller. This principle was\n", + " demonstrated in an example from Lab \\#1 using the Forward Euler method. The disadvantage to\n", + " decreasing $\\Delta t$ is that the cost of the computation increases since more\n", + " steps must be taken. Also, there is a limit to how small $\\Delta t$ can be,\n", + " beyond which round-off errors will start polluting the computation.\n", + "\n", + "2. **Increase the order of the approximation**. We saw above that the\n", + " forward difference approximation of the first derivative is first\n", + " order accurate in the grid spacing. It is also possible to derive\n", + " higher order difference formulas which have a leading error term of\n", + " the form $(\\Delta t)^p$, with $p>1$. As noted above in [Section Second Order](#sec_secondOrder)\n", + " the midpoint formula\n", + " is a second order scheme, and some further examples will be given in\n", + " [Section Higher order Taylor](#sec_HigherOrderTaylor). 
The main disadvantage to using\n", + " very high order schemes is that the error term depends on higher\n", + " derivatives of the solution, which can sometimes be very large – in\n", + " this case, the stability of the scheme can be adversely affected\n", + " (for more on this, see [Section Stability](#Stability).\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-16T06:27:59.383864Z", + "start_time": "2022-01-16T06:27:59.341911Z" + } + }, + "source": [ + "### Problem Accuracy\n", + "\n", + "In order to investigate these two approaches to improving the accuracy of an approximation, you can use the code in\n", + "[terror.ipynb](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/terror2.ipynb)\n", + "to play with the solutions to the heat conduction equation. You will need the additional functions provided for this lab. These can be found on your local computer: numeric_2024/numlabs/lab2 (you will need to fetch upstream from github to get recent changes from our version to your clone before pulling those changes to your local machine; don't forget to commit your previous labs!). For a given function $\\lambda(T)$, and specified parameter values, you should experiment with various time steps and schemes, and compare the computed results (Note: only the answers to the assigned questions need to be handed in). Look at the different schemes (euler, leap-frog, midpoint, 4th order runge kutta) run them for various total times (tend) and step sizes (dt=tend/npts).\n", + "\n", + "The three schemes that will be used here are forward Euler (first order), leap-frog (second order) and the fourth order Runge-Kutta scheme (which will be introduced more thoroughly in Lab 4).\n", + "\n", + "Try three different step sizes for all three schemes for a total of 9 runs. It’s helpful to be able to change the axis limits to look at various parts of the plot." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Use your 9 results to answer parts a and b below.\n", + "\n", + "- a\\) Does increasing the order of the scheme, or decreasing the time step\n", + " always improve the solution?\n", + "\n", + "- b\\) How would you compute the local truncation error from the error plot?\n", + " And the global error? Do this on a plot for one set of parameters.\n", + "\n", + "- c\\) Similarly, how might you estimate the *order* of the local truncation\n", + " error? The order of the global error? ( **Hint:** An order $p$ scheme\n", + " has truncation error that looks like $c\\cdot(\\Delta t)^p$. Read the\n", + " error off the plots for several values of the grid spacing and use this\n", + " to find $p$.) Are the local and global error significantly different?\n", + " Why or why not?\n", + "\n", + "### Other Approximations to the First Derivative \n", + "\n", + "The Taylor series method of deriving difference formulae for the first\n", + "derivative is the simplest, and can be used to obtain approximations\n", + "with even higher order than two. There are also many other ways to\n", + "discretize the derivatives appearing in ODE’s, as shown in the following\n", + "sections…" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Higher Order Taylor Methods \n", + "\n", + "As mentioned earlier, there are many other possible approximations to\n", + "the first derivative using the Taylor series approach. The basic\n", + "approach in these methods is as follows:\n", + "\n", + "1. 
expand the solution in a Taylor series at one or more points\n",
+ "   surrounding the point where the derivative is to be approximated\n",
+ "   (for example, for the centered scheme, you used two points,\n",
+ "   $T(t_i+\Delta t)$ and $T(t_i-\Delta t)$). You also have to make sure that you\n",
+ "   expand the series to a high enough order …\n",
+ "\n",
+ "2. take combinations of the equations until the $T_i$ (and possibly\n",
+ "   some other derivative) terms are eliminated, and all you’re left\n",
+ "   with is the first derivative term.\n",
+ "\n",
+ "One example is the fourth-order centered difference formula for the\n",
+ "first derivative:\n",
+ "$$\frac{-T(t_{i+2})+8T(t_{i+1})-8T(t_{i-1})+T(t_{i-2})}{12\Delta t} =\n",
+ " T^\prime(t_i) + {\cal O}((\Delta t)^4)$$\n",
+ "\n",
+ "**Quiz:** Try the quiz at [this\n",
+ "link](https://phaustin.github.io/numeric/quizzes2/order.html)\n",
+ "related to this higher order scheme."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Predictor-Corrector Methods \n",
+ "\n",
+ "Another class of discretizations is called *predictor-corrector\n",
+ "methods*. Implicit methods can be difficult or expensive to use because\n",
+ "of the solution step, and so they are seldom used to integrate ODE’s.\n",
+ "Rather, they are often used as the basis for predictor-corrector\n",
+ "algorithms, in which a “prediction” for $T_{i+1}$ based only on an\n",
+ "explicit method is then “corrected” to give a better value by using this\n",
+ "prediction in an implicit method.\n",
+ "\n",
+ "To see the basic idea behind these methods, let’s go back (once again)\n",
+ "to the backward Euler method for the heat conduction problem, which\n",
+ "reads:\n",
+ "$$T_{i+1} = T_{i} + \Delta t \, \lambda( T_{i+1}, t_{i+1} ) \, ( T_{i+1}\n",
+ "- T_a ).$$ Note that after applying the backward difference formula,\n",
+ "all terms on the right hand side are evaluated at time $t_{i+1}$.\n",
+ "\n",
+ "Now, $T_{i+1}$ is defined implicitly in terms of itself, and unless\n",
+ "$\lambda$ is a very simple function, it may be very difficult to solve\n",
+ "this equation for the value of $T$ at each time step. One alternative,\n",
+ "mentioned already, is the use of a non-linear equation solver such as\n",
+ "Newton’s method to solve this equation. However, this is an iterative\n",
+ "scheme, and can lead to a lot of extra expense. A cheaper alternative is\n",
+ "to realize that we could try estimating or *predicting* the value of\n",
+ "$T_{i+1}$ using the simple explicit forward Euler formula and then use\n",
+ "this on the right hand side, to obtain a *corrected* value of $T_{i+1}$.\n",
+ "The resulting scheme is $$\begin{array}{ll}\n",
+ "  \mathbf{Prediction}: & \widetilde{T}_{i+1} = T_i + \Delta t \,\n",
+ "  \lambda(T_i,t_i) \, (T_i-T_a), \\ \; \\\n",
+ "  \mathbf{Correction}: & T_{i+1} = T_i + \Delta t \,\n",
+ "  \lambda(\widetilde{T}_{i+1},t_{i+1}) \, (\widetilde{T}_{i+1}-T_a).\n",
+ "\end{array}$$\n",
+ "\n",
+ "This method is an explicit scheme, which can also be shown to be second\n",
+ "order accurate in $\Delta t$. It is the simplest of a whole class of schemes\n",
+ "called *predictor-corrector schemes* (more information is available on\n",
+ "these methods in a numerical analysis book such as Burden and Faires (1981))."
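For the constant-$\lambda$ heat-conduction example the predictor-corrector step is only two lines; the sketch below is illustrative (parameter values invented), not the course implementation:

```python
def predictor_corrector_step(T, Ta, lam, dt):
    # predict with forward Euler, then correct by re-evaluating the
    # right-hand side at the predicted value
    T_pred = T + dt * lam * (T - Ta)        # prediction
    return T + dt * lam * (T_pred - Ta)     # correction

# one illustrative step with Ta = 20, T0 = 30, lambda = -0.8, dt = 0.1
print(predictor_corrector_step(30.0, 20.0, -0.8, 0.1))
```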
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Other Methods \n", + "\n", + "The choice of methods is made even greater by two other classes of\n", + "schemes:\n", + "\n", + " **Runge-Kutta methods:**\n", + "\n", + "- We have already seen two examples of the Runge-Kutta family of integrators: Forward Euler is a first order Runge-Kutta, and the midpoint method is second order Runge-Kutta. Fourth and fifth order Runge-Kutta algorithms will be described in Labs #4 and #5\n", + "\n", + "**Multi-step methods:**\n", + " \n", + "- These use values of the solution at more than one previous time step in order to increase the accuracy. Compare these to one-step schemes, such as forward Euler, which use the solution only at one previous step.\n", + "\n", + "More can be found on these (and other) methods in  Burden and Faires (1981) and Newman (2013)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Accuracy Summary\n", + "\n", + "In this section, you’ve been given a short overview of the accuracy of\n", + "difference schemes for first order ordinary differential equations.\n", + "We’ve seen that accuracy can be improved by either decreasing the grid\n", + "spacing, or by choosing a higher order scheme from one of several\n", + "classes of methods. When using a higher order scheme, it is important to\n", + "realize that the cost of the computation usually rises due to an added\n", + "number of function evaluations (especially for multi-step and\n", + "Runge-Kutta methods). When selecting a numerical scheme, it is important\n", + "to keep in mind this trade-off between accuracy and cost.\n", + "\n", + "However, there is another important aspect of discretization that we\n", + "have pretty much ignored. The next section will take a look at schemes\n", + "of various orders from a different light, namely that of *stability*." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-16T05:49:29.341551Z", + "start_time": "2022-01-16T05:49:29.266183Z" + } + }, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Stability of Difference Approximations \n", + "\n", + "The easiest way to introduce the concept of stability is for you to see\n", + "it yourself.\n", + "\n", + "### Problem Stability\n", + "\n", + "This example is a slight modification of [Problem accuracy](#Problem-Accuracy) from the previous section on accuracy. We will add one scheme (backward euler) and drop the 4th order Runge-Kutta, and change the focus from error to stability. The value of $\\lambda$ is assumed a constant, so that the backward Euler scheme results in an explicit method, and we’ll also compute a bit further in time, so that any instability manifests itself more clearly. Run the [stability2.ipynb](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/stability2.ipynb) notebook in numlabs/lab2 with $\\lambda= -8\\ s^{-1}$, with $\\Delta t$ values that just straddle the stability condition for the forward euler scheme\n", + "($\\Delta t < \\frac{-2}{\\lambda}$, derived below). Create plots that show that \n", + "1) the stability condition does in fact predict the onset of the instablity in the euler scheme, and \n", + "\n", + "2) determine whether the backward euler and leap-frog are stable or unstable for the same $\\Delta t$ values. 
(you should run out to longer than tend=10 seconds to see if there is a delayed instability.)\n", + "\n", + "and provide comments/markdown code explaining what you see in the plots.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Determining Stability Properties\n", + "The heat conduction problem, as you saw in Lab \\#1, has solutions that are stable when $\\lambda<0$. It is clear from\n", + "[Problem stability](#Problem-Stability) above that some higher order schemes (namely, the leap-frog scheme) introduce a spurious oscillation not present in the continuous solution. This is called a *computational* or *numerical instability*, because it is an artifact of the discretization process only. This instability is not a characteristic of the heat conduction problem alone, but is present in other problems where such schemes are used. Furthermore, as we will see below, even a scheme such as forward Euler can be unstable for certain problems and choices of the time step.\n", + "\n", + "There is a way to determine the stability properties of a scheme, and that is to apply the scheme to the *test equation* $$\\frac{dz}{dt} = \\lambda z$$ where $\\lambda$ is a complex constant.\n", + "\n", + "The reason for using this equation may not seem very clear. But if you think in terms of $\\lambda z$ as being the linearization of some more complex right hand side, then the solution to is $z=e^{\\lambda t}$, and so $z$ represents, in some sense, a Fourier mode of the solution to the linearized ODE problem. We expect that the behaviour of the simpler, linearized problem should mimic that of the original problem.\n", + "\n", + "Applying the forward Euler scheme to this test equation, results in the following difference formula $$z_{i+1} = z_i+(\\lambda \\Delta t)z_i$$ which is a formula that we can apply iteratively to $z_i$ to obtain\n", + "$$\\begin{aligned}\n", + "z_{i+1} &=& (1+\\lambda \\Delta t)z_{i} \\\\\n", + " &=& (1+\\lambda \\Delta t)^2 z_{i-1} \\\\\n", + " &=& \\cdots \\\\\n", + " &=& (1+\\lambda \\Delta t)^{i+1} z_{0}.\\end{aligned}$$ The value of $z_0$ is fixed by the initial conditions, and so this difference equation for $z_{i+1}$ will “blow up” as $i$ gets bigger, if the factor in front of $z_0$ is greater than 1 in magnitude – this is a sign of instability. Hence, this analysis has led us to the conclusion that if\n", + "$$|1+\\lambda\\Delta t| < 1,$$ then the forward Euler method is stable. For *real* values of $\\lambda<0$, this inequality can be shown to be equivalent to the *stability condition* $$\\Delta t < \\frac{-2}{\\lambda},$$ which is a restriction on how large the time step can be so that the numerical solution is stable." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2023-08-15T23:56:36.411227Z", + "start_time": "2023-08-15T23:56:36.256588Z" + } + }, + "source": [ + "### Problem Backward Euler\n", + "\n", + "Perform a similar analysis for the backward Euler formula, and show that it is *always stable* when $\\lambda$ is real and negative. Confirm this using plots using similar code to Problem: Stability (i.e. 
using stability2.ipynb if you haven't gone through Problem: Stability yet)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### Example: leap-frog\n", + "\n", + "*Now, what about the leap-frog scheme?*\n", + "\n", + "Applying the test equation to the leap-frog scheme results in the\n", + "difference equation $$z_{i+1} = z_{i-1} + 2 \\lambda \\Delta t z_i.$$\n", + "Difference formulas such as this one are typically solved by looking for\n", + "a solution of the form $z_i = w^i$ which, when substituted into this\n", + "equation, yields $$w^2 - 2\\lambda\\Delta t w - 1 = 0,$$ a quadratic\n", + "equation with solution\n", + "$$w = \\lambda \\Delta t \\left[ 1 \\pm \\sqrt{1+\\frac{1}{(\\lambda\n", + " \\Delta t)^2}} \\right].$$ The solution to the original difference\n", + "equation, $z_i=w^i$ is stable only if all solutions to this quadratic\n", + "satisfy $|w|<1$, since otherwise, $z_i$ will blow up as $i$ gets large.\n", + "\n", + "The mathematical details are not important here – what is important is\n", + "that there are two (possibly complex) roots to the quadratic equation\n", + "for $w$, and one is *always* greater than 1 in magnitude *unless*\n", + "$\\lambda$ is pure imaginary ( has real part equal to zero), *and*\n", + "$|\\lambda \\Delta t|<1$. For the heat conduction equation in [Problem Stability](#Stability) (which is already of the same form as the test equation ), $\\lambda$ is clearly not imaginary, which explains the presence of the instability for the leap-frog scheme.\n", + "\n", + "Nevertheless, the leap-frog scheme is still useful for computations. In fact, it is often used in geophysical applications, as you will see later on when discretizing." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "An example of where the leap-frog scheme is superior to the other first\n", + "order schemes is for undamped periodic motion (which arose in the\n", + "weather balloon example from Lab \\#1 ). This corresponds to the system\n", + "of ordinary differential equations (with the damping parameter, $\\beta$,\n", + "taken to be zero): $$\\frac{dy}{dt} = u,$$\n", + "$$\\frac{du}{dt} = - \\frac{\\gamma}{m} y.$$ You’ve already discretized\n", + "this problem using the forward difference formula, and the same can be\n", + "done with the second order centered formula. We can then compare the\n", + "forward Euler and leap-frog schemes applied to this problem. We code this\n", + "in the module\n", + "\n", + "Solution plots are given in [Figure oscilator](#fig_oscillator) below, for\n", + "parameters $\\gamma/m=1$, $\\Delta t=0.25$, $y(0)=0.0$ and $u(0)=1.0$, and\n", + "demonstrate that the leap-frog scheme is stable, while forward Euler is\n", + "unstable. This can easily be explained in terms of the stability\n", + "criteria we derived for the two schemes when applied to the test\n", + "equation. The undamped oscillator problem is a linear problem with pure\n", + "imaginary eigenvalues, so as long as $|\\sqrt{\\gamma/m}\\Delta t|<1$, the\n", + "leap-frog scheme is stable, which is obviously true for the parameter\n", + "values we are given. Furthermore, the forward Euler stability condition\n", + "$|1+\\lambda\\Delta\n", + " t|<1$ is violated for any choice of time step (when $\\lambda$ is pure\n", + "imaginary) and so this scheme is always unstable for the undamped\n", + "oscillator. 
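The stability criteria derived above are easy to check numerically. The sketch below evaluates the forward Euler amplification factor $|1+\lambda\Delta t|$ and the magnitudes of the two leap-frog roots $w$ for two illustrative choices of $\lambda$ (the specific numbers are only for demonstration):

```python
import numpy as np

def amplification_factors(lam, dt):
    # forward Euler: z_{i+1} = (1 + lam*dt) z_i
    feuler = abs(1.0 + lam * dt)
    # leap-frog roots of w^2 - 2*lam*dt*w - 1 = 0
    roots = np.roots([1.0, -2.0 * lam * dt, -1.0])
    return feuler, np.abs(roots)

# real, negative lambda (heat conduction): one leap-frog root exceeds 1
print(amplification_factors(-8.0, 0.1))
# pure imaginary lambda (undamped oscillator): both leap-frog roots sit on
# the unit circle when |lam*dt| < 1, while the forward Euler factor exceeds 1
print(amplification_factors(1j, 0.25))
```

The printed magnitudes reproduce the conclusions above: for real negative $\lambda$ one leap-frog root is always larger than one in magnitude, while for pure imaginary $\lambda$ with $|\lambda\Delta t|<1$ the leap-frog roots are neutrally stable and forward Euler is not.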
The github link to the oscillator module is [oscillator.py](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/oscillator.py)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-16T08:14:30.453763Z", + "start_time": "2022-01-16T08:14:30.035238Z" + } + }, + "outputs": [], + "source": [ + "import numlabs.lab2.oscillator as os\n", + "the_times=np.linspace(0,20.,80)\n", + "yvec_init=[0,1]\n", + "output_euler=os.euler(the_times,yvec_init)\n", + "output_mid=os.midpoint(the_times,yvec_init)\n", + "output_leap=os.leapfrog(the_times,yvec_init)\n", + "answer=np.sin(the_times)\n", + "plt.style.use('ggplot')\n", + "fig,ax=plt.subplots(1,1,figsize=(7,7))\n", + "ax.plot(the_times,(output_euler[0,:]-answer),label='euler')\n", + "ax.plot(the_times,(output_mid[0,:]-answer),label='midpoint')\n", + "ax.plot(the_times,(output_leap[0,:]-answer),label='leapfrog')\n", + "ax.set(ylim=[-2,2],xlim=[0,20],title='global error between sin(t) and approx. for three schemes',\n", + " xlabel='time',ylabel='exact - approx')\n", + "ax.legend(loc='best');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure numerical**: Numerical solution to the undamped harmonic oscillator problem, using the forward Euler and leap-frog schemes. Parameter values: $\\gamma / m=1.0$, $\\Delta t=0.25$, $y(0)=0$, $u(0)=1.0$. The exact solution is a sinusoidal wave.\n", + "\n", + "Had we taken a larger time step (such as $\\Delta t=2.0$, for example), then even the leap-frog scheme is unstable. Furthermore, if we add damping ($\\beta\\neq 0$), then the eigenvalues are no longer pure imaginary, and the leap-frog scheme is unstable no matter what time step we use." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Stiff Equations \n", + "\n", + "This Lab has dealt only with ODE’s (and systems of\n", + "ODE’s) that are *non-stiff*. *Stiff equations* are equations that have\n", + "solutions with at least two widely varying times scales over which the\n", + "solution changes. An example of stiff solution behaviour is a problem\n", + "with solutions that have rapid, transitory oscillations, which die out\n", + "over a short time scale, after which the solution slowly decays to an\n", + "equilibrium. A small time step is required in the initial transitory\n", + "region in order to capture the rapid oscillations. However, a larger\n", + "time step can be taken in the non-oscillatory region where the solution\n", + "is smoother. Hence, using a very small time step will result in very\n", + "slow and inefficient computations.\n", + "\n", + "There are also many other numerical schemes designed specifically for\n", + "stiff equations, most of which are implicit schemes. We will not\n", + "describe any of them here – you can find more information in a numerical\n", + "analysis text such as  @burden-faires." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Difference Approximations of Higher Derivatives\n", + "\n", + "Higher derivatives can be discretized in a similar way to what we did\n", + "for first derivatives. 
Let’s consider for now only the second\n", + "derivative, for which one possible approximation is the second order\n", + "centered formula: $$\\frac{y(t_{i+1})-2y(t_i)+y(t_{i-1})}{(\\Delta t)^2} = \n", + " y^{\\prime\\prime}(t_i) + {\\cal O}((\\Delta t)^2),$$ There are, of course,\n", + "many other possible formulae that we might use, but this is the most\n", + "commonly used." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Taylor Series \n", + "(Hand in png file)\n", + "\n", + "- Use Taylor series to derive the second order centered formula above\n", + "\n", + "- For more practice (although not required), try deriving a higher order approximation as well. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Summary \n", + "\n", + "This lab has discussed the accuracy and stability of difference schemes\n", + "for simple first order ODEs. The results of the problems should have\n", + "made it clear to you that choosing an accurate and stable discretization\n", + "for even a very simple problem is not straightforward. One must take\n", + "into account not only the considerations of accuracy and stability, but\n", + "also the cost or complexity of the scheme. Selecting a numerical method\n", + "for a given problem can be considered as an art in itself.\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes\n", + "\n", + "### Taylor Polynomials and Taylor Series\n", + "\n", + "\n", + "Taylor Series are of fundamental importance in numerical analysis. They\n", + "are the most basic tool for talking about the approximation of\n", + "functions. Consider a function $f(x)$ that is smooth – when we say\n", + "“smooth”, what we mean is that its derivatives exist and are bounded\n", + "(for the following discussion, we need $f$ to have $(n+1)$ derivatives).\n", + "We would like to approximate $f(x)$ near the point $x=x_0$, and we can\n", + "do it as follows:\n", + "$$f(x) = \\underbrace{P_n(x)}_{\\mbox{Taylor polynomial}} +\n", + " \\underbrace{R_n(x)}_{\\mbox{remainder term}},$$ where\n", + "$$P_n(x)=f(x_0)+ f^\\prime(x_0)(x-x_0) +\n", + " \\frac{f^{\\prime\\prime}(x_0)}{2!}(x-x_0)^2 + \\cdots + \n", + " \\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n$$ is the *$n$th order Taylor\n", + "polynomial* of $f$ about $x_0$, and\n", + "$$R_n(x)=\\frac{f^{(n+1)}(\\xi(x))}{(n+1)!}(x-x_0)^{n+1}$$ is the\n", + "*remainder term* or *truncation error*. The point $\\xi(x)$ in the error\n", + "term lies somewhere between the points $x_0$ and $x$. If we look at the\n", + "infinite sum ( let $n\\rightarrow\\infty$), then the resulting infinite\n", + "sum is called the *Taylor series of $f(x)$ about $x=x_0$*. This result\n", + "is also know as *Taylor’s Theorem*.\n", + "\n", + "Remember that we assumed that $f(x)$ is smooth (in particular, that its\n", + "derivatives up to order $(n+1)$ exist and are finite). That means that\n", + "all of the derivatives appearing in $P_n$ and $R_n$ are bounded.\n", + "Therefore, there are two ways in which we can think of the Taylor\n", + "polynomial $P_n(x)$ as an approximation of $f(x)$:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. First of all, let us fix $n$. Then, we can improve the approximation\n", + " by letting $x$ approach $x_0$, since as $(x-x_0)$ gets small, the\n", + " error term $R_n(x)$ goes to zero ($n$ is considered fixed and all\n", + " terms depending on $n$ are thus constant). 
Therefore, the\n", + " approximation improves when $x$ gets closer and closer to $x_0$.\n", + "\n", + "2. Alternatively, we can think of fixing $x$. Then, we can improve the\n", + " approximation by taking more and more terms in the series. When $n$\n", + " is increased, the factorial in the denominator of the error term\n", + " will eventually dominate the $(x-x_0)^{n+1}$ term (regardless of how\n", + " big $(x-x_0)$ is), and thus drive the error to zero.\n", + "\n", + "In summary, we have two ways of improving the Taylor polynomial\n", + "approximation to a function: by evaluating it at points closer to the\n", + "point $x_0$; and by taking more terms in the series.\n", + "\n", + "This latter property of the Taylor expansion can be seen by a simple example.\n", + "Consider the Taylor polynomial for the function $f(x)=\\sin(x)$ about the\n", + "point $x_0=0$. All of the even terms are zero since they involve $sin(0)$, \n", + "so that if we take $n$\n", + "odd ( $n=2k+1$), then the $n$th order Taylor polynomial for $sin(x)$ is\n", + "\n", + "$$P_{2k+1}(x)=x - \\frac{x^3}{3!}+\\frac{x^5}{5!} -\\frac{x^7}{7!}+\\cdots\n", + " +\\frac{x^{2k+1}}{(2k+1)!}.\\ eq: taylor$$\n", + "\n", + "The plot in Figure: Taylor illustrates quite clearly\n", + "how the approximation improves both as $x$ approaches 0, and as $n$ is\n", + "increased." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure: Taylor** -- Plot of $\\sin(x)$ compared to its Taylor polynomial approximations\n", + "about $x_0=0$, for various values of $n=2k +1$ in eq: taylor." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Consider a specific Taylor polynomial, say $P_3(x)$ ( fix $n=3$). Notice\n", + "that for $x$ far away from the origin, the polynomial is nowhere near\n", + "the function $\\sin(x)$. However, it approximates the function quite well\n", + "near the origin. On the other hand, we could take a specific point,\n", + "$x=5$, and notice that the Taylor series of orders 1 through 7 do not\n", + "approximate the function very well at all. Nevertheless the\n", + "approximation improves as $n$ increases, as is shown by the 15th order\n", + "Taylor polynomial." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Floating Point Representation of Numbers\n", + "\n", + "Unlike a mathematician, who can deal with real numbers having infinite\n", + "precision, a computer can represent numbers with only a finite number of\n", + "digits. The best way to understand how a computer stores a number is to\n", + "look at its *floating-point form*, in which a number is written as\n", + "$$\\pm 0.d_1 d_2 d_3 \\ldots d_k \\times 10^n,$$ where each digit, $d_i$ is\n", + "between 0 and 9 (except $d_1$, which must be non-zero). Floating point\n", + "form is commonly used in the physical sciences to represent numerical\n", + "values; for example, the Earth’s radius is approximately 6,400,000\n", + "metres, which is more conveniently written in floating point form as\n", + "$0.64\\times 10^7$ (compare this to the general form above).\n", + "\n", + "Computers actually store numbers in *binary form* (i.e. in base-2\n", + "floating point form, as compared to the decimal or base-10 form shown\n", + "above). However, it is more convenient to use the decimal form in order\n", + "to illustrate the basic idea of computer arithmetic. 
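A tiny illustration of the binary storage (a minimal sketch, not part of the lab code): simple decimal fractions such as 0.1 have no exact binary representation, so even the simplest arithmetic carries a small representation error.

```python
# 0.1 has no exact binary representation, so the stored value is slightly off ...
print(f"{0.1:.20f}")           # 0.10000000000000000555...
# ... and simple sums inherit that representation error
print(0.1 + 0.2 == 0.3)        # False
print(abs(0.1 + 0.2 - 0.3))    # about 5.6e-17
```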
For a good\n", + "discussion of the binary representation of numbers, see Burden & Faires\n", + " [sec. 1.2] or Newman section 4.2.\n", + "\n", + "For the remainder of this discussion, assume that we’re dealing with a\n", + "computer that can store numbers with up to 8 *significant digits*\n", + "(i.e. $k=8$) and exponents in the range $-38 \\leq n \\leq 38$. Based on\n", + "these values, we can make a few observations regarding the numbers that\n", + "can be represented:\n", + "\n", + "- The largest number that can be represented is about $1.0\\times\n", + " 10^{+38}$, while the smallest is $1.0\\times 10^{-38}$.\n", + "\n", + "- These numbers have a lot of *holes*, where real numbers are missed.\n", + " For example, consider the two consecutive floating point numbers\n", + " $$0.13391482 \\times 10^5 \\;\\;\\; {\\rm and} \\;\\;\\; 0.13391483 \\times 10^5,$$\n", + " or 13391.482 and 13391.483. Our floating-point number system cannot\n", + " represent any numbers between these two values, and hence any number\n", + " in between 13391.482 and 13391.483 must be approximated by one of\n", + " the two values. Another way of thinking of this is to observe that\n", + " $0.13391482 \\times 10^5$ does not represent just a single real\n", + " number, but a whole *range* of numbers.\n", + "\n", + "- Notice that the same amount of floating-point numbers can be\n", + " represented between $10^{-6}$ and $10^{-5}$ as are between $10^{20}$\n", + " and $10^{21}$. Consequently, the density of floating points numbers\n", + " increases as their magnitude becomes smaller. That is, there are\n", + " more floating-point numbers close to zero than there are far away.\n", + " This is illustrated in the figure below.\n", + "\n", + " The floating-point numbers (each represented by a $\\times$) are\n", + " more dense near the origin.\n", + " \n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The values $k=8$ and $-38\\leq n \\leq 38$ correspond to what is known as\n", + "*single precision arithmetic*, in which 4 bytes (or units of memory in a\n", + "computer) are used to store each number. It is typical in many\n", + "programming languages, including $C++$, to allow the use of higher\n", + "precision, or *double precision*, using 8 bytes for each number,\n", + "corresponding to values of $k=16$ and $-308\\leq n \\leq 308$, thereby\n", + "greatly increasing the range and density of numbers that can be\n", + "represented. When doing numerical computations, it is customary to use\n", + "double-precision arithmetic, in order to minimize the effects of\n", + "round-off error (in a $C++$ program, you can define a variable ` x` to\n", + "be double precision using the declaration ` double x;`).\n", + "\n", + "Sometimes, double precision arithmetic may help in eliminating round-off\n", + "error problems in a computation. On the minus side, double precision\n", + "numbers require more storage than their single precision counterparts,\n", + "and it is sometimes (but not always) more costly to compute in double\n", + "precision. Ultimately, though, using double precision should not be\n", + "expected to be a cure-all against the difficulties of round-off errors.\n", + "The best approach is to use an algorithm that is not unstable with\n", + "respect to round-off error. For an example where increasing precision\n", + "will not help, see the section on Gaussian elimination in Lab \\#3." 
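The precision and range quoted above can be inspected directly with NumPy (a minimal sketch, assuming only that `numpy` is installed); `np.spacing` also shows the "holes" between representable numbers growing with magnitude:

```python
import numpy as np

# machine epsilon and largest representable value for single vs double precision
for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    print(dtype.__name__, info.eps, info.max)

# the gap ("hole") to the next representable double grows with the magnitude of the number
for x in (1e-6, 1.0, 1e20):
    print(x, np.spacing(x))
```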
+ ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "218.7px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab3/01-lab3.ipynb.txt b/_sources/notebooks/lab3/01-lab3.ipynb.txt new file mode 100644 index 0000000..97420e6 --- /dev/null +++ b/_sources/notebooks/lab3/01-lab3.ipynb.txt @@ -0,0 +1,2303 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 3: Linear Algebra (Sept. 12, 2017)\n", + "\n", + "Grace Yung" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## List of Problems\n", + "\n", + "- [Problem One](#Problem-One): Pollution Box Model\n", + "- [Problem Two](#Problem-Two): Condition number for Dirichlet problem\n", + "- [Problem Three](#Problem-Three): Condition number for Neumann problem\n", + "- [Problem Four](#Problem-Four): Condition number for periodic problem" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "The object of this lab is to familiarize you with some of the common\n", + "techniques used in linear algebra. You can use the Python software\n", + "package to solve some of the problems presented in the lab. There are\n", + "examples of using the Python commands as you go along in the lab. In\n", + "particular, after finishing the lab, you will be able to\n", + "\n", + "- Define: condition number, ill-conditioned matrix, singular matrix,\n", + " LU decomposition, Dirichlet, Neumann and periodic boundary\n", + " conditions.\n", + "\n", + "- Find by hand or using Python: eigenvalues, eigenvectors, transpose,\n", + " inverse of a matrix, determinant.\n", + "\n", + "- Find using Python: condition numbers.\n", + "\n", + "- Explain: pivoting.\n", + "\n", + "There is a description of using Numpy and Python for Matrices at the end of the lab. It includes a brief description of how to use the built-in functions introduced in\n", + "this lab. Just look for the paw prints:\n", + "\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n", + "\n", + "when you are not sure what\n", + "functions to use, and this will lead you to the mini-manual." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prerequisites\n", + "\n", + "You should have had an introductory course in linear algebra." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# import the context to find files\n", + "import context\n", + "# import the quiz script\n", + "from numlabs.lab3 import quiz3\n", + "# import numpy\n", + "import numpy as np" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Linear Systems\n", + "\n", + "In this section, the concept of a matrix will be reviewed. The basic\n", + "operations and methods in solving a linear system are introduced as\n", + "well.\n", + "\n", + "Note: **Whenever you see a term you are not familiar with, you can find\n", + "a definition in the [glossary](#Glossary).**\n", + "\n", + "### What is a Matrix?\n", + "\n", + "Before going any further on how to solve a linear system, you need to\n", + "know what a linear system is. A set of m linear equations with n\n", + "unknowns: \n", + "\n", + "
(System of Equations)
\n", + "\n", + "$$\\begin{array}{ccccccc}\n", + "a_{11}x_{1} & + & \\ldots & + & a_{1n}x_{n} & = & b_{1} \\\\\n", + "a_{21}x_{1} & + & \\ldots & + & a_{2n}x_{n} & = & b_{2} \\\\\n", + " & & \\vdots & & & & \\vdots \\\\\n", + "a_{m1}x_{1} & + & \\ldots & + & a_{mn}x_{n} & = & b_{m}\n", + "\\end{array}\n", + "$$\n", + "\n", + "\n", + "can be represented as an augmented matrix:\n", + "\n", + "$$\\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{ccccc}\n", + " a_{11} & & \\ldots & & a_{1n} \\\\\n", + " \\vdots & & \\ddots & & \\vdots \\\\\n", + " a_{m1} & & \\ldots & & a_{mn}\n", + " \\end{array}\n", + "&\n", + " \\left|\n", + " \\begin{array}{rc}\n", + " & b_{1} \\\\ & \\vdots \\\\ & b_{m}\n", + " \\end{array}\n", + " \\right.\n", + "\\end{array}\n", + "\\right]$$\n", + "\n", + "Column 1 through n of this matrix contain the coefficients $a_{ij}$ of\n", + "the unknowns in the set of linear equations. The right most column is\n", + "the *augmented column*, which is made up of the coefficients of the\n", + "right hand side $b_i$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Quiz on Matrices\n", + "Which matrix matches this system of equations?\n", + "\n", + "$$\\begin{array}{lcr}\n", + "2x + 3y + 6z &=& 19\\\\\n", + "3x + 6y + 9z &=& 21\\\\\n", + "x + 5y + 10z &=& 0\n", + "\\end{array}$$\n", + "(A)\n", + "$$\\left[ \\begin{array}{ccc}\n", + "2 & 3 & 1 \\\\\n", + "3 & 6 & 5 \\\\\n", + "6 & 9 & 10\\\\\n", + "19 & 21 & 0\n", + "\\end{array}\n", + "\\right]$$\n", + "(B)\n", + "$$\\left[ \\begin{array}{ccc}\n", + "2 & 3 & 6\\\\\n", + "3 & 6 & 9\\\\\n", + "1 & 5 & 10\n", + "\\end{array}\n", + "\\right]$$\n", + "(C)\n", + "$$\\left[ \\begin{array}{ccc|c}\n", + "1 & 5 & 10 & 0\\\\\n", + "2 & 3 & 6 & 19\\\\\n", + "3 & 6 & 9 & 21\n", + "\\end{array}\n", + "\\right]$$\n", + "(D)\n", + "$$\\left[ \\begin{array}{ccc|c}\n", + "2 & 3 & 6 & -19\\\\\n", + "3 & 6 & 9 & -21\\\\\n", + "1 & 5 & 10 & 0 \n", + "\\end{array}\n", + "\\right]$$\n", + "\n", + "In the following, replace 'x' by 'A', 'B', 'C', or 'D' and run the cell." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(quiz3.matrix_quiz(answer = 'xxx'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Quick Review\n", + "\n", + "[lab3:sec:quick]: (#Quick-Review)\n", + "\n", + "Here is a review on the basic matrix operations, including addition,\n", + "subtraction and multiplication. These are important in solving linear\n", + "systems.\n", + "\n", + "Here is a short exercise to see how much you remember:\n", + "\n", + "Let $x = \\left[ \\begin{array}{r} 2 \\\\ 2 \\\\ 7 \\end{array} \\right] , \n", + " y = \\left[ \\begin{array}{r} -5 \\\\ 1 \\\\ 3 \\end{array} \\right] , \n", + " A = \\left[ \\begin{array}{rrr} 3 & -2 & 10 \\\\ \n", + " -6 & 7 & -4 \n", + " \\end{array} \\right],\n", + " B = \\left[ \\begin{array}{rr} -6 & 4 \\\\ 7 & -1 \\\\ 2 & 9 \n", + " \\end{array} \\right]$\n", + "\n", + "Calculate the following:\n", + "\n", + "1. $x + y$\n", + "\n", + "2. $x^{T}y$\n", + "\n", + "3. $y-x$\n", + "\n", + "4. $Ax$\n", + "\n", + "5. $y^{T}A$\n", + "\n", + "6. $AB$\n", + "\n", + "7. $BA$\n", + "\n", + "8. $AA$\n", + "\n", + "The solutions to these exercises are available [here](lab3_files/quizzes/quick/quick.html)\n", + "\n", + "After solving the questions by hand, you can also use Python to check\n", + "your answers.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "## A cell to use Python to check your answers\n", + "x = np.array([2, 2, 7])\n", + "y = np.array([-5, 1, 3])\n", + "A = np.array([[3, -2, 10],\n", + " [-6, 7, -4]])\n", + "B = np.array([[-6, 4],\n", + " [7, -1],\n", + " [2, 9]])\n", + "print(f'(x+y) is {x+y}')\n", + "print(f'x^T y is {np.dot(x, y)}')\n", + "\n", + "## you do the rest!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Gaussian Elimination\n", + "\n", + "[lab3:sec:gaus]: <#Gaussian-Elimination> \"Gaussian Elimination\"\n", + "\n", + "The simplest method for solving a linear system is Gaussian elimination,\n", + "which uses three types of *elementary row operations*:\n", + "\n", + "- Multiplying a row by a non-zero constant ($kE_{ij}$)\n", + "\n", + "- Adding a multiple of one row to another ($E_{ij} + kE_{kj}$)\n", + "\n", + "- Exchanging two rows ($E_{ij} \\leftrightarrow E_{kj}$)\n", + "\n", + "Each row operation corresponds to a step in the solution of the [System\n", + "of Equations](#lab3:eq:system) where the equations are combined\n", + "together. *It is important to note that none of those operations changes\n", + "the solution.* There are two parts to this method: elimination and\n", + "back-substitution. The purpose of the process of elimination is to\n", + "eliminate the matrix entries below the main diagonal, using row\n", + "operations, to obtain a upper triangular matrix with the augmented\n", + "column. Then, you will be able to proceed with back-substitution to find\n", + "the values of the unknowns.\n", + "\n", + "Try to solve this set of linear equations:\n", + "\n", + "$$\\begin{array}{lrcrcrcr}\n", + " E_{1j}: & 2x_{1} & + & 8x_{2} & - & 5x_{3} & = & 53 \\\\\n", + " E_{2j}: & 3x_{1} & - & 6x_{2} & + & 4x_{3} & = & -48 \\\\\n", + " E_{3j}: & x_{1} & + & 2x_{2} & - & x_{3} & = & 13\n", + "\\end{array}$$\n", + "\n", + "The solution to this problem is available [here](lab3_files/quizzes/gaus/gaus.html)\n", + "\n", + "After solving the system by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Decomposition \n", + "[lab3:sec:decomp]: <#Decomposition> \"Decomposition\"\n", + "\n", + "Any invertible, square matrix, $A$, can be factored out into a product\n", + "of a lower and an upper triangular matrices, $L$ and $U$, respectively,\n", + "so that $A$ = $LU$. The $LU$- *decomposition* is closely linked to the\n", + "process of Gaussian elimination.\n", + "\n", + "#### Example One\n", + "\n", + "> Using the matrix from the system of the previous section (Sec\n", + "[Gaussian Elimination](#Gaussian-Elimination)), we have:\n", + "\n", + "> $$A = \\left[ \\begin{array}{rrr} 2 & 8 & -5 \\\\\n", + " 3 & -6 & 4 \\\\\n", + " 1 & 2 & -1 \\end{array} \\right]$$\n", + "\n", + "> The upper triangular matrix $U$ can easily be calculated by applying\n", + "Gaussian elimination to $A$:\n", + "\n", + "> $$\\begin{array}{cl}\n", + " \\begin{array}{c} E_{2j}-\\frac{3}{2}E_{1j} \\\\\n", + " E_{3j}-\\frac{1}{2}E_{1j} \\\\\n", + " \\rightarrow \\end{array}\n", + " & \n", + " \\left[ \\begin{array}{rrr} 2 & 8 & -5 \\\\\n", + " 0 & -18 & \\frac{23}{2} \\\\\n", + " 0 & -2 & \\frac{3}{2} \n", + " \\end{array} \\right] \\\\ \\\\\n", + " \\begin{array}{c} E_{3j}-\\frac{1}{9}E_{2j} \\\\\n", + " \\rightarrow \\end{array}\n", + " &\n", + " \\left[ \\begin{array}{rrr} 2 & 8 & -5 \\\\\n", + " 0 & -18 & \\frac{23}{2} \\\\\n", + " 0 & 0 & \\frac{2}{9}\n", + " \\end{array} \\right] = U\n", + "\\end{array}$$\n", + "\n", + "> Note that there is no row exchange.\n", + "\n", + "> The lower triangular matrix $L$ is calculated with the steps which lead\n", + "us from the original matrix to the upper triangular matrix, i.e.:\n", + "\n", + "> $$\\begin{array}{c} E_{2j}-\\frac{3}{2}E_{1j} \\\\ \\\\\n", + " E_{3j}-\\frac{1}{2}E_{1j} \\\\ \\\\\n", + " E_{3j}-\\frac{1}{9}E_{2j} \n", + "\\end{array}$$\n", + "\n", + "> Note that each step is a multiple $\\ell$ of equation $m$ subtracted from\n", + "equation $n$. Each of these steps, in fact, can be represented by an\n", + "elementary matrix. $U$ can be obtained by multiplying $A$ by this\n", + "sequence of elementary matrices.\n", + "\n", + "> Each of the elementary matrices is composed of an identity matrix the\n", + "size of $A$ with $-\\ell$ in the ($m,n$) entry. So the steps become:\n", + "\n", + "> $$\\begin{array}{ccc} \n", + " E_{2j}-\\frac{3}{2}E_{1j} & \n", + " \\rightarrow &\n", + " \\left[ \\begin{array}{rcc} 1 & 0 & 0 \\\\\n", + " -\\frac{3}{2} & 1 & 0 \\\\\n", + " 0 & 0 & 1\n", + " \\end{array} \\right] = R \\\\ \\\\\n", + " E_{3j}-\\frac{1}{2}E_{1j} &\n", + " \\rightarrow &\n", + " \\left[ \\begin{array}{rcc} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " -\\frac{1}{2} & 0 & 1 \n", + " \\end{array} \\right] = S \\\\ \\\\\n", + " E_{3j}-\\frac{1}{9}E_{2j} &\n", + " \\rightarrow &\n", + " \\left[ \\begin{array}{crc} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " 0 & -\\frac{1}{9} & 1\n", + " \\end{array} \\right] = T\n", + "\\end{array}$$\n", + "\n", + "> and $TSRA$ = $U$. 
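A minimal numpy sketch of this product (assuming $A$, $R$, $S$ and $T$ are entered exactly as written above):

```python
import numpy as np

A = np.array([[2., 8., -5.], [3., -6., 4.], [1., 2., -1.]])
R = np.array([[1., 0., 0.], [-3/2, 1., 0.], [0., 0., 1.]])
S = np.array([[1., 0., 0.], [0., 1., 0.], [-1/2, 0., 1.]])
T = np.array([[1., 0., 0.], [0., 1., 0.], [0., -1/9, 1.]])

U = T @ S @ R @ A                 # should be the upper triangular matrix found above
print(U)
# undoing the row operations recovers A
print(np.linalg.inv(R) @ np.linalg.inv(S) @ np.linalg.inv(T) @ U)
```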
Check this with Python.\n", + "\n", + "> To get back from $U$ to $A$, the inverse of $R$, $S$ and $T$ are\n", + "multiplied onto $U$:\n", + "\n", + "> $$\\begin{array}{rcl}\n", + " T^{-1}TSRA & = & T^{-1}U \\\\\n", + " S^{-1}SRA & = & S^{-1}T^{-1}U \\\\ \n", + " R^{-1}RA & = & R^{-1}S^{-1}T^{-1}U \\\\\n", + "\\end{array}$$\n", + "\n", + "> So $A$ = $R^{-1}S^{-1}T^{-1}U$. Recall that $A$ = $LU$. If\n", + "$R^{-1}S^{-1}T^{-1}$ is a lower triangular matrix, then it is $L$.\n", + "\n", + "> The inverse of the elementary matrix is the same matrix with only one\n", + "difference, and that is, $\\ell$ is in the $a_{mn}$ entry instead of\n", + "$-\\ell$. So:\n", + "\n", + "> $$\\begin{array}{rcl}\n", + " R^{-1} & = & \\left[ \\begin{array}{rrr} 1 & 0 & 0 \\\\\n", + " \\frac{3}{2} & 1 & 0 \\\\\n", + " 0 & 0 & 1\n", + " \\end{array} \\right] \\\\ \\\\\n", + " S^{-1} & = & \\left[ \\begin{array}{rrr} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " \\frac{1}{2} & 0 & 1\n", + " \\end{array} \\right] \\\\ \\\\\n", + " T^{-1} & = & \\left[ \\begin{array}{rrr} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " 0 & \\frac{1}{9} & 1\n", + " \\end{array} \\right] \n", + "\\end{array}$$\n", + "\n", + "> Multiplying $R^{-1}S^{-1}T^{-1}$ together, we have:\n", + "\n", + "> $$\\begin{array}{rcl} R^{-1}S^{-1}T^{-1} \n", + " & = &\n", + " \\left[ \\begin{array}{ccc} 1 & 0 & 0 \\\\\n", + " \\frac{3}{2} & 1 & 0 \\\\\n", + " \\frac{1}{2} & \\frac{1}{9} & 1\n", + " \\end{array} \\right] = L\n", + "\\end{array}$$\n", + "\n", + "> So $A$ is factored into two matrices $L$ and $U$, where\n", + "\n", + "> $$\\begin{array}{ccc}\n", + " L = \\left[ \\begin{array}{ccc} 1 & 0 & 0 \\\\\n", + " \\frac{3}{2} & 1 & 0 \\\\\n", + " \\frac{1}{2} & \\frac{1}{9} & 1\n", + " \\end{array} \\right]\n", + "& \\mbox{ and } &\n", + " U = \\left[ \\begin{array}{ccc} 2 & 8 & -5 \\\\\n", + " 0 & -18 & \\frac{23}{2} \\\\\n", + " 0 & 0 & \\frac{2}{9}\n", + " \\end{array} \\right]\n", + "\\end{array}$$\n", + "\n", + "> Use Python to confirm that $LU$ = $A$.\n", + "\n", + "The reason decomposition is introduced here is not because of Gaussian\n", + "elimination $-$ one seldom explicitly computes the $LU$ decomposition of\n", + "a matrix. However, the idea of factoring a matrix is important for other\n", + "direct methods of solving linear systems (of which Gaussian elimination\n", + "is only one) and for methods for finding eigenvalues ([Characteristic Equation](#Characteristic-Equation)).\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Round-off Error\n", + "\n", + "[lab3:sec:round-off-error]: <#Round-off-Error> \"Round-off-Error\"\n", + "\n", + "When a number is represented in its floating point form, i.e. an\n", + "approximation of the number, the resulting error is the *round-off\n", + "error*. The floating-point representation of numbers  and the consequent\n", + "effects of round-off error were discussed already in Lab \\#2.\n", + "\n", + "When round-off errors are present in the matrix $A$ or the right hand\n", + "side $b$, the linear system $Ax = b$ may or may not give a solution that\n", + "is close to the real answer. When a matrix $A$ “magnifies” the effects\n", + "of round-off errors in this way, we say that $A$ is an ill-conditioned\n", + "matrix.\n", + "\n", + "#### Example Two\n", + "[lab3:eg:round]: <#Example-Two> \"Example Two\"\n", + "\n", + "> Let’s see an example:\n", + "\n", + "> Suppose\n", + "\n", + "> $$A = \\left[ \\begin{array}{cc} 1 & 1 \\\\ 1 & 1.0001 \n", + " \\end{array} \\right]$$\n", + "\n", + "> and consider the system:\n", + "\n", + ">
\n", + "(Ill-conditioned version one):\n", + " $$\\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 1 & 1.0001 \\end{array} \n", + "&\n", + " \\left| \\begin{array}{c} 2 \\\\ 2 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "$$\n", + "
\n", + "\n", + "> The condition number, $K$, of a matrix, defined in [Condition Number](#Condition-Number), is a measure of how well-conditioned a matrix is. If\n", + "$K$ is large, then the matrix is ill-conditioned, and Gaussian\n", + "elimination will magnify the round-off errors. The condition number of\n", + "$A$ is 40002. You can use Python to check this number.\n", + "\n", + "> The solution to this is $x_1$ = 2 and $x_2$ = 0. However, if the system\n", + "is altered a little as follows:\n", + "\n", + ">
\n", + "(Ill-conditioned version two):\n", + " $$\\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 1 & 1.0001 \\end{array}\n", + "&\n", + " \\left| \\begin{array}{c} 2 \\\\ 2.0001 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "$$\n", + "
\n", + "\n", + "> Then, the solution becomes $x_1$ = 1 and $x_2$ = 1. A change in the\n", + "fifth significant digit was amplified to the point where the solution is\n", + "not even accurate to the first significant digit. $A$ is an\n", + "ill-conditioned matrix. You can set up the systems [Ill-conditioned version one](#lab3:eq:illbefore)\n", + "and [Ill-conditioned version two](#lab3:eq:illafter) in Python, and check the answers yourself." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Three\n", + "\n", + "[lab3:eg:inacc]: <#Example-Three> \"Example Three\"\n", + "\n", + "> Use Python to try the following example. First solve the\n", + "system $A^{\\prime}x = b$; then solve $A^{\\prime}x = b2$. Find the\n", + "condition number of $A^{\\prime}$.\n", + "\n", + ">$$\\begin{array}{ccccc}\n", + "A^{\\prime} = \\left[ \\begin{array}{cc} 0.0001 & 1 \\\\ 1 & 1 \n", + " \\end{array} \\right]\n", + "& , & \n", + "b = \\left[ \\begin{array}{c} 1 \\\\ 2 \\end{array} \\right]\n", + "& \\mbox{and} &\n", + "b2 = \\left[ \\begin{array}{c} 1 \\\\ 2.0001 \\end{array} \\right] .\n", + "\\end{array}$$\n", + "\n", + "> You will find that the solution for $A^{\\prime}x = b$ is $x_1$ = 1.0001\n", + "and $x_2$ = 0.9999, and the solution for $A^{\\prime}x = b2$ is $x_1$ =\n", + "1.0002 and $x_2$ = 0.9999 . So a change in $b$ did not result in a large\n", + "change in the solution. Therefore, $A^{\\prime}$ is a well-conditioned\n", + "matrix. In fact, the condition number is approximately 2.6.\n", + "\n", + "> Nevertheless, even a well conditioned system like $A^{\\prime}x =\n", + "b$ leads to inaccuracy if the wrong solution method is used, that is, an\n", + "algorithm which is sensitive to round-off error. If you use Gaussian\n", + "elimination to solve this system, you might be misled that $A^{\\prime}$\n", + "is ill-conditioned. Using Gaussian elimination to solve $A^{\\prime}x=b$:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& \\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 0.0001 & 1 \\\\ 1 & 1 \\end{array} \n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 1 \\\\ 2 \\end{array} \\right]\n", + "\\end{array} \\right. \\\\ \\\\\n", + "\\begin{array}{c} 10,000E_{1j} \\\\ \\rightarrow \\end{array} &\n", + "\\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 10,000 \\\\ 1 & 1 \\end{array} \n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 10,000 \\\\ 2 \\end{array} \\right]\n", + "\\end{array} \\right. \\\\ \\\\\n", + "\\begin{array}{c} E_{2j}-E_{1j} \\\\ \\rightarrow \\end{array} &\n", + "\\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 10,000 \\\\ 0 & -9,999 \\end{array} \n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 10,000 \\\\ -9,998 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> At this point, if you continue to solve the system as is, you will get\n", + "the expected answers. You can check this with Python. However, if you\n", + "make changes to the matrix here by rounding -9,999 and -9,998 to -10,000, \n", + "the final answers will be different:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& \\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 10,000 \\\\ 0 & -10,000 \\end{array}\n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 10,000 \\\\ -10,000 \\end{array} \\right]\n", + "\\end{array} \\right. 
\n", + "\\end{array}$$\n", + "\n", + "> The result is $x_1$ = 0 and $x_2$ = 1, which is quite different from the\n", + "correct answers. So Gaussian elimination might mislead you to think that\n", + "a matrix is ill-conditioned by giving an inaccurate solution to the\n", + "system. In fact, the problem is that Gaussian elimination on its own is\n", + "a method that is unstable in the presence of round-off error, even for\n", + "well-conditioned matrices. Can this be fixed?\n", + "\n", + "> You can try the example with Python.\n", + "\n", + "
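A minimal numpy sketch of this experiment (using $A^{\prime}$, $b$ and $b2$ exactly as given in Example Three):

```python
import numpy as np

Aprime = np.array([[0.0001, 1.], [1., 1.]])
b  = np.array([1., 2.])
b2 = np.array([1., 2.0001])

print(np.linalg.cond(Aprime))        # roughly 2.6, so A' is well conditioned
print(np.linalg.solve(Aprime, b))    # close to [1.0001, 0.9999]
print(np.linalg.solve(Aprime, b2))   # close to [1.0002, 0.9999]
```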
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Partial Pivoting\n", + "\n", + "There are a number of ways to avoid inaccuracy, one of which is applying\n", + "partial pivoting to the Gaussian elimination.\n", + "\n", + "\n", + "Consider the example from the previous section. In order to avoid\n", + "multiplying by 10,000, another pivot is desired in place of 0.0001. The\n", + "goal is to examine all the entries in the first column, find the entry\n", + "that has the largest value, and exchange the first row with the row that\n", + "contains this element. So this entry becomes the pivot. This is partial\n", + "pivoting. Keep in mind that switching rows is an elementary operation\n", + "and has no effect on the solution.\n", + "\n", + "\n", + "In the original Gaussian elimination algorithm, row exchange is done\n", + "only if the pivot is zero. In partial pivoting, row exchange is done so\n", + "that the largest value in a certain column is the pivot. This helps to\n", + "reduce the amplification of round-off error." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Four\n", + "\n", + "> In the matrix $A^{\\prime}$ from [Example Two](#Example-Two), 0.0001 from\n", + "column one is the first pivot. Looking at this column, the entry, 1, in\n", + "the second row is the only other choice in this column. Obviously, 1 is\n", + "greater than 0.0001. So the two rows are exchanged.\n", + "\n", + "> $$\\begin{array}{cl}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 0.0001 & 1 \\\\ 1 & 1 \\end{array} \n", + " & \\left|\n", + " \\begin{array}{c} 1 \\\\ 2 \\end{array} \\right]\n", + " \\end{array} \\right. \\\\ \\\\\n", + " \\begin{array}{c} E_{1j} \\leftrightarrow E_{2j} \\\\ \\rightarrow\n", + " \\end{array}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 0.0001 & 1 \\end{array} \n", + " & \\left|\n", + " \\begin{array}{c} 2 \\\\ 1 \\end{array} \\right] \n", + " \\end{array} \\right. \\\\ \\\\\n", + " \\begin{array}{c} E_{2j}-0.0001E_{1j} \\\\ \\rightarrow \\end{array} \n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 0 & 0.9999 \\end{array} \n", + " & \\left|\n", + " \\begin{array}{c} 2 \\\\ 0.9998 \\end{array} \\right]\n", + " \\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> The same entries are rounded off:\n", + "\n", + "> $$\\begin{array}{cl}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 0 & 1 \\end{array}\n", + " &\n", + " \\left|\n", + " \\begin{array}{c} 2 \\\\ 1 \\end{array} \\right]\n", + " \\end{array} \\right. \\\\ \\\\\n", + " \\begin{array}{c} E_{1j}-E_{2j} \\\\ \\rightarrow \\end{array}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 0 \\\\ 0 & 1 \\end{array}\n", + " &\n", + " \\left|\n", + " \\begin{array}{c} 1 \\\\ 1 \\end{array} \\right]\n", + " \\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> So the solution is $x_1$ = 1 and $x_2$ = 1, and this is a close\n", + "approximation to the original solution, $x_1$ = 1.0001 and $x_2$ =\n", + "0.9999.\n", + "\n", + "\n", + "> You can try the example with Python.\n", + "\n", + "
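As an aside, library LU routines apply partial pivoting automatically. A small sketch of this (it assumes SciPy is installed, which is not otherwise required for this lab):

```python
import numpy as np
from scipy.linalg import lu

Aprime = np.array([[0.0001, 1.], [1., 1.]])
P, L, U = lu(Aprime)    # LU factorization with partial pivoting
print(P)                # permutation matrix: the two rows have been exchanged
print(L)
print(U)
```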
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n", + "\n", + "\n", + "\n", + "Note: This section has described row pivoting. The same process can be\n", + "applied to columns, with the resulting procedure being called column\n", + "pivoting." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Full Pivoting\n", + "\n", + "Another way to get around inaccuracy ([Example Three](#Example-Three)) is to\n", + "use Gaussian elimination with full pivoting. Sometimes, even partial\n", + "pivoting can lead to problems. With full pivoting, in addition to row\n", + "exchange, columns will be exchanged as well. The purpose is to use the\n", + "largest entries in the whole matrix as the pivots.\n", + "\n", + "#### Example Five\n", + "\n", + "> Given the following:\n", + "\n", + "> $$ \\begin{array}{cccc}\n", + "& A^{''} = \\left[ \\begin{array}{ccc} 0.0001 & 0.0001 & 0.5 \\\\\n", + " 0.5 & 1 & 1 \\\\\n", + " 0.0001 & 1 & 0.0001 \n", + " \\end{array} \n", + " \\right] \n", + "& \\ \\ \\ &\n", + "b^{'} = \\left[ \\begin{array}{c} 1 \\\\ 0 \\\\ 1 \n", + " \\end{array} \n", + " \\right] \\\\ \\\\\n", + "\\rightarrow &\n", + "\\left[ \\begin{array}{cc} \n", + " \\begin{array}{ccc} 0.0001 & 0.0001 & 0.5 \\\\\n", + " 0.5 & 1 & 1 \\\\\n", + " 0.0001 & 1 & 0.0001\n", + " \\end{array}\n", + " & \\left| \\begin{array}{c} 1 \\\\ 0 \\\\ 1 \n", + " \\end{array} \n", + " \\right]\n", + " \\end{array} \n", + "\\right. & & \n", + "\\end{array}$$\n", + "\n", + "> Use Python to find the condition number of $A^{''}$ and the solution to\n", + "this system.\n", + "\n", + "> Looking at the system, if no rows are exchanged, then taking 0.0001 as\n", + "the pivot will magnify any errors made in the elements in column 1 by a\n", + "factor of 10,000. With partial pivoting, the first two rows can be\n", + "exchanged (as below):\n", + "\n", + "> $$\\begin{array}{cl}\n", + "\\begin{array}{c} \\\\ E_{1j} \\leftrightarrow E_{2j} \\\\ \\rightarrow\n", + "\\end{array} \n", + "& \\begin{array}{ccccc}\n", + " & x_1 & x_2 & x_3 & \\\\\n", + " \\left[ \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right. \n", + " & \\begin{array}{c} 0.5 \\\\ 0.0001 \\\\ 0.0001 \\end{array} \n", + " & \\begin{array}{c} 1 \\\\ 0.0001 \\\\ 1 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 0.5 \\\\ 0.0001 \\end{array} \n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + " \\end{array}\n", + "\\end{array} $$ \n", + "\n", + "\n", + "> and the magnification by 10,000 is avoided. Now the matrix will be\n", + "expanded by a factor of 2. However, if the entry 1 is used as the pivot,\n", + "then the matrix does not need to be expanded by 2 either. The only way\n", + "to put 1 in the position of the first pivot is to perform a column\n", + "exchange between columns one and two, or between columns one and three.\n", + "This is full pivoting.\n", + "\n", + "> Note that when columns are exchanged, the variables represented by the\n", + "columns are switched as well, i.e. when columns one and two are\n", + "exchanged, the new column one represents $x_2$ and the new column two\n", + "represents $x_1$. 
So, we must keep track of the columns when performing\n", + "column pivoting.\n", + "\n", + ">So the columns one and two are exchanged, and the matrix becomes:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "\\begin{array}{c} \\\\ E_{i1} \\leftrightarrow E_{i2} \\\\ \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_1 & x_3 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0.0001 \\\\ 1 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.0001 \\\\ 0.0001 \\end{array} \n", + " & \\begin{array}{c} 1 \\\\ 0.5 \\\\ 0.0001 \\end{array} \n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\\n", + "\\begin{array}{c} \\\\ E_{2j}-0.0001E_{1j} \\\\ E_{3j}-E_{1j} \\\\ \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_1 & x_3 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.00005 \\\\ -0.4999 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 0.4999 \\\\ -0.9999 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array}\n", + "\\end{array}$$\n", + "\n", + "> If we assume rounding is performed, then the entries are rounded off:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_1 & x_3 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.00005 \\\\ -0.5 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 0.5 \\\\ -1 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\\n", + "\\begin{array}{c} \\\\ -E_{3j} \\\\\n", + " E_{2j} \\leftrightarrow E_{3j} \\\\ \n", + " E_{i2} \\leftrightarrow E_{i3} \\\\ \n", + " \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 1 \\\\ 0.5 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.5 \\\\ 0.00005 \\end{array}\n", + " & \\left| \\begin{array}{r} 0 \\\\ -1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\\n", + "\\begin{array}{c} E_{1j}-E_{2j} \\\\ \n", + " E_{3j}-0.5 E_{2j} \\\\\n", + " \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 0.5 \\\\ -0.24995 \\end{array}\n", + " & \\left| \\begin{array}{r} 1 \\\\ -1 \\\\ 1.5 \\end{array} \\right]\n", + "\\end{array}\n", + "\\end{array}$$\n", + "\n", + "> Rounding off the matrix again:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "\\begin{array}{c} \\rightarrow \\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. 
\\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 0.5 \\\\ -0.25 \\end{array}\n", + " & \\left| \\begin{array}{r} 1 \\\\ -1 \\\\ 1.5 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\ \n", + "\\begin{array}{c} E_{2j}-2E_{3j} \\\\ \n", + " 4E_{3j} \\\\\n", + " \\rightarrow \\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 0 \\\\ 1 \\end{array}\n", + " & \\left| \\begin{array}{r} 1 \\\\ 2 \\\\ -6 \\end{array} \\right]\n", + "\\end{array}\n", + "\\end{array}$$\n", + "\n", + "> So reading from the matrix, $x_1$ = -6, $x_2$ = 1 and $x_3$ = 2. Compare\n", + "this with the answer you get with Python, which is $x_1\n", + "\\approx$ -6.0028, $x_2 \\approx$ 1.0004 and $x_3 \\approx$ 2.0010 .\n", + "\n", + "> Using full pivoting with Gaussian elimination, expansion of the error by\n", + "large factors is avoided. In addition, the approximated solution, using\n", + "rounding (which is analogous to the use of floating point\n", + "approximations), is close to the correct answer.\n", + "\n", + "> You can try the example with Python.\n", + "\n", + "
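A minimal numpy sketch for comparing with the hand computation (using $A^{''}$ and $b^{'}$ exactly as given in Example Five):

```python
import numpy as np

App = np.array([[0.0001, 0.0001, 0.5],
                [0.5,    1.,     1. ],
                [0.0001, 1.,     0.0001]])
bprime = np.array([1., 0., 1.])

print(np.linalg.cond(App))           # condition number of A''
print(np.linalg.solve(App, bprime))  # close to [-6.0028, 1.0004, 2.0010]
```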
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Summary\n", + "\n", + "In a system $Ax = b$, if round-off errors in $A$ or $b$ affect the\n", + "system such that it may not give a solution that is close to the real\n", + "answer, then $A$ is ill-conditioned, and it has a very large condition\n", + "number.\n", + "\n", + "Sometimes, due to a poor algorithm, such as Gaussian elimination without\n", + "pivoting, a matrix may appear to be ill-conditioned, even though it is\n", + "not. By applying partial pivoting, this problem is reduced, but partial\n", + "pivoting will not always eliminate the effects of round-off error. An\n", + "even better way is to apply full pivoting. Of course, the drawback of\n", + "this method is that the computation is more expensive than plain\n", + "Gaussian elimination.\n", + "\n", + "An important point to remember is that partial and full pivoting\n", + "minimize the effects of round-off error for well-conditioned matrices.\n", + "If a matrix is ill-conditioned, these methods will not provide a\n", + "solution that approximates the real answer. As an exercise, you can try\n", + "to apply full pivoting to the ill-conditioned matrix $A$ seen at the\n", + "beginning of this section (Example Two](#Example-Two)). You will find that\n", + "the solution is still inaccurate." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Matrix Inversion\n", + "\n", + "Given a square matrix $A$. If there is a matrix that will cancel $A$,\n", + "then it is the *inverse* of $A$. In other words, the matrix, multiplied\n", + "by its inverse, will give the identity matrix $I$.\n", + "\n", + "Try to find the inverse for the following matrices:\n", + "\n", + "1. $$A = \\left[ \\begin{array}{rrr} 1 & -2 & 1 \\\\\n", + " 3 & 1 & -1 \\\\\n", + " -1 & 9 & -5 \\end{array} \\right]$$\n", + "\n", + "2. $$B = \\left[ \\begin{array}{rrr} 5 & -2 & 4 \\\\\n", + " -3 & 1 & -5 \\\\\n", + " 2 & -1 & 3 \\end{array} \\right]$$\n", + "\n", + "The solutions to these exercises are available [here](lab3_files/quizzes/inverse/inverse.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Determinant\n", + "\n", + "Every square matrix $A$ has a scalar associated with it. This number is\n", + "the determinant of the matrix, represented as $\\det(A)$. Its absolute\n", + "value is the volume of the parallelogram that can be generated from the\n", + "rows of $A$.\n", + "\n", + "A few special properties to remember about determinants:\n", + "\n", + "1. $A$ must be a square matrix.\n", + "\n", + "2. If $A$ is singular, $\\det(A) =0$, i.e. $A$ does not have an inverse.\n", + "\n", + "3. The determinant of a $2 \\times 2$ matrix is just the difference\n", + " between the products of the diagonals, i.e.\n", + "\n", + " $$\\begin{array}{ccc}\n", + " \\left[ \\begin{array}{cc} a & b \\\\ c & d \\end{array} \\right] &\n", + " = &\n", + " \\begin{array}{ccc} ad & - & bc \\end{array}\n", + " \\end{array}$$\n", + "\n", + "4. For any diagonal, upper triangular or lower triangular matrix $A$,\n", + " $\\det(A)$ is the product of all the entries on the diagonal,\n", + " \n", + "#### Example Six\n", + "\n", + "> $$\\begin{array}{cl}\n", + " & \\det \\left[\n", + " \\begin{array}{rrr}\n", + " 2 & 5 & -8 \\\\\n", + " 0 & 1 & 7 \\\\\n", + " 0 & 0 & -4 \n", + " \\end{array} \\right] \\\\ \\\\\n", + " = & 2 \\times 1 \\times -4 \\\\\n", + " = & -8\n", + " \\end{array}$$ \n", + " \n", + "> Graphically, the parallelogram looks as follows:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The basic procedure in finding a determinant of a matrix larger than 2\n", + "$\\times$ 2 is to calculate the product of the non-zero entries in each\n", + "of the *permutations* of the matrix, and then add them together. A\n", + "permutation of a matrix $A$ is a matrix of the same size with one\n", + "element from each row and column of $A$. The sign of each permutation is\n", + "$+$ or $-$ depending on whether the permutation is odd or even. 
This is\n", + "illustrated in the following example …" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Seven\n", + "\n", + "> $$I = \\left[ \\begin{array}{ccc} 1 & 2 & 3 \\\\\n", + " 4 & 5 & 6 \\\\\n", + " 7 & 8 & 9 \\end{array} \\right]$$\n", + "\n", + "> will have the following permutations:\n", + "\n", + "> $$\\begin{array}{cccccc}\n", + "+\\left[ \\begin{array}{ccc} 1 & & \\\\\n", + " & 5 & \\\\\n", + " & & 9 \\end{array} \\right]\n", + "& , &\n", + "+\\left[ \\begin{array}{ccc} & 2 & \\\\\n", + " & & 6 \\\\\n", + " 7 & & \\end{array} \\right]\n", + "& , &\n", + "+\\left[ \\begin{array}{ccc} & & 3 \\\\\n", + " 4 & & \\\\\n", + " & 8 & \\end{array} \\right]\n", + "& , \\\\ \\\\\n", + "-\\left[ \\begin{array}{ccc} 1 & & \\\\\n", + " & & 6 \\\\\n", + " & 8 & \\end{array} \\right]\n", + "& , &\n", + "-\\left[ \\begin{array}{ccc} & 2 & \\\\\n", + " 4 & & \\\\\n", + " & & 9 \\end{array} \\right]\n", + "& , &\n", + "-\\left[ \\begin{array}{ccc} & & 3 \\\\\n", + " & 5 & \\\\\n", + " 7 & & \\end{array} \\right]\n", + "& .\n", + "\\end{array}$$\n", + "\n", + "> The determinant of the above matrix is then given by \n", + "$$\\begin{array}{ccl}\n", + " det(A) & = & +1\\cdot 5\\cdot 9 + 2 \\cdot 6 \\cdot 7 + 3 \\cdot 4 \\cdot 8 - 1\n", + "\\cdot 6 \\cdot 8 - 2 \\cdot 4 \\cdot 9 - 3 \\cdot 5 \\cdot 7 \\\\\n", + "& = & 0\\end{array}$$\n", + "\n", + "For each of the following matrices, determine whether or not it has an\n", + "inverse:\n", + "\n", + "1. $$A = \\left[ \\begin{array}{rrr} 3 & -2 & 1 \\\\\n", + " 1 & 5 & -1 \\\\\n", + " -1 & 0 & 0 \\end{array} \\right]$$\n", + "\n", + "2. $$B = \\left[ \\begin{array}{rrr} 4 & -6 & 1 \\\\\n", + " 1 & -3 & 1 \\\\\n", + " 2 & 0 & -1 \\end{array} \\right]$$\n", + "\n", + "3. Try to solve this by yourself first, and use Python to check your\n", + " answer:\n", + "\n", + " $$C = \\left[ \\begin{array}{rrrr} 4 & -2 & -7 & 6 \\\\\n", + " -3 & 0 & 1 & 0 \\\\\n", + " -1 & -1 & 5 & -1 \\\\\n", + " 0 & 1 & -5 & 3\n", + " \\end{array} \\right]$$\n", + "\n", + "The solutions to these exercises are available [here](lab3_files/quizzes/det/det.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
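After working these by hand, a minimal numpy sketch for checking the determinants (with the matrices $A$, $B$ and $C$ entered as written above):

```python
import numpy as np

A = np.array([[3, -2, 1], [1, 5, -1], [-1, 0, 0]])
B = np.array([[4, -6, 1], [1, -3, 1], [2, 0, -1]])
C = np.array([[4, -2, -7, 6], [-3, 0, 1, 0], [-1, -1, 5, -1], [0, 1, -5, 3]])

# a matrix has an inverse exactly when its determinant is non-zero
for name, M in (("A", A), ("B", B), ("C", C)):
    d = np.linalg.det(M)
    print(name, d, "has an inverse" if abs(d) > 1e-12 else "is singular")
```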
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Computational cost of Gaussian elimination\n", + "\n", + "[lab3:sec:cost]: <#Computational-cost-of-Gaussian-elimination> \"Computational Cost\"\n", + "\n", + "Although Gaussian elimination is a basic and relatively simple technique\n", + "to find the solution of a linear system, it is a costly algorithm. Here\n", + "is an operation count of this method:\n", + "\n", + "For a $n \\times n$ matrix, there are two kinds of operations to\n", + "consider:\n", + "\n", + "1. division ( *div*) - to find the multiplier from a\n", + " chosen pivot\n", + "\n", + "2. multiplication-subtraction ( *mult/sub* ) - to\n", + " calculate new entries for the matrix\n", + "\n", + "Note that an addition or subtraction operation has negligible cost in\n", + "relation to a multiplication or division operation. So the subtraction\n", + "in this case can be treated as one with the multiplication operation.\n", + "The first pivot is selected from the first row in the matrix. For each\n", + "of the remaining rows, one div and $(n-1)$ mult/sub operations are used\n", + "to find the new entries. So there are $n$ operations performed on each\n", + "row. With $(n-1)$ rows, there are a total of $n(n-1) = n^{2}-n$\n", + "operations associated with this pivot.\n", + "\n", + "Since the subtraction operation has negligible cost in relation to the\n", + "multiplication operation, there are $(n-1)$ operations instead of\n", + "$2(n-1)$ operations.\n", + "For the second pivot, which is selected from the second row of the\n", + "matrix, similar analysis is applied. With the remaining $(n-1)\n", + "\\times (n-1)$ matrix, each row has one div and $(n-2)$ mult/sub\n", + "operations. For the whole process, there are a total of $(n-1)(n-2) =\n", + "(n-1)^{2} - (n-1)$ operations.\\\n", + "\n", + "For the rest of the pivots, the number of operations for a remaining\n", + "$k \\times k$ matrix is $k^{2} - k$.\\\n", + "\n", + "The following is obtained when all the operations are added up:\n", + "\n", + "$$\\begin{array}{l} \n", + "(1^{2}+\\ldots +n^{2}) - (1+\\ldots +n) \\\\ \\\\\n", + "= \\frac{n(n+1)(2n+1)}{6} - \\frac{n(n+1)}{2} \\\\ \\\\\n", + "= \\frac{n^{3}-n}{3} \\\\ \\\\\n", + "\\approx O(n^{3}) \n", + "\\end{array}$$\n", + "\n", + "As one can see, the Gaussian elimination is an $O(n^{3})$ algorithm. For\n", + "large matrices, this can be prohibitively expensive. There are other\n", + "methods which are more efficient, e.g. see [Iterative Methods](#Iterative-Methods). " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "#### Problem One\n", + "\n", + "[lab3:sec2:carbon]:<#Problem-One>\n", + "\n", + "Consider a very simple three box model of the movement of a pollutant in\n", + "the atmosphere, fresh-water and ocean. The mass of the atmosphere is MA\n", + "(5600 x 10$^{12}$ tonnes), the mass of the fresh-water is MF (360 x\n", + "10$^{12}$tonnes) and the mass of the upper layers of the ocean is MO\n", + "(50,000 x 10$^{12}$ tonnes). The amount of pollutant in the atmosphere\n", + "is A, the amount in the fresh water is F and the amount in the ocean is\n", + "O. 
So A, F, and O have units of tonnes.\n", + "\n", + "The pollutant is going directly into the atmosphere at a rate P1 = 1000\n", + "tonnes/year and into the fresh-water system at a rate P2 = 2000\n", + "tonnes/year. The pollutant diffuses between the atmosphere and ocean at\n", + "a rate depending linearly on the difference in concentration with a\n", + "diffusion constant L1 = 200 tonnes/year. The diffusion between the\n", + "fresh-water system and the atmosphere is faster as the fresh water is\n", + "shallower, L2 = 500 tonnes/year. The fresh-water system empties into the\n", + "ocean at the rate of Q = 36 x 10$^{12}$ tonnes/year. Lastly the\n", + "pollutant decays (like radioactivity) at a rate L3 = 0.05 /year.\n", + "\n", + "See the graphical presentation of the cycle described above in\n", + "Figure [Box Model](#Figure-Box-Model) Schematic for Problem 1.\n", + "Set up a notebook to answer this question. When you have finished, you can print it to pdf.\n", + "\n", + "- a\\) Consider the steady state. There is no change in A, O, or F. Write\n", + " down the three linear governing equations in a text cell. Write the equations as an\n", + " augmented matrix in a text cell. Then use a computational cell to find the solution.\n", + "\n", + "- b\\) Show mathematically that there is no solution to this problem with L3\n", + " = 0. Explain in a text file why, physically, is there no solution.\n", + "\n", + "- c\\) Show mathematically that there is an infinite number of solutions if\n", + " L3 = 0 and P1 = P2 = 0. Explain in a text file why this is true from a physical argument.\n", + "\n", + "- d\\) For part c) above, explain in a text cell what needs to be specified in order to determine a\n", + " single physical solution. Explain in a text cell how would you put this in the matrix equation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Box Model: Schematic for Problem One.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Eigenvalue Problems\n", + "\n", + "This section is a review of eigenvalues and eigenvectors.\n", + "\n", + "### Characteristic Equation\n", + "[lab3:sec:eigval]: <#Characteristic-Equation>\n", + "\n", + "The basic equation for eigenvalue problems is the characteristic\n", + "equation, which is:\n", + "\n", + "$$\\det( A - \\lambda I ) = 0$$\n", + "\n", + "where $A$ is a square matrix, I is an identity the same size as $A$, and\n", + "$\\lambda$ is an eigenvalue of the matrix $A$.\n", + "\n", + "In order for a number to be an eigenvalue of a matrix, it must satisfy\n", + "the characteristic equation, for example:\n", + "\n", + "#### Example Eight\n", + "\n", + "> Given\n", + "\n", + "> $$A = \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right]$$\n", + "\n", + "> To find the eigenvalues of $A$, you need to solve the characteristic\n", + "equation for all possible $\\lambda$.\n", + "\n", + "> $$\\begin{array}{ccl}\n", + "0 & = & \\det (A - \\lambda I) \\\\\n", + "& = & \\begin{array}{cccc}\n", + " \\det & \\left( \n", + " \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right] \\right. &\n", + " - &\n", + " \\lambda \\left. \\left[ \\begin{array}{rr} 1 & 0 \\\\ 0 & 1 \\end{array} \\right] \\right)\n", + " \\end{array} \\\\ \\\\\n", + "& = & \\begin{array}{cc}\n", + " \\det & \n", + " \\left[ \\begin{array}{cc} 3-\\lambda & -2 \\\\ -4 & 5-\\lambda\n", + " \\end{array} \\right]\n", + " \\end{array} \\\\ \\\\\n", + "& = & \\begin{array}{ccc} (3-\\lambda)(5-\\lambda) & - & (-2)(-4) \n", + " \\end{array} \\\\ \\\\\n", + "& = & (\\lambda - 1)(\\lambda - 7) \\\\ \\\\\n", + "\\end{array}$$\n", + "\n", + "> So, $\\lambda = 1 \\mbox{ or } 7$, i.e. the eigenvalues of the matrix $A$\n", + "are 1 and 7.\n", + "\n", + "> You can use Python to check this answer.\n", + "\n", + "Find the eigenvalues of the following matrix:\n", + "\n", + "$$B = \\left[\n", + " \\begin{array}{ccc} 3 & 2 & 4 \\\\ 2 & 0 & 2 \\\\ 4 & 2 & 3\n", + " \\end{array} \\right]$$\n", + "\n", + "The solution to this problem is available [here](lab3_files/quizzes/char/char.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Condition Number\n", + "[lab3.sec.cond]: \n", + "\n", + "The eigenvalues of a matrix $A$ can be used to calculate an\n", + "approximation to the condition number $K$ of the matrix, i.e.\n", + "\n", + "$$K \\approx \\left| \\frac{\\lambda_{max}}{\\lambda_{min}} \\right|$$\n", + "\n", + "where $\\lambda_{max}$ and $\\lambda_{min}$ are the maximum and minimum\n", + "eigenvalues of $A$. When $K$ is large, i.e. the $\\lambda_{max}$ and\n", + "$\\lambda_{min}$ are far apart, then $A$ is ill-conditioned.\n", + "\n", + "The mathematical definition of $K$ is\n", + "\n", + "$$K = \\|A\\|\\|A^{-1}\\|$$\n", + "\n", + "where $\\|\\cdot\\|$ represents the norm of a matrix.\n", + "\n", + "There are a few norms which can be chosen for the formula. The default one used\n", + "in Python for finding $K$ is the 2-norm of the matrix. To see how to\n", + "compute the norm of a matrix, see a linear algebra text. Nevertheless,\n", + "the main concern here is the formula, and the fact that this can be very\n", + "expensive to compute. Actually, the computing of $A^{-1}$ is the costly\n", + "operation.\n", + "\n", + "Note: In Python, the results from the function *cond*($A$) can have\n", + "round-off errors.\n", + "\n", + "For the matrices in this section (A from Example 8 and B just below it) for which you have \n", + "found the\n", + "eigenvalues, use the built-in Python function *np.linalg.cond*($A$) to find $K$,\n", + "and compare this result with the $K$ approximated from the eigenvalues.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Eigenvectors\n", + "\n", + "Another way to look at the characteristic equation is using vectors\n", + "instead of determinant. For a number to be an eigenvalue of a matrix, it\n", + "must satisfy this equation:\n", + "\n", + "$$( A - \\lambda I ) x = 0$$\n", + "\n", + "where $A$ is a $n \\times n$ square matrix, $I$ is an identity matrix the\n", + "same size as $A$, $\\lambda$ is an eigenvalue of $A$, and $x$ is a\n", + "non-zero vector associated with the particular eigenvalue that will make\n", + "this equation true. This vector is the eigenvector. The eigenvector is\n", + "not necessarily unique for an eigenvalue. This will be further discussed\n", + "below after the example.\n", + "\n", + "The above equation can be rewritten as:\n", + "\n", + "$$A x = \\lambda x$$\n", + "\n", + "For each eigenvalue of $A$, there is a corresponding eigenvector. Below\n", + "is an example.\n", + "\n", + "#### Example Nine\n", + "\n", + "> Following the example from the previous section:\n", + "\n", + "> $$A = \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right]$$\n", + "\n", + "> The eigenvalues, $\\lambda$, for this matrix are 1 and 7. To find the\n", + "eigenvectors for the eigenvalues, you need to solve the equation:\n", + "\n", + "> $$( A - \\lambda I ) x = 0.$$ \n", + "\n", + "> This is just a linear system $A^{\\prime}x = b$, where\n", + "$A^{\\prime} = ( A - \\lambda I )$, $b = 0$. 
To find the eigenvectors, you\n", + "need to find the solution to this augmented matrix for each $\\lambda$\n", + "respectively,\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& ( A - \\lambda I ) x = 0 \\ \\ \\ \\ \\ \\ \\ \\ {\\rm where} \\ \\ \\ \\ \\ \\ \\lambda = 1 \\\\\n", + "\\; & \\; \\\\\n", + "\\rightarrow & \n", + "\\left( \\begin{array}{ccc} \n", + " \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right] \n", + " & - &\n", + " 1 \\left[ \\begin{array}{cc} 1 & 0 \\\\ 0 & 1 \\end{array} \\right]\n", + "\\end{array} \\right) x = 0 \\\\ \\\\\n", + "\\rightarrow &\n", + "\\left[ \\begin{array}{cc}\n", + " \\begin{array}{rr} 2 & -2 \\\\ -4 & 4 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 0 \\end{array} \\right]\n", + "\\end{array} \\right. \\\\ \\\\ \n", + "\\rightarrow &\n", + "\\left[ \\begin{array}{cc}\n", + " \\begin{array}{rr} 1 & -1 \\\\ 0 & 0 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 0 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> Reading from the matrix,\n", + "\n", + "> $$\\begin{array}{ccccc} x_1 & - & x_2 & = & 0 \\\\\n", + " & & 0 & = & 0 \\end{array}$$\n", + "\n", + "> As mentioned before, the eigenvector is not unique for a given\n", + "eigenvalue. As seen here, the solution to the matrix is a description of\n", + "the direction of the vectors that will satisfy $Ax = \\lambda x$. Letting\n", + "$x_1 = 1$, then $x_2 = 1$. So the vector (1, 1) is an eigenvector for\n", + "the matrix $A$ when $\\lambda = 1$. (So is (-1,-1), (2, 2), etc)\\\n", + "\n", + "> In the same way for $\\lambda = 7$, the solution is\n", + "\n", + "> $$\\begin{array}{ccccc} 2 x_1 & + & x_2 & = & 0 \\\\\n", + " & & 0 & = & 0 \\end{array}$$\n", + "\n", + "> So an eigenvector here is x = (1, -2)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Using Python:\n", + "\n", + "A = np.array([[3, -2], [-4, 5]])\n", + "lamb, x = np.linalg.eig(A)\n", + "print(lamb)\n", + "print (x)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> Matrix $x$ is the same size a $A$ and vector $lamb$ is the size of one dimension of $A$. Each column of\n", + "$x$ is a unit eigenvector of $A$, and $lamb$ values\n", + "are the eigenvalues of $A$. Reading from the result, for $\\lambda$ = 1,\n", + "the corresponding unit eigenvector is (-0.70711, -0.70711). The answer\n", + "from working out the example by hand is (1, 1), which is a multiple of\n", + "the unit eigenvector from Python.\n", + "\n", + "> (The unit eigenvector is found by dividing the eigenvector by its\n", + "magnitude. In this case, $\\mid$(1,1)$\\mid$ = $\\sqrt{1^2 +\n", + " 1^2}$ = $\\sqrt{2}$, and so the unit eigenvector is\n", + "($\\frac{1}{\\sqrt{2}}, \\frac{1}{\\sqrt{2}}$) ).\n", + "\n", + "> Remember that the solution for an eigenvector is not the unique answer;\n", + "it only represents a *direction* for an eigenvector corresponding to a\n", + "given eigenvalue.\n", + "\n", + "What are the eigenvectors for the matrix $B$ from the previous section?\n", + "\n", + "$$B = \\left[\n", + " \\begin{array}{ccc} 3 & 2 & 4 \\\\ 2 & 0 & 2 \\\\ 4 & 2 & 3\n", + " \\end{array} \\right]$$\n", + "\n", + "The solution to this problem is available [here](lab3_files/quizzes/eigvec/eigvec.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
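The sketch below shows one possible way to do both checks in a single cell: the eigenvectors of $B$, and the comparison of the built-in condition number with the eigenvalue approximation for the matrices $A$ and $B$ used in these examples.

```python
import numpy as np

A = np.array([[3, -2],
              [-4, 5]])
B = np.array([[3, 2, 4],
              [2, 0, 2],
              [4, 2, 3]])

for name, M in (("A", A), ("B", B)):
    lamb, x = np.linalg.eig(M)
    print(name, "eigenvalues:", lamb)
    print(name, "unit eigenvectors (one per column):")
    print(x)
    # built-in condition number versus the eigenvalue approximation
    print(name, "np.linalg.cond:", np.linalg.cond(M))
    print(name, "|lambda_max / lambda_min|:", abs(lamb).max() / abs(lamb).min())
```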
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n", + "\n", + "Although the method used here to find the eigenvalues is a direct way to\n", + "find the solution, it is not very efficient, especially for large\n", + "matrices. Typically, iterative methods such as the Power Method or the\n", + "QR algorithm are used (see a linear algebra text such as [Strang (1988)](#Ref:Strang88)\n", + "for more details)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Iterative Methods\n", + "[lab3:sec:iter]:<#Interative-Methods>\n", + "\n", + "So far, the only method we’ve seen for solving systems of linear\n", + "equations is Gaussian Elimination (with its pivoting variants), which is\n", + "only one of a class of *direct methods*. This name derives from the fact\n", + "that the Gaussian Elimination algorithm computes the exact solution\n", + "*directly*, in a finite number of steps. Other commonly-used direct\n", + "methods are based on matrix decomposition or factorizations different\n", + "from the $LU$ decomposition (see Section [Decomposition](#Decomposition)); for\n", + "example, the $LDL^T$ and Choleski factorizations of a matrix. When the\n", + "system to be solved is not too large, it is usually most efficient to\n", + "employ a direct technique that minimizes the effects of round-off error\n", + "(for example, Gaussian elimination with full pivoting).\n", + "\n", + "However, the matrices that occur in the discretization of differential\n", + "equations are typically *very large* and *sparse* – that is, a large\n", + "proportion of the entries in the matrix are zero. In this case, a direct\n", + "algorithm, which has a cost on the order of $N^3$ multiplicative\n", + "operations, will be spending much of its time inefficiently, operating\n", + "on zero entries. In fact, there is another class of solution algorithms\n", + "called *iterative methods* which exploit the sparsity of such systems to\n", + "reduce the cost of the solution procedure, down to order $N^2$ for\n", + "*Jacobi’s method*, the simplest of the iterative methods (see Lab \\#8 )\n", + "and as low as order $N$ (the optimal order) for *multigrid methods*\n", + "(which we will not discuss here).\n", + "\n", + "Iterative methods are based on the principle of computing an approximate\n", + "solution to a system of equations, where an iterative procedure is\n", + "applied to improve the approximation at every iteration. While the exact\n", + "answer is never reached, it is hoped that the iterative method will\n", + "approach the answer more rapidly than a direct method. For problems\n", + "arising from differential equations, this is often possible since these\n", + "methods can take advantage of the presence of a large number of zeroes\n", + "in the matrix. Even more importantly, most differential equations are\n", + "only approximate models of real physical systems in the first place, and\n", + "so in many cases, an approximation of the solution is sufficient!!!\n", + "\n", + "None of the details of iterative methods will be discussed in this Lab.\n", + "For now it is enough to know that they exist, and what type of problems\n", + "they are used for. 
Neither will we address the questions: *How quickly\n", + "does an iterative method converge to the exact solution?*, *Does it\n", + "converge at all?*, and *When are they more efficient than a direct\n", + "method?* Iterative methods will be discussed in more detail in Lab \\#8 ,\n", + "when a large, sparse system appears in the discretization of a PDE\n", + "describing the flow of water in the oceans.\n", + "\n", + "For even more details on iterative methods, you can also look at [Strang (1988)](#Ref:Strang88) [p. 403ff.], or one of the several books listed in the\n", + "Readings section from Lab \\#8 ." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Solution of an ODE Using Linear Algebra\n", + "[lab3:sec:prob]:<#Solution-of-an-ODE-Using-Linear-Algebra>\n", + "\n", + "So far, we’ve been dealing mainly with matrices with dimensions\n", + "$4\\times 4$ at the greatest. If this was the largest linear system that\n", + "we ever had to solve, then there would be no need for computers –\n", + "everything could be done by hand! Nevertheless, even very simple\n", + "differential equations lead to large systems of equations.\n", + "\n", + "Consider the problem of finding the steady state heat distribution in a\n", + "one-dimensional rod, lying along the $x$-axis between 0 and 1. We saw in\n", + "Lab \\#1 that the temperature, $u(x)$, can be described by a boundary\n", + "value problem, consisting of the ordinary differential equation\n", + "$$u_{xx} = f(x),$$ along with boundary values $$u(0)=u(1) = 0.$$ The\n", + "only difference between this example and the one from Lab \\#1 is that\n", + "the right hand side function, $f(x)$, is non-zero, which corresponds to\n", + "a heat source being applied along the length of the rod. The boundary\n", + "conditions correspond to the ends of the rod being held at constant\n", + "(zero) temperature – this type of condition is known as a fixed or\n", + "*Dirichlet* boundary condition.\n", + "\n", + "If we discretize this equation at $N$ discrete points, $x_i=id$,\n", + "$i=0,1,\\dots,N$, where $d = 1/N$ is the grid spacing, then the ordinary\n", + "differential equation can be approximated at a point $x_i$ by the\n", + "following system of linear equations:\n", + "
\n", + "(Discrete Differential Equation)\n", + "$$\\frac{u_{i+1} - 2u_i+u_{i-1}}{d^2} = f_i,$$ \n", + "
\n", + "where $f_i=f(x_i)$, and $u_i\\approx u(x_i)$\n", + "is an approximation of the steady state temperature at the discrete\n", + "points. If we write out all of the equations, for the unknown values\n", + "$i=1,\\dots,N-1$, along with the boundary conditions at $i=0,N$, we\n", + "obtain the following set of $N+1$ equations in $N+1$ unknowns:\n", + "
\n", + "(Differential System)\n", + "$$\\begin{array}{ccccccccccc}\n", + " u_0 & & & & & & & & &=& 0 \\\\\n", + " u_0 &-&2u_1 &+& u_2 & & & & &=& d^2 f_1\\\\\n", + " & & u_1 &-& 2u_2 &+& u_3 & & &=& d^2f_2\\\\\n", + " & & & & & & \\dots & & &=& \\\\\n", + " & & & &u_{N-2}&-& 2u_{N-1}&+& u_N &=& d^2f_{N-1}\\\\\n", + " & & & & & & & & u_N &=& 0\n", + "\\end{array}$$\n", + "
\n", + "\n", + "Remember that this system, like any other linear system, can be written\n", + "in matrix notation as\n", + "\n", + "
\n", + "(Differential System Matrix)\n", + "$$\\underbrace{\\left[\n", + " \\begin{array}{ccccccccc}\n", + " 1& 0 & & \\dots & & & & & 0 \\\\\n", + " 1& {-2} & {1} & {0} & {\\dots} & && & \\\\\n", + " 0& {1} & {-2} & {1} & {0} & {\\dots} & & & \\\\\n", + " & {0} & {1} & {-2} & {1} & {0} & {\\dots} & & \\\\\n", + " & & & & & & & & \\\\\n", + " \\vdots & & & {\\ddots} & {\\ddots} & {\\ddots} & {\\ddots} & {\\ddots} & \\vdots \\\\\n", + " & & & & & & & & \\\\\n", + " & & & {\\dots} & {0} & {1} & {-2} & {1} & 0 \\\\\n", + " & & & &{\\dots} & {0} & {1} & {-2} & 1 \\\\\n", + " 0& & & & & \\dots & & 0 & 1 \n", + " \\end{array}\n", + " \\right]\n", + " }_{A_1}\n", + " \\underbrace{\\left[\n", + " \\begin{array}{c}\n", + " u_0 \\\\ {u_1} \\\\ {u_2} \\\\ {u_3} \\\\ \\ \\\\ {\\vdots} \\\\ \\\n", + " \\\\ {u_{N-2}} \\\\ {u_{N-1}} \\\\ u_N\n", + " \\end{array}\n", + " \\right]\n", + " }_{U}\n", + " = \n", + " \\underbrace{\\left[\n", + " \\begin{array}{c}\n", + " 0 \\\\ {d^2 f_1} \\\\ {d^2 f_2} \\\\ {d^2 f_3} \\\\ \\ \\\\\n", + " {\\vdots} \\\\ \\ \\\\ {d^2 f_{N-2}} \\\\ {d^2 f_{N-1}} \\\\ 0 \n", + " \\end{array}\n", + " \\right] \n", + " }_{F}$$\n", + "
\n", + "\n", + "or, simply $A_1 U = F$.\n", + "\n", + "One question we might ask is: *How well-conditioned is the matrix\n", + "$A_1$?* or, in other words, *How easy is this system to solve?* To\n", + "answer this question, we leave the right hand side, and consider only\n", + "the matrix and its condition number. The size of the condition number is\n", + "a measure of how expensive it will be to invert the matrix and hence\n", + "solve the discrete boundary value problem." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Problem Two\n", + "[lab3:prob:dirichlet]:<#Problem-Two> \n", + "\n", + "> a) Using Python, compute the condition number for\n", + "the matrix $A_1$ from Equation [Differential System Matrix](#lab3:eq:dir-system) for several values of $N$\n", + "between 5 and 50. ( **Hint:** This will be much easier if you write a\n", + "small Python function that outputs the matrix $A$ for a given value of\n", + "$N$.)\n", + "\n", + "> b\\) Can you conjecture how the condition number of $A_1$ depends on $N$?\n", + "\n", + "> c\\) Another way to write the system of equations is to substitute the\n", + "boundary conditions into the equations, and thereby reduce size of the\n", + "problem to one of $N-1$ equations in $N-1$ unknowns. The corresponding\n", + "matrix is simply the $N-1$ by $N-1$ submatrix of $A_1$\n", + "from Equation [Differential System Matrix](#lab3:eq:dir-system) $$A_2 = \\left[\n", + " \\begin{array}{ccccccc}\n", + " -2 & 1 & 0 & \\dots & && 0 \\\\\n", + " 1 & -2 & 1 & 0 & \\dots & & \\\\\n", + " 0 & 1 & -2 & 1 & 0 & \\dots & \\\\\n", + " & & & & & & \\\\\n", + " \\vdots & & \\ddots & \\ddots& \\ddots & \\ddots & \\vdots\\\\\n", + " & & & & & & 0 \\\\\n", + " & & \\dots & 0 & 1 & -2 & 1 \\\\\n", + " 0& & &\\dots & 0 & 1 & -2 \\\\\n", + " \\end{array}\n", + " \\right]\n", + "$$ Does this change in the matrix make a significant difference in the\n", + "condition number?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So far, we’ve only considered zero Dirichlet boundary values,\n", + "$u_0=0=u_N$. Let’s look at a few more types of boundary values …\n", + "\n", + " **Fixed (non-zero) boundary conditions:**\n", + "\n", + "> If we fixed the solution at the boundary to be some non-zero values,\n", + " say by holding one end at temperature $u_0=a$, and the other at\n", + " temperature $u_N=b$, then the matrix itself is not affected. The\n", + " only thing that changes in Equation [Differential System](#lab3:eq:dir-system) \n", + " is that\n", + " a term $a$ is subtracted from the right hand side of the second\n", + " equation, and a term $b$ is subtracted from the RHS of the\n", + " second-to-last equation. It is clear from what we’ve just said that\n", + " non-zero Dirichlet boundary conditions have no effect at all on the\n", + " matrix $A_1$ (or $A_2$) since they modify only the right hand side.\n", + "\n", + " **No-flow boundary conditions:**\n", + "\n", + "> These are conditions on the first derivative of the temperature\n", + " $$u_x(0) = 0,$$ $$u_x(1) = 0,$$\n", + " which are also known as *Neumann*\n", + " boundary conditions. 
The requirement that the first derivative of\n", + " the temperature be zero at the ends corresponds physically to the\n", + " situation where the ends of the rod are *insulated*; that is, rather\n", + " than fixing the temperature at the ends of the rod (as we did with\n", + " the Dirichlet problem), we require instead that there is no heat\n", + " flow in or out of the rod through the ends.\n", + "\n", + "> There is still one thing that is missing in the mathematical\n", + " formulation of this problem: since only derivatives of $u$ appear in\n", + " the equations and boundary conditions, the solution is determined\n", + " only up to a constant, and for there to be a unique solution, we\n", + " must add an extra condition. For example, we could set\n", + " $$u \\left(\\frac{1}{2} \\right) = constant,$$\n", + " or, more realistically,\n", + " say that the total heat contained in the rod is constant, or\n", + " $$\\int_0^1 u(x) dx = constant.$$\n", + " \n", + "> Now, let us look at the discrete formulation of the above problem …\n", + "\n", + "> The discrete equations do not change, except for that discrete\n", + " equations at $i=0,N$ replace the Dirichlet conditions in Equation [Differential System](#lab3:eq:diff-system):\n", + " \n", + " (Neumann Boundary Conditions)\n", + " $$u_{-1} - 2u_0 +u_{1} = d^2f_0 \\quad {\\rm and} \\quad\n", + " u_{N-1} - 2u_N +u_{N+1} = d^2f_N $$ \n", + " where we have introduced the\n", + " additional *ghost points*, or *fictitious points* $u_{-1}$ and\n", + " $u_{N+1}$, *lying outside the boundary*. The temperature at these\n", + " ghost points can be determined in terms of values in the interior\n", + " using the discrete version of the Neumann boundary conditions\n", + " $$\\frac{u_{-1} - u_1}{2d} = 0 \\;\\; \\Longrightarrow \\;\\; u_{-1} = u_1,$$\n", + " $$\\frac{u_{N+1} - u_{N-1}}{2d} = 0 \\;\\; \\Longrightarrow \\;\\; u_{N+1} = u_{N-1}.$$\n", + " Substitute these back into the [Neumann Boundary Conditions](#lab3:eq:neumann-over) to obtain\n", + " $$- 2u_0 + 2 u_1 =d^2 f_0 \\quad {\\rm and} \\quad\n", + " + 2u_{N-1} - 2 u_N =d^2 f_N .$$\n", + " In this case, the matrix is an\n", + " $N+1$ by $N+1$ matrix, almost identical to Equation [Differential System Matrix](#lab3:eq:dir-system),\n", + " but with the first and last rows slightly modified $$A_3 = \\left[\n", + " \\begin{array}{ccccccc}\n", + " -2 & 2 & 0 & \\dots & && 0 \\\\\n", + " 1 & -2 & 1 & 0 & \\dots & & 0\\\\\n", + " 0 & 1 & -2 & 1 & 0 & \\dots & 0\\\\\n", + " & & & & & & \\\\\n", + " & & & \\ddots& \\ddots & \\ddots & \\\\ \n", + " & & & & & & \\\\\n", + " 0 & & \\dots & 0 & 1 & -2 & 1 \\\\\n", + " 0 & & &\\dots & 0 & 2 & -2\n", + " \\end{array}\n", + " \\right]$$ \n", + " This system is *not solvable*; that is, the $A_3$ above is\n", + " singular ( *try it in Python to check for yourself … this should be\n", + " easy by modifying the code from [Problem 2](#Problem-Two)).\n", + " This is a discrete analogue of the fact that the continuous solution\n", + " is not unique. The only way to overcome this problem is to add\n", + " another equation for the unknown temperatures." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Physically the reason the problem is not unique is that we don’t know\n", + "how hot the rod is. If we think of the full time dependent problem:\n", + "\n", + "1\\) given fixed temperatures at the end points of the rod (Dirichlet),\n", + "whatever the starting temperature of the rod, eventually the rod will\n", + "reach equilibrium. 
with a temperature smoothly varying between the\n", + "values (given) at the end points.\n", + "\n", + "2\\) However, if the rod is insulated (Neumann), no heat can escape and\n", + "the final temperature will be related to the initial temperature. To\n", + "solve this problem we need to know the steady state,\n", + "\n", + "a\\) the initial temperature of the rod,\n", + "\n", + "b\\) the total heat of the rod,\n", + "\n", + "c\\) or a final temperature at some point of the rod.\n", + "\n", + "### Problem Three\n", + "\n", + "[lab3:prob:neumann]:<#Problem-Three>\n", + "\n", + "> How can we make the discrete Neumann problem\n", + "solvable? Think in terms of discretizing the *solvability conditions*\n", + "$u(\\frac{1}{2}) = c$ (condition c) above), or $\\int_0^1 u(x) dx = c$\n", + "(condition b) above), (the integral condition can be thought of as an\n", + "*average* over the domain, in which case we can approximate it by the\n", + "discrete average $\\frac{1}{N}(u_0+u_1+\\dots+u_N)=c$). \n", + "\n", + "> a) Derive the\n", + "matrix corresponding to the linear system to be solved in both of these\n", + "cases.\n", + "\n", + "> b\\) How does the conditioning of the resulting matrix depend on the the\n", + "size of the system?\n", + "\n", + "> c\\) Is it better or worse than for Dirichlet boundary conditions?\n", + "\n", + " **Periodic boundary conditions:**\n", + "\n", + "> This refers to the requirement that the temperature at both ends\n", + " remains the same: $$u(0) = u(1).$$ Physically, you can think of this\n", + " as joining the ends of the rod together, so that it is like a\n", + " *ring*. From what we’ve seen already with the other boundary\n", + " conditions, it is not hard to see that the discrete form of the\n", + " one-dimensional diffusion problem with periodic boundary conditions\n", + " leads to an $N\\times N$ matrix of the form $$A_4 = \\left[\n", + " \\begin{array}{ccccccc}\n", + " -2 & 1 & 0 & \\dots & && 1 \\\\\n", + " 1 & -2 & 1 & 0 & \\dots & & 0\\\\\n", + " 0 & 1 & -2 & 1 & 0 & \\dots & 0\\\\\n", + " & & & & & & \\\\\n", + " & & & \\ddots& \\ddots & \\ddots & \\\\ \n", + " & & & & & & \\\\\n", + " 0 & & \\dots & 0 & 1 & -2 & 1 \\\\\n", + " 1 & & &\\dots & 0 & 1 & -2\n", + " \\end{array}\n", + " \\right],\n", + " $$ where the unknown temperatures are now $u_i$, $i=0,1,\\dots, N-1$.\n", + " The major change to the form of the matrix is that the elements in\n", + " the upper right and lower left corners are now 1 instead of 0. Again\n", + " the same problem of the invertibility of the matrix comes up. This\n", + " is a symptom of the fact that the continuous problem does not have a\n", + " unique solution. It can also be remedied by tacking on an extra\n", + " condition, such as in the Neumann problem above.\n", + "\n", + "### Problem Four\n", + "[lab3:prob:periodic]: <#Problem-Four> \n", + "\n", + "> a) Derive the matrix $A_4$ above using the discrete\n", + "form [Discrete Differential Equation](#lab3:eq:diff-ode) of the differential equation and the periodic\n", + "boundary condition.\n", + "\n", + "> b) For the periodic problem (with the extra integral condition on the\n", + "temperature) how does the conditioning of the matrix compare to that for\n", + "the other two discrete problems?" 
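For Problems Two to Four it helps to generate these matrices with a small function, as suggested in the hint to Problem Two. The sketch below (the function name and structure are just one possible choice) builds the Dirichlet matrix $A_1$ for a given $N$ and prints its condition number; the matrices $A_2$, $A_3$ and $A_4$ can be built in the same way by changing the boundary rows.

```python
import numpy as np

def dirichlet_matrix(N):
    """Return the (N+1) x (N+1) matrix A_1 for the Dirichlet problem."""
    A = np.zeros((N + 1, N + 1))
    # interior rows carry the 1, -2, 1 finite-difference stencil
    for i in range(1, N):
        A[i, i - 1] = 1.0
        A[i, i] = -2.0
        A[i, i + 1] = 1.0
    # first and last rows enforce the boundary conditions u_0 = u_N = 0
    A[0, 0] = 1.0
    A[N, N] = 1.0
    return A

for N in (5, 10, 20, 50):
    print(N, np.linalg.cond(dirichlet_matrix(N)))
```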
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Summary\n", + "\n", + "As you will have found in these problems, the boundary conditions can\n", + "have an influence on the conditioning of a discrete problem.\n", + "Furthermore, the method of discretizing the boundary conditions may or\n", + "may not have a large effect on the condition number. Consequently, we\n", + "must take care when discretizing a problem in order to obtain an\n", + "efficient numerical scheme." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References\n", + "
\n", + "Strang, G., 1986: Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA.\n", + "
\n", + "
\n", + "Strang, G., 1988: Linear Algebra and its Applications. Harcourt Brace Jovanovich, San Diego, CA, 2nd edition.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Numpy and Python with Matrices\n", + "\n", + "**To start, import numpy,**\n", + "\n", + "Enter:\n", + "\n", + "> import numpy as np\n", + "\n", + "**To enter a matrix,** \n", + "$$ A = \\left[ \\begin{array}{ccc} a, & b, & c \\\\\n", + " d, & e, & f \\end{array} \\right] $$\n", + "Enter:\n", + "\n", + "> A = np.array([[a, b, c], [d, e, f]])\n", + "\n", + "**To add two matrices,**\n", + "$$ C = A + B$$\n", + "\n", + "Enter:\n", + "\n", + "> C = A + B\n", + "\n", + "**To multiply two matrices,**\n", + "$$ C = A \\cdot B $$\n", + "\n", + "Enter:\n", + "\n", + "> C = np.dot(A, B)\n", + "\n", + "**To find the tranpose of a matrix,**\n", + "$$ C = A^{T} $$\n", + "\n", + "Enter:\n", + "\n", + "> C = A.tranpose()\n", + "\n", + "**To find the condition number of a matrix,**\n", + "\n", + "> K = np.linalg.cond(A)\n", + "\n", + "**To find the inverse of a matrix,**\n", + "$$ C = A^{-1} $$\n", + "\n", + "Enter:\n", + "\n", + "> C = np.linalg.inv(A)\n", + "\n", + "**To find the determinant of a matrix,**\n", + "$$ K = |A|$$\n", + "\n", + "Enter:\n", + "\n", + "> K = np.linalg.det(A)\n", + "\n", + "**To find the eigenvalues of a matrix,**\n", + "\n", + "Enter:\n", + "\n", + "> lamb = np.linalg.eigvals(A)\n", + "\n", + "**To find the eigenvalues (lamb) and eigenvectors (x) of a matrix,**\n", + "\n", + "Enter:\n", + "\n", + "> lamb, x = np.linalg.eig(A)\n", + "\n", + "lamb[i] are the eigenvalues and x[:, i] are the eigenvectors.\n", + "\n", + "**To print a matrix,**\n", + "$$C$$\n", + "\n", + "Enter:\n", + "\n", + "> print (C)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Glossary\n", + "[glossary.unnumbered]:<#Glossary>\n", + "\n", + "**A** \n", + "\n", + "augmented matrix\n", + "\n", + "> The $m \\times (n+1)$ matrix representing a linear system,\n", + " $Ax = b$, with the right hand side vector appended to the\n", + " coefficient matrix: $$\\left[ \n", + " \\begin{array}{cc}\n", + " \\begin{array}{ccccc} \n", + " a_{11} & & \\ldots & & a_{1n} \\\\\n", + " \\vdots & & \\ddots & & \\vdots \\\\\n", + " a_{m1} & & \\ldots & & a_{mn} \n", + " \\end{array}\n", + " &\n", + " \\left| \n", + " \\begin{array}{rc}\n", + " & b_{1} \\\\ & \\vdots \\\\ & b_{m}\n", + " \\end{array} \n", + " \\right. \n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "> The right most column is the right hand side vector or augmented\n", + " column.\n", + "\n", + "**C** \n", + "\n", + "characteristic equation\n", + "\n", + "> The equation:\n", + " $$\\det(A - \\lambda I) = 0 , \\ \\ \\ \\ or \\ \\ \\ \\ Ax = \\lambda x$$\n", + " where $A$ is a *square matrix*, $I$ is the *identity matrix*,\n", + " $\\lambda$ is an *eigenvalue* of $A$, and $x$ is the corresponding\n", + " *eigenvector* of $\\lambda$.\n", + "\n", + "coefficient matrix\n", + "\n", + "> A $m \\times n$ matrix made up with the coefficients $a_{ij}$ of the\n", + " $n$ unknowns from the $m$ equations of a set of linear equations,\n", + " where $i$ is the row index and $j$ is the column index: $$\\left[\n", + " \\begin{array}{ccccccc}\n", + " & a_{11} & & \\ldots & & a_{1n} & \\\\\n", + " & \\vdots & & \\ddots & & \\vdots & \\\\\n", + " & a_{m1} & & \\ldots & & a_{mn} &\n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "condition number\n", + "\n", + "> A number, $K$, that refers to the sensitivity of a *nonsingular*\n", + " matrix, $A$, i.e. 
given a system $Ax = b$, $K$ reflects whether\n", + " small changes in $A$ and $b$ will have any effect on the solution.\n", + " The matrix is well-conditioned if $K$ is close to one. The number is\n", + " described as: $$K(A) = \\|A\\| \\|A^{-1}\\| \n", + " \\ \\ \\ \\ or \\ \\ \\ \\\n", + " K(A) = \\frac{\\lambda_{max}}{\\lambda_{min}}$$ where $\\lambda_{max}$\n", + " and $\\lambda_{min}$ are largest and smallest *eigenvalues* of $A$\n", + " respectively.\n", + "\n", + "**D** \n", + "\n", + "decomposition\n", + "\n", + "> Factoring a matrix, $A$, into two factors, e.g., the Gaussian\n", + " elimination amounts to factoring $A$ into a product of two matrices.\n", + " One is the lower triangular matrix, $L$, and the other is the upper\n", + " triangular matrix, $U$.\n", + "\n", + "diagonal matrix\n", + "\n", + "> A square matrix with the entries $a_{ij} = 0 $ whenever $i \\neq j$.\n", + "\n", + "**E**\n", + "\n", + "eigenvalue\n", + "\n", + "> A number, $\\lambda$, that must satisfy the *characteristic equation*\n", + " $\\det(A - \\lambda I) = 0.$\n", + "\n", + "eigenvector\n", + "\n", + "> A vector, $x$, which corresponds to an *eigenvalue* of a *square\n", + " matrix* $A$, satisfying the characteristic equation:\n", + " $$Ax = \\lambda x .$$\n", + "\n", + "**H** \n", + "\n", + "homogeneous equations\n", + "\n", + "> A set of linear equations, $Ax = b$ with the zero vector on the\n", + " right hand side, i.e. $b = 0$.\n", + "\n", + "**I** \n", + "\n", + "inhomogeneous equations\n", + "\n", + "> A set of linear equations, $Ax = b$ such that $b \\neq 0$.\n", + "\n", + "identity matrix\n", + "\n", + "> A *diagonal matrix* with the entries $a_{ii} = 1$:\n", + " $$\\left[ \\begin{array}{ccccccc}\n", + " & 1 & 0 & \\ldots & \\ldots & 0 & \\\\\n", + " & 0 & 1 & \\ddots & & \\vdots & \\\\\n", + " & \\vdots & \\ddots & \\ddots & \\ddots & \\vdots \\\\\n", + " & \\vdots & & \\ddots & 1 & 0 & \\\\\n", + " & 0 & \\ldots & \\ldots & 0 & 1 &\n", + " \\end{array} \\right]$$\n", + "\n", + "ill-conditioned matrix\n", + "\n", + "> A matrix with a large *condition number*, i.e., the matrix is not\n", + " well-behaved, and small errors to the matrix will have great effects\n", + " to the solution.\n", + "\n", + "invertible matrix\n", + "\n", + "> A square matrix, $A$, such that there exists another matrix,\n", + " $A^{-1}$, which satisfies:\n", + " $$AA^{-1} = I \\ \\ \\ \\ and \\ \\ \\ \\ A^{-1}A = I$$\n", + "\n", + "> The matrix, $A^{-1}$, is the *inverse* of $A$. 
An invertible matrix\n", + " is *nonsingular*.\n", + "\n", + "**L** \n", + "\n", + "linear system\n", + "\n", + "> A set of $m$ equations in $n$ unknowns: $$\\begin{array}{ccccccc}\n", + " a_{11}x_{1} & + & \\ldots & + & a_{1n}x_{n} & = & b_{1} \\\\\n", + " a_{21}x_{1} & + & \\ldots & + & a_{2n}x_{n} & = & b_{2} \\\\\n", + " & & \\vdots & & & & \\vdots \\\\\n", + " a_{m1}x_{1} & + & \\ldots & + & a_{mn}x_{n} & = & b_{m} \n", + " \\end{array}$$ with unknowns $x_{i}$ and coefficients\n", + " $a_{ij}, b_{j}$.\n", + "\n", + "lower triangular matrix\n", + "\n", + "> A square matrix, $L$, with the entries $l_{ij} = 0$, whenever\n", + " $j > i$: $$\\left[\n", + " \\begin{array}{ccccccc}\n", + " & * & 0 & \\ldots & \\ldots & 0 & \\\\\n", + " & * & * & \\ddots & & \\vdots & \\\\\n", + " & \\vdots & & \\ddots & \\ddots & \\vdots & \\\\\n", + " & \\vdots & & & * & 0 & \\\\\n", + " & * & \\ldots & \\ldots & \\ldots & * &\n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "**N** \n", + "\n", + "nonsingular matrix\n", + "\n", + "> A square matrix,$A$, that is invertible, i.e. the system $Ax = b$\n", + " has a *unique solution*.\n", + "\n", + "**S** \n", + "\n", + "singular matrix\n", + "\n", + "> A $n \\times n$ matrix that is degenerate and does not have an\n", + " inverse (refer to *invertible*), i.e., the system $Ax = b$ does not\n", + " have a *unique solution*.\n", + "\n", + "sparse matrix\n", + "\n", + "> A matrix with a high percentage of zero entries.\n", + "\n", + "square matrix\n", + "\n", + "> A matrix with the same number of rows and columns.\n", + "\n", + "**T** \n", + "\n", + "transpose\n", + "\n", + "> A $n \\times m$ matrix, $A^{T}$, that has the columns of a\n", + " $m \\times n$ matrix, $A$, as its rows, and the rows of $A$ as its\n", + " columns, i.e. the entry $a_{ij}$ in $A$ becomes $a_{ji}$ in $A^{T}$,\n", + " e.g.\n", + "\n", + "> $$A = \n", + " \\left[ \\begin{array}{ccc} 1 & 2 & 3 \\\\ 4 & 5 & 6 \\end{array} \\right] \n", + " \\ \\ \\rightarrow \\ \\ \n", + " A^{T} = \n", + " \\left[ \\begin{array}{cc} 1 & 4 \\\\ 2 & 5 \\\\ 3 & 6 \\end{array} \\right]$$\n", + "\n", + "tridiagonal matrix\n", + "\n", + "> A square matrix with the entries $a_{ij} = 0$, $| i-j | > 1 $:\n", + "\n", + "> $$\\left[\n", + " \\begin{array}{cccccccc}\n", + " & * & * & 0 & \\ldots & \\ldots & 0 & \\\\\n", + " & * & * & * & \\ddots & & \\vdots & \\\\\n", + " & 0 & * & \\ddots & \\ddots & \\ddots & \\vdots & \\\\\n", + " & \\vdots & \\ddots & \\ddots & \\ddots & * & 0 & \\\\\n", + " & \\vdots & & \\ddots & * & * & * & \\\\\n", + " & 0 & \\ldots & \\ldots & 0 & * & * &\n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "**U** \n", + "\n", + "unique solution\n", + "\n", + "> There is only solution, $x$, that satisfies a particular linear\n", + " system, $Ax = b$, for the given $A$. That is, this linear system has\n", + " exactly one solution. 
The matrix $A$ of the system is *invertible*\n", + " or *nonsingular*.\n", + "\n", + "upper triangular matrix\n", + "\n", + "> A square matrix, $U$, with the entries $u_{ij} = 0$ whenever\n", + " $i > j$:\n", + "\n", + "> $$\\left[\n", + " \\begin{array}{ccccccc}\n", + " & * & \\ldots & \\ldots & \\ldots & * & \\\\\n", + " & 0 & * & & & \\vdots & \\\\\n", + " & \\vdots & \\ddots & \\ddots & & \\vdots & \\\\\n", + " & \\vdots & & \\ddots & * & * & \\\\\n", + " & 0 & \\ldots & \\ldots & 0 & * &\n", + " \\end{array}\n", + " \\right]$$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "header_metadata": { + "chead": "Feb., 2020", + "lhead": "Numeric Lab 2" + }, + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "324.176px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab4/01-lab4.ipynb.txt b/_sources/notebooks/lab4/01-lab4.ipynb.txt new file mode 100644 index 0000000..c76129c --- /dev/null +++ b/_sources/notebooks/lab4/01-lab4.ipynb.txt @@ -0,0 +1,1346 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Solving Ordinary Differential Equations with the Runge-Kutta Methods " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems \n", + "\n", + "\n", + "- [Problem Midpoint](#ProblemMidpoint)\n", + "\n", + "- [Problem Tableau](#ProblemTableau)\n", + "\n", + "- [Problem Runge Kutta4](#ProblemRK4)\n", + "\n", + "- [Problem embedded](#ProblemEmbedded)\n", + "\n", + "- [Problem coding A](#ProblemCodingA)\n", + "\n", + "- [Problem coding B](#ProblemCodingB)\n", + "\n", + "- [Problem coding C](#ProblemCodingC)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Assignment: see canvas for the problems you should hand-in.**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "In this lab, you will explore Runge-Kutta methods for solving ordinary\n", + "differential equations. 
The goal is to gain a better understanding of\n", + "some of the more popular Runge-Kutta methods and the corresponding\n", + "numerical code.\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- describe the mid-point method\n", + "\n", + "- construct a Runge-Kutta tableau from equations or equations from a\n", + " tableau\n", + "\n", + "- describe how a Runge-Kutta method estimates truncation error\n", + "\n", + "- edit a working python code to use a different method or solve a\n", + " different problem" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. However, if you would like additional background on any of\n", + "the following topics, then refer to the sections indicated below.\n", + "\n", + "**Runge-Kutta Methods:**\n", + "\n", + " - Newman, Chapter 8\n", + "\n", + " - Press, et al.  Section 16.1\n", + "\n", + " - Burden & Faires  Section 5.4\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "Ordinary differential equations (ODEs) arise in many physical situations. For example, there is the first-order Newton cooling equation discussed in Lab 1, and perhaps the most famous equation of all, the second-order Newton’s Second Law of Mechanics $F=ma$ .\n", + "\n", + "In general, higher-order equations, such as Newton’s force equation, can be rewritten as a system of first-order equations . So the generic problem in ODEs is a set of N coupled first-order differential equations of the form, \n", + "\n", + "$$\n", + " \\frac{d{\\bf y}}{dt} = f({\\bf y},t)\n", + "$$ \n", + " \n", + "where ${\\bf y}$ is a vector of\n", + "variables.\n", + "\n", + "For a complete specification of the solution, boundary conditions for the problem must be given. Typically, the problems are broken up into two classes:\n", + "\n", + "- **Initial Value Problem (IVP)**: the initial values of\n", + " ${\\bf y}$ are specified.\n", + "\n", + "- **Boundary Value Problem (BVP)**: ${\\bf y}$ is\n", + " specified at the initial and final times.\n", + "\n", + "For this lab, we are concerned with the IVP’s. BVP’s tend to be much more difficult to solve and involve techniques which will not be dealt with in this set of labs.\n", + "\n", + "Now as was pointed out in Lab 2, in general, it will not be possible to find exact, analytic solutions to the ODE. However, it is possible to find an approximate solution with a finite difference scheme such as the forward Euler method. This is a simple first-order, one-step scheme which is easy to implement. However, this method is rarely used in practice as it is neither very stable nor accurate.\n", + "\n", + "The higher-order Taylor methods discussed in Lab 2 are one alternative but involve higher-order derivatives that must be calculated by hand or worked out numerically in a multi-step scheme. Like the forward Euler method, stability is a concern.\n", + "\n", + "The Runge-Kutta methods are higher-order, one-step schemes that make use of information at different *stages* between the beginning and end of a step. They are more stable and accurate than the forward Euler method and are still relatively simple compared to schemes such as the multi-step predictor-corrector methods or the Bulirsch-Stoer method. 
Though they lack the accuracy and efficiency of these more sophisticated schemes, they are still powerful methods that almost always succeed for non-stiff IVPs." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Runge-Kutta methods" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The Midpoint Method: A Two-Stage Runge-Kutta Method \n", + "\n", + "The forward Euler method takes the solution at time $t_n$ and advances\n", + "it to time $t_{n+1}$ using the value of the derivative $f(y_n,t_n)$ at\n", + "time $t_n$ \n", + "\n", + "$$y_{n+1} = y_n + h f(y_n,t_n)$$ \n", + "\n", + "where $h \\equiv \\Delta t$." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "title": "[markdown" + }, + "source": [ + "![fig1](images/euler.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Figure Euler: The forward Euler method is essentially a straight-line approximation to the solution, over the interval of one step, using the derivative at the starting point as the slope. \n", + "\n", + "The idea of the Runge-Kutta schemes is to take advantage of derivative information at the times between $t_n$ and $t_{n+1}$ to increase the order of accuracy.\n", + "\n", + "For example, in the midpoint method, the derivative at the initial time is used to approximate the derivative at the midpoint of the interval, $f(y_n+\\frac{1}{2}hf(y_n,t_n), t_n+\\frac{1}{2}h)$. The derivative at the midpoint is then used to advance the solution to the next step. \n", + "\n", + "The method can be written in two *stages* $k_i$,\n", + "\n", + "
eq:midpoint
\n", + "$$\n", + "\\begin{aligned}\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{1}{2}k_1, t_n+\\frac{1}{2}h)\\\\\n", + " y_{n+1} = y_n + k_2\n", + " \\end{array}\n", + "\\end{aligned}\n", + "$$ \n", + "\n", + "The midpoint method is known as a 2-stage Runge-Kutta formula.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![fig2](images/midpoint.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Figure midpoint: The midpoint method again uses the derivative at the starting point to\n", + "approximate the solution at the midpoint. The derivative at the midpoint\n", + "is then used as the slope of the straight-line approximation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Second-Order Runge-Kutta Methods\n", + "\n", + "As was shown in Lab 2, the local error in the forward Euler method is proportional to $h^2$. In other words, the forward Euler method has an accuracy which is *first order* in $h$.\n", + "\n", + "The advantage of the midpoint method is that the extra derivative information at the midpoint results in the first order error term cancelling out, making the method *second order* accurate. This can be shown by a Taylor expansion of equation\n", + "[eq:midpoint](#eq:midpoint)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-24T03:40:10.198195Z", + "start_time": "2022-01-24T03:40:10.179554Z" + } + }, + "source": [ + "### ProblemMidpoint\n", + "\n", + "Even though the midpoint method is second-order\n", + "accurate, it may still be less accurate than the forward Euler method.\n", + "In the demo below, compare the accuracy of the two methods on the\n", + "initial value problem \n", + "\n", + "
eq:linexp
\n", + "\\begin{equation}\n", + "\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1\n", + "\\end{equation}\n", + "\n", + "which has the exact\n", + "solution \n", + "\\begin{equation}\n", + "y(t) = t + e^{-t}\n", + "\\end{equation}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. Why is it possible that the midpoint method may be less accurate\n", + " than the forward Euler method, even though it is a higher order\n", + " method?\n", + "\n", + "2. Based on the numerical solutions of [eq:linexp](#eq:linexp), which method\n", + " appears more accurate?\n", + "\n", + "3. Cut the stepsize in half and check the error at a given time. Repeat\n", + " a couple of more times. How does the error drop relative to the\n", + " change in stepsize?\n", + "\n", + "4. How do the numerical solutions compare to $y(t) = t + e^{-t}$ when\n", + " you change the initial time? Why?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "_Note: in the original lab code (below) there is a bug in the code, such that the euler method is being initialized at each timestep with the previous value from the midpoint method, NOT the previous value from the euler method!_" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# original demo - with bug\n", + "import context\n", + "from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41\n", + "import numpy as np\n", + "from matplotlib import pyplot as plt\n", + "\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.25,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "y=coeff.yinitial\n", + "ye.append(coeff.yinitial)\n", + "ym.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=eulerinter41(coeff,y,timeVec[i-1])\n", + " ye.append(ynew) \n", + " ynew=midpointinter41(coeff,y,timeVec[i-1])\n", + " ym.append(ynew)\n", + " y=ynew\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig,theAx=plt.subplots(1,1)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,ye,'r-',label='euler')\n", + "l3=theAx.plot(timeVec,ym,'g-',label='midpoint')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.1');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "_Note: this bug has been fixed in the code below, by calling each method with the previous value from that method!_" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-24T03:40:30.477370Z", + "start_time": "2022-01-24T03:40:29.188538Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# original demo\n", + "import context\n", + "from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41\n", + "import numpy as np\n", + "from matplotlib import pyplot as plt\n", + "\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.25,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "y=coeff.yinitial\n", + "ye.append(coeff.yinitial)\n", + "ym.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=eulerinter41(coeff,ye[i-1],timeVec[i-1]) ## here we use ye[i-1] instead of 
y\n", + " ye.append(ynew)\n", + " \n", + " ynew=midpointinter41(coeff,ym[i-1],timeVec[i-1]) ## here we use ym[i-1] instead of y\n", + " ym.append(ynew)\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig,theAx=plt.subplots(1,1)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,ye,'r-',label='euler')\n", + "l3=theAx.plot(timeVec,ym,'g-',label='midpoint')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.1');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In general, an *explicit* 2-stage Runge-Kutta method can be\n", + "written as\n", + "\n", + "\n", + "\n", + "
eq:explicitrk1
\n", + "\n", + "\\begin{align}\n", + "k_1 =& h f(y_n,t_n)\\\\\n", + "k_2 =& h f(y_n+b_{21}k_1, t_n+a_2h) \\\\\n", + "y_{n+1} =& y_n + c_1k_1 +c_2k_2\n", + "\\end{align}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The scheme is said to be *explicit* since a given stage does not depend *implicitly* on itself, as in the backward Euler method, or on a later stage.\n", + "\n", + "Other explicit second-order schemes can be derived by comparing the formula [eq: explicitrk2](#eq:explicitrk2) to the second-order Taylor method and matching terms to determine the coefficients $a_2$, $b_{21}$, $c_1$ and $c_2$.\n", + "\n", + "See [Appendix midpoint](#app_midpoint) for the derivation of the midpoint method." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The Runge-Kutta Tableau \n", + "\n", + "A general s-stage Runge-Kutta method can be written as,\n", + "\n", + "$$\n", + "\\begin{array}{l}\n", + " k_i = h f(y_n+ {\\displaystyle \\sum_{j=1}^{s} } b_{ij}k_j, t_n+a_ih), \n", + " \\;\\;\\; i=1,..., s\\\\\n", + " y_{n+1} = y_n + {\\displaystyle \\sum_{j=1}^{s}} c_jk_j \n", + "\\end{array}\n", + "$$\n", + "\n", + "\n", + "\n", + "\n", + "An *explicit* Runge-Kutta method has $b_{ij}=0$ for\n", + "$i\\leq j$, i.e. a given stage $k_i$ does not depend on itself or a later\n", + "stage $k_j$.\n", + "\n", + "The coefficients can be expressed in a tabular form known as the\n", + "Runge-Kutta tableau. \n", + "\n", + "$$\n", + "\\begin{array}{|c|c|cccc|c|} \\hline\n", + "i & a_i &{b_{ij}} & & && c_i \\\\ \\hline\n", + "1 & a_1 & b_{11} & b_{12} & ... & b_{1s} & c_1\\\\\n", + "2 & a_2 & b_{21} & b_{22} & ... & b_{2s} & c_2\\\\ \n", + "\\vdots & \\vdots & \\vdots & \\vdots & & \\vdots & \\vdots\\\\\n", + "s &a_s & b_{s1} & b_{s2} & ... & b_{ss} & c_s\\\\\\hline\n", + "{j=} & & 1 \\ 2 & ... & s & \\\\ \\hline\n", + "\\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "An explicit scheme will be strictly lower-triangular.\n", + "\n", + "For example, a general 2-stage Runge-Kutta method, \n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n+b_{11}k_1+b_{12}k_2,t_n+a_1h)\\\\\n", + " k_2 = h f(y_n+b_{21}k_1+b_{22}k_2, t_n+a_2h)\\\\\n", + " y_{n+1} = y_n + c_1k_1 +c_2k_2\n", + " \\end{array}\n", + "$$\n", + " \n", + " \n", + " has the coefficients,\n", + "\n", + "$$\n", + "\\begin{array}{|c|c|cc|c|} \\hline\n", + "i & a_i & {b_{ij}} & & c_i \\\\ \\hline\n", + "1 & a_1 & b_{11} & b_{12} & c_1\\\\\n", + "2 & a_2 & b_{21} & b_{22} & c_2\\\\ \\hline\n", + "{j=} & & 1 & 2 & \\\\ \\hline\n", + "\\end{array}\n", + "$$\n", + "\n", + "\n", + "\n", + "In particular, the midpoint method is given by the tableau,\n", + "\n", + "$$\n", + "\\begin{array}{|c|c|cc|c|} \\hline\n", + "i & a_i & {b_{ij}} & & c_i \\\\ \\hline\n", + "1 & 0 & 0 & 0 & 0\\\\\n", + "2 & \\frac{1}{2} & \\frac{1}{2} & 0 & 1\\\\ \\hline\n", + "{j=} & & 1 & 2 & \\\\ \\hline\n", + "\\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemTableau \n", + "\n", + "Write out the tableau for\n", + "\n", + "1. [Heun’s/Ralston method](#eq:heuns):\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{2}{3}k_1, t_n+\\frac{2}{3}h)\\\\\n", + " y_{n+1} = y_n + \\frac{1}{4}k_1 + \\frac{3}{4}k_2\n", + " \\end{array}\n", + "$$\n", + "\n", + "3. 
the fourth-order Runge-Kutta method ([eq:rk4](#eq:rk4)) (discussed further in the\n", + " next section):\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{k_1}{2}, t_n+\\frac{h}{2})\\\\\n", + " k_3 = h f(y_n+\\frac{k_2}{2}, t_n+\\frac{h}{2})\\\\\n", + " k_4 = h f(y_n+k_3, t_n+h)\\\\\n", + " y_{n+1} = y_n + \\frac{k_1}{6}+ \\frac{k_2}{3}+ \\frac{k_3}{3} + \\frac{k_4}{6}\n", + " \\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Explicit Fourth-Order Runge-Kutta Method \n", + "\n", + "\n", + "\n", + "\n", + "Explicit Runge-Kutta methods are popular as each stage can be calculated\n", + "with one function evaluation. In contrast, implicit Runge-Kutta methods\n", + "usually involves solving a non-linear system of equations in order to\n", + "evaluate the stages. As a result, explicit schemes are much less\n", + "expensive to implement than implicit schemes.\n", + "\n", + "However, there are cases in which implicit schemes are necessary and\n", + "that is in the case of *stiff* sets of equations. See\n", + "section 16.6 of Press et al. for a discussion. For these labs, we will\n", + "focus on non-stiff equations and on explicit Runge-Kutta methods.\n", + "\n", + "The higher-order Runge-Kutta methods can be derived by in manner similar\n", + "to the midpoint formula. An s-stage method is compared to a Taylor\n", + "method and the terms are matched up to the desired order.\n", + "\n", + "Methods of order $M > 4$ require $M+1$ or $M+2$ function evaluations or\n", + "stages, in the case of explicit Runge-Kutta methods. As a result,\n", + "fourth-order Runge-Kutta methods have achieved great popularity over the\n", + "years as they require only four function evaluations per step. In\n", + "particular, there is the classic fourth-order Runge-Kutta formula:\n", + "\n", + "
eq:rk4
\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{k_1}{2}, t_n+\\frac{h}{2})\\\\\n", + " k_3 = h f(y_n+\\frac{k_2}{2}, t_n+\\frac{h}{2})\\\\\n", + " k_4 = h f(y_n+k_3, t_n+h)\\\\\n", + " y_{n+1} = y_n + \\frac{k_1}{6}+ \\frac{k_2}{3}+ \\frac{k_3}{3} + \\frac{k_4}{6}\n", + " \\end{array}\n", + "$$\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemRK4\n", + " \n", + "In the cell below, compare compare solutions to the test\n", + "problem\n", + "\n", + "
eq:test
\n", + "$$\n", + "\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1\n", + "$$ \n", + "\n", + "generated with the\n", + "fourth-order Runge-Kutta method to solutions generated by the forward\n", + "Euler and midpoint methods.\n", + "\n", + "1. Based on the numerical solutions of ([eq:test](#eq:test)), which of the\n", + " three methods appears more accurate?\n", + "\n", + "2. Again determine how the error changes relative to the change in\n", + " stepsize, as the stepsize is halved." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:33:20.675166Z", + "start_time": "2022-01-27T04:33:16.987738Z" + } + }, + "outputs": [], + "source": [ + "from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41,\\\n", + " rk4ODEinter41\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.05,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "yrk=[]\n", + "y=coeff.yinitial\n", + "ye.append(coeff.yinitial)\n", + "ym.append(coeff.yinitial)\n", + "yrk.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=eulerinter41(coeff,ye[i-1],timeVec[i-1])\n", + " ye.append(ynew)\n", + " ynew=midpointinter41(coeff,ym[i-1],timeVec[i-1])\n", + " ym.append(ynew)\n", + " ynew=rk4ODEinter41(coeff,yrk[i-1],timeVec[i-1])\n", + " yrk.append(ynew)\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig=plt.figure(0)\n", + "theFig.clf()\n", + "theAx=theFig.add_subplot(111)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,ye,'r-',label='euler')\n", + "l3=theAx.plot(timeVec,ym,'g-',label='midpoint')\n", + "l4=theAx.plot(timeVec,yrk,'m-',label='rk4')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.2');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Embedded Runge-Kutta Methods: Estimate of the Truncation Error \n", + "\n", + "\n", + "\n", + "It is possible to find two methods of different order which share the\n", + "same stages $k_i$ and differ only in the way they are combined, i.e. the\n", + "coefficients $c_i$. 
For example, the original so-called embedded\n", + "Runge-Kutta scheme was discovered by Fehlberg and consisted of a\n", + "fourth-order scheme and fifth-order scheme which shared the same six\n", + "stages.\n", + "\n", + "In general, a fourth-order scheme embedded in a fifth-order scheme will\n", + "share the stages \n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+b_{21}k_1, t_n+a_2h)\\\\\n", + " \\vdots \\\\\n", + " k_6 = h f(y_n+b_{61}k_1+ ...+b_{66}k_6, t_n+a_6h)\n", + " \\end{array}\n", + "$$\n", + "\n", + " \n", + "\n", + "\n", + "\n", + "\n", + "The fifth-order formula takes the step: \n", + "\n", + "$$\n", + " y_{n+1}=y_n+c_1k_1+c_2k_2+c_3k_3+c_4k_4+c_5k_5+c_6k_6\n", + "$$ \n", + "\n", + "while the\n", + "embedded fourth-order formula takes a different step:\n", + "\n", + "\n", + "\n", + "$$\n", + " y_{n+1}^*=y_n+c^*_1k_1+c^*_2k_2+c^*_3k_3+c^*_4k_4+c^*_5k_5+c^*_6k_6\n", + "$$\n", + "\n", + "If we now take the difference between the two numerical estimates, we\n", + "get an estimate $\\Delta_{\\rm spec}$ of the truncation error for the\n", + "fourth-order method, \n", + "\n", + "\n", + "$$\n", + " \\Delta_{\\rm est}(i)=y_{n+1}(i) - y_{n+1}^{*}(i) \n", + "= \\sum^{6}_{i=1}(c_i-c_{i}^{*})k_i\n", + "$$ \n", + "\n", + "This will prove to be very useful\n", + "in the next lab where we provide the Runge-Kutta algorithms with\n", + "adaptive stepsize control. The error estimate is used as a guide to an\n", + "appropriate choice of stepsize.\n", + "\n", + "An example of an embedded Runge-Kutta scheme was found by Cash and Karp\n", + "and has the tableau: " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{array}{|c|c|cccccc|c|c|} \\hline\n", + "i & a_i & {b_{ij}} & & & & & & c_i & c^*_i \\\\ \\hline\n", + "1 & & & & & & & & \\frac{37}{378} & \\frac{2825}{27648}\\\\\n", + "2 & \\frac{1}{5} & \\frac{1}{5}& & & & & & 0 &0 \\\\\n", + "3 & \\frac{3}{10} & \\frac{3}{40}&\\frac{9}{40}& & & & &\\frac{250}{621}&\\frac{18575}{48384}\\\\\n", + "4 & \\frac{3}{5}&\\frac{3}{10}& -\\frac{9}{10}&\\frac{6}{5}& & & &\\frac{125}{594}& \\frac{13525}{55296}\\\\\n", + "5 & 1 & -\\frac{11}{54}&\\frac{5}{2}&-\\frac{70}{27}&\\frac{35}{27}& & & 0 & \\frac{277}{14336}\\\\\n", + "6 & \\frac{7}{8}& \\frac{1631}{55296}& \\frac{175}{512}&\\frac{575}{13824}& \\frac{44275}{110592}& \\frac{253}{4096}& & \\frac{512}{1771} & \\frac{1}{4}\\\\\\hline\n", + "{j=} & & 1 & 2 & 3 & 4 & 5 & 6 & & \\\\ \\hline\n", + "\\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemEmbedded\n", + "\n", + "Though the error estimate is for the embedded\n", + "fourth-order Runge-Kutta method, the fifth-order method can be used in\n", + "practice for calculating the solution, the assumption being the\n", + "fifth-order method should be at least as accurate as the fourth-order\n", + "method. In the demo below, compare solutions of the test problem\n", + "[eq:test2](#eq:test2]) \n", + "\n", + "
eq:test2
\n", + "$$\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1$$\n", + "\n", + "generated by the fifth-order method with solutions generated by the\n", + "standard fourth-order Runge-Kutta method. Which method\n", + "is more accurate? For each method, quantitatively analyse how the error decreases as you halve\n", + "the stepsizes - discuss whether this is the expected behaviour given what you know about the order of the methods?\n", + "\n", + "Optional extra part: adapt the rkckODEinter41 code to return both the 5th order (as it currently does) AND the embedded 4th order scheme. Compare the accuracy of the embedded 4th order scheme to the standard 4th order scheme. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "from matplotlib import pyplot as plt\n", + "\n", + "from numlabs.lab4.lab4_functions import initinter41,rk4ODEinter41,rkckODEinter41\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.2,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "yrk=[]\n", + "yrkck=[]\n", + "y1=coeff.yinitial\n", + "y2=coeff.yinitial\n", + "yrk.append(coeff.yinitial)\n", + "yrkck.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=rk4ODEinter41(coeff,y1,timeVec[i-1])\n", + " yrk.append(ynew)\n", + " y1=ynew\n", + " ynew=rkckODEinter41(coeff,y2,timeVec[i-1])\n", + " yrkck.append(ynew)\n", + " y2=ynew\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig,theAx=plt.subplots(1,1)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,yrkck,'g-',label='rkck')\n", + "l3=theAx.plot(timeVec,yrk,'m-',label='rk')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.3');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Python: moving from a notebook to a library\n", + "\n", + "### Managing problem configurations\n", + "\n", + "So far we've hardcoded our initialVars file into a cell. We need a strategy for saving\n", + "this information into a file that we can keep track of using git, and modify for\n", + "various runs. In python the fundamental data type is the dictionary. It's very\n", + "flexible, but that comes at a cost -- there are other data structures that are better\n", + "suited to storing this type of information.\n", + "\n", + "##### Mutable vs. immutable data types\n", + "\n", + "Python dictionaries and lists are **mutable**, which means they can be modified after they\n", + "are created. Python tuples, on the other hand, are **immutable** -- there is no way of changing\n", + "them without creating a copy. Why does this matter? One reason is efficiency and safety, an\n", + "immutable object is easier to reason about. Another reason is that immutable objects are **hashable**,\n", + "that is, they can be turned into a unique string that can be guaranteed to represent that exact\n", + "instance of the datatype. Hashable data structures can be used as dictionary keys, mutable\n", + "data structures can't. 
Here's an illustration -- this cell works:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:38:47.145511Z", + "start_time": "2022-01-27T04:38:47.141507Z" + } + }, + "outputs": [], + "source": [ + "test_dict=dict()\n", + "the_key = (0,1,2,3) # this is a tuple, i.e. immutable - it uses curved parentheses ()\n", + "test_dict[the_key]=5\n", + "print(test_dict)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "this cell fails:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:38:45.037053Z", + "start_time": "2022-01-27T04:38:45.016737Z" + } + }, + "outputs": [], + "source": [ + "import traceback, sys\n", + "test_dict=dict()\n", + "the_key = [0,1,2,3] # this is a list - it uses square parentheses []\n", + "try:\n", + " test_dict[the_key]=5\n", + "except TypeError as e:\n", + " tb = sys.exc_info()\n", + " traceback.print_exception(*tb)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Named tuples\n", + "\n", + "One particular tuple flavor that bridges the gap between tuples and dictionaries\n", + "is the [namedtuple](https://docs.python.org/3/library/collections.html#collections.namedtuple).\n", + "It has the ability to look up values by attribute instead of numerical index (unlike\n", + "a tuple), but it's immutable and so can be used as a dictionary key. The cell\n", + "below show how to convert from a dictionary to a namedtuple for our case:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:44:58.906667Z", + "start_time": "2022-01-27T04:44:58.893321Z" + } + }, + "outputs": [], + "source": [ + "from collections import namedtuple\n", + "initialDict={'yinitial': 1,'t_beg':0.,'t_end':1.,\n", + " 'dt':0.2,'c1':-1.,'c2':1.,'c3':1.}\n", + "inittup=namedtuple('inittup','dt c1 c2 c3 t_beg t_end yinitial')\n", + "initialCond=inittup(**initialDict)\n", + "print(f\"values are {initialCond.c1} and {initialCond.yinitial}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Comment on the cell above:\n", + "\n", + "1) `inittup=namedtuple('inittup','dt c1 c2 c3 t_beg t_end yinitial')`\n", + " creats a new data type with a type name (inittup) and properties\n", + " (the attributes we wlll need like dt, c1 etc.)\n", + " \n", + "2) `initialCond=inittup(**initialDict)`\n", + " uses \"keyword expansion\" via the \"doublesplat\" operator `**` to expand\n", + " the initialDict into a set of key=value pairs for the inittup constructor\n", + " which makes an instance of our new datatype called initialCond\n", + " \n", + "3) we access these readonly members of the instance using attributes like this:\n", + " `newc1 = initialCond.c1`\n", + "\n", + " \n", + "Note the other big benefit for namedtuples -- \"initialCond.c1\" is self-documenting,\n", + "you don't have to explain that the tuple value initialCond[3] holds c1,\n", + "and you never have to worry about changes to the order of the tuple changing the \n", + "results of your code." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "### Saving named tuples to a file\n", + "\n", + "One drawback to namedtuples is that there's no one annointed way to **serialize** them\n", + "i.e. we are in charge of trying to figure out how to write our namedtuple out\n", + "to a file for future use. 
Contrast this with lists, strings, and scalar numbers and\n", + "dictionaries, which all have a builtin **json** representation in text form.\n", + "\n", + "So here's how to turn our named tuple back into a dictionary:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:45:01.144390Z", + "start_time": "2022-01-27T04:45:01.138679Z" + } + }, + "outputs": [], + "source": [ + "#\n", + "# make the named tuple a dictionary\n", + "#\n", + "initialDict = initialCond._asdict()\n", + "print(initialDict)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Why does `_asdict` start with an underscore? It's to keep the fundamental\n", + "methods and attributes of the namedtuple class separate from the attributes\n", + "we added when we created the new `inittup` class. For more information, see\n", + "the [collections docs](https://docs.python.org/3/library/collections.html#module-collections)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:45:02.889690Z", + "start_time": "2022-01-27T04:45:02.883888Z" + } + }, + "outputs": [], + "source": [ + "outputDict = dict(initialconds = initialDict)\n", + "import json\n", + "outputDict['history'] = 'written Jan. 28, 2020'\n", + "outputDict['plot_title'] = 'simple damped oscillator run 1'\n", + "with open('run1.json', 'w') as jsonout:\n", + " json.dump(outputDict,jsonout,indent=4)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After running this cell, you should see the following [json output](https://en.wikipedia.org/wiki/JSON) in the file `run1.json`:\n", + "\n", + "```\n", + "{\n", + " \"initialconds\": {\n", + " \"dt\": 0.2,\n", + " \"c1\": -1.0,\n", + " \"c2\": 1.0,\n", + " \"c3\": 1.0,\n", + " \"t_beg\": 0.0,\n", + " \"t_end\": 1.0,\n", + " \"yinitial\": 1\n", + " },\n", + " \"history\": \"written Jan. 26, 2022\",\n", + " \"plot_title\": \"simple damped oscillator run 1\"\n", + "}\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Reading a json file back into python\n", + "\n", + "To recover your conditions read the file back in as a dictionary:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:48:15.111091Z", + "start_time": "2022-01-27T04:48:15.087970Z" + } + }, + "outputs": [], + "source": [ + "with open(\"run1.json\",'r') as jsonin:\n", + " inputDict = json.load(jsonin)\n", + "initial_conds = inittup(**inputDict['initialconds'])\n", + "print(f\"values are {initial_conds.c1} and {initial_conds.yinitial}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Passing a derivative function to an integrator\n", + "\n", + "In python, functions are first class objects, which means you can pass them around like any\n", + "other datatype, no need to get function handles as in matlab or Fortran. The integrators\n", + "in [do_example.py](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab4/example/do_example.py)\n", + "have been written to accept a derivative function of the form:\n", + "\n", + "```python\n", + " def derivs4(coeff, y):\n", + "```\n", + "\n", + "i.e. as long as the derivative can be written in terms of coefficients\n", + "and the previous value of y, the integrator will move the ode ahead one\n", + "timestep. 
If we wanted coefficients that were a function of time, we would\n", + "need to also include those functions the coeff namedtuple, and add keep track of the\n", + "timestep through the integration.\n", + "\n", + "Here's an example using forward euler to integrate the harmonic oscillator\n", + "\n", + "Note that you can also run this from the terminal by doing:\n", + "\n", + "```\n", + "cd numlabs/lab4/example\n", + "python do_example.py\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T05:01:11.730184Z", + "start_time": "2022-01-27T05:01:11.592970Z" + } + }, + "outputs": [], + "source": [ + "import json\n", + "from numlabs.lab4.example.do_example import get_init, euler4\n", + "#\n", + "# specify the derivs function\n", + "#\n", + "def derivs(coeff, y):\n", + " f=np.empty_like(y) #create a 2 element vector to hold the derivative\n", + " f[0]=y[1]\n", + " f[1]= -1.*coeff.c1*y[1] - coeff.c2*y[0]\n", + " return f\n", + "#\n", + "# first make sure we have an input file in this directory\n", + "#\n", + "\n", + "coeff=get_init()\n", + "\n", + "#\n", + "# integrate and save the result in savedata\n", + "#\n", + "time=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "y=coeff.yinitial\n", + "nsteps=len(time)\n", + "savedata=np.empty([nsteps],np.float64)\n", + "\n", + "for i in range(nsteps):\n", + " y=euler4(coeff,y,derivs)\n", + " savedata[i]=y[0]\n", + "\n", + "theFig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "theAx.plot(time,savedata,'o-')\n", + "theAx.set_title(coeff.plot_title)\n", + "theAx.set_xlabel('time (seconds)')\n", + "theAx.set_ylabel('y0');\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemCodingA\n", + "\n", + "As set up above, do_example.py\n", + "solves the damped, harmonic oscillator with the (unstable) forward Euler method.\n", + "\n", + "1. Write a new routine that solves the harmonic oscilator using [Heun’s method](#eq:heuns)\n", + " along the lines of the routines in [lab4_functions.py](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab4/lab4_functions.py)\n", + "\n", + "\n", + "### ProblemCodingB\n", + "\n", + "1. Now solve the following test equation by both the midpoint and\n", + " Heun’s method and compare. \n", + " \n", + " $$f(y,t) = t - y + 1.0$$ \n", + " \n", + " Choose two sets of initial conditions and determine if there is any difference\n", + " between the two methods when applied to this problem. Should there be? Explain by\n", + " analyzing the steps that each method is taking.\n", + " \n", + "\n", + "### ProblemCodingC\n", + "\n", + "1. Solve the Newtonian cooling equation of lab 1 by any of the above\n", + " methods. \n", + "\n", + "2. Add cells that do this and also generate some plots, showing your along with the parameter values and\n", + " initial conditions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes \n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "### Note on the Derivation of the Second-Order Runge-Kutta Methods\n", + "\n", + "A general s-stage Runge-Kutta method can be written as,\n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_i = h f(y_n+ {\\displaystyle \\sum_{j=1}^{s} } b_{ij}k_j, t_n+a_ih), \n", + " \\;\\;\\; i=1,..., s\\\\\n", + " y_{n+1} = y_n + {\\displaystyle \\sum_{j=1}^{s}} c_jk_j \n", + "\\end{array}\n", + "$$ \n", + " \n", + " where\n", + "\n", + "${\\displaystyle \\sum_{j=1}^{s} } b_{ij} = a_i$." 
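To make the general formula above concrete, here is a minimal sketch (illustrative code, not part of the `numlabs` package) of one *explicit* Runge-Kutta step driven by tableau coefficients; for an explicit method each stage can only use the stages already computed, so the inner sum runs over $j < i$. The two-stage schemes discussed next are the $s = 2$ case.

```python
def rk_step(f, y, t, h, a, b, c):
    """One explicit Runge-Kutta step of size h.

    a[i] are the stage times, b[i][j] the stage weights and c[i] the
    combination weights, following the notation of the tableau above.
    """
    s = len(c)
    k = []
    for i in range(s):
        # explicit method: each stage only uses the k_j already computed
        y_stage = y + sum(b[i][j] * k[j] for j in range(i))
        k.append(h * f(y_stage, t + a[i] * h))
    return y + sum(c[i] * k[i] for i in range(s))

# the midpoint method written as a 2-stage tableau: a = (0, 1/2), b21 = 1/2, c = (0, 1)
a, b, c = [0.0, 0.5], [[], [0.5]], [0.0, 1.0]
f = lambda y, t: -y + t + 1.0          # the test equation used earlier in this lab
print(rk_step(f, 1.0, 0.0, 0.05, a, b, c))
```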
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "In particular, an *explicit* 2-stage Runge-Kutta method can be written as, \n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+ak_1, t_n+ah)\\\\\n", + " y_{n+1} = y_n + c_1k_1 +c_2k_2\n", + " \\end{array}\n", + "$$\n", + "\n", + "where \n", + " \n", + "$b_{21} = a_2 \\equiv a$. \n", + " \n", + "So we want to know what values of $a$, $c_1$ and $c_2$ leads to a second-order method, i.e. a method with an error proportional to $h^3$.\n", + "\n", + "To find out, we compare the method against a second-order Taylor expansion,\n", + "\n", + "\n", + "\n", + "$$\n", + " y(t_n+h) = y(t_n) + hy^\\prime(t_n) + \\frac{h^2}{2}y^{\\prime \\prime}(t_n)\n", + " + O(h^3)\n", + "$$\n", + "\n", + "So for the $y_{n+1}$ to be second-order accurate, it must match the Taylor method. In other words, $c_1k_1 +c_2k_2$ must match $hy^\\prime(t_n) + \\frac{h^2}{2}y^{\\prime \\prime}$. To do this, we need to express $k_1$ and $k_2$ in terms of derivatives of $y$ at time $t_n$.\n", + "\n", + "First note, $k_1 = hf(y_n, t_n) = hy^\\prime(t_n)$.\n", + "\n", + "Next, we can expand $k_2$ about $(y_n.t_n)$, \n", + "\n", + "\n", + "\n", + "$$\n", + "k_2 = hf(y_n+ak_1, t_n+ah) = h(f + haf_t + haf_yy^\\prime + O(h^2))\n", + "$$\n", + "\n", + "\n", + "\n", + "However, we can write $y^{\\prime \\prime}$ as, \n", + "\n", + "$$\n", + " y^{\\prime \\prime} = \\frac{df}{dt} = f_t + f_yy^\\prime\n", + "$$ \n", + "This allows us\n", + "to rewrite $k_2$ in terms of $y^{\\prime \\prime}$,\n", + "\n", + "$$k_2 = h(y^\\prime + hay^{\\prime \\prime}+ O(h^2))$$\n", + "\n", + "Substituting these expressions for $k_i$ back into the Runge-Kutta formula gives us,\n", + "$$y_{n+1} = y_n + c_1hy^\\prime +c_2h(y^\\prime + hay^{\\prime \\prime})$$\n", + "or \n", + "$$y_{n+1} = y_n + h(c_1 +c_2)y^\\prime + h^2(c_2a)y^{\\prime \\prime}$$\n", + "\n", + "If we compare this against the second-order Taylor method, we see that we need, \n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " c_1 + c_2 = 1\\\\\n", + " a c_2 = \\frac{1}{2}\n", + " \\end{array}\n", + "$$\n", + " \n", + "for the Runge-Kutta method to be\n", + "second-order.\n", + "\n", + "
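Before looking at particular choices of $a$, $c_1$ and $c_2$, here is a quick numerical check of these order conditions (a sketch using the test equation $dy/dt = -y + t + 1$ from earlier; the helper function is illustrative and not part of `numlabs`). Any coefficients satisfying the two conditions should show the global error dropping by roughly a factor of four each time the stepsize is halved; the three choices in the list are exactly the ones discussed in the rest of this section.

```python
import numpy as np

def integrate_rk2(a, c1, c2, h, t_end=1.0, y0=1.0):
    """Integrate dy/dt = -y + t + 1, y(0) = 1 with a generic explicit 2-stage RK method."""
    f = lambda y, t: -y + t + 1.0
    y, t = y0, 0.0
    for _ in range(int(round(t_end / h))):
        k1 = h * f(y, t)
        k2 = h * f(y + a * k1, t + a * h)
        y = y + c1 * k1 + c2 * k2
        t += h
    return y

exact = 1.0 + np.exp(-1.0)                   # analytic solution y(t) = t + exp(-t) at t = 1
for a, c1, c2 in [(0.5, 0.0, 1.0),           # midpoint
                  (1.0, 0.5, 0.5),           # modified Euler
                  (2.0 / 3.0, 0.25, 0.75)]:  # Heun/Ralston
    ratio = abs(integrate_rk2(a, c1, c2, 0.1) - exact) / abs(integrate_rk2(a, c1, c2, 0.05) - exact)
    print(f"a = {a:.3f}: error ratio when h is halved = {ratio:.2f} (close to 4 for second order)")
```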
\n", + "If we choose $a = 1/2$, this implies $c_2 = 1$ and $c_1=0$. This gives us the midpoint method.\n", + "\n", + "However, note that other choices are possible. In fact, we have a *one-parameter family* of second-order methods. For example if we choose, $a=1$ and $c_1=c_2=\\frac{1}{2}$, we get the *modified Euler method*,\n", + "\n", + "\n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+k_1, t_n+h)\\\\\n", + " y_{n+1} = y_n + \\frac{1}{2}(k_1 +k_2)\n", + " \\end{array}\n", + "$$\n", + " \n", + "while the choice\n", + "$a=\\frac{2}{3}$, $c_1=\\frac{1}{4}$ and $c_2=\\frac{3}{4}$, gives us\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
Heun's Method (also referred to as Ralston's method)
\n", + " Note: you may find a different definition of Heun's Method depending on the textbook you are reading)\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{2}{3}k_1, t_n+\\frac{2}{3}h)\\\\\n", + " y_{n+1} = y_n + \\frac{1}{4}k_1 + \\frac{3}{4}k_2\n", + " \\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 3 + }, + "source": [ + "## Glossary \n", + "\n", + "\n", + "- **driver** A routine that calls the other routines to solve the\n", + " problem.\n", + "\n", + "- **embedded Runge-Kutta methods**: Two Runge-Kutta\n", + " methods that share the same stages. The difference between the solutions\n", + " give an estimate of the local truncation error.\n", + "\n", + "- **explicit** In an explicit numerical scheme, the calculation of the solution at a given\n", + " step or stage does not depend on the value of the solution at that step\n", + " or on a later step or stage.\n", + " \n", + "- **fourth-order Runge-Kutta method** A popular fourth-order, four-stage, explicit Runge-Kutta\n", + " method.\n", + "\n", + "- **implicit**: In an implicit numerical scheme, the\n", + " calculation of the solution at a given step or stage does depend on the\n", + " value of the solution at that step or on a later step or stage. Such\n", + " methods are usually more expensive than implicit schemes but are better\n", + " for handling stiff ODEs.\n", + "\n", + "- **midpoint method** : A two-stage,\n", + " second-order Runge-Kutta method.\n", + "\n", + "- **stages**: The approximations\n", + " to the derivative made in a Runge-Kutta method between the start and end\n", + " of a step.\n", + "\n", + "- **tableau** The tableau for a Runge-Kutta method\n", + " organizes the coefficients for the method in tabular form.\n", + "\n" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": true, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "165px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab5/01-lab5.ipynb.txt b/_sources/notebooks/lab5/01-lab5.ipynb.txt new file mode 100644 index 0000000..f8f496d --- /dev/null +++ b/_sources/notebooks/lab5/01-lab5.ipynb.txt @@ -0,0 +1,2251 @@ +{ + "cells": [ + { + 
"cell_type": "markdown", + "metadata": {}, + "source": [ + "# Lab 5: Daisyworld" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems\n", + "\n", + "\n", + "[Problem Constant Growth](#prob_constant): Daisyworld with a constant growth rate\n", + "\n", + "[Problem Coupling](#prob_coupling): Daisyworld of neutral daisies coupled to\n", + "the temperature\n", + "\n", + "[Problem Conduction](#prob_conduction): Daisyworld steady states and the effect\n", + "of the conduction parameter R\n", + "\n", + "[Problem Initial](#prob_initial): Daisyworld steady states and initial\n", + "conditions\n", + "\n", + "[Problem Temperature](#prob_temperature): Add temperature retrieval code\n", + "\n", + "[Problem Estimate](#prob_estimate): Compare the error estimate to the true\n", + "error\n", + "\n", + "[Problem Adaptive](#prob_adaptive): Adaptive Timestep Code\n", + "\n", + "[Problem Predators](#prob_predator): Adding predators to Daisyworld\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "## Assignment\n", + "\n", + "See canvas site for which problems you should hand in. Your answers should all be within a jupyter notebook. Use subheadings to organise your notebook by question, and markdown cells to describe what you've done and to answer the questions \n", + "You will be asked to upload:\n", + "1. a pdf of your jupyter notebook answering all questions\n", + "2. the jupyter notebook itself (ipynb file) - if you want to import your own module code, include that with the notebook in a zipfile\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Objectives\n", + "\n", + "In this lab, you will use the tools you have learnt in the previous labs to explore a simple environmental model,\n", + "*Daisyworld*, with the help of a Runge-Kutta method with\n", + "adaptive stepsize control.\n", + "\n", + "The goal is for you to gain some experience using a Runge-Kutta\n", + "integrator and to see the advantages of applying error control to the\n", + "algorithm. As well, you will discover the the possible insights one can\n", + "get from studying numerical solutions of a physical model.\n", + "\n", + "In particular you will be able to:\n", + "\n", + "- explain how the daisies affect the climate in the daisy world model\n", + "\n", + "- define an adaptive step-size model\n", + "\n", + "- explain the reasons why an adaptive step-size model may be faster\n", + " for a given accuracy\n", + "\n", + "- explain why white daisies (alone) can survive at a higher solar\n", + " constant than black daisies\n", + "\n", + "- define hysteresis" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Readings\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. 
However, if you would like additional background on any of\n", + "the following topics, then refer to the sections indicated below:\n", + "\n", + "- **Daisy World:**\n", + "\n", + " - The original article by [Watson and Lovelock, 1983](http://ezproxy.library.ubc.ca/login?url=http://onlinelibrary.wiley.com/enhanced/doi/10.1111/j.1600-0889.1983.tb00031.x) which derive the equations used here.\n", + "\n", + " - A 2008 Reviews of Geophysics article by [Wood et al.](http://ezproxy.library.ubc.ca/login?url=http://doi.wiley.com/10.1029/2006RG000217) with more recent developments (species competition, etc.)\n", + "\n", + "- **Runge-Kutta Methods with Adaptive Stepsize Control:**\n", + "\n", + " - Newman, Section 8.4\n", + "\n", + " - Press, et al. Section 16.2: these are equations we implemented in Python,\n", + " [scanned pdf here](adapt_ode.pdf)\n", + "\n", + " - Burden & Faires Section 5.5" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Introduction\n", + "\n", + "It is obvious that life on earth is highly sensitive to the planet’s\n", + "atmospheric and climatic conditions. What is less obvious, but of great\n", + "interest, is the role biology plays in the sensitivity of the climate.\n", + "This is dramatically illustrated by the concern over the possible\n", + "contribution to global warming by the loss of the rain forests in\n", + "Brazil, and shifts from forests to croplands over many regions of the Earth. \n", + "\n", + "The fact that each may affect the other implies that the climate and\n", + "life on earth are interlocked in a complex series of feedbacks, i.e. the\n", + "climate affects the biosphere which, when altered, then changes the\n", + "climate and so on. A fascinating question arises as to whether or not\n", + "this would eventually lead to a stable climate. This scenerio is\n", + "exploited to its fullest in the *Gaia* hypothesis which\n", + "postulates that the biosphere, atmosphere, ocean and land are all part\n", + "of some totality, dubbed *Gaia*, which is essentially an\n", + "elaborate feedback system which optimizes the conditions of life here on\n", + "earth.\n", + "\n", + "It would be hopeless to attempt to mathematically model such a large,\n", + "complex system. What can be done instead is to construct a ’toy model’\n", + "of the system in which much of the complexity has been stripped away and\n", + "only some of the relevant characteristics retained. The resulting system\n", + "will then be tractable but, unfortunately, may bear little connection\n", + "with the original physical system.\n", + "\n", + "Daisyworld is such a model. Life on Daisyworld has been reduced to just\n", + "two species of daisies of different colors. The only environmental\n", + "condition that affects the daisy growth rate is temperature. The\n", + "temperature in turn is modified by the varying amounts of radiation\n", + "absorbed by the daisies.\n", + "\n", + "Daisyworld is obviously a gross simplification of the real earth.\n", + "However, it does retain the central feature of interest: a feedback loop\n", + "between the climate and life on the planet. Since the equations\n", + "governing the system will be correspondingly simplified, it will allow\n", + "us to investigate under what conditions, if any, can equilibrium be\n", + "reached. The hope is that this will then gain us some insight into how\n", + "life on the real earth may lead to a stable (or unstable) climate." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## The Daisyworld Model\n", + "\n", + "Daisyworld is populated by two types of daisies, one darker than bare ground and the other lighter than bare ground. As with life on earth, the daisies will not grow at extreme temperatures and will have optimum growth at moderate temperatures.\n", + "\n", + "The darker, ’black’ daisies absorb more radiation than the lighter, ’white’ daisies. If the black daisy population grows and spreads over more area, an increased amount of solar energy will be absorbed, which will ultimately raise the temperature of the planet. Conversely, an increase in the white daisy population will result in more radiation being reflected away, lowering the planet’s temperature.\n", + "\n", + "The question to be answered is:\n", + "\n", + "**Under what conditions, if any, will the daisy population and temperature reach equilibrium?**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "### The Daisy Population\n", + "\n", + "The daisy population will be modeled along the lines of standard\n", + "population ecology models where the net growth depends upon the current\n", + "population. For example, the simplest model assumes the rate of growth\n", + "is proportional to the population, i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + "\\frac{dA_w}{dt} = k_w A_w\n", + "$$\n", + "\n", + "$$\n", + "\\frac{dA_b}{dt} = k_b A_b\n", + "$$\n", + "\n", + "where $A_w$\n", + "and $A_b$ are fractions of the total planetary area covered by the white\n", + "and black daisies, respectively, and $k_i$, $i=w,b$, are the white and\n", + "black daisy growth rates per unit time, respectively. If assume the the\n", + "growth rates $k_i$ are (positive) constants we would have exponential\n", + "growth like the bunny rabbits of lore.\n", + "\n", + "We can make the model more realistic by letting the daisy birthrate\n", + "depend on the amount of available land, i.e. $$k_i = \\beta_i x$$ where\n", + "$\\beta_i$ are the white and black daisy growth rates per unit time and\n", + "area, respectively, and $x$ is the fractional area of free fertile\n", + "ground not colonized by either species. We can also add a daisy death\n", + "rate per unit time, $\\chi$, to get\n", + "\n", + "\n", + "$\\textbf{eq: constantgrowth}$\n", + "$$\n", + "\\frac{dA_w}{dt} = A_w ( \\beta_w x - \\chi)\n", + "$$\n", + "\n", + "\n", + "$$\n", + "\\frac{dA_b}{dt} = A_b ( \\beta_b x - \\chi)\n", + "$$\n", + "\n", + "However, even these small modifications are non-trivial mathematically\n", + "as the available fertile land is given by,\n", + "$$\n", + " x = 1 - A_w - A_b\n", + "$$\n", + "\n", + "(assuming all the land mass is fertile) which\n", + "makes the equations non-linear." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
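Before turning to the first problem, here is a minimal stand-alone sketch of these constant growth-rate equations integrated with `scipy.integrate.solve_ivp`. The death rate `chi` below is a placeholder value; the growth rates match the 0.7 used in the demo's `derivs5` routine and the initial fractions match the `fixed_growth.yaml` snippet shown later, but the lab's own demo below uses the course's `rkckODE5` integrator and reads its parameters from the yaml file.

```python
import matplotlib.pyplot as plt
from scipy.integrate import solve_ivp

beta_w, beta_b = 0.7, 0.7   # growth rates, as in the demo's derivs5 routine
chi = 0.3                   # death rate -- placeholder, the real value lives in the yaml file

def constant_growth(t, y):
    """y[0] = white daisy fraction, y[1] = black daisy fraction."""
    x = 1.0 - y[0] - y[1]   # fraction of bare fertile ground
    return [y[0] * (beta_w * x - chi),
            y[1] * (beta_b * x - chi)]

sol = solve_ivp(constant_growth, (0.0, 50.0), [0.2, 0.7], max_step=0.5)
plt.plot(sol.t, sol.y[0], label='white')
plt.plot(sol.t, sol.y[1], '--', label='black')
plt.xlabel('time')
plt.ylabel('fractional coverage')
plt.legend();
```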
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem constant growth\n", + "\n", + "\n", + "\n", + "Note that though the daisy growth rate per unit time depends on the amount of available fertile land, it is not\n", + "otherwise coupled to the environment (i.e. $\\beta_i$ is not a function of temperature. Making the growth a function of bare ground, however, keeps the daisy population bounded and the daisy population will eventually reach some steady state. The next python cell has a script that runs a fixed timestep Runge Kutte routine that calculates area coverage of white and black daisies for fixed growth rates $\\beta_w$ and $\\beta_b$. Try changing these growth rates (specified in the derivs5 routine) and the initial white and black concentrations (specified in the fixed_growth.yaml file\n", + "discussed next). To hand in: plot graphs to illustrate how these changes have affected the fractional coverage of black and white daisies over time compared to the original. Comment on the changes that you see.\n", + "\n", + "1. For a given set of growth rates try various (non-zero) initial daisy\n", + " populations.\n", + "\n", + "2. For a given set of initial conditions try various growth rates. In\n", + " particular, try rates that are both greater than and less than the\n", + " death rate.\n", + "\n", + "3. Can you determine when non-zero steady states are achieved? Explain. Connect what you see here with the discussion of hysteresis towards the end of this lab - what determines which steady state is reached? \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "#### Running the constant growth rate demo\n", + "\n", + "In the appendix we discuss the design of the integrator class and the adaptive Runge-Kutta routine. For this demo, we need to be able to change variables in the configuration file. For this assignment problem you are asked to:\n", + "\n", + "a. Change the inital white and black daisy concentrations by changing these lines in the [fixed_growth.yaml](https://github.com/rhwhite/numeric_2022/blob/main/notebooks/lab5/fixed_growth.yaml#L13-L15) input file (you can find this file in this lab directory):\n", + "\n", + " ```yaml\n", + "\n", + " initvars:\n", + " whiteconc: 0.2\n", + " blackconc: 0.7\n", + " ```\n", + "\n", + "b. Change the white and black daisy growth rates by editing the variables beta_w and beta_b in the derivs5 routine in the next cell\n", + "\n", + "The Integrator class contains two different timeloops, both of which use embedded Runge Kutta Cash Carp\n", + "code given in Lab 4 and coded here as [rkckODE5](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L70). The simplest way to loop through the timesteps is just to call the integrator with a specified set of times. This is done in [timeloop5fixed](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L244). Below we will describe how to use the error extimates returned by rkckODE5 to tune the size of the timesteps, which is done in [timeloop5Err](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L244)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.493997Z", + "start_time": "2022-02-17T03:40:22.213290Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "#\n", + "# 4.1 integrate constant growth rates with fixed timesteps\n", + "#\n", + "import context\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "from collections import namedtuple\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "\n", + "\n", + "class Integ51(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_white chi S0 L albedo_black R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'whiteconc blackconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.whiteconc, self.initvars.blackconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " #\n", + " # Construct an Integ51 class by inheriting first intializing\n", + " # the parent Integrator class (called super). Then do the extra\n", + " # initialization in the set_yint function\n", + " #\n", + " def __init__(self, coeffFileName):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"y[0]=fraction white daisies\n", + " y[1]=fraction black daisies\n", + "\n", + " Constant growty rates for white\n", + " and black daisies beta_w and beta_b\n", + "\n", + " returns dy/dt\n", + " \"\"\"\n", + " user = self.uservars\n", + " #\n", + " # bare ground\n", + " #\n", + " x = 1.0 - y[0] - y[1]\n", + "\n", + " # growth rates don't depend on temperature\n", + " beta_b = 0.7 # growth rate for black daisies\n", + " beta_w = 0.7 # growth rate for white daisies\n", + "\n", + " # create a 1 x 2 element vector to hold the derivitive\n", + " f = np.empty([self.nvars], 'float')\n", + " f[0] = y[0] * (beta_w * x - user.chi)\n", + " f[1] = y[1] * (beta_b * x - user.chi)\n", + " return f\n", + "\n", + "\n", + "theSolver = Integ51('fixed_growth.yaml')\n", + "timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + "\n", + "plt.close('all')\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, yVals)\n", + "theLines[0].set_marker('+')\n", + "theLines[1].set_linestyle('--')\n", + "theLines[1].set_color('k')\n", + "theLines[1].set_marker('*')\n", + "theAx.set_title('lab 5 interactive 1 constant growth rate')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "theAx.legend(theLines, ('white daisies', 'black daisies'), loc='best')\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, errorList)\n", + "theLines[0].set_marker('+')\n", + "theLines[1].set_linestyle('--')\n", + "theLines[1].set_color('k')\n", + "theLines[1].set_marker('*')\n", + "theAx.set_title('lab 5 interactive 1 errors')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('error')\n", + "out = theAx.legend(theLines, ('white errors', 'black errors'), loc='best')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### The Daisy Growth Rate - Coupling to the Environment\n", + "\n", + "We now want to couple the Daisy growth rate to the climate, which we do by making the growth 
rate a function of the local temperature $T_i$,\n", + "$$\\beta_i = \\beta_i(T_i)$$\n", + "The growth rate should drop to zero at extreme temperatures and be optimal at moderate temperatures. In Daisyworld this means the daisy population ceases to grow if the temperature drops below $5^o$C or goes above $40^o $C. The simplest model for the growth rate would then be parabolic function of temperature, peaking at $22.5^o$C:\n", + "\n", + "\n", + "$$\\beta_i = 1.0 - 0.003265(295.5 K - T_i)^2$$\n", + "where the $i$ subscript denotes the type of daisy: grey (i=y), white (i=w) or black (i=b). (We're reserving $\\alpha_g$ for the bare ground albedo)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before specifying the local temperature, and its dependence on the daisy\n", + "population, first consider the emission temperature $T_e$, which is the\n", + "mean temperature of the planet,\n", + "\n", + "\n", + "\n", + "$$ T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)$$\n", + "\n", + "where $S_0$ is a solar\n", + "flux density constant, $L$ is the fraction of $S_0$ received at\n", + "Daisyworld, and $\\alpha_p$ is the planetary albedo. The greater the\n", + "planetary albedo $\\alpha_p$, i.e. the more solar radiation the planet\n", + "reflects, the lower the emission temperature.\n", + "\n", + "**Mathematical note**: The emission temperature is derived on the assumption that the planet is\n", + "in global energy balance and is behaving as a blackbody radiator. See\n", + "the appendix for more information." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-05T01:50:27.578068Z", + "start_time": "2022-02-05T01:50:27.573440Z" + } + }, + "source": [ + "
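As a quick numerical illustration of the emission temperature formula (the numbers here are Earth-like placeholders, not the values set in the lab's yaml files):

```python
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1368.0            # illustrative solar flux density constant, W m^-2
L = 1.0                # fraction of S0 received
albedo_p = 0.30        # illustrative planetary albedo

Te = (L * S0 * (1.0 - albedo_p) / (4.0 * sigma)) ** 0.25
print(f"emission temperature = {Te:.1f} K")   # roughly 255 K for these numbers
```

Raising the planetary albedo lowers $T_e$; this is the lever the daisy populations pull on in the feedback loop developed below.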
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Coupling\n", + "\n", + "Consider daisies with the same albedo as the planet, i.e. ’grey’ or neutral daisies, as specified in derivs5 routine below.\n", + "\n", + "1. For the current value of L (0.2) in the file coupling.yaml, the final daisy steady state is zero. Why is it zero?\n", + "\n", + "2. Find a value of L which leads to a non-zero steady state.\n", + "\n", + "3. What happens to the emission temperature as L is varied? Make a plot of $L$ vs. $T_E$ for 10-15 values of $L$. To do this, I overrode the value of L from the init file by passing a new value into the IntegCoupling constructor (see [Appendix A](#sec_override)). This allowed me to put\n", + "\n", + " ```python\n", + " theSolver = IntegCoupling(\"coupling.yaml\",newL)\n", + " timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + " ```\n", + "\n", + " inside a loop that varied the L value and saved the steady state concentration\n", + " for plotting\n", + "\n", + "After reading the the next section on the local temperature,\n", + "\n", + "4. Do you see any difference between the daisy temperature and emission temperature? Plot both and explain. (Hint: I modified derivs5 to save these variables to self so I could compare their values at the end of the simulation. You could also override timeloop5fixed to do the same thing at each timestep.)\n", + "\n", + "5. How (i.e. through what mechanism) does the makeup of the global daisy population affect the local temperature?\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.504206Z", + "start_time": "2022-02-17T03:40:23.496511Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# define functions\n", + "import matplotlib.pyplot as plt\n", + "\n", + "\n", + "class IntegCoupling(Integrator):\n", + " \"\"\"rewrite the init and derivs5 methods to\n", + " work with a single (grey) daisy\n", + " \"\"\"\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_grey chi S0 L R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'greyconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array([self.initvars.greyconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeffFileName):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"\n", + " Make the growth rate depend on the ground temperature\n", + " using the quadratic function of temperature\n", + "\n", + " y[0]=fraction grey daisies\n", + " t = time\n", + " returns f[0] = dy/dt\n", + " \"\"\"\n", + " sigma = 5.67e-8 # Stefan Boltzman constant W/m^2/K^4\n", + " user = self.uservars\n", + " x = 1.0 - y[0]\n", + " albedo_p = x * user.albedo_ground + y[0] * user.albedo_grey\n", + " Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma\n", + " eta = user.R * user.L * user.S0 / (4.0 * sigma)\n", + " temp_y = (eta * (albedo_p - user.albedo_grey) + Te_4)**0.25\n", + " if (temp_y >= 277.5 and temp_y <= 312.5):\n", + " beta_y = 1.0 - 0.003265 * (295.0 - temp_y)**2.0\n", + " else:\n", + " beta_y = 0.0\n", + "\n", + " # create a 1 x 1 element vector to hold the derivative\n", + " f 
= np.empty([self.nvars], np.float64)\n", + " f[0] = y[0] * (beta_y * x - user.chi)\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.707043Z", + "start_time": "2022-02-17T03:40:23.506823Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot for grey daisies\n", + "import matplotlib.pyplot as plt\n", + "\n", + "theSolver = IntegCoupling('coupling.yaml')\n", + "timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, yVals)\n", + "theAx.set_title('lab 5: interactive 2 Coupling with grey daisies')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "out = theAx.legend(theLines, ('grey daisies', ), loc='best')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## The Local Temperature - Dependence on Surface Heat Conductivity\n", + "\n", + "If we now allow for black and white daisies, the local temperature will\n", + "differ according to the albedo of the region. The regions with white\n", + "daisies will tend to be cooler than the ground and the regions with\n", + "black daisies will tend to be hotter. To determine what the temperature\n", + "is locally, we need to decide how readily the planet surface\n", + "thermalises, i.e. how easily large-scale weather patterns redistributes\n", + "the surface heat.\n", + "\n", + "- If there is perfect heat ‘conduction’ between the different regions\n", + " of the planet then the local temperature will equal the mean\n", + " temperature given by the emission temperature $T_e$.\n", + "\n", + " \n", + " $$\n", + " T^4_i \\equiv T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)\n", + " $$\n", + "\n", + "- If there is no conduction, or perfect ‘insulation’, between regions\n", + " then the temperature will be the emission temperature due to the\n", + " albedo of the local region.\n", + "\n", + " \n", + " $$\n", + " T^4_i= L \\frac{S_0}{4\\sigma}(1-\\alpha_i)\n", + " $$\n", + "where $\\alpha_i$ indicates either $\\alpha_g$, $\\alpha_w$ or $\\alpha_b$.\n", + "\n", + "The local temperature can be chosen to lie between these two values,\n", + "\n", + "\n", + "\n", + "$$\n", + " T^4_i = R L \\frac{S_0}{4\\sigma}(\\alpha_p-\\alpha_i) + T^4_e\n", + "$$\n", + "\n", + "where $R$\n", + "is a parameter that interpolates between the two extreme cases i.e.\n", + "$R=0$ means perfect conduction and $R=1$ implies perfect insulation\n", + "between regions.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
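To see how the conduction parameter $R$ spreads the local temperatures around the emission temperature, here is a small sketch (the solar forcing and albedos are placeholder values, not the ones in the lab's yaml files; the variable names mirror the `derivs5` code used below):

```python
sigma = 5.67e-8                                   # Stefan-Boltzmann constant, W m^-2 K^-4
S0, L = 3668.0, 1.0                               # placeholder solar forcing
albedo_p, albedo_w, albedo_b = 0.5, 0.75, 0.25    # illustrative albedos

Te_4 = S0 / 4.0 * L * (1.0 - albedo_p) / sigma
for R in (0.0, 0.2, 1.0):
    eta = R * L * S0 / (4.0 * sigma)
    temp_w = (eta * (albedo_p - albedo_w) + Te_4) ** 0.25
    temp_b = (eta * (albedo_p - albedo_b) + Te_4) ** 0.25
    print(f"R = {R:.1f}: T_white = {temp_w:.1f} K, T_black = {temp_b:.1f} K, T_e = {Te_4 ** 0.25:.1f} K")
```

With $R = 0$ both local temperatures collapse onto $T_e$; as $R$ increases the white-daisy regions sit further below, and the black-daisy regions further above, the emission temperature.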
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-07T19:31:15.932451Z", + "start_time": "2022-02-07T19:31:15.926836Z" + } + }, + "source": [ + "### Problem Conduction\n", + "The conduction parameter R will determine the temperature differential between the bare ground and the regions with black or white daisies. The code in the next cell specifies the derivatives for this situation, removing the feedback between the daisies and the planetary albedo but introducint conduction. Use it to investigate these two questions:\n", + "\n", + "1. Change the value of R and observe the effects on the daisy and emission temperature.\n", + "\n", + "2. What are the effects on the daisy growth rate and the final steady states?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.719383Z", + "start_time": "2022-02-17T03:40:23.710150Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# 5.2 keep the albedo constant at alpha_p and vary the conductivity R\n", + "#\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "\n", + "\n", + "class Integ53(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_white chi S0 L albedo_black R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'whiteconc blackconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.whiteconc, self.initvars.blackconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeffFileName):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"y[0]=fraction white daisies\n", + " y[1]=fraction black daisies\n", + " no feedback between daisies and\n", + " albedo_p (set to ground albedo)\n", + " \"\"\"\n", + " sigma = 5.67e-8 # Stefan Boltzman constant W/m^2/K^4\n", + " user = self.uservars\n", + " x = 1.0 - y[0] - y[1]\n", + " #\n", + " # hard wire the albedo to that of the ground -- no daisy feedback\n", + " #\n", + " albedo_p = user.albedo_ground\n", + " Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma\n", + " eta = user.R * user.L * user.S0 / (4.0 * sigma)\n", + " temp_b = (eta * (albedo_p - user.albedo_black) + Te_4)**0.25\n", + " temp_w = (eta * (albedo_p - user.albedo_white) + Te_4)**0.25\n", + "\n", + " if (temp_b >= 277.5 and temp_b <= 312.5):\n", + " beta_b = 1.0 - 0.003265 * (295.0 - temp_b)**2.0\n", + " else:\n", + " beta_b = 0.0\n", + "\n", + " if (temp_w >= 277.5 and temp_w <= 312.5):\n", + " beta_w = 1.0 - 0.003265 * (295.0 - temp_w)**2.0\n", + " else:\n", + " beta_w = 0.0\n", + "\n", + " # create a 1 x 2 element vector to hold the derivitive\n", + " f = np.empty([self.nvars], 'float')\n", + " f[0] = y[0] * (beta_w * x - user.chi)\n", + " f[1] = y[1] * (beta_b * x - user.chi)\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.925942Z", + "start_time": "2022-02-17T03:40:23.721788Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot conduction problem\n", + "import matplotlib.pyplot as plt\n", + "\n", + "theSolver = 
Integ53('conduction.yaml')\n", + "timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + "\n", + "plt.close('all')\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, yVals)\n", + "theLines[1].set_linestyle('--')\n", + "theLines[1].set_color('k')\n", + "theAx.set_title('lab 5 interactive 3 -- conduction problem')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "out = theAx.legend(theLines, ('white daisies', 'black daisies'),\n", + " loc='center right')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## The Feedback Loop - Feedback Through the Planetary Albedo\n", + "\n", + "The amount of solar radiation the planet reflects will depend on the\n", + "daisy population since the white daisies will reflect more radiation\n", + "than the bare ground and the black daisies will reflect less. So a\n", + "reasonable estimate of the planetary albedo $\\alpha_p$ is an average of\n", + "the albedo’s of the white and black daisies and the bare ground,\n", + "weighted by the amount of area covered by each, i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + " \\alpha_p = A_w\\alpha_w + A_b\\alpha_b + A_g\\alpha_g\n", + "$$\n", + "\n", + "A greater\n", + "population of white daisies will tend to increase planetary albedo and\n", + "decrease the emission temperature, as is apparent from equation\n", + "([lab5:eq:tempe]), while the reverse is true for the black daisies.\n", + "\n", + "To summarize: The daisy population is controlled by its growth rate\n", + "$\\beta_i$ which is a function of the local\n", + "temperature $T_i$ $$\\beta_i = 1.0 - 0.003265(295.5 K -T_i)^2$$ If the\n", + "conductivity $R$ is nonzero, the local temperature is a function of\n", + "planetary albedo $\\alpha_p$\n", + "\n", + "$$T_i = \\left[ R L \\frac{S_0}{4\\sigma}(\\alpha_p-\\alpha_i)\n", + " + T^4_e \\right]^{\\frac{1}{4}}$$\n", + "\n", + "which is determined by the daisy\n", + "population.\n", + "\n", + "- Physically, this provides the feedback from the daisy population\n", + " back to the temperature, completing the loop between the daisies and\n", + " temperature.\n", + "\n", + "- Mathematically, this introduces a rather nasty non-linearity into\n", + " the equations which, as pointed out in the lab 1, usually makes it\n", + " difficult, if not impossible, to obtain exact analytic solutions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
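To see the whole loop in one place, here is a compact sketch that goes from daisy fractions to planetary albedo to local temperatures to growth rates (placeholder forcing and albedos again; the full version, driven by the lab's actual configuration files, is the `Integ54` class in the cells that follow). The `max(0, ...)` plays the role of the temperature window used in the lab code.

```python
sigma = 5.67e-8                                    # Stefan-Boltzmann constant, W m^-2 K^-4
S0, L, R = 3668.0, 1.0, 0.2                        # placeholder forcing and conduction
albedo_w, albedo_b, albedo_g = 0.75, 0.25, 0.5     # illustrative albedos

def feedback(A_w, A_b):
    """Daisy fractions -> planetary albedo -> local temperatures -> growth rates."""
    A_g = 1.0 - A_w - A_b
    albedo_p = A_w * albedo_w + A_b * albedo_b + A_g * albedo_g
    Te_4 = S0 / 4.0 * L * (1.0 - albedo_p) / sigma
    eta = R * L * S0 / (4.0 * sigma)
    temp_w = (eta * (albedo_p - albedo_w) + Te_4) ** 0.25
    temp_b = (eta * (albedo_p - albedo_b) + Te_4) ** 0.25
    beta_w = max(0.0, 1.0 - 0.003265 * (295.5 - temp_w) ** 2)
    beta_b = max(0.0, 1.0 - 0.003265 * (295.5 - temp_b) ** 2)
    return albedo_p, temp_w, temp_b, beta_w, beta_b

print(feedback(0.3, 0.3))
```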
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Initial\n", + "The feedback means a stable daisy population (a\n", + "steady state) and the environmental conditions are in a delicate\n", + "balance. The code below produces a steady state which arises from a given initial daisy\n", + "population that starts with only white daisies.\n", + "\n", + "1. Add a relatively small (5\\%, blackconc = 0.05) initial fraction of black daisies to the\n", + " value in initial.yaml and see\n", + " what effect this has on the temperature and final daisy populations.\n", + " Do you still have a final non-zero daisy population?\n", + "\n", + "2. Set the initial black daisy population to 0.05 Attempt to adjust the initial white daisy population to obtain a\n", + " non-zero steady state. What value of initial white daisy population gives you a non-zero steady state for blackconc=0.05? Do you have to increase or decrease the initial fraction? What is your explanation for this behavior?\n", + "\n", + "3. Experiment with other initial fractions of daisies and look for\n", + " non-zero steady states. Describe and explain your results. Connect what you see here with the discussion of hysteresis towards the end of this lab - what determines which steady state is reached? " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.936568Z", + "start_time": "2022-02-17T03:40:23.927965Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# functions for problem initial\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "\n", + "\n", + "class Integ54(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_white chi S0 L albedo_black R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'whiteconc blackconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.whiteconc, self.initvars.blackconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeff_file_name):\n", + " super().__init__(coeff_file_name)\n", + " self.set_yinit()\n", + "\n", + " def find_temp(self, yvals):\n", + " \"\"\"\n", + " Calculate the temperatures over the white and black daisies\n", + " and the planetary equilibrium temperature given the daisy fractions\n", + "\n", + " input: yvals -- array of dimension [2] with the white [0] and black [1]\n", + " daisy fractiion\n", + " output: white temperature (K), black temperature (K), equilibrium temperature (K)\n", + " \"\"\"\n", + " sigma = 5.67e-8 # Stefan Boltzman constant W/m^2/K^4\n", + " user = self.uservars\n", + " bare = 1.0 - yvals[0] - yvals[1]\n", + " albedo_p = bare * user.albedo_ground + \\\n", + " yvals[0] * user.albedo_white + yvals[1] * user.albedo_black\n", + " Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma\n", + " temp_e = Te_4**0.25\n", + " eta = user.R * user.L * user.S0 / (4.0 * sigma)\n", + " temp_b = (eta * (albedo_p - user.albedo_black) + Te_4)**0.25\n", + " temp_w = (eta * (albedo_p - user.albedo_white) + Te_4)**0.25\n", + " return (temp_w, temp_b, temp_e)\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"y[0]=fraction white daisies\n", + " y[1]=fraction black daisies\n", + " no 
feedback between daisies and\n", + " albedo_p (set to ground albedo)\n", + " \"\"\"\n", + " temp_w, temp_b, temp_e = self.find_temp(y)\n", + "\n", + " if (temp_b >= 277.5 and temp_b <= 312.5):\n", + " beta_b = 1.0 - 0.003265 * (295.0 - temp_b)**2.0\n", + " else:\n", + " beta_b = 0.0\n", + "\n", + " if (temp_w >= 277.5 and temp_w <= 312.5):\n", + " beta_w = 1.0 - 0.003265 * (295.0 - temp_w)**2.0\n", + " else:\n", + " beta_w = 0.0\n", + " user = self.uservars\n", + " bare = 1.0 - y[0] - y[1]\n", + " # create a 1 x 2 element vector to hold the derivitive\n", + " f = np.empty_like(y)\n", + " f[0] = y[0] * (beta_w * bare - user.chi)\n", + " f[1] = y[1] * (beta_b * bare - user.chi)\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:24.818994Z", + "start_time": "2022-02-17T03:40:23.938484Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot for problem initial\n", + "import matplotlib.pyplot as plt\n", + "import pandas as pd\n", + "\n", + "theSolver = Integ54('initial.yaml')\n", + "timevals, yvals, errorlist = theSolver.timeloop5fixed()\n", + "daisies = pd.DataFrame(yvals, columns=['white', 'black'])\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "line1, = theAx.plot(timevals, daisies['white'])\n", + "line2, = theAx.plot(timevals, daisies['black'])\n", + "line1.set(linestyle='--', color='r', label='white')\n", + "line2.set(linestyle='--', color='k', label='black')\n", + "theAx.set_title('lab 5 interactive 4, initial conditions')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "out = theAx.legend(loc='center right')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Temperature\n", + "The code above in Problem Initial adds a new method, ```find_temp``` that takes the white/black daisy fractions and calculates local and planetary temperatures.\n", + "\n", + "1. override ```timeloop5fixed``` so that it saves these three temperatures, plus the daisy growth rates\n", + " to new variables in the Integ54 instance\n", + "\n", + "2. Make plots of (temp_w, temp_b) and (beta_w, beta_b) vs. time for a case with non-zero equilibrium\n", + " concentrations of both black and white daisies" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Adaptive Stepsize in Runge-Kutta\n", + "\n", + "\n", + "\n", + "### Why Adaptive Stepsize?\n", + "\n", + "As a rule of thumb, accuracy increases in Runge-Kutta methods as stepsize decreases. At the same time, the number of function evaluations performed increases. This tradeoff between accuracy of the solution and computational cost always exists, but in the ODE solution algorithms presented earlier it often appears to be unnecessarily large. To see this, consider the solution to a problem in two different time intervals - in the first time interval, the solution is close to steady, whereas in the second one it changes quickly. For acceptable accuracy with a non-adaptive method the step size will have to be adjusted so that the approximate solution is close to the actual solution in the second interval. The stepsize will be fairly small, so that the approximate solution is able to follow the changes in the solution here. However, as there is no change in stepsize throughout the solution process, the same step size will be applied to approximate the solution in the first time interval, where clearly a much larger stepsize would suffice to achieve the same accuracy. Thus, in a region where the solution behaves nicely a lot of function evaluations are wasted because the stepsize is chosen in accordance with the most quickly changing part of the solution.\n", + "\n", + "The way to address this problem is the use of adaptive stepsize control. This class of algorithms adjusts the stepsize taken in a time interval according to the properties of the solution in that interval, making it useful for producing a solution that has a given accuracy in the minimum number of steps.\n", + "\n", + "\n", + "\n", + "### Designing Adaptive Stepsize Control\n", + "\n", + "Now that the goal is clear, the question remains of how to close in on it. As mentioned above, an adaptive algorithm is usually asked to solve a problem to a desired accuracy. To be able to adjust the stepsize in Runge-Kutta the algorithm must therefore calculate some estimate of how far its solution deviates from the actual solution. If with its initial stepsize this estimate is already well within the desired accuracy, the algorithm can proceed with a larger stepsize. If the error estimate is larger than the desired accuracy, the algorithm decreases the stepsize at this point and attempts to take a smaller step. Calculating this error estimate will always increase the amount of work done at a step compared to non-adaptive methods. Thus, the remaining problem is to devise a method of calculating this error estimate that is both\n", + "inexpensive and accurate.\n", + "\n", + "\n", + "\n", + "### Error Estimate by Step Doubling\n", + "\n", + "The first and simple approach to arriving at an error estimate is to simply take every step twice. 
The second time the step is divided up into two steps, producing a different estimate of the solution. The difference in the two solutions can be used to produce an estimate of the truncation error for this step.\n", + "\n", + "How expensive is this method to estimate the error? A single step of fourth order Runge-Kutta always takes four function evaluations. As the second time the step is taken in half-steps, it will take 8 evaluations.\n", + "However, the first function evaluation in taking a step twice is identical to both steps, and thus the overall cost for one step with step doubling is $12 - 1 = 11$ function evaluations. This should be compared to taking two normal half-steps as this corresponds to the overall accuracy achieved. So we are looking at 3 function evaluations more per step, or an increase of computational cost by a factor of $1.375$.\n", + "\n", + "Step doubling works in practice, but the next section presents a slicker way of arriving at an error estimate that is less computationally expensive. It is the commmonly used one today.\n", + "\n", + "\n", + "\n", + "### Error Estimate using Embedded Runge-Kutta\n", + "\n", + "Another way of estimating the truncation error of a step is due to the existence of the special fifth-order Runge-Kutta methods discussed earlier. These methods use six function evaluations which can be recombined to produce a fourth-order method . Again, the difference between the fifth and the fourth order solution is used to calculate an\n", + "estimate of the truncation error. Obviously this method requires fewer function evaluations than step doubling, as the two estimates use the same evaluation points. Originally this method was found by Fehlberg, and later Cash and Karp produced the set of constants presented earlier that produce an efficient and accurate error estimate." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
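To make the step-doubling bookkeeping described above concrete, here is a small sketch that estimates the error of a single classical fourth-order Runge-Kutta step on the test equation used in Problem Estimate below. The step size and helper functions are illustrative; this is not the `lab5_funs.py` code.

```python
# Error estimate by step doubling for one classical RK4 step on
# dy/dt = -y + t + 1, y(0) = 1 (exact solution y = t + exp(-t)).
import numpy as np


def derivs(y, t):
    return -y + t + 1.0


def rk4_step(y, t, h, f):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(y, t)
    k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
    k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
    k4 = f(y + h * k3, t + h)
    return y + (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)


y0, t0, h = 1.0, 0.0, 0.2

y_big = rk4_step(y0, t0, h, derivs)                     # one step of size h
y_half = rk4_step(y0, t0, h / 2.0, derivs)              # the same interval in
y_small = rk4_step(y_half, t0 + h / 2.0, h / 2.0, derivs)  # two steps of size h/2

error_estimate = y_small - y_big                        # step-doubling estimate
true_error = (t0 + h + np.exp(-(t0 + h))) - y_small
print(f"estimated: {error_estimate:.2e}, true: {true_error:.2e}")
```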
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Estimate\n", + "In the demo below, compare the error estimate to the true error, on the initial value problem from Lab 4,\n", + "\n", + "$$\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1$$\n", + "\n", + "which has the exact solution\n", + "\n", + "$$y(t) = t + e^{-t}$$\n", + "\n", + "1. Play with the time step and final time, attempting small changes at first. How reasonable is the error estimate?\n", + "\n", + "2. Keep decreasing the time step. Does the error estimate diverge from the computed error? Why?\n", + "\n", + "3. Keep increasing the time step. Does the error estimate diverge? What is happening with the numerical solution?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:24.826024Z", + "start_time": "2022-02-17T03:40:24.821203Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Functions for problem estimate\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "\n", + "\n", + "class Integ55(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'c1 c2 c3'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in initial yinit\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array([self.initvars.yinit])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeff_file_name):\n", + " super().__init__(coeff_file_name)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, theTime):\n", + " \"\"\"\n", + " y[0]=fraction white daisies\n", + " \"\"\"\n", + " user = self.uservars\n", + " f = np.empty_like(self.yinit)\n", + " f[0] = user.c1 * y[0] + user.c2 * theTime + user.c3\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.701593Z", + "start_time": "2022-02-17T03:40:24.827927Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot for problem estimate\n", + "import matplotlib.pyplot as plt\n", + "\n", + "theSolver = Integ55('expon.yaml')\n", + "\n", + "timeVals, yVals, yErrors = theSolver.timeloop5Err()\n", + "timeVals = np.array(timeVals)\n", + "exact = timeVals + np.exp(-timeVals)\n", + "yVals = np.array(yVals)\n", + "yVals = yVals.squeeze()\n", + "yErrors = np.array(yErrors)\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "line1 = theAx.plot(timeVals, yVals, label='adapt')\n", + "line2 = theAx.plot(timeVals, exact, 'r+', label='exact')\n", + "theAx.set_title('lab 5 interactive 5')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('y value')\n", + "theAx.legend(loc='center right')\n", + "\n", + "#\n", + "# we need to unpack yvals (a list of arrays of length 1\n", + "# into an array of numbers using a list comprehension\n", + "#\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "realestError = yVals - exact\n", + "actualErrorLine = theAx.plot(timeVals, realestError, label='actual error')\n", + "estimatedErrorLine = theAx.plot(timeVals, yErrors, label='estimated error')\n", + "theAx.legend(loc='best')\n", + "\n", + "timeVals, yVals, yErrors = theSolver.timeloop5fixed()\n", + "\n", + "np_yVals = np.array(yVals).squeeze()\n", + "yErrors = 
np.array(yErrors)\n", + "np_exact = timeVals + np.exp(-timeVals)\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "line1 = theAx.plot(timeVals, np_yVals, label='fixed')\n", + "line2 = theAx.plot(timeVals, np_exact, 'r+', label='exact')\n", + "theAx.set_title('lab 5 interactive 5 -- fixed')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('y value')\n", + "theAx.legend(loc='center right')\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "realestError = np_yVals - np_exact\n", + "actualErrorLine = theAx.plot(timeVals, realestError, label='actual error')\n", + "estimatedErrorLine = theAx.plot(timeVals, yErrors, label='estimated error')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('lab 5 interactive 5 -- fixed errors')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### Using Error to Adjust the Stepsize\n", + "\n", + "Both step doubling and embedded methods leave us with the difference\n", + "between two different order solutions to the same step. Provided is a\n", + "desired accuracy, $\\Delta_{des}$. The way this accuracy is specified\n", + "depends on the problem. It can be relative to the solution at step $i$,\n", + "\n", + "$$\\Delta_{des}(i) = RTOL\\cdot |y(i)|$$\n", + "\n", + "where $RTOL$ is the relative\n", + "tolerance desired. An absolute part should be added to this so that the\n", + "desired accuracy does not become zero. There are more ways to adjust the\n", + "error specification to the problem, but the overall goal of the\n", + "algorithm always is to make $\\Delta_{est}(i)$, the estimated error for a\n", + "step, satisfy\n", + "\n", + "$$|\\Delta_{est}(i)|\\leq\\Delta_{des}(i)|$$\n", + "\n", + "Note also that\n", + "for a system of ODEs $\\Delta_{des}$ is of course a vector, and it is\n", + "wise to replace the componentwise comparison by a vector norm.\n", + "\n", + "Note now that the calculated error term is $O(h^{5})$ as it was found as\n", + "an error estimate to fourth-order Runge-Kutta methods. This makes it\n", + "possible to scale the stepsize as\n", + "\n", + "
**eq: hnew**
\n", + "$$h_{new} = h_{old}[{\\Delta_{des}\\over \\Delta_{est}}]^{1/5}$$\n", + "\n", + "or,\n", + "to give an example of the suggested use of vector norms above, the new\n", + "stepsize is given by\n", + "\n", + "
**eq: hnewnorm**
\n", + "$$h_{new} = S h_{old}\\{[{1\\over N}\\sum_{i=1}^{N}({\\Delta_{est}(i)\\over \\Delta_{des}(i)})^{2}]^{1/2}\\}^{-1/5}\\}$$\n", + "\n", + "using the\n", + "root-mean-square norm. $S$ appears as a safety factor ($0
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "\n", + "## Coding Runge-Kutta Adaptive Stepsize Control\n", + "\n", + "The Runge-Kutta code developed in Lab 4 solves the given ODE system in fixed timesteps. It is now necessary to exert adaptive timestep control over the solution. The python code for this is at given in\n", + "[these lines.](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab5/lab5_funs.py#L145-L197)\n", + "\n", + "In principle, this is pretty simple:\n", + "\n", + "1. As before, take a step specified by the Runge-Kutta algorithm.\n", + "\n", + "2. Determine whether the estimated error lies within the user specified tolerance\n", + "\n", + "3. If the error is too large, calculate the new stepsize using the equations above, e.g. $h_{new} = S h_{old}\\{[{1\\over N}\\sum_{i=1}^{N}({\\Delta_{est}(i)\\over \\Delta_{des}(i)})^{2}]^{1/2}\\}^{-1/5}\\}$ and retake the step.\n", + "\n", + "This can be accomplished by writing a new [timeloop5Err](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab5/lab5_funs.py#L115-L117) method which evaluates each Runge-Kutta step. This routine must now also return the estimate of the truncation error.\n", + "\n", + "In practice, it is prudent to take a number of safeguards. This involves defining a number of variables that place limits on the change in stepsize:\n", + "\n", + "- A safety factor ($0
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem adaptive\n", + "The demos in the previous section solved the Daisyworld equations using the embedded Runge-Kutta methods with adaptive timestep control.\n", + "\n", + "1. Run the code and find solutions of Daisyworld with the default settings found in adapt.yaml using the timeloop5Err adaptive code\n", + "\n", + "2. Find the solutions again but this time with fixed stepsizes (you can copy and paste code for this if you don't want to code your own - be sure to read the earlier parts of the lab before attempting this question if you are stuck on how to do this!) and compare the solutions, the size of the timesteps, and the number of the timesteps between the fixed and adaptive timestep code.\n", + "\n", + "3. Given the difference in the number of timesteps, how much faster would the fixed timeloop need to be to give the same performance as the adaptive timeloop for this case?\n", + "\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Daisyworld Steady States\n", + "\n", + "We can now use the Runge-Kutta code with adaptive timestep control to find some steady states of Daisyworld by varying the luminosity $LS_0$ in the uservars section of adapt.yaml and recording the daisy fractions at the end of the integration. The code was used in the earlier sections to find some adhoc steady state solutions and the effect of altering some of the model parameters. What is of interest now is the effect of the daisy feedback on the range of parameter values for which non-zero steady states exist. That the feedback does have an effect on the steady states was readily seen in [Problem initial](#prob_initial).\n", + "\n", + "If we fix all other Daisyworld parameters, we find that non-zero steady states will exist for a range of solar luminosities which we characterize by the parameter L. Recall, that L is the multiple of the solar constant $S_0$ that Daisyworld receives. What we will investigate in the next few sections is the effect of the daisy feedback on the range of L for which non-zero steady states can exist.\n", + "\n", + "We accomplish this by fixing the Daisyworld parameters and finding the resulting steady state daisy population for a given value of L. A plot is then made of the steady-state daisy populations versus various values of L.\n", + "\n", + "\n", + "\n", + "### Neutral Daisies\n", + "\n", + "The first case we consider is the case investigated in a previous demo where the albedo of the daisies and the ground are set to the same value. This means the daisy population has no effect on the planetary temperature, i.e. there is no feedback ([Problem coupling](#prob_coupling)).\n", + "\n", + "$~$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Daisy fraction -- daisies have ground albedo\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Emission temperature\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### Black Daisies\n", + "\n", + "Now consider a population of black daisies. Note the sharp jump in the\n", + "graph when the first non-zero daisy steady states appear and the\n", + "corresponding rise in the planetary temperature. 
The appearance of the\n", + "black daisies results in a strong positive feedback on the temperature.\n", + "Note as well that the graph drops back to zero at a lower value of L\n", + "than in the case of neutral daisies." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Daisies darker than ground\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "Temperature\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### White Daisies\n", + "\n", + "Consider now a population of purely white daisies. In this case there is\n", + "an abrupt drop in the daisy steady state when it approaches zero with a\n", + "corresponding jump in the emission temperature. Another interesting\n", + "feature is the appearance of hysteresis (the dependence of the state of a system on its history). I.e. at an L of 1.4, there are two steady state solutions:\n", + "\n", + "a. fractional coverage of white daisies of about 0.7, and an emission temperature of about 20C (look at the direction of the arrows on each plot to determine which emission temperature is linked to which fractional coverage)\n", + "\n", + "b. no white daisies, and an emission temperature of about 55C. \n", + "\n", + "This hysteresis arises since the plot of steady states is different when solar luminosity is lowered as opposed to being raised incrementally. So which steady state the planet will be in will depend on the value of the solar constant before it was 1.4 - was it lower, in which case we'd be in state a (white daisies at about 0.7; see the arrows for direction); or was it higher (state b, no white daisies). This is what we mean when we say the state depends on its _history_ - it matters what state it was in before, even though we're studying steady states." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Daisies brighter than ground\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "Temperature\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "\n", + "\n", + "### Black and White Daisies\n", + "\n", + "Finally, consider a population of both black and white daisies. This\n", + "blends in features from the cases where the daisy population was purely\n", + "white or black. Note how the appearance of a white daisy population\n", + "initially causes the planetary temperature to actually drop even though\n", + "the solar luminosity has been increased." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "fraction of black and white daisies\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "note extended temperature range with stabilizing feedbacks\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Conclusion\n", + "\n", + "Black daisies can survive at lower mean temperatures than the white daisies and the reverse is true for white daisies. The end result is that the range of L for which the non-zero daisy steady states exist is greater than the case of neutral (or no) daisies . 
In other words, the feedback from the daisies provide a stabilizing effect that extends the set of environmental conditions in which life on Daisyworld can exist.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Predator\n", + "To make life a little more interesting on Daisyworld, add a population of rabbits that feed upon the daisies. The\n", + "rabbit birth rate will be proportional to the area covered by the daisies while, conversely, the daisy *death rate* will be proportional to the rabbit population.\n", + "\n", + "Add another equation to the Daisyworld model which governs the rabbit population and make the appropriate modifications to the existing daisy equations. Modify the set of equations and solve it with the Runge-Kutta method with adaptive timesteps. Use it to look for steady states and to determine their dependence on the initial conditions and model parameters.\n", + "\n", + "Hand in notebook cells that:\n", + "\n", + "1. Show your modified Daisyworld equations and your new integrator class.\n", + "\n", + "2. At least one set of parameter values and initial conditions that leads to the steady state and a plot of the timeseries for the daisies and rabbits.\n", + "\n", + "3. A discussion of the steady state’s dependence on these values, i.e. what happens when they are altered. Include a few plots for illustration.\n", + "\n", + "4. Does adding this feedback extend the range of habital L values for which non-zero populations exist?\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Appendix: Note on Global Energy Balance\n", + "\n", + "The statement that the earth is in energy balance follows from the First\n", + "Law of Thermodynamics, i.e.\n", + "\n", + "**The energy absorbed by an isolated system is equal to the\n", + " change in the internal energy minus the work extracted**\n", + "\n", + "which itself is an expression of the conservation of energy.\n", + "\n", + "For the earth, the primary source of energy is radiation from the sun.\n", + "The power emitted by the sun, known as the solar luminosity, is\n", + "$L_0=3.9 \\times 10^{26}W$ while the energy flux received at the\n", + "mean distance of the earth from the sun ($1.5\\times 10^{11}m$) is called\n", + "the solar constant, $S_0=1367\\ W m^{-2}$. For Daisy World the solar\n", + "constant is taken to be $S_0=3668\\ W m^{-2}$.\n", + "\n", + "The emission temperature of a planet is the temperature the planet would\n", + "be at if it emitted energy like a blackbody. A blackbody, so-called\n", + "because it is a perfect absorber of radiation, obeys the\n", + "Stefan-Boltzmann Law:\n", + "\n", + "\n", + "\n", + "\\textbf{eq: Stefan-Boltzman}\n", + "$$ F_B\\ (Wm^{-2}) = \\sigma T^4_e$$\n", + "\n", + " where $\\epsilon$ is the energy density and\n", + "$\\sigma = 5.67\\times 10^{-8}Wm^{-2}K^{-4}$. Given the energy absorbed,\n", + "it is easy to calculate the emission temperature $T_e$ with\n", + "Stefan-Boltzman equation.\n", + "\n", + "In general, a planet will reflect some of the radiation it receives,\n", + "with the fraction reflected known as the albedo $\\alpha_p$. 
So the total\n", + "energy absorbed by the planet is actually flux density received times\n", + "the fraction absorbed times the perpendicular area to the sun ( the\n", + "’shadow area’), i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + " E_{\\rm absorbed}=S_0(1-\\alpha_p)\\pi r_p^2$$\n", + "\n", + "where $r^2_p$ is the\n", + "planet’s radius.\n", + "\n", + "If we still assume the planet emits like a blackbody, we can calculate\n", + "the corresponding blackbody emission temperature. The total power\n", + "emitted would be the flux $F_B$ of the blackbody times its\n", + "surface area, i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + " E_{\\rm blackbody} = \\sigma T^4_e 4\\pi r_p^2$$\n", + "\n", + "Equating the energy absorbed with the energy emitted by a blackbody we\n", + "can calculate the emission temperature,\n", + "\n", + "\n", + "\n", + "$$\n", + " T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)$$\n", + "\n", + "\n", + "\n", + "## Summary: Daisy World Equations\n", + "\n", + "$$\\frac{dA_w}{dt} = A_w ( \\beta_w x - \\chi)$$\n", + "\n", + "$$\\frac{dA_b}{dt} = A_b ( \\beta_b x - \\chi)$$\n", + "\n", + "$$x = 1 - A_w - A_b$$\n", + "\n", + "$$\\beta_i = 1.0 - 0.003265(295.5 K -T_i)^2$$\n", + "\n", + "$$T^4_i = R L \\frac{S_0}{4\\sigma}(\\alpha_p-\\alpha_i) + T^4_e$$\n", + "\n", + "$$\\alpha_p = A_w\\alpha_w + A_b\\alpha_b + A_g\\alpha_g$$\n", + "\n", + "$$T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "
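As a quick sanity check on the energy-balance formula above, the snippet below evaluates the emission temperature for the Earth value of $S_0$ quoted in this appendix and for the Daisyworld solar constant. The planetary albedos (0.3 for the Earth-like case, 0.5 for Daisyworld) are assumptions chosen only to illustrate the formula.

```python
# Evaluate T_e^4 = L * S0/(4*sigma) * (1 - albedo_p) for two cases; the
# albedo values below are assumed for illustration.
sigma = 5.67e-8   # Stefan-Boltzmann constant (W m^-2 K^-4)


def emission_temp(S0, albedo_p, L=1.0):
    return (L * S0 / (4.0 * sigma) * (1.0 - albedo_p)) ** 0.25


print(f"Earth-like:  T_e = {emission_temp(1367.0, 0.3):.0f} K")
print(f"Daisyworld:  T_e = {emission_temp(3668.0, 0.5):.0f} K")
```

The familiar value of roughly 255 K comes out for the Earth-like case, and a comfortable value near 300 K for Daisyworld.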
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Appendix: Organization of the adaptive Runge Kutta routines\n", + "\n", + "* The coding follows [Press et al.](adapt_ode.pdf), with the adaptive Runge Kutta defined\n", + " in the Integrator base class [here](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L43-L59)\n", + "\n", + "* The step size choice is made in [timeloop5err](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L115-L118)\n", + "\n", + "* To set up a specific problem, you need to overide two methods as demonstrated in the example code:\n", + "the member function that initalizes the concentrations: [yinit](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L115-L118) and the derivatives routine [derivs5](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L66-L68)\n", + "\n", + "* In [Problem Initial](#prob_initial) we define a new member function:\n", + "\n", + "```python\n", + "\n", + "def find_temp(self, yvals):\n", + " \"\"\"\n", + " Calculate the temperatures over the white and black daisies\n", + " and the planetary equilibrium temperature given the daisy fractions\n", + "\n", + " input: yvals -- array of dimension [2] with the white [0] and black [1]\n", + " daisy fraction\n", + " output: white temperature (K), black temperature (K), equilibrium temperature (K)\n", + " \"\"\"\n", + "```\n", + "which give an example of how to use the instance variable data (self.uservars) in additional calculations." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Appendix: 2 minute intro to object oriented programming\n", + "\n", + "For a very brief introduction to python classes take a look at [these scipy lecture notes](http://www.scipy-lectures.org/intro/language/oop.html)\n", + "that define some of the basic concepts. For perhaps more detail than you want/need to know, see this 2 part\n", + "series on [object oriented programming](https://realpython.com/python3-object-oriented-programming) and inheritence ([supercharge your classes with super()](https://realpython.com/python-super/))\n", + "Briefly, we need a way to store a lot of information, for\n", + "example the Runge-Kutta coefficients, in an organized way that is accessible to multiple functions,\n", + "without having to pass all that information through the function arguments. 
Python solves this problem\n", + "by putting both the data and the functions together into an class, as in the Integrator class below.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "### Classes and constructors" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.864965Z", + "start_time": "2022-02-17T03:40:25.861468Z" + } + }, + "outputs": [], + "source": [ + "class Integrator:\n", + " def __init__(self, first, second, third):\n", + " print('Constructing Integrator')\n", + " self.a = first\n", + " self.b = second\n", + " self.c = third\n", + "\n", + " def dumpit(self, the_name):\n", + " printlist = [self.a, self.b, self.c]\n", + " print(f'dumping arguments for {the_name}: {printlist}')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "* ```__init__()``` is called the class constructor\n", + "\n", + "* a,b,c are called class attributes\n", + "\n", + "* ```dumpit()``` is called a member function or method\n", + "\n", + "* We construct and instance of the class by passing the required arguments to ```__init__```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.870243Z", + "start_time": "2022-02-17T03:40:25.866862Z" + } + }, + "outputs": [], + "source": [ + "the_integ = Integrator(1, 2, 3)\n", + "print(dir(the_integ))\n", + "#note that the_integ now has a, b, c, and dumpit" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "* and we call the member function like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.874862Z", + "start_time": "2022-02-17T03:40:25.872305Z" + } + }, + "outputs": [], + "source": [ + "the_integ.dumpit('Demo object')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What does this buy us? Member functions only need arguments specific to them, and can use any\n", + "attribute or other member function attached to the self variable, which doesn't need to be\n", + "part of the function call." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### finding the attributes and methods of a class instance\n", + "\n", + "Python has a couple of functions that allow you to see the methods and\n", + "attributes of objects\n", + "\n", + "To get a complete listing of builtin and user-defined methods and attributes use\n", + "\n", + "```\n", + " dir\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.880876Z", + "start_time": "2022-02-17T03:40:25.877080Z" + } + }, + "outputs": [], + "source": [ + "dir(the_integ)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To see just the attributes, use\n", + "\n", + "```\n", + " vars\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.886283Z", + "start_time": "2022-02-17T03:40:25.882379Z" + } + }, + "outputs": [], + "source": [ + "vars(the_integ)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The inspect.getmembers function gives you everything as a list of (name,object) tuples\n", + "so you can filter the items you're interested in. 
See:\n", + "\n", + "https://docs.python.org/3/library/inspect.html" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.892154Z", + "start_time": "2022-02-17T03:40:25.888471Z" + } + }, + "outputs": [], + "source": [ + "import inspect\n", + "all_info_the_integ = inspect.getmembers(the_integ)\n", + "only_methods = [\n", + " item[0] for item in all_info_the_integ if inspect.ismethod(item[1])\n", + "]\n", + "print('methods for the_integ: ', only_methods)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Inheritance" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "We can also specialize a class by driving from a base and then adding more data or members,\n", + "or overriding existing values. For example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.898142Z", + "start_time": "2022-02-17T03:40:25.894004Z" + } + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "class Trig(Integrator):\n", + "\n", + " def __init__(self, one, two, three, four):\n", + " print('constructing Trig')\n", + " #\n", + " # first construct the base class\n", + " #\n", + " super().__init__(one, two, three)\n", + " self.d = four\n", + "\n", + " def calc_trig(self):\n", + " self.trigval = np.sin(self.c * self.d)\n", + "\n", + " def print_trig(self, the_date):\n", + " print(f'on {the_date} the value of sin(a*b)=: {self.trigval:5.3f}')\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.903481Z", + "start_time": "2022-02-17T03:40:25.899897Z" + } + }, + "outputs": [], + "source": [ + "sample = Trig(1, 2, 3, 4)\n", + "sample.calc_trig()\n", + "sample.print_trig('July 5')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Initializing using yaml\n", + "\n", + "To specify the intial values for the class, we use a plain text\n", + "format called [yaml](http://www.yaml.org/spec/1.2/spec.html). 
To write a yaml\n", + "file, start with a dictionary that contains entries that are themselves dictionaries:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.911033Z", + "start_time": "2022-02-17T03:40:25.905692Z" + } + }, + "outputs": [], + "source": [ + "import yaml\n", + "out_dict = dict()\n", + "out_dict['vegetables'] = dict(carrots=5, eggplant=7, corn=2)\n", + "out_dict['fruit'] = dict(apples='Out of season', strawberries=8)\n", + "with open('groceries.yaml', 'w') as f:\n", + " yaml.safe_dump(out_dict, f)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.916796Z", + "start_time": "2022-02-17T03:40:25.912866Z" + } + }, + "outputs": [], + "source": [ + "#what's in the yaml file?\n", + "#each toplevel dictionary key became a category\n", + "import sys #output to sys.stdout because print adds blank lines\n", + "with open('groceries.yaml', 'r') as f:\n", + " for line in f.readlines():\n", + " sys.stdout.write(line)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.924798Z", + "start_time": "2022-02-17T03:40:25.919000Z" + } + }, + "outputs": [], + "source": [ + "#read into a dictionary\n", + "with open('groceries.yaml', 'r') as f:\n", + " init_dict = yaml.safe_load(f)\n", + "print(init_dict)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "### Overriding initial values in a derived class" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Suppose we want to change a value like the strength of the sun, $L$, after it's been\n", + "read in from the initail yaml file? Since a derived class can override the yinit function\n", + "in the Integrator class, we are free to change it to overwrite any variable by reassigning\n", + "the new value to self in the child constructor.\n", + "\n", + "Here's a simple example showing this kind of reinitialization:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.932658Z", + "start_time": "2022-02-17T03:40:25.929740Z" + } + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "\n", + "class Base:\n", + " #\n", + " # this constructor is called first\n", + " #\n", + " def __init__(self, basevar):\n", + " self.L = basevar\n", + "\n", + "\n", + "class Child(Base):\n", + " #\n", + " # this class changes the initialization\n", + " # to add a new variable\n", + " #\n", + " def __init__(self, a, L):\n", + " super().__init__(a)\n", + " #\n", + " # change the L in the child class\n", + " #\n", + " self.L = L" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can use Child(a,Lval) to construct instances with any value of L we want:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.938863Z", + "start_time": "2022-02-17T03:40:25.934853Z" + } + }, + "outputs": [], + "source": [ + "Lvals = np.linspace(0, 100, 11)\n", + "\n", + "#\n", + "# now make 10 children, each with a different value of L\n", + "#\n", + "a = 5\n", + "for theL in Lvals:\n", + " newItem = Child(a, theL)\n", + " print(f'set L value in child class to {newItem.L:3.0f}')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To change L in the IntegCoupling class in [Problem Conduction](#prob_conduction) look at\n", + "changing the value above these lines:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "```python\n", + "initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + "self.initvars = initvars(**self.config['initvars'])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Specific example\n", + "\n", + "So to use this technique for [Problem Conduction](#prob_conduction), override `set_yinit` so that\n", + "it will take a new luminosity value newL, and add it to uservars, like this:\n", + "\n", + "```python\n", + "class IntegCoupling(Integrator):\n", + " \"\"\"rewrite the set_yinit method\n", + " to work with luminosity\n", + " \"\"\"\n", + "\n", + " def set_yinit(self, newL):\n", + " #\n", + " # change the luminocity\n", + " #\n", + " self.config[\"uservars\"][\"L\"] = newL # change solar incidence fraction\n", + " #\n", + " # make a new namedtuple factory called uservars that includes newL \n", + " #\n", + " uservars_fac = namedtuple('uservars', self.config['uservars'].keys())\n", + " #\n", + " # use the factory to make the augmented uservars named tuple\n", + " #\n", + " self.uservars = uservars_fac(**self.config['uservars'])\n", + " #\n", + "\n", + "\n", + " def __init__(self, coeffFileName, newL):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit(newL)\n", + " \n", + " ...\n", + "```\n", + "\n", + "then construct a new instance with a value of newL like this:\n", + 
"\n", + "```python\n", + "theSolver = IntegCoupling(\"coupling.yaml\", newL)\n", + "```\n", + "\n", + "The IntegCoupling constructor first constructs an instance of the Integrator\n", + "class by calling `super()` and passing it the name of the yaml file. Once this\n", + "is done then it\n", + "calls the `IntegCoupling.set_yinit` method which takes the Integrator class instance\n", + "(called \"self\" by convention) and modifies it by adding newL to the usersvars\n", + "attribute.\n", + "\n", + "Try executing\n", + "\n", + "```python\n", + "newL = 50\n", + "theSolver = IntegCoupling(\"coupling.yaml\", newL)\n", + "```\n", + "\n", + "and verify that:\n", + "\n", + "`theSolver.uservars.L` is indeed 50\n", + "\n", + "#### Check your understanding\n", + "\n", + "To see if you're really getting the zeitgeist, try an alternative design where\n", + "you leave the constructor as is, and instead add a new method called:\n", + "\n", + "```python\n", + "def reset_L(self,newL)\n", + "```\n", + "\n", + "so that you could do this:\n", + "\n", + "```python\n", + "newL = 50\n", + "theSolver = IntegCoupling(\"coupling.yaml\")\n", + "theSolver.reset_L(newL)\n", + "```\n", + "\n", + "and get `theSolver.uservars.L` set to 50." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Why bother?\n", + "\n", + "What does object oriented programming buy us? The dream was that companies/coders could ship\n", + "standard base classes, thoroughly tested and documented, and then users could adapt those\n", + "classes to their special needs using inheritence. This turned out to be too ambitous,\n", + "but a dialed-back version of this is definitely now part of many major programming languages." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "679.091px", + "left": "0px", + "top": "66.2926px", + "width": "207.145px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab6/01-lab6.ipynb.txt b/_sources/notebooks/lab6/01-lab6.ipynb.txt new file mode 100644 index 0000000..74f0c14 --- /dev/null +++ b/_sources/notebooks/lab6/01-lab6.ipynb.txt @@ -0,0 +1,1241 @@ +{ + "cells": [ + { + "cell_type": 
"markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "# Lab 6: The Lorenz equations" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems \n", + "\n", + "[Problem Experiment: Investigation of the behaviour of solutions](#prob_experiment)\n", + "\n", + "[Problem Steady-states: Find the stationary points of the Lorenz system](#prob_steady-states)\n", + "\n", + "[Problem Eigenvalues: Find the eigenvalues of the stationary point (0,0,0)](#prob_eigenvalues)\n", + "\n", + "[Problem Stability: Discuss the effect of r on the stability of the solution](#prob_stability)\n", + "\n", + "[Problem Adaptive: Adaptive time-stepping for the Lorenz equations](#prob_adaptive)\n", + "\n", + "[Problem Sensitivity: Sensitivity to initial conditions](#prob_sensitivity)\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "## Objectives \n", + "\n", + "In this lab, you will investigate the transition to chaos in the Lorenz\n", + "equations – a system of non-linear ordinary differential equations.\n", + "Using interactive examples, and analytical and numerical techniques, you\n", + "will determine the stability of the solutions to the system, and\n", + "discover a rich variety in their behaviour. You will program both an\n", + "adaptive and non-adaptive Runge-Kuttan code for the problem, and\n", + "determine the relative merits of each.\n", + "\n", + "
\n", + "\n", + "## Readings\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. Nevertheless, the original 1963 paper by Lorenz  is\n", + "worthwhile reading from a historical standpoint.\n", + "\n", + "If you would like additional background on any of the following topics,\n", + "then refer to Appendix B for the following:\n", + "\n", + "- **Easy Reading:**\n", + "\n", + " - Gleick  (1987) [pp. 9-31], an interesting overview of the\n", + " science of chaos (with no mathematical details), and a look at\n", + " its history.\n", + "\n", + " - Palmer (1993) has a short article on Lorenz’ work and\n", + " concentrating on its consequences for weather prediction.\n", + "\n", + "- **Mathematical Details:**\n", + "\n", + " - Sparrow (1982), an in-depth treatment of the mathematics\n", + " behind the Lorenz equations, including some discussion of\n", + " numerical methods.\n", + " \n", + " - The original equations by Saltzman (1962) and the\n", + " first Lorentz (1963) paper on the computation.\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "
\n", + "\n", + "## Introduction \n", + "\n", + "\n", + "For many people working in the physical sciences, the *butterfly\n", + "effect* is a well-known phrase. But even if you are unacquainted\n", + "with the term, its consequences are something you are intimately\n", + "familiar with. Edward Lorenz investigated the feasibility of performing\n", + "accurate, long-term weather forecasts, and came to the conclusion that\n", + "*even something as seemingly insignificant as the flap of a\n", + "butterfly’s wings can have an influence on the weather on the other side\n", + "of the globe*. This implies that global climate modelers must\n", + "take into account even the tiniest of variations in weather conditions\n", + "in order to have even a hope of being accurate. Some of the models used\n", + "today in weather forecasting have up to *a million unknown\n", + "variables!*\n", + "\n", + "With the advent of modern computers, many people believed that accurate\n", + "predictions of systems as complicated as the global weather were\n", + "possible. Lorenz’ studies (Lorenz, 1963), both analytical and numerical, were\n", + "concerned with simplified models for the flow of air in the atmosphere.\n", + "He found that even for systems with considerably fewer variables than\n", + "the weather, the long-term behaviour of solutions is intrinsically\n", + "unpredictable. He found that this type of non-periodic, or\n", + "*chaotic* behaviour, appears in systems that are described\n", + "by non-linear differential equations.\n", + "\n", + "The atmosphere is just one of many hydrodynamical systems, which exhibit\n", + "a variety of solution behaviour: some flows are steady; others oscillate\n", + "between two or more states; and still others vary in an irregular or\n", + "haphazard manner. This last class of behaviour in a fluid is known as\n", + "*turbulence*, or in more general systems as\n", + "*chaos*. Examples of chaotic behaviour in physical systems\n", + "include\n", + "\n", + "- thermal convection in a tank of fluid, driven by a heated plate on\n", + " the bottom, which displays an irregular patter of “convection rolls”\n", + " for certain ranges of the temperature gradient;\n", + "\n", + "- a rotating cylinder, filled with fluid, that exhibits\n", + " regularly-spaced waves or irregular, nonperiodic flow patterns under\n", + " different conditions;\n", + "\n", + "- the Lorenzian water wheel, a mechanical system, described in\n", + " [Appendix A](#sec_water-wheel).\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "One of the simplest systems to exhibit chaotic behaviour is a system of\n", + "three ordinary differential equations, studied by Lorenz, and which are\n", + "now known as the *Lorenz equations* (see\n", + "equations ([eq: lorentz](#eq_lorentz)). They are an idealization of\n", + "a more complex hydrodynamical system of twelve equations describing\n", + "turbulent flow in the atmosphere, but which are still able to capture\n", + "many of the important aspects of the behaviour of atmospheric flows. The\n", + "Lorenz equations determine the evolution of a system described by three\n", + "time-dependent state variables, $x(t)$, $y(t)$ and $z(t)$. The state in\n", + "Lorenz’ idealized climate at any time, $t$, can be given by a single\n", + "point, $(x,y,z)$, in *phase space*. As time varies, this\n", + "point moves around in the phase space, and traces out a curve, which is\n", + "also called an *orbit* or *trajectory*. 
\n", + "\n", + "The video below shows an animation of the 3-dimensional phase space trajectories\n", + "of $x, y, z$ for the Lorenz equations presented below. It is calculated with\n", + "the python script by written by Jake VanderPlas: [lorenz_ode.py](https://github.com/rhwhite/numeric_2022/blob/main/notebooks/lab6/lorenz_ode.py)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:19:12.919866Z", + "start_time": "2022-02-28T17:19:12.683509Z" + } + }, + "outputs": [], + "source": [ + "from IPython.display import YouTubeVideo\n", + "YouTubeVideo('DDcCiXLAk2U')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "\n", + "## Using the Integrator class" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " [lorenz_ode.py](https://github.com/rhwhite/numeric_2022/blob/main/notebooks/lab6/lorenz_ode.py) uses the\n", + " odeint package from scipy. That's fine if we are happy with a black box, but we\n", + " can also use the Integrator class from lab 5. Here is the sub-class Integrator61 \n", + " that is specified for the Lorenz equations:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:19:32.024832Z", + "start_time": "2022-02-28T17:19:31.720112Z" + } + }, + "outputs": [], + "source": [ + "import context\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "from collections import namedtuple\n", + "import numpy as np\n", + "\n", + "\n", + "\n", + "class Integ61(Integrator):\n", + "\n", + " def __init__(self, coeff_file_name,initvars=None,uservars=None,\n", + " timevars=None):\n", + " super().__init__(coeff_file_name)\n", + " self.set_yinit(initvars,uservars,timevars)\n", + "\n", + " def set_yinit(self,initvars,uservars,timevars):\n", + " #\n", + " # read in 'sigma beta rho', override if uservars not None\n", + " #\n", + " if uservars:\n", + " self.config['uservars'].update(uservars)\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'x y z'\n", + " #\n", + " if initvars:\n", + " self.config['initvars'].update(initvars)\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " #\n", + " # set dt, tstart, tend if overiding base class values\n", + " #\n", + " if timevars:\n", + " self.config['timevars'].update(timevars)\n", + " timevars = namedtuple('timevars', self.config['timevars'].keys())\n", + " self.timevars = timevars(**self.config['timevars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.x, self.initvars.y, self.initvars.z])\n", + " self.nvars = len(self.yinit)\n", + "\n", + " def derivs5(self, coords, t):\n", + " x,y,z = coords\n", + " u=self.uservars\n", + " f=np.empty_like(coords)\n", + " f[0] = u.sigma * (y - x)\n", + " f[1] = x * (u.rho - z) - y\n", + " f[2] = x * y - u.beta * z\n", + " return f" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The main difference with daisyworld is that I've changed the ```__init__``` function to\n", + "take optional arguments to take initvars, uservars and timevars, to give us\n", + "more flexibility in overriding the default configuration specified in\n", + "[lorenz.yaml](./lorenz.yaml)\n", + "\n", + "I also want to be able to plot the trajectories in 3d, which means that I\n", + "need the Axes3D class from matplotlib. 
I've written a convenience function\n", + "called plot_3d that sets start and stop points and the viewing angle:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:20:22.889548Z", + "start_time": "2022-02-28T17:20:21.819508Z" + } + }, + "outputs": [], + "source": [ + "import warnings\n", + "warnings.simplefilter(action = \"ignore\", category = FutureWarning)\n", + "%matplotlib inline\n", + "from matplotlib import pyplot as plt\n", + "from mpl_toolkits.mplot3d import Axes3D\n", + "plt.style.use('ggplot')\n", + "\n", + "def plot_3d(ax,xvals,yvals,zvals):\n", + " \"\"\"\n", + " plot a 3-d trajectory with start and stop markers\n", + " \"\"\"\n", + " line,=ax.plot(xvals,yvals,zvals,'r-')\n", + " ax.set_xlim((-20, 20))\n", + " ax.set_ylim((-30, 30))\n", + " ax.set_zlim((5, 55))\n", + " ax.grid(True)\n", + " #\n", + " # look down from 30 degree elevation and an azimuth of\n", + " #\n", + " ax.view_init(30,5)\n", + " line,=ax.plot(xvals,yvals,zvals,'r-')\n", + " ax.plot([-20,15],[-30,-30],[0,0],'k-')\n", + " ax.scatter(xvals[0],yvals[0],zvals[0],marker='o',c='green',s=75)\n", + " ax.scatter(xvals[-1],yvals[-1],zvals[-1],marker='^',c='blue',s=75)\n", + " out=ax.set(xlabel='x',ylabel='y',zlabel='z')\n", + " line.set(alpha=0.2)\n", + " return ax\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the code below I set timevars, uservars and initvars\n", + "to illustrate a sample orbit in phase\n", + "space (with initial value $(5,5,5)$). Notice that the orbit appears to\n", + "be lying in a surface composed of two “wings”. In fact, for the\n", + "parameter values used here, all orbits, no matter the initial\n", + "conditions, are eventually attracted to this surface; such a surface is\n", + "called an *attractor*, and this specific one is termed the\n", + "*butterfly attractor* … a very fitting name, both for its\n", + "appearance, and for the fact that it is a visualization of solutions\n", + "that exhibit the “butterfly effect.” The individual variables are\n", + "plotted versus time in [Figure xyz-vs-t](#fig_xyz-vs-t)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:20:28.951870Z", + "start_time": "2022-02-28T17:20:28.258149Z" + } + }, + "outputs": [], + "source": [ + "#\n", + "# make a nested dictionary to hold parameters\n", + "#\n", + "timevars=dict(tstart=0,tend=27,dt=0.01)\n", + "uservars=dict(sigma=10,beta=2.6666,rho=28)\n", + "initvars=dict(x=5,y=5,z=5)\n", + "params=dict(timevars=timevars,uservars=uservars,initvars=initvars)\n", + "#\n", + "# expand the params dictionary into key,value pairs for\n", + "# the Integ61 constructor using dictionary expansion\n", + "#\n", + "theSolver = Integ61('lorenz.yaml',**params)\n", + "timevals, coords, errorlist = theSolver.timeloop5fixed()\n", + "xvals,yvals,zvals=coords[:,0],coords[:,1],coords[:,2]\n", + "\n", + "\n", + "fig = plt.figure(figsize=(6,6))\n", + "ax = fig.add_axes([0, 0, 1, 1], projection='3d')\n", + "ax=plot_3d(ax,xvals,yvals,zvals)\n", + "out=ax.set(title='starting point: {},{},{}'.format(*coords[0,:]))\n", + "#help(ax.view_init)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "A plot of the solution to the Lorenz equations as an orbit in phase\n", + "space. Parameters: $\\sigma=10$, $\\beta=\\frac{8}{3}$, $\\rho=28$; initial values:\n", + "$(x,y,z)=(5,5,5)$." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:20:33.127958Z", + "start_time": "2022-02-28T17:20:32.922941Z" + } + }, + "outputs": [], + "source": [ + "fig,ax = plt.subplots(1,1,figsize=(8,6))\n", + "ax.plot(timevals,xvals,label='x')\n", + "ax.plot(timevals,yvals,label='y')\n", + "ax.plot(timevals,zvals,label='z')\n", + "ax.set(title='x, y, z for trajectory',xlabel='time')\n", + "out=ax.legend()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "**Figure xyz-vs-t**: A plot of the solution to the Lorenz equations versus time.\n", + "Parameters: $\\sigma=10$, $\\beta=\\frac{8}{3}$, $\\rho=28$; initial values:\n", + "$(x,y,z)=(5,5,5)$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "As you saw in the movie, the behaviour of the solution, even though it\n", + "seems to be confined to a specific surface, is anything but regular. The\n", + "solution seems to loop around and around forever, oscillating around one\n", + "of the wings, and then jump over to the other one, with no apparent\n", + "pattern to the number of revolutions. This example is computed for just\n", + "one choice of parameter values, and you will see in the problems later\n", + "on in this lab, that there are many other types of solution behaviour.\n", + "In fact, there are several very important characteristics of the\n", + "solution to the Lorenz equations which parallel what happens in much\n", + "more complicated systems such as the atmosphere:\n", + "\n", + "1. The solution remains within a bounded region (that is, none of the\n", + " values of the solution “blow up”), which means that the solution\n", + " will always be physically reasonable.\n", + "\n", + "2. The solution flips back and forth between the two wings of the\n", + " butterfly diagram, with no apparent pattern. This “strange” way that\n", + " the solution is attracted towards the wings gives rise to the name\n", + " *strange attractor*.\n", + "\n", + "3. The resulting solution depends very heavily on the given initial\n", + " conditions. Even a very tiny change in one of the initial values can\n", + " lead to a solution which follows a totally different trajectory, if\n", + " the system is integrated over a long enough time interval.\n", + "\n", + "4. The solution is irregular or *chaotic*, meaning that it\n", + " is impossible, based on parameter values and initial conditions\n", + " (which may contain small measurement errors), to predict the\n", + " solution at any future time.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "
\n", + "## The Lorenz Equations \n", + "\n", + "\n", + "As mentioned in the previous section, the equations we will be\n", + "considering in this lab model an idealized hydrodynamical system:\n", + "two-dimensional convection in a tank of water which is heated at the\n", + "bottom (as pictured in [Figure Convection](#fig_convection) below).\n", + "\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:31:16.352041Z", + "start_time": "2022-02-28T17:31:16.339061Z" + } + }, + "outputs": [], + "source": [ + "from IPython.display import Image\n", + "Image(filename=\"images/convection.png\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure Convection** Lorenz studied the flow of fluid in a tank heated at the bottom, which\n", + "results in “convection rolls”, where the warm fluid rises, and the cold\n", + "fluid is drops to the bottom.\n", + "\n", + "Lorenz wrote the equations in the form\n", + "\n", + "\n", + "\n", + "
\n", + "$$\n", + "\\begin{aligned}\n", + " \\frac{dx}{dt} &=& \\sigma(y-x) \\\\\n", + " \\frac{dy}{dt} &=& \\rho x-y-xz \\\\\n", + " \\frac{dz}{dt} &=& xy-\\beta z \n", + "\\end{aligned}\n", + "$$\n", + "where $\\sigma$, $\\rho$\n", + "and $\\beta$ are real, positive parameters. The variables in the problem can\n", + "be interpreted as follows:\n", + "\n", + "- $x$ is proportional to the intensity of the convective motion (positive\n", + " for clockwise motion, and a larger magnitude indicating more\n", + " vigorous circulation),\n", + "\n", + "- $y$ is proportional to the temperature difference between the ascending\n", + " and descending currents (it’s positive if the warm water is on the\n", + " bottom),\n", + "\n", + "- $z$ is proportional to the distortion of the vertical temperature\n", + " profile from linearity (a value of 0 corresponds to a linear\n", + " gradient in temperature, while a positive value indicates that the\n", + " temperature is more uniformly mixed in the middle of the tank and\n", + " the strongest gradients occur near the boundaries),\n", + "\n", + "- $t$ is the dimensionless time,\n", + "\n", + "- $\\sigma$ is called the Prandtl number (it involves the viscosity and thermal\n", + " conductivity of the fluid),\n", + "\n", + "- $\\rho$ is a control parameter, representing the temperature difference\n", + " between the top and bottom of the tank, and\n", + "\n", + "- $\\beta$ measures the width-to-height ratio of the convection layer." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that these equations are *non-linear* in $x$, $y$\n", + "and $z$, which is a result of the non-linearity of the fluid flow\n", + "equations from which this simplified system is obtained.\n", + "\n", + "**Mathematical Note**: This system of equations is derived by Saltzman (1962) for the\n", + "thermal convection problem. However, the same\n", + "equations ([eq:lorenz](#eq_lorenz)) arise in other physical\n", + "systems as well. One example is the whose advantage over the original\n", + "derivation by Saltzman (which is also used in Lorenz’ 1963 paper ) is\n", + "that the system of ODEs is obtained *directly from the\n", + "physics*, rather than as an approximation to a partial\n", + "differential equation.\n", + "\n", + "Remember from [Section Introduction](#sec_introduction) that the Lorenz equations exhibit\n", + "nonperiodic solutions which behave in a chaotic manner. Using analytical\n", + "techniques, it is actually possible to make some qualitative predictions\n", + "about the behaviour of the solution before doing any computations.\n", + "However, before we move on to a discussion of the stability of the\n", + "problem in Section [lab6:sec:stability], you should do the following\n", + "exercise, which will give you a hands-on introduction to the behaviour\n", + "of solutions to the Lorenz equations.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Boundedness of the Solution\n", + "\n", + "\n", + "The easiest way to see that the solution is bounded in time is by\n", + "looking at the motion of the solution in phase space, $(x,y,z)$, as the\n", + "flow of a fluid, with velocity $(\\dot{x}, \\dot{y}, \\dot{z})$ (the “dot”\n", + "is used to represent a time derivative, in order to simplify notation in\n", + "what follows). The *divergence of this flow* is given by\n", + "$$\\frac{\\partial \\dot{x}}{\\partial x} +\n", + " \\frac{\\partial \\dot{y}}{\\partial y} +\n", + " \\frac{\\partial \\dot{z}}{\\partial z},$$ \n", + "and measures how the volume of\n", + "a fluid particle or parcel changes – a positive divergence means that\n", + "the fluid volume is increasing locally, and a negative volume means that\n", + "the fluid volume is shrinking locally (zero divergence signifies an\n", + "*incompressible fluid*, which you will see more of in and\n", + "). If you look back to the Lorenz\n", + "equations ([eq:lorenz](#eq_lorenz)), and take partial derivatives,\n", + "it is clear that the divergence of this flow is given by\n", + "$$\n", + "\\frac{\\partial \\dot{x}}{\\partial x} +\n", + "\\frac{\\partial \\dot{y}}{\\partial y} +\n", + "\\frac{\\partial \\dot{z}}{\\partial z} = -(\\sigma + b + 1).\n", + "$$\n", + "Since\n", + "$\\sigma$ and $b$ are both positive, real constants, the divergence is a\n", + "negative number, which is always less than $-1$. Therefore, each small\n", + "volume shrinks to zero as the time $t\\rightarrow\\infty$, at a rate which\n", + "is independent of $x$, $y$ and $z$. The consequence for the solution,\n", + "$(x,y,z)$, is that every trajectory in phase space is eventually\n", + "confined to a region of zero volume. As you saw in\n", + "[Problem experiment](#prob_experiment), this region, or\n", + "*attractor*, need not be a point – in fact, the two wings\n", + "of the “butterfly diagram” are a surface with zero volume.\n", + "\n", + "The most important consequence of the solution being bounded is that\n", + "none of the physical variables, $x$, $y$, or $z$ “blows up.”\n", + "Consequently, we can expect that the solution will remain with\n", + "physically reasonable limits.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "[Problem Experiment](#prob_experiment) \n", + "\n", + "Lorenz’ results are based on the following values of the physical parameters taken from Saltzman’s paper (Saltzman, 1962): $$\\sigma=10 \\quad \\mathrm{and} \\quad b=\\frac{8}{3}.$$ \n", + "As you will see in [Section stability](#sec_stability), there is a *critical value of the parameter $\\rho$*, $\\rho^\\ast=470/19\\approx 24.74$ (for these values of $\\sigma$ and $\\beta$); it is *critical* in the sense that for\n", + "any value of $\\rho>\\rho^\\ast$, the flow is unstable.\n", + "\n", + "To allow you to investigate the behaviour of the solution to the Lorenz\n", + "equations, you can try out various parameter values in the following\n", + "interactive example. *Initially, leave $\\sigma$ and $\\beta$ alone, and\n", + "modify only $\\rho$ and the initial conditions.* If you have time,\n", + "you can try varying the other two parameters, and see what happens. Here\n", + "are some suggestions:\n", + "\n", + "- Fix the initial conditions at $(5,5,5)$ and vary $\\rho$ between $0$ and\n", + " $100$.\n", + "\n", + "- Fix $\\rho=28$, and vary the initial conditions; for example, try\n", + " $(0,0,0)$, $(0.1,0.1,0.1)$, $(0,0,20)$, $(100,100,100)$,\n", + " $(8.5,8.5,27)$, etc.\n", + "\n", + "- Anything else you can think of …\n", + "\n", + "1. Describe the different types of behaviour you see and compare them\n", + " to what you saw in [Figure fixed-plot](#fig_fixed-plot). Also, discuss the\n", + " results in terms of what you read in [Section Introduction](#sec_introduction)\n", + " regarding the four properties of the solution.\n", + "\n", + "2. One question you should be sure to ask yourself is: *Does\n", + " changing the initial condition affect where the solution ends\n", + " up?* The answer to this question will indicate whether there\n", + " really is an attractor which solutions approach as\n", + " $t\\rightarrow\\infty$.\n", + "\n", + "3. Finally, for the different types of solution behaviour, can you\n", + " interpret the results physically in terms of the thermal convection\n", + " problem?\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we’re ready to find out why the solution behaves as it does. In [Section Intro](#sec_introduction), you were told about four properties of solutions to the Lorenz equations that are also exhibited by the atmosphere, and in the problem you just worked though, you saw that these were also exhibited by solutions to the Lorenz equations. In the remainder of this section, you will see mathematical reasons for two of those characteristics, namely the boundedness and stability (or instability) of solutions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "
\n", + "\n", + "## Steady States \n", + "\n", + "\n", + "A *steady state* of a system is a point in phase space from which the system will not change in time, once that state has been reached. In other words, it is a point, $(x,y,z)$, such that the solution does not change, or where\n", + "$$\\frac{dx}{dt} = 0 \\quad\\ \\mathrm{and}\\ \\quad \\frac{dy}{dt} = 0 \\quad \\ \\mathrm{and}\\ \\quad \\frac{dz}{dt} = 0.$$ \n", + "This point is usually referred to as a *stationary point* of the system.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "**[Problem_steady-states](#prob_steady-states)**\n", + "\n", + "Set the time derivatives equal to zero in the Lorenz equations ([eq:lorenz](#eq_lorenz)), and solve the resulting system to show that there are three possible steady states, namely the points\n", + "\n", + "- $(0,0,0)$,\n", + "\n", + "- $(\\sqrt{\\beta(\\rho-1)},\\sqrt{\\beta(\\rho -1)},\\rho -1)$, and\n", + "\n", + "- $(-\\sqrt{\\beta (\\rho -1)},-\\sqrt{\\beta(\\rho-1)},\\rho-1)$.\n", + "\n", + "Remember that $\\rho$ is a positive real number, so that that there is *only one* stationary point when $0\\leq \\rho \\leq 1$, but all three stationary points are present when $\\rho >1$.\n", + "\n", + "While working through [Problem experiment](#prob_experiment), did you notice the change in behaviour of the solution as $\\rho$ passes through the value 1? If not, then go back to the interactive example and try out some values\n", + "of $\\rho$ both less than and greater than 1 to see how the solution changes.\n", + "\n", + "A steady state tells us the behaviour of the solution only at a single point. *But what happens to the solution if it is perturbed slightly away from a stationary point? Will it return to the stationary point; or will it tend to move away from the point; or will it oscillate about the steady state; or something else … ?* All of these questions are related to the long-term, *asymptotic* behaviour or *stability* of the solution near a given point. You already should have seen some examples of different asymptotic solution behaviour in the Lorenz equations for different parameter values. The next section describes a general method for determining the stability of a solution near a given stationary point." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "
\n", + "\n", + "## Linearization about the Steady States \n", + "\n", + "\n", + "The difficult part of doing any theoretical analysis of the Lorenz equations is that they are *non-linear*. *So, why not approximate the non-linear problem by a linear one?*\n", + "\n", + "This idea should remind you of what you read about Taylor series in Lab \\#2. There, we were approximating a function, $f(x)$, around a point by expanding the function in a Taylor series, and the first order Taylor approximation was simply a linear function in $x$. The approach we will take here is similar, but will get into Taylor series of functions of more than one variable: $f(x,y,z,\\dots)$.\n", + "\n", + "The basic idea is to replace the right hand side functions in ([eq:lorenz](#eq_lorenz)) with a linear approximation about a stationary point, and then solve the resulting system of *linear ODE’s*. Hopefully, we can then say something about the non-linear system at values of the solution *close to the stationary point* (remember that the Taylor series is only accurate close to the point we’re expanding about).\n", + "\n", + "So, let us first consider the stationary point $(0,0,0)$. If we linearize a function $f(x,y,z)$ about $(0,0,0)$ we obtain the approximation: \n", + "\n", + "$$f(x,y,z) \\approx f(0,0,0) + f_x(0,0,0) \\cdot (x-0) + \n", + "f_y(0,0,0) \\cdot (y-0) + f_z(0,0,0) \\cdot (z-0).$$ \n", + "\n", + "If we apply this formula to the right hand side function for each of the ODE’s in ([eq: lorenz](#eq_lorenz)), then we obtain the following linearized system about $(0,0,0)$: \n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "\\begin{aligned}\n", + " \\frac{dx}{dt} &= -\\sigma x + \\sigma y \\\\\n", + " \\frac{dy}{dt} &= \\rho x-y \\\\\n", + " \\frac{dz}{dt} &= -\\beta z \n", + "\\end{aligned}\n", + "\n", + "\n", + "(note that each right hand side is now a linear function of $x$, $y$ and $z$). It is helpful to write this system in matrix form as\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "\\begin{aligned}\n", + " \\frac{d}{dt} \\left(\n", + " \\begin{array}{c} x \\\\ y \\\\ z \\end{array} \\right) = \n", + " \\left( \\begin{array}{ccc}\n", + " -\\sigma & \\sigma & 0 \\\\\n", + " \\rho & -1 & 0 \\\\\n", + " 0 & 0 & -\\beta \n", + " \\end{array} \\right) \\;\n", + " \\left(\\begin{array}{c} x \\\\ y \\\\ z \\end{array} \\right)\n", + "\\end{aligned}\n", + "\n", + "\n", + "the reason for this being that the *eigenvalues* of the matrix give us valuable information about the solution to the linear system. In fact, it is a well-known result from the study of dynamical systems is that if the matrix in [eq:lorenz_linear_matrix](#eq:lorenz_linear_matrix) has *distinct* eigenvalues $\\lambda_1$, $\\lambda_2$ and $\\lambda_3$, then the solution to this equation is given by\n", + "\n", + "\n", + "\n", + "$$\n", + "x(t) = c_1 e^{\\lambda_1 t} + c_2 e^{\\lambda_2 t} + c_3 e^{\\lambda_3 t},\n", + "$$\n", + "and similarly for the other two solution components, $y(t)$ and $z(t)$ (the $c_i$’s are constants that are determined by the initial conditions of the problem). This should not seem too surprising, if you think that the solution to the scalar equation $dx/dt=\\lambda x$ is $x(t) = e^{\\lambda t}$.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "**[Problem eigenvalues:](prob_eigenvalues)**\n", + "\n", + "Remember from Lab \\#3 that the eigenvalues of a matrix, $A$, are given by the roots of the characteristic equation, $det(A-\\lambda I)=0$. Determine the characteristic equation of the matrix in [eq:lorenz_linear_matrix](#eq:lorenz_linear_matrix), and show that the eigenvalues of the linearized problem are\n", + "\n", + "**eq_eigen0**\n", + "\\begin{equation}\n", + "\\lambda_1 = -\\beta, \\quad \\mathrm{and} \\quad \\lambda_2, \\lambda_3 =\n", + "\\frac{1}{2} \\left( -\\sigma - 1 \\pm \\sqrt{(\\sigma-1)^2 + 4 \\sigma \\rho}\n", + "\\right). \n", + "\\end{equation}\n", + "\n", + "\n", + "When $\\rho>1$, the same linearization process can be applied at the remaining two stationary points, which have eigenvalues that satisfy another characteristic equation:\n", + "\n", + "**eq_eigen01**\n", + "\\begin{equation}\n", + "\\lambda^3+(\\sigma+\\beta +1)\\lambda^2+(\\rho+\\sigma)\\beta \\lambda+2\\sigma \\beta(\\rho-1)=0.\n", + "\\end{equation}\n", + "\n", + "\n", + "If you need a reminder about odes and eignevalues, the following resources may be useful:\n", + "- [Linear ODE review](http://tutorial.math.lamar.edu/Classes/DE/SolutionsToSystems.aspx)\n", + "- [Link between eigenvectors and ODEs](https://math.stackexchange.com/questions/23312/what-is-the-importance-of-eigenvalues-eigenvectors)\n", + "- [Stability theory for ODEs](https://en.wikipedia.org/wiki/Stability_theory)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "### Stability of the Linearized Problem\n", + "\n", + "\n", + "Now that we know the eigenvalues of the system around each stationary point, we can write down the solution to the linearized problem. However, it is not the exact form of the linearized solution that we’re interested in, but rather its *stability*. In fact, the eigenvalues give us all the information we need to know about how the linearized solution behaves in time, and so we’ll only talk about the eigenvalues from now on.\n", + "\n", + "It is possible that two of the eigenvalues in the characteristic equations above can be complex numbers – *what does this mean for the solution?* The details are a bit involved, but the important thing to realize is that if $\\lambda_2,\\lambda_3=a\\pm i\\beta$ are complex (remember that complex roots always occur in conjugate pairs) then the solutions can be rearranged so that they are of the form\n", + "\n", + "\n", + "$$\n", + "x(t) = c_1 e^{\\lambda_1 t} + c_2 e^{a t} \\cos(bt) + c_3 e^{a t}\n", + " \\sin(bt). \n", + " $$ \n", + "In terms of the asymptotic stability of the problem, we need to look at the asymptotic behaviour of the solution as $t\\rightarrow \\infty$, from which several conclusions can be drawn:\n", + "\n", + "1. If the eigenvalues are *real and negative*, then the solution will go to zero as $t \\rightarrow\\infty$. In this case the linearized solution is *stable*.\n", + "\n", + "2. If the eigenvalues are real, and *at least one* is positive, then the solution will blow up as $t \\rightarrow\\infty$. In this case the linearized solution is *unstable*.\n", + "\n", + "3. If there is a complex conjugate pair of eigenvalues, $a\\pm ib$, then the solution exhibits oscillatory behaviour (with the appearance of the terms $\\sin{bt}$ and $\\cos{bt}$). If the real part, $a$, of all eigenvalues is negative, the oscillations will decay in time and the solution is *stable*; if the real part is positive, then the oscillations will grow, and the solution is *unstable*. If the complex eigenvalues have zero real part, then the oscillations will neither decay nor increase in time – the resulting linearized problem is periodic, and we say the solution is *marginally stable*.\n", + "\n", + "Now, an important question:\n", + "\n", + "*Does the stability of the non-linear system parallel that of the linearized systems near the stationary points?*\n", + "\n", + "The answer is “almost always”. We won’t go into why, or why not, but just remember that you can usually expect the non-linear system to behave just as the linearized system near the stationary states.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The discussion of stability of the stationary points for the Lorenz equations will be divided up based on values of the parameter $\\rho$ (assuming $\\sigma=10$ and $\\beta=\\frac{8}{3}$). You’ve already seen that the behaviour of the solution changes significantly, by the appearance of two additional stationary points, when $r$ passes through the value 1. You’ll also see an explanation for the rest of the behaviour you observed:\n", + "\n", + "$0<\\rho<1$:\n", + "\n", + "- there is only one stationary state, namely the point $(0,0,0)$. You can see from [eq:eigen0](eq_eigen0) that for these values of $\\rho$, there are three, real, negative roots. 
The origin is a *stable* stationary point; that is, it attracts nearby solutions to itself.\n", + "\n", + "$\\rho>1$:\n", + "\n", + "- The origin has one positive, and two negative, real eigenvalues. Hence, the origin is *unstable*. Now, we need only look at the other two stationary points, whose behaviour is governed by the roots of [eq_eigen01](#eq_eigen01).\n", + "\n", + "$1<\\rho<\\frac{470}{19}$:\n", + "\n", + "- The other two stationary points have eigenvalues that have negative real parts. So these two points are *stable*.\n", + "\n", + " It’s also possible to show that two of these eigenvalues are real when $\\rho<1.346$, and they are complex otherwise (see Sparrow 1982 for a more complete discussion). Therefore, the solution begins to exhibit oscillatory behaviour for values of $\\rho$ greater than 1.346.\n", + "\n", + "$\\rho>\\frac{470}{19}$:\n", + "\n", + "- The other two stationary points have one real, negative eigenvalue, and two complex eigenvalues with positive real part. Therefore, these two points are *unstable*. In fact, all three stationary points are unstable for these values of $\\rho$.\n", + "\n", + "The stability of the stationary points is summarized in the table below." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "| | (0,0,0) | $(\\pm\\sqrt{\\beta(\\rho-1)},\\pm\\sqrt{\\beta(\\rho-1)},\\rho-1)$ |\n", + "|------------------------|----------|-------------------------------------------------------------------|\n", + "| $0<\\rho<1$ | stable | $-$ |\n", + "| $1<\\rho<\\frac{470}{19}$| unstable| stable |\n", + "| $\\rho>\\frac{470}{19}$| unstable| unstable |\n", + "\n", + "_Summary of the stability of the stationary points for the Lorenz equations; parameters $\\sigma=10$, $\\beta=\\frac{8}{3}$_" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "This “critical value” of $\\rho^\\ast= \\frac{470}{19}$ is actually found\n", + "using the formula $$\\rho^\\ast= \\frac{\\sigma(\\sigma+\\beta+3)}{\\sigma-\\beta-1}.$$ See\n", + "Sparrow (1982) for more details.\n", + "\n", + "A qualitative change in the behaviour of the solution when a parameter is\n", + "varied is called a *bifurcation*. Bifurcations occur at:\n", + "\n", + "- $\\rho=1$, when the origin switches from stable to unstable, and two\n", + " more stationary points appear.\n", + "\n", + "- $\\rho=\\rho^\\ast$, where the remaining two stationary points switch from\n", + " being stable to unstable.\n", + "\n", + "Remember that the linear results apply only near the stationary points,\n", + "and do not apply to all of the phase space. Nevertheless, the behaviour\n", + "of the orbits near these points can still say quite a lot about the\n", + "behaviour of the solutions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
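The value of $\rho^\ast$ can also be recovered numerically from the characteristic equation for the non-trivial stationary points: sweep $\rho$ and watch for the largest real part of the roots to change sign. This is only a rough check of the table above (the grid of $\rho$ values is arbitrary):

```
import numpy as np

sigma, beta = 10.0, 8.0 / 3.0
print('formula:', sigma * (sigma + beta + 3.0) / (sigma - beta - 1.0))  # about 24.74

rhos = np.linspace(2.0, 40.0, 2000)
max_real = np.array([np.roots([1.0,
                               sigma + beta + 1.0,
                               (rho + sigma) * beta,
                               2.0 * sigma * beta * (rho - 1.0)]).real.max()
                     for rho in rhos])

# first value of rho at which an eigenvalue crosses into the right half plane
print('sign change near rho =', rhos[max_real > 0.0][0])
```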
\n", + "\n", + "**[Problem Stability](#prob_stability)** \n", + "\n", + "Based on the analytical results from this section, you can now go back to your results from [Problem Experiment](#prob_experiment) and look at them in a new light. Write a short summary of your results (including a few plots or sketches), describing how the solution changes with $\\rho$ in terms of the existence and stability of the stationary points.\n", + "\n", + "There have already been hints at problems with the linear stability analysis. One difficulty that hasn’t been mentioned yet is that for values of $\\rho>\\rho^\\ast$, the problem has oscillatory solutions, which are unstable. *Linear theory does not reveal what happens when these oscillations become large!* In order to study more closely the\n", + "long-time behaviour of the solution, we must turn to numerical integration (in fact, all of the plots you produced in\n", + "Problem [lab6:prob:experiment] were generated using a numerical code).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Numerical Integration \n", + "\n", + "In Lorenz’ original paper, he discusses the application of the forward Euler and leap frog time-stepping schemes, but his actual computations are done using the second order *Heun’s method* (you were introduced to this method in Lab \\#4. Since we already have a lot of experience with Runge-Kutta methods for systems of ODE’s from earlier labs, you’ll be using this approach to solve the Lorenz equations as well. You already have a code from that solves the Daisy World equations, so you can jump right into the programming for the Lorenz equations with the following exercises …\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T19:16:33.115638Z", + "start_time": "2022-02-28T19:16:33.036595Z" + } + }, + "source": [ + "
\n", + "\n", + "**[Problem Adaptive](#prob_adaptive)** \n", + " \n", + "You saw in that adaptive time-stepping saved a considerable amount of computing time for the Daisy World problem. In\n", + "this problem, you will be investigating whether or not an adaptive Runge-Kutta code is the best choice for the Lorenz equations.\n", + "\n", + "Use the Integrator61 object to compute in both adaptive and fixed timeloop solutions for an extended integration. \n", + "Compare the number of time steps taken (plot the time step vs. the integration time for both methods). Which method is\n", + "more efficient? Which is fastest? A simple way to time a portion of a script is to use the ```time``` module to calculate the elapsed time:\n", + "\n", + "```\n", + "import time\n", + "tic = time.time()\n", + "##program here\n", + "elapsed = time.time() - tic\n", + "```\n", + "\n", + "To answer this last question, you will have to consider the cost of the adaptive scheme, compared to the non-adaptive one. The adaptive scheme is obviously more expensive, but by how much? You should think in terms of the number of multiplicative operations that are required in every time step for each method. You don’t have to give an exact operation count, round figures will do.\n", + "\n", + "Optional extra: Finally, we mentioned that the code that produced the animation uses a C module called odeint. It is called [here](https://github.com/rhwhite/numeric_2024/blob/main/notebooks/lab6/lorenz_ode.py#L22-L23) using derivatives defined in \n", + "[lorenz_deriv](https://github.com/rhwhite/numeric_2024/blob/main/notebooks/lab6/lorenz_ode.py#L11-L14).\n", + "Use odeint to solve the same problem you did for the fixed and adaptive timeloops. What is the speed increase you see by using the compiled module?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "**[Problem Sensitivity](#prob_sensitivity)** \n", + " \n", + "One property of chaotic systems such as the Lorenz equations is their *sensitivity to initial\n", + "conditions* – a consequence of the “butterfly effect.” Modify your code from [Problem adaptive](#prob_adaptive) to compute two trajectories (in the chaotic regime $r>r^\\ast$) with different initial conditions *simultaneously*. Use two initial conditions that are very close to each other, say $(1,1,20)$ and $(1,1,20.001)$. Use your “method of choice” (adaptive/non-adaptive), and plot the distance between the two trajectories as a function of time. What do you see?\n", + "\n", + "One important limitation of numerical methods is immediately evident when approximating non-periodic dynamical systems such as the Lorenz equations: namely, *every computed solution is periodic*. That is, when we’re working in floating point arithmetic, there are only finitely many numbers that can be represented, and the solution must eventually repeat itself. When using single precision arithmetic, a typical computer can represent many more floating point numbers than we could ever perform integration steps in a numerical scheme. However, it is still possible that round-off error might introduce a periodic orbit in the numerical solution where one does not really exist. In our computations, this will not be a factor, but it is something to keep in mind.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Other Chaotic Systems\n", + "\n", + "\n", + "There are many other ODE systems that exhibit chaos. An example is one\n", + "studied by Rössler, which obeys a similar-looking system of three ODE’s:\n", + "\n", + "\n", + "\n", + "$$\n", + "\\begin{aligned}\n", + " \\dot{x}&=&-y-z \\\\ \n", + " \\dot{y}&=&x+ay \\\\\n", + " \\dot{z}&=&b+z(x-c) \n", + " \\end{aligned}\n", + " $$ \n", + "\n", + "Suppose that $b=2$, $c=4$,\n", + "and consider the behaviour of the attractor as $a$ is varied. When $a$\n", + "is small, the attractor is a simple closed curve. As $a$ is increased,\n", + "however, this splits into a double loop, then a quadruple loop, and so\n", + "on. Thus, a type of *period-doubling* takes place, and when\n", + "$a$ reaches about 0.375, there is a fractal attractor in the form of a\n", + "band, that looks something like what is known in mathematical circles as\n", + "a *Möbius strip*.\n", + "\n", + "If you’re really keen on this topic, you might be interested in using\n", + "your code to investigate the behaviour of this system of equations,\n", + "*though you are not required to hand anything in for this!*\n", + "\n", + "First, you could perform a stability analysis for\n", + "([lab6:eq:rossler]), like you saw above for the Lorenz\n", + "equations. Then, modify your code to study the Rössler attractor. Use\n", + "the code to compare your analytical stability results to what you\n", + "actually see in the computations.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Summary \n", + "\n", + "In this lab, you have had the chance to investigate the solutions to the Lorenz equations and their stability in quite some detail. You saw that for certain parameter values, the solution exhibits non-periodic, chaotic behaviour. The question to ask ourselves now is: *What does this system tell us about the dynamics of flows in the atmosphere?* In fact, this system has been simplified so much that it is no longer an accurate model of the physics in the atmosphere.\n", + "However, we have seen that the four characteristics of flows in the atmosphere (mentioned in [the Introduction](#sec_intro)) are also present in the Lorenz equations.\n", + "\n", + "Each state in Lorenz’ idealized “climate” is represented by a single point in phase space. For a given set of initial conditions, the evolution of a trajectory describes how the weather varies in time. The butterfly attractor embodies all possible weather conditions that can be attained in the Lorenzian climate. By changing the value of the parameter $\\rho$ (and, for that matter, $\\sigma$ or $\\beta$), the shape of the attractor changes. Physically, we can interpret this as a change in some global property of the weather system resulting in a modification of the possible weather states.\n", + "\n", + "The same methods of analysis can be applied to more complicated models of the weather. One can imagine a model where the depletion of ozone and the increased concentration of greenhouse gases in the atmosphere might be represented by certain parameters. Changes in these parameters result in changes in the shape of the global climate attractor for the system. By studying the attractor, we could determine whether any new, and possibly devastating, weather states are present in this new ozone-deficient atmosphere.\n", + "\n", + "We began by saying in the Introduction that the butterfly effect made accurate long-term forecasting impossible. Nevertheless, it is still possible to derive meaningful qualitative information from such a chaotic dynamical system.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + "\n", + "## A. Mathematical Notes\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "### A.1 The Lorenzian Water Wheel Model\n", + "\n", + "\n", + "*This derivation is adapted from Sparrow \n", + "[Appendix B].*\n", + "\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename=\"images/water-wheel.png\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "**Figure: The Lorenzian water wheel.**\n", + "\n", + "Imagine a wheel which is free to rotate about a horizontal axis, as\n", + "depicted in [Figure water-wheel](#fig_water-wheel).\n", + "\n", + "\n", + "\n", + "To the circumference of the wheel is attached a series of leaky buckets.\n", + "Water flows into the buckets at the top of the wheel, and as the buckets\n", + "are filled with water, the wheel becomes unbalanced and begins to\n", + "rotate. Depending on the physical parameters in this system, the wheel\n", + "may remain motionless, rotate steadily in a clockwise or\n", + "counter-clockwise direction, or reverese its motion in irregular\n", + "intervals. This should begin to remind you of the type of behaviour\n", + "exhibited in the Lorenz system for various parameters.\n", + "\n", + "The following are the variables and parameters in the system:\n", + "\n", + "$r$: the radius of the wheel (constant),\n", + "\n", + "$g$: the acceleration due to gravity (constant),\n", + "\n", + "$\\theta(t)$: is the angular displacement (not a fixed point on the wheel)\n", + " (unknown),\n", + "\n", + "$m(\\theta,t)$: the mass of the water per unit arc, which we assume is a continuous\n", + " function of the angle (unknown),\n", + "\n", + "$\\Omega(t)$: the angular velocity of the wheel,\n", + "\n", + "We also make the following assumptions:\n", + "\n", + "- water is added to the wheel at a constant rate.\n", + "\n", + "- the points on the circumference of the wheel gain water at a rate\n", + " proportional to their height.\n", + "\n", + "- water leaks out at a rate proportional to $m$.\n", + "\n", + "- there is frictional damping in the wheel proportional to the angular\n", + " velocity, $k \\Omega$,\n", + "\n", + "- $A$, $B$, $h$ are additional positive constants.\n", + "\n", + "We’ll pass over some details here, and go right to the equations of\n", + "motion. The equation describing the evloution of the angular momentum is\n", + "\n", + "\n", + "$$\n", + "\\frac{d\\Omega}{dt} = -k \\Omega - \\left( \\frac{gh}{2\\pi aA} \\right)\n", + " m \\cos\\theta.\n", + " $$ \n", + " \n", + " The requirement of conservation\n", + " \n", + "of mass in the system leads to two equations\n", + "$$\\frac{d (m \\sin\\theta)}{dt} = \\Omega m \\cos\\theta - h m \\sin\\theta +\n", + " 2\\pi B\n", + "$$ \n", + "\n", + "and\n", + "$$\n", + "\\frac{d (m \\cos\\theta)}{dt} = -\\Omega m \\sin\\theta - h m \\cos\\theta,\n", + "$$ \n", + "(where all variables dependent on the angle\n", + "have been averaged over $\\theta$).\n", + "\n", + "Using a suitable change of variables, these three equations\n", + "can be written in the same form as the\n", + "Lorenz equations (with $\\beta=1$)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "## B. References\n", + "\n", + "\n", + "Gleick, J., 1987: *Chaos: Making a New Science*. Penguin\n", + "Books.\n", + "\n", + "Lorenz, E. N., 1963: Deterministic nonperiodic flow. *Journal of\n", + "the Atmospheric Sciences*, **20**, 130–141.\n", + "\n", + "Palmer, T., 1993: A weather eye on unpredictability. in N. Hall, editor,\n", + "*Exploring Chaos: A Guide to the New Science of Disorder*,\n", + "chapter 6. W. W. Norton & Co.\n", + "\n", + "Saltzman, B., 1962: Finite amplitude free convection as an initial value\n", + "problem – I. *Journal of the Atmospheric\n", + "Sciences*, **19**, 329–341.\n", + "\n", + "Sparrow, C., 1982: *The Lorenz Equations: Bifurcations, Chaos, and\n", + "Strange Attractors*. volume 41 of *Applied Mathematical\n", + "Sciences*. Springer-Verlag.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": true, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "333.636px", + "left": "10px", + "top": "150px", + "width": "165px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab7/01-lab7.ipynb.txt b/_sources/notebooks/lab7/01-lab7.ipynb.txt new file mode 100644 index 0000000..020b6b5 --- /dev/null +++ b/_sources/notebooks/lab7/01-lab7.ipynb.txt @@ -0,0 +1,1755 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 7: Solving partial differential equations using an explicit, finite difference method.\n", + "\n", + "Lin Yang & Susan Allen & Carmen Guo" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "*This laboratory is long and is typically assigned in two halves. 
See break after Problem 5 and before Full Equations*" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems ##\n", + "- [Problem 1](#Problem-One): Numerical solution on a staggered grid.\n", + "- [Problem 2](#Problem-Two): Stability of the difference scheme\n", + "- [Problem 3](#Problem-Three): Dispersion relation for grid 2\n", + "- [Problem 4](#Problem-Four): Choosing most accurate grid\n", + "- [Problem 5](#Problem-Five): Numerical solution for no y variation\n", + "- [Problem 6](#Problem-Six): Stability on the 2-dimensional grids\n", + "- [Problem 7](#Problem-Seven): Finite difference form of equations\n", + "- [Problem 8](#Problem-Eight): Dispersion relation for D-grid\n", + "- [Problem 9](#Problem-Nine): Accuracy of the approximation on various grids" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Learning Objectives ##\n", + "\n", + "When you have completed reading and working through this lab you will be able to:\n", + "\n", + "- find the dispersion relation for a set of differential equations\n", + " (the “real” dispersion relation).\n", + "\n", + "- find the dispersion relation for a set of difference equations (the\n", + " numerical dispersion relation).\n", + "\n", + "- describe a leap-frog scheme\n", + "\n", + "- construct a predictor-corrector method\n", + "\n", + "- use the given differential equations to determine unspecified\n", + " boundary conditions as necessary\n", + "\n", + "- describe a staggered grid\n", + "\n", + "- state one reason why staggered grids are used\n", + "\n", + "- explain the physical principle behind the CFL condition\n", + "\n", + "- find the CFL condition for a linear, explicit, numerical scheme\n", + "\n", + "- state one criteria that should be considered when choosing a grid\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "These are the suggested readings for this lab. 
For more details about\n", + "the books and papers, click on the reference link.\n", + "\n", + "- **Rotating Navier Stokes Equations**\n", + "\n", + " -  [Pond and Pickard, 1983](#Ref:PondPickard), Chapters 3,4 and 6\n", + "\n", + "- **Shallow Water Equations**\n", + "\n", + " -  [Gill, 1982](#Ref:Gill), Section 5.6 and 7.2 (not 7.2.1 etc)\n", + "\n", + "- **Poincaré Waves**\n", + "\n", + " -  [Gill, 1982](#Ref:Gill), Section 7.3 to just after equation (7.3.8), section 8.2\n", + " and 8.3\n", + "\n", + "- **Introduction to Numerical Solution of PDE’s**\n", + "\n", + " -  [Press et al, 1992](#Ref:Pressetal), Section 17.0\n", + "\n", + "- **Waves**\n", + "\n", + " -  [Cushman-Roision, 1994](#Ref:Cushman-Roisin), Appendix A" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import context\n", + "from IPython.display import Image\n", + "import IPython.display as display\n", + "# import plotting package and numerical python package for use in examples later\n", + "import matplotlib.pyplot as plt\n", + "# make the plots happen inline\n", + "%matplotlib inline\n", + "# import the numpy array handling library\n", + "import numpy as np\n", + "# import the quiz script\n", + "from numlabs.lab7 import quiz7 as quiz\n", + "# import the pde solver for a simple 1-d tank of water with a drop of rain\n", + "from numlabs.lab7 import rain\n", + "# import the dispersion code plotter\n", + "from numlabs.lab7 import accuracy2d\n", + "# import the 2-dimensional drop solver\n", + "from numlabs.lab7 import interactive1\n", + "# import the 2-dimensional dispersion relation plotter\n", + "from numlabs.lab7 import dispersion_2d" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Physical Example, Poincaré Waves\n", + "\n", + "One of the obvious examples of a physical phenomena governed by a\n", + "partial differential equation is waves. Consider a shallow layer of\n", + "water and the waves on the surface of that layer. If the depth of the\n", + "water is much smaller than the wavelength of the waves, the velocity of\n", + "the water will be the same throughout the depth. So then we can describe\n", + "the state of the water by three variables: $u(x,y,t)$, the east-west\n", + "velocity of the water, $v(x,y,t)$, the north-south velocity of the water\n", + "and $h(x,y,t)$, the height the surface of the water is deflected. As\n", + "specified, each of these variables are functions of the horizontal\n", + "position, $(x,y)$ and time $t$ but, under the assumption of shallow\n", + "water, not a function of $z$.\n", + "\n", + "In oceanographic and atmospheric problems, the effect of the earth’s\n", + "rotation is often important. We will first introduce the governing\n", + "equations including the Coriolis force ([Full Equations](#Full-Equations)). However,\n", + "most of the numerical concepts can be considered without all the\n", + "complications in these equations. 
We will also consider two simpler\n", + "sets; one where we assume there is no variation of the variables in the\n", + "y-direction ([No variation in y](#No-variation-in-y)) and one where, in addition, we assume\n", + "that the Coriolis force is negligible ([Simple Equations](#Simple-Equations)).\n", + "\n", + "The solutions of the equations including the Coriolis force are Poincaré\n", + "waves, whereas without the Coriolis force, the resulting waves are called\n", + "shallow water gravity waves.\n", + "\n", + "The remainder of this section will present the equations and discuss the\n", + "dispersion relation for the two simpler sets of equations. If your wave\n", + "theory is rusty, consider reading Appendix A in [Cushman-Roisin, 1994](#Ref:Cushman-Roisin)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Introduce Full Equations \n", + "[full-equations]:(#Introduce-Full-Equations)\n", + "\n", + "The linear shallow water equations on an f-plane over a flat bottom are\n", + "
\n", + "(Full Equations, Eqn 1)\n", + "$$\\frac{\\partial u}{\\partial t} - fv = -g\\frac{\\partial h}{\\partial x}$$\n", + "
\n", + "(Full Equations, Eqn 2)\n", + "$$\\frac{\\partial v}{\\partial t} + fu = -g\\frac{\\partial h}{\\partial y} $$\n", + "
\n", + "(Full Equations, Eqn 3)\n", + "$$\\frac{\\partial h}{\\partial t} + H\\frac{\\partial u}{\\partial x} + H\\frac{\\partial v}{\\partial y} = 0$$ \n", + "
\n", + "where\n", + "\n", + "- $\\vec{u} = (u,v)$ is the horizontal velocity,\n", + "\n", + "- $f$ is the Coriolis frequency,\n", + "\n", + "- $g$ is the acceleration due to gravity,\n", + "\n", + "- $h$ is the surface elevation, and\n", + "\n", + "- $H$ is the undisturbed depth of the fluid.\n", + "\n", + "We will return to these equations in section [Full Equations](#Full-Equations)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### No variation in y\n", + "[no-variation-in-y.unnumbered]: (#No-variation-in-y)\n", + "\n", + "To simplify the problem assume there is no variation in y. This\n", + "simplification gives:\n", + "
\n", + "(No variation in y, first eqn)\n", + "$$\\frac{\\partial u}{\\partial t} - fv = -g\\frac{\\partial h}{\\partial x}$$ \n", + "
\n", + "(No variation in y, second eqn)\n", + "$$\\frac{\\partial v}{\\partial t} + fu = 0$$\n", + "
\n", + "(No variation in y, third eqn)\n", + "$$\\frac{\\partial h}{\\partial t} + H\\frac{\\partial u}{\\partial x} = 0$$\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Introduce Simple Equations\n", + "[simple-equations]:(#Simple-Equations)\n", + "\n", + "If we consider waves in the absence of the earth’s rotation, $f=0$,\n", + "which implies $v=0$ and we get\n", + "
\n", + "$$\\frac{\\partial u}{\\partial t} = -g\\frac{\\partial h}{\\partial x}$$\n", + "
\n", + "$$\\frac{\\partial h}{\\partial t} + H\\frac{\\partial u}{\\partial x} = 0$$\n", + "
\n", + "\n", + "These simplified equations give shallow water gravity waves. For\n", + "example, a solution is a simple sinusoidal wave:\n", + "
\n", + "(wave solution- h)\n", + "$$h = h_{0}\\cos{(kx - \\omega t)}$$\n", + "
\n", + "(wave solution- u)\n", + "$$u = \\frac{h_{0}\\omega}{kH}\\cos{(kx - \\omega t)}$$ \n", + "
\n", + "where $h_{0}$ is the amplitude, $k$ is the\n", + "wavenumber and $\\omega$ is the frequency (See [Cushman-Roisin, 1994](#Ref:Cushman-Roisin) for a nice\n", + "review of waves in Appendix A).\n", + "\n", + "Substitution of ([wave solution- h](#lab7:sec:hwave)) and ([wave solution- u](#lab7:sec:uwave)) back into\n", + "the differential equations gives a relation between $\\omega$ and k.\n", + "Confirm that \n", + "
\n", + "(Analytic Dispersion Relation)\n", + "$$\\omega^2 = gHk^2,$$\n", + "
\n", + "which is the dispersion relation for these waves." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### No variation in y\n", + "[no-variation-in-y-1.unnumbered]:(#No-variation-in-y)\n", + "\n", + "Now consider $f\\not = 0$.\n", + "\n", + "By assuming\n", + "$$h= h_{0}e^{i(kx - \\omega t)}$$\n", + "$$u= u_{0}e^{i(kx - \\omega t)}$$\n", + "$$v= v_{0}e^{i(kx - \\omega t)}$$\n", + "\n", + "and substituting into the differential equations, eg, for [(No variation in y, first eqn)](#lab7:sec:firsteq)\n", + "$$-iwu_{0}e^{i(kx - \\omega t)} - fv_{0}e^{i(kx - \\omega t)} + ikgh_{0}e^{i(kx - \\omega t)} = 0$$\n", + "and cancelling the exponential terms gives 3 homogeneous equations for\n", + "$u_{0}$, $v_{0}$ and $h_{0}$. If the determinant of the matrix derived\n", + "from these three equations is non-zero, the only solution is\n", + "$u_{0} = v_{0} = h_{0} = 0$, NO WAVE! Therefore the determinant must be\n", + "zero." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Quiz: Find the Dispersion Relation\n", + "\n", + "What is the dispersion relation for 1-dimensional Poincare waves?\n", + "\n", + "A) $\\omega^2 = f^2 + gH (k^2 + \\ell^2)$\n", + "\n", + "B) $\\omega^2 = gH k^2$\n", + "\n", + "C) $\\omega^2 = f^2 + gH k^2$\n", + "\n", + "D) $\\omega^2 = -f^2 + gH k^2$\n", + "\n", + "In the following, replace 'x' by 'A', 'B', 'C' or 'D' and run the cell." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.dispersion_quiz(answer = 'XXX'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Numerical Solution\n", + "\n", + "### Simple Equations\n", + "[simple-equations]:(#Simple-Equations)\n", + "\n", + "Consider first the simple equations with $f = 0$. In order to solve\n", + "these equations numerically, we need to discretize in 2 dimensions, one\n", + "in space and one in time. Consider first the most obvious choice, shown\n", + "in Figure [Unstaggered Grid](#lab7:fig:nonstagger)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/nonstagger.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Unstaggered Grid.\n", + "
\n", + "\n", + "We will use centred difference schemes in both $x$ and $t$. The\n", + "equations become:\n", + "
\n", + "(Non-staggered, Eqn One)\n", + "$$\\frac {u(t+dt, x)-u(t-dt, x)}{2 dt} + g \\frac {h(t, x+dx) - h(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "(Non-staggered, Eqn Two)\n", + "$$\\frac {h(t+dt, x)-h(t-dt, x)}{2 dt} + H \\frac {u(t, x+dx) - u(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "We can rearrange these equations to\n", + "give $u(t+dt, x)$ and $h(t+dt, x)$. For a small number of points, the\n", + "resulting problem is simple enough to solve in a notebook.\n", + "\n", + "For a specific example, consider a dish, 40 cm long, of water 1 cm deep.\n", + "Although the numerical code presented later allows you to vary the\n", + "number of grid points, in the discussion here we will use only 5 spatial\n", + "points, a distance of 10 cm apart. The lack of spatial resolution means\n", + "the wave will have a triangular shape. At $t=0$ a large drop of water\n", + "lands in the centre of the dish. So at $t=0$, all points have zero\n", + "velocity and zero elevation except at $x=3dx$, where we have\n", + "$$h(0, 3dx) = h_{0} = 0.01 cm$$\n", + "\n", + "A centred difference scheme in time, such as defined by equations\n", + "([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)), is\n", + "usually refered to as a *Leap frog scheme*. The new values, $h(t+dt)$\n", + "and $u(t+dt)$ are equal to values two time steps back $h(t-dt)$ and\n", + "$u(t-dt)$ plus a correction based on values calculated one time step\n", + "back. Hence the time scheme “leap-frogs” ahead. More on the consequences\n", + "of this process can be found in section [Computational Mode](#Computational-Mode).\n", + "\n", + "As a leap-frog scheme requires two previous time steps, the given\n", + "conditions at $t=0$ are not sufficient to solve\n", + "([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)). We\n", + "need the solutions at two time steps in order to step forward." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Predictor-Corrector to Start\n", + "[lab7:sec:pred-cor]:(#Predictor-Corrector-to-Start)\n", + "\n", + "In section 4.2.2 of Lab 2, predictor-corrector methods were introduced.\n", + "We will use a predictor-corrector based on the forward Euler scheme, to\n", + "find the solution at the first time step, $t=dt$. Then the second order\n", + "scheme ([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)), ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)) can be used.\n", + "\n", + "Using the forward Euler Scheme, the equations become\n", + "
\n", + "$$\\frac {u(t+dt, x)-u(t, x)}{dt} + g \\frac {h(t, x+dx) - h(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "$$\\frac {h(t+dt, x)-h(t, x)}{dt} + H \\frac {u(t, x+dx) - u(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "\n", + "1. Use this scheme to predict $u$ and $h$ at $t=dt$.\n", + "\n", + "2. Average the solution at $t=0$ and that predicted for $t=dt$, to\n", + " estimate the solution at $t=\\frac{1}{2}dt$. You should confirm that\n", + " this procedure gives: $$u(\\frac{dt}{2}) = \\left\\{ \\begin{array}{ll}\n", + " 0 & { x = 3dx}\\\\\n", + " \\left({-gh_{0}dt}\\right)/\\left({4dx}\\right) & { x = 2dx}\\\\\n", + " \\left({gh_{0}dt}\\right)/\\left({4dx}\\right) & { x = 4dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + " $$h(\\frac{dt}{2}) = \\left\\{ \\begin{array}{ll}\n", + " h_{0} & { x = 3dx}\\\\\n", + " 0 & { x \\not= 3dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + "3. The corrector step uses the centred difference scheme in time (the\n", + " leap-frog scheme) with a time step of ${dt}/{2}$ rather than dt. You\n", + " should confirm that this procedure gives:\n", + " $$u(dt) = \\left\\{ \\begin{array}{ll}\n", + " 0 & { x = 3dx}\\\\\n", + " \\left({-gh_{0}dt}\\right)/\\left({2dx}\\right) & { x = 2dx}\\\\\n", + " \\left({gh_{0}dt}\\right)/\\left({2dx}\\right) & { x = 4dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + " $$h(dt) = \\left\\{ \\begin{array}{ll}\n", + " 0 & { x = 2dx, 4dx}\\\\\n", + " h_{0} - \\left({gHdt^2 h_{0}}\\right)/\\left({4dx^2}\\right) & { x = 3dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + "Note that the values at $x=dx$ and $x=5dx$ have not been specified.\n", + "These are boundary points and to determine these values we must consider\n", + "the boundary conditions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Boundary Conditions\n", + "\n", + "If we are considering a dish of water, the boundary conditions at\n", + "$x=dx, 5dx$ are those of a wall. There must be no flow through the wall.\n", + "$$u(dx) = 0$$\n", + "$$u(5dx) = 0$$\n", + "But these two conditions are not\n", + "sufficient; we also need $h$ at the walls. If $u=0$ at the wall for all\n", + "time then $\\partial u/\\partial t=0$ at the wall, so $\\partial h/\\partial x=0$ at the wall. Using a\n", + "one-sided difference scheme this gives\n", + "$$\\frac {h(2dx) - h(dx)}{dx} = 0$$\n", + "or\n", + "$$h(dx) = h(2dx)$$\n", + "and\n", + "$$\\frac {h(4dx) - h(5dx)}{dx} = 0$$\n", + "or\n", + "$$h(5dx) = h(4dx)$$\n", + "which gives the required boundary conditions on $h$ at the wall." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Simple Equations on a Non-staggered Grid\n", + "\n", + "1. Given the above equations and boundary conditions, we can find the\n", + " values of $u$ and $h$ at all 5 points when $t = 0$ and $t = dt$.\n", + "\n", + "2. From ([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)), we can find the values of $u$ and\n", + " $h$ for $t = 2dt$ using $u(0, x)$, $u(dt, x)$, $h(0, x)$, and\n", + " $h(dt, x)$.\n", + "\n", + "3. Then we can find the values of $u$ and $h$ at $t = 3dt$ using\n", + " $u(dt, x)$, $u(2dt, x)$, $h(dt, x)$, and $h(2dt, x)$.\n", + "\n", + "We can use this approach recursively to determine the values of $u$ and\n", + "$h$ at any time $t = n * dt$. The python code that solves this problem\n", + "is provided in the file rain.py. It takes two arguments, the first is the\n", + "number of time steps and the second is the number of horizontal grid\n", + "points. \n", + "\n", + "The output is two coloured graphs. The color represents time with black\n", + "the earliest times and red later times. 
The upper plot shows the water\n", + "velocity (u) and the lower plot shows the water surface. To start with\n", + "the velocity is 0 (black line at zero across the whole domain) and the\n", + "water surface is up at the mid point and zero at all other points (black\n", + "line up at midpoint and zero elsewhere)\n", + "\n", + "Not much happens in 6 time-steps. Do try longer and more grid points." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "rain.rain([6, 5])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you want to change something in the script (say the colormap I've chosen, viridis, doesn't work for you), you can edit rain.py in an editor or spyder. To make it take effect here though, you have to reload rain. See next cell for how to. You will also need to do this if you do problem one or other tests changing rain.py but running in a notebook." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import importlib\n", + "importlib.reload(rain)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Staggered Grids\n", + "[lab7:sec:staggered]:(#Staggered-Grids)\n", + "\n", + "After running the program with different numbers of spatial points, you\n", + "will discover that the values of $u$ are always zero at the odd numbered\n", + "points, and that the values of $h$ are always zero at the even numbered\n", + "points. In other words, the values of $u$ and $h$ are zero in every\n", + "other column starting from $u(t, dx)$ and $h(t, 2dx)$, respectively.\n", + "\n", + "A look at ([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)) can help us understand why this is the\n", + "case:\n", + "\n", + "$u(t + dt, x)$ is dependent on $h(t , x + dx)$ and $h(t , x - dx)$,\n", + "\n", + "but $h(t , x + dx)$ is in turn dependent on $u$ at $x + 2dx$ and at\n", + "$x$,\n", + "\n", + "and $h(t , x - dx)$ is in turn dependent on $u$ at $x - 2dx$ and at\n", + "$x$.\n", + "\n", + "Thus, if we just look at $u$ at a particular $x$, $u(x)$ will depend on\n", + "$u(x + 2dx)$, $u(x - 2dx)$, $u(x + 4dx)$, $u(x - 4dx)$, $u(x + 6dx)$,\n", + "$u(x - 6dx),$ ... but not on $u(x + dx)$ or $u(x - dx)$. Therefore, the\n", + "problem is actually decoupled and consists of two independent problems:\n", + "one problem for all the $u$’s at odd numbered points and all the $h$’s\n", + "at even numbered points, and the other problem for all the $u$’s at even\n", + "numbered points and all the $h$’s at odd numbered points, as shown in\n", + "Figure [Unstaggered Dependency](#lab7:fig:dependency)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/dependency.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Unstaggered Dependency\n", + "
\n", + "\n", + "In either problem, only the variable that is relevant to that problem\n", + "will be considered at each point. So for one problem, if at point $x$ we\n", + "consider the $u$ variable, at $x + dx$ and $x -dx$ we consider $h$. In\n", + "the other problem, at the same point $x$, we consider the variable $h$.\n", + "\n", + "Now we can see why every second $u$ point and $h$ point are zero for\n", + "*rain*. We start with all of\n", + "$u(dx), h(2dx), u(3dx), h(4dx), u(5dx) = 0$, which means they remain at\n", + "zero.\n", + "\n", + "Since the original problem can be decoupled, we can solve for $u$ and\n", + "$h$ on each decoupled grid separately. But why solve two problems?\n", + "Instead, we solve for $u$ and $h$ on a single staggered grid; whereas\n", + "before we solved for $u$ and $h$ on the complete, non-staggered grid.\n", + "Figure [Decoupling](#lab7:fig:decoupling) shows the decoupling of the grids." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/decoupling.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Decoupling: The two staggered grids and the unstaggered grid. Note that the\n", + "unstaggered grid has two variables at each grid/time point whereas the\n", + "staggered grids only have one.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now consider the solution of the same problem on a staggered grid. The\n", + "set-up of the problem is slightly different this time; we are\n", + "considering 4 spatial points in our discussion instead of 5, shown in\n", + "Figure [Staggered Grid](#lab7:fig:stagger). We will also be using $h_{i}$ and $u_{i}$ to\n", + "denote the spatial points instead of $x = dx * i$." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/stagger.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Staggered Grid: The staggered grid for the drop in the pond problem.\n", + "
\n", + "\n", + "The original equations, boundary and initial conditions are changed to\n", + "reflect the staggered case. The equations are changed to the following:\n", + "
\n", + "(Staggered, Eqn 1)\n", + "$$\\frac {u_{i}(t+dt)-u_{i}(t-dt)}{2 dt} + g \\frac {h_{i + 1}(t) - h_{i}(t)}{dx} = 0$$\n", + "
\n", + "(Staggered, Eqn 2)\n", + "$$\\frac {h_{i}(t+dt)-h_{i}(t-dt)}{2 dt} + H \\frac {u_{i}(t) - u_{i - 1}(t)}{dx} = 0$$\n", + "
\n", + "\n", + "The initial conditions are: At $t = 0$ and $t = dt$, all points have\n", + "zero elevation except at $h_{3}$, where \n", + "$$h_{3}(0) = h_{0}$$\n", + "$$h_{3}(dt) = h_{3}(0) - h_{0} Hg \\frac{dt^2}{dx^2}$$ \n", + "At $t = 0$ and\n", + "$t = dt$, all points have zero velocity except at $u_{2}$ and $u_{3}$,\n", + "where \n", + "$$u_{2}(dt) = - h_{0} g \\frac{dt}{dx}$$\n", + "$$u_{3}(dt) = - u_{2}(dt)$$ \n", + "This time we assume there is a wall at\n", + "$u_{1}$ and $u_{4}$, so we will ignore the value of $h_{1}$. The\n", + "boundary conditions are: \n", + "$$u_{1}(t) = 0$$ \n", + "$$u_{4}(t) = 0$$\n", + "\n", + "### Problem One\n", + "[lab7:prob:staggered]:(#Problem-One)\n", + "\n", + "Modify *rain.py* to solve this problem (Simple\n", + "equations on a staggered grid). Submit your code and a final plot for\n", + "one case.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Stability: the CFL condition\n", + "[lab7:stability]:(#Stability:-the-CFL-condition)\n", + "\n", + "In the previous problem, $dt = 0.001$s is used to find $u$ and $h$. If\n", + "you increase $dt$ by 100 times, and run *rain.py* on your staggered grid\n", + "code again, you can see that the magnitude of $u$ has increased from\n", + "around 0.04 to $10^8$! Try this by changing $dt = 0.001$ to\n", + "$dt = 0.1$ in the code and compare the values of $u$ when run with\n", + "different values of $dt$. This tells us that the scheme we have been\n", + "using so far is unstable for large values of $dt$.\n", + "\n", + "To understand this instability, consider a spoked wagon wheel in an old\n", + "western movie (or a car wheel with a pattern in a modern TV movie) such\n", + "as that shown in Figure [Wheel](#lab7:fig:wheel-static)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wheel_static.png',width='35%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wheel: A spoked wagon wheel.\n", + "
\n", + "\n", + "Sometimes the wheels appear to be going backwards. Both TV and\n", + "movies come in frames and are shown at something like 30 frames a second.\n", + "So a movie discretizes time. If the wheel moves just a little in the\n", + "time step between frames, your eye connects the old position with the\n", + "new position and the wheel moves forward $-$ a single frame is shown in\n", + "Figure [Wheel Left](#lab7:fig:wheel-left)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wheel_left.png',width='35%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wheel Left: The wheel appears to rotate counter-clockwise if its speed is slow\n", + "enough.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"hgQ66frbBEs\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "However, if the wheel is moving faster, your eye connects each spoke\n", + "with the next spoke and the wheel seems to move backwards $-$ a single\n", + "frame is depicted in Figure [Wheel Right](#lab7:fig:wheel-right)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wheel_right.png',width='35%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wheel Right: When the wheel spins quickly enough, it appears to rotate\n", + "clockwise!\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"w8iQIwX-ek8\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In a similar manner, the time discretization of any wave must be small\n", + "enough that a given crest moves less than half a grid point in a time\n", + "step. Consider the wave pictured in Figure [Wave](#lab7:fig:wave-static)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wave_static.png',width='65%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wave: A single frame of the wave.\n", + "
\n", + "\n", + "If the wave moves slowly, it seems to move in the correct direction\n", + "(i.e. to the left), as shown in Figure [Wave Left](#lab7:fig:wave-left)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wave_left.png',width='65%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wave Left: The wave moving to the left also appears to be moving to the left if\n", + "it’s speed is slow enough.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"CVybMbfYRXM\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "However if the wave moves quickly, it seems to move backward, as in\n", + "Figure [Wave Right](#lab7:fig:wave-right)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wave_right.png',width='65%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "Figure Wave Right: If the wave moves too rapidly, then it appears to be moving in the\n", + "opposite direction." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"al2VrnkYyD0\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In summary, an explicit numerical scheme is unstable if the time step is\n", + "too large. Such a large time step does not resolve the process being\n", + "modelled. Now we need to calculate, for our problem, the maximum value\n", + "of the time step is for which the scheme remains stable. To do this\n", + "calculation we will derive the dispersion relation of the waves\n", + "*in* the numerical scheme. The maximum time step for\n", + "stability will be the maximum time step for which all waves either\n", + "maintain their magnitude or decay.\n", + "\n", + "Mathematically, consider the equations ([Staggered, Eqn 1](#lab7:eq:staggerGrid1)) and\n", + "([Staggered, Eqn 2](#lab7:eq:staggerGrid2)). Let $x=md$ and $t=p\\, dt$ and consider a\n", + "solution \n", + "
\n", + "(u-solution)\n", + "$$\\begin{aligned}\n", + "u_{mp} &=& {\\cal R}e \\{ {\\cal U} \\exp [i(kx - \\omega t)] \\}\\\\\n", + "&=& {\\cal R}e \\{ {\\cal U} \\exp [i(kmd - \\omega p\\, dt)] \\} \\nonumber\\end{aligned}$$\n", + "$$\\begin{aligned}\n", + "h_{mp} &=& {\\cal R}e \\{ {\\cal H} \\exp [i(k[x - dx/2] - \\omega t)] \\}\\\\\n", + "&=& {\\cal R}e \\{ {\\cal H} \\exp [i(k[m - 1/2]d - \\omega p\\, dt)] \\} \\nonumber\\end{aligned}$$\n", + "where ${\\cal R}e$ means take the real part and ${\\cal U}$ and ${\\cal H}$\n", + "are constants. Substitution into ([Staggered, Eqn 1](#lab7:eq:staggerGrid1)) and\n", + "([Staggered, Eqn 2](#lab7:eq:staggerGrid2)) gives two algebraic equations in ${\\cal U}$\n", + "and ${\\cal H}$ which can be written: \n", + "$$\\left[\n", + "\\begin{array}{cc} - \\sin(\\omega dt)/ dt & 2 g \\sin(kd/2)/d \\\\\n", + "2 H \\sin(kd/2)/d & -\\sin(\\omega \\, dt)/ dt \\\\\n", + "\\end{array}\n", + "\\right]\n", + "\\left[\n", + "\\begin{array}{c} {\\cal U}\\\\ {\\cal H}\\\\ \n", + "\\end{array} \\right]\n", + "= 0.$$ \n", + "
\n", + "where $\\exp(ikd)-\\exp(-ikd)$ has been written $2 i \\sin(kd)$ etc.\n", + "In order for there to be a non-trivial solution, the determinant of the\n", + "matrix must be zero. This determinant gives the dispersion relation\n", + "
\n", + "(Numerical Dispersion Relation)\n", + "$$ \\frac{\\sin^2(\\omega \\, dt)}{dt^2} = 4 gH \\frac {\\sin^2(kd/2)}{d^2}$$\n", + "
\n", + "Which can be compared to the ([Analytic Dispersion Relation](#lab7:eq:disp)), the “real” dispersion\n", + "relation. In particular, if we decrease both the time step and the space\n", + "step, $dt \\rightarrow 0$ and $d \\rightarrow 0$, \n", + "([Numerical Dispersion Relation](#lab7:eq:numerDisp))\n", + "approaches ([Analytic Dispersion Relation](#lab7:eq:disp)). The effect of just the discretization in\n", + "space can be found by letting just $dt \\rightarrow 0$ which gives\n", + "
\n", + "(Continuous Time, Discretized Space Dispersion Relation)\n", + "$$\\omega^2 = 4 gH \\frac {\\sin^2(kd/2)}{d^2}$$\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The “real” dispersion relation, ([Analytic Dispersion Relation](#lab7:eq:disp)), and the numerical\n", + "dispersion relation with continuous time,\n", + "([Continuous Time, Discretized Space Dispersion Relation](#lab7:eq:numerDispSpace)), both give $\\omega^2$ positive and\n", + "therefore $\\omega$ real. However, this is not necessarily true for the\n", + "numerical dispersion relation ([Numerical Dispersion Relation](#lab7:eq:numerDisp)). What does a\n", + "complex $\\omega$ mean? Well, go back to ([u-solution](#eq:udis)). A complex\n", + "$\\omega = a + ib$ gives $u \\propto \\exp(-iat)\\exp(bt)$. The first\n", + "exponential is oscillatory (like a wave) but the second gives\n", + "exponential growth if $b$ is positive or exponential decay if $b$ is\n", + "negative. Obviously, for a stable solution we must have $b \\le 0$. So,\n", + "using ([Numerical Dispersion Relation](#lab7:eq:numerDisp)) we must find $\\omega$ and determine if it\n", + "is real.\n", + "\n", + "Now, because ([Numerical Dispersion Relation](#lab7:eq:numerDisp)) is a transcendental equation, how\n", + "to determine $\\omega$ is not obvious. The following works:\n", + "\n", + "- Re-expand $\\sin(\\omega\\,dt)$ as\n", + " $(\\exp(i \\omega\\,dt)-\\exp(-i\\omega\\,dt))/2i$.\n", + "\n", + "- Write $\\exp(-i\\omega\\,dt)$ as $\\lambda$ and note that this implies\n", + " $\\exp(i\\omega\\, dt) = 1/\\lambda$. If $\\omega dt = a + ib$ then\n", + " $b = ln |\\lambda|$. For stability the magnitude of $\\lambda$ must\n", + " be less than one.\n", + "\n", + "- Write $4 gH \\sin^2(kd/2)/d^2$ as $q^2$, for brevity.\n", + "\n", + "- Substitute in ([Numerical Dispersion Relation](#lab7:eq:numerDisp)) which gives:\n", + " $$-(\\lambda-1/\\lambda)^2 = 4 q^2 dt^2$$ \n", + " or\n", + " $$\\lambda^4 - 2 (1 - 2 q^2 dt^2) \\lambda^2 + 1 = 0$$ \n", + " or\n", + "
\n", + " (Lambda Eqn)\n", + " $$\\lambda = \\pm \\left(1-2q^2 dt^2 \\pm 2 q dt \\left( q^2\n", + " dt^2 - 1 \\right)^{1/2} \\right)^{1/2}$$ \n", + "
\n", + "\n", + "A plot of the four roots for $\\lambda$ is shown below in\n", + "Figure [Roots](#lab7:fig:allmag)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/allmag.png',width='45%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Roots: Magnitude of the four roots of $\\lambda$ as a function of $q dt$ (not\n", + "$\\omega dt$).\n", + "
\n", + "\n", + "The four roots correspond to the “real” waves travelling to the right\n", + "and left, as well two *computational modes* (see\n", + "Section [Computational Mode](#Computational-Mode) for more information). The plots for\n", + "the four roots overlap, so it is most helpful to view [separate plots for each of the roots](#lab7:fig:sepmag). " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/multimag.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Separate Roots: Magnitude of the four roots of $\\lambda$ as a function of $q dt$ (not $\\omega dt$).\n", + "
\n", + "\n", + "Now for stability\n", + "$\\lambda$ must have a magnitude less than or equal to one. From\n", + "Figure [Separate Roots](#lab7:fig:sepmag), it is easy to see that this is the same as\n", + "requiring that $|q dt|$ be less than 1.0.\n", + "\n", + "Substituting for $q$\n", + "$$1 > q^2 dt^2 = \\frac {4gH}{d^2} \\sin^2(kd/2) dt^2$$ \n", + "for all $k$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The maximum wavenumber that can be resolved by a grid of size $d$ is\n", + "$k = \\pi/d$. At this wavenumber, the sine takes its maximum value of 1.\n", + "So the time step \n", + "
\n", + "$$dt^2 < \\frac { d^2}{4 g H}$$\n", + "
\n", + "\n", + "For this case ($f = 0$) the ratio of the space step to the time step\n", + "must be greater than the wave speed $\\sqrt\n", + "{gH}$, or $$d / dt > 2 \\sqrt{gH}.$$ This stability condition is known\n", + "as **the CFL condition** (named after Courant, Friedrich and Levy).\n", + "\n", + "On a historical note, the first attempts at weather prediction were\n", + "organized by Richardson using a room full of human calculators. Each\n", + "person was responsible for one grid point and passed their values to\n", + "neighbouring grid points. The exercise failed dismally, and until the\n", + "theory of CFL, the exact reason was unknown. The equations Richardson\n", + "used included fast sound waves, so the CFL condition was\n", + "$$d/dt > 2 \\times 300 {\\rm m/s}.$$ \n", + "Richardson’s spatial step, $d$, was\n", + "too small compared to $dt$ and the problem was unstable.\n", + "\n", + "### Problem Two \n", + "[lab7:prob:stability]:(#Problem-Two)\n", + "> a) Find the CFL condition (in seconds) for $dt$\n", + "for the Python example in Problem One.\n", + "\n", + "\n", + "\n", + "Test your value. \n", + "\n", + "b) Find the CFL condition (in seconds) for $dt$ for the Python\n", + "example in *rain.py*, ie., for the non-staggered grid. Test your value.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Accuracy\n", + "[lab7:accuracy]:(#Accuracy)\n", + "\n", + "A strong method to determine the accuracy of a scheme is to compare the\n", + "numerical solution to an analytic solution. The equations we are\n", + "considering are wave equations and so we will compare the properties of\n", + "the waves. Wave properties are determined by the dispersion relation and\n", + "we will compare the *numerical dispersion relation* and the exact, continuous\n", + "*analytic dispersion relation*. Both the time step and the space step\n", + "(and as we’ll see below the grid) affect the accuracy. Here we will only\n", + "consider the effect of the space step. So, consider the numerical\n", + "dispersion relation assuming $dt \\rightarrow 0$ (reproduced here from\n", + "([Continuous Time, Discretized Space Dispersion Relation](#lab7:eq:numerDispSpace)))\n", + "$$\\omega^2 = 4 gH \\frac {\\sin^2(kd/2)}{d^2}$$ \n", + "with the exact, continuous\n", + "dispersion relation ([Analytic Dispersion Relation](#lab7:eq:disp)) $$\\omega^2 = gHk^2$$\n", + "\n", + "We can plot the two dispersion relations as functions of $kd$, The graph\n", + "is shown in Figure [Simple Accuracy](#lab7:fig:simpleaccuracy)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/simple_accuracy.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Simple Accuracy\n", + "
\n", + "\n", + "We can see that the accuracy is good for long waves ($k$ small) but for\n", + "short waves, near the limit of the grid resolutions, the discrete\n", + "frequency is too small. As the phase speed is ${\\omega}/{k}$, the phase\n", + "speed is also too small and most worrying, the group speed\n", + "${\\partial \\omega}/\n", + "{\\partial k}$ goes to zero!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Choosing a Grid\n", + "\n", + "#### No variation in y\n", + "[no-variation-in-y-2.unnumbered]:(#No-variation-in-y)\n", + "\n", + "For the simple case above, there is little choice in grid. Let’s\n", + "consider the more complicated case, $f \\not = 0$. Then $v \\not = 0$ and\n", + "we have to choose where on the grid we wish to put $v$. There are two\n", + "choices:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/simple_grid1.png',width='50%')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/simple_grid2.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Two Simple Grids\n", + "
\n", + "\n", + "For each of these, we can calculate the discrete dispersion relation\n", + "discussed above.\n", + "\n", + "For grid 1\n", + "$$\\omega ^2 = f^2 \\cos^2(\\frac{kd}{2}) + \\frac{4gH \\sin^2(\\frac{kd}{2})}{d^2}$$\n", + "\n", + "### Problem Three\n", + "[lab7:prob:grid2]:(#Problem-Three)\n", + "\n", + "Show that for grid 2\n", + "$$\\omega ^2 = f^2 + \\frac{4gH \\sin^2(\\frac{kd}{2})}{d^2}$$\n", + "\n", + "We can plot the two dispersion relations as a function of $k$ given the\n", + "ratio of $d/R$, where $d$ is the grid size and $R$ is the Rossby radius\n", + "which is given by $$R = \\frac {\\sqrt{gH}}{f}.$$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "accuracy2d.main(0.5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Four\n", + "\n", + "[lab7:prob:accurate]:(#Problem-Four)\n", + "\n", + "Which grid gives the best accuracy for $d=R/2$?\n", + "Explain in what ways it is more accurate. Consider the accuracy of the frequency, but also the accuracy of the group speed (the gradient of the frequency with respect to wavenumber). Describe what ranges of wavenumber the accuracy is good and what ranges it is less good.\n", + "\n", + "### Problem Five\n", + "\n", + "[lab7:prob:noy]:(#Problem-Five)\n", + "\n", + "Modify *rain.py* to solve equations\n", + "([No variation in y, first eqn](#lab7:sec:firsteq)), ([No variation in y, second eqn](#lab7:sec:secondeq)) and ([No variation in y, third eqn](#lab7:sec:thirdeq)) on the most accurate\n", + "grid." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# End of Lab 7a and Beginning of Lab 7b #" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Full Equations\n", + "[Full-Equations]:(#Full-Equations)\n", + "\n", + "In order to solve the full equations\n", + "([Full Equations, Eqn 1](#lab7:eq:swea)), ([Full Equations, Eqn 2](#lab7:eq:sweb)) and ([Full Equations, Eqn 3](#lab7:eq:swec)) numerically, we need to\n", + "discretize in 3 dimensions, two in space and one in time.\n", + " [Mesinger and Arakawa, 1976](#Ref:MesingerArakawa) introduced [five different spatial discretizations](http://clouds.eos.ubc.ca/~phil/numeric/labs/lab7/lab7_files/images/allgrid.gif).\n", + " \n", + " \n", + "Consider first the most obvious choice an\n", + "unstaggered grid or Arakawa A grid, shown in Figure [Arakawa A Grid](#lab7:fig:gridA). We\n", + "might expect, from the studies above, that an unstaggered grid may not\n", + "be the best choice. The grid A is not two de-coupled grids because of\n", + "weak coupling through the Coriolis force. However, we will see that this\n", + "grid is not as accurate as some of the staggered grids (B, C, D and E)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/grid1.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Arakawa A Grid.\n", + "
\n", + "\n", + "As the problem becomes more complicated, we need to simplify the\n", + "notation; hence define a discretization operator:\n", + "$$(\\delta_x \\alpha)_{(m,n)} \\equiv \\frac 1 {2d} ( \\alpha_{m+1,n} - \\alpha_{m-1,n} )$$ \n", + "\n", + "where $d$ is the grid spacing in both the $x$ and\n", + "$y$ directions. Note that this discretization is the same centered\n", + "difference we have used throughout this lab.\n", + "\n", + "The finite difference approximation to the full shallow water equations\n", + "on the A grid becomes: \n", + "$$\\frac{\\partial u}{\\partial t} = -g \\delta_x h + fv$$\n", + "$$\\frac{\\partial v}{\\partial t} = -g \\delta_y h - fu$$ \n", + "$$\\frac{\\partial h}{\\partial t} = -H (\\delta_x u + \\delta_y v)$$\n", + "As before, consider a centre difference scheme (leap-frog method) for\n", + "the time step as well, so that\n", + "$$\\frac{\\partial u}{\\partial t}(t) = \\frac {u(t+1)-u(t-1)}{2 dt}$$ \n", + "Putting this together with the spatial scheme we have:\n", + "\n", + "(Numerical Scheme: Grid A)\n", + "$$\\frac {u(t+1)-u(t-1)}{2 dt} = -g \\delta_x h(t) + fv(t)$$\n", + "$$\\frac{v(t+1)-v(t-1)}{2 dt} = -g \\delta_y h(t) - fu(t)$$\n", + "$$\\frac{h(t+1)-h(t-1)}{2 dt} = -H (\\delta_x u(t) + \\delta_y v(t))$$ \n", + "Each of these equations can be rearranged to give $u(t+1)$, $v(t+1)$ and\n", + "$h(t+1)$, respectively. Then given the values of the three variables at\n", + "every grid point at two times, ($t$ and $t-1$), these three equations\n", + "allow you to calculate the updated values, the values of the variables\n", + "at $t+1$. Once again, the following questions arise regarding the\n", + "scheme:\n", + "\n", + "- **Is it stable?**\n", + "\n", + "- **Is it accurate?**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Stability\n", + "\n", + "To determine the stability of the scheme, we need to repeat the analysis\n", + "of section [Stability](#Stability:-the-CFL-condition) for the 2 spatial dimensions used here. The\n", + "first step is to assume a form of the solutions: \n", + "
\n", + "$$\\begin{aligned}\n", + "z_{mnp} &=& {\\cal R}e \\{ {\\cal Z} \\exp [i(kx + \\ell y - \\omega t)] \\}\\\\\n", + "&=& {\\cal R}e \\{ {\\cal Z} \\exp [i(kmd + \\ell n d- \\omega p\\, dt)] \\} \\nonumber\\end{aligned}$$\n", + "
\n", + "where $z$ represents any of $u$, $v$ and $h$ and we have let $x=md$,\n", + "$y=nd$ and $t=p\\,dt$. Substitution into ([Numerical Scheme: Grid A](#eq:numericalGridA)) gives three algebraic equation in\n", + "${\\cal U}$, ${\\cal V}$ and ${\\cal H}$: $$\\left[\n", + "\\begin{array}{ccc} - i \\sin(\\omega dt)/ dt & - f & i g \\sin(kd)/d \\\\\n", + "f & - i \\sin(\\omega dt)/ dt & i g \\sin(\\ell d)/d \\\\\n", + "i H \\sin(kd)/d & i H \\sin(\\ell d)/d & -i \\sin(\\omega \\, dt)/ dt \\\\\n", + "\\end{array}\n", + "\\right]\n", + "\\left[\n", + "\\begin{array}{c} {\\cal U}\\\\ {\\cal V} \\\\ {\\cal H}\\\\ \n", + "\\end{array} \\right]\n", + "= 0.$$\n", + "\n", + "Setting the determinate to zero gives the dispersion relation:\n", + "
\n", + "(Full Numerical Dispersion Relation)\n", + "$$\n", + "\\frac {\\sin^2(\\omega\\,dt)}{dt^2} = f^2 + \\frac{gH}{d^2} \\left( \\sin^2(kd) + \\sin^2(\\ell d) \\right)$$\n", + "
\n", + "Still following section [Stability](#Stability:-the-CFL-condition), let\n", + "$\\lambda = \\exp (i \\omega\\, dt)$ and let\n", + "$q^2 = f^2 + {gH}/{d^2} \\left( \\sin^2(kd) + \\sin^2(\\ell d) \\right)$,\n", + "substitution into ([Full Numerical Dispersion Relation](#lab7:eq:full:numDisp)) gives\n", + "$$-(\\lambda-1/\\lambda)^2 = 4 q^2 dt^2$$ or equation ([Lambda Eqn](#lab7:eq:lambda))\n", + "again. For stability $\\lambda$ must be less than 1, so\n", + "$$1 > q^2 dt^2 = {dt^2} \\left(f^2 + {gH}/{d^2} \\left( \\sin^2(kd) + \\sin^2(\\ell d) \\right)\n", + "\\right)$$ The sines take their maximum values at $k=\\pi/(2d)$ and\n", + "$\\ell=\\pi/(2d)$ giving $$dt^2 < \\frac{1}{f^2 + 2 gH/d^2}$$ This is the\n", + "CFL condition for the full equations on the Arakawa A grid. Note that\n", + "the most unstable mode moves at $45^o$ to the grid." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/fourgrids.png', width='80%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "Figure Four More Grids. Note that E is simply a rotation of grid B by $45^\\circ$.\n", + "\n", + "\n", + "### Problem Six\n", + "\n", + "[lab7:prob:stability~2~d]:(#Problem-Six)\n", + "\n", + "Use the interactive example below to investigate stability of the various grids. Calculate the stability for each grids A, B, and C. Find the dt for stability (to one significant figure) given $f = 1 \\times 10^{-4}$s$^{-1}$, $g = 10$ m s$^{-2}$, $H = 4000$ m and $dx = 20$ km. Is it the same for all four grids? Why not? " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# grid A is grid 1, grid B is grid 2 and and grid C is grid 3\n", + "# ngrid is the number of grid points in x and y\n", + "# dt is the time step in seconds\n", + "# T is the time plotted is seconds to 4*3600 is 4 hours\n", + "interactive1.interactive1(grid=3, ngrid=11, dt=150, T=4*3600)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Accuracy\n", + "[accuracy]:(#Accuracy)\n", + "\n", + "To determine the accuracy of the spatial discretization, we will compare\n", + "the *numerical dispersion relation* ([Full Numerical Dispersion Relation](#lab7:eq:full:numDisp)) for\n", + "$dt \\rightarrow 0$ $$\\omega^2 = f^2 + gH \\frac {\\sin^2(kd)}{d^2} +\n", + "gH \\frac {\\sin^2 (\\ell d)}{d^2}$$ with the exact, *continuous dispersion\n", + "relation* \n", + "
\n", + "(Full Analytic Dispersion Relation)\n", + "$$\n", + "\\omega^2 = f^2 + gH(k^2+\\ell^2)$$\n", + "
\n", + "\n", + "We can plot the two dispersion relations as functions of $k$ and $\\ell$,\n", + "given the ratio of $d/R$, where $d$ is the grid size and $R$ is the\n", + "Rossby radius defined in the previous section. For example, the exact\n", + "$\\omega$ and its discrete approximation, using Grid A and $d/R = 1/2$,\n", + "can be compared in Figure [Accuracy Comparison](#lab7:fig:accuracy-demo)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/accuracy_demo.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Accuracy Comparison: A comparison of the exact $\\omega$ and the discrete approximation\n", + "using Grid A and with $d/R=1/2$.\n", + "
\n", + "\n", + "It is easy to see that the Grid A approximation is not accurate enough.\n", + "There are a number of other possibilities for grids, all of which\n", + "*stagger* the unknowns; that is, different variables are placed at\n", + "different spatial positions as discussed in\n", + "section [Staggered Grids](#Staggered-Grids).\n", + "\n", + "Four other grids, which are known as [Mesinger and Arakawa](#Ref:MesingerArakawa) B, C, D and E\n", + "grids as shown above [Figure Four More Grids](#FigureFourGrids). " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To work with these grids, we must introduce an averaging operator,\n", + "defined in terms of *half-points* on the grid:\n", + "$$\\overline{\\alpha}_{mn}^{x} = \\frac{\\alpha_{m+\\frac{1}{2},n} + \\alpha_{m-\\frac{1}{2},n}}{2}$$\n", + "and modify the difference operator\n", + "$$(\\delta_{x}\\alpha)_{mn} = \\frac{\\alpha_{m+\\frac{1}{2},n} -\n", + " \\alpha_{m-\\frac{1}{2},n}}{d}$$\n", + "\n", + "### Problem Seven\n", + "\n", + "[lab7:prob:finite~d~ifference~f~orm]:(#Problem-Seven)\n", + "\n", + "A. For grid B, write down the finite difference form of the shallow\n", + " water equations.\n", + "\n", + "B. For grid C, write down the finite difference form of the shallow\n", + " water equations.\n", + "\n", + "C. For grid D, write down the finite difference form of the shallow\n", + " water equations.\n", + "\n", + "The dispersion relation for each grid can be found in a manner analgous\n", + "to that for the A grid. For the B grid the dispersion relation is\n", + "$$\\left( \\frac {\\omega}{f} \\right)^2\n", + "= 1 + 4 \\left( \\frac {R}{d} \\right)^2 \\left( \\sin^2 \\frac {k d}{2}\\cos^2 \\frac{\\ell d}{2} + \\cos^2 \\frac {k d}{2}\\sin^2 \\frac{\\ell d}{2} \\right)$$\n", + "and for the C grid it is $$\\left( \\frac {\\omega}{f} \\right)^2\n", + "= \\cos^2 \\frac {k d}{2} \\cos^2 \\frac{\\ell d}{2} + 4 \\left( \\frac {R}{d} \\right)^2 \\left( \\sin^2 \\frac {k d}{2} + \\sin^2 \\frac {\\ell d}{2} \\right)$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Eight\n", + "[lab7:prob:disp~D~]:(#Problem-Eight)\n", + "\n", + "Find the dispersion relation for the D grid.\n", + "\n", + "In the interactive exercise below, you will enter the dispersion for\n", + "each of the grids. Study each plot carefully for accuracy of phase and\n", + "group speed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def disp_analytic(kd, ld, Rod=0.5):\n", + " Omegaof = 1 + Rod**2 * (kd**2 + ld**2)\n", + " return Omegaof\n", + "# define disp_A, disp_B, disp_C here and run the cell" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# replace the second disp_analytic with one of your numerical dispersion functions, e.g. disp_A\n", + "dispersion_2d.dispersion_2d(disp_analytic, disp_analytic, Rod=0.5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Nine\n", + "[lab7:prob:accuracy]:(#Problem-Nine)\n", + "\n", + "A. For $R/d = 2$ which grid gives the most accurate solution? As well as the closeness of fit of $\\omega/f$, also consider the group speed (gradient of the curve). The group speed is the speed at which wave energy propagates. \n", + "\n", + "B. For $R/d = 0.2$ which grid gives the most accurate solution? As well as the closeness of fit of $\\omega/f$, also consider the group speed (gradient of the curve). 
The group speed is the speed at which wave energy propagates. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Details\n", + "\n", + "### Starting the Simulation Full Equations \n", + "\n", + "[lab7start]:(#Starting-the-Simulation:-Full-Equations)\n", + "\n", + "The *leap-frog scheme* requires values of $u$, $v$ and $h$ at step 1 as\n", + "well as at step 0. However, the initial conditions only provide starting\n", + "values at step 0, so we must find some other way to obtain values at\n", + "step 1. There are various methods of obtaining the second set of\n", + "starting values, and the code used in this laboratory uses a\n", + "*predictor/corrector method* to obtain values at step 1 from the values\n", + "at step 0. For the simple equations this process was discussed in\n", + "section [Predictor-Corrector to Start](#Predictor-Corrector-to-Start). For the full equations the procedure goes\n", + "as follows:\n", + "\n", + "- the solution at step $1$ is predicted using a *forward Euler step*:\n", + " $$\\begin{array}{l}\n", + " u(1) = u(0) + dt (f v(0)-g \\delta_x h(0))\\\\\n", + " v(1) = v(0) + dt (-f u(0)-g \\delta_y h(0))\\\\\n", + " h(1) = h(0) + dt (-H(\\delta_x u(0) + \\delta_y v(0))) \n", + " \\end{array}$$\n", + "\n", + "- then, step $\\frac{1}{2}$ is estimated by averaging step $0$ and the\n", + " predicted step $1$: $$\\begin{array}{l} \n", + " u(1/2) = 1/2 (u(0)+u(1)) \\\\\n", + " v(1/2) = 1/2 (v(0)+v(1)) \\\\\n", + " h(1/2) = 1/2 (h(0)+h(1)) \n", + " \\end{array}$$\n", + "\n", + "- finally, the step $1$ approximation is corrected using leap frog\n", + " from $0$ to $1$ (here, we use only a half time-step\n", + " $\\frac{1}{2} dt$): $$\\begin{array}{l}\n", + " u(1) = u(0) + dt (f v(1/2)-g\\delta_x h(1/2)) \\\\\n", + " v(1) = v(0) + dt(-f u(1/2)-g\\delta_y h(1/2)) \\\\\n", + " h(1) = h(0) + dt(-H (\\delta_x u(1/2)+\\delta_y v(1/2))) \n", + " \\end{array}$$\n", + "\n", + "### Initialization\n", + "\n", + "\n", + "The initial conditions used for the stability demo for the full\n", + "equations are Poincare waves as described in the physical example in\n", + "Section [Physical Example, Poincaré Waves](#Physical-Example,-Poincar%C3%A9-Waves).\n", + "\n", + "Assuming a surface height elevation $$h = \\cos (kx+\\ell y)$$ \n", + "equations\n", + "([Full Eqns, Eqn 1](#lab7:eq:swea),[Full Eqns, Eqn 2](#lab7:eq:sweb)) give \n", + "$$\\begin{aligned}\n", + " u &=& \\frac {-1} {H(k^2+\\ell^2)} \\, \\left( k \\omega \\cos(kx+\\ell\n", + " y)+f \\ell \\sin(kx+\\ell y) \\right) \\nonumber \\\\\n", + " v &=& \\frac 1 {H(k^2+\\ell^2)} \\, \\left( -\\ell \\omega \\cos(kx+\\ell\n", + " y)+f k \\sin(kx+\\ell y)\\right) \\nonumber \\end{aligned}$$ \n", + "where\n", + "$\\ell$ and $k$ are selected by the user. It is assumed $g = 9.8$m/s$^2$,\n", + "$H = 400$m and $f = 10^{-4}$/s. The value of the frequency $\\omega$ is\n", + "given by the dispersion relation, ([Full Analytic Dispersion Relation](#lab7:eq:full_disp))." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Boundary Conditions\n", + "\n", + "The boundary conditions used for the stability demo for the full\n", + "equations are *“periodic”* in both x and y. Anything which propagates\n", + "out the left hand side comes in the right hand side, *etc*. 
This\n", + "condition forces a periodicity on the flow, the wavelengths of the\n", + "simulated waves must be sized so that an integral number of waves fits\n", + "in the domain.\n", + "\n", + "Specifically, for a $m \\times n$ grid, the boundary conditions for the\n", + "variable $h$ along the right and left boundaries are \n", + "$$\\begin{aligned}\n", + " h(i=1,j) &=& h(i=m-1,j)\\,\\,\\, {\\rm for}\\,\\,\\, j=2 \\,\\, {\\rm to} \\,\\,n-1\\\\ \\nonumber\n", + " h(i=m,j) &=& h(i=2,j) \\,\\,\\, {\\rm for}\\,\\,\\, j=2 \\,\\, {\\rm to} \\,\\,n-1\n", + " \\end{aligned}$$ and along the top and bottom boundaries\n", + "$$\\begin{aligned}\n", + " h(i,j=1) &=& h(i,j=n-1)\\,\\,\\, {\\rm for} \\,\\,\\, i=2 \\,\\, {\\rm to} \\,\\,m-1\\\\ \\nonumber\n", + " h(i,j=n) &=& h(i,j=2) \\,\\,\\, {\\rm for} \\,\\,\\, i=2 \\,\\, {\\rm to} \\,\\,m-1 .\n", + " \\end{aligned}$$ The conditions for $u$ and $v$ are identical." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Computational Mode \n", + "[lab7computational_mode]:(#Computational-Mode)\n", + "\n", + "In section [Stability](#Stability) it was determined that there are four\n", + "oscillatory modes. Consider letting $dt \\rightarrow 0$ in\n", + "([Lambda Eqn](#lab7:eq:lambda)). Two of the roots give $+1$ and two roots give $-1$.\n", + "\n", + "Consider the variable $u$ at time $p \\, dt$ and one time step later at\n", + "time $(p+1) \\, dt$.\n", + "$$\\frac{u_{m(p+1)}}{u_{mp}} = \\frac {{\\cal R}e \\{ {\\cal U} \\exp [i(kmd - \\omega (p+1)\\, dt)] \\} } {{\\cal R}e \\{ {\\cal U} \\exp [i(kmd - \\omega p\\, dt)] \\} } = \\exp(i \\omega \\, dt) = \\lambda$$\n", + "Thus, $\\lambda$ gives the ratio of the velocity at the next time step to\n", + "its value at the current time step. Therefore, as the time step gets\n", + "very small, physically we expect the system not to change much between\n", + "steps. In the limit of zero time step, the value of $u$ or $h$ should\n", + "not change between steps; that is $\\lambda$ should be $1$.\n", + "\n", + "A value of $\\lambda = -1$ implies $u$ changes sign between each step, no\n", + "matter how small the step. This mode is not a physical mode of\n", + "oscillation, but rather a *computational mode*, which is entirely\n", + "non-physical. It arises from using a centre difference scheme in time\n", + "and not staggering the grid in time. There are schemes that avoid\n", + "introducing such spurious modes (by staggering in time), but we won’t\n", + "discuss them here (for more information, see [Mesinger and Arakawa, 1976](#Ref:MesingerArakawa)\n", + " [Ch. II]). However, the usual practice in geophysical fluid dynamics is\n", + "to use the leap-frog scheme anyway (since it is second order in time)\n", + "and find a way to keep the computational modes “small”, in some sense.\n", + "\n", + "For a reasonably small value of $dt$, the computational modes have\n", + "$\\lambda \\approx -1$. Therefore, these modes can be eliminated almost\n", + "completely by averaging two adjacent time steps. To understand why this\n", + "is so, think of a computational mode $\\hat{u}_{mp}$ at time level $p$,\n", + "which is added to its counterpart,\n", + "$\\hat{u}_{m(p+1)} \\approx -\\hat{u}_{mp}$ at the next time step: *their\n", + "sum is approximately zero!* For the code in this lab, it is adequate to\n", + "average the solution in this fashion only every 101 time steps (though\n", + "larger models may need to be averaged more often). 
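+    "A minimal sketch of such an averaging step is shown below; the array layout is an assumption made for illustration, not the layout used by the lab code:\n",
+    "\n",
+    "```python\n",
+    "import numpy as np\n",
+    "\n",
+    "def damp_computational_mode(u, v, h):\n",
+    "    # u, v and h each hold two adjacent time levels, shape (2, ny, nx):\n",
+    "    # index 0 is level p-1 and index 1 is level p.  Averaging the two levels\n",
+    "    # nearly cancels a mode with lambda close to -1 while barely changing the\n",
+    "    # physical modes, whose lambda is close to +1.\n",
+    "    return 0.5 * (u[0] + u[1]), 0.5 * (v[0] + v[1]), 0.5 * (h[0] + h[1])\n",
+    "\n",
+    "# example call with placeholder arrays\n",
+    "u_avg, v_avg, h_avg = damp_computational_mode(np.zeros((2, 4, 4)),\n",
+    "                                              np.zeros((2, 4, 4)),\n",
+    "                                              np.zeros((2, 4, 4)))\n",
+    "```\n",
+    "\n",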
After the averaging\n", +    "is performed, the code must be restarted; see [Section on Starting](#Starting-the-Simulation-Full-Equations)."   ]  },  {   "cell_type": "markdown",   "metadata": {},   "source": [    "## Glossary \n", +    "\n", +    "**Poincaré waves:** These are waves that obey the dispersion relation $\\omega ^2=f^2+k^2 c^2$, where $c$ is the wave speed, $k$ is the magnitude of the wavenumber vector, $f$ is the Coriolis frequency, and $\\omega$ is the wave frequency.\n", +    "\n", +    "**CFL condition:** named after Courant, Friedrichs and Lewy, who first derived the relationship. This is a stability condition for finite difference schemes (for propagation problems) that corresponds physically to the idea that the continuous domain of dependence must contain the corresponding domain of dependence for the discrete problem.\n", +    "\n", +    "**dispersive wave:** Any wave whose speed varies with the wavenumber. As a consequence, waves of different wavelengths that start at the same location will move away at different speeds, and hence will spread out, or *disperse*.\n", +    "\n", +    "**dispersion relation:** For dispersive waves, this relation links the frequency (and hence also the phase speed) to the wavenumber, for a given wave. See also, *dispersive wave*.\n", +    "\n", +    "**staggered grid:** This refers to a problem with several unknown functions, where the discrete unknowns are not located at the same grid points; rather, they are *staggered* from each other. For example, it is often the case that in addition to the grid points themselves, some unknowns will be placed at the center of grid cells, or at the center of the sides of the grid cells."   ]  },  {   "cell_type": "markdown",   "metadata": {},   "source": [    "**leap-frog scheme:** This term is used to refer to time discretization schemes that use the centered difference formula to discretize the first time derivative in PDE problems. The resulting difference scheme relates the solution at the next time step to the solution *two* time steps previous. Hence, the even- and odd-numbered time steps are linked together, with the resulting computation performed in a *leap-frog* fashion.\n", +    "\n", +    "**periodic boundary conditions:** Spatial boundary conditions where the value of the solution at one end of the domain is required to be equal to the value on the other end (compare to Dirichlet boundary values, where the solution at both ends is fixed at a specific value or values). This enforces periodicity on the solution, and in terms of a fluid flow problem, these conditions can be thought of more intuitively as requiring that any flow out of the one boundary must return through the corresponding boundary on the other side.\n", +    "\n", +    "**computational mode:** When performing a modal analysis of a numerical scheme, this is a mode in the solution that does not correspond to any of the \"real\" (or physical) modes in the continuous problem. 
It is an artifact of the discretization process only, and can sometimes lead to spurious computational results (for example, with the leap-frog time stepping scheme)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## References\n", + "\n", + "\n", + "Cushman-Roisin, B., 1994: Introduction to Geophysical Fluid Dynamics, Prentice Hall.\n", + "\n", + "\n", + "Gill, A.E., 1982: Atmosphere-Ocean Dynamics, Academic Press, Vol. 30 International Geophysics Series, New York. \n", + "\n", + "\n", + "Mesinger, F. and A. Arakawa, 1976: Numerical Methods Used in Atmospheric Models,GARP Publications Series No.~17, Global Atmospheric Research Programme.\n", + "\n", + "\n", + "Pond, G.S. and G.L. Pickard, 1983: Introductory Dynamic Oceanography, Pergamon, Great Britain, 2nd Edition.\n", + "\n", + "\n", + "Press, W.H., S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, 1992: Numerical Recipes in FORTRAN: The Art of Scientific Computing, Cambridge University Press, Cambridge, 2n Edition.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "255.625px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab8/01-lab8.ipynb.txt b/_sources/notebooks/lab8/01-lab8.ipynb.txt new file mode 100644 index 0000000..997b311 --- /dev/null +++ b/_sources/notebooks/lab8/01-lab8.ipynb.txt @@ -0,0 +1,1994 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 8: Solution of the Quasi-geostrophic Equations \n", + "\n", + " Lin Yang & John M. 
Stockie \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems ##\n", + "\n", + "- [Problem 1:](#Problem-One) Discretization of the Jacobian term\n", + "- [Problem 2:](#Problem-Two) Numerical instability in the “straightforward” Jacobian \n", + "- [Problem 3:](#Problem-Three) Implement the SOR relaxation\n", + "- [Problem 4:](#Problem-Four) No-slip boundary conditions\n", + "- [Problem 5:](#Problem-Five) Starting values for the time integration\n", + "- [Problem 6:](#Problem-Six) Duplication of classical results" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Goals ##\n", + "\n", + "This lab is an introduction to the use of implicit schemes for the\n", + "solution of PDE’s, using as an example the quasi-geostrophic equations\n", + "that govern the large-scale circulation of the oceans.\n", + "\n", + "You will see that the discretization of the governing equations leads to\n", + "a large, sparse system of linear equations. The resulting matrix problem\n", + "is solved with relaxation methods, one of which you will write the code\n", + "for, by modifying the simpler Jacobi relaxation. There are two types of\n", + "boundary conditions typically used for this problem, one of which you\n", + "will program yourself – your computations are easily compared to\n", + "previously-obtained “classical” results." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Learning Objectives ##\n", + "\n", + "After reading and working through this lab you will be able to:\n", + "* Explain one reason why one may need to solve a large system of linear equations even though the underlying method is explicit\n", + "* Describe the relaxation method\n", + "* Rescale a partial-differential equation\n", + "* Write down the center difference approximation for the Laplacian operator\n", + "* Describe what a ghost point is" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "There are no required readings for this lab. If you would like some\n", + "additional background beyond the material in the lab itself, then you\n", + "may refer to the references listed below:\n", + "\n", + "- **Equations of motion:**\n", + "\n", + " - [Pedlosky](#Ref:Pedlosky) Sections 4.6 & 4.11 (derivation of QG\n", + " equations)\n", + "\n", + "- **Nonlinear instability:**\n", + "\n", + " - [Mesinger & Arakawa](#Ref:MesingerArakawa) (classic paper with\n", + " description of instability and aliasing)\n", + "\n", + " - [Arakawa & Lamb](#Ref:ArakawaLamb) (non-linear instability in the QG\n", + " equations, with the Arakawa-Jacobian)\n", + "\n", + "- **Numerical methods:**\n", + "\n", + " - [Strang](#Ref:Strang) (analysis of implicit schemes)\n", + "\n", + " - [McCalpin](#Ref:McCalpin) (QGbox model)\n", + "\n", + "- **Classical numerical results:**\n", + "\n", + " - [Veronis](#Ref:Veronis) (numerical results)\n", + "\n", + " - [Bryan](#Ref:Bryan) (numerical results)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import context\n", + "from IPython.display import Image\n", + "# import the quiz script\n", + "from numlabs.lab8 import quiz8 as quiz" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction ##\n", + "\n", + "An important aspect in the study of large-scale circulation in the ocean\n", + "is the response of the ocean to wind stress. 
Solution of this problem\n", + "using the full Navier-Stokes equations is quite complicated, and it is\n", + "natural to look for some way to simplify the governing equations. A\n", + "common simplification in many models of large-scale, wind-driven ocean\n", + "circulation, is to assume a system that is homogeneous and barotropic.\n", + "\n", + "It is now natural to ask:\n", + "\n", + "> *Does the simplified model capture the important dynamics in the\n", + "> real ocean?*\n", + "\n", + "This question can be investigated by solving the equations numerically,\n", + "and comparing the results to observations in the real ocean. Many\n", + "numerical results are already available, and so the purpose of this lab\n", + "is to introduce you to the numerical methods used to solve this problem,\n", + "and to compare the computed results to those from some classical papers\n", + "on numerical ocean simulations.\n", + "\n", + "Some of the numerical details (in Sections [Right Hand Side](#Right-Hand-Side), [Boundary Conditions](#Boundary-Conditions), [Matrix Form of Discrete Equations](#Matrix-Form-of-the-Discrete-Equations), [Solution of the Poisson Equation by Relaxation](#Solution-of-the-Poisson-Equation-by-Relaxation)\n", + "and the\n", + "appendices) are quite technical, and may be passed over the first time\n", + "you read through the lab. You can get a general idea of the basic\n", + "solution procedure without them. However, you should return to them\n", + "later and understand the material contained in them, since these\n", + "sections contain techniques that are commonly encountered when solving\n", + "PDE’s, and an understanding of these sections is required for you to\n", + "answer the problems in the Lab.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## The Quasi-Geostrophic Model ##\n", + "\n", + "Consider a rectangular ocean with a flat bottom, as pictured in\n", + "[Figure Model Ocean](#Figure-Model-Ocean), and ignore curvature effects, by confining the region of interest to a *mid-latitude $\\beta$-plane*." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/rect.png',width='45%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Model Ocean The rectangular ocean with flat bottom, ignoring curvature\n", + "effects.\n", + "
\n", + "\n", + "More information on what is a $\\beta$-plane and on the neglect of\n", + "curvature terms in the $\\beta$-plane approximation is given in the\n", + "appendix.\n", + "\n", + "If we assume that the ocean is homogeneous (it has constant density\n", + "throughout), then the equations governing the fluid motion on the\n", + "$\\beta$-plane are: \n", + "\n", + "
(X-Momentum Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac{\\partial u}{\\partial t} + u \\frac {\\partial u}{\\partial x} + v \\frac {\\partial u}{\\partial y} + w \\frac{\\partial u}{\\partial z} - fv = - \\, \\frac{1}{\\rho} \\, \\frac {\\partial p}{\\partial x}\n", + "+ A_v \\, \\frac{\\partial^2 u}{\\partial z^2} + A_h \\, \\nabla^2 u\n", + "\\end{equation}\n", + "\n", + "
(Y-Momentum Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} + w \\frac{\\partial v}{\\partial z} + fu = - \\, \\frac{1}{\\rho} \\, \\frac{\\partial p}{\\partial y}\n", + "+ A_v \\, \\frac{\\partial^2 v}{\\partial z^2} + A_h \\, \\nabla^2 v\n", + "\\end{equation}\n", + "\n", + "
(Hydrostatic Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac{\\partial p}{\\partial z} = - \\rho g\n", + "\\end{equation}\n", + "\n", + "
(Continuity Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac {\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} = - \\, \\frac{\\partial w}{\\partial z}\n", + "\\end{equation}\n", + "\n", + "where\n", + "\n", + "- ([X-Momentum Eqn](#eq:xmom)) and ([Y-Momentum Eqn](#eq:ymom)) are the lateral momentum equations,\n", + "\n", + "- ([Hydrostatic Eqn](#eq:hydrostatic)) is the hydrostatic balance (and replaces the vertical momentum\n", + " equation), and\n", + "\n", + "- ([Continuity Eqn](#eq:continuity)) is the continuity (or incompressibility or conservation of volume) condition.\n", + "\n", + "The variables and parameters appearing above are:\n", + "\n", + "- $(u,v,w)$, the fluid velocity components;\n", + "\n", + "- $f(y)$, the Coriolis parameter (assumed to be a linear function of\n", + " $y$);\n", + "\n", + "- $\\rho$, the density (assumed constant for a homogeneous fluid);\n", + "\n", + "- $A_v$ and $A_h$, the vertical and horizontal coefficients of\n", + " viscosity, respectively (constants);\n", + "\n", + "- $g$, the gravitational acceleration (constant).\n", + "\n", + "Equations ([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)) form a non-linear system of PDE’s, for which there are many\n", + "numerical methods available. However, due to the complexity of the\n", + "equations, the methods themselves are *very complex*, and\n", + "consume a large amount of CPU time. It is therefore advantageous for us\n", + "to reduce the equations to a simpler form, for which common, and more\n", + "efficient numerical solution techniques can be used.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "By applying a sequence of physically-motivated approximations (see\n", + "[Appendix Simplification of the QG Model Equations](#Simplification-of-the-QG-Model-Equations])) and by using the boundary conditions, the system([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)) can be\n", + "reduced to a single PDE: \n", + "
\n", + "(Quasi-Geotrophic Eqn)\n", + "$$\n", + " \\frac{\\partial}{\\partial t} \\, \\nabla_h^2 \\psi + {\\cal J} \\left( \\psi, \\nabla_h^2 \\psi \\right)\n", + " + \\beta \\, \\frac {\\partial \\psi}{\\partial x} = \\frac{1}{\\rho H} \\, \\nabla_h \\times \\tau - \\kappa\n", + " \\, \\nabla_h^2 \\psi + A_h \\, \\nabla_h^4 \\psi $$\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " where\n", + "\n", + "- $\\psi$ is the stream function, defined by\n", + " $$u = - \\frac{\\partial \\psi}{\\partial y},$$\n", + " $$v = \\frac{\\partial \\psi}{\\partial x}$$\n", + "\n", + "- $$\\nabla_h = \\left(\\frac{\\partial}{\\partial x},\\frac{\\partial}{\\partial y}\\right)$$ \n", + " is the “horizontal”\n", + " gradient operator, so-called because it involves only derivatives in\n", + " $x$ and $y$;\n", + "\n", + "- $${\\cal J} (a,b) = \\frac{\\partial a}{\\partial x} \\frac{\\partial\n", + " b}{\\partial y} - \\frac{\\partial a}{\\partial y} \\frac{\\partial b}{\\partial x}$$ \n", + " is the *Jacobian* operator;\n", + "\n", + "- $\\vec{\\tau}(x,y) = \\left(\\,\\tau_1(x,y),\\tau_2(x,y)\\,\\right)$ is the\n", + " wind stress boundary condition at the surface $z=0$. A simple form\n", + " of the wind stress might assume an ocean “box” that extends from\n", + " near the equator to a latitude of about $60^\\circ$, for which\n", + " typical winds are easterly near the equator and turn westerly at\n", + " middle latitudes. A simple function describing this is\n", + " $$\\vec{\\tau} = \\tau_{max} (-\\cos y, 0),$$ \n", + " which is what we will use in this lab. \n", + " \n", + " More complicated wind stress functions are possible. See [McCalpin’s](#Ref:McCalpin)\n", + " QGBOX documentation [p. 24] for another\n", + " example." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- $\\beta = df/dy$ is a constant, where $f(y) = f_0+\\beta y$ (see\n", + " [Appendix Definition of the Beta-plane](#Definition-of-the-Beta-plane));\n", + "\n", + "- $\\kappa = {1}/{H} \\left[ (A_v f_0)/{2} \\right]^{1/2}$ is the bottom friction scaling (constant); and\n", + "\n", + "- $H$ is the vertical depth of the water column (constant).\n", + "\n", + "Notice that the original (second order) system of four equations in four\n", + "unknowns ($u$, $v$, $w$, $p$) has now been reduced to a single (fourth\n", + "order) PDE in one unknown function, $\\psi$. It will become clear in the\n", + "next section just how much simpler the system of equations has become …\n", + "\n", + "Before going on, though, we need to close the system with the\n", + "*boundary conditions* for the stream function $\\psi$. We\n", + "must actually consider two cases, based on whether or not the lateral\n", + "eddy viscosity parameter, $A_h$, is zero:\n", + "\n", + "- **if $A_h=0$:** the boundary conditions are\n", + " *free-slip*; that is, $\\psi=0$ on the boundary.\n", + "\n", + "- **if $A_h\\neq 0$:** the boundary conditions are\n", + " *no-slip*; that is both $\\psi$ and its normal\n", + " derivative $\\nabla\\psi\\cdot\\hat{n}$ are zero on the boundary (where\n", + " $\\hat{n}$ is the normal vector to the boundary)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Scaling the Equations of Motion ###\n", + "\n", + "In physical problems, it is not uncommon for some of the quantities of\n", + "interest to differ in size by many orders of magnitude. This is of\n", + "particular concern when computing numerical solutions to such problems,\n", + "since then round-off errors can begin to pollute the computations (see\n", + "Lab 2).\n", + "\n", + "This is also the case for the QG equations, where the parameters have a\n", + "large variation in size. The QG model parameters, and typical numerical\n", + "values, are given in [Table of Parameters](#tab:parameters). 
For such problems it is customary to *rescale* the\n", + "variables in such a way that the size differences are minimized.\n", + "\n", + "\n", + "**Problem Parameters**\n", + "\n", + "| Symbol | Name | Range of Magnitude | Units |\n", + "| :----------: | :-------------------------: | :---------------------------------------: | :----------------: |\n", + "| $R$ | Earth’s radius | $6.4 \\times 10^6$ | $m$ |\n", + "|$\\Omega$ | Angular frequency for Earth | $7.27 \\times 10^{-5}$ | $s^{-1}$ |\n", + "| $H$ | Depth of active layer | $100 \\rightarrow 4000$ | $m$ |\n", + "| $B$ | Length and width of ocean | $1.0 \\rightarrow 5.0 \\times 10^6$ | $m$ |\n", + "| $\\rho$ | Density of water | $10^3$ | $kg/m^3$ |\n", + "| $A_h$ | Lateral eddy viscosity | $0$ or $10^1 \\rightarrow 10^4$ | $m^2/s$ |\n", + "| $A_v$ | Vertical eddy viscosity | $10^{-4} \\rightarrow 10^{-1}$ | $m^2/s$ |\n", + "| $\\tau_{max}$ | Maximum wind stress | $10^{-2} \\rightarrow 1$ | $kg m^{-1} s^{-2}$ |\n", + "| $\\theta_0$ | Latitude | $0 \\rightarrow \\frac{\\pi}{3}$ | - |" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Derived Quantities**\n", + "\n", + "| Symbol | Name | Range of Magnitude | Units |\n", + "| :----------: | :-------------------------------------------: | :----------------------------------------: | :----------------: |\n", + "| $\\beta$ | $\\beta =2\\Omega \\cos \\theta_0 / R$ | $1.1 \\rightarrow 2.3 \\times 10^{-11}$ | $m^{-1} s^{-1}$ |\n", + "| $f_0$ | $f_0 = 2 \\Omega \\sin \\theta_0$ | $0.0 \\rightarrow 1.3 \\times 10^{-4}$ | $s^{-1}$ |\n", + "| $U_0$ | Velocity scale = $\\tau_{max}/(\\beta\\rho H B)$ | $10^{-5} \\rightarrow 10^{-1}$ | $m s^{-1}$ | \n", + "| $\\kappa$ | bottom friction parameter | $0.0 \\rightarrow 10^{-5}$ | $m^2 s^{-2}$ |\n", + "\n", + "**Non-dimensional Quantities**\n", + "\n", + "| Symbol / Name | Range of Magnitude for Quantity |\n", + "| :----------------------------------------------------: | :------------------------------------------: |\n", + "| $\\epsilon$ / Vorticity ratio = $U_0/(\\beta B^2)$ | (computed) | \n", + "| $\\frac{\\tau_{max}}{\\epsilon\\beta^2 \\rho H B^3}$ | $10^{-12} \\rightarrow 10^{-14}$ |\n", + "| $\\frac{\\kappa}{\\beta B}$ | $4 \\times 10^{-4} \\rightarrow 6 \\times 10^1$ |\n", + "| $\\frac{A_h}{\\beta B^3}$ | $10^{-7} \\rightarrow 10^{-4}$ |\n", + "\n", + "
\n", + "**Table of Parameters**\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let us go through this scaling process for the evolution\n", + "equation [(Quasi-Geostrophic Eqn)](#eq:quasi) for the stream function, which is reproduced\n", + "here for easy comparison:\n", + "\n", + "$$\\frac{\\partial}{\\partial t} \\nabla^2_h \\psi = - \\beta \\frac{\\partial \\psi}{\\partial x} - {\\cal J}(\\psi, \\nabla_h^2\\psi)+ \\frac{1}{\\rho H} \\nabla_h \\times \\vec{\\tau} - \\kappa \\nabla_h^2 \\psi + A_h \\nabla_h^4 \\psi$$ \n", + " \n", + "The basic idea is to find typical\n", + "*scales of motion*, and then redefine the dependent and\n", + "independent variables in terms of these scales to obtain\n", + "*dimensionless variables*.\n", + "\n", + "For example, the basin width and length, $B$, can be used as a scale for\n", + "the dependent variables $x$ and $y$. Then, we define dimensionless\n", + "variables \n", + "
\n", + "(x-scale eqn)\n", + "$$x^\\prime = \\frac{x}{B}$$\n", + "
\n", + "(y-scale eqn)\n", + "$$y^\\prime = \\frac{y}{B}$$\n", + "
\n", + "\n", + "Notice that where $x$ and $y$ varied between\n", + "0 and $B$ (where $B$ could be on the order of hundreds of kilometres),\n", + "the new variables $x^\\prime$ and $y^\\prime$ now vary between 0 and 1\n", + "(and so the ocean is now a unit square).\n", + "\n", + "Similarly, we can redefine the remaining variables in the problem as\n", + "
\n", + "(t-scale eqn)\n", + "$$\n", + " t^\\prime = \\frac{t}{\\left(\\frac{1}{\\beta B}\\right)} $$\n", + "
\n", + "($\\psi$-scale eqn)\n", + "$$ \\psi^\\prime = \\frac{\\psi}{\\epsilon \\beta B^3} $$\n", + "
\n", + "($\\tau$-scale eqn)\n", + "$$ \\vec{\\tau}^\\prime = \\frac{\\vec{\\tau}}{\\tau_{max}}\n", + " $$
\n", + "\n", + "where the scales have been\n", + "specially chosen to represent typical sizes of the variables. Here, the\n", + "parameter $\\epsilon$ is a measure of the the ratio between the “relative\n", + "vorticity” ($\\max|\\nabla_h^2 \\psi|$) and the planetary vorticity (given\n", + "by $\\beta B$)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we need only substitute for the original variables in the\n", + "equations, and replace derivatives with their dimensionless\n", + "counterparts; for example, using the chain rule,\n", + "$$\\frac{\\partial}{\\partial x} = \\frac{\\partial x^\\prime}{\\partial x}\n", + "\\frac{\\partial}{\\partial x^\\prime}.$$ \n", + "Then the equation of motion becomes \n", + "\n", + "\n", + "(Rescaled Quasi-Geostrophic Eqn)\n", + "$$ \\frac{\\partial}{\\partial t^\\prime} \\nabla^{\\prime 2}_h \\psi^\\prime = - \\, \\frac{\\partial \\psi^\\prime}{\\partial x^\\prime} - \\epsilon {\\cal J^\\prime}(\\psi^\\prime, \\nabla_h^{\\prime 2}\\psi^\\prime) + \\frac{\\tau_{max}}{\\epsilon \\beta^2 \\rho H B^3} \\nabla^\\prime_h \\times \\vec{\\tau}^\\prime - \\, \\frac{\\kappa}{\\beta B} \\nabla_h^{\\prime 2} \\psi^\\prime + \\frac{A_h}{\\beta B^3} \\nabla_h^{\\prime 4} \\psi^\\prime $$ \n", + "The superscript\n", + "“$\\,^\\prime$” on $\\nabla_h$ and ${\\cal J}$ signify that the derivatives\n", + "are taken with respect to the dimensionless variables. Notice that each\n", + "term in ([Rescaled Quasi-Geostrophic Eqn](#eq:qg-rescaled)) is now dimensionless, and that there are\n", + "now 4 dimensionless combinations of parameters \n", + "$$\\epsilon, \\;\\; \\frac{\\tau_{max}}{\\epsilon \\beta^2 \\rho H B^3}, \\;\\; \\frac{\\kappa}{\\beta B}, \\;\\; \\mbox{ and} \\;\\; \\frac{A_h}{\\beta B^3}.$$ \n", + "These four expressions define four new\n", + "dimensionless parameters that replace the original (unscaled) parameters\n", + "in the problem.\n", + "\n", + "The terms in the equation now involve the dimensionless stream function,\n", + "$\\psi^\\prime$, and its derivatives, which have been scaled so that they\n", + "are now of order 1 in magnitude. The differences in sizes between terms\n", + "in the equation are now embodied solely in the four dimensionless\n", + "parameters. A term which is multiplied by a small parameter is thus\n", + "truly small in comparison to the other terms, and hence additive\n", + "round-off errors will not contribute substantially to a numerical\n", + "solution based on this form of the equations.\n", + "\n", + "For the remainder of this lab, we will use the scaled version of the\n", + "equations. Consequently, the notation will be simplified by dropping the\n", + "“primes” on the dimensionless variables. But, **do not\n", + "forget**, that any solution (numerical or analytical) from the\n", + "scaled equations must be converted back into dimensional variables\n", + "using [the scale equations](#eq:xscale)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Discretization of the QG equations ##\n", + "\n", + "At first glance, it is probably not clear how one might discretize the\n", + "QG equation ([Rescaled Quasi-Geostrophic Eqn](#eq:qg-rescaled)) from the previous section. This equation is an evolution\n", + "equation for $\\nabla_h^2 \\psi$ (the Laplacian of the stream function)\n", + "but has a right hand side that depends not only on $\\nabla_h^2 \\psi$,\n", + "but also on $\\psi$ and $\\nabla_h^4 \\psi$. 
The problem may be written in\n", + "a more suggestive form, by letting $\\chi = \\partial\\psi/\\partial t$.\n", + "Then, the ([Rescaled Quasi-Geostrophic Eqn](#eq:qg-rescaled)) becomes \n", + "\n", + "\n", + "(Poisson Eqn)\n", + "$$\\nabla_h^2 \\chi = F(x,y,t), \n", + "$$\n", + "\n", + "where $F(x,y,t)$ contains all of the terms\n", + "except the time derivative. We will see that the discrete version of\n", + "this equation is easily solved for the new unknown variable $\\chi$,\n", + "after which \n", + "
\n", + "$$\\frac{\\partial\\psi}{\\partial t} = \\chi\n", + "$$
\n", + "\n", + "may be used to evolve the stream function in\n", + "time.\n", + "\n", + "The next two sections discuss the spatial and temporal discretization,\n", + "including some details related to the right hand side, the boundary\n", + "conditions, and the iterative scheme for solving the large sparse system\n", + "of equations that arises from the Poisson equation for $\\chi$. Following\n", + "that is an summary of the steps in the solution procedure.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Spatial Discretization\n", + "\n", + "Assume that we are dealing with a square ocean, with dimensions\n", + "$1\\times 1$ (in non-dimensional coordinates) and begin by dividing the\n", + "domain into a grid of discrete points\n", + "$$x_i = i \\Delta x, \\;\\;i = 0, 1, 2, \\dots, M$$\n", + "$$y_j = j \\Delta y, \\;\\;j = 0, 1, 2, \\dots, N$$\n", + "where $\\Delta x = 1/M$\n", + "and $\\Delta y = 1/N$. In order to simplify the discrete equations, it\n", + "will be helpful to assume that $M=N$, so that\n", + "$\\Delta x = \\Delta y \\equiv d$. We can then look for approximate values\n", + "of the stream function at the discrete points; that is, we look for\n", + "$$\\Psi_{i,j} \\approx \\psi(x_i,y_j)$$ \n", + "(and similarly for $\\chi_{i,j}$).\n", + "The computational grid and placement of unknowns is pictured in\n", + "Figure [Spatial Grid](#Spatial-Grid)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/spatial.png',width='45%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Spatial Grid\n", + "
\n", + "\n", + "Derivatives are replaced by their centered, second-order finite\n", + "difference approximations \n", + "$$\n", + " \\left. \\frac{\\partial \\Psi}{\\partial x} \\right|_{i,j}\n", + " \\approx \n", + " \\frac{\\Psi_{i+1,j}-\\Psi_{i-1,j}}{2d}\n", + " \\left. \\frac{\\partial^2 \\Psi}{\\partial x^2} \\right|_{i,j} \n", + " \\approx\n", + " \\frac{\\Psi_{i+1,j} - 2 \\Psi_{i,j} + \\Psi_{i-1,j}}{d^2}\n", + "$$ \n", + "and similarly for the\n", + "$y$-derivatives. The discrete analogue of the ([Poisson equation](#eq:poisson)),\n", + "centered at the point $(x_i,y_j)$, may be written as\n", + "$$\\frac{\\chi_{i+1,j} - 2\\chi_{i,j} +\\chi_{i-1,j}}{d^2} + \n", + " \\frac{\\chi_{i,j+1} - 2\\chi_{i,j} +\\chi_{i,j-1}}{d^2} = F_{i,j}$$ \n", + "or,\n", + "after rearranging,\n", + "\n", + "\n", + "(Discrete $\\chi$ Eqn)\n", + "$$\\chi_{i+1,j}+\\chi_{i-1,j}+\\chi_{i,j+1}+\\chi_{i,j-1}-4\\chi_{i,j} =\n", + " d^2F_{i,j}.\n", + "$$\n", + "\n", + "Here, we’ve used\n", + "$F_{i,j} = F(x_i,y_j,t)$ as the values of the right hand side function\n", + "at the discrete points, and said nothing of how to discretize $F$ (this\n", + "will be left until [Right Hand Side](#Right-Hand-Side). The ([Discrete $\\chi$ equation](#eq:discrete-chi)) is an equation centered at the grid point $(i,j)$, and relating\n", + "the values of the approximate solution, $\\chi_{i,j}$, at the $(i,j)$\n", + "point, to the four neighbouring values, as described by the *5-point difference stencil* pictured in\n", + "[Figure Stencil](#fig:stencil)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/2diff.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Stencil:\n", + "The standard 5-point centered difference stencil for the Laplacian\n", + "(multiply by $\\frac{1}{d^2}$ to get the actual coefficients.
\n", + "\n", + "These stencil diagrams are a compact way of representing the information\n", + "contained in finite difference formulas. To see how useful they are, do\n", + "the following:\n", + "\n", + "- Choose the point on the grid in [Figure Spatial Grid](#Spatial-Grid) that\n", + " you want to apply the difference formula ([Discrete $\\chi$ Eqn](#eq:discrete-chi)).\n", + "\n", + "- Overlay the difference stencil diagram on the grid, placing the\n", + " center point (with value $-4$) on this point.\n", + "\n", + "- The numbers in the stencil are the multiples for each of the\n", + " unknowns $\\chi_{i,j}$ in the difference formula.\n", + "\n", + "An illustration of this is given in [Figure Overlay](#fig:overlay)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/2diffgrid.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "**Figure Overlay:**\n", + " The 5-point difference stencil overlaid on the grid.\n", + "
\n", + " \n", + "Before going any further with the discretization, we need to provide a\n", + "few more details for the discretization of the right hand side function,\n", + "$F(x,y,t)$, and the boundary conditions. If you’d rather skip these for\n", + "now, and move on to the time discretization\n", + "([Section Temporal Discretization](#Temporal-Discretization))\n", + "or the outline of the solution procedure\n", + "([Section Outline of Solution Procedure](#Outline-of-Solution-Procedure)), then you may do so now." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Right Hand Side\n", + "\n", + "The right hand side function for the ([Poisson equation](#eq:poisson)) is reproduced here\n", + "in scaled form (with the “primes” dropped): \n", + "$$F(x,y,t) = - \\, \\frac{\\partial \\psi}{\\partial x} - \\epsilon {\\cal J}(\\psi,\\nabla_h^{2}\\psi) + \\frac{\\tau_{max}}{\\epsilon \\beta^2 \\rho H B^3} \\nabla_h \\times \\vec{\\tau} - \\frac{\\kappa}{\\beta B} \\nabla_h^{2} \\psi + \\frac{A_h}{\\beta B^3} \\nabla_h^{4} \\psi$$\n", + "\n", + "Alternatively, the Coriolis and Jacobian terms can be grouped together\n", + "as a single term: \n", + "$$- \\, \\frac{\\partial\\psi}{\\partial x} - \\epsilon {\\cal J}(\\psi, \\nabla^2_h\\psi) = - {\\cal J}(\\psi, y + \\epsilon \\nabla^2_h\\psi)$$\n", + "\n", + "Except for the Jacobian term, straightforward second-order centered\n", + "differences will suffice for the Coriolis force\n", + "$$\\frac{\\partial\\psi}{\\partial x} \\approx \\frac{1}{2d} \\left(\\Psi_{i+1,j} - \\Psi_{i-1,j}\\right),$$ \n", + "the wind stress \n", + "$$\\nabla_h \\times \\vec{\\tau} \\approx\n", + " \\frac{1}{2d} \\, \n", + " \\left( \\tau_{2_{i+1,j}}-\\tau_{2_{i-1,j}} - \n", + " \\tau_{1_{i,j+1}}+\\tau_{1_{i,j-1}} \\right),$$\n", + "and the second order viscosity term \n", + "$$\\nabla_h^2 \\psi \\approx\n", + " \\frac{1}{d^2} \\left( \\Psi_{i+1,j}+\\Psi_{i-1,j}+\\Psi_{i,j+1} +\n", + " \\Psi_{i,j-1} - 4 \\Psi_{i,j} \\right).$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The higher order (biharmonic) viscosity term, $\\nabla_h^4 \\psi$, is\n", + "slightly more complicated. The difference stencil can be derived in a\n", + "straightforward way by splitting into $\\nabla_h^2 (\\nabla_h^2 \\psi)$ and\n", + "applying the discrete version of the Laplacian operator twice. The\n", + "resulting difference formula is \n", + "
\n", + "(Bi-Laplacian)\n", + "$$ \\nabla_h^4 \\psi \n", + " = \\nabla_h^2 ( \\nabla_h^2 \\psi ) $$ $$\n", + " \\approx \\frac{1}{d^4} \\left( \\Psi_{i+2,j} + \\Psi_{i,j+2} +\n", + " \\Psi_{i-2,j} + \\Psi_{i,j-2} \\right. + \\, 2 \\Psi_{i+1,j+1} + 2 \\Psi_{i+1,j-1} + 2 \\Psi_{i-1,j+1} +\n", + " 2 \\Psi_{i-1,j-1}\n", + " \\left. - 8 \\Psi_{i,j+1} - 8 \\Psi_{i-1,j} - 8 \\Psi_{i,j-1} - 8 \\, \\Psi_{i+1,j} + 20 \\Psi_{i,j} \\right)\n", + " $$
\n", + "which is pictured in the difference stencil in [Figure Bi-Laplacian Stencil](#fig:d4stencil)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/4diff.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "**Figure Bi-Laplacian Stencil:**\n", + "13-point difference stencil for the centered difference formula for\n", + "$\\nabla_h^4$ (multiply by $\\frac{1}{d^4}$ to get the actual\n", + "coefficients).
\n", + "\n", + "The final term is the Jacobian term, ${\\cal J}(\\psi,\n", + "\\nabla^2_h\\psi)$ which, as you might have already guessed, is the one\n", + "that is going to give us the most headaches. To get a feeling for why\n", + "this might be true, go back to ([Rescaled Quasi-Geostropic Equation](#eq:qg-rescaled))  and notice that the only\n", + "nonlinearity arises from this term. Typically, it is the nonlinearity in\n", + "a problem that leads to difficulties in a numerical scheme. Remember the\n", + "formula given for ${\\cal J}$ in the previous section:\n", + "\n", + "\n", + "(Jacobian: Expansion 1)\n", + "$${\\cal J}(a,b) = \\frac{\\partial a}{\\partial x} \\, \\frac{\\partial b}{\\partial y} - \n", + " \\frac{\\partial a}{\\partial y} \\, \\frac{\\partial b}{\\partial x}\n", + " $$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Problem One\n", + "> Apply the standard centered difference formula\n", + " (see Lab 1 if you need to refresh you memory) to get a difference\n", + " approximation to the Jacobian based on ([Jacobian: Expansion 1](#eq:jacob1). You will use this later in\n", + " [Problem Two](#Problem-Two)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We’ve seen before that there is usually more than one way to discretize\n", + "a given expression. This case is no different, and there are many\n", + "possible ways to derive a discrete version of the Jacobian. Two other\n", + "approaches are to apply centered differences to the following equivalent\n", + "forms of the Jacobian: \n", + "
\n", + "(Jacobian: Expansion 2)\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial x} \\, \\left( a \\frac{\\partial b}{\\partial y} \\right) -\n", + " \\frac{\\partial}{\\partial y} \\left( a \\frac{\\partial b}{\\partial x} \\right)\n", + " $$
\n", + "
\n", + "(Jacobian: Expansion 3)\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial y} \\, \\left( b \\frac{\\partial a}{\\partial x} \\right) -\n", + " \\frac{\\partial}{\\partial x} \\left( b \\frac{\\partial a}{\\partial y} \\right)\n", + " $$\n", + "
\n", + " \n", + "Each formula leads to a different discrete formula, and we will see in\n", + "[Section Aliasing Error and Nonlinear Instability](#Aliasing-Error-and-Nonlinear-Instability)\n", + " what effect the non-linear term has on\n", + "the discrete approximations and how the use of the different formulas\n", + "affect the behaviour of the numerical scheme. Before moving on, try to\n", + "do the following two quizzes." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Quiz on Jacobian Expansion \\#2\n", + "\n", + "Using second order centered differences, what is the discretization of the second form of the Jacobian given by\n", + "\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial x} \\, \\left( a \\frac{\\partial b}{\\partial y} \\right) -\n", + " \\frac{\\partial}{\\partial y} \\left( a \\frac{\\partial b}{\\partial x} \\right)\n", + " $$\n", + "\n", + "- A: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} \\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- B: $$\\frac 1 {4d^2} \\left[ a_{i+1,j} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- C: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1/2,j} - a_{i-1/2,j} \\right) \\left( b_{i,j+1/2} - b_{i,j-1/2} \\right) - \\left( a_{i,j+1/2} - a_{i,j-1/2} \\right) \\left( b_{i+1/2,j} - b_{i-1/2,j} \\right) \\right]$$\n", + "\n", + "- D: $$\\frac 1 {4d^2} \\left[ b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) - b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) - b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) + b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- E: $$\\frac 1 {4d^2} \\left[ a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j-1} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i-1,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- F: $$\\frac 1 {4d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} \\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- G: $$\\frac 1 {4d^2} \\left[ b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) - b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) - b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) + b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) \\right]$$\n", + " \n", + "In the following, replace 'x' by 'A', 'B', 'C', 'D', 'E', 'F', 'G', or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.jacobian_2(answer = 'x'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Quiz on Jacobian Expansion #3 \n", + "\n", + "Using second order centered differences, what is the discretization of the third form of the Jacobian given by\n", + "\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial y} \\, \\left( b \\frac{\\partial a}{\\partial x} \\right) -\n", + " \\frac{\\partial}{\\partial x} \\left( b \\frac{\\partial a}{\\partial y} \\right)\n", + " $$\n", + "\n", + "- A: - A: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} 
\\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- B: $$\\frac 1 {4d^2} \\left[ a_{i+1,j} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- C: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1/2,j} - a_{i-1/2,j} \\right) \\left( b_{i,j+1/2} - b_{i,j-1/2} \\right) - \\left( a_{i,j+1/2} - a_{i,j-1/2} \\right) \\left( b_{i+1/2,j} - b_{i-1/2,j} \\right) \\right]$$\n", + "\n", + "- D: $$\\frac 1 {4d^2} \\left[ b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) - b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) - b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) + b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- E: $$\\frac 1 {4d^2} \\left[ a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j-1} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i-1,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- F: $$\\frac 1 {4d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} \\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- G: $$\\frac 1 {4d^2} \\left[ b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) - b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) - b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) + b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) \\right]$$\n", + " \n", + "In the following, replace 'x' by 'A', 'B', 'C', 'D', 'E', 'F', 'G', or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.jacobian_3(answer = 'x'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Boundary Conditions\n", + "\n", + "One question that arises immediately when applying the difference\n", + "stencils in [Figure Stencil](#fig:stencil) and [Figure Bi-Laplacian Stencil](#fig:d4stencil)\n", + "is\n", + "\n", + "> *What do we do at the boundary, where at least one of the nodes\n", + "> of the difference stencil lies outside of the domain?*\n", + "\n", + "The answer to this question lies with the *boundary\n", + "conditions* for $\\chi$ and $\\psi$. We already know the boundary\n", + "conditions for $\\psi$ from [Section The Quasi-Geostrophic Model](#The-Quasi-Geostrophic-Model):\n", + "\n", + "Free slip:\n", + "\n", + "> The free slip boundary condition, $\\psi=0$, is applied when $A_h=0$,\n", + " which we can differentiate with respect to time to get the identical\n", + " condition $\\chi=0$. In terms of the discrete unknowns, this\n", + " translates to the requirement that\n", + "> $$\\Psi_{0,j} = \\Psi_{N,j} = \\Psi_{i,0} = \\Psi_{i,N} = 0 \\;\\; \\mbox{ for} \\; i,j = 0,1,\\ldots,N,$$ \n", + " \n", + "> and similarly for $\\chi$. All\n", + " boundary values for $\\chi$ and $\\Psi$ are known, and so we need only\n", + " apply the difference stencils at *interior points* (see\n", + " [Figure Ghost Points](#fig:ghost)). When $A_h=0$, the high-order viscosity\n", + " term is not present, and so the only stencil appearing in the\n", + " discretization is the 5-point formula (the significance of this will\n", + " become clear when we look at no-slip boundary conditions)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/ghost3.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "**Figure Ghost Points:**\n", + "The points on the computational grid, which are classified into\n", + " interior, real boundary, and ghost boundary points. The 5- and\n", + " 13-point difference stencils, when overlaid on the grid, demonstrate\n", + " that only the real boundary values are needed for the free-slip\n", + " boundary values (when $A_h=0$), while ghost points must be\n", + " introduced for the no-slip conditions (when $A_h\\neq 0$, and the\n", + " higher order viscosity term is present).\n", + "
\n", + "\n", + "\n", + "No-slip:\n", + "\n", + "> The no-slip conditions appear when $A_h\\neq 0$, and include the free\n", + " slip conditions $\\psi=\\chi=0$ (which we already discussed above),\n", + " and the normal derivative condition $\\nabla\\psi\\cdot\\hat{n}=0$,\n", + " which must be satisfied at all boundary points. It is clear that if\n", + " we apply the standard, second-order centered difference\n", + " approximation to the first derivative, then the difference stencil\n", + " extends *beyond the boundary of the domain and contains at\n", + " least one non-existent point! How can we get around this\n", + " problem?*\n", + "\n", + "> The most straightforward approach (and the one we will use in this\n", + " Lab) is to introduce a set of *fictitious points* or\n", + " *ghost points*,\n", + " \n", + "> $$\\Psi_{-1,j}, \\;\\; \\Psi_{N+1,j}, \\;\\; \\Psi_{i,-1}, \\;\\; \\Psi_{i,N+1}$$\n", + "\n", + "> for $i,j=0,1,2,\\ldots,N+1$, which extend one grid space outside of\n", + " the domain, as shown in [Figure Ghost Points](#fig:ghost). We can then\n", + " discretize the Neumann condition in a straightforward manner. For\n", + " example, consider the point $(0,1)$, pictured in\n", + " [Figure No Slip Boundary Condition](#fig:noslip), at which the discrete version of\n", + " $\\nabla\\psi\\cdot\\hat{n}=0$ is\n", + "\n", + "> $$\\frac{1}{2d} ( \\Psi_{1,1} - \\Psi_{-1,1}, \\Psi_{0,2} - \\Psi_{0,0} ) \\cdot (-1,0) = 0,$$ \n", + " \n", + "> (where $(-1,0)$ is the unit\n", + " outward normal vector), which simplifies to\n", + " \n", + "> $$\\Psi_{-1,1} = \\Psi_{1,1}.$$\n", + "\n", + "> The same can be done for all the\n", + " remaining ghost points: the value of $\\Psi$ at at point outside the\n", + " boundary is given quite simply as the value at the corresponding\n", + " interior point reflected across the boundary.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/noslip.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure No Slip Boundary Condition:\n", + "The discrete Neumann boundary conditions are discretized using\n", + " ghost points. Here, at point $(0,1)$, the unit outward normal vector\n", + " is $\\hat{n}=(-1,0)$, and the discrete points involved are the four\n", + " points circled in red. The no-slip condition simply states that\n", + " $\\Psi_{-1,1}$ is equal to the interior value $\\Psi_{1,1}$.\n", + "
\n", + "\n", + "Now, remember that when $A_h\\neq 0$, the $\\nabla_h^4\\psi$ term\n", + " appears in the equations, which is discretized as a 13-point\n", + " stencil . Looking at [Figure Ghost Points](#fig:ghost), it is easy to see that\n", + " when the 13-point stencil is applied at points adjacent to the\n", + " boundary (such as $(N-1,N-1)$ in the Figure) it involves not only\n", + " real boundary points, but also ghost boundary points (compare this\n", + " to the 5-point stencil). But, as we just discovered above, the\n", + " presence of the ghost points in the stencil poses no difficulty,\n", + " since these values are known in terms of interior values of $\\Psi$\n", + " using the boundary conditions.\n", + " \n", + "Just as there are many Runge-Kutta schemes, there are many finite difference stencils for the different derivatives.\n", + "For example, one could use a 5-point, $\\times$-shaped stencil for $\\nabla^2\\psi$. The flexibility of\n", + "having several second-order stencils is what makes it possible to determine an energy- and enstrophy-conserving scheme for the Jacobian which we do later.\n", + "\n", + "A good discussion of boundary conditions is given by [McCalpin](#Ref:McCalpin) in his\n", + "QGBOX code documentation, on page 44." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Matrix Form of the Discrete Equations\n", + "\n", + "In order to write the discrete equations  in matrix form, we must first\n", + "write the unknown values $\\chi_{i,j}$ in vector form. The most obvious\n", + "way to do this is to traverse the grid (see\n", + "[Figure Spatial Grid](#Spatial-Grid)), one row at a time, from left to right,\n", + "leaving out the known, zero boundary values, to obtain the ordering:\n", + "$$\n", + " \\vec{\\chi} =\n", + " \\left(\\chi_{1,1},\\chi_{2,1},\\dots,\\chi_{N-1,1},\n", + " \\chi_{1,2},\\chi_{2,2},\\dots,\\chi_{N-1,2}, \\dots, \\right. \n", + " \\left.\\chi_{N-1,N-2},\n", + " \\chi_{1,N-1},\\chi_{2,N-1},\\dots,\\chi_{N-1,N-1}\\right)^T$$\n", + " \n", + "and similarly for $\\vec{F}$. The resulting matrix (with this ordering of\n", + "unknowns) results in a matrix of the form given in\n", + "[Figure Matrix](#fig:matrix)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/matrix.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Matrix: The matrix form for the discrete Laplacian. The 5 diagonals (displayed\n", + "in blue and red) represent the non-zero values in the matrix $-$ all\n", + "other values are zero.\n", + "
\n", + "\n", + "The diagonals with the 1’s (pictured in red) contain some zero entries\n", + "due to the boundary condition $u=0$. Notice how similar this matrix\n", + "appears to the *tridiagonal matrix* in the Problems from\n", + "Lab 3, which arose in the discretization of the second derivative in a\n", + "boundary value problem. The only difference here is that the Laplacian\n", + "has an additional second derivative with respect to $y$, which is what\n", + "adds the additional diagonal entries in the matrix.\n", + "\n", + "Before you think about running off and using Gaussian elimination (which\n", + "was reviewed in Lab 3), think about the size of the matrix you would\n", + "have to solve. If $N$ is the number of grid points, then the matrix is\n", + "size $N^2$-by-$N^2$. Consequently, Gaussian elimination will require on\n", + "the order of $N^6$ operations to solve the matrix only once. Even for\n", + "moderate values of $N$, this cost can be prohibitively expensive. For\n", + "example, taking $N=101$ results in a $10000\\times 10000$ system of\n", + "linear equations, for which Gaussian elimination will require on the\n", + "order of $10000^3=10^{12}$ operations! As mentioned in Lab 3, direct\n", + "methods are not appropriate for large sparse systems such as this one. A\n", + "more appropriate choice is an iterative or *relaxation\n", + "scheme*, which is the subject of the next section.." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Solution of the Poisson Equation by Relaxation\n", + "\n", + "One thing to notice about the matrix in [Figure Matrix](#fig:matrix) is that\n", + "it contains many zeros. Direct methods, such as Gaussian elimination\n", + "(GE), are so inefficient for this problem because they operate on all of\n", + "these zero entries (in fact, there are other direct methods that exploit\n", + "the sparse nature of the matrix to reduce the operation count, but none\n", + "are as efficient as the methods we will talk about here).\n", + "\n", + "However, there is another class of solution methods, called\n", + "*iterative methods* (refer to Lab 3) which are natural\n", + "candidates for solving such sparse problems. They can be motivated by\n", + "the fact that since the discrete equations are only approximations of\n", + "the PDE to begin with, *why should we bother computing an exact\n", + "solution to an approximate problem?* Iterative methods are based\n", + "on the notion that one sets up an iterative procedure to compute\n", + "successive approximations to the solution $-$ approximations that get\n", + "closer and closer to the exact solution as the iteration proceeds, but\n", + "never actually reach the exact solution. As long as the iteration\n", + "converges and the approximate solution gets to within some tolerance of\n", + "the exact solution, then we are happy! The cost of a single iterative\n", + "step is designed to depend on only the number of non-zero elements in\n", + "the matrix, and so is considerably cheaper than a GE step. Hence, as\n", + "long as the iteration converges in a “reasonable number” of steps, then\n", + "the iterative scheme will outperform GE." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Iterative methods are also know as *relaxation methods*, of\n", + "which the *Jacobi method* is the simplest. Here are the\n", + "basic steps in the Jacobi iteration (where we’ve dropped the time\n", + "superscript $p$ to simplify the notation):\n", + "\n", + "1. 
Take an initial guess, $\\chi_{i,j}^{(0)}$. Let $n=0$.\n", + "\n", + "2. For each grid point $(i,j)$, compute the *residual\n", + " vector* $$ R_{i,j}^{(n)} = F_{i,j} - \\nabla^2\\chi^{(n)}_{i,j} $$ \n", + " $$ = F_{i,j} - \\frac{1}{d^2} ( \\chi_{i+1,j}^{(n)} + \\chi_{i,j+1}^{(n)} + \\chi_{i-1,j}^{(n)} +\\chi_{i,j-1}^{(n)} - 4 \\chi_{i,j}^{(n)} )$$ \n", + " (which is non-zero unless $\\chi_{i,j}$ is the exact solution).\n", + "\n", + " You should not confuse the relaxation iteration index (superscript\n", + " $\\,^{(n)}$) with the time index (superscript $\\,^p$). Since the\n", + " relaxation iteration is being performed at a single time step, we’ve\n", + " dropped the time superscript for now to simplify notation. Just\n", + " remember that all of the discrete values in the relaxation are\n", + " evaluated at the current time level $p$.\n", + "\n", + "3. “Adjust” $\\chi_{i,j}^{(n)}$, (leaving the other neighbours\n", + " unchanged) so that $R_{i,j}^{(n)}=0$. That is, replace\n", + " $\\chi_{i,j}^{(n)}$ by whatever you need to get a zero residual. This\n", + " replacement turns out to be:\n", + " $$\\chi_{i,j}^{(n+1)} = \\chi_{i,j}^{(n)} - \\frac{d^2}{4} R_{i,j}^{(n)},$$ \n", + " which defines the iteration.\n", + "\n", + "4. Set $n\\leftarrow n+1$, and repeat steps 2 & 3 until the residual is\n", + " less than some tolerance value. In order to measure the size of the\n", + " residual, we use a *relative maximum norm* measure,\n", + " which says\n", + " $$d^2 \\frac{\\|R_{i,j}^{(n)}\\|_\\infty}{\\|\\chi_{i,j}^{(n)}\\|_\\infty} < TOL$$\n", + " where $$\\|R_{i,j}^{(n)}\\|_\\infty = \\max_{i,j} |R_{i,j}^{(n)}|$$ is\n", + " the *max-norm* of $R_{i,j}$, or the maximum value of\n", + " the residual on the grid (there are other error tolerances we could\n", + " use but this is one of the simplest and most effective). Using this\n", + " measure for the error ensures that the residual remains small\n", + " *relative* to the solution, $\\chi_{i,j}$. A typical\n", + " value of the tolerance that might be used is $TOL=10^{-4}$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are a few **important things** to note about the\n", + "basic relaxation procedure outlined above\n", + "\n", + "- This Jacobi method is the simplest form of relaxation. It requires\n", + " that you have two storage vectors, one for $\\chi_{i,j}^{(n)}$ and\n", + " one for $\\chi_{i,j}^{(n+1)}$.\n", + "\n", + "- The relaxation can be modified by using a single vector to store the\n", + " $\\chi$ values. In this case, as you compute the residual vector and\n", + " update $\\chi$ at each point $(i,j)$, the residual involves some\n", + " $\\chi$ values from the previous iteration and some that have already\n", + " been updated. 
For example, if we traverse the grid by rows (that is,\n", + " loop over $j$ first and then $i$), then the residual is now given by\n", + " $$R_{i,j}^{(n)} = F_{i,j} - \\frac{1}{d^2} ( \\chi_{i+1,j}^{(n)} +\n", + " \\chi_{i,j+1}^{(n)} + \n", + " \\underbrace{\\chi_{i-1,j}^{(n+1)} +\n", + " \\chi_{i,j-1}^{(n+1)}}_{\\mbox{{updated already}}} - 4\n", + " \\chi_{i,j}^{(n)} ),$$ \n", + " (where the $(i,j-1)$ and $(i-1,j)$ points\n", + " have already been updated), and then the solution is updated\n", + " $$\\chi_{i,j}^{(n+1)} = \\chi_{i,j}^{(n)} - \\frac{d^2}{4} R_{i,j}^{(n)}.$$ \n", + " Not only does this relaxation scheme save on\n", + " storage (since only one solution vector is now required), but it\n", + " also converges more rapidly (typically, it takes half the number of\n", + " iterations as Jacobi), which speeds up convergence somewhat, but\n", + " still leaves the cost at the same order as Jacobi, as we can see\n", + " from [Cost of Schemes Table](#tab:cost). This is known as the\n", + " *Gauss-Seidel* relaxation scheme.\n", + " \n", + "- In practice, we actually use a modification of Gauss-Seidel\n", + " $$\\chi_{i,j}^{(n+1)} = \\chi_{i,j}^{(n)} - \\frac{\\mu d^2}{4} R_{i,j}^{(n)}$$ \n", + " where $1<\\mu<2$ is the *relaxation\n", + " parameter*. The resulting scheme is called *successive\n", + " over-relaxation*, or *SOR*, and it improves\n", + " convergence considerably (see [Cost of Schemes Table](#tab:cost).\n", + "\n", + " What happens when $0<\\mu<1$? Or $\\mu>2$? The first case is called\n", + " *under-relaxation*, and is useful for smoothing the\n", + " solution in multigrid methods. The second leads to an iteration that\n", + " never converges.\n", + "\n", + "- *Does the iteration converge?* For the Poisson problem,\n", + " yes, but not in general." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- *How fast does the iteration converge?* and *How\n", + " much does each iteration cost?* The answer to both of these\n", + " questions gives us an idea of the cost of the relaxation procedure …\n", + " \n", + " Assume we have a grid of size $N\\times N$. If we used Gaussian\n", + " elimination to solve this matrix system (with $N^2$ unknowns), we\n", + " would need to perform on the order of $N^6$ operations (you saw this\n", + " in Lab \\#3). One can read in any numerical linear algebra textbook\n", + " ([Strang 1988,](#Ref:Strang) for example), that the number of iterations\n", + " required for Gauss-Seidel and Jacobi is on the order of $N^3$, while\n", + " for SOR it reduces to $N^2$. There is another class of iterative\n", + " methods, called *multigrid methods*, which converge in\n", + " a constant number of iterations (the optimal result)\n", + " \n", + " If you look at the arithmetic operations performed in the the\n", + " relaxation schemes described above, it is clear that a single\n", + " iteration involves on the order of $N^2$ operations (a constant\n", + " number of multiplications for each point).\n", + "\n", + " Putting this information together, the cost of each iterative scheme\n", + " can be compared as in [Cost of Schemes Table](#tab:cost).\n", + " \n", + " \n", + " \n", + "| Method | Order of Cost |\n", + "| :----------------------: | :---------------: |\n", + "| Gaussian Elimination | $N^6$ |\n", + "| Jacobi | $N^5$ |\n", + "| Gauss-Seidel | $N^5$ |\n", + "| SOR | $N^4$ |\n", + "| Multigrid | $N^2$ |\n", + "\n", + "
\n", + " Cost of Schemes Table: Cost of iterative schemes compared to direct methods.
\n", + " \n", + "- Multigrid methods are obviously the best, but are also\n", + " *extremely complicated* … we will stick to the much\n", + " more manageable Jacobi, Gauss-Seidel and SOR schemes.\n", + "\n", + "- There are other methods (called conjugate gradient and capacitance\n", + " matrix methods) which improve on the relaxation methods we’ve seen.\n", + " These won’t be described here." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Temporal Discretization ###\n", + "\n", + "Let us now turn to the time evolution equation for the stream function.\n", + "Supposing that the initial time is $t=0$, then we can approximate the\n", + "solution at the discrete time points $t_p = p\\Delta t$, and write the\n", + "discrete solution as $$\\Psi_{i,j}^p \\approx \\psi(x_i,y_j,t_p).$$ Notice\n", + "that the spatial indices appear as subscripts and the time index as a\n", + "superscript on the discrete approximation $\\chi$.\n", + "\n", + "We can choose any discretization for time that we like, but for the QG\n", + "equation, it is customary (see [Mesinger and Arakawa](#Ref:MesingerArakawa), for\n", + "example) to use a centered time difference to approximate the derivative\n", + "in :\n", + "$$\\frac{\\Psi_{i,j}^{p+1} - \\Psi_{i,j}^{p-1}}{2\\Delta t} = \\chi_{i,j}^p$$\n", + "or, after rearranging,\n", + "\n", + "\n", + "(Leapfrog Eqn)\n", + "$$\n", + "\\Psi_{i,j}^{p+1} = \\Psi_{i,j}^{p-1} + 2\\Delta t \\chi_{i,j}^p\n", + "$$\n", + "This time differencing method is called the\n", + "*leap frog scheme*, and was introduced in Lab 7. A\n", + "pictorial representation of this scheme is given in\n", + "[Figure Leap-Frog Scheme](#fig:leap-frog)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/leapfrog.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Leap-Frog Scheme: A pictorial representation of the “leap-frog” character of the\n", + "time-stepping scheme. The values of $\\chi$ at even time steps are linked\n", + "together with the odd $\\Psi$ values; likewise, values of $\\chi$ at odd\n", + "time steps are linked to the even $\\Psi$ values.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are two additional considerations related to the time\n", + "discretization:\n", + "\n", + "- The viscous terms ($\\nabla_h^2\\psi$ and $\\nabla_h^4\\psi$) are\n", + " evaluated at the $p-1$ time points, while all other terms in the\n", + " right hand side are evaluated at $p$ points. The reasoning for this\n", + " is described in [McCalpin’s](#Ref:McCalpin) QGBOX\n", + " documentation [p. 8]:\n", + "\n", + " > Note that the frictional terms are all calculated at the old\n", + " $(n-1)$ time level, and are therefore first-order accurate in\n", + " time. This ’time lagging’ is necessary for linear computational\n", + " stability.\n", + "\n", + "- This second item *will not be implemented in this lab or the\n", + " problems*, but should still be kept in mind …\n", + "\n", + " The leap-frog time-stepping scheme has a disadvantage in that it\n", + " introduces a “computational mode” …  [McCalpin](#Ref:McCalpin) [p. 23]\n", + " describes this as follows:\n", + "\n", + " > *Leap-frog models are plagued by a phenomenon called the\n", + " “computational mode”, in which the odd and even time levels become\n", + " independent. Although a variety of sophisticated techniques have\n", + " been developed for dealing with this problem, McCalpin’s model\n", + " takes a very simplistic approach. Every *narg* time\n", + " steps, adjacent time levels are simply averaged together (where\n", + " *narg*$\\approx 100$ and odd)*\n", + "\n", + " Why don’t we just abandon the leap-frog scheme? Well, [Mesinger and Arakawa](#Ref:MesingerArakawa) [p. 18] make the following observations\n", + " regarding the leap-frog scheme:\n", + "\n", + " - its advantages: simple and second order accurate; neutral within\n", + " the stability range.\n", + "\n", + " - its disadvantages: for non-linear equations, there is a tendency\n", + " for slow amplification of the computational mode.\n", + "\n", + " - the usual method for suppressing the spurious mode is to insert\n", + " a step from a two-level scheme occasionally (or, as McCalpin\n", + " suggests, to occasionally average the solutions at successive\n", + " time steps).\n", + "\n", + " - In Chapter 4, they mention that it is possible to construct\n", + " grids and/or schemes with the same properties as leap-frog and\n", + " yet the computational mode is absent.\n", + "\n", + " The important thing to get from this is that when integrating for\n", + " long times, the computational mode will grow to the point where it\n", + " will pollute the solution, unless one of the above methods is\n", + " implemented. For simplicity, we will not be worrying about this in\n", + " Lab \\#8." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Outline of Solution Procedure ###\n", + "\n", + "Now that we have discretized in both space and time, it is possible to\n", + "outline the basic solution procedure.\n", + "\n", + "1. Assume that at $t=t_p$, we know $\\Psi^0, \\Psi^1, \\dots,\n", + " \\Psi^{p-1}$\n", + "\n", + "2. Calculate $F_{i,j}^p$ for each grid point $(i,j)$ (see\n", + " [Section Right Hand Side](#Right-Hand-Side)). Keep in mind that the viscosity terms\n", + " ($\\nabla_h^2\\psi$ and $\\nabla_h^4\\psi$) are evaluated at time level\n", + " $p-1$, while all other terms are evaluated at time level $p$ (this\n", + " was discussed in [Section Temporal Discretization](#Temporal-Discretization)).\n", + "\n", + "3. 
Solve the ([Discrete $\\chi$ equation](#eq:discrete-chi)) for $\\chi_{i,j}^p$ (the actual\n", + " solution method will be described in [Section Solution of the Poisson Equation by Relaxation](#Solution-of-the-Poisson-Equation-by-Relaxation).\n", + "\n", + "4. Given $\\chi_{i,j}^p$, we can find $\\Psi_{i,j}^{p+1}$ by using the\n", + " ([Leap-frog time stepping scheme](#leapfrog))\n", + "\n", + "5. Let $p \\leftarrow p+1$ and return to step 2." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that step 1 requires a knowledge of two starting values, $\\Psi^0$\n", + "and $\\Psi^1$, at the initial time. An important addition to the\n", + "procedure below is some way to get two starting values for $\\Psi$. Here\n", + "are several alternatives:\n", + "\n", + "- Set $\\Psi^0$ and $\\Psi^1$ both to zero.\n", + "\n", + "- Set $\\Psi^0=0$, and then use a forward Euler step to find $\\Psi^1$.\n", + "\n", + "- Use a predictor-corrector step, like that employed in Lab 7.\n", + "\n", + "### Problem Two\n", + "> Now that you’ve seen how the basic\n", + "numerical scheme works, it’s time to jump into the numerical scheme. The\n", + "code has already been written for the discretization described above,\n", + "with free-slip boundary conditions and the SOR relaxation scheme. The\n", + "code is in **qg.py** and the various functions are:\n", + "\n", + ">**main**\n", + ": the main routine, contains the time-stepping and the output.\n", + "\n", + ">**param()**\n", + ": sets the physical parameters of the system.\n", + "\n", + ">**numer\\_init()**\n", + ": sets the numerical parameters.\n", + "\n", + ">**vis(psi, nx, ny)**\n", + ": calculates the second order ($\\nabla^2$) viscosity term (not\n", + " leap-frogged).\n", + "\n", + ">**wind(psi, nx, ny)**\n", + ": calculates the the wind term.\n", + "\n", + ">**mybeta(psi, nx, ny)**\n", + ": calculates the beta term\n", + "\n", + ">**jac(psi, vis, nx, ny)**\n", + ": calculate the Jacobian term. (Arakawa Jacobian given here).\n", + "\n", + ">**chi(psi, vis_curr, vis_prev, chi_prev, nx, ny, dx, r_coeff, tol, max_count, epsilon, wind_par, vis_par)**\n", + ": calculates $\\chi$ using a call to relax\n", + "\n", + ">**relax(rhs, chi_prev, dx, nx, ny, r_coeff, tol, max_count)**\n", + ": does the relaxation.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> Your task in this problem is to program the “straightforward”\n", + "discretization of the Jacobian term, using [(Jacobian: Expansion 1)](#eq:jacob1), that you derived in\n", + "[Problem One](#Problem-One). The only change this involves is\n", + "inserting the code into the function **jac**. Once\n", + "finished, run the code. The parameter functions **param**\n", + "**init\\_numer** provide some sample parameter values for you to\n", + "execute the code with. Try these input values and observe what happens\n", + "in the solution. Choose one of the physical parameters to vary. Does\n", + "changing the parameter have any effect on the solution? in what way?\n", + "\n", + "> Hand in the code for the Jacobian, and a couple of plots demonstrating\n", + "the solution as a function of parameter variation. Describe your results\n", + "and make sure to provide parameter values to accompany your explanations\n", + "and plots.\n", + "\n", + ">If the solution is unstable, check your CFL condition. The relevant\n", + "waves are Rossby waves with wave speed: $$c=\\beta k^{-2}$$ where $k$ is\n", + "the wave-number. 
The maximum wave speed is for the longest wave,\n", + "$k=\\pi/b$ where $b$ is the size fo your domain.\n", + "\n", + "> If the code is still unstable, even though the CFL is satisfied, see\n", + "[Section Aliasing Error and Nonlinear Instability](#Aliasing-Error-and-Nonlinear-Instability). The solution is nonlinear unstable.\n", + "Switch to the Arakawa Jacobian for stability." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Three\n", + ">The code provided for [Problem Two](#Problem-Two) implements the SOR relaxation\n", + "scheme. Your job in this problem is to modify the relaxation code to\n", + "perform a Jacobi iteration.\n", + "\n", + ">Hand in a comparison of the two methods, in tabular form. Use two\n", + "different relaxation parameters for the SOR scheme. (Include a list of\n", + "the physical and numerical parameter you are using). Also submit your\n", + "code for the Jacobi relaxation scheme." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Four\n", + "> Modify the code to implement the no-slip boundary\n", + "conditions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Five\n", + ">The code you’ve been working with so far uses the\n", + "simplest possible type of starting values for the unknown stream\n", + "function: both are set to zero. If you’re keen to play around with the\n", + "code, you might want to try modifying the SOR code for the two other\n", + "methods of computing starting values: using a forward Euler step, or a\n", + "predictor-corrector step (see [Section Outline of Solution Procedure](#Outline-of-Solution-Procedure))." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Aliasing Error and Nonlinear Instability\n", + "\n", + "In [Problem Two](#Problem-Two), you encountered an example\n", + "of the instability that can occur when computing numerical solutions to\n", + "some *nonlinear problems*, of which the QG equations is\n", + "just one example. This effect has in fact been known for quite some\n", + "time. Early numerical experiments by [N. Phillips in 1956](#Ref:Phillips)\n", + "exploded after approximately 30 days of integration due to nonlinear\n", + "instability. He used the straightforward centered difference formula for\n", + "as you did.\n", + "\n", + "It is important to realize that this instability does not occur in the\n", + "physical system modeled by the equations of motion. Rather is an\n", + "artifact of the discretization process, something known as\n", + "*aliasing error*. Aliasing error can be best understood by\n", + "thinking in terms of decomposing the solution into modes. In brief,\n", + "aliasing error arises in non-linear problems when a numerical scheme\n", + "amplifies the high-wavenumber modes in the solution, which corresponds\n", + "physically to a spurious addition of energy into the system. Regardless\n", + "of how much the grid spacing is reduced, the resulting computation will\n", + "be corrupted, since a significant amount of energy is present in the\n", + "smallest resolvable scales of motion. This doesn’t happen for every\n", + "non-linear problem or every difference scheme, but is an issue that one\n", + "who is using numerical codes must be aware of." 
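, + "\n",
+ "Before working through the demo in Example One below, the aliasing mechanism\n",
+ "itself can be seen in a few lines of NumPy (a standalone illustration, not the\n",
+ "demo referred to there): on a grid of spacing $\Delta x$, any mode with\n",
+ "wavenumber larger than $\pi/\Delta x$ takes exactly the same values at the\n",
+ "grid points as some lower-wavenumber mode.\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "N = 16\n",
+ "dx = 2.0 * np.pi / N\n",
+ "xg = np.arange(N) * dx\n",
+ "k_max = np.pi / dx            # largest resolvable wavenumber (= N/2 = 8 here)\n",
+ "\n",
+ "k2 = 12                       # unresolvable: k2 > k_max\n",
+ "alias = 2 * k_max - k2        # = 4, a resolvable wavenumber\n",
+ "# On the grid the two modes are indistinguishable (up to a sign):\n",
+ "print(np.allclose(np.sin(k2 * xg), -np.sin(alias * xg)))    # True\n",
+ "```"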
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example One ###\n", + "\n", + ">Before moving on to how we can handle the\n", + "instability in our discretization of the QG equations, you should try\n", + "out the following demo on aliasing error. It is taken from an example in\n", + "[Mesinger and Arakawa](#Ref:MesingerArakawa) [p. 35ff.], based on the simplest\n", + "of non-linear PDE’s, the advection equation:\n", + "$$\\frac{du}{dt}+u\\frac{du}{dx} = 0.$$ \n", + "If we decompose the solution into\n", + "Fourier mode, and consider a single mode with wavenumber $k$,\n", + "$$u(x) = \\sin{kx}$$ \n", + "then the solution will contain additional modes, due\n", + "to the non-linear term, and given by\n", + "$$u \\frac{du}{dx} = k \\sin{kx} \\cos{kx} =\\frac{1}{2}k \\sin{2kx}.$$\n", + "\n", + ">With this as an introduction, keep the following in mind while going\n", + "through the demo:\n", + "\n", + ">- on a computational grid with spacing $\\Delta x$, the discrete\n", + " versions of the modes can only be resolved up to a maximum\n", + " wavenumber, $k_{max}=\\frac{\\pi}{\\Delta x}$.\n", + "\n", + ">- even if we start with modes that are resolvable on the grid, the\n", + " non-linear term introduces modes with a higher wavenumber, which may\n", + " it not be possible to resolve. These modes, when evaluated at\n", + " discrete points, appear as modes with lower wavenumber; that is,\n", + " they are *aliased* to the lower modes (this becomes\n", + " evident in the demo as the wavenumber is increased …).\n", + "\n", + ">- not only does aliasing occur, but for this problem, these additional\n", + " modes are *amplified* by a factor of $\\frac{1}{2}k$.\n", + " This is the source of the *aliasing error* – such\n", + " amplified modes will grow in time, no matter how small the time step\n", + " taken, and will pollute the computations." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The previous example is obviously a much simpler non-linearity than that\n", + "of the QG equations, but the net effect is the same. The obvious\n", + "question we might ask now is:\n", + "\n", + "> *Can this problem be averted for our discretization of the QG\n", + "> equations with the leap-frog time-stepping scheme, or do we have to\n", + "> abandon it?*\n", + "\n", + "There are several possible solutions, presented by [Mesinger and Arakawa](#Ref:MesingerArakawa) [p. 37], summarized here, including\n", + "\n", + "- filter out the top half or third of the wavenumbers, so that the\n", + " aliasing is eliminated.\n", + "\n", + "- use a differencing scheme that has a built-in damping of the\n", + " shortest wavelengths.\n", + "\n", + "- the most elegant, and one that allows us to continue using the\n", + " leap-frog time stepping scheme, is one suggested by Arakawa, is one\n", + " that aims to eliminate the spurious inflow of energy into the system\n", + " by developing a discretization of the Jacobian term that satisfies\n", + " discrete analogues of the conservation properties for average\n", + " vorticity, enstrophy and kinetic energy.\n", + " \n", + "This third approach will be the one we take here. The details can be\n", + "found in the Mesinger-Arakawa paper, and are not essential here; the\n", + "important point is that there is a discretization of the Jacobian that\n", + "avoids the instability problem arising from aliasing error. 
This\n", + "discrete Jacobian is called the *Arakawa Jacobian* and is\n", + "obtained by averaging the discrete Jacobians obtained by using standard\n", + "centered differences on the formulae [(Jacobian: Expansion 1)](#eq:jacob1), [(Jacobian: Expansion 2)](#eq:jacob2) and [(Jacobian: Expansion 3)](#eq:jacob3) (see\n", + "[Problem One](#Problem-One) and the two quizzes following it in\n", + "[Section Right Hand Side](#Right-Hand-Side).\n", + "\n", + "You will not be required to derive or code the Arakawa Jacobian (the\n", + "details are messy!), and the code will be provided for you for all the\n", + "problems following [Problem Two](#Problem-Two)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Classical Solutions ##\n", + "\n", + "[Bryan (1963)](#Ref:Bryan) and [Veronis (1966)](#Ref:Veronis)\n", + "\n", + "### Problem Six ###\n", + "> Using the SOR code from\n", + "Problems [Three](#Problem-Three) (free-slip BC’s) and [Four](#Problem-Four)\n", + "(no-slip BC’s), try to reproduce the classical results of Bryan and\n", + "Veronis." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes ##\n", + "\n", + "### Definition of the Beta-plane ###\n", + "\n", + "A $\\beta$-plane is a plane approximation of a curved section of the\n", + "Earth’s surface, where the Coriolis parameter, $f(y)$, can be written\n", + "roughly as a linear function of $y$ $$f(y) = f_0 + \\beta y$$ for $f_0$\n", + "and $\\beta$ some constants. The motivation behind this approximation\n", + "follows." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/coriolis.png',width='30%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Rotating Globe:\n", + "A depiction of the earth and its angular frequency of rotation,\n", + "$\\Omega$, the local planetary vorticity vector in blue, and the Coriolis\n", + "parameter, $f_0$, at a latitude of $\\theta_0$.\n", + "
\n", + "\n", + "Consider a globe (the earth) which is rotating with angular frequency\n", + "$\\Omega$ (see [Figure Rotating Globe](#fig:globe)),\n", + "and assume that the patch of ocean under consideration is at latitude\n", + "$\\theta$. The most important component of the Coriolis force is the\n", + "local vertical (see the in [Figure Rotating Globe](#fig:globe)), which is defined in\n", + "terms of the Coriolis parameter, $f$, to be $$f/2 = \\Omega \\sin\\theta.$$\n", + "This expression may be simplified somewhat by ignoring curvature effects\n", + "and approximating the earth’s surface at this point by a plane $-$ if\n", + "the plane is located near the middle latitudes, then this is a good\n", + "approximation. If $\\theta_0$ is the latitude at the center point of the\n", + "plane, and $R$ is the radius of the earth (see\n", + "[Figure Rotating Globe](#fig:globe)), then we can apply trigonometric ratios to\n", + "obtain the following expression on the plane:\n", + "$$f = \\underbrace{2\\Omega\\sin\\theta_0}_{f_0} +\n", + " \\underbrace{\\frac{2\\Omega\\cos\\theta_0}{R}}_\\beta \\, y$$ \n", + "Not surprisingly, this plane is called a *mid-latitude\n", + "$\\beta$-plane*." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/beta-plane.png',width='30%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Beta-plane:\n", + "The $\\beta$-plane approximation, with points on the plane located\n", + "along the local $y$-axis. The Coriolis parameter, $f$, at any latitude\n", + "$\\theta$, can be written in terms of $y$ using trigonometric\n", + "ratios.
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Simplification of the QG Model Equations ##\n", + "\n", + "The first approximation we will make eliminates several of the\n", + "non-linear terms in the set of equations: ([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)). A common simplification that is made in\n", + "this type of flow is the *quasi-geostrophic* (QG)\n", + "approximation, where the horizontal pressure gradient and horizontal\n", + "components of the Coriolis force are matched:\n", + "$$fv \\approx \\frac{1}{\\rho} \\, \\frac {\\partial p}{\\partial x}, \\, \\, fu \\approx - \\,\\frac{1}{\\rho}\n", + "\\, \\frac{\\partial p}{\\partial y} .$$ \n", + "Remembering that the fluid is homogeneous (the density is\n", + "constant), ([Continuity Eqn](#eq:continuity)) implies\n", + "$$\\frac{\\partial^2 p}{\\partial x\\partial z} = 0, \\, \\, \\frac{\\partial^2 p}{\\partial y\\partial z} = 0.$$\n", + "We can then differentiate the QG balance equations to obtain\n", + "$$\\frac{\\partial v}{\\partial z} \\approx 0, \\, \\, \\frac{\\partial u}{\\partial z} \\approx 0.$$ \n", + "Therefore, the terms\n", + "$w \\, \\partial u/\\partial z$ and $w \\, \\partial v/\\partial z$ can be neglected in ([(X-Momentum Eqn)](#eq:xmom)) and\n", + "([(Y-Momentum Eqn)](#eq:ymom)).\n", + "\n", + "The next simplification is to eliminate the pressure terms in\n", + "([(X-Momentum Eqn)](#eq:xmom)) and\n", + "([(Y-Momentum Eqn)](#eq:ymom)) by cross-differentiating. If we define\n", + "the vorticity $$\\zeta = \\partial v/\\partial x - \\partial u/\\partial y$$ then we can\n", + "cross-differentiate the two momentum equations and replace them with a\n", + "single equation in $\\zeta$:\n", + "$$\\frac{\\partial \\zeta}{\\partial t} + u \\frac{\\partial \\zeta}{\\partial x} + v \\frac{\\partial \\zeta}{\\partial y} + v\\beta + (\\zeta+f)(\\frac {\\partial u}{\\partial x}+\\frac{\\partial v}{\\partial y}) =\n", + "A_v \\, \\frac{\\partial^2 \\zeta}{\\partial z^2} + A_h \\, \\nabla_h^2 \\zeta,$$ \n", + "where\n", + "$\\beta \\equiv df/dy$. Notice the change in notation for derivatives,\n", + "from $\\nabla$ to $\\nabla_h$: this indicates that derivative now appear\n", + "only with respect to the “horizontal” coordinates, $x$ and $y$, since\n", + "the $z$-dependence in the solution has been eliminated.\n", + "\n", + "The third approximation we will make is to assume that vorticity effects\n", + "are dominated by the Coriolis force, or that $|\\zeta| \\ll f$. 
Using\n", + "this, along with the ([Continuity Eqn](#eq:continuity)) implies that\n", + "\n", + "\n", + "(Vorticity Eqn)\n", + "$$\\frac{\\partial \\zeta}{\\partial t} + u \\frac {\\partial \\zeta}{\\partial x} + v \\frac{\\partial \\zeta}{\\partial y} + v \\beta -f \\, \\frac{\\partial w}{\\partial z} = A_v \\,\n", + "\\frac{\\partial^2 \\zeta}{\\partial z^2} + A_h \\, \\nabla_h^2 \\zeta .$$ \n", + "The reason for\n", + "making this simplification may not be obvious now but it is a good\n", + "approximation in flows in the ocean and, as we will see next, it allows\n", + "us to eliminate the Coriolis term.\n", + "\n", + "The final sequence of simplifications eliminate the $z$-dependence in\n", + "the problem by integrating ([Vorticity Eqn](#eq:diff)) in the vertical direction and using boundary\n", + "conditions.\n", + "\n", + "The top 500 metres of the ocean tend to act as a single slab-like layer.\n", + "The effect of stratification and rotation cause mainly horizontal\n", + "motion. To first order, the upper layers are approximately averaged flow\n", + "(while to the next order, surface and deep water move in opposition with\n", + "deep flow much weaker). Consequently, our averaging over depth takes\n", + "into account this “first order” approximation embodying the horizontal\n", + "(planar) motion, and ignoring the weaker (higher order) effects." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, recognize that the vertical component of velocity on both the top\n", + "and bottom surfaces should be zero: \n", + "$$w = 0 \\;\\;\\; \\mbox{at $z=0$}$$\n", + "$$w = 0 \\;\\;\\; \\mbox{at $z=-H$}$$ \n", + "Notice that the in second\n", + "condition we’ve also assumed that the water surface is approximately\n", + "flat $-$ this is commonly known as the *rigid lid\n", + "approximation*. Integrate the differential equation ([Vorticity Eqn](#eq:diff)) with respect\n", + "to $z$, applying the above boundary conditions, and using the fact that\n", + "$u$ and $v$ (and therefore also $\\zeta$) are independent of $z$,\n", + "\n", + "\n", + "(Depth-Integrated Vorticity)\n", + "$$\n", + " \\frac{1}{H} \\int_{-H}^0 \\mbox{(Vorticity Eqn)} dz \\Longrightarrow \n", + " \\frac {\\partial \\zeta}{\\partial t} + u \\frac {\\partial \\zeta}{\\partial x} + v \\frac {\\partial \\zeta}{\\partial y} + v\\beta \n", + " = \\frac{1}{H} \\, \\left( \\left. A_v \\, \n", + " \\frac{\\partial \\zeta}{\\partial z} \\right|_{z=0} - \\left. A_v \\, \n", + " \\frac{\\partial \\zeta}{\\partial z} \\right|_{z=-H} \\right) \\, + A_h \\, \\nabla_h^2 \\zeta\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The two boundary terms on the right hand side can be rewritten in terms\n", + "of known information if we apply two additional boundary conditions: the\n", + "first, that the given wind stress on the ocean surface,\n", + "$$\\vec{\\tau}(x,y) = (\\tau_1,\\tau_2) \\;\\;\\mbox{at }\\;\\; z=0,$$ \n", + "can be written as\n", + "$$\\rho A_v \\left( \\frac{\\partial u}{\\partial z} , \\frac{\\partial v}{\\partial z} \\right) = \\left( \\tau_1 , \\tau_2\n", + " \\right)$$ \n", + "which, after differentiating, leads to\n", + " \n", + "\n", + "(Stress Boundary Condition)\n", + "$$\\frac{1}{H} \\, A_v \\, \\left. 
\\frac{\\partial \\zeta}{\\partial z} \\right|_{z=0} = \\frac{1}{\\rho H} \\,\n", + " \\nabla_h \\times \\tau \\,;\n", + "$$ \n", + "and, the second, that the *Ekman\n", + "layer* along the bottom of the ocean, $z=-H$, generates Ekman\n", + "pumping which obeys the following relationship:\n", + "\n", + "\n", + "(Ekman Boundary Condition)\n", + "$$\\frac{1}{H} \\, A_v \\, \\left. \\frac{\\partial \\zeta}{\\partial z} \\right|_{z=-H} =\n", + " \\; \\kappa \\zeta, \n", + "$$ \n", + "where the *Ekman number*,\n", + "$\\kappa$, is defined by\n", + "$$\\kappa \\equiv \\frac{1}{H} \\left( \\frac{A_v f}{2} \\right)^{1/2}.$$\n", + "Using ([Stress Boundary Condition](#eq:stressbc)) and ([Ekman Boundary Condition](#ekmanbc)) to replace the boundary terms in ([Depth-Integrated Vorticity](#vort-depth-integ)), we get the following\n", + "equation for the vorticity:\n", + "$$\\frac{\\partial \\zeta}{\\partial t} + u \\frac{\\partial \\zeta}{\\partial x} + v \\frac{\\partial \\zeta}{\\partial y} + v \\beta = \\frac{1}{\\rho H} \\, \\nabla_h \n", + "\\times \\tau - \\kappa \\zeta + A_h \\, \\nabla_h^2 \\zeta.$$\n", + "\n", + "The next and final step may not seem at first to be much of a\n", + "simplification, but it is essential in order to derive a differential\n", + "equation that can be easily solved numerically. Integrate ([Continuity Eqn](#eq:continuity)) with respect\n", + "to $z$ in order to obtain $$\\frac {\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} = 0,$$ after which we can\n", + "introduce a *stream function*, $\\psi$, defined in terms of\n", + "the velocity as\n", + "$$u = - \\, \\frac{\\partial \\psi}{\\partial y} \\, , \\, v = \\frac{\\partial \\psi}{\\partial x}.$$ \n", + "The stream function satisfies this equation exactly, and we can write the\n", + "vorticity as $$\\zeta = \\nabla_h^2 \\psi,$$ which then allows us to write\n", + "both the velocity and vorticity in terms of a single variable, $\\psi$.\n", + "\n", + "After substituting into the vorticity equation, we are left with a\n", + "single equation in the unknown stream function. \n", + "$$ \\frac{\\partial}{\\partial t} \\, \\nabla_h^2 \\psi + {\\cal J} \\left( \\psi, \\nabla_h^2 \\psi \\right) + \\beta \\, \\frac {\\partial \\psi}{\\partial x} = \\frac{-1}{\\rho H} \\, \\nabla_h \\times \\tau - \\, \\kappa \\, \\nabla_h^2 \\psi + A_h \\, \\nabla_h^4 \\psi $$\n", + "where\n", + "$${\\cal J} (a,b) = \\frac{\\partial a}{\\partial x} \\, \\frac{\\partial b}{\\partial y} - \\frac{\\partial a}{\\partial y} \\, \\frac{\\partial b}{\\partial x}$$ \n", + "is called the *Jacobian* operator." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The original system ([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)) was a system of four non-linear PDE’s, in four\n", + "independent variables (the three velocities and the pressure), each of\n", + "which depend on the three spatial coordinates. Now let us review the\n", + "approximations made above, and their effects on this system:\n", + "\n", + "1. After applying the QG approximation and homogeneity, two of the\n", + " non-linear terms were eliminated from the momentum equations, so\n", + " that the vertical velocity, $w$, no longer appears.\n", + "\n", + "2. By introducing the vorticity, $\\zeta$, the pressure was eliminated,\n", + " and the two momentum equations to be rewritten as a single equation\n", + " in $\\zeta$ and the velocities.\n", + "\n", + "3. 
Some additional terms were eliminated by assuming that Coriolis\n", + " effects dominate vorticity, and applying the continuity condition.\n", + "\n", + "4. Integrating over the vertical extent of the ocean, and applying\n", + " boundary conditions eliminated the $z$-dependence in the problem.\n", + "\n", + "5. The final step consists of writing the unknown vorticity and\n", + " velocities in terms of the single unknown stream function, $\\psi$.\n", + "\n", + "It is evident that the final equation is considerably simpler: it is a\n", + "single, non-linear PDE for the unknown stream function, $\\psi$, which is\n", + "a function of only two independent variables. As we will see in the next\n", + "section, this equation is of a very common type, for which simple and\n", + "efficient numerical techniques are available." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Glossary ##\n", + "\n", + "- **advection:** A property or quantity transferred by the flow of a fluid is said to be “advected” by the flow.\n", + "aliasing error: In a numerical scheme, this is the phenomenon that occurs when a grid is not fine enough to resolve the high modes in a solution. These high waveumbers consequently appear as lower modes in the solution due to aliasing. If the scheme is such that these high wavenumber modes are amplified, then the aliased modes can lead to a significant error in the computed solution.\n", + "- **$\\beta$-plane:** A $\\beta$-plane is a plane approximation of a curved section of the Earth’s surface, where the Coriolis parameter can be written roughly as a linear function.\n", + "- **continuity equation:** The equation that describes mass conservation in a fluid, ${\\partial \\rho}/{\\partial t} + \\nabla \\cdot (\\rho \\vec u) = 0$\n", + "- **Coriolis force:** An additional force experienced by an observer positioned in a rotating frame of reference. If $\\Omega$ is the angular velocity of the rotating frame, and $\\vec u$ is the velocity of an object observed within the rotating frame, then the Coriolis force, $\\Omega \\times \\vec u$, appears as a force tending to deflect the moving object to the right.\n", + "- **Coriolis parameter:** The component of the planetary vorticity which is normal to the earth’s surface, usually denoted by f.\n", + "- **difference stencil:** A convenient notation for representing difference approximation formula for derivatives. \n", + "- **Ekman layer:** The frictional layer in a geophysical fluid flow field which appears on a rigid surface perpendicular to the rotation vector.\n", + "- **Gauss-Seidel relaxation:** One of a class of iterative schemes for solving systems of linear equations. See Lab 8 for a complete discussion.\n", + "- **homogeneous fluid:** A fluid with constant density. Even though the density of ocean water varies with depth, it is often assumed homogeneous in order to simplify the equations of motion.\n", + "- **hydrostatic balance:** A balance, in the vertical direction, between the vertical pressure gradient and the buoyancy force. The pressure difference between any two points on a vertical line is assumed to depend only on the weight of the fluid between the points, as if the fluid were at rest, even though it is actually in motion. This approximation leads to a simplification in the equations of fluid flow, by replacing the vertical momentum equation.\n", + "- **incompressible fluid:** A fluid for which changes in the density with pressure are negligible. 
For a fluid with velocity field, $\\vec u$, this is expressed by the equation $\\nabla \\cdot \\vec u = 0$. This equation states that the local increase of density with time must be balanced by the divergence of the mass flux.\n", + "- **Jacobi relaxation:** The simplest of the iterative methods for solving linear systems of equations. See Lab 8 for a complete discussion.\n", + "- **momentum equation(s):** The equations representing Newton’s second law of motion for a fluid. There is one momentum equation for each component of the velocity.\n", + "- **over-relaxation:** Within a relaxation scheme, this refers to the use of a relaxation parameter $\\mu > 1$. It accelerates the standard Gauss-Seidel relaxation by forcing the iterates to move closer to the actual solution." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- **Poisson equation:** The partial differential equation $\\nabla^2 u = f$ or, written two dimensions, ${\\partial^2 u}/{\\partial x^2} + {\\partial^2 u}/{\\partial y^2} =f(x,y)$.\n", + "- **QG:** abbreviation for quasi-geostrophic.\n", + "- **quasi-geostrophic balance:** Approximate balance between the pressure gradient and the Coriolis Force.\n", + "- **relaxation:** A term that applies to a class of iterative schemes for solving large systems of equations. The advantage to these schemes for sparse matrices (compared to direct schemes such as Gaussian elimination) is that they operate only on the non-zero entries of the matrix. For a description of relaxation methods, see Lab 8." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- **rigid lid approximation:** Assumption that the water surface deflection is negligible in the continuity equation (or conservation of volume equation)\n", + "- **SOR:** see successive over-relaxation.\n", + "- **sparse system:** A system of linear equations whose matrix representation has a large percentage of its entries equal to zero.\n", + "- **stream function:** Incompressible, two-dimensional flows with velocity field $(u,v)$, may be described by a stream function, $\\psi(x, y)$, which satisfies $u = −{\\partial \\psi}/{\\partial y}, v = {\\partial \\psi}/{\\partial x}$. These equations are a consequence of the incompressibility condition.\n", + "- **successive over-relaxation:** An iterative method for solving large systems of linear equations. See Lab 8 for a complete discussion.\n", + "- **under-relaxation:** Within a relaxation scheme, this refers to the use of a relaxation parameter $\\mu < 1$. It is not appropriate for solving systems of equations directly, but does have some application to multigrid methods.\n", + "- **vorticity:** Defined to be the curl of the velocity field, $\\zeta = \\nabla \\times \\vec u$. In geophysical flows, where the Earth is a rotating frame of reference, the vorticity can be considered as the sum of a relative vorticity (the curl of the velocity in the nonrotating frame) and the planetary vorticity, $2 \\Omega$. For these large-scale flows, vorticity is almost always present, and the planetary vorticity dominates." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References ##\n", + "\n", + "
\n", + "Arakawa, A. and V. R. Lamb, 1981: A potential enstrophy and energy conserving scheme for the shallow water equations. Monthly Weather Review, 109, 18–36.\n", + "
\n", + "
\n", + "Bryan, K., 1963: A numerical investigation of a non-linear model of a wind-driven ocean. Journal of Atmospheric Science, 20, 594–606
\n", + "
\n", + "On the adjustment of azimuthally perturbed vortices. Journal of Geophysical Research, 92, 8213–8225.\n", + "
\n", + "
\n", + "Mesinger, F. and A. Arakawa, 1976: Numerical Methods Used in Atmospheric Models,GARP Publications Series No.~17, Global Atmospheric Research Programme.\n", + "
\n", + "
\n", + "Pedlosky, J., 1987: Geophysical Fluid Dynamics. Springer-Verlag, New York, 2nd edition.Pond, \n", + "
\n", + "
\n", + "Phillips, N. A., 1956: The general circulation of the atmosphere: A numerical experiment.\n", + "Quarterly Journal of the Royal Meteorological Society, 82, 123–164.\n", + "
\n", + "
\n", + "Strang, G., 1988: Linear Algebra and its Applications. Harcourt Brace Jovanovich, San Diego,\n", + "CA, 2nd edition.\n", + "
\n", + "
\n", + "Veronis, G., 1966: Wind-driven ocean circulation – Part 2. Numerical solutions of the non- linear problem. Deep Sea Research, 13, 31–55.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/notebooks/lab9/01-lab9.ipynb.txt b/_sources/notebooks/lab9/01-lab9.ipynb.txt new file mode 100644 index 0000000..fcf23c0 --- /dev/null +++ b/_sources/notebooks/lab9/01-lab9.ipynb.txt @@ -0,0 +1,1254 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 9: Fast Fourier Transforms" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "In this lab, you will explore Fast Fourier Transforms as a method of analysing and filtering data. The goal is to gain a better understanding of how Fourier transforms can be used to analyse the power spectral density of data (how much power there is in different frequencies) and for filtering data (removing certain frequencies from the data, whether that's trying to remove noise, or isolating some frequencies of interest).\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- understand how sampling frequency can impact the frequencies you can detect in data\n", + "\n", + "- use Fourier transforms to analyse atmospheric wind measurements and calculate and plot power spectral densities\n", + "\n", + "- use Fourier transforms to filter data by removing particular frequencies" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction \n", + "\n", + "This lab introduces the use of the fast Fourier transform for estimation\n", + "of the power spectral density and for simple filtering. If you need a refresher or\n", + "are learning about fourier transforms for the first time, we recommend reading\n", + "[Newman Chapter 7](https://owncloud.eoas.ubc.ca/s/STrxS2pXewjqdYt). For a description of the Fast Fourier transform,\n", + "see [Stull Section 8.4](https://owncloud.eoas.ubc.ca/s/KMfPeGPLs2Fe7Qq) and [Jake VanderPlas's blog entry](https://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/). 
Another good resources is\n", + "[Numerical Recipes Chapter 12](https://nextcloud.eoas.ubc.ca/s/cnbBeQ47qBMgq3K)\n", + "\n", + "Before running this lab you will need to: \n", + " 1. Install netCDF4 by running:\n", + " \n", + " conda install netCDF4 \n", + " \n", + "after you have activated your numeric_2024 conda environment \n", + " \n", + "2. download some data by running the cell below\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "start_time": "2022-03-04T20:51:44.330Z" + } + }, + "outputs": [], + "source": [ + "# Download some data that we will need:\n", + "from matplotlib import pyplot as plt\n", + "import urllib\n", + "import os\n", + "filelist=['miami_tower.npz','a17.nc','aircraft.npz']\n", + "data_download=True\n", + "if data_download:\n", + " for the_file in filelist:\n", + " url='http://clouds.eos.ubc.ca/~phil/docs/atsc500/data/{}'.format(the_file)\n", + " urllib.request.urlretrieve(url,the_file)\n", + " print(\"download {}: size is {:6.2g} Mbytes\".format(the_file,os.path.getsize(the_file)*1.e-6))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-08T03:04:50.917356Z", + "start_time": "2022-03-08T03:04:50.914138Z" + } + }, + "outputs": [], + "source": [ + "# import required packages\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "plt.style.use('ggplot')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## A simple transform\n", + "\n", + "To get started assume that there is a pure tone -- a cosine wave oscillating at a frequency of 1 Hz. Next assume that we sample that 1 Hz wave at a sampling rate of 5 Hz i.e. 5 times a second\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:27.525727Z", + "start_time": "2022-03-04T20:48:27.320858Z" + } + }, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "#\n", + "# create a cosine wave that oscilates 20 times in 20 seconds\n", + "# sampled at Hz, so there are 20*5 = 100 measurements in 20 seconds\n", + "#\n", + "deltaT=0.2\n", + "ticks = np.arange(0,20,deltaT)\n", + "#\n", + "#20 cycles in 20 seconds, each cycle goes through 2*pi radians\n", + "#\n", + "onehz=np.cos(2.*np.pi*ticks)\n", + "fig,ax = plt.subplots(1,1,figsize=(8,6))\n", + "ax.plot(ticks,onehz)\n", + "ax.set_title('one hz wave sampled at 5 Hz')\n", + "out=ax.set_xlabel('time (seconds)')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Repeat, but for a 2 Hz wave" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:27.862162Z", + "start_time": "2022-03-04T20:48:27.719034Z" + } + }, + "outputs": [], + "source": [ + "deltaT=0.2\n", + "ticks = np.arange(0,20,deltaT)\n", + "#\n", + "#40 cycles in 20 seconds, each cycle goes through 2*pi radians\n", + "#\n", + "twohz=np.cos(2.*2.*np.pi*ticks)\n", + "fig,ax = plt.subplots(1,1,figsize=(8,6))\n", + "ax.plot(ticks,twohz)\n", + "ax.set_title('two hz wave sampled at 5 Hz')\n", + "out=ax.set_xlabel('time (seconds)')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note the problem at 2 Hz, the 5 Hz sampling frequency is too coarse to hit the top of every other\n", + "peak in the wave. The 'Nyquist frequency' = 2 $\\times$ the sampling rate, is the highest frequency that equipment of a given sampling rate can reliably measure. 
In this example where we are measuring 5 times a second (i.e. at 5Hz), the Nyquist frequency is 2.5Hz. Note that, whilst the 2Hz signal is below the Nyquist frequency, and so it is measurable and we do detect a 2Hz signal, because 2Hz is close to the Nyquist frequency of 2.5Hz, the signal is being distorted." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:28.672553Z", + "start_time": "2022-03-04T20:48:28.567073Z" + } + }, + "outputs": [], + "source": [ + "#now take the fft, we have 100 bins, so we alias at 50 bins, which is the nyquist frequency of 5 Hz/2. = 2.5 Hz\n", + "# so the fft frequency resolution is 20 bins/Hz, or 1 bin = 0.05 Hz\n", + "thefft=np.fft.fft(onehz)\n", + "real_coeffs=np.real(thefft)\n", + "\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,6))\n", + "theAx.plot(real_coeffs)\n", + "out=theAx.set_title('real fft of 1 hz')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The layout of the fft return value is describe in \n", + "[the scipy user manual](http://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html#id9). \n", + "For reference, here is the Fourier transform calculated by numpy.fft:\n", + "\n", + "$$y[k] = \\sum_{n=0}^{N-1} x[n]\\exp \\left (- i 2 \\pi k n /N \\right )$$\n", + "\n", + "which is the discrete version of the continuous transform (Numerical Recipes 12.0.4):\n", + "\n", + "$$y(k) = \\int_{-\\infty}^{\\infty} x(t) \\exp \\left ( -i k t \\right ) dt$$\n", + "\n", + "(Note the different minus sign convention in the exponent compared to Numerical Recipes p. 490. It doesn't matter what you choose, as long as you're consistent).\n", + "\n", + "From the Scipy manual:\n", + "\n", + "> Inserting k=0 we see that np.sum(x) corresponds to y[0]. This term will be non-zero if we haven't removed any large scale trend in the data. For N even, the elements y[1]...y[N/2−1] contain the positive-frequency terms, and the elements y[N/2]...y[N−1] contain the negative-frequency terms, in order of decreasingly negative frequency. For N odd, the elements y[1]...y[(N−1)/2] contain the positive- frequency terms, and the elements y[(N+1)/2]...y[N−1] contain the negative- frequency terms, in order of decreasingly negative frequency.\n", + "> In case the sequence x is real-valued, the values of y[n] for positive frequencies is the conjugate of the values y[n] for negative frequencies (because the spectrum is symmetric). Typically, only the FFT corresponding to positive frequencies is plotted.\n", + "\n", + "So the first peak at index 20 is (20 bins) x (0.05 Hz/bin) = 1 Hz, as expected. The nyquist frequency of 2.5 Hz is at an index of N/2 = 50 and the negative frequency peak is 20 bins to the left of the end bin.\n", + "\n", + "\n", + "The inverse transform is:\n", + "\n", + "$$x[n] = \\frac{1}{N} \\sum_{k=0}^{N-1} y]k]\\exp \\left ( i 2 \\pi k n /N \\right )$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What about the imaginary part? 
All imaginary coefficients are zero (neglecting roundoff errors)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:30.797803Z", + "start_time": "2022-03-04T20:48:30.680290Z" + } + }, + "outputs": [], + "source": [ + "imag_coeffs=np.imag(thefft)\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,6))\n", + "theAx.plot(imag_coeffs)\n", + "out=theAx.set_title('imaginary fft of 1 hz')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:17:08.773334Z", + "start_time": "2022-03-03T21:17:08.670253Z" + } + }, + "outputs": [], + "source": [ + "#now evaluate the power spectrum using Stull's 8.6.1a on p. 312\n", + "\n", + "Power=np.real(thefft*np.conj(thefft))\n", + "totsize=len(thefft)\n", + "halfpoint=int(np.floor(totsize/2.))\n", + "firsthalf=Power[0:halfpoint]\n", + "\n", + "\n", + "fig,ax=plt.subplots(1,1)\n", + "freq=np.arange(0,5.,0.05)\n", + "ax.plot(freq[0:halfpoint],firsthalf)\n", + "ax.set_title('power spectrum')\n", + "out=ax.set_xlabel('frequency (Hz)')\n", + "len(freq)\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Check Stull 8.6.1b (or Numerical Recipes 12.0.13) which says that squared power spectrum = variance\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:17:11.622908Z", + "start_time": "2022-03-03T21:17:11.618116Z" + } + }, + "outputs": [], + "source": [ + "print('\\nsimple cosine: velocity variance %10.3f' % (np.sum(onehz*onehz)/totsize))\n", + "print('simple cosine: Power spectrum sum %10.3f\\n' % (np.sum(Power)/totsize**2.))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Power spectrum of turbulent vertical velocity" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's apply fft to some real atmospheric measurements. We'll read in one of the files we downloaded at the beginning of the lab (you should be able to see these files on your local computer in this Lab9 directory)." 
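+ ,
+ "\n",
+ "A small aside before loading the data (this is not used in the lab's own cells, just a convenience worth knowing): for real-valued series like these measurements, numpy also provides np.fft.rfft and np.fft.rfftfreq, which return only the non-negative-frequency half of the spectrum together with the matching frequency axis. A minimal sketch, where the array x and the sample spacing dt are placeholders:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "\n",
+ "dt = 1.0 / 20.833                      # s, sample spacing (assumed, matches the tower data rate)\n",
+ "x = np.random.randn(1000)              # stand-in for a measured time series\n",
+ "X = np.fft.rfft(x)                     # spectrum at non-negative frequencies only\n",
+ "freqs = np.fft.rfftfreq(len(x), d=dt)  # frequencies in Hz, from 0 up to the Nyquist frequency\n",
+ "power = np.real(X * np.conj(X))        # power at those frequencies\n",
+ "```\n"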
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:19:06.371394Z", + "start_time": "2022-03-03T21:19:06.363885Z" + }, + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "#load data sampled at 20.8333 Hz\n", + "\n", + "td=np.load('miami_tower.npz') #load temp, uvel, vvel, wvel, minutes\n", + "print('keys: ',td.keys())\n", + "# Print the description saved in the file so we know what data are in the file\n", + "print(td['description'])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:20:16.106598Z", + "start_time": "2022-03-03T21:20:15.516697Z" + }, + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "# calculate the fft and plot the frequency-power spectrum\n", + "sampleRate=20.833\n", + "nyquistfreq=sampleRate/2.\n", + "\n", + "\n", + "totsize=36000\n", + "wvel=td['wvel'][0:totsize].flatten()\n", + "temp=td['temp'][0:totsize].flatten()\n", + "wvel = wvel - np.mean(wvel)\n", + "temp= temp - np.mean(temp)\n", + "flux=wvel*temp\n", + "\n", + "\n", + "halfpoint=int(np.floor(totsize/2.))\n", + "frequencies=np.arange(0,halfpoint)\n", + "frequencies=frequencies/halfpoint\n", + "frequencies=frequencies*nyquistfreq\n", + "\n", + "# raw spectrum -- no windowing or averaging\n", + "#First confirm Parseval's theorem\n", + "# (Numerical Recipes 12.1.10, p. 498)\n", + "\n", + "thefft=np.fft.fft(wvel)\n", + "Power=np.real(thefft*np.conj(thefft))\n", + "print('check Wiener-Khichine theorem for wvel')\n", + "print('\\nraw fft sum, full time series: %10.4f\\n' % (np.sum(Power)/totsize**2.))\n", + "print('velocity variance: %10.4f\\n' % (np.sum(wvel*wvel)/totsize))\n", + "\n", + "\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "frequencies[0]=np.NaN\n", + "Power[0]=np.NaN\n", + "Power_half=Power[:halfpoint:]\n", + "theAx.loglog(frequencies,Power_half)\n", + "theAx.set_title('raw wvel spectrum with $f^{-5/3}$')\n", + "theAx.set(xlabel='frequency (HZ)',ylabel='Power (m^2/s^2)')\n", + "#\n", + "# pick one point the line should pass through (by eye)\n", + "# note that y intercept will be at log10(freq)=0\n", + "# or freq=1 Hz\n", + "#\n", + "leftspec=np.log10(Power[1]*1.e-3)\n", + "logy=leftspec - 5./3.*np.log10(frequencies)\n", + "yvals=10.**logy\n", + "theAx.loglog(frequencies,yvals,'r-')\n", + "thePoint=theAx.plot(1.,Power[1]*1.e-3,'g+')\n", + "thePoint[0].set_markersize(15)\n", + "thePoint[0].set_marker('h')\n", + "thePoint[0].set_markerfacecolor('g')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## power spectrum layout" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here is what the entire power spectrum looks like, showing positive and negative frequencies" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:20:20.946232Z", + "start_time": "2022-03-03T21:20:20.597672Z" + }, + "scrolled": true + }, + "outputs": [], + "source": [ + "fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "out=theAx.semilogy(Power)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "and here is what fftshift does:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:20:41.449621Z", + "start_time": "2022-03-03T21:20:41.098947Z" + } + }, + "outputs": [], + "source": [ + "shift_power=np.fft.fftshift(Power)\n", + 
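"# np.fft.fftshift reorders the spectrum so the zero-frequency bin sits in the middle,\n",
+ "# with negative frequencies on the left and positive frequencies on the right\n",
+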
"fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "out=theAx.semilogy(shift_power)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Confirm that the fft at negative f is the complex conjugate of the fft at positive f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:01.817604Z", + "start_time": "2022-03-03T21:20:59.751476Z" + } + }, + "outputs": [], + "source": [ + "test_fft=np.fft.fft(wvel)\n", + "fig,theAx=plt.subplots(2,1,figsize=(8,10))\n", + "theAx[0].semilogy(np.real(test_fft))\n", + "theAx[1].semilogy(np.imag(test_fft))\n", + "print(test_fft[100])\n", + "print(test_fft[-100])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Windowing\n", + "\n", + "The FFT above is noisy, and there are several ways to smooth it. Numerical Recipes, p. 550 has a good discussion of \"windowing\" which helps remove the spurious power caused by the fact that the timeseries has a sudden stop and start.\n", + "Below we split the timeseries into 25 segements of 1440 points each, fft each segment then average the 25. We convolve each segment with a Bartlett window." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:29.850813Z", + "start_time": "2022-03-03T21:21:29.532796Z" + } + }, + "outputs": [], + "source": [ + "print('\\n\\n\\nTry a windowed spectrum (Bartlett window)\\n')\n", + "## windowing -- see p. Numerical recipes 550 for notation\n", + "\n", + "def calc_window(numvals=1440):\n", + " \"\"\"\n", + " Calculate a Bartlett window following\n", + " Numerical Recipes 13.4.13\n", + " \"\"\"\n", + "\n", + " halfpoint=int(np.floor(numvals/2.))\n", + " facm=halfpoint\n", + " facp=1/facm\n", + "\n", + " window=np.empty([numvals],float)\n", + " for j in np.arange(numvals):\n", + " window[j]=(1.-((j - facm)*facp)**2.)\n", + " return window\n", + "\n", + "#\n", + "# we need to normalize by the squared weights\n", + "# (see the fortran code on Numerical recipes p. 
550)\n", + "#\n", + "numvals=1440\n", + "window=calc_window(numvals=numvals)\n", + "sumw=np.sum(window**2.)/numvals\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "theAx.plot(window)\n", + "theAx.set_title('Bartlett window')\n", + "print('sumw: %10.3f' % sumw)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:31.330525Z", + "start_time": "2022-03-03T21:21:31.325843Z" + } + }, + "outputs": [], + "source": [ + "def do_fft(the_series,window,ensemble=25,title='title'):\n", + " numvals=len(window)\n", + " sumw=np.sum(window**2.)/numvals\n", + " subset=the_series.copy()\n", + " subset=subset[:len(window)*ensemble]\n", + " subset=np.reshape(subset,(ensemble,numvals))\n", + " winspec=np.zeros([numvals],float)\n", + "\n", + " for therow in np.arange(ensemble):\n", + " thedat=subset[therow,:]\n", + " thefft =np.fft.fft(thedat*window)\n", + " Power=thefft*np.conj(thefft)\n", + " #print('\\nensemble member: %d' % therow)\n", + " #print('\\nwindowed fft sum (m^2/s^2): %10.4f\\n' % (np.sum(Power)/(sumw*numvals**2.),))\n", + " #print('velocity variance (m^2/s^2): %10.4f\\n\\n' % (np.sum(thedat*thedat)/numvals,))\n", + " winspec=winspec + Power\n", + "\n", + " winspec=np.real(winspec/(numvals**2.*ensemble*sumw))\n", + " return winspec" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compare power spectra for wvel, theta, sensible heat flux" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### start with wvel" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:33.181739Z", + "start_time": "2022-03-03T21:21:33.174399Z" + } + }, + "outputs": [], + "source": [ + "winspec=do_fft(wvel,window)\n", + "sampleRate=20.833\n", + "nyquistfreq=sampleRate/2.\n", + "halfpoint=int(len(winspec)/2.)\n", + "averaged_freq=np.linspace(0,1.,halfpoint)*nyquistfreq\n", + "winspec=winspec[0:halfpoint]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:35.469984Z", + "start_time": "2022-03-03T21:21:34.907400Z" + }, + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "def do_plot(the_freq,the_spec,title=None,ylabel=None):\n", + " the_freq[0]=np.NaN\n", + " the_spec[0]=np.NaN\n", + " fig,theAx=plt.subplots(1,1,figsize=(8,6))\n", + " theAx.loglog(the_freq,the_spec,label='fft power')\n", + " if title:\n", + " theAx.set_title(title)\n", + " leftspec=np.log10(the_spec[int(np.floor(halfpoint/10.))])\n", + " logy=leftspec - 5./3.*np.log10(the_freq)\n", + " yvals=10.**logy\n", + " theAx.loglog(the_freq,yvals,'g-',label='$f^{-5/3}$')\n", + " theAx.set_xlabel('frequency (Hz)')\n", + " if ylabel:\n", + " out=theAx.set_ylabel(ylabel)\n", + " out=theAx.legend(loc='best')\n", + " return theAx\n", + "\n", + "labels=dict(title='wvel power spectrum',ylabel='$(m^2\\,s^{-2}\\,Hz^{-1})$')\n", + "ax=do_plot(averaged_freq,winspec,**labels)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:36.209773Z", + "start_time": "2022-03-03T21:21:35.633088Z" + } + }, + "outputs": [], + "source": [ + "winspec=do_fft(temp,window)\n", + "winspec=winspec[0:halfpoint]\n", + "labels=dict(title='Temperature power spectrum',ylabel='$K^{2}\\,Hz^{-1})$')\n", + "ax=do_plot(averaged_freq,winspec,**labels)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + 
"ExecuteTime": { + "end_time": "2022-03-03T21:21:37.174492Z", + "start_time": "2022-03-03T21:21:36.451203Z" + } + }, + "outputs": [], + "source": [ + "winspec=do_fft(flux,window)\n", + "winspec=winspec[0:halfpoint]\n", + "labels=dict(title='Heat flux power spectrum',ylabel='$K m s^{-1}\\,Hz^{-1})$')\n", + "ax=do_plot(averaged_freq,winspec,**labels)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Filtering" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can also filter our timeseries by removing frequencies we aren't interested in. Numerical Recipes discusses the approach on page 551. For example, suppose we want to filter all frequencies higher than 0.5 Hz from the vertical velocity data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:41.173979Z", + "start_time": "2022-03-03T21:21:41.030629Z" + } + }, + "outputs": [], + "source": [ + "wvel= wvel - np.mean(wvel)\n", + "thefft=np.fft.fft(wvel)\n", + "totsize=len(thefft)\n", + "samprate=20.8333 #Hz\n", + "the_time=np.arange(0,totsize,1/20.8333)\n", + "freq_bin_width=samprate/(totsize*2)\n", + "half_hz_index=int(np.floor(0.5/freq_bin_width))\n", + "filter_func=np.zeros_like(thefft,dtype=np.float64)\n", + "filter_func[0:half_hz_index]=1.\n", + "filter_func[-half_hz_index:]=1.\n", + "filtered_wvel=np.real(np.fft.ifft(filter_func*thefft))\n", + "fig,ax=plt.subplots(1,1,figsize=(10,6))\n", + "numpoints=500\n", + "ax.plot(the_time[:numpoints],filtered_wvel[:numpoints],label='filtered')\n", + "ax.plot(the_time[:numpoints],wvel[:numpoints],'g+',label='data')\n", + "ax.set(xlabel='time (seconds)',ylabel='wvel (m/s)')\n", + "out=ax.legend()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "collapsed": true, + "jupyter": { + "outputs_hidden": true + } + }, + "source": [ + "## 2D histogram of the optical depth $\\tau$\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Below I calculate the 2-d and averaged 1-d spectra for the optical depth, which gives the penetration depth of photons through a cloud, and is closely related to cloud thickness" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:32:33.540660Z", + "start_time": "2022-03-03T21:32:33.536922Z" + } + }, + "outputs": [], + "source": [ + "# this allows us to ignore (not print out) some warnings\n", + "import warnings\n", + "warnings.filterwarnings(\"ignore\",category=FutureWarning)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:33:03.591148Z", + "start_time": "2022-03-03T21:33:03.540959Z" + } + }, + "outputs": [], + "source": [ + "import netCDF4\n", + "from netCDF4 import Dataset\n", + "filelist=['a17.nc']\n", + "with Dataset(filelist[0]) as nc:\n", + " tau=nc.variables['tau'][...]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Character of the optical depth field\n", + "\n", + "The image below shows one of the marine boundary layer landsat scenes analyzed in \n", + "[Lewis et al., 2004](http://onlinelibrary.wiley.com/doi/10.1029/2003JD003742/full)\n", + "\n", + "It is a 2048 x 2048 pixel image taken by Landsat 7, with the visible reflectivity converted to\n", + "cloud optical depth. 
The pixels are 25 m x 25 m, so the scene extends for about 50 km x 50 km" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:33:26.163275Z", + "start_time": "2022-03-03T21:33:25.475800Z" + } + }, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "from mpl_toolkits.axes_grid1 import make_axes_locatable\n", + "plt.close('all')\n", + "fig,ax=plt.subplots(1,2,figsize=(13,7))\n", + "ax[0].set_title('landsat a17')\n", + "im0=ax[0].imshow(tau)\n", + "im1=ax[1].hist(tau.ravel())\n", + "ax[1].set_title('histogram of tau values')\n", + "divider = make_axes_locatable(ax[0])\n", + "cax = divider.append_axes(\"bottom\", size=\"5%\", pad=0.35)\n", + "out=fig.colorbar(im0,orientation='horizontal',cax=cax)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## ubc_fft class" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the next cell I define a class that calculates the 2-d fft for a square image\n", + "\n", + "in the method ```power_spectrum``` we calculate both the 2d fft and the power spectrum\n", + "and save them as class attributes. In the method ```annular_average``` I take the power spectrum,\n", + "which is the two-dimensional field $E(k_x, k_y)$ (in cartesian coordinates) or $E(k,\\theta)$ (in polar coordinates).\n", + "In the method ```annular_avg``` I take the average\n", + "\n", + "$$\n", + "\\overline{E}(k) = \\int_0^{2\\pi} E(k, \\theta) d\\theta\n", + "$$\n", + "\n", + "and plot that average with the method ```graph_spectrum```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:07.660224Z", + "start_time": "2022-03-03T21:34:07.645124Z" + } + }, + "outputs": [], + "source": [ + "from netCDF4 import Dataset\n", + "import numpy as np\n", + "import math\n", + "from numpy import fft\n", + "from matplotlib import pyplot as plt\n", + "\n", + "\n", + "class ubc_fft:\n", + "\n", + " def __init__(self, filename, var, scale):\n", + " \"\"\"\n", + " Input filename, var=variable name,\n", + " scale= the size of the pixel in km\n", + "\n", + " Constructer opens the netcdf file, reads the data and\n", + " saves the twodimensional fft\n", + " \"\"\"\n", + " with Dataset(filename,'r') as fin:\n", + " data = fin.variables[var][...]\n", + " data = data - data.mean()\n", + " if data.shape[0] != data.shape[1]:\n", + " raise ValueError('expecting square matrix')\n", + " self.xdim = data.shape[0] # size of each row of the array\n", + " self.midpoint = int(math.floor(self.xdim/2))\n", + " root,suffix = filename.split('.')\n", + " self.filename = root\n", + " self.var = var\n", + " self.scale = float(scale)\n", + " self.data = data\n", + " self.fft_data = fft.fft2(self.data)\n", + "\n", + " def power_spectrum(self):\n", + " \"\"\"\n", + " calculate the power spectrum for the 2-dimensional field\n", + " \"\"\"\n", + " #\n", + " # fft_shift moves the zero frequency point to the middle\n", + " # of the array\n", + " #\n", + " fft_shift = fft.fftshift(self.fft_data)\n", + " spectral_dens = fft_shift*np.conjugate(fft_shift)/(self.xdim*self.xdim)\n", + " spectral_dens = spectral_dens.real\n", + " #\n", + " # dimensional wavenumbers for 2dim spectrum (need only the kx\n", + " # dimensional since image is square\n", + " #\n", + " k_vals = np.arange(0,(self.midpoint))+1\n", + " k_vals = (k_vals-self.midpoint)/(self.xdim*self.scale)\n", + " self.spectral_dens=spectral_dens\n", + " self.k_vals=k_vals\n", + 
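"        # spectral_dens holds the fftshift-ed 2-d power spectral density E(k_x, k_y);\n",
+ "        # annular_avg (below) averages it around rings of constant |k| to build a 1-d spectrum\n",
+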
"\n", + " def annular_avg(self,avg_binwidth):\n", + " \"\"\"\n", + " integrate the 2-d power spectrum around a series of rings\n", + " of radius kradial and average into a set of 1-dimensional\n", + " radial bins\n", + " \"\"\"\n", + " #\n", + " # define the k axis which is the radius in the 2-d polar version of E\n", + " #\n", + " numbins = int(round((math.sqrt(2)*self.xdim/avg_binwidth),0)+1)\n", + "\n", + " avg_spec = np.zeros(numbins,np.float64)\n", + " bin_count = np.zeros(numbins,np.float64)\n", + "\n", + " print(\"\\t- INTEGRATING... \")\n", + " for i in range(self.xdim):\n", + " if (i%100) == 0:\n", + " print(\"\\t\\trow: {} completed\".format(i))\n", + " for j in range(self.xdim):\n", + " kradial = math.sqrt(((i+1)-self.xdim/2)**2+((j+1)-self.xdim/2)**2)\n", + " bin_num = int(math.floor(kradial/avg_binwidth))\n", + " avg_spec[bin_num]=avg_spec[bin_num]+ kradial*self.spectral_dens[i,j]\n", + " bin_count[bin_num]+=1\n", + "\n", + " for i in range(numbins):\n", + " if bin_count[i]>0:\n", + " avg_spec[i]=avg_spec[i]*avg_binwidth/bin_count[i]/(4*(math.pi**2))\n", + " self.avg_spec=avg_spec\n", + " #\n", + " # dimensional wavenumbers for 1-d average spectrum\n", + " #\n", + " self.k_bins=np.arange(numbins)+1\n", + " self.k_bins = self.k_bins[0:self.midpoint]\n", + " self.avg_spec = self.avg_spec[0:self.midpoint]\n", + "\n", + "\n", + "\n", + " def graph_spectrum(self, kol_slope=-5./3., kol_offset=1., \\\n", + " title=None):\n", + " \"\"\"\n", + " graph the annular average and compare it to Kolmogorov -5/3\n", + " \"\"\"\n", + " avg_spec=self.avg_spec\n", + " delta_k = 1./self.scale # 1./km (1/0.025 for landsat 25 meter pixels)\n", + " nyquist = delta_k * 0.5\n", + " knum = self.k_bins * (nyquist/float(len(self.k_bins)))# k = w/(25m)\n", + " #\n", + " # draw the -5/3 line through a give spot\n", + " #\n", + " kol = kol_offset*(knum**kol_slope)\n", + " fig,ax=plt.subplots(1,1,figsize=(8,8))\n", + " ax.loglog(knum,avg_spec,'r-',label='power')\n", + " ax.loglog(knum,kol,'k-',label=\"$k^{-5/3}$\")\n", + " ax.set(title=title,xlabel='k (1/km)',ylabel='$E_k$')\n", + " ax.legend()\n", + " self.plotax=ax" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:15.853713Z", + "start_time": "2022-03-03T21:34:15.510810Z" + } + }, + "outputs": [], + "source": [ + "plt.close('all')\n", + "plt.style.use('ggplot')\n", + "output = ubc_fft('a17.nc','tau',0.025)\n", + "output.power_spectrum()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:29.176416Z", + "start_time": "2022-03-03T21:34:28.420369Z" + } + }, + "outputs": [], + "source": [ + "fig,ax=plt.subplots(1,1,figsize=(7,7))\n", + "ax.set_title('landsat a17')\n", + "im0=ax.imshow(np.log10(output.spectral_dens))\n", + "ax.set_title('log10 of the 2-d power spectrum')\n", + "divider = make_axes_locatable(ax)\n", + "cax = divider.append_axes(\"bottom\", size=\"5%\", pad=0.35)\n", + "out=fig.colorbar(im0,orientation='horizontal',cax=cax)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:47.397968Z", + "start_time": "2022-03-03T21:34:40.370991Z" + } + }, + "outputs": [], + "source": [ + "avg_binwidth=5 #make the kradial bins 5 pixels wide\n", + "output.annular_avg(avg_binwidth)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:50.839648Z", + 
"start_time": "2022-03-03T21:34:50.110205Z" + } + }, + "outputs": [], + "source": [ + "output.graph_spectrum(kol_offset=2000.,title='Landsat {} power spectrum'.format(output.filename))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Problem -- lowpass filtering of a 2-d image\n", + "\n", + "For the image above, \n", + "we know that the 25 meter pixels correspond to k=1/0.025 = 40 $km^{-1}$. That means that the Nyquist\n", + "wavenumber is k=20 $km^{-1}$. Using that information, design a filter that removes all wavenumbers\n", + "higher than 1 $km^{-1}$. \n", + "\n", + "1) Use that filter to zero those values in the fft, then inverse transform and\n", + "plot the low-pass filtered image.\n", + "\n", + "2) Take the 1-d fft of the image and repeat the plot of the power spectrum to show that there is no power in wavenumbers higher than 1 $km^{-1}$.\n", + "\n", + "(Hint -- I used the fftshift function to put the low wavenumber cells in the center of the fft, which made it simpler to zero the outer cells. I then used ifftshift to reverse shift before inverse transforming to get the filtered\n", + "image.)\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## An aside about ffts: Using the fft to compute correlation\n", + "\n", + "Below I use aircraft measurments of $\\theta$ and wvel taken at 25 Hz. I compute the \n", + "autocorrelation using numpy.correlate and numpy.fft and show they are identical, as we'd expect" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:36:15.510615Z", + "start_time": "2022-03-03T21:36:14.995216Z" + } + }, + "outputs": [], + "source": [ + "#http://stackoverflow.com/questions/643699/how-can-i-use-numpy-correlate-to-do-autocorrelation\n", + "import numpy as np\n", + "%matplotlib inline\n", + "data = np.load('aircraft.npz')\n", + "wvel=data['wvel'] - data['wvel'].mean()\n", + "theta=data['theta'] - data['theta'].mean()\n", + "autocorr = np.correlate(wvel,wvel,mode='full')\n", + "auto_data = autocorr[wvel.size:]\n", + "ticks=np.arange(0,wvel.size)\n", + "ticks=ticks/25.\n", + "fig,ax = plt.subplots(1,1,figsize=(10,8))\n", + "ax.set(xlabel='lag (seconds)',title='autocorrelation of wvel using numpy.correlate')\n", + "out=ax.plot(ticks[:300],auto_data[:300])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:36:24.626642Z", + "start_time": "2022-03-03T21:36:24.494830Z" + } + }, + "outputs": [], + "source": [ + "import numpy.fft as fft\n", + "the_fft = fft.fft(wvel)\n", + "auto_fft = the_fft*np.conj(the_fft)\n", + "auto_fft = np.real(fft.ifft(auto_fft))\n", + "\n", + "fig,ax = plt.subplots(1,1,figsize=(10,8))\n", + "ax.plot(ticks[:300],auto_fft[:300])\n", + "out=ax.set(xlabel='lag (seconds)',title='autocorrelation using fft')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## An aside about ffts: Using ffts to find a wave envelope" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Say you have a wave in your data, but it is not across all of your domain, e.g:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-08T03:05:21.654984Z", + "start_time": "2022-03-08T03:05:21.323238Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Create a cosine wave modulated by a larger wavelength envelope wave\n", + "\n", + "# 
create a cosine wave that oscilates x times in 10 seconds\n", + "# sampled at 10 Hz, so there are 10*10 = 100 measurements in 10 seconds\n", + "#\n", + "%matplotlib inline\n", + "\n", + "fig,axs = plt.subplots(2,2,figsize=(12,8))\n", + "\n", + "deltaT=0.1\n", + "ticks = np.arange(0,10,deltaT)\n", + "\n", + "onehz=np.cos(2.0*np.pi*ticks)\n", + "axs[0,0].plot(ticks,onehz)\n", + "axs[0,0].set_title('wave one hz')\n", + "\n", + "# Define an evelope function that is zero between 0 and 2 second,\n", + "# modulated by a sine wave between 2 and 8 and zero afterwards\n", + "\n", + "envelope = np.empty_like(onehz)\n", + "envelope[0:20] = 0.0\n", + "envelope[20:80] = np.sin((1.0/6.0)*np.pi*ticks[0:60])\n", + "envelope[80:100] = 0.0\n", + "\n", + "axs[0,1].plot(ticks,envelope)\n", + "axs[0,1].set_title('envelope')\n", + "\n", + "envelopewave = onehz * envelope\n", + "\n", + "axs[1,0].plot(ticks,envelopewave)\n", + "axs[1,0].set_title('one hz with envelope')\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can do a standard FFT on this to see the power spectrum and then recover the original wave using the inverse FFT. However, we can also use FFTs in other ways to do wavelet analysis, e.g. to find the envelope function, using a method known as the Hilbert transform (see [Zimen et al. 2003](https://nextcloud.eoas.ubc.ca/s/E2ebfKNp2mF2kY5)). This method uses the following steps:\n", + "1. Perform the fourier transform of the function.\n", + "2. Apply the inverse fourier transform to only the positive wavenumber half of the Fourier spectrum.\n", + "3. Calculate the magnitude of the result from step 2 (which will have both real and imaginary parts) and multiply by 2.0 to get the correct magnitude of the envelope function." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-08T03:07:51.709585Z", + "start_time": "2022-03-08T03:07:51.558910Z" + }, + "code_folding": [] + }, + "outputs": [], + "source": [ + "# Calculate the envelope function\n", + "# Step 1. FFT\n", + "thefft=np.fft.fft(envelopewave)\n", + "\n", + "# Find the corresponding frequencies for each index of thefft\n", + "# note, these may not be the exact frequencies, as that will depend on the sampling resolution\n", + "# of your input data, but for this purpose we just want to know which ones are positive and\n", + "# which are negative so don't need to get the constant correct\n", + "freqs = np.fft.fftfreq(len(envelopewave))\n", + "\n", + "# Step 2. Hilbert transform: set all negative frequencies to 0:\n", + "filt_fft = thefft.copy() # make a copy of the array that we will change the negative frequencies in\n", + "filt_fft[freqs<0] = 0 # set all values for negative frequenices to 0\n", + "\n", + "# Inverse FFT on full field:\n", + "recover_sig = np.fft.ifft(thefft)\n", + "# inverse FFT on only the positive wavenumbers\n", + "positive_k_ifft = np.fft.ifft(filt_fft)\n", + "\n", + "# Step 3. 
Calculate magnitude and multiply by 2:\n", + "envelope_sig = 2.0 *np.abs(positive_k_ifft)\n", + "\n", + "# Plot the result\n", + "fig,axs = plt.subplots(1,1,figsize=(12,8))\n", + "\n", + "deltaT=0.1\n", + "ticks = np.arange(0,10,deltaT)\n", + "\n", + "axs.plot(ticks,envelopewave,linewidth=3,label='original signal')\n", + "\n", + "axs.plot(ticks,recover_sig,linestyle=':',color='k',linewidth=3,label ='signal via FFT')\n", + "\n", + "\n", + "axs.plot(ticks,envelope_sig,linewidth=3,color='b',label='envelope from FFT')\n", + "axs.set_title('Envelope from Hilbert transform')\n", + "axs.legend(loc='best')\n", + "plt.show()\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/_sources/rubrics.rst.txt b/_sources/rubrics.rst.txt new file mode 100644 index 0000000..18e03f6 --- /dev/null +++ b/_sources/rubrics.rst.txt @@ -0,0 +1,8 @@ +Links to rubrics +================ + +* `Grad project `_ + +* `Undergrad project `_ + +* `Presentations rubric `_ diff --git a/_sources/texts.rst.txt b/_sources/texts.rst.txt new file mode 100644 index 0000000..35767d9 --- /dev/null +++ b/_sources/texts.rst.txt @@ -0,0 +1,28 @@ +Optional textbooks +================== + + +The labs in this course are meant to be self-contained. If you'd like additional information/greater depth here are +some texts that we have found useful: + +- `Computational Physics with Python `_ + `Amazon.ca link `_ + + * Comprehensive treatment of our course topics using physics-examples (but obviously the techniques and equations apply + to all disciplines). Good python intro material in Chapters 2 and 3 can be downloaded at the author's website. 
+ + +- `Introduction to applied mathematics, Gilbert Strang `_ + Compact chapters on ordinary differential equations and linear algebra + +- `Numerical analysis -- Burden and Faires `_ + +- `Elemenatary differential equations -- Boyce and de Prima `_ + + +- `Numerical recipes free online edition `_ + +- `Schaum's outline of Numerical Analysis `_ + + +- `Schaum's outline of Differential Equations `_ diff --git a/_sources/ugrad_schedule.rst.txt b/_sources/ugrad_schedule.rst.txt new file mode 100644 index 0000000..5b4d080 --- /dev/null +++ b/_sources/ugrad_schedule.rst.txt @@ -0,0 +1,83 @@ +Dates for Undergraduate Class (ATSC 409) +======================================== + +January +------- +* Monday Jan 8 : First Class + +* Jan 15, 2 pm: Lab 1 Reading and Objectives Quiz + +* Monday Jan 15: Second Class + +* Jan 19, 6 pm: Lab 1 Assignment + +* January 19: Last date for withdrawal from class without a "W" standing + +* Jan 22, 2 pm: Lab 2 Reading and Objectives Quiz + +* Monday Jan 22: Third Class + +* Jan 26, 6 pm:Lab 2 Assignment + +* Jan 29, 2 pm: Lab 3 Reading and Objectives Quiz + +* Monday Jan 29: Fourth Class + +February +---------- +* Feb 2, 6 pm: Lab 3 Assignment + +* Monday, Feb 5: Fifth Class + +* Feb 9, 6 pm: Miniproject 1 + +* Feb 12, 2 pm: Lab 4 Reading and Objectives Quiz + +* Monday Feb 12: Sixth Class + +* Feb 16, 6 pm: Lab 4 Assignment + +* Feb 19-23: Reading Week, No class + +* Feb 26, 2 pm: Lab 5a Reading and Objectives Quiz + +* Monday Feb 26: Seventh Class + +March +----- + +* Mar 1, 6 pm: Lab 5a Assignment + +* Mar 1, 6 pm: Teams for Projects (a list of names due) + +* March 1: Last date to withdraw from course with 'W' + +* Monday Mar 4: Eighth Class + +* Mar 8, 6 pm: Miniproject 2 + +* Monday Mar 11: Ninth Class + +* Mar 15, 6 pm: Project Proposal + +* Mar 15, 6 pm: First iPeer evaluation + +* Mar 18, 2 pm: Lab 6 or Lab 7a Reading and Objectives Quiz + +* Monday Mar 18: Tenth Class + +* Mar 22, 6 pm: Lab 6 or Lab 7a Assignment + +* Monday Mar 25: Eleventh Class + +April +----- + +* Monday, Apr 8: Last Class + +* Apr 8, in class, Project Presentation + +* Apr 8, 6 pm, Second iPeer evaluation + +* Apr 12, 6 pm: Project + diff --git a/_sources/ugradsyllabus.rst.txt b/_sources/ugradsyllabus.rst.txt new file mode 100644 index 0000000..6c0b0f4 --- /dev/null +++ b/_sources/ugradsyllabus.rst.txt @@ -0,0 +1,298 @@ +Undergraduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: ATSC 409 +======================================================================================= + + +Calendar Entry +-------------- + +Web-based introduction to the practical numerical solution of ordinary +and partial differential equations including considerations of stability +and accuracy. Credit will not be granted for both ATSC 409 and ATSC +506/EOSC 511. + +Course Purpose +-------------- + +The students completing this course will be able to apply standard +numerical solution techniques to the solution of problems such as waves, +advection, population growth. + +Meeting Times +------------- +See canvas course page for scheduled class times and location + + +Instructors +----------- + +| Rachel White, rwhite@eoas.ubc.ca +| Susan Allen, sallen@eoas.ubc.ca + +See canvas course page for office hour locations + +Prerequisites +------------- + +Solution of Ordinary Differential Equations (MATH 215 or equivalent) AND +a programming course. Partial Differential Equations (Math 316 or Phys +312) is recommended. 
[1]_ + +Course Structure +---------------- + +This course is not lecture based. The course is an interactive, computer +based laboratory course. The computer will lead you through the +laboratory (like a set of lab notes) and you will answer problems most +of which use the computer. The course consists of three parts. A set of +interactive, computer based laboratory exercises, two mini-projects and +a final project. + +During the meeting times, there will be group worksheets to delve +into the material, brief presentations to help with technical +matters, time to ask questions in a group format and also individually +and time to read and work on the laboratories. + +It will be important to read the labs before class to do the +worksheets. To encourage good practices there are quizzes on canvas +for each lab. + +You can use a web-browser to examine the course exercises. Point your +browser to: + +https://rhwhite.github.io/numeric_2024/notebook_toc.html + +Grades +------ + + - Laboratory Exercises 20% (individual with collaboration, excellent/satisfactory/unsatisfactory grading) + - Mini-projects 30% (individual with collaboration) + - Quizzes 5% (individual) + - Worksheets 5% (group) + - Project Proposal 5% (group) + - Project 30% (group) + - Project Oral Presentation 5% (group) + +There will be 6 assigned exercise sets or 'Laboratory Exercises' based on the labs. +Note that these are not necessarily the same as the problems in the +lab and will generally be a much smaller set. Laboratory exercises +can be worked with partners or alone. Each student must upload their +own solution in their own words. + +The laboratory exercise sets are to be uploaded to the course CANVAS page. +Sometimes, rather than a large series of plots, you may wish to +include a summarizing table. If you do not understand the scope of a +problem, please ask. Help with the labs is +available 1) through piazza (see CANVAS) so you can contact your classmates +and ask them 2) during the weekly scheduled lab or 3) directly from the +instructors during the scheduled office hours (see canvas). + +Laboratory exercises will be graded as 'excellent', 'satisfactory' or 'unsatisfactory'. +Your grade on canvas will be given as: + +1.0 = excellent + +0.8 = satisfactory + +0 = unsatisfactory + +Grades will be returned within a week of the submission deadline. +If you receive a grade of 'satisfactory' or 'unsatisfactory' on your first submission, +you will be given an opportunity to resubmit the problems you got incorrect to try to +improve your grade. To get a score of 'excellent' on a resubmission, you must include +a full explanation of your understanding of why your initial answer was incorrect, and +what misconception, or mistake, you have corrected to get to your new answer. Resubmissions +will be due exactly 2 weeks after the original submission deadline. It is your responsibility +to manage the timing of the resubmission deadlines with the next laboratory exercise. 
+ +Your final Laboratory Exercise grade will be calculated from the number of excellent, satisfactory +and unsatisfactory grades you have from the 6 exercises: +4 or more submissions at 'Excellent', none 'Unsatisfactory': 100% +2 or more submissions at 'Excellent', none 'Unsatisfactory': 90% +1 or fewer 'Excellent', none 'Unsatisfactory': 80% +1 'Unsatisfactory' submission: 70% +2 'Unsatisfactory' submissions: 60% +3 'Unsatisfactory' submissions: 50% +4 or more 'Unsatisfactory' submissions: 0% +[2]_ + +The two mini-projects are longer assignments and slightly open-ended. +These mini-projects can be worked with partners or alone. Each +student must upload their own solution in their own words. + +Quizzes are done online, reflect the learning objectives of each lab +and are assigned to ensure you do the reading with enough depth to +participate fully in the class worksheets and have the background to +do the Laboratory Exercises. There will be a "grace space" policy +allowing you to miss one quiz. + +The in-class worksheets will be marked for a complete effort. There +will be a “grace space” policy allowing you to miss one class +worksheet. The grace space policy is to accommodate missed classes due +to illness, “away games” for athletes etc. In-class paper worksheets +are done as a group and are to be handed in (one worksheet only per +group) at the end of the worksheet time. + +**The project will be done as a group. The topic of the project should +be selected from a list provided by the instructors or in consultation +with the instructors.** + +Assignments, quizzes, mini-projects and the project are expected on +time. Late ones will be marked and then the mark will be multiplied by +:math:`(0.9)^{\rm (number\ of\ days\ or\ part\ days\ late)}`. + + +Contents +-------- + +For each laboratory we give an estimate of the number of hours. You will +need to complete six hours a week to keep up with the course material +covered in the quizzes. + +- Introductory Meeting + +- Laboratory One + + - Estimate: 8 hours + + - Quiz #1 Objectives [3]_ pertaining to Lab 1 + + - Assignment: See web. + +- Laboratory Two + + - Estimate: 6 hours + + - Quiz #2 Objectives pertaining to Lab 2 + + - Assignment: See web. + +- Laboratory Three + + - Estimate: 8 hours + + - Quiz #3 Objectives pertaining to Lab 3 + + - Assignment: See web. + +- Mini-Project #1 + + - Estimate: 4 hours + + - Details on web. + +- Laboratory Four + + - Estimate: 8 hours + + - Quiz #5 Objectives pertaining to Lab 4 + + - Assignment: See web. + +- Laboratory Five + + - Estimate: 6 hours + + - Quiz #6 Objectives pertaining to Lab 5 + + - Assignment: See web. + +- Mini-Project #2 + + - Estimate: 4 hours + + - Details on web. + +- Laboratory Seven (do 7 if you have PDE’s) + + - Estimate: 8 hours + + - Quiz #7-7 Objectives pertaining to Lab 7 + + - Assignment: See web. + +- Laboratory Six (do 6 if you do not have PDE’s) + + - Estimate: 8 hours + + - Quiz #7-6 Objectives pertaining to Lab 6 + + - Assignment: See web. + +- Project + + - Estimate: 16 hours + + - Proposal + + - 20 minute presentation to the class + + - Project report + + +University Statement on Values and Policies +------------------------------------------- + +UBC provides resources to support student learning and to maintain +healthy lifestyles but recognizes that sometimes crises arise and so +there are additional resources to access including those for survivors +of sexual violence. UBC values respect for the person and ideas of +all members of the academic community. 
Harassment and discrimination +are not tolerated nor is suppression of academic freedom. UBC provides +appropriate accommodation for students with disabilities and for +religious and cultural observances. UBC values academic honesty and +students are expected to acknowledge the ideas generated by others and +to uphold the highest academic standards in all of their +actions. Details of the policies and how to access support are +available here: + +https://senate.ubc.ca/policies-resources-support-student-success. + + +Supporting Diversity and Inclusion +----------------------------------- + +Atmospheric Science, Oceanography and the Earth Sciences have been +historically dominated by a small subset of +privileged people who are predominantly male and white, missing out on +many influential individuals' thoughts and +experiences. In this course, we would like to create an environment +that supports a diversity of thoughts, perspectives +and experiences, and honours your identities. To help accomplish this: + + - Please let us know your preferred name and/or set of pronouns. + - If you feel like your performance in our class is impacted by your experiences outside of class, please don’t hesitate to come and talk with us. We want to be a resource for you and to help you succeed. + - If an approach in class does not work well for you, please talk to any of the teaching team and we will do our best to make adjustments. Your suggestions are encouraged and appreciated. + - We are all still learning about diverse perspectives and identities. If something was said in class (by anyone) that made you feel uncomfortable, please talk to us about it. + + +Academic Integrity +------------------ + +Students are expected to learn material with honesty, integrity, and responsibility. + + - Honesty means you should not take credit for the work of others, + and if you work with others you are careful to give them the credit they deserve. + - Integrity means you follow the rules you are given and are respectful towards others + and their attempts to do so as well. + - Responsibility means that if you are unclear about the rules in a specific case, + you should contact the instructor for guidance. + +The course will involve a mixture of individual and group work. We try +to be flexible about this as our priority is for you to learn the +material rather than blindly follow rules, but there are +rules. Plagiarism (i.e. copying of others' work) and cheating (not +following the rules) can result in penalties ranging from zero on an +assignment to failing the course. + + +**For due dates etc., please see the Detailed Schedule.** + +.. [1] + If you have PDE’s, Lab 7 is strongly recommended, whereas if you do + not have PDE’s, do Lab 6 + +.. [2] + For assignments with a late penalty, we will consider grades of >=85% as Excellent, 60-85\% as Satisfactory, and below 60% as Unsatisfactory. + +.. [3] + Objectives is an older term for Learning Goals diff --git a/_static/UBC_EOAS_favicon.ico b/_static/UBC_EOAS_favicon.ico new file mode 100644 index 0000000..0aba095 Binary files /dev/null and b/_static/UBC_EOAS_favicon.ico differ diff --git a/_static/basic.css b/_static/basic.css new file mode 100644 index 0000000..f316efc --- /dev/null +++ b/_static/basic.css @@ -0,0 +1,925 @@ +/* + * basic.css + * ~~~~~~~~~ + * + * Sphinx stylesheet -- basic theme. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. 
+ * + */ + +/* -- main layout ----------------------------------------------------------- */ + +div.clearer { + clear: both; +} + +div.section::after { + display: block; + content: ''; + clear: left; +} + +/* -- relbar ---------------------------------------------------------------- */ + +div.related { + width: 100%; + font-size: 90%; +} + +div.related h3 { + display: none; +} + +div.related ul { + margin: 0; + padding: 0 0 0 10px; + list-style: none; +} + +div.related li { + display: inline; +} + +div.related li.right { + float: right; + margin-right: 5px; +} + +/* -- sidebar --------------------------------------------------------------- */ + +div.sphinxsidebarwrapper { + padding: 10px 5px 0 10px; +} + +div.sphinxsidebar { + float: left; + width: 230px; + margin-left: -100%; + font-size: 90%; + word-wrap: break-word; + overflow-wrap : break-word; +} + +div.sphinxsidebar ul { + list-style: none; +} + +div.sphinxsidebar ul ul, +div.sphinxsidebar ul.want-points { + margin-left: 20px; + list-style: square; +} + +div.sphinxsidebar ul ul { + margin-top: 0; + margin-bottom: 0; +} + +div.sphinxsidebar form { + margin-top: 10px; +} + +div.sphinxsidebar input { + border: 1px solid #98dbcc; + font-family: sans-serif; + font-size: 1em; +} + +div.sphinxsidebar #searchbox form.search { + overflow: hidden; +} + +div.sphinxsidebar #searchbox input[type="text"] { + float: left; + width: 80%; + padding: 0.25em; + box-sizing: border-box; +} + +div.sphinxsidebar #searchbox input[type="submit"] { + float: left; + width: 20%; + border-left: none; + padding: 0.25em; + box-sizing: border-box; +} + + +img { + border: 0; + max-width: 100%; +} + +/* -- search page ----------------------------------------------------------- */ + +ul.search { + margin: 10px 0 0 20px; + padding: 0; +} + +ul.search li { + padding: 5px 0 5px 20px; + background-image: url(file.png); + background-repeat: no-repeat; + background-position: 0 7px; +} + +ul.search li a { + font-weight: bold; +} + +ul.search li p.context { + color: #888; + margin: 2px 0 0 30px; + text-align: left; +} + +ul.keywordmatches li.goodmatch a { + font-weight: bold; +} + +/* -- index page ------------------------------------------------------------ */ + +table.contentstable { + width: 90%; + margin-left: auto; + margin-right: auto; +} + +table.contentstable p.biglink { + line-height: 150%; +} + +a.biglink { + font-size: 1.3em; +} + +span.linkdescr { + font-style: italic; + padding-top: 5px; + font-size: 90%; +} + +/* -- general index --------------------------------------------------------- */ + +table.indextable { + width: 100%; +} + +table.indextable td { + text-align: left; + vertical-align: top; +} + +table.indextable ul { + margin-top: 0; + margin-bottom: 0; + list-style-type: none; +} + +table.indextable > tbody > tr > td > ul { + padding-left: 0em; +} + +table.indextable tr.pcap { + height: 10px; +} + +table.indextable tr.cap { + margin-top: 10px; + background-color: #f2f2f2; +} + +img.toggler { + margin-right: 3px; + margin-top: 3px; + cursor: pointer; +} + +div.modindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +div.genindex-jumpbox { + border-top: 1px solid #ddd; + border-bottom: 1px solid #ddd; + margin: 1em 0 1em 0; + padding: 0.4em; +} + +/* -- domain module index --------------------------------------------------- */ + +table.modindextable td { + padding: 2px; + border-collapse: collapse; +} + +/* -- general body styles --------------------------------------------------- */ + 
+div.body { + min-width: 360px; + max-width: 800px; +} + +div.body p, div.body dd, div.body li, div.body blockquote { + -moz-hyphens: auto; + -ms-hyphens: auto; + -webkit-hyphens: auto; + hyphens: auto; +} + +a.headerlink { + visibility: hidden; +} + +a:visited { + color: #551A8B; +} + +h1:hover > a.headerlink, +h2:hover > a.headerlink, +h3:hover > a.headerlink, +h4:hover > a.headerlink, +h5:hover > a.headerlink, +h6:hover > a.headerlink, +dt:hover > a.headerlink, +caption:hover > a.headerlink, +p.caption:hover > a.headerlink, +div.code-block-caption:hover > a.headerlink { + visibility: visible; +} + +div.body p.caption { + text-align: inherit; +} + +div.body td { + text-align: left; +} + +.first { + margin-top: 0 !important; +} + +p.rubric { + margin-top: 30px; + font-weight: bold; +} + +img.align-left, figure.align-left, .figure.align-left, object.align-left { + clear: left; + float: left; + margin-right: 1em; +} + +img.align-right, figure.align-right, .figure.align-right, object.align-right { + clear: right; + float: right; + margin-left: 1em; +} + +img.align-center, figure.align-center, .figure.align-center, object.align-center { + display: block; + margin-left: auto; + margin-right: auto; +} + +img.align-default, figure.align-default, .figure.align-default { + display: block; + margin-left: auto; + margin-right: auto; +} + +.align-left { + text-align: left; +} + +.align-center { + text-align: center; +} + +.align-default { + text-align: center; +} + +.align-right { + text-align: right; +} + +/* -- sidebars -------------------------------------------------------------- */ + +div.sidebar, +aside.sidebar { + margin: 0 0 0.5em 1em; + border: 1px solid #ddb; + padding: 7px; + background-color: #ffe; + width: 40%; + float: right; + clear: right; + overflow-x: auto; +} + +p.sidebar-title { + font-weight: bold; +} + +nav.contents, +aside.topic, +div.admonition, div.topic, blockquote { + clear: left; +} + +/* -- topics ---------------------------------------------------------------- */ + +nav.contents, +aside.topic, +div.topic { + border: 1px solid #ccc; + padding: 7px; + margin: 10px 0 10px 0; +} + +p.topic-title { + font-size: 1.1em; + font-weight: bold; + margin-top: 10px; +} + +/* -- admonitions ----------------------------------------------------------- */ + +div.admonition { + margin-top: 10px; + margin-bottom: 10px; + padding: 7px; +} + +div.admonition dt { + font-weight: bold; +} + +p.admonition-title { + margin: 0px 10px 5px 0px; + font-weight: bold; +} + +div.body p.centered { + text-align: center; + margin-top: 25px; +} + +/* -- content of sidebars/topics/admonitions -------------------------------- */ + +div.sidebar > :last-child, +aside.sidebar > :last-child, +nav.contents > :last-child, +aside.topic > :last-child, +div.topic > :last-child, +div.admonition > :last-child { + margin-bottom: 0; +} + +div.sidebar::after, +aside.sidebar::after, +nav.contents::after, +aside.topic::after, +div.topic::after, +div.admonition::after, +blockquote::after { + display: block; + content: ''; + clear: both; +} + +/* -- tables ---------------------------------------------------------------- */ + +table.docutils { + margin-top: 10px; + margin-bottom: 10px; + border: 0; + border-collapse: collapse; +} + +table.align-center { + margin-left: auto; + margin-right: auto; +} + +table.align-default { + margin-left: auto; + margin-right: auto; +} + +table caption span.caption-number { + font-style: italic; +} + +table caption span.caption-text { +} + +table.docutils td, table.docutils th { + padding: 1px 
8px 1px 5px; + border-top: 0; + border-left: 0; + border-right: 0; + border-bottom: 1px solid #aaa; +} + +th { + text-align: left; + padding-right: 5px; +} + +table.citation { + border-left: solid 1px gray; + margin-left: 1px; +} + +table.citation td { + border-bottom: none; +} + +th > :first-child, +td > :first-child { + margin-top: 0px; +} + +th > :last-child, +td > :last-child { + margin-bottom: 0px; +} + +/* -- figures --------------------------------------------------------------- */ + +div.figure, figure { + margin: 0.5em; + padding: 0.5em; +} + +div.figure p.caption, figcaption { + padding: 0.3em; +} + +div.figure p.caption span.caption-number, +figcaption span.caption-number { + font-style: italic; +} + +div.figure p.caption span.caption-text, +figcaption span.caption-text { +} + +/* -- field list styles ----------------------------------------------------- */ + +table.field-list td, table.field-list th { + border: 0 !important; +} + +.field-list ul { + margin: 0; + padding-left: 1em; +} + +.field-list p { + margin: 0; +} + +.field-name { + -moz-hyphens: manual; + -ms-hyphens: manual; + -webkit-hyphens: manual; + hyphens: manual; +} + +/* -- hlist styles ---------------------------------------------------------- */ + +table.hlist { + margin: 1em 0; +} + +table.hlist td { + vertical-align: top; +} + +/* -- object description styles --------------------------------------------- */ + +.sig { + font-family: 'Consolas', 'Menlo', 'DejaVu Sans Mono', 'Bitstream Vera Sans Mono', monospace; +} + +.sig-name, code.descname { + background-color: transparent; + font-weight: bold; +} + +.sig-name { + font-size: 1.1em; +} + +code.descname { + font-size: 1.2em; +} + +.sig-prename, code.descclassname { + background-color: transparent; +} + +.optional { + font-size: 1.3em; +} + +.sig-paren { + font-size: larger; +} + +.sig-param.n { + font-style: italic; +} + +/* C++ specific styling */ + +.sig-inline.c-texpr, +.sig-inline.cpp-texpr { + font-family: unset; +} + +.sig.c .k, .sig.c .kt, +.sig.cpp .k, .sig.cpp .kt { + color: #0033B3; +} + +.sig.c .m, +.sig.cpp .m { + color: #1750EB; +} + +.sig.c .s, .sig.c .sc, +.sig.cpp .s, .sig.cpp .sc { + color: #067D17; +} + + +/* -- other body styles ----------------------------------------------------- */ + +ol.arabic { + list-style: decimal; +} + +ol.loweralpha { + list-style: lower-alpha; +} + +ol.upperalpha { + list-style: upper-alpha; +} + +ol.lowerroman { + list-style: lower-roman; +} + +ol.upperroman { + list-style: upper-roman; +} + +:not(li) > ol > li:first-child > :first-child, +:not(li) > ul > li:first-child > :first-child { + margin-top: 0px; +} + +:not(li) > ol > li:last-child > :last-child, +:not(li) > ul > li:last-child > :last-child { + margin-bottom: 0px; +} + +ol.simple ol p, +ol.simple ul p, +ul.simple ol p, +ul.simple ul p { + margin-top: 0; +} + +ol.simple > li:not(:first-child) > p, +ul.simple > li:not(:first-child) > p { + margin-top: 0; +} + +ol.simple p, +ul.simple p { + margin-bottom: 0; +} + +aside.footnote > span, +div.citation > span { + float: left; +} +aside.footnote > span:last-of-type, +div.citation > span:last-of-type { + padding-right: 0.5em; +} +aside.footnote > p { + margin-left: 2em; +} +div.citation > p { + margin-left: 4em; +} +aside.footnote > p:last-of-type, +div.citation > p:last-of-type { + margin-bottom: 0em; +} +aside.footnote > p:last-of-type:after, +div.citation > p:last-of-type:after { + content: ""; + clear: both; +} + +dl.field-list { + display: grid; + grid-template-columns: fit-content(30%) auto; +} + 
+dl.field-list > dt { + font-weight: bold; + word-break: break-word; + padding-left: 0.5em; + padding-right: 5px; +} + +dl.field-list > dd { + padding-left: 0.5em; + margin-top: 0em; + margin-left: 0em; + margin-bottom: 0em; +} + +dl { + margin-bottom: 15px; +} + +dd > :first-child { + margin-top: 0px; +} + +dd ul, dd table { + margin-bottom: 10px; +} + +dd { + margin-top: 3px; + margin-bottom: 10px; + margin-left: 30px; +} + +.sig dd { + margin-top: 0px; + margin-bottom: 0px; +} + +.sig dl { + margin-top: 0px; + margin-bottom: 0px; +} + +dl > dd:last-child, +dl > dd:last-child > :last-child { + margin-bottom: 0; +} + +dt:target, span.highlighted { + background-color: #fbe54e; +} + +rect.highlighted { + fill: #fbe54e; +} + +dl.glossary dt { + font-weight: bold; + font-size: 1.1em; +} + +.versionmodified { + font-style: italic; +} + +.system-message { + background-color: #fda; + padding: 5px; + border: 3px solid red; +} + +.footnote:target { + background-color: #ffa; +} + +.line-block { + display: block; + margin-top: 1em; + margin-bottom: 1em; +} + +.line-block .line-block { + margin-top: 0; + margin-bottom: 0; + margin-left: 1.5em; +} + +.guilabel, .menuselection { + font-family: sans-serif; +} + +.accelerator { + text-decoration: underline; +} + +.classifier { + font-style: oblique; +} + +.classifier:before { + font-style: normal; + margin: 0 0.5em; + content: ":"; + display: inline-block; +} + +abbr, acronym { + border-bottom: dotted 1px; + cursor: help; +} + +.translated { + background-color: rgba(207, 255, 207, 0.2) +} + +.untranslated { + background-color: rgba(255, 207, 207, 0.2) +} + +/* -- code displays --------------------------------------------------------- */ + +pre { + overflow: auto; + overflow-y: hidden; /* fixes display issues on Chrome browsers */ +} + +pre, div[class*="highlight-"] { + clear: both; +} + +span.pre { + -moz-hyphens: none; + -ms-hyphens: none; + -webkit-hyphens: none; + hyphens: none; + white-space: nowrap; +} + +div[class*="highlight-"] { + margin: 1em 0; +} + +td.linenos pre { + border: 0; + background-color: transparent; + color: #aaa; +} + +table.highlighttable { + display: block; +} + +table.highlighttable tbody { + display: block; +} + +table.highlighttable tr { + display: flex; +} + +table.highlighttable td { + margin: 0; + padding: 0; +} + +table.highlighttable td.linenos { + padding-right: 0.5em; +} + +table.highlighttable td.code { + flex: 1; + overflow: hidden; +} + +.highlight .hll { + display: block; +} + +div.highlight pre, +table.highlighttable pre { + margin: 0; +} + +div.code-block-caption + div { + margin-top: 0; +} + +div.code-block-caption { + margin-top: 1em; + padding: 2px 5px; + font-size: small; +} + +div.code-block-caption code { + background-color: transparent; +} + +table.highlighttable td.linenos, +span.linenos, +div.highlight span.gp { /* gp: Generic.Prompt */ + user-select: none; + -webkit-user-select: text; /* Safari fallback only */ + -webkit-user-select: none; /* Chrome/Safari */ + -moz-user-select: none; /* Firefox */ + -ms-user-select: none; /* IE10+ */ +} + +div.code-block-caption span.caption-number { + padding: 0.1em 0.3em; + font-style: italic; +} + +div.code-block-caption span.caption-text { +} + +div.literal-block-wrapper { + margin: 1em 0; +} + +code.xref, a code { + background-color: transparent; + font-weight: bold; +} + +h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { + background-color: transparent; +} + +.viewcode-link { + float: right; +} + +.viewcode-back { + float: right; + font-family: sans-serif; +} 
+ +div.viewcode-block:target { + margin: -1px -10px; + padding: 0 10px; +} + +/* -- math display ---------------------------------------------------------- */ + +img.math { + vertical-align: middle; +} + +div.body div.math p { + text-align: center; +} + +span.eqno { + float: right; +} + +span.eqno a.headerlink { + position: absolute; + z-index: 1; +} + +div.math:hover a.headerlink { + visibility: visible; +} + +/* -- printout stylesheet --------------------------------------------------- */ + +@media print { + div.document, + div.documentwrapper, + div.bodywrapper { + margin: 0 !important; + width: 100%; + } + + div.sphinxsidebar, + div.related, + div.footer, + #top-link { + display: none; + } +} \ No newline at end of file diff --git a/_static/doctools.js b/_static/doctools.js new file mode 100644 index 0000000..4d67807 --- /dev/null +++ b/_static/doctools.js @@ -0,0 +1,156 @@ +/* + * doctools.js + * ~~~~~~~~~~~ + * + * Base JavaScript utilities for all Sphinx HTML documentation. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +const BLACKLISTED_KEY_CONTROL_ELEMENTS = new Set([ + "TEXTAREA", + "INPUT", + "SELECT", + "BUTTON", +]); + +const _ready = (callback) => { + if (document.readyState !== "loading") { + callback(); + } else { + document.addEventListener("DOMContentLoaded", callback); + } +}; + +/** + * Small JavaScript module for the documentation. + */ +const Documentation = { + init: () => { + Documentation.initDomainIndexTable(); + Documentation.initOnKeyListeners(); + }, + + /** + * i18n support + */ + TRANSLATIONS: {}, + PLURAL_EXPR: (n) => (n === 1 ? 0 : 1), + LOCALE: "unknown", + + // gettext and ngettext don't access this so that the functions + // can safely bound to a different name (_ = Documentation.gettext) + gettext: (string) => { + const translated = Documentation.TRANSLATIONS[string]; + switch (typeof translated) { + case "undefined": + return string; // no translation + case "string": + return translated; // translation exists + default: + return translated[0]; // (singular, plural) translation tuple exists + } + }, + + ngettext: (singular, plural, n) => { + const translated = Documentation.TRANSLATIONS[singular]; + if (typeof translated !== "undefined") + return translated[Documentation.PLURAL_EXPR(n)]; + return n === 1 ? 
singular : plural; + }, + + addTranslations: (catalog) => { + Object.assign(Documentation.TRANSLATIONS, catalog.messages); + Documentation.PLURAL_EXPR = new Function( + "n", + `return (${catalog.plural_expr})` + ); + Documentation.LOCALE = catalog.locale; + }, + + /** + * helper function to focus on search bar + */ + focusSearchBar: () => { + document.querySelectorAll("input[name=q]")[0]?.focus(); + }, + + /** + * Initialise the domain index toggle buttons + */ + initDomainIndexTable: () => { + const toggler = (el) => { + const idNumber = el.id.substr(7); + const toggledRows = document.querySelectorAll(`tr.cg-${idNumber}`); + if (el.src.substr(-9) === "minus.png") { + el.src = `${el.src.substr(0, el.src.length - 9)}plus.png`; + toggledRows.forEach((el) => (el.style.display = "none")); + } else { + el.src = `${el.src.substr(0, el.src.length - 8)}minus.png`; + toggledRows.forEach((el) => (el.style.display = "")); + } + }; + + const togglerElements = document.querySelectorAll("img.toggler"); + togglerElements.forEach((el) => + el.addEventListener("click", (event) => toggler(event.currentTarget)) + ); + togglerElements.forEach((el) => (el.style.display = "")); + if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) togglerElements.forEach(toggler); + }, + + initOnKeyListeners: () => { + // only install a listener if it is really needed + if ( + !DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS && + !DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS + ) + return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.altKey || event.ctrlKey || event.metaKey) return; + + if (!event.shiftKey) { + switch (event.key) { + case "ArrowLeft": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const prevLink = document.querySelector('link[rel="prev"]'); + if (prevLink && prevLink.href) { + window.location.href = prevLink.href; + event.preventDefault(); + } + break; + case "ArrowRight": + if (!DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) break; + + const nextLink = document.querySelector('link[rel="next"]'); + if (nextLink && nextLink.href) { + window.location.href = nextLink.href; + event.preventDefault(); + } + break; + } + } + + // some keyboard layouts may need Shift to get / + switch (event.key) { + case "/": + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) break; + Documentation.focusSearchBar(); + event.preventDefault(); + } + }); + }, +}; + +// quick alias for translations +const _ = Documentation.gettext; + +_ready(Documentation.init); diff --git a/_static/documentation_options.js b/_static/documentation_options.js new file mode 100644 index 0000000..e9cd4be --- /dev/null +++ b/_static/documentation_options.js @@ -0,0 +1,13 @@ +const DOCUMENTATION_OPTIONS = { + VERSION: '22.1', + LANGUAGE: 'en', + COLLAPSE_INDEX: false, + BUILDER: 'html', + FILE_SUFFIX: '.html', + LINK_SUFFIX: '.html', + HAS_SOURCE: true, + SOURCELINK_SUFFIX: '.txt', + NAVIGATION_WITH_KEYS: false, + SHOW_SEARCH_SUMMARY: true, + ENABLE_SEARCH_SHORTCUTS: true, +}; \ No newline at end of file diff --git a/_static/file.png b/_static/file.png new file mode 100644 index 0000000..a858a41 Binary files /dev/null and b/_static/file.png differ diff --git a/_static/fonts/OpenSans-Light-webfont.ttf b/_static/fonts/OpenSans-Light-webfont.ttf new file mode 100644 index 0000000..460f12a Binary files /dev/null and b/_static/fonts/OpenSans-Light-webfont.ttf differ diff --git 
a/_static/fonts/OpenSans-Light.ttf b/_static/fonts/OpenSans-Light.ttf new file mode 100644 index 0000000..0d38189 Binary files /dev/null and b/_static/fonts/OpenSans-Light.ttf differ diff --git a/_static/fonts/OpenSans-Regular-webfont.ttf b/_static/fonts/OpenSans-Regular-webfont.ttf new file mode 100644 index 0000000..cb848e7 Binary files /dev/null and b/_static/fonts/OpenSans-Regular-webfont.ttf differ diff --git a/_static/fonts/OpenSans-Regular.ttf b/_static/fonts/OpenSans-Regular.ttf new file mode 100644 index 0000000..db43334 Binary files /dev/null and b/_static/fonts/OpenSans-Regular.ttf differ diff --git a/_static/images/bg-gradient-sand.png b/_static/images/bg-gradient-sand.png new file mode 100644 index 0000000..ae82378 Binary files /dev/null and b/_static/images/bg-gradient-sand.png differ diff --git a/_static/images/bg-sand.png b/_static/images/bg-sand.png new file mode 100644 index 0000000..6b2d79d Binary files /dev/null and b/_static/images/bg-sand.png differ diff --git a/_static/images/divider.png b/_static/images/divider.png new file mode 100644 index 0000000..54bd41a Binary files /dev/null and b/_static/images/divider.png differ diff --git a/_static/language_data.js b/_static/language_data.js new file mode 100644 index 0000000..367b8ed --- /dev/null +++ b/_static/language_data.js @@ -0,0 +1,199 @@ +/* + * language_data.js + * ~~~~~~~~~~~~~~~~ + * + * This script contains the language-specific data used by searchtools.js, + * namely the list of stopwords, stemmer, scorer and splitter. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ + +var stopwords = ["a", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "near", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with"]; + + +/* Non-minified version is copied as a separate JS file, if available */ + +/** + * Porter Stemmer + */ +var Stemmer = function() { + + var step2list = { + ational: 'ate', + tional: 'tion', + enci: 'ence', + anci: 'ance', + izer: 'ize', + bli: 'ble', + alli: 'al', + entli: 'ent', + eli: 'e', + ousli: 'ous', + ization: 'ize', + ation: 'ate', + ator: 'ate', + alism: 'al', + iveness: 'ive', + fulness: 'ful', + ousness: 'ous', + aliti: 'al', + iviti: 'ive', + biliti: 'ble', + logi: 'log' + }; + + var step3list = { + icate: 'ic', + ative: '', + alize: 'al', + iciti: 'ic', + ical: 'ic', + ful: '', + ness: '' + }; + + var c = "[^aeiou]"; // consonant + var v = "[aeiouy]"; // vowel + var C = c + "[^aeiouy]*"; // consonant sequence + var V = v + "[aeiou]*"; // vowel sequence + + var mgr0 = "^(" + C + ")?" + V + C; // [C]VC... is m>0 + var meq1 = "^(" + C + ")?" + V + C + "(" + V + ")?$"; // [C]VC[V] is m=1 + var mgr1 = "^(" + C + ")?" + V + C + V + C; // [C]VCVC... is m>1 + var s_v = "^(" + C + ")?" 
+ v; // vowel in stem + + this.stemWord = function (w) { + var stem; + var suffix; + var firstch; + var origword = w; + + if (w.length < 3) + return w; + + var re; + var re2; + var re3; + var re4; + + firstch = w.substr(0,1); + if (firstch == "y") + w = firstch.toUpperCase() + w.substr(1); + + // Step 1a + re = /^(.+?)(ss|i)es$/; + re2 = /^(.+?)([^s])s$/; + + if (re.test(w)) + w = w.replace(re,"$1$2"); + else if (re2.test(w)) + w = w.replace(re2,"$1$2"); + + // Step 1b + re = /^(.+?)eed$/; + re2 = /^(.+?)(ed|ing)$/; + if (re.test(w)) { + var fp = re.exec(w); + re = new RegExp(mgr0); + if (re.test(fp[1])) { + re = /.$/; + w = w.replace(re,""); + } + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1]; + re2 = new RegExp(s_v); + if (re2.test(stem)) { + w = stem; + re2 = /(at|bl|iz)$/; + re3 = new RegExp("([^aeiouylsz])\\1$"); + re4 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re2.test(w)) + w = w + "e"; + else if (re3.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + else if (re4.test(w)) + w = w + "e"; + } + } + + // Step 1c + re = /^(.+?)y$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(s_v); + if (re.test(stem)) + w = stem + "i"; + } + + // Step 2 + re = /^(.+?)(ational|tional|enci|anci|izer|bli|alli|entli|eli|ousli|ization|ation|ator|alism|iveness|fulness|ousness|aliti|iviti|biliti|logi)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step2list[suffix]; + } + + // Step 3 + re = /^(.+?)(icate|ative|alize|iciti|ical|ful|ness)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + suffix = fp[2]; + re = new RegExp(mgr0); + if (re.test(stem)) + w = stem + step3list[suffix]; + } + + // Step 4 + re = /^(.+?)(al|ance|ence|er|ic|able|ible|ant|ement|ment|ent|ou|ism|ate|iti|ous|ive|ize)$/; + re2 = /^(.+?)(s|t)(ion)$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + if (re.test(stem)) + w = stem; + } + else if (re2.test(w)) { + var fp = re2.exec(w); + stem = fp[1] + fp[2]; + re2 = new RegExp(mgr1); + if (re2.test(stem)) + w = stem; + } + + // Step 5 + re = /^(.+?)e$/; + if (re.test(w)) { + var fp = re.exec(w); + stem = fp[1]; + re = new RegExp(mgr1); + re2 = new RegExp(meq1); + re3 = new RegExp("^" + C + v + "[^aeiouwxy]$"); + if (re.test(stem) || (re2.test(stem) && !(re3.test(stem)))) + w = stem; + } + re = /ll$/; + re2 = new RegExp(mgr1); + if (re.test(w) && re2.test(w)) { + re = /.$/; + w = w.replace(re,""); + } + + // and turn initial Y back to y + if (firstch == "y") + w = firstch.toLowerCase() + w.substr(1); + return w; + } +} + diff --git a/_static/minus.png b/_static/minus.png new file mode 100644 index 0000000..d96755f Binary files /dev/null and b/_static/minus.png differ diff --git a/_static/mozilla.css b/_static/mozilla.css new file mode 100644 index 0000000..050b9d8 --- /dev/null +++ b/_static/mozilla.css @@ -0,0 +1,294 @@ +/* + * mozilla.css_t + * ~~~~~~~~~~~~~ + * + * Sphinx stylesheet -- Mozilla theme, based on the nature theme. + * + * :copyright: Copyright 2012 Alexis Metaireau. 
+ * :license: BSD + */ + +@import url("https://www.mozilla.org/tabzilla/media/css/tabzilla.css"); +@import url("basic.css"); + +@font-face { + font-family: 'OpenSansLight'; + src: url('fonts/OpenSans-Light-webfont.ttf') format('truetype'); + font-weight: normal; + font-style: normal; + +} + +@font-face { + font-family: 'OpenSans'; + src: url('fonts/OpenSans-Regular-webfont.ttf') format('truetype'); + font-weight: normal; + font-style: normal; + +} + +body { + font-family: 'OpenSans',sans-serif; + font-size: 100%; + color: rgb(51, 51, 51); + padding: 0; + margin: 0; + background: url("images/bg-sand.png") repeat scroll 0px 0px; +} + +div.document{ + border-top: 1px white solid; + background: url("images/bg-gradient-sand.png") repeat-x scroll 0px 0px; +} + +div.documentwrapper { + width: 90%; + margin: auto; +} + +div.bodywrapper { + margin: 0 0 0 230px; +} + +hr { + height: 10px; + background: url("images/divider.png") no-repeat scroll 50% 100% transparent; +} + +div.body { + background-color: #fff; + color: #3E4349; + padding: 0 30px 30px 30px; + font-size: 0.9em; + border-left: 1px solid rgba(0,0,0,0.2); +} + +div.sphinxsidebar { + float: left; + margin-left: 0px; + opacity: 1; + float: left; + margin: 0 10px; + margin-left: 20px; + margin-bottom: 20px; +} +.sphinxsidebar h2 { + text-align: center; + font-size: 24px; +} +.sphinxsidebar h2 img { + display: block; + margin: 0 auto 5px auto; +} +.sphinxsidebar h2 a { + color: #484848; +} +.sphinxsidebar nav { + font-family: 'OpenSansLight', sans-serif; + font-weight: normal; + font-size: 14px; + margin: auto; + margin-top: 120px; + margin-right: 20px; +} +.sphinxsidebar nav ul { + margin: 0; +} +.sphinxsidebar nav ul li { + list-style-type: none; + margin: 0; +} +.sphinxsidebar nav ul li li { +} +.sphinxsidebar nav ul li.current, +.sphinxsidebar nav ul li.toctree-l1:hover { + display: block; + background: rgba(255, 255, 255, 0.5); +} +.sphinxsidebar nav ul li.current .current { + background: rgba(255, 255, 255, 0.9); +} +.sphinxsidebar nav ul li.path > a { + font-family: 'OpenSans', sans-serif; + font-weight: 600; + letter-spacing: -0.02em; + color: #484848; +} +.sphinxsidebar nav ul li.current > ul > li { + display: block; +} +.sphinxsidebar nav a, +.sphinxsidebar nav b { + display: block; + padding: 10px 25px 10px 15px; + color: #000; + background-repeat: no-repeat; + background-position: 100% 50%; +} +.sphinxsidebar nav b { + font-weight: normal; + color: #484848; +} +.sphinxsidebar nav > ul > li { + border-top: 1px dotted rgba(0, 0, 0, 0.1); +} +.sphinxsidebar nav > ul > li:first-child { + border-top: 0; +} +.sphinxsidebar nav > ul > li > a { + padding-top: 15px; + padding-bottom: 15px; +} +.sphinxsidebar nav > ul > li.current:first-child { + background-color: transparent; +} +.sphinxsidebar nav > ul > li.current:first-child a:only-child { + background-image: none; +} + +div.sphinxsidebar h3, +div.sphinxsidebar h4 { + text-transform: uppercase; + font-family: 'OpenSans',sans-serif; + color: #222; + font-size: 1.2em; + font-weight: normal; + margin: 0; + padding: 5px 10px; +} + +div.sphinxsidebar ul { + margin: 10px 20px; + padding: 0; + color: #000; +} + +div.sphinxsidebar ul>li { + list-style-type: none; +} + + +div.footer { + color: #555; + width: 100%; + padding: 13px 0; + text-align: center; + font-size: 75%; +} + +div.footer a { + color: #444; + text-decoration: underline; +} + +div.sphinxsidebarwrapper{ + padding: 20px 0; +} + +/* -- body styles ----------------------------------------------------------- */ + +a { + color: #0095DD; 
+ text-decoration: none; + line-height: 110%; +} + +a:hover { + color: #00539F + text-decoration: underline; +} + +div.body h1, +div.body h2, +div.body h3, +div.body h4, +div.body h5, +div.body h6 { + font-family: 'OpenSansLight',sans-serif; + font-weight: normal; + color: #212224; + margin: 30px 0px 10px 0px; + padding: 5px 0 5px 0px; + text-shadow: 0px 1px 0 white +} + + +div.body h1 { + margin-top: 0; + padding-top: 10px; + font-size: 250%; + text-align: center; + padding-bottom: 30px; +} + + +div.body h2 { font-size: 180%; } +div.body h3 { font-size: 150%; } +div.body h4 { font-size: 130%; } +div.body h5 { font-size: 120%; } +div.body h6 { font-size: 100%; } + +div.body p, div.body dd, div.body li { + line-height: 1.5em; +} + +div.admonition p.admonition-title + p { + display: inline; +} + +div.note { + background-color: #eee; + border: 1px solid #ccc; +} + +div.seealso { + background-color: #ffc; + border: 1px solid #ff6; +} + +div.topic { + background-color: #eee; +} + +div.warning { + background-color: #ffe4e4; + border: 1px solid #f66; +} + +p.admonition-title { + display: inline; +} + +p.admonition-title:after { + content: ":"; +} + +pre{ + background-color: rgb(240, 240, 240); + font-size: 1.1em; + padding: 10px; + margin: 10px; + overflow: auto; +} + +tt { + background-color: #ecf0f3; + color: #222; + /* padding: 1px 2px; */ + font-size: 1.1em; + font-family: monospace; +} + +.viewcode-back { + font-family: 'OpenSans',sans-serif; +} + +div.viewcode-block:target { + background-color: #f4debf; + border-top: 1px solid #ac9; + border-bottom: 1px solid #ac9; +} + +#searchbox { + margin: 0px 20px auto auto; +} \ No newline at end of file diff --git a/_static/nbsphinx-broken-thumbnail.svg b/_static/nbsphinx-broken-thumbnail.svg new file mode 100644 index 0000000..4919ca8 --- /dev/null +++ b/_static/nbsphinx-broken-thumbnail.svg @@ -0,0 +1,9 @@ + + + + diff --git a/_static/nbsphinx-code-cells.css b/_static/nbsphinx-code-cells.css new file mode 100644 index 0000000..a3fb27c --- /dev/null +++ b/_static/nbsphinx-code-cells.css @@ -0,0 +1,259 @@ +/* remove conflicting styling from Sphinx themes */ +div.nbinput.container div.prompt *, +div.nboutput.container div.prompt *, +div.nbinput.container div.input_area pre, +div.nboutput.container div.output_area pre, +div.nbinput.container div.input_area .highlight, +div.nboutput.container div.output_area .highlight { + border: none; + padding: 0; + margin: 0; + box-shadow: none; +} + +div.nbinput.container > div[class*=highlight], +div.nboutput.container > div[class*=highlight] { + margin: 0; +} + +div.nbinput.container div.prompt *, +div.nboutput.container div.prompt * { + background: none; +} + +div.nboutput.container div.output_area .highlight, +div.nboutput.container div.output_area pre { + background: unset; +} + +div.nboutput.container div.output_area div.highlight { + color: unset; /* override Pygments text color */ +} + +/* avoid gaps between output lines */ +div.nboutput.container div[class*=highlight] pre { + line-height: normal; +} + +/* input/output containers */ +div.nbinput.container, +div.nboutput.container { + display: -webkit-flex; + display: flex; + align-items: flex-start; + margin: 0; + width: 100%; +} +@media (max-width: 540px) { + div.nbinput.container, + div.nboutput.container { + flex-direction: column; + } +} + +/* input container */ +div.nbinput.container { + padding-top: 5px; +} + +/* last container */ +div.nblast.container { + padding-bottom: 5px; +} + +/* input prompt */ +div.nbinput.container div.prompt pre, +/* for 
sphinx_immaterial theme: */ +div.nbinput.container div.prompt pre > code { + color: #307FC1; +} + +/* output prompt */ +div.nboutput.container div.prompt pre, +/* for sphinx_immaterial theme: */ +div.nboutput.container div.prompt pre > code { + color: #BF5B3D; +} + +/* all prompts */ +div.nbinput.container div.prompt, +div.nboutput.container div.prompt { + width: 4.5ex; + padding-top: 5px; + position: relative; + user-select: none; +} + +div.nbinput.container div.prompt > div, +div.nboutput.container div.prompt > div { + position: absolute; + right: 0; + margin-right: 0.3ex; +} + +@media (max-width: 540px) { + div.nbinput.container div.prompt, + div.nboutput.container div.prompt { + width: unset; + text-align: left; + padding: 0.4em; + } + div.nboutput.container div.prompt.empty { + padding: 0; + } + + div.nbinput.container div.prompt > div, + div.nboutput.container div.prompt > div { + position: unset; + } +} + +/* disable scrollbars and line breaks on prompts */ +div.nbinput.container div.prompt pre, +div.nboutput.container div.prompt pre { + overflow: hidden; + white-space: pre; +} + +/* input/output area */ +div.nbinput.container div.input_area, +div.nboutput.container div.output_area { + -webkit-flex: 1; + flex: 1; + overflow: auto; +} +@media (max-width: 540px) { + div.nbinput.container div.input_area, + div.nboutput.container div.output_area { + width: 100%; + } +} + +/* input area */ +div.nbinput.container div.input_area { + border: 1px solid #e0e0e0; + border-radius: 2px; + /*background: #f5f5f5;*/ +} + +/* override MathJax center alignment in output cells */ +div.nboutput.container div[class*=MathJax] { + text-align: left !important; +} + +/* override sphinx.ext.imgmath center alignment in output cells */ +div.nboutput.container div.math p { + text-align: left; +} + +/* standard error */ +div.nboutput.container div.output_area.stderr { + background: #fdd; +} + +/* ANSI colors */ +.ansi-black-fg { color: #3E424D; } +.ansi-black-bg { background-color: #3E424D; } +.ansi-black-intense-fg { color: #282C36; } +.ansi-black-intense-bg { background-color: #282C36; } +.ansi-red-fg { color: #E75C58; } +.ansi-red-bg { background-color: #E75C58; } +.ansi-red-intense-fg { color: #B22B31; } +.ansi-red-intense-bg { background-color: #B22B31; } +.ansi-green-fg { color: #00A250; } +.ansi-green-bg { background-color: #00A250; } +.ansi-green-intense-fg { color: #007427; } +.ansi-green-intense-bg { background-color: #007427; } +.ansi-yellow-fg { color: #DDB62B; } +.ansi-yellow-bg { background-color: #DDB62B; } +.ansi-yellow-intense-fg { color: #B27D12; } +.ansi-yellow-intense-bg { background-color: #B27D12; } +.ansi-blue-fg { color: #208FFB; } +.ansi-blue-bg { background-color: #208FFB; } +.ansi-blue-intense-fg { color: #0065CA; } +.ansi-blue-intense-bg { background-color: #0065CA; } +.ansi-magenta-fg { color: #D160C4; } +.ansi-magenta-bg { background-color: #D160C4; } +.ansi-magenta-intense-fg { color: #A03196; } +.ansi-magenta-intense-bg { background-color: #A03196; } +.ansi-cyan-fg { color: #60C6C8; } +.ansi-cyan-bg { background-color: #60C6C8; } +.ansi-cyan-intense-fg { color: #258F8F; } +.ansi-cyan-intense-bg { background-color: #258F8F; } +.ansi-white-fg { color: #C5C1B4; } +.ansi-white-bg { background-color: #C5C1B4; } +.ansi-white-intense-fg { color: #A1A6B2; } +.ansi-white-intense-bg { background-color: #A1A6B2; } + +.ansi-default-inverse-fg { color: #FFFFFF; } +.ansi-default-inverse-bg { background-color: #000000; } + +.ansi-bold { font-weight: bold; } +.ansi-underline { text-decoration: 
underline; } + + +div.nbinput.container div.input_area div[class*=highlight] > pre, +div.nboutput.container div.output_area div[class*=highlight] > pre, +div.nboutput.container div.output_area div[class*=highlight].math, +div.nboutput.container div.output_area.rendered_html, +div.nboutput.container div.output_area > div.output_javascript, +div.nboutput.container div.output_area:not(.rendered_html) > img{ + padding: 5px; + margin: 0; +} + +/* fix copybtn overflow problem in chromium (needed for 'sphinx_copybutton') */ +div.nbinput.container div.input_area > div[class^='highlight'], +div.nboutput.container div.output_area > div[class^='highlight']{ + overflow-y: hidden; +} + +/* hide copy button on prompts for 'sphinx_copybutton' extension ... */ +.prompt .copybtn, +/* ... and 'sphinx_immaterial' theme */ +.prompt .md-clipboard.md-icon { + display: none; +} + +/* Some additional styling taken form the Jupyter notebook CSS */ +.jp-RenderedHTMLCommon table, +div.rendered_html table { + border: none; + border-collapse: collapse; + border-spacing: 0; + color: black; + font-size: 12px; + table-layout: fixed; +} +.jp-RenderedHTMLCommon thead, +div.rendered_html thead { + border-bottom: 1px solid black; + vertical-align: bottom; +} +.jp-RenderedHTMLCommon tr, +.jp-RenderedHTMLCommon th, +.jp-RenderedHTMLCommon td, +div.rendered_html tr, +div.rendered_html th, +div.rendered_html td { + text-align: right; + vertical-align: middle; + padding: 0.5em 0.5em; + line-height: normal; + white-space: normal; + max-width: none; + border: none; +} +.jp-RenderedHTMLCommon th, +div.rendered_html th { + font-weight: bold; +} +.jp-RenderedHTMLCommon tbody tr:nth-child(odd), +div.rendered_html tbody tr:nth-child(odd) { + background: #f5f5f5; +} +.jp-RenderedHTMLCommon tbody tr:hover, +div.rendered_html tbody tr:hover { + background: rgba(66, 165, 245, 0.2); +} + diff --git a/_static/nbsphinx-gallery.css b/_static/nbsphinx-gallery.css new file mode 100644 index 0000000..365c27a --- /dev/null +++ b/_static/nbsphinx-gallery.css @@ -0,0 +1,31 @@ +.nbsphinx-gallery { + display: grid; + grid-template-columns: repeat(auto-fill, minmax(160px, 1fr)); + gap: 5px; + margin-top: 1em; + margin-bottom: 1em; +} + +.nbsphinx-gallery > a { + padding: 5px; + border: 1px dotted currentColor; + border-radius: 2px; + text-align: center; +} + +.nbsphinx-gallery > a:hover { + border-style: solid; +} + +.nbsphinx-gallery img { + max-width: 100%; + max-height: 100%; +} + +.nbsphinx-gallery > a > div:first-child { + display: flex; + align-items: start; + justify-content: center; + height: 120px; + margin-bottom: 5px; +} diff --git a/_static/nbsphinx-no-thumbnail.svg b/_static/nbsphinx-no-thumbnail.svg new file mode 100644 index 0000000..9dca758 --- /dev/null +++ b/_static/nbsphinx-no-thumbnail.svg @@ -0,0 +1,9 @@ + + + + diff --git a/_static/plus.png b/_static/plus.png new file mode 100644 index 0000000..7107cec Binary files /dev/null and b/_static/plus.png differ diff --git a/_static/pygments.css b/_static/pygments.css new file mode 100644 index 0000000..8054382 --- /dev/null +++ b/_static/pygments.css @@ -0,0 +1,75 @@ +pre { line-height: 125%; } +td.linenos .normal { color: #666666; background-color: transparent; padding-left: 5px; padding-right: 5px; } +span.linenos { color: #666666; background-color: transparent; padding-left: 5px; padding-right: 5px; } +td.linenos .special { color: #000000; background-color: #ffffc0; padding-left: 5px; padding-right: 5px; } +span.linenos.special { color: #000000; background-color: #ffffc0; padding-left: 
5px; padding-right: 5px; } +.highlight .hll { background-color: #ffffcc } +.highlight { background: #f0f0f0; } +.highlight .c { color: #60a0b0; font-style: italic } /* Comment */ +.highlight .err { border: 1px solid #FF0000 } /* Error */ +.highlight .k { color: #007020; font-weight: bold } /* Keyword */ +.highlight .o { color: #666666 } /* Operator */ +.highlight .ch { color: #60a0b0; font-style: italic } /* Comment.Hashbang */ +.highlight .cm { color: #60a0b0; font-style: italic } /* Comment.Multiline */ +.highlight .cp { color: #007020 } /* Comment.Preproc */ +.highlight .cpf { color: #60a0b0; font-style: italic } /* Comment.PreprocFile */ +.highlight .c1 { color: #60a0b0; font-style: italic } /* Comment.Single */ +.highlight .cs { color: #60a0b0; background-color: #fff0f0 } /* Comment.Special */ +.highlight .gd { color: #A00000 } /* Generic.Deleted */ +.highlight .ge { font-style: italic } /* Generic.Emph */ +.highlight .ges { font-weight: bold; font-style: italic } /* Generic.EmphStrong */ +.highlight .gr { color: #FF0000 } /* Generic.Error */ +.highlight .gh { color: #000080; font-weight: bold } /* Generic.Heading */ +.highlight .gi { color: #00A000 } /* Generic.Inserted */ +.highlight .go { color: #888888 } /* Generic.Output */ +.highlight .gp { color: #c65d09; font-weight: bold } /* Generic.Prompt */ +.highlight .gs { font-weight: bold } /* Generic.Strong */ +.highlight .gu { color: #800080; font-weight: bold } /* Generic.Subheading */ +.highlight .gt { color: #0044DD } /* Generic.Traceback */ +.highlight .kc { color: #007020; font-weight: bold } /* Keyword.Constant */ +.highlight .kd { color: #007020; font-weight: bold } /* Keyword.Declaration */ +.highlight .kn { color: #007020; font-weight: bold } /* Keyword.Namespace */ +.highlight .kp { color: #007020 } /* Keyword.Pseudo */ +.highlight .kr { color: #007020; font-weight: bold } /* Keyword.Reserved */ +.highlight .kt { color: #902000 } /* Keyword.Type */ +.highlight .m { color: #40a070 } /* Literal.Number */ +.highlight .s { color: #4070a0 } /* Literal.String */ +.highlight .na { color: #4070a0 } /* Name.Attribute */ +.highlight .nb { color: #007020 } /* Name.Builtin */ +.highlight .nc { color: #0e84b5; font-weight: bold } /* Name.Class */ +.highlight .no { color: #60add5 } /* Name.Constant */ +.highlight .nd { color: #555555; font-weight: bold } /* Name.Decorator */ +.highlight .ni { color: #d55537; font-weight: bold } /* Name.Entity */ +.highlight .ne { color: #007020 } /* Name.Exception */ +.highlight .nf { color: #06287e } /* Name.Function */ +.highlight .nl { color: #002070; font-weight: bold } /* Name.Label */ +.highlight .nn { color: #0e84b5; font-weight: bold } /* Name.Namespace */ +.highlight .nt { color: #062873; font-weight: bold } /* Name.Tag */ +.highlight .nv { color: #bb60d5 } /* Name.Variable */ +.highlight .ow { color: #007020; font-weight: bold } /* Operator.Word */ +.highlight .w { color: #bbbbbb } /* Text.Whitespace */ +.highlight .mb { color: #40a070 } /* Literal.Number.Bin */ +.highlight .mf { color: #40a070 } /* Literal.Number.Float */ +.highlight .mh { color: #40a070 } /* Literal.Number.Hex */ +.highlight .mi { color: #40a070 } /* Literal.Number.Integer */ +.highlight .mo { color: #40a070 } /* Literal.Number.Oct */ +.highlight .sa { color: #4070a0 } /* Literal.String.Affix */ +.highlight .sb { color: #4070a0 } /* Literal.String.Backtick */ +.highlight .sc { color: #4070a0 } /* Literal.String.Char */ +.highlight .dl { color: #4070a0 } /* Literal.String.Delimiter */ +.highlight .sd { color: #4070a0; 
font-style: italic } /* Literal.String.Doc */ +.highlight .s2 { color: #4070a0 } /* Literal.String.Double */ +.highlight .se { color: #4070a0; font-weight: bold } /* Literal.String.Escape */ +.highlight .sh { color: #4070a0 } /* Literal.String.Heredoc */ +.highlight .si { color: #70a0d0; font-style: italic } /* Literal.String.Interpol */ +.highlight .sx { color: #c65d09 } /* Literal.String.Other */ +.highlight .sr { color: #235388 } /* Literal.String.Regex */ +.highlight .s1 { color: #4070a0 } /* Literal.String.Single */ +.highlight .ss { color: #517918 } /* Literal.String.Symbol */ +.highlight .bp { color: #007020 } /* Name.Builtin.Pseudo */ +.highlight .fm { color: #06287e } /* Name.Function.Magic */ +.highlight .vc { color: #bb60d5 } /* Name.Variable.Class */ +.highlight .vg { color: #bb60d5 } /* Name.Variable.Global */ +.highlight .vi { color: #bb60d5 } /* Name.Variable.Instance */ +.highlight .vm { color: #bb60d5 } /* Name.Variable.Magic */ +.highlight .il { color: #40a070 } /* Literal.Number.Integer.Long */ \ No newline at end of file diff --git a/_static/schedules/Graduate_Schedule.pdf b/_static/schedules/Graduate_Schedule.pdf new file mode 100644 index 0000000..87fb05a Binary files /dev/null and b/_static/schedules/Graduate_Schedule.pdf differ diff --git a/_static/schedules/Undergraduate_Schedule.pdf b/_static/schedules/Undergraduate_Schedule.pdf new file mode 100644 index 0000000..6efd320 Binary files /dev/null and b/_static/schedules/Undergraduate_Schedule.pdf differ diff --git a/_static/searchtools.js b/_static/searchtools.js new file mode 100644 index 0000000..92da3f8 --- /dev/null +++ b/_static/searchtools.js @@ -0,0 +1,619 @@ +/* + * searchtools.js + * ~~~~~~~~~~~~~~~~ + * + * Sphinx JavaScript utilities for the full-text search. + * + * :copyright: Copyright 2007-2024 by the Sphinx team, see AUTHORS. + * :license: BSD, see LICENSE for details. + * + */ +"use strict"; + +/** + * Simple result scoring code. + */ +if (typeof Scorer === "undefined") { + var Scorer = { + // Implement the following function to further tweak the score for each result + // The function takes a result array [docname, title, anchor, descr, score, filename] + // and returns the new score. + /* + score: result => { + const [docname, title, anchor, descr, score, filename] = result + return score + }, + */ + + // query matches the full name of an object + objNameMatch: 11, + // or matches in the last dotted part of the object name + objPartialMatch: 6, + // Additive scores depending on the priority of the object + objPrio: { + 0: 15, // used to be importantResults + 1: 5, // used to be objectResults + 2: -5, // used to be unimportantResults + }, + // Used when the priority is not in the mapping. 
+ objPrioDefault: 0, + + // query found in title + title: 15, + partialTitle: 7, + // query found in terms + term: 5, + partialTerm: 2, + }; +} + +const _removeChildren = (element) => { + while (element && element.lastChild) element.removeChild(element.lastChild); +}; + +/** + * See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions#escaping + */ +const _escapeRegExp = (string) => + string.replace(/[.*+\-?^${}()|[\]\\]/g, "\\$&"); // $& means the whole matched string + +const _displayItem = (item, searchTerms, highlightTerms) => { + const docBuilder = DOCUMENTATION_OPTIONS.BUILDER; + const docFileSuffix = DOCUMENTATION_OPTIONS.FILE_SUFFIX; + const docLinkSuffix = DOCUMENTATION_OPTIONS.LINK_SUFFIX; + const showSearchSummary = DOCUMENTATION_OPTIONS.SHOW_SEARCH_SUMMARY; + const contentRoot = document.documentElement.dataset.content_root; + + const [docName, title, anchor, descr, score, _filename] = item; + + let listItem = document.createElement("li"); + let requestUrl; + let linkUrl; + if (docBuilder === "dirhtml") { + // dirhtml builder + let dirname = docName + "/"; + if (dirname.match(/\/index\/$/)) + dirname = dirname.substring(0, dirname.length - 6); + else if (dirname === "index/") dirname = ""; + requestUrl = contentRoot + dirname; + linkUrl = requestUrl; + } else { + // normal html builders + requestUrl = contentRoot + docName + docFileSuffix; + linkUrl = docName + docLinkSuffix; + } + let linkEl = listItem.appendChild(document.createElement("a")); + linkEl.href = linkUrl + anchor; + linkEl.dataset.score = score; + linkEl.innerHTML = title; + if (descr) { + listItem.appendChild(document.createElement("span")).innerHTML = + " (" + descr + ")"; + // highlight search terms in the description + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + } + else if (showSearchSummary) + fetch(requestUrl) + .then((responseData) => responseData.text()) + .then((data) => { + if (data) + listItem.appendChild( + Search.makeSearchSummary(data, searchTerms, anchor) + ); + // highlight search terms in the summary + if (SPHINX_HIGHLIGHT_ENABLED) // set in sphinx_highlight.js + highlightTerms.forEach((term) => _highlightText(listItem, term, "highlighted")); + }); + Search.output.appendChild(listItem); +}; +const _finishSearch = (resultCount) => { + Search.stopPulse(); + Search.title.innerText = _("Search Results"); + if (!resultCount) + Search.status.innerText = Documentation.gettext( + "Your search did not match any documents. Please make sure that all words are spelled correctly and that you've selected enough categories." + ); + else + Search.status.innerText = _( + "Search finished, found ${resultCount} page(s) matching the search query." + ).replace('${resultCount}', resultCount); +}; +const _displayNextItem = ( + results, + resultCount, + searchTerms, + highlightTerms, +) => { + // results left, load the summary and display it + // this is intended to be dynamic (don't sub resultsCount) + if (results.length) { + _displayItem(results.pop(), searchTerms, highlightTerms); + setTimeout( + () => _displayNextItem(results, resultCount, searchTerms, highlightTerms), + 5 + ); + } + // search finished, update title and status message + else _finishSearch(resultCount); +}; +// Helper function used by query() to order search results. +// Each input is an array of [docname, title, anchor, descr, score, filename]. 
+// Order the results by score (in opposite order of appearance, since the +// `_displayNextItem` function uses pop() to retrieve items) and then alphabetically. +const _orderResultsByScoreThenName = (a, b) => { + const leftScore = a[4]; + const rightScore = b[4]; + if (leftScore === rightScore) { + // same score: sort alphabetically + const leftTitle = a[1].toLowerCase(); + const rightTitle = b[1].toLowerCase(); + if (leftTitle === rightTitle) return 0; + return leftTitle > rightTitle ? -1 : 1; // inverted is intentional + } + return leftScore > rightScore ? 1 : -1; +}; + +/** + * Default splitQuery function. Can be overridden in ``sphinx.search`` with a + * custom function per language. + * + * The regular expression works by splitting the string on consecutive characters + * that are not Unicode letters, numbers, underscores, or emoji characters. + * This is the same as ``\W+`` in Python, preserving the surrogate pair area. + */ +if (typeof splitQuery === "undefined") { + var splitQuery = (query) => query + .split(/[^\p{Letter}\p{Number}_\p{Emoji_Presentation}]+/gu) + .filter(term => term) // remove remaining empty strings +} + +/** + * Search Module + */ +const Search = { + _index: null, + _queued_query: null, + _pulse_status: -1, + + htmlToText: (htmlString, anchor) => { + const htmlElement = new DOMParser().parseFromString(htmlString, 'text/html'); + for (const removalQuery of [".headerlinks", "script", "style"]) { + htmlElement.querySelectorAll(removalQuery).forEach((el) => { el.remove() }); + } + if (anchor) { + const anchorContent = htmlElement.querySelector(`[role="main"] ${anchor}`); + if (anchorContent) return anchorContent.textContent; + + console.warn( + `Anchored content block not found. Sphinx search tries to obtain it via DOM query '[role=main] ${anchor}'. Check your theme or template.` + ); + } + + // if anchor not specified or not found, fall back to main content + const docContent = htmlElement.querySelector('[role="main"]'); + if (docContent) return docContent.textContent; + + console.warn( + "Content block not found. Sphinx search tries to obtain it via DOM query '[role=main]'. Check your theme or template." 
+ ); + return ""; + }, + + init: () => { + const query = new URLSearchParams(window.location.search).get("q"); + document + .querySelectorAll('input[name="q"]') + .forEach((el) => (el.value = query)); + if (query) Search.performSearch(query); + }, + + loadIndex: (url) => + (document.body.appendChild(document.createElement("script")).src = url), + + setIndex: (index) => { + Search._index = index; + if (Search._queued_query !== null) { + const query = Search._queued_query; + Search._queued_query = null; + Search.query(query); + } + }, + + hasIndex: () => Search._index !== null, + + deferQuery: (query) => (Search._queued_query = query), + + stopPulse: () => (Search._pulse_status = -1), + + startPulse: () => { + if (Search._pulse_status >= 0) return; + + const pulse = () => { + Search._pulse_status = (Search._pulse_status + 1) % 4; + Search.dots.innerText = ".".repeat(Search._pulse_status); + if (Search._pulse_status >= 0) window.setTimeout(pulse, 500); + }; + pulse(); + }, + + /** + * perform a search for something (or wait until index is loaded) + */ + performSearch: (query) => { + // create the required interface elements + const searchText = document.createElement("h2"); + searchText.textContent = _("Searching"); + const searchSummary = document.createElement("p"); + searchSummary.classList.add("search-summary"); + searchSummary.innerText = ""; + const searchList = document.createElement("ul"); + searchList.classList.add("search"); + + const out = document.getElementById("search-results"); + Search.title = out.appendChild(searchText); + Search.dots = Search.title.appendChild(document.createElement("span")); + Search.status = out.appendChild(searchSummary); + Search.output = out.appendChild(searchList); + + const searchProgress = document.getElementById("search-progress"); + // Some themes don't use the search progress node + if (searchProgress) { + searchProgress.innerText = _("Preparing search..."); + } + Search.startPulse(); + + // index already loaded, the browser was quick! 
+ if (Search.hasIndex()) Search.query(query); + else Search.deferQuery(query); + }, + + _parseQuery: (query) => { + // stem the search terms and add them to the correct list + const stemmer = new Stemmer(); + const searchTerms = new Set(); + const excludedTerms = new Set(); + const highlightTerms = new Set(); + const objectTerms = new Set(splitQuery(query.toLowerCase().trim())); + splitQuery(query.trim()).forEach((queryTerm) => { + const queryTermLower = queryTerm.toLowerCase(); + + // maybe skip this "word" + // stopwords array is from language_data.js + if ( + stopwords.indexOf(queryTermLower) !== -1 || + queryTerm.match(/^\d+$/) + ) + return; + + // stem the word + let word = stemmer.stemWord(queryTermLower); + // select the correct list + if (word[0] === "-") excludedTerms.add(word.substr(1)); + else { + searchTerms.add(word); + highlightTerms.add(queryTermLower); + } + }); + + if (SPHINX_HIGHLIGHT_ENABLED) { // set in sphinx_highlight.js + localStorage.setItem("sphinx_highlight_terms", [...highlightTerms].join(" ")) + } + + // console.debug("SEARCH: searching for:"); + // console.info("required: ", [...searchTerms]); + // console.info("excluded: ", [...excludedTerms]); + + return [query, searchTerms, excludedTerms, highlightTerms, objectTerms]; + }, + + /** + * execute search (requires search index to be loaded) + */ + _performSearch: (query, searchTerms, excludedTerms, highlightTerms, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + const allTitles = Search._index.alltitles; + const indexEntries = Search._index.indexentries; + + // Collect multiple result groups to be sorted separately and then ordered. + // Each is an array of [docname, title, anchor, descr, score, filename]. + const normalResults = []; + const nonMainIndexResults = []; + + _removeChildren(document.getElementById("search-progress")); + + const queryLower = query.toLowerCase().trim(); + for (const [title, foundTitles] of Object.entries(allTitles)) { + if (title.toLowerCase().trim().includes(queryLower) && (queryLower.length >= title.length/2)) { + for (const [file, id] of foundTitles) { + let score = Math.round(100 * queryLower.length / title.length) + normalResults.push([ + docNames[file], + titles[file] !== title ? `${titles[file]} > ${title}` : title, + id !== null ? "#" + id : "", + null, + score, + filenames[file], + ]); + } + } + } + + // search for explicit entries in index directives + for (const [entry, foundEntries] of Object.entries(indexEntries)) { + if (entry.includes(queryLower) && (queryLower.length >= entry.length/2)) { + for (const [file, id, isMain] of foundEntries) { + const score = Math.round(100 * queryLower.length / entry.length); + const result = [ + docNames[file], + titles[file], + id ? 
"#" + id : "", + null, + score, + filenames[file], + ]; + if (isMain) { + normalResults.push(result); + } else { + nonMainIndexResults.push(result); + } + } + } + } + + // lookup as object + objectTerms.forEach((term) => + normalResults.push(...Search.performObjectSearch(term, objectTerms)) + ); + + // lookup as search terms in fulltext + normalResults.push(...Search.performTermsSearch(searchTerms, excludedTerms)); + + // let the scorer override scores with a custom scoring function + if (Scorer.score) { + normalResults.forEach((item) => (item[4] = Scorer.score(item))); + nonMainIndexResults.forEach((item) => (item[4] = Scorer.score(item))); + } + + // Sort each group of results by score and then alphabetically by name. + normalResults.sort(_orderResultsByScoreThenName); + nonMainIndexResults.sort(_orderResultsByScoreThenName); + + // Combine the result groups in (reverse) order. + // Non-main index entries are typically arbitrary cross-references, + // so display them after other results. + let results = [...nonMainIndexResults, ...normalResults]; + + // remove duplicate search results + // note the reversing of results, so that in the case of duplicates, the highest-scoring entry is kept + let seen = new Set(); + results = results.reverse().reduce((acc, result) => { + let resultStr = result.slice(0, 4).concat([result[5]]).map(v => String(v)).join(','); + if (!seen.has(resultStr)) { + acc.push(result); + seen.add(resultStr); + } + return acc; + }, []); + + return results.reverse(); + }, + + query: (query) => { + const [searchQuery, searchTerms, excludedTerms, highlightTerms, objectTerms] = Search._parseQuery(query); + const results = Search._performSearch(searchQuery, searchTerms, excludedTerms, highlightTerms, objectTerms); + + // for debugging + //Search.lastresults = results.slice(); // a copy + // console.info("search results:", Search.lastresults); + + // print the results + _displayNextItem(results, results.length, searchTerms, highlightTerms); + }, + + /** + * search for object names + */ + performObjectSearch: (object, objectTerms) => { + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const objects = Search._index.objects; + const objNames = Search._index.objnames; + const titles = Search._index.titles; + + const results = []; + + const objectSearchCallback = (prefix, match) => { + const name = match[4] + const fullname = (prefix ? prefix + "." : "") + name; + const fullnameLower = fullname.toLowerCase(); + if (fullnameLower.indexOf(object) < 0) return; + + let score = 0; + const parts = fullnameLower.split("."); + + // check for different match types: exact matches of full name or + // "last name" (i.e. 
last dotted part) + if (fullnameLower === object || parts.slice(-1)[0] === object) + score += Scorer.objNameMatch; + else if (parts.slice(-1)[0].indexOf(object) > -1) + score += Scorer.objPartialMatch; // matches in last name + + const objName = objNames[match[1]][2]; + const title = titles[match[0]]; + + // If more than one term searched for, we require other words to be + // found in the name/title/description + const otherTerms = new Set(objectTerms); + otherTerms.delete(object); + if (otherTerms.size > 0) { + const haystack = `${prefix} ${name} ${objName} ${title}`.toLowerCase(); + if ( + [...otherTerms].some((otherTerm) => haystack.indexOf(otherTerm) < 0) + ) + return; + } + + let anchor = match[3]; + if (anchor === "") anchor = fullname; + else if (anchor === "-") anchor = objNames[match[1]][1] + "-" + fullname; + + const descr = objName + _(", in ") + title; + + // add custom score for some objects according to scorer + if (Scorer.objPrio.hasOwnProperty(match[2])) + score += Scorer.objPrio[match[2]]; + else score += Scorer.objPrioDefault; + + results.push([ + docNames[match[0]], + fullname, + "#" + anchor, + descr, + score, + filenames[match[0]], + ]); + }; + Object.keys(objects).forEach((prefix) => + objects[prefix].forEach((array) => + objectSearchCallback(prefix, array) + ) + ); + return results; + }, + + /** + * search for full-text terms in the index + */ + performTermsSearch: (searchTerms, excludedTerms) => { + // prepare search + const terms = Search._index.terms; + const titleTerms = Search._index.titleterms; + const filenames = Search._index.filenames; + const docNames = Search._index.docnames; + const titles = Search._index.titles; + + const scoreMap = new Map(); + const fileMap = new Map(); + + // perform the search on the required terms + searchTerms.forEach((word) => { + const files = []; + const arr = [ + { files: terms[word], score: Scorer.term }, + { files: titleTerms[word], score: Scorer.title }, + ]; + // add support for partial matches + if (word.length > 2) { + const escapedWord = _escapeRegExp(word); + if (!terms.hasOwnProperty(word)) { + Object.keys(terms).forEach((term) => { + if (term.match(escapedWord)) + arr.push({ files: terms[term], score: Scorer.partialTerm }); + }); + } + if (!titleTerms.hasOwnProperty(word)) { + Object.keys(titleTerms).forEach((term) => { + if (term.match(escapedWord)) + arr.push({ files: titleTerms[term], score: Scorer.partialTitle }); + }); + } + } + + // no match but word was a required one + if (arr.every((record) => record.files === undefined)) return; + + // found search word in contents + arr.forEach((record) => { + if (record.files === undefined) return; + + let recordFiles = record.files; + if (recordFiles.length === undefined) recordFiles = [recordFiles]; + files.push(...recordFiles); + + // set score for the word in each file + recordFiles.forEach((file) => { + if (!scoreMap.has(file)) scoreMap.set(file, {}); + scoreMap.get(file)[word] = record.score; + }); + }); + + // create the mapping + files.forEach((file) => { + if (!fileMap.has(file)) fileMap.set(file, [word]); + else if (fileMap.get(file).indexOf(word) === -1) fileMap.get(file).push(word); + }); + }); + + // now check if the files don't contain excluded terms + const results = []; + for (const [file, wordList] of fileMap) { + // check if all requirements are matched + + // as search terms with length < 3 are discarded + const filteredTermCount = [...searchTerms].filter( + (term) => term.length > 2 + ).length; + if ( + wordList.length !== searchTerms.size && + 
wordList.length !== filteredTermCount + ) + continue; + + // ensure that none of the excluded terms is in the search result + if ( + [...excludedTerms].some( + (term) => + terms[term] === file || + titleTerms[term] === file || + (terms[term] || []).includes(file) || + (titleTerms[term] || []).includes(file) + ) + ) + break; + + // select one (max) score for the file. + const score = Math.max(...wordList.map((w) => scoreMap.get(file)[w])); + // add result to the result list + results.push([ + docNames[file], + titles[file], + "", + null, + score, + filenames[file], + ]); + } + return results; + }, + + /** + * helper function to return a node containing the + * search summary for a given text. keywords is a list + * of stemmed words. + */ + makeSearchSummary: (htmlText, keywords, anchor) => { + const text = Search.htmlToText(htmlText, anchor); + if (text === "") return null; + + const textLower = text.toLowerCase(); + const actualStartPosition = [...keywords] + .map((k) => textLower.indexOf(k.toLowerCase())) + .filter((i) => i > -1) + .slice(-1)[0]; + const startWithContext = Math.max(actualStartPosition - 120, 0); + + const top = startWithContext === 0 ? "" : "..."; + const tail = startWithContext + 240 < text.length ? "..." : ""; + + let summary = document.createElement("p"); + summary.classList.add("context"); + summary.textContent = top + text.substr(startWithContext, 240).trim() + tail; + + return summary; + }, +}; + +_ready(Search.init); diff --git a/_static/sphinx_highlight.js b/_static/sphinx_highlight.js new file mode 100644 index 0000000..8a96c69 --- /dev/null +++ b/_static/sphinx_highlight.js @@ -0,0 +1,154 @@ +/* Highlighting utilities for Sphinx HTML documentation. */ +"use strict"; + +const SPHINX_HIGHLIGHT_ENABLED = true + +/** + * highlight a given string on a node by wrapping it in + * span elements with the given class name. + */ +const _highlight = (node, addItems, text, className) => { + if (node.nodeType === Node.TEXT_NODE) { + const val = node.nodeValue; + const parent = node.parentNode; + const pos = val.toLowerCase().indexOf(text); + if ( + pos >= 0 && + !parent.classList.contains(className) && + !parent.classList.contains("nohighlight") + ) { + let span; + + const closestNode = parent.closest("body, svg, foreignObject"); + const isInSVG = closestNode && closestNode.matches("svg"); + if (isInSVG) { + span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); + } else { + span = document.createElement("span"); + span.classList.add(className); + } + + span.appendChild(document.createTextNode(val.substr(pos, text.length))); + const rest = document.createTextNode(val.substr(pos + text.length)); + parent.insertBefore( + span, + parent.insertBefore( + rest, + node.nextSibling + ) + ); + node.nodeValue = val.substr(0, pos); + /* There may be more occurrences of search term in this node. So call this + * function recursively on the remaining fragment. 
+ */ + _highlight(rest, addItems, text, className); + + if (isInSVG) { + const rect = document.createElementNS( + "http://www.w3.org/2000/svg", + "rect" + ); + const bbox = parent.getBBox(); + rect.x.baseVal.value = bbox.x; + rect.y.baseVal.value = bbox.y; + rect.width.baseVal.value = bbox.width; + rect.height.baseVal.value = bbox.height; + rect.setAttribute("class", className); + addItems.push({ parent: parent, target: rect }); + } + } + } else if (node.matches && !node.matches("button, select, textarea")) { + node.childNodes.forEach((el) => _highlight(el, addItems, text, className)); + } +}; +const _highlightText = (thisNode, text, className) => { + let addItems = []; + _highlight(thisNode, addItems, text, className); + addItems.forEach((obj) => + obj.parent.insertAdjacentElement("beforebegin", obj.target) + ); +}; + +/** + * Small JavaScript module for the documentation. + */ +const SphinxHighlight = { + + /** + * highlight the search words provided in localstorage in the text + */ + highlightSearchWords: () => { + if (!SPHINX_HIGHLIGHT_ENABLED) return; // bail if no highlight + + // get and clear terms from localstorage + const url = new URL(window.location); + const highlight = + localStorage.getItem("sphinx_highlight_terms") + || url.searchParams.get("highlight") + || ""; + localStorage.removeItem("sphinx_highlight_terms") + url.searchParams.delete("highlight"); + window.history.replaceState({}, "", url); + + // get individual terms from highlight string + const terms = highlight.toLowerCase().split(/\s+/).filter(x => x); + if (terms.length === 0) return; // nothing to do + + // There should never be more than one element matching "div.body" + const divBody = document.querySelectorAll("div.body"); + const body = divBody.length ? divBody[0] : document.querySelector("body"); + window.setTimeout(() => { + terms.forEach((term) => _highlightText(body, term, "highlighted")); + }, 10); + + const searchBox = document.getElementById("searchbox"); + if (searchBox === null) return; + searchBox.appendChild( + document + .createRange() + .createContextualFragment( + '" + ) + ); + }, + + /** + * helper function to hide the search marks again + */ + hideSearchWords: () => { + document + .querySelectorAll("#searchbox .highlight-link") + .forEach((el) => el.remove()); + document + .querySelectorAll("span.highlighted") + .forEach((el) => el.classList.remove("highlighted")); + localStorage.removeItem("sphinx_highlight_terms") + }, + + initEscapeListener: () => { + // only install a listener if it is really needed + if (!DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS) return; + + document.addEventListener("keydown", (event) => { + // bail for input elements + if (BLACKLISTED_KEY_CONTROL_ELEMENTS.has(document.activeElement.tagName)) return; + // bail with special keys + if (event.shiftKey || event.altKey || event.ctrlKey || event.metaKey) return; + if (DOCUMENTATION_OPTIONS.ENABLE_SEARCH_SHORTCUTS && (event.key === "Escape")) { + SphinxHighlight.hideSearchWords(); + event.preventDefault(); + } + }); + }, +}; + +_ready(() => { + /* Do not call highlightSearchWords() when we are on the search page. + * It will highlight words from the *previous* search query. + */ + if (typeof Search === "undefined") SphinxHighlight.highlightSearchWords(); + SphinxHighlight.initEscapeListener(); +}); diff --git a/genindex.html b/genindex.html new file mode 100644 index 0000000..951228f --- /dev/null +++ b/genindex.html @@ -0,0 +1,77 @@ + + + + + + + Index — Numeric course 22.1 documentation + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ + +

Index

+ +
+ +
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/getting_started.html b/getting_started.html new file mode 100644 index 0000000..e047340 --- /dev/null +++ b/getting_started.html @@ -0,0 +1,115 @@ + + + + + + + + Getting started — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/getting_started/installing_jupyter.html b/getting_started/installing_jupyter.html new file mode 100644 index 0000000..bdae948 --- /dev/null +++ b/getting_started/installing_jupyter.html @@ -0,0 +1,179 @@ + + + + + + + + Student installs — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Student installs

+

If you already have conda or anaconda installed, skip to ``Git install`` below

+
+

For MacOS new installs

+
    +
  1. Download miniconda from https://docs.conda.io/en/latest/miniconda.html – choose the correct Miniconda3 MacOSX 64-bit pkg file for your Mac (Intel chip or new M1/M2 Silicon) from the menu and run it, agreeing to the licences and accepting all defaults. You should install for “just me”

  2. To test your installation, open a fresh terminal window and at the prompt type which conda (unless you are using zsh; in that case use whence -p conda). You should see something resembling the following output, with your username instead of phil:

+
% which conda
+/Users/phil/opt/miniconda3/bin/conda
+
+
+
+
+

For Windows new installs

+
    +
  1. Download miniconda from https://docs.conda.io/en/latest/miniconda.html – choose the Miniconda3 Windows 64-bit download from the menu and run it, agreeing to the licences and accepting all defaults.

  2. +
+

The installer should suggest installing in a path that looks like:

+
C:\Users\phil\Miniconda3
+
+
+
    +
  1. Once the install completes, hit the Windows key and start typing anaconda. You should see a shortcut that looks like:

  2. +
+
Anaconda Powershell Prompt
+(Miniconda3)
+
+
+

Note that Windows comes with two different terminals ``cmd`` (old) and ``powershell`` (new). Always select the powershell version of the anaconda terminal

+
    +
  1. Select the shortcut. If the install was successful you should see something like:

  2. +
+
(base) (Miniconda3):Users/phil>
+
+
+

with your username substituted for phil.

+

Some useful troubleshooting websites if you have issues getting conda installed on windows: https://stackoverflow.com/questions/54501167/anaconda-and-git-bash-in-windows-conda-command-not-found https://stackoverflow.com/questions/44597662/conda-command-is-not-recognized-on-windows-10

+
+
+

Git install

+

Inside your powershell or MacOs terminal, install git using conda:

+
conda install git
+
+
+

and then set it up

+
git config --global user.name "Phil"
+git config --global user.email phil@example.com
+
+
+
+
+

Github account

+

To use the course materials and to work collaboratively for the project, you will need a github account. Sign up for a free account at https://github.com if you don’t already have one - you will need to use the same address you configured git for above.

+

Once you have your github account, you will need to set up a secure way to connect. If you think you might use github a lot, we recommend setting up an ssh connection - this is a longer set-up process, but then quicker each time you want to connect to git. Follow the instructions here:

+

https://docs.github.com/en/authentication/connecting-to-github-with-ssh

+
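If you go the SSH route, the GitHub guide linked above has the full, platform-specific steps. As a rough sketch (shown for a macOS/Linux-style shell, with a placeholder email address), it amounts to generating a key, adding it to the ssh-agent, and pasting the public key into your GitHub settings:

ssh-keygen -t ed25519 -C "you@example.com"
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/id_ed25519
cat ~/.ssh/id_ed25519.pub

Copy the printed public key into GitHub under Settings, then SSH and GPG keys, and test the connection with ssh -T git@github.com. The Windows steps in the GitHub guide differ slightly.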

A quicker set-up is to create a Personal Access Token. Follow the instructions here: https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic

+

Once you have a personal access token, you can enter it instead of your password when performing Git operations over HTTPS (see https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#using-a-personal-access-token-on-the-command-line).

+
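As an example of what this looks like in practice (a hypothetical session; the wording of the prompts may differ slightly), git asks for a username and password the first time you clone or push over HTTPS, and you paste the token in place of the password:

git clone https://github.com/YourGitHubId/numeric_2024.git
Username for 'https://github.com': YourGitHubId
Password for 'https://YourGitHubId@github.com': <paste your personal access token>

The token is not echoed as you paste it, so don't worry if the line appears to stay blank.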
+
+

Fork the course repository into your github account

+

Now go to the course website at https://github.com/rhwhite/numeric_2024 and fork the repository. The ‘fork’ button is on the upper right. This creates a ‘fork’ or copy of the current status of the repository in your account.

+

You now have your own fork of the course repository and should be taken to that page. Its name will be YourGitHubId/numeric_2024

+
+
+

Setting up the course repository

+

In the terminal, change directories to your home directory (called ~ for short) and make a new directory called repos to hold the course notebook repository. Change into repos and clone the course (do change YourGitHubId to your actual git hub id):

+
cd ~
+mkdir repos
+cd repos
+git clone https://github.com/YourGitHubId/numeric_2024.git
+
+
+
+
+

Creating the course environment

+

In the terminal, execute the following commands:

+
cd numeric_2024
+conda env create -f envs/environment.yaml
+conda activate numeric_2024
+
+
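Optionally, you can check that the environment was created and is active before moving on; the exact paths in the output will differ on your machine:

conda env list
python --version

The environment marked with a * in the conda env list output should be numeric_2024.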
+
+
+

Opening the notebook folder and working with lab 1

+

To make it possible to pull down changes to the repository (for example, as I write this section only lab1 and lab2 are available) you need to work in a copy of the notebook. So always copy the notebook to a new name. See below an example for lab1. I suggest you use your name rather than phil!

+
cd ~/repos/numeric_2024/notebooks
+cp lab1/01-lab1.ipynb lab1/phil-lab1.ipynb
+jupyter lab
+
+
+
+
[ ]:
+
+
+

+
+
+
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/getting_started/installing_jupyter.ipynb b/getting_started/installing_jupyter.ipynb new file mode 100644 index 0000000..337b31d --- /dev/null +++ b/getting_started/installing_jupyter.ipynb @@ -0,0 +1,186 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Student installs\n", + "\n", + "**If you already have conda or anaconda installed, skip to `Git install` below**\n", + "\n", + "\n", + "\n", + "## For MacOS new installs\n", + "\n", + "\n", + "1. Download miniconda from https://docs.conda.io/en/latest/miniconda.html -- choose the correct `Miniconda3 MacOSX 64-bit pkg` file for your Mac (Intel chip or new M1/M2 Silicon) from the menu and run it, agreeing to the licences and accepting all defaults. You should install for \"just me\"\n", + "\n", + "2. To test your installation, open a fresh terminal window and at the prompt type `which conda` (unless you are using zsh. In that case use `whence -p conda`). You should see something resembling the following output, with your username instead of `phil`:\n", + "\n", + "```\n", + "% which conda\n", + "/Users/phil/opt/miniconda3/bin/conda\n", + "```\n", + "\n", + "## For Windows new installs\n", + "\n", + "1. Download miniconda from https://docs.conda.io/en/latest/miniconda.html -- choose the `Miniconda3 Windows 64-bit`. download from the menu and run it, agreeing to the licences and accepting all defaults.\n", + "\n", + "The installer should suggest installing in a path that looks like:\n", + "\n", + "```\n", + "C:\\Users\\phil\\Miniconda3\n", + "```\n", + "\n", + "2. Once the install completes hit the windows key and start typing `anaconda`. You should see a shortcut that looks like:\n", + "\n", + "```\n", + "Anaconda Powershell Prompt\n", + "(Miniconda3)\n", + "```\n", + "\n", + "**Note that Windows comes with two different terminals `cmd` (old) and `powershell` (new). Always select the powershell version of the anaconda terminal**\n", + "\n", + "3. Select the short cut. If the install was successful you should see something like:\n", + "\n", + "```\n", + "(base) (Miniconda3):Users/phil>\n", + "```\n", + "with your username substituted for phil.\n", + "\n", + "Some useful troubleshooting websites if you have issues getting conda installed on windows: \n", + "https://stackoverflow.com/questions/54501167/anaconda-and-git-bash-in-windows-conda-command-not-found\n", + "https://stackoverflow.com/questions/44597662/conda-command-is-not-recognized-on-windows-10\n", + "\n", + "## Git install\n", + "\n", + "Inside your powershell or MacOs terminal, install git using conda:\n", + "\n", + "```\n", + "conda install git\n", + "```\n", + "\n", + "and then set it up\n", + "\n", + "```\n", + "git config --global user.name \"Phil\"\n", + "git config --global user.email phil@example.com\n", + "```\n", + "\n", + "## Github account\n", + "\n", + "To use the course materials and to work collaboratively for the project, you will need a github account. Sign up for a free account at https://github.com if you don't already have one - you will need to use the same address you configured git for above. \n", + "\n", + "Once you have your github account, you will need to set up a secure way to connect. If you think you might use github a lot, we recommend setting up an ssh connection - this is a longer set-up process, but then quicker each time you want to connect to git. 
Follow the instructions here: \n", + "\n", + "https://docs.github.com/en/authentication/connecting-to-github-with-ssh\n", + "\n", + "A quicker set-up is to create a Personal Access Token. Follow the instructions here:\n", + "https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#creating-a-personal-access-token-classic\n", + "\n", + "Once you have a personal access token, you can enter it instead of your password when performing Git operations over HTTPS (see https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/managing-your-personal-access-tokens#using-a-personal-access-token-on-the-command-line). \n", + "\n", + "## Fork the course repository into your github account\n", + "\n", + "Now go to the course website at https://github.com/rhwhite/numeric_2024 and fork the repository. The 'fork' button is on the upper right. This creates a 'fork' or copy of the current status of the repository in your account. \n", + "\n", + "You now have your own fork of the course repository and should be taken to that page. Its name will be YourGitHubId/numeric_2024\n", + "\n", + "## Setting up the course repository\n", + "\n", + "In the terminal, change directories to your home directory (called `~` for short) and make a new directory\n", + "called `repos` to hold the course notebook repository. Change into `repos` and clone the course (do change YourGitHubId to your actual git hub id):\n", + "\n", + "```\n", + "cd ~\n", + "mkdir repos\n", + "cd repos\n", + "git clone https://github.com/YourGitHubId/numeric_2024.git\n", + "```\n", + "\n", + "## Creating the course environment\n", + "\n", + "In the terminal, execute the following commands:\n", + "\n", + "```\n", + "cd numeric_2024\n", + "conda env create -f envs/environment.yaml\n", + "conda activate numeric_2024\n", + "```\n", + "\n", + "## Opening the notebook folder and working with lab 1\n", + "\n", + "To make it possible to pull down changes to the repository (for example, as I write this section only lab1 and lab2 are available) you need to work in a copy of the notebook. So always copy the notebook to a new name. See below an example for lab1. 
I suggest you use your name rather than phil!\n", + "\n", + "```\n", + "cd ~/repos/numeric_2024/notebooks\n", + "cp lab1/01-lab1.ipynb lab1/phil-lab1.ipynb\n", + "jupyter lab\n", + "```\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.11.5" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/getting_started/python.html b/getting_started/python.html new file mode 100644 index 0000000..3d8b17e --- /dev/null +++ b/getting_started/python.html @@ -0,0 +1,335 @@ + + + + + + + + The command line shell and git — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

The command line shell and git

+ +
+

Using the command line

+
+

Powershell and Bash common commands

+
    +
  • To go to your $HOME folder:

  • +
+
cd ~
+
+or
+
+cd $HOME
+
+
+
    +
  • To open explorer or finder for the current folder:

  • +
+
windows explorer do:
+
+   start .
+
+MacOs finder do:
+
+   open .
+
+
+
    +
  • To move up one folder:

  • +
+
cd ..
+
+
+
    +
  • To save typing, remember that hitting the tab key completes filenames

  • +
+
+
+

To configure powershell on windows

+
    +
  • first start a powershell terminal with admin privileges, then type:

    +

    set-executionpolicy remotesigned

    +
  • +
  • then, in your miniconda3 powershell profile, do:

    +

    Test-Path $profile

    +

    to see whether you have an existing profile.

    +
  • +
  • if you don’t have a profile, then do the following (this will overwrite an existing profile, so be aware):

    +

    New-Item -Path $Profile -Type File -Force

    +
  • +
  • To add to your profile, open with:

    +

    start $profile

    +
  • +
+
+
+

To configure bash or zsh on MacOS

+
    +
  • open a terminal then type either

    +

    open .bash_profile

    +

    or for Catalina

    +

    open .zshenv

    +
  • +
+
+
+

Bash and powershell command reference

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

| Cmdlet | Alias | Bash Equivalent | Description |
| ------- | ------- | ------- | ------- |
| Get-ChildItem | gci | ls | List the directories and files in the current location. |
| Set-Location | sl | cd | Change to the directory at the given path. Typing .. rather than a path will move up one directory. |
| Push-Location | pushd | pushd | Changes to the directory. |
| Pop-Location | popd | popd | Changes back to the previous directory after using pushd |
| New-Item | ni | (touch) | Creates a new item. Used with no parameter, the item is by default a file. Using mkdir is a shortcut for including the parameter -ItemType dir. |
| mkdir | none | mkdir | Creates a new directory. (See New-Item.) |
| Explorer | start . | open . | Open something using File Explorer (the GUI) |
| Remove-Item | rm | rm | Deletes something. Permanently! |
| Move-Item | mv | mv | Moves something. Takes two arguments - first a filename (i.e. its present path), then a path for its new location (including the name it should have there). By not changing the path, it can be used to rename files. |
| Copy-Item | cp | cp | Copies a file to a new location. Takes same arguments as move, but keeps the original file in its location. |
| Write-Output | write | echo | Outputs whatever you type. Use redirection to output to a file. Redirection with >> will add to the file, rather than overwriting contents. |
| Get-Content | gc | cat | Gets the contents of a file and prints it to the screen. Adding the parameter -TotalCount followed by a number x prints only the first x lines. Adding the parameter -Tail followed by a number x prints only the final x lines. |
| Select-String | sls | (grep) | Searches for specific content. |
| Measure-Object | measure | (wc) | Gets statistical information about an object. Use Get-Content and pipe the output to Measure-Object with the parameters -line, -word, and -character to get word count information. |
| > | none | > | Redirection. Puts the output of the command to the left of > into a file to the right of >. |
| \| | none | \| | Piping. Takes the output of the command to the left and uses it as the input for the command to the right. |
| Get-Help | none | man | Gets the help file for a cmdlet. Adding the parameter -online opens the help page on TechNet. |
| exit | none | exit | Exits PowerShell |

+

Remember the keyboard shortcuts of tab for auto-completion and the up and down arrows to scroll through recent commands. These shortcuts can save a lot of typing!

+
+
+
+

Git

+ +
+
+

Pulling changes from the github repository

+

When we commit changes to the master branch and push to our github repository, you’ll need to fetch those changes to keep current. To do that:

+
    +
  1. Go to your fork of the repository on github. You should see a statement like “This branch is 1 commit ahead, 2 commits behind rhwhite:main”.

  2. Click ‘Fetch upstream’ beside that statement and then “Fetch and merge”. Now the statement should be something like “This branch is 2 commits ahead of rhwhite:main”.

  3. Pull the changes into your own computer space. Make sure you aren’t going to clobber any of your own files:

     git status

     you can ignore “untracked files”, but pay attention to any files labeled “modified”. Those will be overwritten when you reset to our commit, so copy them to a new name or folder.

  4. Get the new commit with

     git pull

     (a command-line alternative that uses an upstream remote is sketched below)
+
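If you prefer to stay in the terminal rather than use GitHub’s ‘Fetch upstream’ button, a rough sketch of the equivalent commands (assuming your clone’s default remote is origin and the course branch is main) is to register the course repository as an upstream remote once, then fetch and merge from it:

cd ~/repos/numeric_2024
git remote add upstream https://github.com/rhwhite/numeric_2024.git
git fetch upstream
git merge upstream/main
git push origin main

The last command is optional; it simply updates your fork on GitHub to match your local copy.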
+
+
+

Books and tutorials

+ +
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/getting_started/python.ipynb b/getting_started/python.ipynb new file mode 100644 index 0000000..ea84fe2 --- /dev/null +++ b/getting_started/python.ipynb @@ -0,0 +1,229 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# The command line shell and git\n", + "\n", + " - The default shell on OSX is bash, which is taught in this set of\n", + " lessons: or in this\n", + " [detailed bash reference](https://programminghistorian.org/en/lessons/intro-to-bash)\n", + " - if you are on Windows, powershell is somewhat similar -- here is\n", + " a table listing commands for both shell side by side taken from\n", + " this in-depth [powershell tutorial](https://programminghistorian.org/en/lessons/intro-to-powershell#quick-reference)\n", + "\n", + "## Using the command line\n", + "\n", + "### Powershell and Bash common commands\n", + "\n", + "* To go to your $HOME folder:\n", + " \n", + "```\n", + "cd ~\n", + "\n", + "or\n", + "\n", + "cd $HOME\n", + "```\n", + "\n", + "* To open explorer or finder for the current folder:\n", + "\n", + "```\n", + "windows explorer do:\n", + "\n", + " start .\n", + "\n", + "MacOs finder do:\n", + "\n", + " open .\n", + " \n", + "```\n", + " \n", + "* To move up one folder:\n", + "\n", + "```\n", + "cd ..\n", + "```\n", + "\n", + "* To save typing, remember that hitting the tab key completes filenames\n", + "\n", + "### To configure powershell on windows\n", + "\n", + "* first start a powershell terminal with admin privileges, then type:\n", + "\n", + " `set-executionpolicy remotesigned`\n", + " \n", + "* then, in your miniconda3 powershel profile, do:\n", + "\n", + " `Test-Path $profile`\n", + " \n", + " to see whether you have an existing profile.\n", + " \n", + "* if you don't have a profile, then do the following (this will overwrite an existing profile, so be aware):\n", + "\n", + " `New-Item –Path $Profile –Type File –Force`\n", + " \n", + "* To add to your profile, open with:\n", + "\n", + " `start $profile`\n", + " \n", + "### To configure bash or zsh on MacOS\n", + "\n", + "* open a terminal then type either\n", + "\n", + " `open .bash_profile`\n", + " \n", + " or for Catalina\n", + " \n", + " `open .zshenv`\n", + " \n", + " \n", + "\n", + "### Bash and powershell command reference\n", + "\n", + "| Cmdlet | Alias | Bash Equivalent | Description |\n", + "| ------- | ------- | ------- | ------- |\n", + "| `Get-ChildItem` | `gci` | `ls` | List the directories and files in the current location. | \n", + "| `Set-Location` | `sl` | `cd` | Change to the directory at the given path. Typing `..` rather than a path will move up one directory. |\n", + "| `Push-Location` | `pushd` | `pushd` | Changes to the directory. |\n", + "| `Pop-Location` | `popd` | `popd` | Changes back to the previous directory after using `pushd` |\n", + "| `New-Item` | `ni` | (`touch`) | Creates a new item. Used with no parameter, the item is by default a file. Using `mkdir` is a shortcut for including the parameter `-ItemType dir`. |\n", + "| `mkdir` | none | `mkdir` | Creates a new directory. (See `New-Item`.) |\n", + "| `Explorer` | `start .` | `open .`) | Open something using File Explorer (the GUI) |\n", + "| `Remove-Item` | `rm` | `rm` | Deletes something. Permanently! |\n", + "| `Move-Item` | `mv` | `mv` | Moves something. Takes two arguments - first a filename (i.e. its present path), then a path for its new location (including the name it should have there). 
By not changing the path, it can be used to rename files. |\n", + "| `Copy-Item` | `cp` | `cp` | Copies a file to a new location. Takes same arguments as move, but keeps the original file in its location. |\n", + "| `Write-Output` | `write` | `echo` | Outputs whatever you type. Use redirection to output to a file. Redirection with `>>` will add to the file, rather than overwriting contents. |\n", + "| `Get-Content` | `gc` | `cat` | Gets the contents of a file and prints it to the screen. Adding the parameter `-TotalCount` followed by a number x prints only the first x lines. Adding the parameter `-Tail` followed by a number x prints only the final x lines. |\n", + "| `Select-String` | `sls` | (`grep`) | Searches for specific content. |\n", + "| `Measure-Object` | `measure` | (`wc`) | Gets statistical information about an object. Use `Get-Content` and pipe the output to `Measure-Object` with the parameters `-line`, `-word`, and `-character` to get word count information. |\n", + "| `>` | none | `>` |Redirection. Puts the output of the command to the left of `>` into a file to the right of `>`. |\n", + "| `\\|` | none | `\\|` |Piping. Takes the output of the command to the left and uses it as the input for the command to the right. |\n", + "| `Get-Help` | none | `man` | Gets the help file for a cmdlet. Adding the parameter `-online` opens the help page on TechNet. |\n", + "| `exit` | none | `exit` | Exits PowerShell |\n", + "\n", + "Remember the keyboard shortcuts of `tab` for auto-completion and the up and down arrows to scroll through recent commands. These shortcuts can save a lot of typing!\n", + "\n", + "## Git\n", + "\n", + "- A good place to go to learn git fundamentals is this lesson\n", + " \n", + "\n", + "## Pulling changes from the github repository\n", + "\n", + "When we commit changes to the master branch and push to our github\n", + "repository, you'll need to fetch those changes to keep current. To do\n", + "that:\n", + "\n", + "1) go to your fork of the repository on github. You should see a statement like \"This branch is 1 commit ahead, 2 commits behind rwhite:main\".\n", + "\n", + "2) click 'Fetch upstream' beside that statement and \"Fetch and merge\". Now the statement should be something like \"This branch is 2 commits ahead of rhwhite:main\"\n", + "\n", + "3) pull the changes into your own computer space. Make sure you aren't going to clobber any of your own files:\n", + " \n", + " git status\n", + " \n", + " you can ignore \"untracked files\", but pay attention to any files\n", + " labeled \"modified\". Those will be overwritten when you reset to our\n", + " commit, so copy them to a new name or folder.\n", + "\n", + "5) Get the new commit with\n", + " \n", + " git pull\n", + " \n", + "# Books and tutorials\n", + "\n", + " - We will be referring to Phil Austin's version of David Pine's Introduction to Python:\n", + " http://phaustin.github.io/pyman. 
The notebooks for each chapter are included\n", + " in the [numeric_students/pyman](https://github.com/phaustin/numeric_students/tree/downloads/pyman) folder.\n", + " - If you are new to python, I would recommend you also go over the\n", + " following short ebook in detail:\n", + " - Jake Vanderplas' [Whirlwind tour of\n", + " Python](https://github.com/jakevdp/WhirlwindTourOfPython/blob/f40b435dea823ad5f094d48d158cc8b8f282e9d5/Index.ipynb)\n", + " is available both as a set of notebooks which you can clone from\n", + " github or as a free ebook:\n", + " \n", + " - to get the notebooks do:\n", + " \n", + " git clone \n", + " - We will be referencing chapters from:\n", + " - A Springer ebook from the UBC library: [Numerical\n", + " Python](https://login.ezproxy.library.ubc.ca/login?qurl=https%3a%2f%2flink.springer.com%2fopenurl%3fgenre%3dbook%26isbn%3d978-1-4842-0554-9)\n", + " - with code on github:\n", + " \n", + " git clone\n", + " \n", + " - Two other texts that are available as a set of notebooks you can\n", + " clone with git:\n", + " - \n", + " - \n", + " - My favorite O'Reilly book is:\n", + " - [Python for Data\n", + " Analysis](http://shop.oreilly.com/product/0636920023784.do)\n", + " - Some other resources:\n", + " - If you know Matlab, there is [Numpy for Maltab\n", + " users](http://wiki.scipy.org/NumPy_for_Matlab_Users)\n", + " - Here is a [python\n", + " translation](http://nbviewer.jupyter.org/gist/phaustin/1af744215e51562d010b9f6a19c0724c)\n", + " by [Don\n", + " MacMillen](http://blogs.siam.org/from-matlab-guide-to-ipython-notebook/)\n", + " of [Chapter 1 of his matlab\n", + " guide](http://clouds.eos.ubc.ca/~phil/courses/atsc301/downloads_pw/matlab_guide_2nd.pdf)\n", + " - [Numpy beginners\n", + " guide](http://www.packtpub.com/numpy-mathematical-2e-beginners-guide/book)\n", + " - [Learning\n", + " Ipython](http://www.packtpub.com/learning-ipython-for-interactive-computing-and-data-visualization/book)\n", + " - [The official Python\n", + " tutorial](http://docs.python.org/tut/tut.html)\n", + " - [Numpy\n", + " cookbook](http://www.packtpub.com/numpy-for-python-cookbook/book)\n", + " - A general computing introduction: [How to think like a computer\n", + " scientist](http://www.openbookproject.net/thinkcs/python/english3e)\n", + " with an [interactive\n", + " version](http://interactivepython.org/courselib/static/thinkcspy/index.html)\n", + " - [Think Stats](http://greenteapress.com/wp/think-stats-2e/)\n", + " - [Think Bayes](http://greenteapress.com/wp/think-bayes/)" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "-all", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.1" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 2 +} diff --git a/getting_started/vscode.html b/getting_started/vscode.html new file mode 100644 index 0000000..bc5dcfd --- /dev/null 
+++ b/getting_started/vscode.html @@ -0,0 +1,104 @@ + + + + + + + + VScode notes — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

VScode notes

+
+

Install vscode from https://code.visualstudio.com/download

+

Install the command line version as well. On windows, this should be done as part of the install. On MacOS, you need to open the vscode command pallette (⌘shift-P) and type

+
Shell Command: Install code in PATH
+
+
+

once that is done, you should be able to start vscode from the command line in a particular folder by typing:

+
code .
+
+
+
+
+

Suggested Extensions

+

On the left you will see four boxes, one moved up. Here you can add extensions. You will need some just to run notebooks etc. We suggest: - Python - Pylance (this one installed with Python for me) - Jupyter (again came with Python for me) - C/C++ - Clipboard - Code Spell Checker - Gitlens

+
+
+

Notebooks

+

In class I will demonstrate using VSCode with notebooks and with python modules. When you open either, you will be asked to choose your kernel (numeric_2022) and an interpreter (the python associated with numeric_2022).

+

The notebooks are not VSCode ready and you will see non-rendered pieces. Technology changes and we are always behind.

+

I will show you some of the strengths of VSCode for editing notebooks focusing on its real editor powers: spellchecking and multiple corrections

+
+
+

Python Modules

+

I will show you in class some of the super features of editing in VScode including: - code colouring - built in information on functions - click on variable, see everywhere it is used - checks alignment (whitespace) - marks changes you’ve made - typo in variable leading to undefined - undefined function: colour changes to white - making a change, then using the git integration to save, stage and commit

+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/getting_started/vscode.ipynb b/getting_started/vscode.ipynb new file mode 100644 index 0000000..31e1666 --- /dev/null +++ b/getting_started/vscode.ipynb @@ -0,0 +1,104 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# VScode notes #\n", + "\n", + "## Install vscode from https://code.visualstudio.com/download\n", + "\n", + "Install the command line version as well. On windows, this should be done as part of the install. On MacOS, you need to open the vscode command pallette (⌘shift-P) and type\n", + "```\n", + "Shell Command: Install code in PATH\n", + "```\n", + "\n", + "once that is done, you should be able to start vscode from the command line in a particular\n", + "folder by typing:\n", + "\n", + "```\n", + "code .\n", + "```\n", + "\n", + "## Suggested Extensions ##\n", + "\n", + "On the left you will see four boxes, one moved up. Here you can add extensions. You will need some just to run notebooks etc. We suggest:\n", + "- Python\n", + "- Pylance (this one installed with Python for me)\n", + "- Jupyter (again came with Python for me)\n", + "- C/C++\n", + "- Clipboard\n", + "- Code Spell Checker\n", + "- Gitlens\n", + "\n", + "## Notebooks ##\n", + "\n", + "In class I will demonstrate using VSCode with notebooks and with python modules. When you open either, you will be asked to choose your kernel (numeric_2022) and an interpreter (the python associated with numeric_2022).\n", + "\n", + "The notebooks are not VSCode ready and you will see non-rendered pieces. Technology changes and we are always behind.\n", + "\n", + "I will show you some of the strengths of VSCode for editing notebooks focusing on its real editor powers: spellchecking and multiple corrections\n", + "\n", + "## Python Modules ##\n", + "\n", + "I will show you in class some of the super features of editing in VScode including:\n", + "- code colouring\n", + "- built in information on functions\n", + "- click on variable, see everywhere it is used\n", + "- checks alignment (whitespace)\n", + "- marks changes you've made\n", + "- typo in variable leading to undefined\n", + "- undefined function: colour changes to white\n", + "- making a change, then using the git integration to save, stage and commit\n" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs", + "text_representation": { + "extension": ".py", + "format_name": "percent", + "format_version": "1.3", + "jupytext_version": "1.3.2" + } + }, + "kernelspec": { + "display_name": "Python 3.7.6 64-bit ('numeric': conda)", + "language": "python", + "name": "python37664bitnumericcondabd5c031d404d4597ae8310d0bb6bf5f0" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.7.6" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/grad_schedule.html b/grad_schedule.html new file mode 100644 index 
0000000..469106f --- /dev/null +++ b/grad_schedule.html @@ -0,0 +1,134 @@ + + + + + + + + Dates for Graduate Class (EOSC 511) — Numeric course 22.1 documentation + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Dates for Graduate Class (EOSC 511)

+
+

January

+
    +
  • Monday Jan 8 : First Class

  • +
  • Jan 15, 2 pm: Lab 1 Reading and Objectives Quiz

  • +
  • Monday Jan 15: Second Class

  • +
  • Jan 19, 6 pm: Lab 1 Assignment

  • +
  • January 19: Last date for withdrawal from class without a “W” standing

  • +
  • Jan 22, 2 pm: Lab 2 Reading and Objectives Quiz

  • +
  • Monday Jan 22: Third Class

  • +
  • Jan 26, 6 pm: Lab 2 Assignment

  • +
  • Jan 29, 2 pm: Lab 3 Reading and Objectives Quiz

  • +
  • Monday Jan 29: Fourth Class

  • +
+
+
+

February

+
    +
  • Feb 2, 6 pm: Lab 3 Assignment

  • +
  • Feb 5, 2 pm: Lab 4 Reading and Objectives Quiz

  • +
  • Monday, Feb 5: Fifth Class

  • +
  • Feb 9, 6 pm: Lab 4 Assignment

  • +
  • Monday Feb 12: Sixth Class

  • +
  • Feb 16, 6 pm: Miniproject

  • +
  • Feb 19-23: Reading Week, No class

  • +
  • Feb 26, 2 pm: Lab 5a Reading and Objectives Quiz

  • +
  • Monday Feb 26: Seventh Class

  • +
+
+
+

March

+
    +
  • Mar 1, 6 pm: Lab 5a Assignment

  • +
  • Mar 1, 6 pm: Teams for Projects (a list of names due)

  • +
  • March 1: Last date to withdraw from course with ‘W’

  • +
  • Mar 4, 2 pm: Lab 7a Reading and Objectives Quiz

  • +
  • Monday Mar 4: Eighth Class

  • +
  • Mar 8, 6 pm: Lab 7a Assignment

  • +
  • Monday Mar 11: Ninth Class

  • +
  • Mar 15, 6 pm: Project Proposal

  • +
  • Mar 15, 6 pm: First iPeer evaluation

  • +
  • Mar 18, 2 pm: Optional Labs Reading and Objectives Quiz

  • +
  • Monday Mar 18: Tenth Class

  • +
  • Mar 22, 6 pm: Optional Labs Assignment

  • +
  • Monday Mar 25: Eleventh Class: Project Proposal Presentations

  • +
+
+
+

April

+
    +
  • Monday, Apr 8: Last Class

  • +
  • Apr 8, in class, Project Presentation (for teams that did not +present in March)

  • +
  • Apr 8, 6 pm, Second iPeer evaluation

  • +
  • Apr 12, 6 pm: Project

  • +
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/gradsyllabus.html b/gradsyllabus.html new file mode 100644 index 0000000..49886ee --- /dev/null +++ b/gradsyllabus.html @@ -0,0 +1,311 @@ + + + + + + + + Graduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: EOSC 511 / ATSC 506 — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Graduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: EOSC 511 / ATSC 506

+
+

Course Purpose

+

The students completing this course will be able to apply standard +numerical solution techniques to problems in Oceanographic, Atmospheric +and Earth Science.

+
+
+

Meeting Times

+

See canvas course page for scheduled class times and location

+
+
+

Instructors

+
+
Rachel White, rwhite@eoas.ubc.ca
+
Susan Allen, sallen@eoas.ubc.ca
+
+

See canvas course page for office hour locations

+
+
+

Prerequisites

+

The course assumes a mathematics background including vector calculus and linear algebra. Students weak in either of these areas will be directed to readings to strengthen their knowledge. Programming experience is strongly recommended.

+
+
+

Course Structure

+

This course is not lecture based; it is an interactive, computer-based laboratory course. The computer will lead you through the laboratory (like a set of lab notes) and you will answer problems, most of which use the computer. The course consists of three parts: a set of required interactive, computer-based laboratory exercises, a choice of elective laboratory exercises, and a project. The project will be a group project determined through consultation between the instructors, the students and their supervisors.

+

During the meeting times, there will be group worksheets to delve into the material, brief presentations to help with technical matters, time to ask questions in a group format and also individually, and time to read and work on the laboratories.

+

You can use a web-browser to examine the course exercises. Point your +browser to:

+

https://rhwhite.github.io/numeric_2024/notebook_toc.html

+
+
+

Grades

+
+
    +
  • Laboratory Exercises 15% (individual with collaboration, satisfactory/unsatisfactory grading)

  • +
  • Quizzes 5% (individual)

  • +
  • Worksheets 5% (group)

  • +
  • Mini-project 15% (individual with collaboration)

  • +
  • Project Proposal 10% (group)

  • +
  • Project 40% (group)

  • +
  • Project Oral Presentation 10% (group)

  • +
+
+

There will be 7 assigned exercise sets or ‘Laboratory Exercises’ based on the labs. Note that these are not necessarily the same as the problems in the lab and will generally be a much smaller set. Laboratory exercises can be worked on with partners or alone. Each student must upload their own solution in their own words.

+

The laboratory exercise sets are to be uploaded to the course CANVAS page. Sometimes, rather than a large series of plots, you may wish to include a summarizing table. If you do not understand the scope of a problem, please ask. Help with the labs is available 1) through Piazza (see CANVAS), so you can contact your classmates and ask them, 2) during the weekly scheduled lab, or 3) directly from the instructors during the scheduled office hours (see canvas).

+

Laboratory exercises will be graded as ‘excellent’, ‘satisfactory’ or ‘unsatisfactory’. +Your grade on canvas will be given as:

+

1.0 = excellent

+

0.8 = satisfactory

+

0 = unsatisfactory

+

Grades will be returned within a week of the submission deadline. +If you receive a grade of ‘satisfactory’ or ‘unsatisfactory’ on your first submission, +you will be given an opportunity to resubmit the problems you got incorrect to try to +improve your grade. To get a score of ‘excellent’ on a resubmission, you must include +a full explanation of your understanding of why your initial answer was incorrect, and +what misconception, or mistake, you have corrected to get to your new answer. Resubmissions +will be due exactly 2 weeks after the original submission deadline. It is your responsibility +to manage the timing of the resubmission deadlines with the next laboratory exercise.

+

Your final Laboratory Exercise grade will be calculated from the number of excellent, satisfactory and unsatisfactory grades you have from the 7 exercises:

  • 5 or more submissions at ‘Excellent’, none ‘Unsatisfactory’: 100%

  • 3 or more submissions at ‘Excellent’, none ‘Unsatisfactory’: 90%

  • 1 or fewer ‘Excellent’, none ‘Unsatisfactory’: 80%

  • 1 ‘Unsatisfactory’ submission: 70%

  • 2 ‘Unsatisfactory’ submissions: 60%

  • 3 ‘Unsatisfactory’ submissions: 50%

  • 4 or more ‘Unsatisfactory’ submissions: 0%

[1]

+

Quizzes are done online, reflect the learning objectives of each lab +and are assigned to ensure you do the reading with enough depth to +participate fully in the class worksheets and have the background to +do the Laboratory Exercises. There will be a “grace space” policy +allowing you to miss one quiz.

+

The in-class worksheets will be marked for a complete effort. There will be a “grace space” policy allowing you to miss one class worksheet. The grace space policy is to accommodate missed classes due to illness, “away games” for athletes, etc. In-class paper worksheets are done as a group and are to be handed in (one worksheet only per group) at the end of the worksheet time.

+

The project will be done in groups of three to four. The project topic is to be chosen in consultation with your research supervisors and the instructors. The subject of these projects has to be ocean or atmosphere related unless the group has identified an outside supervisor who is willing to provide subject specific advice. Students without ocean/atmosphere expertise can join an ocean/atmospheric sciences group - it will be up to the group to figure out where and how they can best contribute to the project.

+

Assignments, quizzes, mini-projects and the project are expected on +time. Late mini-projects and projects will be marked and then the mark will be multiplied by +\((0.9)^{\rm (number\ of\ days\ or\ part\ days\ late)}\).

+
+
+

Set Laboratories

+

Recommended timing. Problems to be handed in can be found on the +webpage.

+
    +
  • Laboratory One: One Week

  • +
  • Laboratory Two: One Week

  • +
  • Laboratory Three: One Week

  • +
  • Laboratory Four: One and a Half Weeks

  • +
  • Laboratory Five: Half a Week

  • +
  • Laboratory Seven: One Week

  • +
+
+
+

Elective Laboratories

+

Choose either the one large lab (10 points) or two small labs (5 points each). Time scale: one and a half weeks.

+
+

ODE’s

+
    +
  • Rest of Lab 5 (5 points)

  • +
  • Lab 6 (5 points)

  • +
+
+
+

PDE’s

+
    +
  • End of Lab 7 (5 points)

  • +
  • Lab 8 (10 points)

  • +
  • Lab 10 (5 points)

  • +
+
+
+

FFT’s

+
    +
  • Lab 9 (5 points)

  • +
+
+
+
+

Project

+
    +
  • Done in groups of three or four. Chosen in consultation with your research supervisors and the +instructors. Should be chosen before the elective labs.

  • +
  • Time scale: three and a half weeks.

  • +
+
+
+

University Statement on Values and Policies

+

UBC provides resources to support student learning and to maintain healthy lifestyles but recognizes that sometimes crises arise and so there are additional resources to access including those for survivors of sexual violence. UBC values respect for the person and ideas of all members of the academic community. Harassment and discrimination are not tolerated nor is suppression of academic freedom. UBC provides appropriate accommodation for students with disabilities and for religious and cultural observances. UBC values academic honesty and students are expected to acknowledge the ideas generated by others and to uphold the highest academic standards in all of their actions. Details of the policies and how to access support are available here

+

https://senate.ubc.ca/policies-resources-support-student-success.

+
+
+

Supporting Diversity and Inclusion

+

Atmospheric Science, Oceanography and the Earth Sciences have been historically dominated by a small subset of privileged people who are predominantly male and white, missing out on many influential individuals’ thoughts and experiences. In this course, we would like to create an environment that supports a diversity of thoughts, perspectives and experiences, and honours your identities. To help accomplish this:

+
+
    +
  • Please let us know your preferred name and/or set of pronouns.

  • +
  • If you feel like your performance in our class is impacted by your experiences outside of class, please don’t hesitate to come and talk with us. We want to be a resource for you and to help you succeed.

  • +
  • If an approach in class does not work well for you, please talk to any of the teaching team and we will do our best to make adjustments. Your suggestions are encouraged and appreciated.

  • +
  • We are all still learning about diverse perspectives and identities. If something was said in class (by anyone) that made you feel uncomfortable, please talk to us about it.

  • +
+
+
+
+

Academic Integrity

+

Students are expected to learn material with honesty, integrity, and responsibility.

+
+
    +
  • Honesty means you should not take credit for the work of others, +and if you work with others you are careful to give them the credit they deserve.

  • +
  • Integrity means you follow the rules you are given and are respectful towards others +and their attempts to do so as well.

  • +
  • Responsibility means that if you are unclear about the rules in a specific case you should contact the instructor for guidance.

  • +
+
+

The course will involve a mixture of individual and group work. We try to be flexible about this as our priority is for you to learn the material rather than blindly follow rules, but there are rules. Plagiarism (i.e. copying of others’ work) and cheating (not following the rules) can result in penalties ranging from zero on an assignment to failing the course.

+

For due dates etc, please see the Detailed Schedule.

+
+
+

Not feeling well before class?

+

What to do if you’re sick: If you’re sick, it’s important that you stay home, no matter what you think you may be sick with (e.g., cold, flu, other). If you do miss class because of illness:

  • Make a connection early in the term to another student or a group of students in the class. You can help each other by sharing notes. If you don’t yet know anyone in the class, post on Piazza to connect with other students.

  • Consult the class resources on this website and on canvas. We will post the materials for each class day.

  • In this class, the marking scheme is intended to provide flexibility so that you can prioritize your health and still be able to succeed. As such, there is a “grace space” policy allowing you to miss one in-class worksheet and one pre-class quiz with no penalty.

  • If you are concerned that you will miss a particular key activity due to illness, contact us to discuss.

+

If an instructor is sick: we will do our best to stay well, but if either of us is ill, here is what you can expect:

  • The other instructor will substitute.

  • Your TA may help run a class.

  • We may have a synchronous online session or two. If this happens, you will receive an email.

+ +
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/index.html b/index.html new file mode 100644 index 0000000..5532511 --- /dev/null +++ b/index.html @@ -0,0 +1,91 @@ + + + + + + + + Numerical Techniques for Atmosphere, Ocean and Earth Scientists — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Numerical Techniques for Atmosphere, Ocean and Earth Scientists

+

Welcome! Start by reading the appropriate syllabus below. The “Getting Started” page will take you through the setup you will need to participate in this course - we will help you go through this in the first class.

+ +
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebook_toc.html b/notebook_toc.html new file mode 100644 index 0000000..8bf4b3d --- /dev/null +++ b/notebook_toc.html @@ -0,0 +1,93 @@ + + + + + + + + Numeric notebooks — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + + + + + + + + + \ No newline at end of file diff --git a/notebooks/lab1/01-lab1.html b/notebooks/lab1/01-lab1.html new file mode 100644 index 0000000..d103e96 --- /dev/null +++ b/notebooks/lab1/01-lab1.html @@ -0,0 +1,918 @@ + + + + + + + + Laboratory 1: An Introduction to the Numerical Solution of Differential Equations: Discretization (Jan 2024) — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Laboratory 1: An Introduction to the Numerical Solution of Differential Equations: Discretization (Jan 2024)

+

John M. Stockie

+
+

List of Problems

+ +
+
+

Objectives

+

The examples and exercises in this lab are meant to illustrate the limitations of analytical solution techniques, using several differential equation models for simple physical systems. This is the prime motivation for the use of numerical methods.

+

After completing this lab, you will understand the process of discretizing a continuous problem, and be able to derive a simple finite difference approximation for an ordinary or partial differential equation. The examples will also introduce the concepts of accuracy and stability, which will be discussed further in Lab 2.

+

Specifically you will be able to:

+
    +
  • Define the term or identify: Ordinary Differential Equation, Partial Differential Equation, Linear equation, Non-linear equation, Initial value problem, Boundary value problem, Open Domain, and Closed Domain.

  • +
  • Define the term, identify or perform: Forward difference discretization, Backward difference discretization, and Centre difference discretization.

  • +
  • Define the term: Interpolation, Convergence, and Instability.

  • +
  • Define the term or perform: Linear interpolation.

  • +
+
+
+

Readings

+

There is no required reading for this lab, beyond the contents of the lab itself. However, if you would like additional background on any of the following topics, then refer to the sections indicated below:

+
    +
  • Differential Equations:

    +
      +
    •  Strang (1986), Chapter 6 (ODE’s).

    • +
    •  Boyce and DiPrima (1986) (ODE’s and PDE’s).

    • +
    +
  • +
  • Numerical Methods:

    +
      +
    •  Strang (1986), Section 5.1.

    • +
    •  Garcia (1994), Sections 1.4–1.5, Chapter 2 (a basic introduction to numerical methods for problems in physics).

    • +
    •  Boyce and DiPrima (1986), Sections 8.1–8.5, 8.7, 8.8.

    • +
    +
  • +
+
+
+

Running Code Cells

+

The next cell in this notebook is a code cell. Run it by selecting it and hitting ctrl enter, or by selecting it and hitting the run button (arrow to right) in the notebook controls.

+
+
[ ]:
+
+
+
from IPython.display import Image
+# import plotting package and numerical python package for use in examples later
+import matplotlib.pyplot as plt
+# import the numpy array handling library
+import numpy as np
+# import the quiz script
+import context
+from numlabs.lab1 import quiz1 as quiz
+
+
+
+
+
+

Introduction: Why bother with numerical methods?

+

In introductory courses in ordinary and partial differential equations (ODE’s and PDE’s), many analytical techniques are introduced for deriving solutions. These include the methods of undetermined coefficients, variation of parameters, power series, Laplace transforms, separation of variables, Fourier series, and phase plane analysis, to name a few. When there are so many analytical tools available, one is led to ask:

+
+

Why bother with numerical methods at all?

+
+

The fact is that the class of problems that can be solved analytically is very small. Most differential equations that model physical processes cannot be solved explicitly, and the only recourse available is to use a numerical procedure to obtain an approximate solution of the problem.

+

Furthermore, even if the equation can be integrated to obtain a closed form expression for the solution, it may sometimes be much easier to approximate the solution numerically than to evaluate it analytically.

+

In the following two sections, we introduce two classical physical models, seen in most courses in differential equations. Analytical solutions are given for these models, but then seemingly minor modifications are made which make it difficult (if not impossible) to calculate actual solution values using analytical techniques. The obvious alternative is to use numerical methods.

+
+

Ordinary Differential Equations

+

In order to demonstrate the usefulness of numerical methods, let’s start by looking at an example of a first-order initial value problem (or IVP). In their most general form, these equations look like

+

(Model ODE)

+
+\[\begin{split}\begin{array}{c} {\displaystyle \frac{dy}{dt} = f(y,t),} \\ \; \\ y(0) = y_0, \end{array}\end{split}\]
+

where

+
    +
  • \(t\) is the independent variable (in many physical systems, which change in time, \(t\) represents time);

  • +
  • \(y(t)\) is the unknown quantity (or dependent variable) that we want to solve for;

  • +
  • \(f(y,t)\) is a known function that can depend on both \(y\) and \(t\); and

  • +
  • \(y_0\) is called the initial value or initial condition, since it provides a value for the solution at an initial time, \(t=0\) (the initial value is required so that the problem has a unique solution).

  • +
+

This problem involves the first derivative of the solution, and also provides an initial value for \(y\), and hence the name “first-order initial value problem”.

+

Under certain very general conditions on the right hand side function \(f\), we know that there will be a unique solution to the problem (Model ODE). However, only in very special cases can we actually write down a closed-form expression for the solution.

+

In the remainder of this section, we will leave the general equation, and investigate a specific example related to heat conduction. It will become clear that it is the problems which do not have exact solutions which are the most interesting or meaningful from a physical standpoint.

+
+

Example One

+
+

Consider a small rock, surrounded by air or water, which gains or loses heat only by conduction with its surroundings (there are no radiation effects). If the rock is small enough, then we can ignore the effects of diffusion of heat within the rock, and consider only the flow of heat through its surface, where the rock interacts with the surrounding medium.

+
+
+

It is well known from experimental observations that the rate at which the temperature of the rock changes is proportional to the difference between the rock’s surface temperature, \(T(t),\) and the ambient temperature, \(T_a\) (the ambient temperature is simply the temperature of the surrounding material, be it air, water, …). This relationship is expressed by the following ordinary differential equation

+

(Conduction 1d)

+
+\[\begin{split}\underbrace{\frac{dT}{dt}}_{\begin{array}{c} \mbox{rate of change}\\ \mbox{of temperature} \end{array}} = -\lambda \underbrace{(T-T_a)}_{\begin{array}{c} \mbox{temperature}\\ \mbox{difference} \end{array}} .\end{split}\]
+

and is commonly known as Newton’s Law of Cooling. (The parameter \(\lambda\) is defined to be \(\lambda = \mu A/cM\), where \(A\) is the surface area of the rock, \(M\) is its mass, \(\mu\) its thermal conductivity, and \(c\) its specific heat.)

+
+
+
+

Quiz on Newton’s Law of Cooling

+

\(\lambda\) is positive. True or False?

+

In the following, replace ‘xxxx’ by ‘True’, ‘False’, ‘Hint 1’ or ‘Hint 2’ and run the cell (how to)

+
+
[ ]:
+
+
+
print (quiz.conduction_quiz(answer = 'xxxx'))
+
+
+
+

If we assume that \(\lambda\) is a constant, then the solution to this equation is given by

+

(Conduction solution)

+
+\[T(t) = T_a + (T(0)-T_a)e^{-\lambda t},\]
+

where \(T(0)\) is the initial temperature.

+

Mathematical Note: Details of the solution can be found in the Appendix

+

In order to obtain a realistic value of the parameter \(\lambda\), let our “small” rock be composed of granite, with a mass of \(1\;gram\), which corresponds to \(\lambda \approx 10^{-5}\;sec^{-1}\).

+

Sample solution curves are given in Figure Conduction.

+
+
[ ]:
+
+
+
Image(filename='conduction/conduction.png',width='60%')
+
+
+
+

Figure Conduction: Plot of solution curves \(T(t)\) for \(T_0=-10,15,20,30\); parameter values: \(\lambda=10^{-5}\), \(T_a=20\).

+
+
+
+
+

Demo: Conduction

+

Here is an interactive example that investigates the behaviour of the solution.

+

First we import the function that does the calculation and plotting. You need to run this cell (how to) to load it. Loading it does not run the function.

+
+
[ ]:
+
+
+
from numlabs.lab1 import temperature_conduction as tc
+
+
+
+

You need to call the function. The simplest call is in the next cell.

+
+
[ ]:
+
+
+
# simple call to temperature demo
+tc.temperature()
+
+
+
+

After running it as is, try changing To = \(T_o\) (the initial temperature), Ta = \(T_a\) (the ambient temperature) or la = \(\lambda\) (the effective conductivity) to investigate changes in the solution.

+
+
[ ]:
+
+
+
# setting different values
+# (note this uses the defaults again as written, you should change the values)
+tc.temperature(Ta = 20, To = np.array([-10., 10., 20., 30.]), la = 0.00001)
+
+
+
+
+

Example Two

+

Suppose that the rock in the previous example has a \(\lambda\) which is not constant. For example, if the rock is made of a material whose specific heat varies with the temperature or time, then \(\lambda\) can be a function of \(T\) or \(t\). This might happen if the material composing the rock undergoes a phase transition at a certain critical temperature (for example, a melting ice pellet). The problem is now a non-linear one, for which analytical techniques may or may not provide a solution.

+

If \(\lambda=\lambda(T)\), a function of temperature only, then the exact solution may be written as

+
+\[T(t) = T_a + (T(0)-T_a)\exp{\left[-\int^{t}_{0} \lambda(T(s))ds \right]},\]
+

which involves an integral that may or may not be evaluated analytically, in which case we can only approximate the integral. Furthermore, if \(\lambda\) is a function of both \(T\) and \(t\) which is not separable (cannot be written as a product of a function of \(T\) and \(t\)), then we may not be able to write down a closed form for the solution at all, and we must resort to numerical methods to obtain a solution.

+

Even worse, suppose that we don’t know \(\lambda\) explicitly as a function of temperature, but rather only from experimental measurements of the rock (see Figure Table for an example).

i    Temperature (\(T_i\))    Measured \(\lambda_i\)
0    -5.0     2.92
1    -2.0     1.59
2     1.0     1.00
3     4.0     2.52
4     7.0     3.66
5    10.0     4.64

+
+
[ ]:
+
+
+
Image(filename="table/table-interp.png",width='60%')
+
+
+
+

Figure Table: A rock with \(\lambda\) known only at a sequence of discrete temperature values, from experimental measurements. The function \(\lambda(T)\) can be represented approximately using linear interpolation (and the resulting approximate function can then be used to solve the problem numerically).

+

Then there is no way to express the rock’s temperature as an explicit function of time, and analytical methods fail us, since we do not know the values of \(\lambda\) at points between the given temperatures. One alternative is to approximate \(\lambda\) at intermediate points by joining successive points with straight lines (this is called linear interpolation), and then use the resulting function in a numerical scheme for computing the solution, as sketched below.
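As a concrete illustration (not part of the original lab), the short sketch below uses numpy’s linear interpolation to build an approximate \(\lambda(T)\) from the measured values in Figure Table; the temperature 2.5 passed in the final line is just an arbitrary point between two measurements.

```python
import numpy as np

# measured values from Figure Table
T_meas   = np.array([-5.0, -2.0, 1.0, 4.0, 7.0, 10.0])
lam_meas = np.array([2.92, 1.59, 1.00, 2.52, 3.66, 4.64])

def lam(T):
    """Approximate lambda(T) by linear interpolation between the measured points."""
    return np.interp(T, T_meas, lam_meas)

print(lam(2.5))   # a temperature between the tabulated values
```

The resulting function could then be used inside any of the numerical schemes introduced later in this lab.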

+

As the above example demonstrates, even for a simple ODE such as 1-d conduction, there are situations where analytical methods are inadequate.

+
+
+

Partial Differential Equations

+
+

Example Three

+

The rock in Example One was considered to be small enough that the effects of heat diffusion in the interior were negligible in comparison to the heat lost by conduction through its surface. In this example, consider a rock that is not small, and whose temperature changes are dominated by internal diffusion effects. Therefore, it is no longer possible to ignore the spatial dependence in the problem.

+

For simplicity, we will add spatial dependence in one direction only, which corresponds to a “one-dimensional rock”, or a thin rod. Assume that the rod is insulated along its sides, so that heat flows only along its length, and possibly out the ends (see Figure Rod).

+
+
[ ]:
+
+
+
Image(filename='conduction/rod.png',width='60%')
+
+
+
+

Figure Rod: A thin rod can be thought of as a model for a one-dimensional rock.

+

Consequently, the temperature varies only with position, \(x\), and time, \(t\), and can be written as a function \(u(x,t)\). The temperature in the rod is governed by the following PDE

+
+\[u_t = \alpha^2 u_{xx},\]
+

for which we have to provide an initial temperature

+
+\[u(x,0) = u_0(x),\]
+

and boundary values

+
+\[u(0,t)=u(1,t)=0,\]
+

where

+
    +
  • \(\alpha^2\) is the thermal diffusivity of the material,

  • +
  • \(u_0(x)\) is the initial temperature distribution in the rod, and

  • +
  • the boundary conditions indicate that the ends of the rod are held at constant temperature, which we’ve assumed is zero.

  • +
+

Thermal diffusivity is a quantity that depends only on the material from which the bar is made. It is defined by

+
+\[\alpha^2 = \frac{\kappa}{\rho c},\]
+

where \(\kappa\) is the thermal conductivity, \(\rho\) is the density, and \(c\) is the specific heat. A typical value of the thermal diffusivity for a granite bar is \(0.011\;cm^2/sec\), and \(0.0038\;cm^2/sec\) for a bar made of brick.

+

Using the method of separation of variables, we can look for a temperature function of the form \(u(x,t)=X(x) \cdot T(t)\), which leads to the infinite series solution

+
+\[u(x,t) = \sum_{n=1}^\infty b_n e^{-n^2\pi^2\alpha^2 t}\sin{(n\pi x)},\]
+

where the series coefficients are

+
+\[b_n = 2 \int_0^1 u_0(x) \sin{(n\pi x)} dx.\]
+

Mathematical Note: Details of the derivation can be found in any introductory text in PDE’s (for example, Boyce and DiPrima (1986) [p. 549]).

+

We do manage to obtain an explicit formula for the solution, which can be used to calculate actual values of the solution. However, there are two obvious reasons why this formula is not of much practical use:

+
    +
  1. The series involves an infinite number of terms (except for very special forms for the initial heat distribution … such as the one shown below). We might be able to truncate the series, since each term decreases exponentially in size, but it is not trivial to decide how many terms to choose in order to get an accurate answer and here we are already entering the realm of numerical approximation.

  2. +
  3. Each term in the series requires the evaluation of an integral. When these cannot be integrated analytically, we must find some way to approximate the integrals … numerical analysis rears its head once again!

  4. +
+

For most physical problems, an analytical expression cannot be obtained, and the exact formula is not of much use.

+

However, consider a very special case, when the initial temperature distribution is sinusoidal,

+
+\[u_0(x) = \sin(\pi x).\]
+

For this problem, the infinite series collapses into a single term

+
+\[u(x,t) = e^{-\pi^2\alpha^2t}\sin{\pi x}.\]
+

Sample solution curves are given in Figure 1d Diffusion.

+
+
[ ]:
+
+
+
Image(filename='diffusion/diffusion.png',width='60%')
+
+
+
+

Figure 1d-diffusion: Temperature vs. position curves at various times, for heat diffusion in a rod with sinusoidal initial temperature distribution and parameter value \(\alpha=0.2\).
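To make points 1 and 2 above concrete, here is a minimal sketch (not part of the original lab) that evaluates a truncated version of the series solution, approximating each coefficient \(b_n\) by a simple quadrature sum. It uses the sinusoidal initial condition and \(\alpha=0.2\) from the figure, but the same code works for any tabulated initial profile; nterms and the grid resolution are arbitrary choices.

```python
import numpy as np

alpha = 0.2                        # thermal diffusivity, as in Figure 1d-diffusion
x = np.linspace(0.0, 1.0, 101)     # quadrature grid on the rod
dx = x[1] - x[0]
u0 = np.sin(np.pi * x)             # the special sinusoidal initial condition

def u(xp, t, nterms=20):
    """Truncated series solution; each b_n approximated by a quadrature sum."""
    total = np.zeros_like(xp)
    for n in range(1, nterms + 1):
        bn = 2.0 * np.sum(u0 * np.sin(n * np.pi * x)) * dx
        total += bn * np.exp(-(n * np.pi * alpha) ** 2 * t) * np.sin(n * np.pi * xp)
    return total

# for this initial condition only the n=1 term survives, so this should be close to
# exp(-pi**2 * alpha**2 * t) * sin(pi * x) at, say, x = 0.5 and t = 1
print(u(np.array([0.5]), t=1.0))
```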

+
+
+
+

Movie: Diffusion

+

Here is a movie of the exact solution to the diffusion problem. Run the cell (how to), then run the video.

+
+
[ ]:
+
+
+
import IPython.display as display
+
+vid = display.YouTubeVideo("b4D2ktTtw7E", modestbranding=1, rel=0, width=800)
+display.display(vid)
+
+
+
+
+
+

Summary

+

This section is best summed up by the insightful comment of Strang (1986) [p. 587]:

+

Nature is nonlinear.

+

Most problems arising in physics (which are non-linear) cannot be solved analytically, or result in expressions that have little practical value, and we must turn to numerical solution techniques.

+
+
+
+

Discretization

+

When computing analytical solutions to differential equations, we are dealing with continuous functions; i.e. functions that depend continuously on the independent variables. A computer, however, has only finite storage capacity, and hence there is no way to represent continuous data, except approximately as a sequence of discrete values.

+
+

Example Four

+
+

We already saw an example of a discrete function in Example Two where the rate function \(\lambda\) depended on the temperature. If \(\lambda\) is not known by some empirical formula, then it can only be determined by experimental measurements at a discrete set of temperature values. In Figure Table, \(\lambda\) is given at a sequence of six temperature points (\((T_i, \lambda_i)\), for \(i = 0, 1, \dots, 5\)), and so is an example of a discrete function.

+
+
+

The process of interpolation, which was introduced in Example Two, will be considered in more detail next.

+
+
+
+

Example Five

+
+

Consider the two continuous functions

+
+\[f(x)=x^3-5x \;\; {\rm and} \;\; g(x)=x^{2/3} .\]
+

(In fact, \(g(x)\) was the function used to generate the values \(\lambda(T)\) in Example Two).

+
+
+

The representation of functions using mathematical notation or graphs is very convenient for mathematicians, where continuous functions make sense. However, a computer has a limited storage capacity, and so it can represent a function only at a finite number of discrete points \((x, y)\).

+

One question that arises immediately is: What do we do if we have to determine a value of the function which is not at one of the discrete points? The answer to this question is to use some form of interpolation – namely to use an approximation procedure to estimate values of the function at points between the known values.

+
+
+

For example, linear interpolation approximates the function at intermediate points using the straight line segment joining the two neighbouring discrete points. There are other types of interpolation schemes that are more complicated, a few of which are:

+
    +
  • quadratic interpolation: every two successive points are joined by a quadratic polynomial.

  • +
+
+
+
    +
  • cubic splines: each pair of points is joined by a cubic polynomial so that the function values and first derivatives match at each point.

  • +
  • Fourier series: instead of polynomials, uses a sum of \(\sin nx\) and \(\cos nx\) to approximate the function (Fourier series are useful in analysis, as well as spectral methods).

  • +
+
+
+
    +
  • Chebyshev polynomials: another type of polynomial approximation which is useful for spectral methods.

  • +
  • …many others …

  • +
+
+
+

For details on any of these interpolation schemes, see a numerical analysis text such as that by Burden and Faires (1981).

+
+

An application of linear interpolation to discrete versions of the functions \(f\) and \(g\) is shown in Figure f and g.

+
+
[ ]:
+
+
+
Image(filename='discrete/f.png', width='60%')
+
+
+
+
+
[ ]:
+
+
+
Image(filename='discrete/g.png', width='60%')
+
+
+
+
+

Figure f and g: The functions \(f\) and \(g\) are known only at discrete points. The function can be approximated at other values by linear interpolation, where straight line segments are used to join successive points.

+
+
+

Depending on the function, or the number and location of the points chosen, the approximation may be more or less accurate. In Figure f and g, it is not clear which function is approximated more accurately. In the graph of \(f(x)\), the error seems to be fairly small throughout. However, for the function \(g(x)\), the error is large near \(x=0\), and then very small elsewhere. This problem of accuracy of discrete approximations will come up again and again in this course.

+
+
+
+

Demo: Interpolation

+

Here is an interactive example demonstrating the use of interpolation (linear and cubic) in approximating functions.

+

The next cell imports a module containing two python functions that interpolate the two algebraic functions, f and g (Figure f and g). You need to run this cell (how to) to load them.

+
+
[ ]:
+
+
+
from numlabs.lab1 import interpolate as ip
+
+
+
+

Once you have loaded the module, you can call the interpolation routines as ip.interpol_f(pn) and ip.interpol_g(pn). pn is the number of points used in the interpolation. Watch what changing pn does to the solutions.

+
+
[ ]:
+
+
+
ip.interpol_f(6)
+
+
+
+
+
[ ]:
+
+
+
ip.interpol_g(6)
+
+
+
+
+

Interpolation Quiz

+

The accuracy of an approximation using linear or cubic interpolation improves as the number of points is increased. True or False?

+

In the following, replace ‘xxxx’ by ‘True’, ‘False’, or ‘Hint’

+
+
[ ]:
+
+
+
print (quiz.interpolation_quiz(answer = 'xxx'))
+
+
+
+

When solving differential equations numerically, it is essential to reduce the continuous problem to a discrete one. The basic idea is to look for an approximate solution, which is defined at a finite number of discrete points. This set of points is called a grid. Consider the one-dimensional conduction problem of Example One, Conduction, which in its most general form reads

+

(Conduction Equation)

+
+\[\frac{dT}{dt} = -\lambda(T,t) \, (T-T_a),\]
+

with initial temperature \(T(0)\).

+

When we say we want to design a numerical procedure for solving this initial value problem, what we want is a procedure for constructing a sequence of approximations,

+
+\[T_0, \, T_1, \, \ldots, \, T_i, \, \ldots,\]
+

defined at a set of discrete \(t\)-points,

+
+\[t_0<t_1 < \cdots <t_i < \cdots\]
+

Each \(T_i\) is an approximation of the actual temperature at \(t_i\); that is

+
+\[T_i \approx T(t_i).\]
+

For now, we will consider equally-spaced points, each of which is separated by a distance \(\Delta t\), so that

+
+\[t_i=t_0+i \Delta t .\]
+

An example of such a grid is shown in Figure Grid.

+
+
[ ]:
+
+
+
Image(filename='discrete/grid.png', width='80%')
+
+
+
+

Figure Grid:

+

A grid of equally-spaced points, \(t_i=t_0+i\Delta t\), for \(i=0,1,2,\ldots\).
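As a tiny illustration (a direct translation of the formula above, with arbitrary values of \(t_0\), \(\Delta t\) and the number of points), such a grid can be built directly with numpy:

```python
import numpy as np

t0, dt, npts = 0.0, 0.1, 11        # illustrative choices
t = t0 + dt * np.arange(npts)      # t_i = t_0 + i*dt for i = 0, ..., npts-1
print(t)
```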

+

This process of reducing a continuous problem to one in a finite number of discrete unknowns is called discretization. The actual mechanics of discretizing differential equations are introduced in the following section.

+ +
+

Discretization Quiz

+

What phrase best describes “discretization”?

+

A The development and analysis of methods for the solution of mathematical problems on a computer.

+

B The process of replacing continuous functions by discrete values.

+

C Employing the discrete Fourier transform to analyze the stability of a numerical scheme.

+

D The method by which one can reduce an initial value problem to a set of discrete linear equations that can be solved on a computer.

+

In the following, replace ‘x’ by ‘A’, ‘B’, ‘C’, ‘D’ or ‘Hint’

+
+
[ ]:
+
+
+
print (quiz.discretization_quiz(answer = 'x'))
+
+
+
+
+
+

Summary

+

The basic idea in this section is that continuous functions can be approximated by discrete ones, through the process of discretization. In the course of looking at discrete approximations in the interactive example, we introduced the idea of the accuracy of an approximation, and showed that increasing the accuracy of an approximation is not straightforward.

+

We introduced the notation for approximate solutions to differential equations on a grid of points. The mechanics of discretization as they apply to differential equations, will be investigated further in the remainder of this Lab, as well as in Lab Two.

+
+ + +
+

Difference Approximations to the First Derivative

+

It only remains to write a discrete version of the differential equation (Conduction Equation) involving the approximations \(T_i\). The way we do this is to approximate the derivatives with finite differences. If this term is new to you, then you can think of it as just another name for a concept you have already seen in calculus. Remember the definition of the derivative of a function \(y(t)\), where \(y^\prime(t)\) is written as a limit of a divided difference:

+

(limit definition of derivative)

+
+\[y^\prime(t) = \lim_{\Delta t\rightarrow 0} \frac{y(t+\Delta t)-y(t)}{\Delta t}.\]
+

We can apply the same idea to approximate the derivative \(dT/dt=T^\prime\) in (Conduction Equation) by the forward difference formula, using the discrete approximations, \(T_i\):

+

(Forward Difference Formula)

+
+\[T^\prime(t_i) \approx \frac{T_{i+1}-T_i}{\Delta t}.\]
+
+

Example Six

+

In order to understand the ability of the formula (Forward Difference Formula) to approximate the derivative, let’s look at a specific example. Take the function \(y(x)=x^3-5x\), and apply the forward difference formula at the point \(x=1\). The function and its tangent line (the short line segment with slope \(y^\prime(1)\)) are displayed in Figure Tangents.

+
+
[ ]:
+
+
+
Image(filename='deriv/deriv.png', width='60%')
+
+
+
+
+

Figure Tangents: Plot of the function \(y=x^3-5x\) and the forward difference approximations to the derivative for various values of \(\Delta t\)

+
+
+

Each of the remaining line segments represents the forward difference approximation to the tangent line for different values of \(\Delta t\), which are simply the secant lines through the points \((t, y(t))\) and \((t+\Delta t, y(t+\Delta t))\). Notice that the approximation improves as \(\Delta t\) is reduced. This motivates the idea that grid refinement improves the accuracy of the discretization … but not always (as we will see in the coming sections).

+
+
+
+

Investigation

+

Investigate the use of the forward difference approximation of the derivative in the following interactive example.

+

The next cell loads a python function that plots a function f(x) and approximates its derivative at \(x=1\) based on a second x-point that you choose (xb). You need to run this cell (how to) to load it.

+
+
[ ]:
+
+
+
from numlabs.lab1 import derivative_approx as da
+
+
+
+

Once you have loaded the function you can call it as da.plot_secant(xb), where xb is the second point used to estimate the derivative (slope) at \(x=1\). You can compare the slope of the estimate (straight line) to the slope of the function (blue curve).

+
+
[ ]:
+
+
+
da.plot_secant(2.)
+
+
+
+
+
+

Forward Euler Method

+

We can now write down a discrete version of our model ODE problem (Conduction Equation) at any point \(t_i\) by

+
    +
  1. discretizing the derivative on the left hand side (for example, using the forward difference approximation (Forward Difference Formula);

  2. +
  3. evaluating the right hand side function at the discrete point \(t_i\).

  4. +
+

The discrete form of the problem is

+
+\[\frac{T_{i+1}-T_i}{\Delta t} = -\lambda(T_i,t_i) \, (T_i-T_a),\]
+

or, after rearranging,

+
+\[T_{i+1} = T_i - \Delta t \, \lambda(T_i,t_i) \, (T_i-T_a).\]
+

This formula is called the Forward Euler method (since it uses forward differences). Notice that this formula relates each discrete solution value to the solution at the preceding \(t\)-point. Consequently, if we are given an initial value \(T(0)\), then all subsequent values of the solution are easily computed.

+

(Note: The forward Euler formula for the more general first-order IVP in (Model ODE) is simply \(y_{i+1} = y_i + \Delta t f(y_i,t_i)\).)
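As a concrete illustration (not part of the original lab), here is a minimal Python sketch of the forward Euler iteration for the conduction problem, using \(\lambda = 10^{-5}\;sec^{-1}\) and \(T_a = 20\) from the earlier example; the step size dt, the number of steps and the initial temperature are arbitrary choices for the example.

```python
import numpy as np

lam, Ta = 1.0e-5, 20.0          # effective conductivity (1/sec) and ambient temperature
dt, nsteps = 600.0, 200         # a 10-minute step and 200 steps (illustrative choices)
T = np.empty(nsteps + 1)
T[0] = -10.0                    # an illustrative initial temperature

for i in range(nsteps):
    # forward Euler step: T_{i+1} = T_i - dt * lambda * (T_i - Ta)
    T[i + 1] = T[i] - dt * lam * (T[i] - Ta)

print(T[-1])                    # slowly relaxing toward Ta
```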

+
+

Example Seven

+
+

Let us now turn to another example in atmospheric physics to illustrate the use of the forward Euler method. Consider the process of condensation and evaporation in a cloud. The saturation ratio, \(S\), is the ratio of the vapour pressure to the vapour pressure of a plane surface of water at temperature \(T\). \(S\) varies in time according to the

+
+
+

(saturation development equation)

+
+\[\frac{dS}{dt} = \alpha S^2 + \beta S + \gamma,\]
+

where \(\alpha\), \(\beta\) and \(\gamma\) are complicated (but constant) expressions involving the physical parameters in the problem (and so we won’t reproduce them here).

+
+
+

What are some physically reasonable values of the parameters (other than simply \(\alpha<0\) and \(\gamma>0\))?

+

Chen (1994) gives a detailed derivation of the equation, which is a non-linear, first order ODE (i.e. non-linear in the dependent variable \(S\), and it contains only a first derivative in the time variable). Chen also derives an analytical solution to the problem, which takes a couple of pages of messy algebra to arrive at. Rather than show these details, we would like to use the forward Euler method in order to compute the solution numerically, and as we will see, this is actually quite simple.

+
+
+

Using the (forward difference formula), the discrete form of the (saturation development equation) is

+
+\[S_{i+1} = S_i + \Delta t \left( \alpha S_i^2 + \beta S_i + \gamma \right).\]
+

Consider an initial saturation ratio of \(0.98\), and take parameter values \(\alpha=-1\), \(\beta=1\) and \(\gamma=1\). The resulting solution, for various values of the time step \(\Delta t\), is plotted in Figure Saturation Time Series.

+
+
+
[ ]:
+
+
+
Image(filename='feuler/sat2.png', width='60%')
+
+
+
+

Figure Saturation Time Series: Plot of the saturation ratio as a function of time using the Forward Euler method. “nt” is the number of time steps.

+
+

There are two things to notice here, both related to the importance of the choice of time step \(\Delta t\):

+
+
+
    +
  • As \(\Delta t\) is reduced, the solution appears to converge to one solution curve, which we would hope is the exact solution to the differential equation. An important question to ask is: When will the numerical method converge to the exact solution as \(\Delta t\) is reduced?

  • +
  • If \(\Delta t\) is taken too large, however, the numerical solution breaks down. In the above example, the oscillations that occur for the largest time step (when \(nt=6\)) are a sign of numerical instability. The differential problem is stable and exhibits no such behaviour, but the numerical scheme we have used has introduced an instability. An obvious question that arises is: How can we avoid introducing instabilities in a numerical scheme?

  • +
+
+
+

Neither question has an obvious answer, and both issues will be investigated further in Lab 2.
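To make the marching process concrete, here is a minimal sketch (not from the original lab) of the forward Euler iteration for the saturation equation, using the initial value and parameter values quoted above; the time step and the number of steps are arbitrary choices, and, as the figure warns, taking dt too large will make this loop go unstable.

```python
import numpy as np

alpha, beta, gamma = -1.0, 1.0, 1.0   # parameter values from the text
dt, nsteps = 0.1, 100                 # illustrative step size and number of steps
S = np.empty(nsteps + 1)
S[0] = 0.98                           # initial saturation ratio from the text

for i in range(nsteps):
    # forward Euler step for dS/dt = alpha*S**2 + beta*S + gamma
    S[i + 1] = S[i] + dt * (alpha * S[i] ** 2 + beta * S[i] + gamma)

print(S[-1])                          # settles toward a steady saturation ratio
```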

+
+
+
+
+

Other Approximations

+

Look again at the (limit definition of derivative), and notice that an equivalent expression for \(T^\prime\) is

+
+\[T^\prime(t) = \lim_{\Delta t\rightarrow 0} \frac{T(t)-T(t-\Delta t)}{\Delta t}.\]
+

From this, we can derive the backward difference formula for the first derivative,

+

(Backward Difference Formula)

+
+\[T^\prime(t_i) \approx \frac{T_i-T_{i-1}}{\Delta t},\]
+

and similarly the centered difference formula

+

(Centered Difference Formula)

+
+\[T^\prime(t_i) \approx \frac{T_{i+1}-T_{i-1}}{2 \Delta t}.\]
+

The corresponding limit formulas are equivalent from a mathematical standpoint, but the discrete formulas are not! In particular, the accuracy and stability of numerical schemes derived from the three difference formulas: (Forward Difference Formula), (Backward Difference Formula) and (Centered Difference Formula) are quite different. More will be said on this in the next Lab.
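As a quick check (not part of the original lab), the sketch below applies the three formulas to the function \(y = x^3 - 5x\) from Example Six at \(x = 1\), where the exact derivative is \(-2\); the particular values of \(\Delta t\) are arbitrary.

```python
def y(t):
    return t ** 3 - 5.0 * t            # the function from Example Six

t0 = 1.0                               # exact derivative: y'(1) = 3*1**2 - 5 = -2

for dt in (0.5, 0.1, 0.01):
    fwd = (y(t0 + dt) - y(t0)) / dt                  # forward difference
    bwd = (y(t0) - y(t0 - dt)) / dt                  # backward difference
    ctr = (y(t0 + dt) - y(t0 - dt)) / (2.0 * dt)     # centered difference
    print(f"dt={dt:5.2f}  forward={fwd:8.4f}  backward={bwd:8.4f}  centered={ctr:8.4f}")
```

In this example the centered estimate approaches \(-2\) noticeably faster than the one-sided ones as \(\Delta t\) shrinks, a hint at the accuracy differences taken up in Lab 2.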

+
+

Summary

+

This section introduces the use of the forward difference formula to discretize the derivatives in a first order differential equation. The resulting numerical scheme is called the forward Euler method. We also introduced the backward and centered difference formulas for the first derivative, which were also obtained from the definition of derivative.

+

You saw how the choice of grid spacing affected the accuracy of the solution, and were introduced to the concepts of convergence and stability of a numerical scheme. More will be said about these topics in the succeeding lab, as well as other methods for discretizing derivatives.

+
+
+
+
+

Generalizations

+

The idea of discretization introduced in the previous section can be generalized in several ways, some of which are:

+
    +
  • problems with higher derivatives,

  • +
  • systems of ordinary differential equations,

  • +
  • boundary value problems, and

  • +
  • partial differential equations.

  • +
+
+

Higher Derivatives

+

Many problems in physics involve derivatives of second order and higher. Discretization of these derivatives is no more difficult than the first derivative in the previous section. The difference formula for the second derivative, which will be derived in Lab #2, is given by

+

(Centered Second Derivative)

+
+\[y^{\prime\prime}(t_i) \approx \frac{y(t_{i+1})-2y(t_i)+y(t_{i-1})}{(\Delta t)^2} ,\]
+

and is called the second-order centered difference formula for the second derivative (“centered”, because it involves the three points centered about \(t_i\), and “second-order” for reasons we will see in the next Lab). We will apply this formula in the following example …

+
+

Example Eight

+

A weather balloon, filled with helium, climbs vertically until it reaches its level of neutral buoyancy, at which point it begins to oscillate about this equilibrium height. We can derive a DE describing the motion of the balloon by applying Newton’s second law:

+
+\[mass \; \times \; acceleration = force\]
+
+\[m \frac{d^2 y}{d t^2} = \underbrace{- \beta \frac{dy}{dt}}_{\mbox{air resistance}} \underbrace{- \gamma y}_{\mbox{buoyant force}},\]
+

where

+
    +
  • \(y(t)\) is the displacement of the balloon vertically from its equilibrium level, \(y=0\);

  • +
  • \(m\) is the mass of the balloon and payload;

  • +
  • the oscillations are assumed small, so that we can assume a linear functional form for the buoyant force, \(-\gamma y\).

  • +
+

This problem also requires initial values for both the initial displacement and velocity:

+
+\[y(0) = y_0 \;\; \mbox{and} \;\; \frac{dy}{dt}(0) = v_0.\]
+
+
[ ]:
+
+
+
Image(filename='balloon/balloon.png', width='60%')
+
+
+
+

Figure Weather Balloon: A weather balloon oscillating about its level of neutral buoyancy.

+
+
+

Problem One

+
    +
  1. Using the centered difference formula (Centered Second Derivative) for the second derivative, and the forward difference formula (Forward Difference Formula) for the first derivative at the point \(t_i,\) derive a difference scheme for \(y_{i+1}\), the vertical displacement of the weather balloon.

  2. +
  3. What is the difference between this scheme and the forward Euler scheme from Example Seven, related to the initial conditions? (Hint: think about starting values …)

  4. +
  5. Given the initial values above, explain how to start the numerical integration.

  6. +
+

Note: There are a number of problems in the text of each lab. See the syllabus for which problems you are assigned as part of your course. That is, you don’t have to do them all!

+
+
+
+

Systems of First-order ODE’s

+

Discretization extends in a simple way to first-order systems of ODE’s, which arise in many problems, as we will see in some of the later labs. For now, though, consider the following example.

+
+

Example 9

+

The second order DE for the weather balloon problem from Example Eight can be rewritten by letting \(u=dy/dt\). Then,

+

\begin{align} \frac{dy}{dt} &= u\\ \frac{du}{dt} &= -\frac{\beta}{m} u - \frac{\gamma}{m} y \end{align}

+

which is a system of first order ODE’s in \(u\) and \(y\). This set of differential equations can be discretized to obtain another numerical scheme for the weather balloon problem.

+
+
+

Problem Two

+
    +
  1. Derive a difference scheme for the problem based on the above system of two ODE’s using the forward difference formula for the first derivative.

  2. +
  3. By combining the discretized equations into one equation for y, show that the difference between this scheme and the scheme obtained in problem one is the difference formula for the second derivative.

  4. +
+
+
+
+

Boundary Value Problem

+

So far, we’ve been dealing with initial value problems or IVP’s (such as the problem of heat conduction in a rock in Example One): a differential equation is given for an unknown function, along with its initial value. There is another class of problems, called boundary value problems (or BVP’s), where the independent variables are restricted to a closed domain (as opposed to an open domain) and the solution (or its derivative) is specified at every point along the +boundary of the domain. Contrast this to initial value problems, where the solution is not given at the end time.

+

A simple example of a boundary value problem is the steady state heat diffusion equation problem for the rod in Example Three. By steady state, we mean simply that the rod has reached a state where its temperature no longer changes in time; that is, \(\partial u/\partial t = 0\). The corresponding problem has a temperature, \(u(x)\), that depends on position only, and obeys the following equation and boundary conditions:

+
+\[u_{xx} = 0,\]
+
+\[u(0) = u(1) = 0.\]
+

The full, time-dependent diffusion problem of Example Three, by contrast, is known as an initial-boundary value problem (or IBVP), since it has a mix of both initial and boundary values.

+

The structure of initial and boundary value problems are quite different mathematically: IVP’s involve a time variable which is unknown at the end time of the integration (and hence the solution is known on an open domain or interval), whereas BVP’s specify the solution value on a closed domain or interval. The numerical methods corresponding to these problems are also quite different, and this can be best illustrated by an example.

+
+

Example 10

+

We can discretize the steady state diffusion equation using the centered difference formula for the second derivative to obtain:

+
+\[u_{i+1}-2u_i+u_{i-1} = 0\]
+

where \(u_i\approx u(i/N)\) and \(i=0,1,\ldots,N\) (and the factor of \((\Delta x)^2 = {1}/{N^2}\) has been multiplied out). The boundary values \(u_0\) and \(u_N\) are both known to be zero, so the above expression represents a system of \(N-1\) equations in \(N-1\) unknown values \(u_i\) that must be solved for simultaneously. The solution of such systems of linear equations will be covered in more detail in Lab #3; in fact, this equation forms the basis for a Problem in the Linear Algebra Lab.

+

Compare this to the initial value problems discretized using the forward Euler method, where the resulting numerical scheme is a step-by-step, marching process (that is, the solution at one grid point can be computed using an explicit formula using only the value at the previous grid point).
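A minimal sketch (not part of the original lab) of what “solved for simultaneously” means: assemble the \(N-1\) equations as a tridiagonal matrix and hand them to a linear solver in one call. With the zero boundary values used here the answer is identically zero, as expected for this steady state; the choice \(N=10\) is arbitrary.

```python
import numpy as np

N = 10                                   # number of grid intervals (illustrative choice)
A = np.zeros((N - 1, N - 1))             # one equation per interior unknown u_1 ... u_{N-1}
for i in range(N - 1):
    A[i, i] = -2.0                       # from u_{i+1} - 2*u_i + u_{i-1} = 0
    if i > 0:
        A[i, i - 1] = 1.0
    if i < N - 2:
        A[i, i + 1] = 1.0

b = np.zeros(N - 1)                      # boundary values u_0 = u_N = 0 contribute nothing
u_interior = np.linalg.solve(A, b)       # all unknowns obtained in one simultaneous solve
print(u_interior)
```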

+
+
+
+

Partial Differential Equations

+

So far, the examples have been confined to ordinary differential equations, but the procedure we’ve set out for ODE’s extends with only minor modifications to problems involving PDE’s.

+
+

Example 11

+

To illustrate the process, let us go back to the heat diffusion problem from Example Three, an initial-boundary value problem in the temperature \(u(x,t)\):

+
+\[u_{t} = \alpha^2 u_{xx},\]
+

along with initial values

+
+\[u(x,0) = u_0(x),\]
+

and boundary values

+
+\[u(0,t) = u(1,t) = 0.\]
+

As for ODE’s, the steps in the process of discretization remain the same:

+
    +
  1. First, replace the independent variables by discrete values

    +
    +\[x_i = i \Delta x = \frac{i}{M}, \;\; \mbox{where $i=0, 1, \ldots, M$, and}\]
    +
    +\[t_n = n \Delta t, \;\; \mbox{where $n=0, 1, \ldots$}\]
    +

    In this example, the set of discrete points define a two-dimensional grid of points, as pictured in Figure PDE Grid.

    +
  2. +
+
+
[ ]:
+
+
+
Image(filename='pdes/pde-grid.png', width='40%')
+
+
+
+

Figure PDE Grid: The computational grid for the heat diffusion problem, with discrete points \((x_i,t_n)\).

+
    +
  1. Replace the dependent variables (in this example, just the temperature \(u(x,t)\)) with approximations defined at the grid points:

    +
    +\[U_i^n \approx u(x_i,t_n).\]
    +

    The boundary and initial values for the discrete temperatures can then be written in terms of the given information.

    +
  2. +
  3. Approximate all of the derivatives appearing in the problem with finite difference approximations. If we use the centered difference approximation (Centered Second Derivative) for the second derivative in \(x\), and the forward difference formula (Forward Difference Formula) for the time derivative (while evaluating the terms on the right hand side at the previous time level), we obtain the following numerical scheme (a minimal Python sketch of this scheme follows this list):

    +\[U_i^{n+1} = U_i^n + \frac{\alpha^2 \Delta t}{(\Delta x)^2} \left( U_{i+1}^n - 2 U_i^n + U_{i-1}^n \right)\]

    +

    Given the initial values, \(U_i^0=u_0(x_i)\), and boundary values \(U_0^n=U_M^n=0\), this difference formula allows us to compute values of temperature at any time, based on values at the previous time.

    +

    There are, of course, other ways of discretizing this problem, but the above is one of the simplest.

    +
  4. +
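Here is a minimal Python sketch (not part of the original lab) of the scheme above, using the sinusoidal initial condition and \(\alpha = 0.2\) from the earlier figures; the grid size, time step and number of steps are arbitrary choices, with the time step kept small relative to \((\Delta x)^2\) so that the loop stays well behaved (stability is taken up in Lab 2).

```python
import numpy as np

alpha, M = 0.2, 20                         # diffusivity and number of grid intervals
dx = 1.0 / M
dt = 0.4 * dx ** 2 / alpha ** 2            # an illustrative, deliberately small time step
x = np.linspace(0.0, 1.0, M + 1)
U = np.sin(np.pi * x)                      # initial values U_i^0 = u_0(x_i)

nsteps = 50
for n in range(nsteps):                    # march forward in time
    Unew = U.copy()
    Unew[1:-1] = U[1:-1] + alpha ** 2 * dt / dx ** 2 * (U[2:] - 2.0 * U[1:-1] + U[:-2])
    Unew[0] = Unew[-1] = 0.0               # boundary values U_0^n = U_M^n = 0
    U = Unew

# compare with the exact single-term solution at t = nsteps*dt
exact = np.exp(-np.pi ** 2 * alpha ** 2 * nsteps * dt) * np.sin(np.pi * x)
print(np.max(np.abs(U - exact)))           # small difference for this resolution
```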
+
+
+
+
+

Mathematical Notes

+
+

Solution to the Heat Conduction Equation

+

In Example One, we had the equation

+
+\[\frac{dT}{dt} = -\lambda (T-T_a),\]
+

subject to the initial condition \(T(0)\). This equation can be solved by separation of variables, whereby all expressions involving the independent variable \(t\) are moved to the right hand side, and all those involving the dependent variable \(T\) are moved to the left

+
+\[\frac{dT}{T-T_a} = -\lambda dt.\]
+

The resulting expression is integrated from time \(0\) to \(t\)

+
+\[\int_{T(0)}^{T(t)} \frac{dS}{S-T_a} = -\int_0^t\lambda ds,\]
+

(where \(s\) and \(S\) are dummy variables of integration), which then leads to the relationship

+
+\[\ln \left( T(t)-T_a \right)-\ln\left( T(0)-T_a \right) = -\lambda t,\]
+

or, after exponentiating both sides and rearranging,

+
+\[T(t) = T_a + (T(0)-T_a)e^{-\lambda t},\]
+

which is exactly the Conduction Solution equation.

+
+
+
+

References

+

Boyce, W. E. and R. C. DiPrima, 1986: Elementary Differential Equations and Boundary Value Problems. John Wiley & Sons, New York, NY, 4th edition.


Burden, R. L. and J. D. Faires, 1981: Numerical Analysis. PWS-Kent, Boston, 4th edition.


Chen, J.-P., 1994: Predictions of saturation ratio for cloud microphysical models. Journal of the Atmospheric Sciences, 51(10), 1332–1338.


Garcia, A. L., 1994: Numerical Methods for Physics. Prentice-Hall, Englewood Cliffs, NJ.


Strang, G., 1986: Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA.


Glossary

backward difference discretization: used to estimate a derivative – uses the current point and points with smaller independent variable.

boundary value problem: a differential equation (or set of differential equations) along with boundary values for the unknown functions. Abbreviated BVP.

BVP: see boundary value problem

centre difference discretization: used to estimate a derivative – uses a discretization symmetric (in independent variable) around the current point.

closed domain: a domain for which the value of the dependent variables is known on the boundary of the domain.

converge: as the discretization step (e.g. ∆t) is reduced, the solutions generated approach one solution curve.

DE: see differential equation

dependent variable: a variable which is a (possibly unknown) function of the independent variables in a problem; for example, in a fluid the pressure can be thought of as a dependent variable, which depends on the time t and position (x, y, z).

differential equation: an equation involving derivatives. Abbreviated DE.

discretization: when referring to DE’s, it is the process whereby the independent variables are replaced by a grid of discrete points; the dependent variables are replaced by approximations at the grid points; and the derivatives appearing in the problem are replaced by a finite difference approximation. The discretization process replaces the DE (or DE’s) with an algebraic equation or finite system of algebraic equations which can be solved on a computer.

finite difference: an approximation of the derivative of a function by a difference quotient involving values of the function at discrete points. The simplest method of deriving finite difference formulae is using Taylor series.

first order differential equation: a differential equation involving only first derivatives of the unknown functions.

forward difference discretization: used to calculate a derivative – uses the current point and points with larger independent variable.

grid: when referring to discretization of a DE, a grid is a set of discrete values of the independent variables, defining a mesh or array of points, at which the solution is approximated.

independent variable: a variable that does not depend on other quantities (typical examples are time, position, etc.)

initial value problem: a differential equation (or set of differential equations) along with initial values for the unknown functions. Abbreviated IVP.

interpolation: a method for estimating the value of a function at points intermediate to those where its values are known.

IVP: see initial value problem

linear: pertaining to a function or expression in which the quantities appear in a linear combination. If \(x_i\) are the variable quantities, and \(c_i\) are constants, then any linear function of the \(x_i\) can be written in the form \(c_0 + \sum_i c_i \cdot x_i\).

linear interpolation: interpolation using straight lines between the known points.

Navier-Stokes equations: the system of non-linear PDE’s that describe the time evolution of the flow of a fluid.

non-linear: pertaining to a function or expression in which the quantities appear in a non-linear combination.

numerical instability: although the continuous differential equation has a finite solution, the numerical solution grows without bound as the numerical iteration proceeds.

ODE: see ordinary differential equation

open domain: a domain for which the value of one or more dependent variables is unknown on a portion of the boundary of the domain, or a domain for which one boundary (say, time very large) is not specified.

ordinary differential equation: a differential equation where the derivatives appear only with respect to one independent variable. Abbreviated ODE.

partial differential equation: a differential equation where derivatives appear with respect to more than one independent variable. Abbreviated PDE.

PDE: see partial differential equation

second order differential equation: a differential equation involving only first and second derivatives of the unknown functions.

separation of variables: a technique whereby a function of several independent variables is written as a product of several functions, each of which depends on only one of the independent variables. For example, a function of three unknowns, u(x, y, t), might be written as u(x, y, t) = X(x) · Y(y) · T(t).

+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab1/01-lab1.ipynb b/notebooks/lab1/01-lab1.ipynb new file mode 100644 index 0000000..74c624c --- /dev/null +++ b/notebooks/lab1/01-lab1.ipynb @@ -0,0 +1,1662 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "# Laboratory 1: An Introduction to the Numerical Solution of Differential Equations: Discretization (Jan 2024)\n", + "\n", + "John M. Stockie" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems\n", + "\n", + "- [Problem One](#Problem-One)\n", + "- [Problem Two](#Problem-Two)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "\n", + "The examples and exercises in this lab are meant to illustrate the\n", + "limitations of analytical solution techniques, using several\n", + "differential equation models for simple physical systems. This is the\n", + "prime motivation for the use of numerical methods.\n", + "\n", + "After completing this lab, you will understand the process of\n", + "*discretizing* a continuous problem, and be able to derive a simple\n", + "finite difference approximation for an ordinary or partial differential\n", + "equation. The examples will also introduce the concepts of *accuracy*\n", + "and *stability*, which will be discussed further in Lab 2.\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- Define the term or identify: Ordinary Differential Equation, Partial\n", + " Differential Equation, Linear equation, Non-linear equation, Initial\n", + " value problem, Boundary value problem, Open Domain, and Closed\n", + " Domain.\n", + "\n", + "- Define the term, identify or perform: Forward difference\n", + " discretization, Backward difference discretization, and Centre\n", + " difference discretization.\n", + "\n", + "- Define the term: Interpolation, Convergence, and Instability.\n", + "\n", + "- Define the term or perform: Linear interpolation.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. However, if you would like additional background on any of\n", + "the following topics, then refer to the sections indicated below:\n", + "\n", + "- **Differential Equations:**\n", + "\n", + " -  [Strang (1986)](#Ref:Strang), Chapter 6 (ODE’s).\n", + "\n", + " -  [Boyce and DiPrima (1986)](#Ref:BoyceDiPrima) (ODE’s and PDE’s).\n", + "\n", + "- **Numerical Methods:**\n", + "\n", + " -  [Strang (1986)](#Ref:Strang), Section 5.1.\n", + "\n", + " -  [Garcia (1994)](#Ref:Garcia), Sections 1.4–1.5, Chapter 2 (a basic introduction to\n", + " numerical methods for problems in physics).\n", + "\n", + " -  [Boyce and DiPrima (1986)](#Ref:BoyceDiPrima), Sections 8.1–8.5, 8.7, 8.8." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Running Code Cells\n", + "\n", + "\n", + "The next cell in this notebook is a code cell. Run it by selecting it and hitting ctrl enter, or by selecting it and hitting the run button (arrow to right) in the notebook controls." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from IPython.display import Image\n", + "# import plotting package and numerical python package for use in examples later\n", + "import matplotlib.pyplot as plt\n", + "# import the numpy array handling library\n", + "import numpy as np\n", + "# import the quiz script\n", + "import context\n", + "from numlabs.lab1 import quiz1 as quiz" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction: Why bother with numerical methods?\n", + "\n", + "\n", + "In introductory courses in ordinary and partial differential equations\n", + "(ODE’s and PDE’s), many analytical techniques are introduced for\n", + "deriving solutions. These include the methods of undetermined\n", + "coefficients, variation of parameters, power series, Laplace transforms,\n", + "separation of variables, Fourier series, and phase plane analysis, to\n", + "name a few. When there are so many analytical tools available, one is\n", + "led to ask:\n", + "\n", + "> *Why bother with numerical methods at all?*\n", + "\n", + "The fact is that the class of problems that can be solved analytically\n", + "is *very small*. Most differential equations that model physical\n", + "processes cannot be solved explicitly, and the only recourse available\n", + "is to use a numerical procedure to obtain an approximate solution of the\n", + "problem.\n", + "\n", + "Furthermore, even if the equation can be integrated to obtain a closed\n", + "form expression for the solution, it may sometimes be much easier to\n", + "approximate the solution numerically than to evaluate it analytically.\n", + "\n", + "In the following two sections, we introduce two classical physical\n", + "models, seen in most courses in differential equations. Analytical\n", + "solutions are given for these models, but then seemingly minor\n", + "modifications are made which make it difficult (if not impossible) to\n", + "calculate actual solution values using analytical techniques. The\n", + "obvious alternative is to use numerical methods." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Ordinary Differential Equations\n", + "\n", + "\n", + "[lab1:sec:odes]: <#3.1-Ordinary-Differential-Equations> \"ODES\"\n", + "\n", + "In order to demonstrate the usefulness of numerical methods, let’s start\n", + "by looking at an example of a *first-order initial value problem* (or\n", + "*IVP*). 
In their most general form, these equations look like\n", + "\n", + "(Model ODE)\n", + "$$\\begin{array}{c}\n", + " {\\displaystyle \\frac{dy}{dt} = f(y,t),} \\\\\n", + " \\; \\\\\n", + " y(0) = y_0, \n", + " \\end{array}$$\n", + "\n", + "where\n", + "\n", + "- $t$ is the *independent variable* (in many physical systems, which\n", + " change in time, $t$ represents time);\n", + "\n", + "- $y(t)$ is the unknown quantity (or *dependent variable*) that we\n", + " want to solve for;\n", + "\n", + "- $f(y,t)$ is a known function that can depend on both $y$ and $t$;\n", + " and\n", + "\n", + "- $y_0$ is called the *initial value* or *initial condition*, since it\n", + " provides a value for the solution at an initial time, $t=0$ (the\n", + " initial value is required so that the problem has a unique\n", + " solution).\n", + "\n", + "This problem involves the first derivative of the solution, and also\n", + "provides an initial value for $y$, and hence the name “first-order\n", + "initial value problem”.\n", + "\n", + "Under certain very general conditions on the right hand side function\n", + "$f$, we know that there will be a unique solution to the problem ([Model ODE](#lab1:eq:modelode)).\n", + "However, only in very special cases can we actually write down a\n", + "closed-form expression for the solution.\n", + "\n", + "In the remainder of this section, we will leave the general equation,\n", + "and investigate a specific example related to heat conduction. It will\n", + "become clear that it is the problems which *do not have exact solutions*\n", + "which are the most interesting or meaningful from a physical standpoint.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### *Example One*\n", + "\n", + "\n", + "> Consider a small rock, surrounded by air or water,\n", + "which gains or loses heat only by conduction with its surroundings\n", + "(there are no radiation effects). If the rock is small enough, then we\n", + "can ignore the effects of diffusion of heat within the rock, and\n", + "consider only the flow of heat through its surface, where the rock\n", + "interacts with the surrounding medium.\n", + "\n", + "> It is well known from experimental observations that the rate at which\n", + "the temperature of the rock changes is proportional to the difference\n", + "between the rock’s surface temperature, $T(t),$ and the *ambient\n", + "temperature*, $T_a$ (the ambient temperature is simply the temperature\n", + "of the surrounding material, be it air, water, …). This relationship is\n", + "expressed by the following ordinary differential equation\n", + "
\n", + "(Conduction 1d)\n", + "$$% \\textcolor[named]{Red}{\\frac{dT}{dt}} = -\\lambda \\,\n", + "% \\textcolor[named]{Blue}{(T-T_a)} .\n", + " \\underbrace{\\frac{dT}{dt}}_{\\begin{array}{c} \n", + " \\mbox{rate of change}\\\\\n", + " \\mbox{of temperature}\n", + " \\end{array}}\n", + " = -\\lambda \\underbrace{(T-T_a)}_{\\begin{array}{c} \n", + " \\mbox{temperature}\\\\\n", + " \\mbox{difference}\n", + " \\end{array}} .$$\n", + "
\n", + " \n", + ">and is commonly known as *Newton’s\n", + "Law of Cooling*. (The parameter $\\lambda$ is defined to be\n", + "$\\lambda = \\mu A/cM$, where $A$ is the surface area of the rock, $M$ is\n", + "its mass, $\\mu$ its thermal conductivity, and $c$ its specific heat.)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Quiz on Newton's Law of Cooling \n", + "\n", + "\n", + "$\\lambda$ is positive? True or False?\n", + "\n", + "In the following, replace 'xxxx' by 'True', 'False', 'Hint 1' or 'Hint 2' and run the cell ([how to](#Running-Code-Cells))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.conduction_quiz(answer = 'xxxx'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If we assume that $\\lambda$ is a constant, then the solution to this\n", + "equation is given by \n", + "\n", + "
\n", + "(Conduction solution)\n", + "$$T(t) = T_a + (T(0)-T_a)e^{-\\lambda t},$$\n", + "
\n", + "\n", + "where $T(0)$ is the initial temperature.\n", + "\n", + "**Mathematical Note:** Details of the solution can be found in the [Appendix](#Solution-to-the-Heat-Conduction-Equation)\n", + "\n", + "\n", + "In order to obtain realistic value of the parameter $\\lambda$, let our\n", + "“small” rock be composed of granite, with mass of $1\\;gram$, which\n", + "corresponds to a $\\lambda \\approx 10^{-5}\\;sec^{-1}$.\n", + "\n", + "Sample solution curves are given in Figure [Conduction](#lab1:fig:conduction)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='conduction/conduction.png',width='60%') " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure: Conduction Plot of solution curves $T(t)$ for $T_0=-10,15,20,30$; parameter\n", + "values: $\\lambda=10^{-5}$, $T_a=20$.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Demo: Conduction\n", + "[lab1:demo:conduction]: <#Demo:-Conduction> \"Conduction Demo\"\n", + "\n", + "Here is an interactive example that investigates the behaviour of the solution.\n", + "\n", + "The first we import the function that does the calculation and plotting. You need to run this cell ([how to](#Running-Code-Cells)) to load it. Loading it does not run the function. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from numlabs.lab1 import temperature_conduction as tc" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "You need to call the function. Simpliest call is next cell. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# simple call to temperature demo\n", + "tc.temperature()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After running as is try changing To = To (the initial temperature), Ta = Ta (the ambient temperature) or la = λ (the effective conductivity) to investigate changes in the solution." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# setting different values \n", + "# (note this uses the defaults again as written, you should change the values)\n", + "tc.temperature(Ta = 20, To = np.array([-10., 10., 20., 30.]), la = 0.00001)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example Two\n", + "\n", + "\n", + "\n", + "Suppose that the rock in the previous\n", + "example has a $\\lambda$ which is *not* constant. For example, if that\n", + "the rock is made of a material whose specific heat varies with the\n", + "temperature or time, then $\\lambda$ can be a function of $T$ or $t$.\n", + "This might happen if the material composing the rock undergoes a phase\n", + "transition at a certain critical temperature (for example, a melting ice\n", + "pellet). The problem is now a *non-linear* one, for which analytical\n", + "techniques may or may not provide a solution.\n", + "\n", + "If $\\lambda=\\lambda(T)$, a function of temperature only, then the exact\n", + "solution may be written as\n", + "$$T(t) = T_a + \\exp{\\left[-\\int^{t}_{0} \\lambda(T(s))ds \\right]},$$\n", + "which involves an integral that may or may not be evaluated\n", + "analytically, in which case we can only approximate the integral.\n", + "Furthermore, if $\\lambda$ is a function of both $T$ and $t$ which is\n", + "*not separable* (cannot be written as a product of a function of $T$ and\n", + "$t$), then we may not be able to write down a closed form for the\n", + "solution at all, and we must resort to numerical methods to obtain a\n", + "solution.\n", + "\n", + "Even worse, suppose that we don’t know $\\lambda$ explicitly as a\n", + "function of temperature, but rather only from experimental measurements\n", + "of the rock (see Figure [Table](#lab1:fig:table) for an example). 
\n", + "\n", + "| i | Temperature ($T_i$) | Measured $\\lambda_i$ |\n", + "| - | :------------------: | :-------------------: |\n", + "| 0 | -5.0 | 2.92 |\n", + "| 1 | -2.0 | 1.59 |\n", + "| 2 | 1.0 | 1.00 |\n", + "| 3 | 4.0 | 2.52 |\n", + "| 4 | 7.0 | 3.66 | \n", + "| 5 | 10.0 | 4.64 |" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename=\"table/table-interp.png\",width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Table: A rock with $\\lambda$ known only at a sequence of discrete temperature\n", + "values, from experimental measurements. The function $\\lambda(T)$ can be\n", + "represented approximately using linear interpolation (and the resulting\n", + "approximate function can then be used to solve the problem\n", + "numerically.\n", + "
\n", + "\n", + "Then there is\n", + "no way to express the rock’s temperature as a function, and analytical\n", + "methods fail us, since we do not know the values at points between the\n", + "given values. One alternative is to approximate $\\lambda$ at\n", + "intermediate points by joining successive points with straight lines\n", + "(this is called *linear interpolation*), and then use the resulting\n", + "function in a numerical scheme for computing the solution.\n", + "\n", + "As the above example demonstrates, even for a simple ODE such as [1-d conduction](#lab1:eq:conduction1d), there\n", + "are situations where analytical methods are inadequate." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Partial Differential Equations\n", + "\n", + "\n", + "#### Example Three\n", + "\n", + "[lab1:exm:diffusion1d]: <#Example-Three> \"Example 3\"\n", + "\n", + "The rock in [Example One](#Example-One) was\n", + "considered to be small enough that the effects of heat diffusion in the\n", + "interior were negligible in comparison to the heat lost by conduction\n", + "through its surface. In this example, consider a rock that is *not\n", + "small*, and whose temperature changes are dominated by internal\n", + "diffusion effects. Therefore, it is no longer possible to ignore the\n", + "spatial dependence in the problem.\n", + "\n", + "For simplicity, we will add spatial dependence in one direction only,\n", + "which corresponds to a “one-dimensional rock”, or a thin rod. Assume\n", + "that the rod is insulated along its sides, so that heat flows only along\n", + "its length, and possibly out the ends (see Figure [Rod](#lab1:fig:rock-1d))." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='conduction/rod.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Rod: A thin rod can be thought of as a model for a one-dimensional\n", + "rock.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Consequently, the temperature varies only with position, $x$, and time,\n", + "$t$, and can be written as a function $u(x,t)$. The temperature in the\n", + "rod is governed by the following PDE $$u_t = \\alpha^2 u_{xx},$$ for\n", + "which we have to provide an initial temperature $$u(x,0) = u_0(x),$$ and\n", + "boundary values $$u(0,t)=u(1,t)=0,$$ where\n", + "\n", + "- $\\alpha^2$ is the *thermal diffusivity* of the material,\n", + "\n", + "- $u_0(x)$ is the initial temperature distribution in the rod, and\n", + "\n", + "- the boundary conditions indicate that the ends of the rod are held\n", + " at constant temperature, which we’ve assumed is zero.\n", + "\n", + "Thermal diffusivity is a quantity that depends only on the material from\n", + "which the bar is made. It is defined by\n", + "$$\\alpha^2 = \\frac{\\kappa}{\\rho c},$$\n", + "where $\\kappa$ is the thermal\n", + "conductivity, $\\rho$ is the density, and $c$ is the specific heat. A\n", + "typical value of the thermal diffusivity for a granite bar is\n", + "$0.011\\;cm^2/sec$, and $0.0038\\;cm^2/sec$ for a bar made of brick.\n", + "\n", + "Using the method of *separation of variables*, we can look for a\n", + "temperature function of the form $u(x,t)=X(x) \\cdot T(t)$, which leads\n", + "to the infinite series solution\n", + "$$u(x,t) = \\sum_{n=1}^\\infty b_n e^{-n^2\\pi^2\\alpha^2 t}\\sin{(n\\pi x)},$$\n", + "where the series coefficients are\n", + "$$b_n = 2 \\int_0^1 u_0(x) \\sin{(n\\pi x)} dx.$$\n", + "\n", + "**Mathematical Note:** Details of the derivation can be found in any introductory text in PDE’s\n", + "(for example, [Boyce and DiPrima (1986)](#Ref:BoyceDiPrima) [p. 549])." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We do manage to obtain an explicit formula for the solution, which can\n", + "be used to calculate actual values of the solution. However, there are\n", + "two obvious reasons why this formula is not of much practical use:\n", + "\n", + "1. The series involves an infinite number of terms (except for very\n", + " special forms for the initial heat distribution … such as the one\n", + " shown below). We might be able to truncate the series, since each\n", + " term decreases exponentially in size, but it is not trivial to\n", + " decide how many terms to choose in order to get an accurate answer\n", + " and here we are already entering the realm of numerical\n", + " approximation.\n", + "\n", + "2. Each term in the series requires the evaluation of an integral. When\n", + " these cannot be integrated analytically, we must find some way to\n", + " approximate the integrals … numerical analysis rears its head once\n", + " again!\n", + "\n", + "For most physical problems, an analytical expression cannot be obtained,\n", + "and the exact formula is not of much use.\n", + "\n", + "However, consider a very special case, when the initial temperature\n", + "distribution is sinusoidal, $$u_0(x) = \\sin(\\pi x).$$ For this problem,\n", + "the infinite series collapses into a single term\n", + "$$u(x,t) = e^{-\\pi^2\\alpha^2t}\\sin{\\pi x}.$$\n", + "\n", + "Sample solution curves are given in Figure [1d Diffusion](#lab1:fig:diffusion-1d)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='diffusion/diffusion.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure 1d-diffusion Temperature vs. position curves at various times, for heat diffusion\n", + "in a rod with sinusoidal initial temperature distribution and parameter\n", + "value $\\alpha=0.2$.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Movie: Diffusion\n", + "Here is a movie of the exact solution to the diffusion problem. Run the cell ([how to](#Running-Code-Cells)), then run the video. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "import IPython.display as display\n", + "\n", + "vid = display.YouTubeVideo(\"b4D2ktTtw7E\", modestbranding=1, rel=0, width=800)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Summary\n", + "\n", + "This section is best summed up by the insightful comment of [Strang (1986)](#Ref:Strang)\n", + "[p. 587]:\n", + "\n", + "**Nature is nonlinear.**\n", + "\n", + "Most problems arising in physics (which are non-linear) cannot be solved\n", + "analytically, or result in expressions that have little practical value,\n", + "and we must turn to numerical solution techniques." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Discretization\n", + "\n", + "\n", + "When computing analytical solutions to differential equations, we are\n", + "dealing with *continuous functions*; i.e. functions that depend continuously\n", + "on the independent variables. A computer, however, has only finite\n", + "storage capacity, and hence there is no way to represent continuous\n", + "data, except approximately as a sequence of *discrete* values.\n", + "\n", + "### Example Four\n", + "\n", + "> We already saw an example of a discrete function in\n", + "Example [Two](#Example-Two) where the rate function $\\lambda$, depended on the temperature. If $\\lambda$ is not known by\n", + "some empirical formula, then it can only be determined by experimental\n", + "measurements at a discrete set of temperature values. In\n", + "Figure [Table](#lab1:fig:table), $\\lambda$ is given at a sequence of six\n", + "temperature points ($(T_i, \\lambda_i)$, for $i = 0, 1, \\dots, 5)$),\n", + "and so is an example of a *discrete function*.\n", + "\n", + "> The process of interpolation, which was introduced in\n", + "Example [Two](#Example-Two), will be considered in more\n", + "detail next.\n", + "\n", + "### Example Five\n", + "\n", + "\n", + "> Consider the two continuous functions\n", + "$$f(x)=x^3-5x \\;\\; {\\rm and} \\;\\; g(x)=x^{2/3} .$$\n", + "(In fact, $g(x)$ was the function used to generate the values $\\lambda(T)$ in\n", + "[Example Two](#Example-Two)).\n", + "\n", + "> The representation of functions using mathematical notation or graphs is\n", + "very convenient for mathematicians, where continuous functions make\n", + "sense. However, a computer has a limited storage capacity, and so it can\n", + "represent a function only at a finite number of discrete points $(x, y)$.\n", + "\n", + "> One question that arises immediately is: *What do we do if we have to\n", + "determine a value of the function which is not at one of the discrete\n", + "points?* The answer to this question is to use some form of\n", + "*interpolation* – namely to use an approximation procedure\n", + "to estimate values of the function at points between the known values.\n", + "\n", + "> For example, linear interpolation approximates the function at\n", + "intermediate points using the straight line segment joining the two\n", + "neighbouring discrete points. 
There are other types of interpolation\n", + "schemes that are more complicated, a few of which are:\n", + "\n", + ">- quadratic interpolation: every two sucessive points are joined by a\n", + " quadratic polynomial.\n", + "\n", + ">- cubic splines: each pair of points is joined by a cubic polynomial\n", + " so that the function values and first derivatives match at each\n", + " point.\n", + "\n", + ">- Fourier series: instead of polynomials, uses a sum of $\\sin nx$ and\n", + " $\\cos nx$ to approximate the function (Fourier series are useful in\n", + " analysis, as well as spectral methods).\n", + "\n", + ">- Chebyshev polynomials: another type of polynomial approximation\n", + " which is useful for spectral methods.\n", + "\n", + ">- …many others …\n", + "\n", + ">For details on any of these interpolation schemes, see a numerical\n", + "analysis text such as that by [Burden and Faires (1981)](#Ref-BurdenFaires)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "An application of linear interpolation to discrete versions of the\n", + "functions $f$ and $g$ is shown in Figure [f and g](#lab1:fig:discrete-f)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='discrete/f.png', width='60%') " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='discrete/g.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + ">
\n", + "Figure f and g: The functions $f$ and $g$ are known only at discrete points. The\n", + "function can be approximated at other values by linear interpolation,\n", + "where straight line segments are used to join successive points.\n", + "
\n", + "\n", + "> Depending on the function, or number of location of the points chosen,\n", + "the approximation may be more or less accurate. In\n", + "Figure [f and g](#lab1:fig:discrete-f), it is not clear which function is\n", + "approximated more accurately. In the graph of $f(x)$, the error seems to\n", + "be fairly small throughout. However, for the function $g(x)$, the error\n", + "is large near $x=0$, and then very small elsewhere. This problem of\n", + "*accuracy* of discrete approximations will come up again and again in\n", + "this course." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Demo: Interpolation\n", + "[lab1:demo:discrete]: <#Demo:-Interpolation> \"Interpolation Demo\"\n", + "Here is an interactive example demonstrating the use of interpolation (linear and cubic) in approximating functions. \n", + "\n", + "The next cell imports a module containing two python functions that interpolate the two algebraic functions, f and g ([Figure f and g](#lab1:fig:discrete-f)). You need to run this cells ([how to](#Running-Code-Cells)) to load them." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from numlabs.lab1 import interpolate as ip" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once you have loaded the module, you can call the interpolation routines as ip.interpol_f(pn) and ip.interpol_g(pn). pn is the number of points used the interpolation. Watch what changing pn does to the solutions." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ip.interpol_f(6)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "ip.interpol_g(6)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Interpolation Quiz \n", + "\n", + "\n", + " The accuracy of an approximation using\n", + "linear or cubic interpolation improves as the number of points is\n", + "increased. True or False?\n", + "\n", + "In the following, replace 'xxxx' by 'True', 'False', or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.interpolation_quiz(answer = 'xxx'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "When solving differential equations numerically, it is essential to\n", + "reduce the continuous problem to a discrete one. The basic idea is to\n", + "look for an approximate solution, which is defined at a finite number of\n", + "discrete points. This set of points is called a *grid*. Consider the\n", + "one-dimensional conduction problem of Example [One, Conduction](#Example-One),\n", + "which in its most general form reads\n", + "\n", + "
\n", + "(Conduction Equation)\n", + " $$\\frac{dT}{dt} = -\\lambda(T,t) \\, (T-T_a),$$\n", + "
\n", + "\n", + " with initial temperature $T(0)$.\n", + "\n", + "When we say we want to design a numerical procedure for solving this\n", + "initial value problem, what we want is a procedure for constructing a\n", + "sequence of approximations,\n", + "$$T_0, \\, T_1, \\, \\ldots, \\, T_i, \\, \\ldots,$$\n", + "defined at a set of\n", + "discrete $t$-points, \n", + "$$t_0\n", + "Figure Grid:
A grid of equally-spaced points, $t_i=t_0+i\\Delta t$, for $i=0,1,2,\\ldots$.\n", + "
\n", + "\n", + "This process of reducing a continuous problem to one in a finite number\n", + "of discrete unknowns is called *discretization*. The actual mechanics of\n", + "discretizing differential equations are introduced in the following\n", + "section." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Discretization Quiz \n", + "\n", + "\n", + "What phrase best describes \"discretization\"?\n", + "\n", + "**A** The development and analysis of methods for the\n", + "solution of mathematical problems on a computer.\n", + "\n", + "**B** The process of replacing continuous functions by\n", + "discrete values.\n", + "\n", + "**C** Employing the discrete Fourier transform to analyze the\n", + "stability of a numerical scheme.\n", + "\n", + "**D** The method by which one can reduce an initial value\n", + "problem to a set of discrete linear equations that can be solved on a\n", + "computer. \n", + "\n", + "In the following, replace 'x' by 'A', 'B', 'C', 'D' or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.discretization_quiz(answer = 'x'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Summary\n", + "\n", + "The basic idea in this section is that continuous functions can be\n", + "approximated by discrete ones, through the process of\n", + "*discretization*. In the course of looking at discrete\n", + "approximations in the interactive example, we introduced the idea of the\n", + "*accuracy* of an approximation, and showed that increasing the accuracy\n", + "of an approximation is not straightforward.\n", + "\n", + "We introduced the notation for approximate solutions to differential\n", + "equations on a grid of points. The mechanics of discretization as they\n", + "apply to differential equations, will be investigated further in the\n", + "remainder of this Lab, as well as in Lab Two." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Difference Approximations to the First Derivative\n", + "\n", + "\n", + "It only remains to write a discrete version of the differential equation ([Conduction Equation](#lab1:eq:conduction))\n", + "involving the approximations $T_i$. The way we do this is to approximate\n", + "the derivatives with *finite differences*. If this term is new to you,\n", + "then you can think of it as just another name for a concept you have\n", + "already seen in calculus. Remember the *definition of the derivative of\n", + "a function $y(t)$*, where $y^\\prime(t)$ is written as a limit of a\n", + "divided difference:\n", + "\n", + "
\n", + "(limit definition of derivative)\n", + "$$y^\\prime(t) = \\lim_{\\Delta t\\rightarrow 0} \\frac{y(t+\\Delta t)-y(t)}{\\Delta t}.\n", + " $$ \n", + "
\n", + "\n", + " We can apply the same idea to approximate\n", + "the derivative $dT/dt=T^\\prime$ in ([Conduction Equation](#lab1:eq:conduction)) by the *forward difference formula*,\n", + "using the discrete approximations, $T_i$:\n", + "\n", + "
\n", + "(Forward Difference Formula)\n", + "$$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_i}{\\Delta t}.$$\n", + "
\n", + "\n", + "### Example Six\n", + "\n", + "In order to understand the ability of the formula ([Forward Difference Formula](#lab1:eq:forward-diff)) to approximate the\n", + "derivative, let’s look at a specific example. Take the function\n", + "$y(x)=x^3-5x$, and apply the forward difference formula at the point\n", + "$x=1$. The function and its tangent line (the short line segment with\n", + "slope $y^\\prime(1)$) are displayed in Figure [Tangents](#lab1:fig:deriv)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='deriv/deriv.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + ">
\n", + "Figure Tangents: Plot of the function $y=x^3-5x$ and the forward difference\n", + "approximations to the derivative for various values of $\\Delta t$\n", + "
\n", + "\n", + "> Each of the remaining line segments represents the forward difference\n", + "approximation to the tangent line for different values of $\\Delta t$, which are\n", + "simply *the secant lines through the points $(t, y(t))$ and\n", + "$(t+\\Delta t, y(t+\\Delta t))$*. Notice that the approximation improves as $\\Delta t$ is\n", + "reduced. This motivates the idea that grid refinement improves the\n", + "accuracy of the discretization …but not always (as we will see in the\n", + "coming sections)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Investigation\n", + "\n", + "Investigate the use of the forward difference approximation of the derivative in the following interactive example. \n", + "\n", + "The next cell loads a python function that plots a function f(x) and approximates its derivative at $x=1$ based on a second x-point that you chose (xb). You need to run this cell ([how to](#Running-Code-Cells)) to load it." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "from numlabs.lab1 import derivative_approx as da" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Once you have loaded the function you can call it as da.plot_secant(xb) where xb the second point used to estimate the derivative (slope) at $x=1$. You can compare the slope of the estimate (straight line) to the slope of the function (blue curve)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "da.plot_secant(2.)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Forward Euler Method\n", + "\n", + "\n", + "\n", + "We can now write down a discrete version of our model ODE problem ([Conduction Equation](#lab1:eq:conduction)) at any\n", + "point $t_i$ by\n", + "\n", + "1. discretizing the derivative on the left hand side (for example,\n", + " using the forward difference approximation ([Forward Difference Formula](#lab1:eq:forward-diff));\n", + "\n", + "2. evaluating the right hand side function at the discrete point $t_i$.\n", + "\n", + "The discrete form of the problem is\n", + "\n", + "$$\\frac{T_{i+1}-T_i}{\\Delta t} = \\lambda(T_i,t_i) \\, (T_i-T_a),$$\n", + "or, after rearranging, \n", + "\n", + "
\n", + "$$T_{i+1} = T_i + \\Delta t \\, \\lambda(T_i,t_i) \\, (T_i-T_a).$$ \n", + "
\n", + "\n", + "This formula is called the\n", + "*Forward Euler method* (since it uses forward differences). Notice that\n", + "this formula relates each discrete solution value to the solution at the\n", + "preceding $t$-point. Consequently, if we are given an initial value\n", + "$T(0)$, then all subsequent values of the solution are easily computed.\n", + "\n", + "(**Note:** The forward Euler formula for the more general\n", + "first-order IVP in ([Model ODE](#lab1:eq:modelode')) is simply $y_{i+1} = y_i + \\Delta t f(y_i,t_i)$.)\n", + "\n", + "#### Example Seven\n", + "\n", + "\n", + "> Let us now turn to another example in atmospheric\n", + "physics to illustrate the use of the forward Euler method. Consider the\n", + "process of condensation and evaporation in a cloud. The *saturation\n", + "ratio*, $S$, is the ratio of the vapour pressure to the vapour pressure\n", + "of a plane surface of water at temperature $T$. $S$ varies in time\n", + "according to the \n", + "\n", + ">
\n", + "(saturation development equation)\n", + "$$\\frac{dS}{dt} = \\alpha S^2 + \\beta S + \\gamma,$$ \n", + "
\n", + "\n", + "> where $\\alpha$, $\\beta$ and $\\gamma$\n", + "are complicated (but constant) expressions involving the physical\n", + "parameters in the problem (and so we won’t reproduce them here).\n", + "\n", + "> What are some physically reasonable values of the parameters (other than\n", + "simply $\\alpha<0$ and $\\gamma>0$)?\n", + "\n", + "> [Chen (1994)](#Ref:Chen) gives a detailed derivation of the equation, which is a\n", + "non-linear, first order ODE (i.e. non-linear in the dependent variable $S$,\n", + "and it contains only a first derivative in the time variable). Chen also\n", + "derives an analytical solution to the problem which takes a couple pages\n", + "of messy algebra to come to. Rather than show these details, we would\n", + "like to use the forward Euler method in order to compute the solution\n", + "numerically, and as we will see, this is actually quite simple." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> Using the ([forward difference formula](#lab1:eq:forward-diff)), the discrete form of the ([saturation development equation](#lab1:eq:saturation)) is\n", + "$$S_{i+1} = S_i + \\Delta t \\left( \\alpha S_i^2 + \\beta S_i + \\gamma \\right).$$\n", + "Consider an initial saturation ratio of $0.98$,\n", + "and take parameter values $\\alpha=-1$, $\\beta=1$ and $\\gamma=1$. The\n", + "resulting solution, for various values of the time step $\\Delta t$,is plotted in\n", + "Figure [Saturation Time Series](#lab1:fig:saturation)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='feuler/sat2.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Saturation Time Series: Plot of the saturation ratio as a function of time using the Forward\n", + "Euler method. “nt” is the number of time steps.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> There are two things to notice here, both related to the importance of\n", + "the choice of time step $\\Delta t$:\n", + "\n", + "> - As $\\Delta t$ is reduced, the solution appears to *converge* to one solution\n", + " curve, which we would hope is the exact solution to the differential\n", + " equation. An important question to ask is: *When will the numerical\n", + " method converge to the exact solution as $\\Delta t$ is reduced?*\n", + "\n", + "> - If $\\Delta t$ is taken too large, however, the numerical solution breaks down.\n", + " In the above example, the oscillations that occur for the largest\n", + " time step (when $nt=6$) are a sign of *numerical\n", + " instability*. The differential problem is stable and exhibits\n", + " no such behaviour, but the numerical scheme we have used has\n", + " introduced an instability. An obvious question that arises is: *How\n", + " can we avoid introducing instabilities in a numerical scheme?*\n", + "\n", + "> Neither question has an obvious answer, and both issues will be\n", + "investigated further in Lab 2." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Other Approximations\n", + "\n", + "\n", + "Look again at the ([limit definition of derivative](#lab1:eq:defn-deriv)), and notice that an\n", + "equivalent expression for $T^\\prime$ is\n", + "\n", + "
\n", + "$$T^\\prime(t) = \\lim_{\\Delta t\\rightarrow 0} \\frac{T(t)-T(t-\\Delta t)}{\\Delta t}.$$ \n", + "
\n", + " \n", + "From this, we can derive the *backward\n", + "difference formula* for the first derivative,\n", + "\n", + "
\n", + "(Backward Difference Formula)\n", + "$$T^\\prime(t_i) \\approx \\frac{T_i-T_{i-1}}{\\Delta t},$$ \n", + "
\n", + "\n", + "and similarly the *centered difference formula* \n", + "\n", + "
\n", + "(Centered Difference Formula)\n", + "$$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_{i-1}}{2 \\Delta t}.$$\n", + "
\n", + "\n", + "The corresponding limit formulas are equivalent from a mathematical standpoint, **but the discrete formulas are not!** In particular, the accuracy and stability of numerical schemes derived from the three difference formulas: ([Forward Difference Formula](#lab1:eq:forward-diff')), ([Backward Difference Formula](#lab1:eq:backward-diff')) and ([Centered Difference Formula](#lab1:eq:centered-diff))\n", + " are quite different. More will said on this in the next Lab." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Summary\n", + "\n", + "This section introduces the use of the forward difference formula to\n", + "discretize the derivatives in a first order differential equation. The\n", + "resulting numerical scheme is called the forward Euler method. We also\n", + "introduced the backward and centered difference formulas for the first\n", + "derivative, which were also obtained from the definition of derivative.\n", + "\n", + "You saw how the choice of grid spacing affected the accuracy of the\n", + "solution, and were introduced to the concepts of convergence and\n", + "stability of a numerical scheme. More will be said about these topics in\n", + "the succeeding lab, as well as other methods for discretizing\n", + "derivatives." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Generalizations\n", + "\n", + "\n", + "The idea of discretization introduced in the previous section can be\n", + "generalized in several ways, some of which are:\n", + "\n", + "- problems with higher derivatives,\n", + "\n", + "- systems of ordinary differential equations,\n", + "\n", + "- boundary value problems, and\n", + "\n", + "- partial differential equations.\n", + "\n", + "### Higher Derivatives\n", + "\n", + "\n", + "Many problems in physics involve derivatives of second order and higher.\n", + "Discretization of these derivatives is no more difficult than the first\n", + "derivative in the previous section. The difference formula for the\n", + "second derivative, which will be derived in Lab \\#2, is given by\n", + "\n", + "
\n", + "(Centered Second Derivative)\n", + "$$y^{\\prime\\prime}(t_i) \\approx \n", + " \\frac{y(t_{i+1})-2y(t_i)+y(t_{i-1})}{(\\Delta t)^2} ,$$\n", + "
\n", + "\n", + "and is called the *second-order\n", + "centered difference formula* for the second derivative (“centered”,\n", + "because it involves the three points centered about $t_i$, and\n", + "“second-order” for reasons we will see in the next Lab). We will apply\n", + "this formula in the following example …" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Eight\n", + "\n", + "[lab1:exm:balloon]: <#Example-Eight>\n", + "\n", + "A weather balloon, filled with helium, climbs\n", + "vertically until it reaches its level of neutral buoyancy, at which\n", + "point it begins to oscillate about this equilibrium height. We can\n", + "derive a DE describing the motion of the balloon by applying Newton’s\n", + "second law: \n", + "$$mass \\; \\times \\; acceleration = force$$\n", + "$$m \\frac{d^2 y}{d t^2} = \n", + " \\underbrace{- \\beta \\frac{dy}{dt}}_{\\mbox{air resistance}} \n", + " \\underbrace{- \\gamma y}_{\\mbox{buoyant force}},$$ where\n", + "\n", + "- $y(t)$ is the displacement of the balloon vertically from its\n", + " equilibrium level, $y=0$;\n", + "\n", + "- $m$ is the mass of the balloon and payload;\n", + "\n", + "- the oscillations are assumed small, so that we can assume a linear\n", + " functional form for the buoyant force, $-\\gamma y$.\n", + "\n", + "This problem also requires initial values for both the initial\n", + "displacement and velocity:\n", + "$$y(0) = y_0 \\;\\; \\mbox{and} \\;\\; \\frac{dy}{dt}(0) = v_0.$$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='balloon/balloon.png', width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Weather Balloon: A weather balloon oscillating about its level of neutral\n", + "buoyancy.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Problem One\n", + "\n", + "\n", + "a\\) Using the centered difference formula ([Centered Second Derivative](#lab1:eq:centered-diff2)) for the second derivative, and\n", + " the forward difference formula ([Forward Difference Formula](#lab1:eq:forward-diff')) for the first derivative at the point\n", + " $t_i,$ derive a difference scheme for $y_{i+1}$, the vertical\n", + " displacement of the weather balloon.\n", + "\n", + "b\\) What is the difference between this scheme and the forward Euler\n", + " scheme from [Example Seven](#Example-Seven), related to the initial\n", + " conditions? (**Hint:** think about starting values …)\n", + "\n", + "c\\) Given the initial values above, explain how to start the numerical\n", + " integration.\n", + " \n", + "*Note*: There are a number of problems in the text of each lab. See the syllabus for which problems you are assigned as part of your course. That is, you don't have to do them all!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Systems of First-order ODE's\n", + "\n", + "\n", + "Discretization extends in a simple way to first-order systems of ODE’s,\n", + "which arise in many problems, as we will see in some of the later labs.\n", + "For now, though, we can see:\n", + "\n", + "#### Example 9\n", + "\n", + "\n", + "The second order DE for the weather balloon problem from\n", + "Example [Eight](#Example-Eight) can be rewritten by letting $u=dy/dt$. Then,\n", + "\n", + "\\begin{align}\n", + "\\frac{dy}{dt} &= u\\\\\n", + "\\frac{du}{dt} &= -\\frac{\\beta}{m} u - \\frac{\\gamma}{m} y\n", + "\\end{align}\n", + "\n", + "which is a\n", + "system of first order ODE’s in $u$ and $y$. This set of differential\n", + "equations can be discretized to obtain another numerical scheme for the\n", + "weather balloon problem." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Problem Two\n", + "\n", + "\n", + "a\\) Derive a difference scheme for the problem based on the above system\n", + " of two ODE’s using the forward difference formula for the first\n", + " derivative.\n", + "\n", + "b\\) By combining the discretized equations into one equation for y, show\n", + " that the difference between this scheme and the scheme obtained in\n", + " problem one is the difference formula for the second derivative." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Boundary Value Problem\n", + "\n", + "[lab1:sec:bvp]: (#6.3-Boundary-Value-Problems)\n", + "\n", + "\n", + "So far, we’ve been dealing with *initial value problems* or *IVP’s*\n", + "(such as the problem of heat conduction in a rock in\n", + "Example [One](#Example-One)): a differential equation is given for an\n", + "unknown function, along with its initial value. There is another class\n", + "of problems, called *boundary value problems* (or *BVP’s*),\n", + "where the independent variables are restricted to a *closed domain* (as\n", + "opposed to an *open domain*) and the solution (or its derivative) is\n", + "specified at every point along the boundary of the domain. Contrast this\n", + "to initial value problems, where the solution is *not given* at the end\n", + "time.\n", + "\n", + "A simple example of a boundary value problem is the steady state heat\n", + "diffusion equation problem for the rod in\n", + "Example [Three](#Example-Three). 
By *steady state*, we mean simply that\n", + "the rod has reached a state where its temperature no longer changes in\n", + "time; that is, $\\partial u/\\partial t = 0$. The corresponding problem\n", + "has a temperature, $u(x)$, that depends on position only, and obeys the\n", + "following equation and boundary conditions:\n", + "$$u_{xx} = 0,$$\n", + "$$u(0) = u(1) = 0.$$\n", + "This problem is known as an *initial-boundary value problem*(or *IBVP*),\n", + "since it has a mix of both initial and boundary values.\n", + "\n", + "The structure of initial and boundary value problems are quite different\n", + "mathematically: IVP’s involve a time variable which is unknown at the\n", + "end time of the integration (and hence the solution is known on an open\n", + "domain or interval), whereas BVP’s specify the solution value on a\n", + "closed domain or interval. The numerical methods corresponding to these\n", + "problems are also quite different, and this can be best illustrated by\n", + "an example.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example 10\n", + "\n", + "[lab1:exm:steady-diffusion]: (#Example-Ten)\n", + "\n", + "We can discretize the steady state diffusion\n", + "equation using the centered difference formula for the second\n", + "derivative to obtain: $$u_{i+1}-2u_i+u_{i-1} = 0$$ where\n", + "$u_i\\approx u(i/N)$ and $i=0,1,\\ldots,N$ (and the factor of\n", + "$(\\Delta x)^2 = {1}/{N^2}$ has been multiplied out). The boundary values\n", + "$u_0$ and $u_N$ are both known to be zero, so the above expression\n", + "represents a system of $N-1$ equations in $N-1$ unknown values $u_i$\n", + "that must be solved for *simultaneously*. The solution of such systems\n", + "of linear equations will be covered in more detail in Lab \\#3 in fact, this\n", + "equation forms the basis for a Problem in the Linear Algebra Lab.\n", + "\n", + "Compare this to the initial value problems discretized using the forward\n", + "Euler method, where the resulting numerical scheme is a step-by-step,\n", + "marching process (that is, the solution at one grid point can be\n", + "computed using an explicit formula using only the value at the previous\n", + "grid point).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Partial Differential Equations\n", + "\n", + "\n", + "So far, the examples have been confined to ordinary differential\n", + "equations, but the procedure we’ve set out for ODE’s extends with only\n", + "minor modifications to problems involving PDE’s.\n", + "\n", + "#### Example 11\n", + "\n", + "To illustrate the process, let us go back to the heat diffusion problem\n", + "from Example [Three](#Example-Three), an initial-boundary value problem\n", + "in the temperature $u(x,t)$: $$u_{t} = \\alpha^2 u_{xx},$$ along with\n", + "initial values $$u(x,0) = u_0(x),$$ and boundary values\n", + "$$u(0,t) = u(1,t) = 0.$$\n", + "\n", + "As for ODE’s, the steps in the process of discretization remain the\n", + "same:\n", + "\n", + "1) First, replace the independent variables by discrete values\n", + " $$x_i = i \\Delta x = \\frac{i}{M}, \\;\\; \\mbox{where $i=0, 1,\n", + " \\ldots, M$, and}$$\n", + " $$t_n = n \\Delta t, \\;\\; \\mbox{where $n=0, 1,\n", + " \\ldots$}$$ In this example, the set of discrete points define\n", + " a two-dimensional grid of points, as pictured in\n", + " Figure [PDE Grid](#lab1:fig:pde-grid).\n", + "\n", + " " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": 
The grid figure is displayed in the notebook with `Image(filename='pdes/pde-grid.png', width='40%')`.

Figure PDE Grid: The computational grid for the heat diffusion problem, with discrete points $(x_i,t_n)$.
\n", + "\n", + "2) Replace the dependent variables (in this example, just the\n", + " temperature $u(x,t)$) with approximations defined at the grid\n", + " points: $$U_i^n \\approx u(x_i,t_n).$$ The boundary and initial\n", + " values for the discrete temperatures can then be written in terms of\n", + " the given information.\n", + "\n", + "3) Approximate all of the derivatives appearing in the problem with\n", + " finite difference approximations. If we use the centered difference\n", + " approximation ([Centered Second Derivative](#lab1:eq:centered-diff2)) for the second derivative in $x$, and\n", + " the forward difference formula ([Forward Difference Formula](#lab1:eq:forward-diff')) for the time derivative (while evaluating the\n", + " terms on the right hand side at the previous time level), we obtain\n", + " the following numerical scheme:\n", + " \\begin{equation} \n", + " U_i^{n+1} = U_i^n + \\frac{\\alpha^2 \\Delta t}{(\\Delta x)^2} \\left(\n", + " U_{i+1}^n - 2 U_i^n + U_{i-1}^n \\right)\n", + " \\end{equation}\n", + "\n", + " Given the initial values, $U_i^0=u_0(x_i)$, and boundary values\n", + " $U_0^n=U_M^n=0$, this difference formula allows us to compute values of\n", + " temperature at any time, based on values at the previous time.\n", + "\n", + " There are, of course, other ways of discretizing this problem, but the\n", + " above is one of the simplest." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes\n", + "\n", + "\n", + "\n", + "### Solution to the Heat Conduction Equation\n", + "\n", + "\n", + "\n", + "In Example [One](#Example-One), we had the equation\n", + "$$\\frac{dT}{dt} = -\\lambda (T-T_a),$$\n", + "subject to the initial condition\n", + "$T(0)$. This equation can be solved by *separation of variables*,\n", + "whereby all expressions involving the independent variable $t$ are moved\n", + "to the right hand side, and all those involving the dependent variable\n", + "$T$ are moved to the left $$\\frac{dT}{T-T_a} = -\\lambda dt.$$\n", + "The resulting expression is integrated from time $0$ to $t$\n", + "$$\\int_{T(0)}^{T(t)} \\frac{dS}{S-T_a} = -\\int_0^t\\lambda ds,$$\n", + "(where $s$ and $S$ are dummy variables of integration), which then leads to the\n", + "relationship \n", + "$$\\ln \\left( T(t)-T_a)-\\ln(T(0)-T_a \\right) = -\\lambda t,$$\n", + "or, after exponentiating both sides and rearranging,\n", + "$$T(t) = T_a + (T(0)-T_a)e^{-\\lambda t},$$\n", + "which is exactly the [Conduction Solution](#lab1:eq:conduction-soln) equation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References\n", + "\n", + "\n", + "
\n", + "Boyce, W. E. and R. C. DiPrima, 1986: Elementary Differential Equations and Boundary Value Problems. John Wiley & Sons, New York, NY, 4th edition.\n", + "
\n", + "
\n", + "Burden, R. L. and J. D. Faires, 1981: Numerical Analysis. PWS-Kent, Boston, 4th edition.\n", + "
\n", + "
\n", + "Chen, J.-P., 1994: Predictions of saturation ratio for cloud microphysical models. Journal of the Atmospheric\n", + "Sciences, 51(10), 1332–1338.\n", + "
\n", + "Garcia, A. L., 1994: Numerical Methods for Physics. Prentice-Hall, Englewood Cliffs, NJ.\n", + "
\n", + "Strang, G., 1986: Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Glossary\n", + "\n", + "\n", + "**backward difference discretization:** used to estimate a derivative – uses the current points and points with smaller independent variable.\n", + "\n", + "**boundary value problem:** a differential equation (or set of differential equations) along with boundary values for the unknown functions. Abbreviated BVP.\n", + "\n", + "**BVP:** see *boundary value problem*\n", + "\n", + "**centre difference discretization:** used to estimate a derivative – uses a discretization symmetric (in\n", + "independent variable) around the current point.\n", + "\n", + "**closed domain:** a domain for which the value of the dependent variables is known on the boundary of the domain.\n", + "\n", + "**converge:** as the discretization step (eg. ∆t) is reduced the solutions generated approach one solution curve.\n", + "\n", + "**DE:** see *differential equation*\n", + "\n", + "**dependent variable:** a variable which is a (possibly unknown) function of the independent variables in a problem; for example, in a fluid the pressure can be thought of as a dependent variable, which depends on the time t and position (x, y, z).\n", + "\n", + "**differential equation:** an equation involving derivatives. Abbreviated DE.\n", + "\n", + "**discretization:** when referring to DE’s, it is the process whereby the independent variables are replaced by a *grid* of discrete points; the dependent variables are replaced by approximations at the grid points; and the derivatives appearing in the problem are replaced by a *finite difference* approximation. The discretization process replaces the DE (or DE’s) with an algebraic equation or finite system of algebraic equations which can be solved on a computer.\n", + "\n", + "**finite difference:** an approximation of the derivative of a function by a difference quotient involving values of the function at discrete points. The simplest method of deriving finite difference formulae is using Taylor series.\n", + "\n", + "**first order differential equation:** a differential equation involving only first derivatives of the unknown functions.\n", + "\n", + "**forward difference discretization:** used to calculate a derivative – uses the current points and points with larger independent variable.\n", + "\n", + "**grid:** when referring to discretization of a DE, a grid is a set of discrete values of the independent variables, defining a *mesh* or array of points, at which the solution is approximated.\n", + "\n", + "**independent variable:** a variable that does not depend on other quantities (typical examples are time, position, etc.)\n", + "\n", + "**initial value problem:** a differential equation (or set of differential equations) along with initial values for the unknown functions. Abbreviated IVP.\n", + "\n", + "**interpolation:** a method for estimating the value of a function at points intermediate to those where its values are known.\n", + "\n", + "**IVP:** initial value problem\n", + "\n", + "**linear:** pertaining to a function or expression in which the quantities appear in a linear combination. 
If $x_i$ are the variable quantities, and $c_i$ are constants, then any linear function of the $x_i$ can be written in the form $c_0 + \\sum_i c_i \\cdot x_i$.\n", + "\n", + "**linear interpolation:** interpolation using straight lines between the known points\n", + "\n", + "**Navier-Stokes equations:** the system of non-linear PDE’s that describe the time evolution of the flow of\n", + "a fluid.\n", + "\n", + "**non-linear:** pertaining to a function or expression in which the quantities appear in a non-linear combination.\n", + "\n", + "**numerical instability:** although the continuous differential equation has a finite solution, the numerical solution grows without bound as the numerical interation proceeds.\n", + "\n", + "**ODE:** see *ordinary differential equation*\n", + "\n", + "**open domain:** a domain for which the value of one or more dependent variables is unknown on a portion\n", + "of the boundary of the domain or a domain for which one boundary (say time very large) is not specified.\n", + "\n", + "**ordinary differential equation:** a differential equation where the derivatives appear only with respect to one independent variable. Abbreviated ODE.\n", + "\n", + "**partial differential equation:** a differential equation where derivatives appear with respect to more than one independent variable. Abbreviated PDE.\n", + "\n", + "**PDE:** see *partial differential equation*\n", + "\n", + "**second order differential equation:** a differential equation involving only first and second derivatives of the unknown functions.\n", + "\n", + "**separation of variables:** a technique whereby a function with several dependent variables is written as a product of several functions, each of which depends on only one of the dependent variables. For example, a function of three unknowns, u(x, y, t), might be written as u(x, y, t) = X(x) · Y (y) · T (t)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": { + "height": "512px", + "width": "252px" + }, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": true, + "toc_position": {}, + "toc_section_display": "block", + "toc_window_display": false + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab10/01-lab10.html b/notebooks/lab10/01-lab10.html new file mode 100644 index 0000000..eebaea1 --- /dev/null +++ b/notebooks/lab10/01-lab10.html @@ -0,0 +1,912 @@ + + + + + + + + <no title> — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+
{
+
“cells”: [
+
{

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“# Laboratory 10: Numerical Advection Schemes #n”, +“n”, +“Carmen Guo”

+
+

]

+
+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## Learning Goals ##n”, +“n”, +“- Explain why modelling advection is particularly difficultn”, +“- Explain and contrast diffusion and dispersionn”, +“- Define positive definite”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## List of Problems ##n”, +“n”, +“- [Problem One](#Problem-One)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“import contextn”, +“from IPython.display import Imagen”, +“import matplotlib.pyplot as pltn”, +“# make the plots happen inlinen”, +“%matplotlib inlinen”, +“# import the numpy array handling libraryn”, +“import numpy as npn”, +“# import the advection code from the numlabs directoryn”, +“import numlabs.lab10.advection_funs as afs”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## Advection Process ##n”, +“n”, +“The word advection means ‘transfer of heat through the horizontal motionn”, +“of a flow’. More generally, we will consider a flow of somen”, +“non-diffusive quantity. For example, consider the wind as a flow and then”, +“water vapour in the air as the non-diffusive quantity. Suppose the windn”, +“is travelling in the positive x direction, and we are considering then”, +“vapour concentration from $x=1$ to $x=80$.n”, +“n”, +“Assume that initially the distribution curve of the vapour is Gaussiann”, +“([Figure Initial Distribution](#fig:initial)). Ideally, the water droplets move at then”, +“same speed as that of the air, so the distribution curve retains itsn”, +“initial shape as it travels along the x-axis. This process is describedn”, +“by the following PDE: n”, +“n”, +“<a name=’eqn:advection’></a>n”, +“(Advection Eqn)n”, +“$$\frac{\partial c}{\partial t} + u \frac{\partial c}{\partial x} = 0$$ n”, +“where $c$ is the concentration of the watern”, +“vapour, and $u$ is the speed of the wind (assuming the wind is blowingn”, +“at constant speed).”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“Image(filename=’images/initial.png’,width=’60%’)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“<a name=’fig:initial’></a>n”, +“Figure Initial Distribution: This is the initial distribution of water vapour concentration.n”, +“n”, +“As you will see in the upcoming examples, it is not easy to obtain an”, +“satisfactory numerical solution to this PDE.”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## Simple Solution Using Centred Differencing Scheme ##n”, +“n”, +“Let’s start off simple and solve the PDE ([Advection Eqn](#eqn:advection)) usingn”, +“centred differences, i.e., by expanding the time and spatial derivativesn”, +“in the following way:n”, +“n”, +“<a name=’eqn:centered’></a>n”, +“(Centered Difference Scheme)n”, +“$$\frac{\partial c}{\partial t}(x=m dx, t=n dt) =\frac {c(x=m dx, t=(n+1) dt) - c(x=m dx, t=(n-1) dt)}{2 dt}$$n”, +“$$\frac{\partial c}{\partial x}(x=m dx, t=n dt)=\frac {c(x=(m+1) dx, t=n dt) - c(x=(m-1) dx, t=n dt)}{2 dx}$$n”, +“n”, +“where $m=2, \ldots, 79$, and $n=2, \ldots$. Substitution of then”, +“equations into the PDE yields the following recurrence relation:n”, +“$$c(m, n+1)= c(m, n-1) - u \frac{dt}{dx} (c(m+1, n) - c(m-1, n))$$n”, +“n”, +“The boundary conditions are : $$c(x=1 dx)= c(x=79 dx)$$n”, +“$$c(x=80 dx)= c(x=2 dx)$$n”, +“n”, +“The initial conditions are:n”, +“$$c(x=n dx) = \exp( - \alpha (n dx - \hbox{offset})^2)$$ n”, +“where $\hbox{offset}$ is the location of the peak of the distribution curve.n”, +“We don’t want the peak to be located near $x=0$ due to the boundaryn”, +“conditions.n”, +“n”, +“Now we need the values of $c$ at $t= 1 dt$ to calculate $c$ atn”, +“$t= 2 dt$, and we will use the Forward Euler scheme to approximate $c$n”, +“at $t= 1 dt$. n”, +“So n”, +“$$\frac{\partial c}{\partial t}(m, 0)= \frac {c(m, 1) - c(m, 0)}{dt}$$”

+
+

]
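The centred-difference recurrence just described is compact enough to sketch directly. The following is a minimal illustration only, not the `advection` function in advection_funs.py that the lab actually uses; the grid size, Courant number, and Gaussian parameters are assumptions chosen for demonstration.

```python
import numpy as np

nx, nsteps = 80, 10          # number of grid points and time steps (illustrative)
u, dx = 1.0, 1.0             # wind speed and grid spacing (assumed values)
dt = 0.1 * dx / u            # small Courant number chosen for this sketch
alpha, offset = 0.01, 40.0   # width parameter and peak location of the initial Gaussian

x = np.arange(1, nx + 1) * dx
c_old = np.exp(-alpha * (x - offset) ** 2)      # Gaussian initial condition

# first step: forward Euler in time, centred in space
c_now = c_old.copy()
c_now[1:-1] = c_old[1:-1] - u * dt / (2 * dx) * (c_old[2:] - c_old[:-2])

for n in range(1, nsteps):
    c_new = c_now.copy()
    # centred in both time and space for interior points:
    # c(m, n+1) = c(m, n-1) - u*dt/dx * (c(m+1, n) - c(m-1, n))
    c_new[1:-1] = c_old[1:-1] - u * dt / dx * (c_now[2:] - c_now[:-2])
    # wrap-around boundary conditions as stated above
    c_new[0] = c_new[-2]
    c_new[-1] = c_new[1]
    c_old, c_now = c_now, c_new
```

Running this for even a few steps reproduces the behaviour described below: small negative ripples appear upstream of the Gaussian.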

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“Substitution of the equations into the PDE yieldsn”, +“$$c(m, 1) = c(m, 0) - u \frac{dt}{2 dx}(c(m+1, 0) - c(m-1, 0))$$ n”, +“where $m=2, \ldots, 79$. n”, +“The end points at $t= 1 dt$ can be found using the boundary conditions.n”, +“n”, +“The function that computes the numerical solution using this scheme is inn”, +“advection_funs.py. It is a python function advection(timesteps), which takes inn”, +“the number of time steps as input and plots the distribution curve at 20 time steps.n”, +“n”, +“We can see the problem with this scheme just by running the functionn”, +“with 10 time steps ([Figure Distribution with Centered Scheme](#fig:centered)). Following is a plot ofn”, +“the distribution curve at the last time step.”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“Image(filename=’images/centered.png’,width=’60%’)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“<a name=’fig:centered’></a>n”, +“Figure Distribution with Centered Scheme: This is the distribution after 10 time steps approximated using the centred differencing scheme.n”, +“n”, +“Comparing this curve with the initial staten”, +“([Figure Initial Distribution](#fig:initial)), we can see that ripples are produced ton”, +“the left of the curve which means negative values are generated. Butn”, +“water vapour does not have negative concentrations. The centredn”, +“differencing scheme does not work well for the advection process becausen”, +“it is not positive definite, i.e., it generates negative values whichn”, +“are impossible in real life.”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“Another example of using the same scheme”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“afs.advection(10, lab_example=True)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## Numerical Solution Using Upstream Method ##n”, +“n”, +“n”, +“Let’s see what is wrong with our simple centred differencing scheme. Wen”, +“used centred differences to compute the time and spatial derivativesn”, +“([Centered Difference Scheme](#eqn:centered)). In other words, $c(x=m dx)$ depends onn”, +“$c(x=(m-1) dx)$ and $c(x=(m+1) dx)$, and $c(t=n dt)$ depends onn”, +“$c(t=(n-1) dt)$ and $c(t=(n+1) dt)$. But we know the wind is moving inn”, +“the positive x direction, so $c(x=m dx)$ should not depend onn”, +“$c(x=(m+1) dx)$. Therefore, we will change the centred differencingn”, +“scheme to backward differencing scheme. In other words, we will alwaysn”, +“be looking ‘upstream’ in the approximation.n”, +“n”, +“If we use backward differences to approximate the spatial derivative butn”, +“continue to use centred differences to approximate the time derivative,n”, +“we will end up with an unstable scheme. Thus, we will use backwardn”, +“differences for both time and spatial derivatives. Now the time andn”, +“spatial derivatives are given by:n”, +“n”, +“<a name=’eqn:upstream’></a>n”, +“(Upstream Scheme)n”, +“$$\frac{\partial c}{\partial t}(x=m dx, t=n dt)=\frac {c(x=m dx, t=n dt) - c(x=m dx, t=(n-1) dt)}{dt}$$n”, +“$$\frac{\partial c}{\partial x}(x=m dx, t=n dt)=\frac {c(x=m dx, t=n dt) - c(x=(m-1) dx, t=n dt)}{dx}$$n”, +“n”, +“Substitution of the equations into the PDE yieldsn”, +“$$c(m, n+1)=c(m, n)- u \frac{dt}{dx} (c(m, n) - c(m-1, n))$$n”, +“n”, +“The boundary conditions and the initial conditions are the same as inn”, +“the centred differencing scheme. This time we compute $c$ at $t= 1 dt$n”, +“using backward differences just as with all subsequent time steps.n”, +“n”, +“The function that computes the solution using this scheme is inn”, +“advection_funs.py. It is a python function advection2(timesteps), which takesn”, +“in the number of time steps as input and plots the distribution curve at 20 time steps.n”, +“n”, +“Although this scheme is positive definite and conservative (the arean”, +“under the curve is the same as in the initial state), it introduces an”, +“new problem — diffusion. As the time step increases, you can see thatn”, +“the curve becomes wider and lower, i.e., it diffuses quicklyn”, +“([Figure Upstream Distribution](#fig:upstream)).”

+
+

]
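As a rough sketch of the upstream update (again separate from the `advection2` function the lab provides), the recurrence is a one-line stencil; the parameter values and array names here are illustrative assumptions only.

```python
import numpy as np

nx, nsteps = 80, 60
u, dx = 1.0, 1.0
dt = 0.1 * dx / u                      # Courant number 0.1, chosen only for illustration
x = np.arange(1, nx + 1) * dx
c = np.exp(-0.01 * (x - 40.0) ** 2)    # Gaussian initial condition

for n in range(nsteps):
    c_new = c.copy()
    # backward (upstream) difference in space and backward difference in time:
    # c(m, n+1) = c(m, n) - u*dt/dx * (c(m, n) - c(m-1, n))
    c_new[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])
    c_new[0] = c_new[-2]               # same wrap-around boundary treatment as before
    c = c_new
```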

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“Image(filename=’images/upstream.png’,width=’60%’)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“<a name=’fig:upstream’></a>n”, +“Figure Upstream Distribution:n”, +“This is the distribution after 60 time steps approximated using then”, +“upstream method.n”, +“n”, +“But ideally, the curve should retain its original shape as the waven”, +“travels along the x-axis, so the upstream method is still not goodn”, +“enough for the advection problem. In the next section, we will presentn”, +“another method that does a better job.”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“Another example using the same scheme”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“afs.advection2(60, lab_example=True)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## A Better Solution ##n”, +“n”, +“In previous sections, we were concerned with values at grid points only,n”, +“ie, values of $c$ at $x= 1dx, 2dx, \ldots$. But in this section, we willn”, +“also consider grid boxes each containing a grid point in the centre. Forn”, +“each grid point $j$, the left boundary of the grid box containing $j$ isn”, +“indexed as $j- 1/2$, and the right boundary as $j+ 1/2$. The schemen”, +“presented here was developed by Andreas Bott ([Bott, 1989](#Ref:Bott)).n”, +“n”, +“The PDE ([Advection Eqn](#eqn:advection)) is rewritten as :n”, +“n”, +“<a name=’eqn:FluxForm’></a>n”, +“(Flux Form Eqn)n”, +“$$\frac{\partial c}{\partial t} + \frac{\partial uc}{\partial x} = 0$$ n”, +“where $F= uc$ gives the flux of water vapour.n”, +“n”, +“Expand the time derivative using forward differences and the spatialn”, +“derivative as follows:n”, +“$$\frac{\partial F}{\partial x}(x=j dx) = \frac {F(x= (j+1/2) dx) - F(x= (j-1/2) dx)}{dx}$$n”, +“where $F(x= j+1/2)$ gives the flux through the right boundary of then”, +“grid box $j$. For simplicity, we use the notation $F(x= j+1/2)$ forn”, +“$F(x= j+1/2, n)$, ie, the flux through the right boundary of the gridn”, +“box j after $n$ time steps.n”, +“n”, +“Substituting the expanded derivatives into the PDE, we obtain then”, +“following recurrence formula ($c(m, n)= c(x= m dx, t= n dt)$):n”, +“$$c(m, n+1)= c(m, n) - \frac {dt} {dx}(F(m+1/2, n)-F(m-1/2, n))$$n”, +“n”, +“Since flux is defined as the amount flowing through per unit time, wen”, +“need to calculate the portion of the distribution curve in each grid boxn”, +“that passes the right boundary after $dt$”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“Image(filename=’images/flux.png’,width=’60%’)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“<a name=’fig:flux’></a>n”, +“Figure Amount Leaving Grid:n”, +“After $dt$, the shaded area will be past the right boundary of then”, +“grid box $j$.n”, +“n”, +“As ([Figure Amount Leaving Grid](#fig:flux)) shows, the distribution curve in grid box $j$n”, +“has travelled a distance of ${u dt}$ after each time step; in othern”, +“words, the curve has moved $u dt/dx$ of a grid space within a time unit.n”, +“The shaded region is the portion of the vapour that passes the rightn”, +“boundary of grid box $j$ within $dt$. We can use integration to find outn”, +“the area of the shaded region and then divide the result by $dt$ to getn”, +“$F(j+1/2)$.n”, +“n”, +“Since we only know $c$ at the grid point $j$, we are going to usen”, +“polynomial interpolation techniques to approximate $c$ at other pointsn”, +“in the grid box. We define $c$ in grid box $j$ with a polynomial ofn”, +“order $l$ as follows:n”, +“n”, +“$$c _{j, \ell}(x^\prime) = \sum _{k=0} ^{\ell} a _{j, k} x ^{\prime k}$$ n”, +“where $x^\prime = (x- x_j)/dx$ and $-1/2 \le x^\prime \le \ell/2$. n”, +“n”, +“Then”, +“coefficients $a _{j, k}$ are obtained by interpolating the curve withn”, +“the aid of neighbouring grid points. We will skip the detail of then”, +“interpolation process. Values of $a _{j, k}$ for $\ell=0, 1, \ldots, 4$n”, +“have been computed and are summarised in Tablesn”, +“[Table $\ell=0$](#tab:ell0), [Table $\ell=1$](#tab:ell1), [Table $\ell=2$](#tab:ell2), [Table $\ell=3$](#tab:ell3) andn”, +“[Table $\ell=4$](#tab:ell4),”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

<a name='tab:ell0'></a>
**Table $\ell = 0$:**

| $k$ | $a_{j,k}$ |
| :-: | :-: |
| $k=0$ | $a_{j,0}=c_j$ |

<a name='tab:ell1'></a>
**Table $\ell = 1$:** two representations

| $k$ | $a_{j,k}$ |
| :-: | :-: |
| $k=0$ | $a_{j,0}= c_j$ |
| $k=1$ | $a_{j, 1}= c_{j+1} - c_j$ |

| $k$ | $a_{j,k}$ |
| :-: | :-: |
| $k=0$ | $a_{j,0}= c_j$ |
| $k=1$ | $a_{j, 1}= c_j-c_{j-1}$ |

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

<a name='tab:ell2'></a>
**Table $\ell = 2$:**

| $k$ | $a_{j,k}$ |
| :-: | :-: |
| $k=0$ | $a_{j,0}= c_j$ |
| $k=1$ | $a_{j, 1}=\frac {1}{2}(c_{j+1}-c_{j-1})$ |
| $k=2$ | $a_{j, 2}=\frac {1}{2}(c_{j+1}-2c_j+c_{j-1})$ |

<a name='tab:ell3'></a>
**Table $\ell = 3$:** two representations

| $k$ | $a_{j,k}$ |
| :-: | :-: |
| $k=0$ | $a_{j,0}= c_j$ |
| $k=1$ | $a_{j, 1}=\frac {1}{6}(-c_{j+2}+6c_{j+1}-3c_j-2c_{j-1})$ |
| $k=2$ | $a_{j, 2}=\frac {1}{2}(c_{j+1}-2c_j+c_{j-1})$ |
| $k=3$ | $a_{j, 3}=\frac {1}{6}(c_{j+2}-3c_{j+1}+3c_j-c_{j-1})$ |

| $k$ | $a_{j,k}$ |
| :-: | :-: |
| $k=0$ | $a_{j,0}= c_j$ |
| $k=1$ | $a_{j, 1}=\frac {1}{6}(2c_{j+1}+3c_j-6c_{j-1}+c_{j-2})$ |
| $k=2$ | $a_{j, 2}=\frac {1}{2}(c_{j+1}-2c_j+c_{j-1})$ |
| $k=3$ | $a_{j, 3}=\frac {1}{6}(c_{j+1}-3c_j+3c_{j-1}-c_{j-2})$ |

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

<a name='tab:ell4'></a>
**Table $\ell = 4$:**

| $k$ | $a_{j,k}$ |
| :-: | :-: |
| $k=0$ | $a_{j,0}= c_j$ |
| $k=1$ | $a_{j, 1}=\frac {1}{12}(-c_{j+2}+8c_{j+1}-8c_{j-1}+c_{j-2})$ |
| $k=2$ | $a_{j, 2}=\frac {1}{24}(-c_{j+2}+16c_{j+1}-30c_j+16c_{j-1}-c_{j-2})$ |
| $k=3$ | $a_{j, 3}=\frac {1}{12}(c_{j+2}-2c_{j+1}+2c_{j-1}-c_{j-2})$ |
| $k=4$ | $a_{j, 4}=\frac {1}{24}(c_{j+2}-4c_{j+1}+6c_j-4c_{j-1}+c_{j-2})$ |

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“Note that for even order polynomials, an odd number of $c$ valuesn”, +“including the grid point $j$ are needed to calculate the coefficientsn”, +“$a_{j, k}$. This means the same number of grid points to the left andn”, +“right of $x_j$ are used in the calculation of $a_{j, k}$. If on then”, +“other hand, we choose odd order polynomials, there will be one extran”, +“point used to either side of $x_j$, thus resulting in 2 differentn”, +“representations of $a_{j, k}$. This is why there are 2 sets ofn”, +“$a_{j, k}$ for $\ell=1, 3$ in the table. Decision as to which set is to ben”, +“used must be made according to specific conditions of the calculation.n”, +“n”, +“If we choose $\ell=0$, we will end up with the upstream method. In othern”, +“words, the upstream method assumes that $c$ is constant in each gridn”, +“box. This poor representation of $c$ results in strong numericaln”, +“diffusion. Experiments have shown that generally if we use higher ordern”, +“polynomials (where $\ell \le 4$), we can significantly suppress numericaln”, +“diffusion.n”, +“n”, +“Now we define $I_{j+1/2}$ as the shaded area in grid box $j$ in ([Figure Amount Leaving Grid](#fig:flux)): n”, +“n”, +“<a name=’eq:area’></a>n”, +“(Flux Leaving Eqn)n”, +“$$n”, +“ I_{j+1/2} = \int _{1/2 - \frac{udt}{dx}}^{1/2} c_j(x^\prime) dx^\prime $$n”, +“ $$ = \sum _{k=0}^{l} \frac {a_{j, k}}{(k+1) 2^{k+1}} \left[1- \left(1- 2 u \frac{dt}{dx}\right)^{k+1} \right] $$”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“Note that we are integrating over $x^\prime$ instead of $x$. Thus, ton”, +“get the real shaded area, we need to multiply $I_{j+1/2}$ by $dx$. Son”, +“n”, +“<a name=’eq:flux2’></a>n”, +“(Flux Eqn)n”, +“$$F_{j+1/2}= \frac {dx} {dt}I_{j+1/2}$$ n”, +“In this form, the scheme is conservative andn”, +“weakly diffusive. But it still lacks positive definiteness. A sufficientn”, +“condition for this is n”, +“n”, +“<a name=’eq:posdef’></a>n”, +“(Positive Definite Condition)n”, +“$$0 \le I_{j+1/2} dx \le c_j dx$$ n”, +“That is, the total outflow is nevern”, +“negative and never greater than $c_j dx$. In other words, the shadedn”, +“area should be no less than zero but no greater than the area of then”, +“rectangle with length $c_j$ and width $dx$. n”, +“([Figure Total in Cell](#fig:limit))n”, +“shows why the total outflow should be limited above by $c_j dx$:”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“Image(filename=’images/limit.png’,width=’60%’)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“<a name=’fig:limit’></a>n”, +“Figure Total in Cell: The shaded area is equal to $c_j dx$, and it is already greater thann”, +“the total amount of vapour (the area under the curve) in grid box $j$.n”, +“If $I_{j+1/2} dx > c_j dx$, then the total outflow $I_{j+1/2} dx$ wouldn”, +“be greater than the amount of vapour in the grid box, and the amount ofn”, +“vapour will be negative at the next time step, thus violating then”, +“positive definiteness requirement. n”, +“n”, +“If the total outflow is larger than the shaded area $c_j dx$, we willn”, +“get negative values of $c$ in this grid box at the next time step. We don”, +“not want this to happen since negative values are meaningless.n”, +“n”, +“To satisfy the condition for positive definitenessn”, +“([Positive Definite Condition](#eq:posdef)) we need to guarantee thatn”, +“$I_{j+1/2} \le c_j$ holds at all time steps. We can achieve thisn”, +“condition by multiplying $I_{j+1/2}$ by a weighting factor. Definen”, +“$I_{j+1/2}^\prime$ as n”, +“n”, +“<a name=’eq:normalize’></a>n”, +“(Normalization Eqn)n”, +“$$I_{j+1/2}^\prime=I_{j+1/2} \frac {c_j}{I_j}$$ n”, +“where n”, +“$$n”, +“ {I_j} = \int_{-1/2}^{1/2} c_j(x^\prime) dx^\prime $$n”, +“$$ = \sum_{k=0}^{l} \frac {a_{j, k}} {(k+1) 2^{k+1}} [(-1)^k +1]$$n”, +“n”, +“Since the total flow out of a grid box is always less than the totaln”, +“grid volume, $I_{j+1/2}/I_j$ is always less than 1, thusn”, +“$I_{j+1/2} c_j/I_j$ is always less than $c_j$. Thus we can satisfy then”, +“upper limit of the positive definiteness conditionn”, +“([Positive Definite Condition](#eq:posdef)) by multiplying $I_{j+1/2}$ by a weightingn”, +“factor $c_j/I_j$. So now $F$ is defined as:n”, +“$$F_{j+1/2}= \frac {dx} {dt}\frac {c_j}{I_j} I_{j+1/2}$$”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“Now to satisfy the lower limit of the positive definiteness condition,n”, +“([Positive Definite Condition](#eq:posdef))n”, +“we need to make sure $I_{j+1/2}$ remains non negative at all time steps.n”, +“So we will set $I_{j+1/2}$ to 0 whenever it is negative.n”, +“n”, +“If we are looking at the parts of the curve that are far away from then”, +“peak, $I_j=0, I_{j+1/2}=0$, and we will be dividing by 0 in ([Normalization Eqn](#eq:normalize))! To avoid this instability, we introduce a smalln”, +“term $\epsilon$ when $I_j=0, I_{j+1/2}=0$, i.e., we set $I_j$ ton”, +“$\epsilon$.n”, +“n”, +“Combining all the conditions from above, the advection scheme isn”, +“described as follows:n”, +“$$c(j, n+1)= c(j, n) - \frac {dt} {dx} [F(j+1/2, n)- F(j-1/2, n)]$$n”, +“$$F(j+1/2, n)= \frac {dx}{dt} \frac {i_{l, j+1/2}}{i_{l, j}} c_j$$ n”, +“withn”, +“$$i_{l, j+1/2} = \hbox{max}(0, I_{j+1/2})$$n”, +“$$i_{l, j} = \hbox{max}(I_{l, j}, i_{l, j+1/2} + \epsilon)$$ n”, +“where $l$ is the order of the polynomial we use to interpolate $c$ in each grid box.n”, +“n”, +“An example function for this scheme is in advection_funs.py. The pythonn”, +“function advection3(timesteps, order) takes in 2 arguments, the first isn”, +“the number of time steps to be computed, the second is the order of then”, +“polynomial for the approximation of $c$ within each grid box. It plots the curve at 20 time steps.n”, +“Students should try it out and compare this scheme with the previousn”, +“two.”

+
+

]
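To make the bookkeeping concrete, here is a minimal sketch of the flux calculation, limiting, and renormalization for a single grid box. It is written independently of `advection3` in advection_funs.py; the helper name, the $\epsilon$ value, and the order-1 coefficients used in the example are assumptions, not the lab's code.

```python
import numpy as np

def bott_flux(a_jk, c_j, u, dt, dx, eps=1e-12):
    """Limited, renormalized flux F_{j+1/2} for one grid box,
    given polynomial coefficients a_jk = [a_j0, a_j1, ...]."""
    cr = u * dt / dx                           # Courant number
    k = np.arange(len(a_jk))
    w = a_jk / ((k + 1) * 2.0 ** (k + 1))      # common weights a_jk / ((k+1) 2^(k+1))
    I_out = np.sum(w * (1.0 - (1.0 - 2.0 * cr) ** (k + 1)))   # I_{j+1/2}, outflow integral
    I_tot = np.sum(w * ((-1.0) ** k + 1.0))                   # I_j, total in the grid box
    i_out = max(0.0, I_out)                    # lower limit: never a negative outflow
    i_tot = max(I_tot, i_out + eps)            # upper limit, plus guard against division by zero
    return (dx / dt) * c_j * i_out / i_tot

# example: order-1 coefficients a_{j,0} = c_j, a_{j,1} = c_{j+1} - c_j (one of the two l=1 choices)
c_j, c_jp1 = 0.5, 0.9
F_half = bott_flux(np.array([c_j, c_jp1 - c_j]), c_j, u=1.0, dt=0.1, dx=1.0)
```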

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {}, +“outputs”: [], +“source”: [

+
+

“afs.advection3(60, 4, lab_example=True)”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“### Problem One ###n”, +“n”, +“Using the Bott Scheme, modify initialize in advection_funs.py to solve the following advection problem: The wind is moving along the x-axis with speed u=20 m/s. The initial distribution curve is 50 km in width. Use your program to approximate the curve during 24 hours.n”, +“n”, +“a\) Run your program for different orders of approximating polynomialsn”, +“(up to 4). Compare the accuracy of approximation for different orders.n”, +“Do you see better results with increasing order? Is this true for alln”, +“orders from 0 to 4? Is there any particularity to odd and even ordern”, +“polynomials? What if you decrease or increase the Courant number (value in front of dx/u for dt calculation)?n”, +“n”, +“b\) For odd ordered polynomials, advection_funs.py uses the representationn”, +“of $a_{j,k}$ that involves an extra point to the right of the centren”, +“grid point. Modify the table of coefficients for odd ordered polynomialsn”, +“([Table $\ell=1$](#tab:ell1))and ([Table $\ell=3$](#tab:ell3)) to use the extra point to the left of then”, +“centre grid point. Run your program again and compare the results of 2n”, +“different representations of $a_{j,k}$ for order 1 and 3, respectively.n”, +“Is one representation better than the other, or about the same, or doesn”, +“each have its own problem? How, do you think the differentn”, +“representation affects the result?n” +“n”, +“c\) What happens if you increase the Courant number to greater than one? Hint: check the speed.n”

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## Conclusion ##n”, +“n”, +“The last scheme presented solves 2 problems introduced in the previousn”, +“two schemes. The centred differencing scheme lacks positive definitenessn”, +“because it is of second order accuracy, thus it introduces additionaln”, +“oscillations near the peak. The scheme presented here solves thisn”, +“problem by checking and normalising the relevant values (ie, $I_{j+1/2}$n”, +“and $I_j$) when needed at each time step. The upstream scheme producesn”, +“strong diffusion because it is of only first order accuracy. The schemen”, +“presented here solves this problem by using higher order polynomials ton”, +“approximate $c$ at each grid box.n”, +“n”, +“Experiments have shown that this scheme is numerically stable in mostn”, +“atmospheric situations. This scheme is only slightly unstable in then”, +“case of strong deformational flow field models.n”, +“n”, +“For more detail about this advection scheme , please refer to ([Bott, 1989](#Ref:Bott)).n”, +“Since 1989, new advection schemes including the MUSCL and TVD have been developed and are more routinely used. “

+
+

]

+
+

}, +{

+
+

“cell_type”: “markdown”, +“metadata”: {}, +“source”: [

+
+

“## References ##n”, +“n”, +“<a name=’Ref:Bott’></a>n”, +“Bott, A., 1989: A positive definite advection scheme obtained by nonlinear renormalization of the advective fluxes. Monthly Weather Review, 117, 1006–1015.”

+
+

]

+
+

}, +{

+
+

“cell_type”: “code”, +“execution_count”: null, +“metadata”: {

+
+

“collapsed”: true

+
+

}, +“outputs”: [], +“source”: []

+
+

}

+
+
+

], +“metadata”: {

+
+
+
“jupytext”: {

“encoding”: “# -- coding: utf-8 --“, +“formats”: “ipynb,py:percent”, +“notebook_metadata_filter”: “all,-language_info,-toc,-latex_envs”

+
+
+

}, +“kernelspec”: {

+
+

“display_name”: “Python 3 (ipykernel)”, +“language”: “python”, +“name”: “python3”

+
+

}, +“language_info”: {

+
+
+
“codemirror_mode”: {

“name”: “ipython”, +“version”: 3

+
+
+

}, +“file_extension”: “.py”, +“mimetype”: “text/x-python”, +“name”: “python”, +“nbconvert_exporter”: “python”, +“pygments_lexer”: “ipython3”, +“version”: “3.12.1”

+
+

}, +“nbsphinx”: {

+
+

“execute”: “never”

+
+

}, +“toc”: {

+
+

“nav_menu”: {}, +“number_sections”: true, +“sideBar”: true, +“skip_h1_title”: false, +“toc_cell”: true, +“toc_position”: {}, +“toc_section_display”: “block”, +“toc_window_display”: true

+
+

}

+
+

}, +“nbformat”: 4, +“nbformat_minor”: 2

+
+
+

}

+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab2/01-lab2.html b/notebooks/lab2/01-lab2.html new file mode 100644 index 0000000..e4dff67 --- /dev/null +++ b/notebooks/lab2/01-lab2.html @@ -0,0 +1,618 @@ + + + + + + + + Lab 2: Stability and accuracy — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Lab 2: Stability and accuracy

+

Important - before you start

+

Before starting on a new lab, you should “fetch” any changes that we have made to the labs in our repository (we are continually trying to improve them). Follow the instructions here: https://rhwhite.github.io/numeric_2024/getting_started/python.html#Pulling-changes-from-the-github-repository

+

Caution - if you have made changes to, for example, lab 1, but you didn’t duplicate and rename the file first, this can write over your changes! Follow the instructions above carefully to not lose your work, and remember to always create a copy of each lab for you to do your work in.

+

This step will be smoother if you haven’t saved any changes to the default files (sometimes even opening it, and saving it, counts as a change!)

+

Some notes about navigating in Jupyter Lab

+

In Jupyter Lab, once you have opened a lab, if you click on the symbol that looks like three bullet points over on the far left, this brings up a contents page that allows you to jump immediately to any particular section in the lab you are currently looking at.

+

The jigsaw puzzle icon below this allows you to install extensions, if you want extra functionality in your notebooks (you may need to go to Settings, Enable Extension Manager to access this).

+

Also, in Jupyter Lab you can click on links in the ‘List of Problems’ to take you to directly to each problem. Remember to check canvas for which problems you need to submit for the weekly assignments.

+
+

List of Problems

+

There are problems throughout this lab, as you might find at the end of a textbook chapter. Some of these problems will be set as an assignment for you to hand in to be graded - see the Lab 2: Assignment on the canvas site for which problems you should hand in. However, if you have time, you are encouraged to work through all of these problems to help you better understand the material.

+ +
+
+

Objectives

+

In Lab #1 you were introduced to the concept of discretization, and saw that there were many different ways to approximate a given problem. This Lab will delve further into the concepts of accuracy and stability of numerical schemes, in order that we can compare the many possible discretizations.

+

At the end of this Lab, you will have seen where the error in a numerical scheme comes from, and how to quantify the error in terms of order. The stability of several examples will be demonstrated, so that you can recognize when a scheme is unstable, and how one might go about modifying the scheme to eliminate the instability.

+

Specifically you will be able to:

+
    +
  • Define the term and identify: Implicit numerical scheme and Explicit numerical scheme.

  • +
  • Define the term, identify, or write down for a given equation: Backward Euler method and Forward Euler method.

  • +
  • Explain the difference in terminology between: Forward difference discretization and Forward Euler method.

  • +
  • Define: truncation error, local truncation error, global truncation error, and stiff equation.

  • +
  • Explain: a predictor-corrector method.

  • +
  • Identify from a plot: an unstable numerical solution.

  • +
  • Be able to: find the order of a scheme, use the test equation to find the stability of a scheme, find the local truncation error from a graph of the exact solution and the numerical solution.

  • +
  • Evaluate and compare the accuracy and stability of at least 3 different discretization methods.

  • +
+
+
+

Readings

+

This lab is designed to be self-contained. If you would like additional background on any of the following topics, I’d recommend this book: Finite difference computing with PDEs by Hans Petter Langtangen and Svein Linge. The entire book is available on github with the python code here. Much of the content of this lab is summarized in Appendix B – truncation analysis

+ +
+
+

Introduction

+

Remember from Lab #1 that you were introduced to three approximations to the first derivative of a function, \(T^\prime(t)\). If the independent variable, \(t\), is discretized at a sequence of N points, \(t_i=t_0+i \Delta t\), where \(i = 0,1,\ldots, N\) and \(\Delta t= 1/N\), then we can write the three approximations as follows:

+

Forward difference formula:

+
+\[T^\prime(t_i) \approx \frac{T_{i+1}-T_i}{\Delta t}\]
+

Backward difference formula:

+
+\[T^\prime(t_i) \approx \frac{T_{i}-T_{i-1}}{\Delta t}\]
+

Centered difference formula (add together the forwards and backwards formula):

+
+\[T^\prime(t_i) \approx \frac{T_{i+1}-T_{i-1}}{2 \Delta t}\]
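As a quick illustration (not part of the lab code), all three formulas can be evaluated on a discretized function with numpy array slicing; the test function, grid size, and variable names below are arbitrary choices for demonstration.

```python
import numpy as np

N = 100
t = np.linspace(0, 1, N + 1)          # t_i = i * dt, with dt = 1/N
dt = t[1] - t[0]
T = np.sin(2 * np.pi * t)             # any smooth test function stands in for T(t)

forward  = (T[1:] - T[:-1]) / dt            # defined at t_0 ... t_{N-1}
backward = (T[1:] - T[:-1]) / dt            # same differences, attributed to t_1 ... t_N
centered = (T[2:] - T[:-2]) / (2 * dt)      # defined at t_1 ... t_{N-1}

exact = 2 * np.pi * np.cos(2 * np.pi * t)
print(abs(forward[1:] - exact[1:-1]).max(),   # forward-difference error at interior points
      abs(centered - exact[1:-1]).max())      # centered error is noticeably smaller
```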
+

In fact, there are many other possible methods to approximate the derivative (some of which we will see later in this Lab). With such a large choice of approximation schemes, it is not at all clear at this point which, if any, of the schemes is the “best”. It is the purpose of this Lab to present you with some basic tools that will help you to decide on an appropriate discretization for a given problem. There is no generic “best” method, and the choice of discretization will always depend on the problem that is being dealt with.

+

In an example from Lab #1, the forward difference formula was used to compute solutions to the saturation development equation, and you saw two important results:

+
    +
  • reducing the grid spacing, \(\Delta t\), seemed to improve the accuracy of the approximate solution; and

  • +
  • if \(\Delta t\) was taken too large (that is, the grid was not fine enough), then the approximate solution exhibited non-physical oscillations, or a numerical instability.

  • +
+

There are several questions that arise from this example:

+
    +
  1. Is it always true that reducing \(\Delta t\) will improve the discrete solution?

  2. +
  3. Is it possible to improve the accuracy by using another approximation scheme (such as one based on the backward or centered difference formulas)?

  4. +
  5. Are these numerical instabilities something that always appear when the grid spacing is too large?

  6. +
  7. By using another difference formula for the first derivative, is it possible to improve the stability of the approximate solution, or to eliminate the stability altogether?

  8. +
+

The first two questions, related to accuracy, will be dealt with in the next section, Section 5 (1.5), and the last two will have to wait until Section 6 (1.6) when stability is discussed.

+
+
+

Accuracy of Difference Approximations

+

Before moving on to the details of how to measure the error in a scheme, let’s take a closer look at another example which we’ve seen already …

+
+

Accuracy Example

+

Let’s go back to the heat conduction equation from Lab #1, where the temperature, \(T(t)\), of a rock immersed in water or air, evolves in time according to the first order ODE:

+
+\[\frac{dT}{dt} = \lambda(T,t) \, (T-T_a)\]
+

with initial condition \(T(0)\). We saw in the section on the forward Euler method that one way to discretize this equation was using the forward difference formula  for the derivative, leading to

+

\(T_{i+1} = T_i + \Delta t \, \lambda(T_i,t_i) \, (T_i-T_a).\) (eq: euler)

+

Similarly, we could apply either of the other two difference formulae to obtain other difference schemes, namely what we called the backward Euler method

+

\(T_{i+1} = T_i + \Delta t \, \lambda(T_{i+1},t_{i+1}) \, (T_{i+1}-T_a),\) (eq: beuler)

+

and the mid-point or leap-frog centered method

+

\(T_{i+1} = T_{i-1} + 2 \Delta t \, \lambda(T_{i},t_{i}) \, (T_{i}-T_a).\) (eq: midpoint)

+

The forward Euler and mid-point schemes are called explicit methods, since they allow the temperature at any new time to be computed in terms of the solution values at previous time steps only, i.e. they do not require any information from the current or future time steps. The backward Euler scheme, on the other hand, is called an implicit scheme, since it gives an equation defining \(T_{i+1}\) implicitly, that is, the function \(\lambda\) takes the value \(T_{i+1}\) as an input, in order to calculate \(T_{i+1}\). If \(\lambda\) depends non-linearly on \(T\), then this equation may require an additional step, involving the iterative solution of a non-linear equation. We will pass over this case for now, and refer you to a reference such as Burden and Faires (1981) for the details on non-linear solvers such as Newton’s method.
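For the constant-\(\lambda\) case the implicit equation can be rearranged by hand, which is a useful way to see what “implicit” means in practice. The sketch below is an illustration with made-up parameter values, not the beuler routine from lab2_functions.py used later in this lab.

```python
import numpy as np

def backward_euler_constant_lambda(npts, tend, To, Ta, lam):
    """Backward Euler for dT/dt = lam*(T - Ta) with constant lam:
    here the implicit equation can be solved algebraically for T_{i+1}."""
    dt = tend / npts
    T = np.empty(npts + 1)
    T[0] = To
    for i in range(npts):
        # T_{i+1} = T_i + dt*lam*(T_{i+1} - Ta)  =>  solve for T_{i+1}
        T[i + 1] = (T[i] - dt * lam * Ta) / (1.0 - dt * lam)
    return np.linspace(0, tend, npts + 1), T

time, temp = backward_euler_constant_lambda(30, 10.0, To=30.0, Ta=20.0, lam=0.8)
```

When \(\lambda\) depends on \(T\), the division step above would be replaced by an iterative non-linear solve, which is the extra cost of implicit methods mentioned in the paragraph above.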

+

Important point: Note that eq: midpoint requires the value of the temperature at two points: \(T_{i-1}\) and \(T_{i}\) to calculate the temperature \(T_{i+1}\). This requires an approximate guess for \(T_i\), which we will discuss in more detail below.

+

For now, let’s assume that the function \(\lambda\) is a constant, and thus it is independent of \(T\) and \(t\). Plots of the numerical results from each of these schemes, along with the exact solution, are given in Figure 1 (with the “unphysical” parameter value \(\lambda=0.8\) chosen to accentuate the growth of numerical errors, even though in a real material this would violate conservation of energy).

+

The functions used to make the following figure are imported from lab2_functions.py

+
+
[ ]:
+
+
+
# import and define functions
+%matplotlib inline
+import context
+import matplotlib.pyplot as plt
+from numlabs.lab2.lab2_functions import euler,beuler,leapfrog
+import numpy as np
+plt.style.use('ggplot')
+
+#
+# save our three functions to a dictionary, keyed by their names
+#
+theFuncs={'euler':euler,'beuler':beuler,'leapfrog':leapfrog}
+#
+# store the results in another dictionary
+#
+output={}
+#
+#end time = 10 seconds
+#
+tend=10.
+#
+# start at 30 degC, air temp of 20 deg C
+#
+Ta=20.
+To=30.
+#
+# note that lambda is a reserved keyword in python so call this
+# thelambda
+#
+theLambda=0.8  # units are per second, since time is measured in seconds
+#
+# dt = 10/npts = 10/30 = 1/3
+#
+npts=30
+for name,the_fun in theFuncs.items():
+    output[name]=the_fun(npts,tend,To,Ta,theLambda)
+#
+# calculate the exact solution for comparison
+#
+exactTime=np.linspace(0,tend,npts)
+exactTemp=Ta + (To-Ta)*np.exp(theLambda*exactTime)
+#
+# now plot all four curves
+#
+fig,ax=plt.subplots(1,1,figsize=(8,8))
+ax.plot(exactTime,exactTemp,label='exact',lw=2)
+for fun_name in output.keys():
+    the_time,the_temp=output[fun_name]
+    ax.plot(the_time,the_temp,label=fun_name,lw=2)
+ax.set_xlim([0,2.])
+ax.set_ylim([30.,90.])
+ax.grid(True)
+ax.set(xlabel='time (seconds)',ylabel='bar temp (deg C)')
+out=ax.legend(loc='upper left')
+
+
+
+

Figure 1: A plot of the exact and computed solutions for the temperature of a rock, with parameters: \(T_a=20\), \(T(0)=30\), \(\lambda=0.8\), \(\Delta t=\frac{1}{3}\)

+

Notice from these results that the mid-point/leap-frog scheme is the most accurate, and backward Euler the least accurate.

+

The next section explains why some schemes are more accurate than others, and introduces a means to quantify the accuracy of a numerical approximation.

+
+
+

Round-off Error and Discretization Error

+

From Accuracy Example and the example in the Forward Euler section of the previous lab,  it is obvious that a numerical approximation is exactly that - an approximation. The process of discretizing a differential equation inevitably leads to errors. In this section, we will tackle two fundamental questions related to the accuracy of a numerical approximation:

+
    +
  • Where does the error come from (and how can we measure it)?

  • +
  • How can the error be controlled?

  • +
+
+
+

Where does the error come from?

+
+

Round-off error:

+

When attempting to solve differential equations on a computer, there are two main sources of error. The first, round-off error, derives from the fact that a computer can only represent real numbers by floating point approximations, which have only a finite number of digits of accuracy.

+
    +
  • Mathematical note floating point notation

  • +
+

For example, we all know that the number \(\pi\) is a non-repeating decimal, which to the first twenty significant digits is \(3.1415926535897932385\dots\) Imagine a computer which stores only eight significant digits, so that the value of \(\pi\) is rounded to \(3.1415927\).

+

In many situations, these eight digits of accuracy may be sufficient. However, in some cases, the results can be catastrophic, as shown in the following example:

+
+\[\frac{\pi}{(\pi + 0.00000001)-\pi}.\]
+

Since the computer can only “see” 8 significant digits, the addition \(\pi+0.00000001\) is simply equal to \(\pi\) as far as the computer is concerned. Hence, the computed denominator is zero and the result is \(\frac{\pi}{0}\) - an undefined expression! The exact answer \(100000000\pi\), however, is a very well-defined non-zero value.
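A minimal way to see this effect in Python is to force single precision (roughly 7–8 significant digits), which here stands in as an assumption for the hypothetical 8-digit machine in the text:

```python
import numpy as np

pi32 = np.float32(np.pi)
tiny = np.float32(1.0e-8)

denominator = (pi32 + tiny) - pi32     # the added 1e-8 is lost in single precision
print(denominator)                     # prints 0.0
# pi32 / denominator would therefore divide by zero,
# while the exact answer is pi / 1e-8 = 1e8 * pi.
```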

+

A side note: round-off errors played a key role in Edward Lorenz’s exploration of chaos theory in physics, see https://www.aps.org/publications/apsnews/200301/history.cfm

+
+
+

Truncation error:

+

The second source of error stems from the discretization of the problem, and hence is called discretization error or truncation error. In comparison, round-off error is always present, and is independent of the discretization being used. The simplest and most common way to analyse the truncation error in a scheme is using Taylor series expansions.

+

Let us begin with the forward difference formula for the first derivative, which involves the discrete solution at times \(t_{i+1}\) and \(t_{i}\). Since only continuous functions can be written as Taylor series, we expand the exact solution (instead of the discrete values \(T_i\)) at the discrete point \(t_{i+1}\):

+
+\[T(t_{i+1}) = T(t_i+\Delta t) = T(t_i) + (\Delta t) T^\prime(t_i) + \frac{1}{2}(\Delta t)^2 T^{\prime\prime}(t_i) + \cdots\]
+

Rewriting to clean this up slightly gives eq: feuler

+
+\[\begin{split}\begin{aligned}
T(t_{i+1}) &= T(t_i) + \Delta t T^{\prime}(t_i,T(t_i)) + \underbrace{\frac{1}{2}(\Delta t)^2T^{\prime\prime}(t_i) + \cdots}_{\mbox{ truncation error}} \\
&= T(t_i) + \Delta t T^{\prime}(t_i) + {\cal O}(\Delta t^2).
\end{aligned}\end{split}\]
+

This second expression writes the truncation error term in terms of order notation. If we write \(y = {\cal O}(\Delta t)\), then we mean simply that \(y < c \cdot \Delta t\) for some constant \(c\), and we say that “ \(y\) is first order in \(\Delta t\) ” (since it depends on \(\Delta t\) to the first power) or “ \(y\) is big-oh of \(\Delta t\).” As \(\Delta t\) is assumed small, the next term in the series, \(\Delta t^2\) is small compared to the +\(\Delta t\) term. In words, we say that forward euler is first order accurate with errors of second order.
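One informal way to check the claimed first-order behaviour (a quick numerical experiment, not part of the lab code) is to halve \(\Delta t\) repeatedly and watch the forward-difference error halve as well; the test function and evaluation point below are arbitrary.

```python
import numpy as np

def forward_diff_error(dt, t=0.3):
    """Error of the forward difference estimate of d/dt sin(t) at a fixed point."""
    approx = (np.sin(t + dt) - np.sin(t)) / dt
    return abs(approx - np.cos(t))

for dt in [0.1, 0.05, 0.025, 0.0125]:
    print(f"dt = {dt:7.4f}   error = {forward_diff_error(dt):.6f}")
# the error drops by roughly a factor of 2 each time dt is halved: O(dt)
```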

+

It is clear from this that as \(\Delta t\) is reduced in size (as the computational grid is refined), the error is also reduced. If you remember that we derived the approximation from the limit definition of derivative, then this should make sense. This dependence of the error on powers of the grid spacing \(\Delta t\) is an underlying characteristic of difference approximations, and we will see approximations with higher orders in the coming sections …

+

There is one more important distinction to be made here. The “truncation error” we have been discussing so far is actually what is called local truncation error. It is “local” in the sense that we have expanded the Taylor series locally about the exact solution at the point \(t_i\).

+

There is also a global truncation error (or, simply, global error), which is the error made during the course of the entire computation, from time \(t_0\) to time \(t_n\). The difference between local and global truncation error is illustrated in Figure 2. If the local error stays approximately constant, then the global error will be approximately the local error times the number of timesteps. For a fixed simulation length of \(t\), the number of timesteps required is \(t/\Delta t\), thus the global truncation error will be approximately of the order of \(1/\Delta t\) times the local error, or about one order of \(\Delta t\) worse (lower order) than the local error.

+


+

Figure Error: Local and global truncation error.

+

It is easy to get a handle on the order of the local truncation error using Taylor series, regardless of whether the exact solution is known, but no similar analysis is available for the global error. We can write

+
+\[\text{global error} = |T(t_n)-T_n|\]
+

but this expression can only be evaluated if the exact solution is known ahead of time (which is not the case in most problems we want to compute, since otherwise we wouldn’t be computing it in the first place!). Therefore, when we refer to truncation error, we will always be referring to the local truncation error.

+
+
+

Second order accuracy

+

Above we mentioned a problem with evaluating the mid-point method. If we start with three points \((t_0,t_1,t_2)\), each separated by \(\Delta t/2\) so that \(t_2 - t_0=\Delta t\)

+

\begin{align} +y(t_2)&=y(t_1) + y^\prime (t_1,y(t_1))(t_2 - t_1) + \frac{y^{\prime \prime}(t_1,y(t_1))}{2} (t_2 - t_1)^2 + \frac{y^{\prime \prime \prime}(t_1,y(t_1))}{6} (t_2 - t_1)^3 + h.o.t. \ (eq.\ a)\\ +y(t_0)&=y(t_1) + y^\prime (t_1,y(t_1))(t_0 - t_1) + \frac{y^{\prime \prime}(t_1)}{2} (t_0 - t_1)^2 + \frac{y^{\prime \prime \prime}(t_1)}{6} (t_0 - t_1)^3 + h.o.t. \ (eq.\ b) +\end{align}

+

where h.o.t. stands for “higher order terms”. Rewriting in terms of \(\Delta t\):

+

\begin{align} +y(t_2)&=y(t_1) + \frac{\Delta t}{2}y^\prime (t_1,y(t_1)) + \frac{\Delta t^2}{8} y^{\prime \prime}(t_1,y(t_1)) + \frac{\Delta t^3}{48} y^{\prime \prime \prime}(t_1,y(t_1)) + h.o.t. \ (eq.\ a)\\ +y(t_0)&=y(t_1) - \frac{\Delta t}{2}y^\prime (t_1,y(t_1)) + \frac{\Delta t^2}{8} y^{\prime \prime}(t_1,y(t_1)) - \frac{\Delta t^3}{48} y^{\prime \prime \prime}(t_1,y(t_1)) + h.o.t. \ (eq.\ b) +\end{align}

+

and subtracting:

+

\begin{align} +y(t_2)&=y(t_0) + \Delta t y^\prime (t_1,y(t_1)) + \frac{\Delta t^3}{24} y^{\prime \prime \prime}(t_1,y(t_1)) + h.o.t. \ (eq.\ c) +\end{align}

+

where \(t_1=t_0 + \Delta t/2\)

+

Comparing with eq: feuler we can see that we’ve canceled the \(\Delta t^2\) terms, so that if we drop the \(\frac{\Delta t^3}{24} y^{\prime \prime \prime}(t_1,y(t_1))\) and higher order terms we’re doing one order better than forward Euler, as long as we can solve the problem of estimating \(y\) at the midpoint: \(y(t_1) = y(t_0 + \Delta t/2)\)

+
+
+

Mid-point and leap-frog

+
+
The mid-point and leap-frog methods take two slightly different approaches to estimating \(y(t_0 + \Delta t/2)\).
+
For the explicit mid-point method, we estimate \(y\) at the midpoint by taking a half-step:
+
+

\begin{align} +k_1 & = \Delta t y^\prime(t_0,y(t_0)) \\ +k_2 & = \Delta t y^\prime(t_0 + \Delta t/2,y(t_0) + k_1/2) \\ +y(t_0 + \Delta t) &= y(t_0) + k_2 +\end{align}

+

Compare this to the leap-frog method, which uses the results from one half-interval to calculate the results for the next half-interval:

+

\begin{align}
y(t_0 + \Delta t/2) & = y(t_0) + \frac{\Delta t}{2} y^\prime(t_0,y(t_0))\ (i) \\
y(t_0 + \Delta t) & = y(t_0) + \Delta t y^\prime(t_0 + \Delta t/2,y(t_0 + \Delta t/2))\ (ii)\\
y(t_0 + 3 \Delta t/2) & = y(t_0 + \Delta t/2) + \Delta t y^\prime(t_0 + \Delta t,y(t_0 + \Delta t))\ (iii) \\
y(t_0 + 2 \Delta t) & = y(t_0 + \Delta t) + \Delta t y^\prime(t_0 + 3\Delta t/2,y(t_0 + 3 \Delta t/2))\ (iv) \\
\end{align}

+

Comparing (iii) and (iv) shows how the method gets its name: the half-interval and whole interval values are calculated by leaping over each other. Once the first half and whole steps are done, the rest of the integration is completed by repeating (iii) and (iv) until the endpoint is reached.
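Written as code, a single explicit mid-point step is only a few lines; this is a generic sketch (the function name and test problem are placeholders, not the lab2_functions implementations):

```python
def midpoint_step(f, t0, y0, dt):
    """One explicit mid-point (2nd order Runge-Kutta) step for dy/dt = f(t, y)."""
    k1 = dt * f(t0, y0)                      # slope at the start of the interval
    k2 = dt * f(t0 + dt / 2, y0 + k1 / 2)    # slope re-evaluated at the estimated half step
    return y0 + k2

# example: one step of the cooling problem dT/dt = lam*(T - Ta)
lam, Ta = 0.8, 20.0
T1 = midpoint_step(lambda t, T: lam * (T - Ta), 0.0, 30.0, 1.0 / 3.0)
```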

+

The leap-frog scheme has the advantage that it is time reversible or, as the Wikipedia article says, symplectic. This means that estimating \(y(t_1)\) and then using that value to go backwards by \(-\Delta t\) yields \(y(t_0)\) exactly, which the mid-point method does not. The mid-point method, however, is one member (the 2nd order member) of a family of Runge Kutta integrators, which will be covered in more detail in Lab 4.

+
+
+
+

How can we control the error?

+

Now that we’ve determined the source of the error in numerical methods, we would like to find a way to control it; that is, we would like to be able to compute and be confident that our approximate solution is “close” to the exact solution. Round-off error is intrinsic to all numerical computations, and cannot be controlled (except to develop methods that do not magnify the error unduly … more on this later). Truncation error, on the other hand, is under our control.

+

In the simple ODE examples that we’re dealing with in this lab, the round-off error in a calculation is much smaller than the truncation error. Furthermore, the schemes being used are stable with respect to round-off error in the sense that round-off errors are not magnified in the course of a computation. So, we will restrict our discussion of error control in what follows to the truncation error.

+

However, there are many numerical algorithms in which the round-off error can dominate the result of a computation (Gaussian elimination is one example, which you will see in Lab #3), and so we must always keep it in mind when doing numerical computations.

+

There are two fundamental ways in which the truncation error in an approximation  can be reduced:

+
  1. Decrease the grid spacing. Provided that the second derivative of the solution is bounded, it is clear from the error term in eq: feuler that as \(\Delta t\) is reduced, the error will also get smaller. This principle was demonstrated in an example from Lab #1 using the Forward Euler method (and is illustrated again in the short sketch after this list). The disadvantage to decreasing \(\Delta t\) is that the cost of the computation increases, since more steps must be taken. Also, there is a limit to how small \(\Delta t\) can be, beyond which round-off errors will start polluting the computation.

  2. Increase the order of the approximation. We saw above that the forward difference approximation of the first derivative is first order accurate in the grid spacing. It is also possible to derive higher order difference formulas which have a leading error term of the form \((\Delta t)^p\), with \(p>1\). As noted above in Section Second Order, the midpoint formula is a second order scheme, and some further examples will be given in Section Higher order Taylor. The main disadvantage to using very high order schemes is that the error term depends on higher derivatives of the solution, which can sometimes be very large – in this case, the stability of the scheme can be adversely affected (for more on this, see Section Stability).
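To illustrate the first point, here is a small stand-alone sketch (illustrative values only, not part of the lab code) that integrates the cooling equation with forward Euler at several step sizes; for a first order scheme the error at a fixed time should roughly halve each time \(\Delta t\) is halved.

import numpy as np

lam, Ta, T0, tend = -0.8, 20.0, 30.0, 1.0

def euler_solve(dt):
    # forward Euler for dT/dt = lambda*(T - Ta), integrated up to tend
    nsteps = int(round(tend / dt))
    T = T0
    for i in range(nsteps):
        T = T + dt * lam * (T - Ta)
    return T

exact = Ta + (T0 - Ta) * np.exp(lam * tend)
for dt in [0.1, 0.05, 0.025]:
    err = abs(euler_solve(dt) - exact)
    print(f"dt = {dt:6.3f}   error = {err:.5f}")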
+
+
+

Problem Accuracy

+

In order to investigate these two approaches to improving the accuracy of an approximation, you can use the code in terror.ipynb to play with the solutions to the heat conduction equation. You will need the additional functions provided for this lab. These can be found on your local computer: numeric_2024/numlabs/lab2 (you will need to fetch upstream from github to get recent changes from our version to your clone before pulling those changes to your local machine; don’t forget to commit your previous labs!). For a given function \(\lambda(T)\), and specified parameter values, you should experiment with various time steps and schemes, and compare the computed results (Note: only the answers to the assigned questions need to be handed in). Look at the different schemes (euler, leap-frog, midpoint, 4th order runge kutta) and run them for various total times (tend) and step sizes (dt=tend/npts).

+

The three schemes that will be used here are forward Euler (first order), leap-frog (second order) and the fourth order Runge-Kutta scheme (which will be introduced more thoroughly in Lab 4).

+

Try three different step sizes for all three schemes for a total of 9 runs. It’s helpful to be able to change the axis limits to look at various parts of the plot.

+

Use your 9 results to answer parts a, b and c below.

+
  • a) Does increasing the order of the scheme, or decreasing the time step, always improve the solution?

  • b) How would you compute the local truncation error from the error plot? And the global error? Do this on a plot for one set of parameters.

  • c) Similarly, how might you estimate the order of the local truncation error? The order of the global error? (Hint: An order \(p\) scheme has truncation error that looks like \(c\cdot(\Delta t)^p\). Read the error off the plots for several values of the grid spacing and use this to find \(p\).) Are the local and global error significantly different? Why or why not?
+
+
+

Other Approximations to the First Derivative

+

The Taylor series method of deriving difference formulae for the first derivative is the simplest, and can be used to obtain approximations with even higher order than two. There are also many other ways to discretize the derivatives appearing in ODE’s, as shown in the following sections…

+
+

Higher Order Taylor Methods

+

As mentioned earlier, there are many other possible approximations to the first derivative using the Taylor series approach. The basic approach in these methods is as follows:

+
  1. expand the solution in a Taylor series at one or more points surrounding the point where the derivative is to be approximated (for example, for the centered scheme, you used two points, \(T(t_i+\Delta t)\) and \(T(t_i-\Delta t)\)). You also have to make sure that you expand the series to a high enough order …

  2. take combinations of the equations until the \(T_i\) (and possibly some other derivative) terms are eliminated, and all you’re left with is the first derivative term.
+

One example is the fourth-order centered difference formula for the first derivative:

+
+\[\frac{-T(t_{i+2})+8T(t_{i+1})-8T(t_{i-1})+T(t_{i-2})}{12\Delta t} = T^\prime(t_i) + {\cal O}((\Delta t)^4)\]
+
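A quick way to convince yourself of the order is to apply the formula to a function whose derivative is known; the sketch below (illustrative only, using \(\sin(t)\)) shows the error dropping by roughly a factor of 16 each time \(\Delta t\) is halved.

import numpy as np

def d1_fourth(f, t, dt):
    # fourth-order centered approximation of f'(t)
    return (-f(t + 2*dt) + 8*f(t + dt) - 8*f(t - dt) + f(t - 2*dt)) / (12*dt)

t0 = 1.0
for dt in [0.1, 0.05, 0.025]:
    err = abs(d1_fourth(np.sin, t0, dt) - np.cos(t0))
    print(f"dt = {dt:6.3f}   error = {err:.2e}")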

Quiz: Try the quiz at this link related to this higher order scheme.

+
+
+

Predictor-Corrector Methods

+

Another class of discretizations are called predictor-corrector methods. Implicit methods can be difficult or expensive to use because of the solution step, and so they are seldom used to integrate ODE’s. Rather, they are often used as the basis for predictor-corrector algorithms, in which a “prediction” for \(T_{i+1}\) based only on an explicit method is then “corrected” to give a better value by using this prediction in an implicit method.

+

To see the basic idea behind these methods, let’s go back (once again) to the backward Euler method for the heat conduction problem which reads:

+
+\[T_{i+1} = T_{i} + \Delta t \, \lambda( T_{i+1}, t_{i+1} ) \, ( T_{i+1} - T_a ).\]
+

Note that after applying the backward difference formula, all terms in the right hand side are evaluated at time \(t_{i+1}\).

+

Now, \(T_{i+1}\) is defined implicitly in terms of itself, and unless \(\lambda\) is a very simple function, it may be very difficult to solve this equation for the value of \(T\) at each time step. One alternative, mentioned already, is the use of a non-linear equation solver such as Newton’s method to solve this equation. However, this is an iterative scheme, and can lead to a lot of extra expense. A cheaper alternative is to realize that we could try estimating or predicting the value of \(T_{i+1}\) using the simple explicit forward Euler formula and then use this in the right hand side, to obtain a corrected value of \(T_{i+1}\). The resulting scheme,

+
+\[\begin{split}\begin{array}{ll} \mathbf{Prediction}: & \widetilde{T}_{i+1} = T_i + \Delta t \, \lambda(T_i,t_i) \, (T_i-T_a), \\ \; \\ \mathbf{Correction}: & T_{i+1} = T_i + \Delta t \, \lambda(\widetilde{T}_{i+1},t_{i+1}) \, (\widetilde{T}_{i+1}-T_a). \end{array}\end{split}\]
+

This method is an explicit scheme, which can also be shown to be second order accurate in \(\Delta t\). It is the simplest in a whole class of schemes called predictor-corrector schemes (more information is available on these methods in a numerical analysis book such as Burden and Faires (1981)).
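A minimal sketch of one predictor-corrector step for the cooling problem, written for a constant \(\lambda\) and with illustrative parameter values (not the course's lab code), might look like this:

def pc_step(T, t, dt, lam=-0.8, Ta=20.0):
    # predictor: forward Euler guess for T at t + dt
    T_pred = T + dt * lam * (T - Ta)
    # corrector: re-evaluate the right hand side using the predicted value,
    # following the Prediction/Correction scheme written above
    T_new = T + dt * lam * (T_pred - Ta)
    return T_new

print(pc_step(30.0, 0.0, 0.1))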

+
+
+

Other Methods

+

The choice of methods is made even greater by two other classes of schemes:

+

Runge-Kutta methods:

+
  • We have already seen two examples of the Runge-Kutta family of integrators: Forward Euler is a first order Runge-Kutta, and the midpoint method is second order Runge-Kutta. Fourth and fifth order Runge-Kutta algorithms will be described in Labs #4 and #5.
+

Multi-step methods:

+
  • These use values of the solution at more than one previous time step in order to increase the accuracy. Compare these to one-step schemes, such as forward Euler, which use the solution only at one previous step.
+

More can be found on these (and other) methods in Burden and Faires (1981) and Newman (2013).

+
+
+
+

Accuracy Summary

+

In this section, you’ve been given a short overview of the accuracy of difference schemes for first order ordinary differential equations. We’ve seen that accuracy can be improved by either decreasing the grid spacing, or by choosing a higher order scheme from one of several classes of methods. When using a higher order scheme, it is important to realize that the cost of the computation usually rises due to an added number of function evaluations (especially for multi-step and Runge-Kutta methods). When selecting a numerical scheme, it is important to keep in mind this trade-off between accuracy and cost.

+

However, there is another important aspect of discretization that we have pretty much ignored. The next section will take a look at schemes of various orders from a different light, namely that of stability.

+
+
+
+

Stability of Difference Approximations

+

The easiest way to introduce the concept of stability is for you to see it yourself.

+
+

Problem Stability

+

This example is a slight modification of Problem Accuracy from the previous section on accuracy. We will add one scheme (backward euler) and drop the 4th order Runge-Kutta, and change the focus from error to stability. The value of \(\lambda\) is assumed a constant, so that the backward Euler scheme results in an explicit method, and we’ll also compute a bit further in time, so that any instability manifests itself more clearly. Run the stability2.ipynb notebook in numlabs/lab2 with \(\lambda= -8\ s^{-1}\), with \(\Delta t\) values that just straddle the stability condition for the forward euler scheme (\(\Delta t < \frac{-2}{\lambda}\), derived below). Create plots that show that 1) the stability condition does in fact predict the onset of the instability in the euler scheme, and

+
  2) determine whether the backward euler and leap-frog are stable or unstable for the same \(\Delta t\) values (you should run out to longer than tend=10 seconds to see if there is a delayed instability),
+

and provide comments/markdown code explaining what you see in the plots.

+
+
+

Determining Stability Properties

+

The heat conduction problem, as you saw in Lab #1, has solutions that are stable when \(\lambda<0\). It is clear from Problem Stability above that some higher order schemes (namely, the leap-frog scheme) introduce a spurious oscillation not present in the continuous solution. This is called a computational or numerical instability, because it is an artifact of the discretization process only. This instability is not a characteristic of the heat conduction problem alone, but is present in other problems where such schemes are used. Furthermore, as we will see below, even a scheme such as forward Euler can be unstable for certain problems and choices of the time step.

+

There is a way to determine the stability properties of a scheme, and that is to apply the scheme to the test equation

+
+\[\frac{dz}{dt} = \lambda z\]
+

where \(\lambda\) is a complex constant.

+

The reason for using this equation may not seem very clear. But if you think of \(\lambda z\) as being the linearization of some more complex right hand side, then the solution to the test equation is \(z=e^{\lambda t}\), and so \(z\) represents, in some sense, a Fourier mode of the solution to the linearized ODE problem. We expect that the behaviour of the simpler, linearized problem should mimic that of the original problem.

+

Applying the forward Euler scheme to this test equation results in the following difference formula

+
+\[z_{i+1} = z_i+(\lambda \Delta t)z_i\]
+

which is a formula that we can apply iteratively to \(z_i\) to obtain

+
+\[\begin{split}\begin{aligned} z_{i+1} &=& (1+\lambda \Delta t)z_{i} \\ &=& (1+\lambda \Delta t)^2 z_{i-1} \\ &=& \cdots \\ &=& (1+\lambda \Delta t)^{i+1} z_{0}.\end{aligned}\end{split}\]
+

The value of \(z_0\) is fixed by the initial conditions, and so this difference equation for \(z_{i+1}\) will “blow up” as \(i\) gets bigger, if the factor in front of \(z_0\) is greater than 1 in magnitude – this is a sign of instability. Hence, this analysis has led us to the conclusion that if

+
+\[|1+\lambda\Delta t| < 1,\]
+

then the forward Euler method is stable. For real values of \(\lambda<0\), this inequality can be shown to be equivalent to the stability condition

+
+\[\Delta t < \frac{-2}{\lambda},\]
+

which is a restriction on how large the time step can be so that the numerical solution is stable.
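You can verify this amplification-factor argument numerically; the sketch below (illustrative values, with \(\lambda = -8\ s^{-1}\) as in Problem Stability) raises \(1+\lambda\Delta t\) to a power for a \(\Delta t\) just below and just above the limit \(-2/\lambda = 0.25\).

import numpy as np

lam, z0, nsteps = -8.0, 1.0, 40
for dt in [0.2, 0.3]:          # the stability limit here is -2/lam = 0.25
    amp = 1 + lam * dt
    z = z0 * amp**nsteps
    print(f"dt = {dt}: |1 + lam*dt| = {abs(amp):.2f},  z after {nsteps} steps = {z:.3e}")

With dt = 0.2 the magnitude of the amplification factor is less than 1 and the iterates decay; with dt = 0.3 it exceeds 1 and the solution blows up.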

+
+
+

Problem Backward Euler

+

Perform a similar analysis for the backward Euler formula, and show that it is always stable when \(\lambda\) is real and negative. Confirm this with plots produced using code similar to that in Problem Stability (i.e. using stability2.ipynb, if you haven’t gone through Problem Stability yet).

+
+
+

Example: leap-frog

+

Now, what about the leap-frog scheme?

+

Applying the leap-frog scheme to the test equation results in the difference equation

+
+\[z_{i+1} = z_{i-1} + 2 \lambda \Delta t z_i.\]
+

Difference formulas such as this one are typically solved by looking for a solution of the form \(z_i = w^i\) which, when substituted into this equation, yields

+
+\[w^2 - 2\lambda\Delta t w - 1 = 0,\]
+

a quadratic equation with solution

+
+\[w = \lambda \Delta t \left[ 1 \pm \sqrt{1+\frac{1}{(\lambda \Delta t)^2}} \right].\]
+

The solution to the original difference equation, \(z_i=w^i\) is stable only if all solutions to this quadratic satisfy \(|w|<1\), since otherwise, \(z_i\) will blow up as \(i\) gets large.

+

The mathematical details are not important here – what is important is that there are two (possibly complex) roots to the quadratic equation for \(w\), and one is always greater than 1 in magnitude unless \(\lambda\) is pure imaginary (has real part equal to zero) and \(|\lambda \Delta t|<1\). For the heat conduction equation in Problem Stability (which is already of the same form as the test equation), \(\lambda\) is clearly not imaginary, which explains the presence of the instability for the leap-frog scheme.
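A quick numerical check of this claim is to compute both roots of the quadratic for a real and for a pure imaginary value of \(\lambda\Delta t\); the sketch below (illustrative values only) uses numpy's polynomial root finder.

import numpy as np

for lam_dt in [-0.5, 0.5j]:    # a real (decaying) case and a pure imaginary case
    # roots of w**2 - 2*lam_dt*w - 1 = 0
    roots = np.roots([1.0, -2.0 * lam_dt, -1.0])
    print(lam_dt, np.abs(roots))

For the real case one root has magnitude greater than 1 (unstable); for the pure imaginary case with \(|\lambda\Delta t|<1\) both roots have magnitude exactly 1.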

+

Nevertheless, the leap-frog scheme is still useful for computations. In fact, it is often used in geophysical applications, as you will see later on when discretizing.

+

An example of where the leap-frog scheme is superior to the other first order schemes is for undamped periodic motion (which arose in the weather balloon example from Lab #1 ). This corresponds to the system of ordinary differential equations (with the damping parameter, \(\beta\), taken to be zero):

+
+\[\frac{dy}{dt} = u,\]
+
+\[\frac{du}{dt} = - \frac{\gamma}{m} y.\]
+

You’ve already discretized this problem using the forward difference formula, and the same can be done with the second order centered formula. We can then compare the forward Euler and leap-frog schemes applied to this problem; the code for doing this is in the oscillator module linked below.

+

Solution plots are given in the figure below, for parameters \(\gamma/m=1\), \(\Delta t=0.25\), \(y(0)=0.0\) and \(u(0)=1.0\), and demonstrate that the leap-frog scheme is stable, while forward Euler is unstable. This can easily be explained in terms of the stability criteria we derived for the two schemes when applied to the test equation. The undamped oscillator problem is a linear problem with pure imaginary eigenvalues, so as long as \(|\sqrt{\gamma/m}\,\Delta t|<1\), the leap-frog scheme is stable, which is obviously true for the parameter values we are given. Furthermore, the forward Euler stability condition \(|1+\lambda\Delta t|<1\) is violated for any choice of time step (when \(\lambda\) is pure imaginary), and so this scheme is always unstable for the undamped oscillator. The github link to the oscillator module is oscillator.py.

+
+
+
+
+
+
import numpy as np
import matplotlib.pyplot as plt
import numlabs.lab2.oscillator as os

# integrate the undamped oscillator (gamma/m = 1) with three schemes on the same grid
the_times = np.linspace(0, 20., 80)
yvec_init = [0, 1]
output_euler = os.euler(the_times, yvec_init)
output_mid = os.midpoint(the_times, yvec_init)
output_leap = os.leapfrog(the_times, yvec_init)
# the exact solution for y(0)=0, u(0)=1 and gamma/m = 1 is sin(t)
answer = np.sin(the_times)
plt.style.use('ggplot')
fig, ax = plt.subplots(1, 1, figsize=(7, 7))
# plot the global error (approximate minus exact) for each scheme
ax.plot(the_times, (output_euler[0, :] - answer), label='euler')
ax.plot(the_times, (output_mid[0, :] - answer), label='midpoint')
ax.plot(the_times, (output_leap[0, :] - answer), label='leapfrog')
ax.set(ylim=[-2, 2], xlim=[0, 20],
       title='global error between sin(t) and approx. for three schemes',
       xlabel='time', ylabel='approx - exact')
ax.legend(loc='best');
+
+
+
+

Figure numerical: Numerical solution to the undamped harmonic oscillator problem, using the forward Euler and leap-frog schemes. Parameter values: \(\gamma / m=1.0\), \(\Delta t=0.25\), \(y(0)=0\), \(u(0)=1.0\). The exact solution is a sinusoidal wave.

+

Had we taken a larger time step (such as \(\Delta t=2.0\), for example), then even the leap-frog scheme is unstable. Furthermore, if we add damping (\(\beta\neq 0\)), then the eigenvalues are no longer pure imaginary, and the leap-frog scheme is unstable no matter what time step we use.

+
+
+
+

Stiff Equations

+

This Lab has dealt only with ODE’s (and systems of ODE’s) that are non-stiff. Stiff equations are equations that have solutions with at least two widely varying time scales over which the solution changes. An example of stiff solution behaviour is a problem with solutions that have rapid, transitory oscillations, which die out over a short time scale, after which the solution slowly decays to an equilibrium. A small time step is required in the initial transitory region in order to capture the rapid oscillations. However, a larger time step can be taken in the non-oscillatory region where the solution is smoother. Hence, using a very small time step throughout the whole computation will result in very slow and inefficient computations.

+

There are also many other numerical schemes designed specifically for stiff equations, most of which are implicit schemes. We will not describe any of them here – you can find more information in a numerical analysis text such as Burden and Faires (1981).
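To get a feel for stiffness, the sketch below (an illustrative example, assuming scipy is available; the right hand side and parameter values are invented for this example) integrates a linear problem whose solution relaxes quickly onto a slowly varying curve: forward Euler is forced to take thousands of tiny steps by the fast time scale, while an implicit stiff solver covers the same interval in far fewer steps.

import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    # fast decay toward a slowly varying "equilibrium" cos(t)
    return -1000.0 * (y - np.cos(t))

# implicit, stiff-aware solver: relatively few steps despite the fast time scale
sol = solve_ivp(rhs, (0.0, 5.0), [2.0], method="BDF", rtol=1e-6)
print("BDF steps:", sol.t.size)

# forward Euler needs dt < 2/1000 just to stay stable
dt, y = 1.0e-3, 2.0
for n in range(int(5.0 / dt)):
    y = y + dt * rhs(n * dt, y)
print("forward Euler steps:", int(5.0 / dt), " final value:", y)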

+
+
+

Difference Approximations of Higher Derivatives

+

Higher derivatives can be discretized in a similar way to what we did for first derivatives. Let’s consider for now only the second derivative, for which one possible approximation is the second order centered formula:

+
+\[\frac{y(t_{i+1})-2y(t_i)+y(t_{i-1})}{(\Delta t)^2} = y^{\prime\prime}(t_i) + {\cal O}((\Delta t)^2).\]
+

There are, of course, many other possible formulae that we might use, but this is the most commonly used.
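As with the first-derivative formulas, the order can be checked numerically; this sketch (illustrative only) applies the centered second-derivative formula to \(\sin(t)\), so the error should drop by roughly a factor of 4 when \(\Delta t\) is halved.

import numpy as np

def d2_centered(f, t, dt):
    # second-order centered approximation of f''(t)
    return (f(t + dt) - 2*f(t) + f(t - dt)) / dt**2

t0 = 1.0
for dt in [0.1, 0.05, 0.025]:
    # the exact second derivative of sin is -sin
    err = abs(d2_centered(np.sin, t0, dt) + np.sin(t0))
    print(f"dt = {dt:6.3f}   error = {err:.2e}")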

+
+

Problem Taylor Series

+

(Hand in png file)

+
  • Use Taylor series to derive the second order centered formula above.

  • For more practice (although not required), try deriving a higher order approximation as well.
+
+
+
+

Summary

+

This lab has discussed the accuracy and stability of difference schemes for simple first order ODEs. The results of the problems should have made it clear to you that choosing an accurate and stable discretization for even a very simple problem is not straightforward. One must take into account not only the considerations of accuracy and stability, but also the cost or complexity of the scheme. Selecting a numerical method for a given problem can be considered as an art in itself.

+
+
+

Mathematical Notes

+
+

Taylor Polynomials and Taylor Series

+

Taylor Series are of fundamental importance in numerical analysis. They are the most basic tool for talking about the approximation of functions. Consider a function \(f(x)\) that is smooth – when we say “smooth”, what we mean is that its derivatives exist and are bounded (for the following discussion, we need \(f\) to have \((n+1)\) derivatives). We would like to approximate \(f(x)\) near the point \(x=x_0\), and we can do it as follows:

+
+\[f(x) = \underbrace{P_n(x)}_{\mbox{Taylor polynomial}} + \underbrace{R_n(x)}_{\mbox{remainder term}},\]
+

where

+
+\[P_n(x)=f(x_0)+ f^\prime(x_0)(x-x_0) + \frac{f^{\prime\prime}(x_0)}{2!}(x-x_0)^2 + \cdots + \frac{f^{(n)}(x_0)}{n!}(x-x_0)^n\]
+

is the \(n\)th order Taylor polynomial of \(f\) about \(x_0\), and

+
+\[R_n(x)=\frac{f^{(n+1)}(\xi(x))}{(n+1)!}(x-x_0)^{n+1}\]
+

is the remainder term or truncation error. The point \(\xi(x)\) in the error term lies somewhere between the points \(x_0\) and \(x\). If we let \(n\rightarrow\infty\), the resulting infinite sum is called the Taylor series of \(f(x)\) about \(x=x_0\). This result is also known as Taylor’s Theorem.

+

Remember that we assumed that \(f(x)\) is smooth (in particular, that its derivatives up to order \((n+1)\) exist and are finite). That means that all of the derivatives appearing in \(P_n\) and \(R_n\) are bounded. Therefore, there are two ways in which we can think of the Taylor polynomial \(P_n(x)\) as an approximation of \(f(x)\):

+
  1. First of all, let us fix \(n\). Then, we can improve the approximation by letting \(x\) approach \(x_0\), since as \((x-x_0)\) gets small, the error term \(R_n(x)\) goes to zero (\(n\) is considered fixed and all terms depending on \(n\) are thus constant). Therefore, the approximation improves when \(x\) gets closer and closer to \(x_0\).

  2. Alternatively, we can think of fixing \(x\). Then, we can improve the approximation by taking more and more terms in the series. When \(n\) is increased, the factorial in the denominator of the error term will eventually dominate the \((x-x_0)^{n+1}\) term (regardless of how big \((x-x_0)\) is), and thus drive the error to zero.
+

In summary, we have two ways of improving the Taylor polynomial approximation to a function: by evaluating it at points closer to the point \(x_0\); and by taking more terms in the series.

+

This latter property of the Taylor expansion can be seen by a simple example. Consider the Taylor polynomial for the function \(f(x)=\sin(x)\) about the point \(x_0=0\). All of the even terms are zero since they involve \(\sin(0)\), so that if we take \(n\) odd (\(n=2k+1\)), then the \(n\)th order Taylor polynomial for \(\sin(x)\) is

+
+\[P_{2k+1}(x)=x - \frac{x^3}{3!}+\frac{x^5}{5!} -\frac{x^7}{7!}+\cdots +(-1)^k\frac{x^{2k+1}}{(2k+1)!}. \quad (eq.\ taylor)\]
+

The plot in Figure: Taylor illustrates quite clearly how the approximation improves both as \(x\) approaches 0, and as \(n\) is increased.

+


+

Figure: Taylor – Plot of \(\sin(x)\) compared to its Taylor polynomial approximations about \(x_0=0\), for various values of \(n=2k+1\) in eq. taylor.

+

Consider a specific Taylor polynomial, say \(P_3(x)\) (fix \(n=3\)). Notice that for \(x\) far away from the origin, the polynomial is nowhere near the function \(\sin(x)\). However, it approximates the function quite well near the origin. On the other hand, we could take a specific point, \(x=5\), and notice that the Taylor series of orders 1 through 7 do not approximate the function very well at all. Nevertheless the approximation improves as \(n\) increases, as is shown by the 15th order Taylor polynomial.
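The sketch below (illustrative only) evaluates the odd-order Taylor polynomials of \(\sin(x)\) at \(x=5\), showing the slow but eventual convergence described above.

import numpy as np
from math import factorial

def taylor_sin(x, n):
    # nth order (n = 2k+1) Taylor polynomial of sin about x0 = 0
    k = (n - 1) // 2
    return sum((-1)**j * x**(2*j + 1) / factorial(2*j + 1) for j in range(k + 1))

x = 5.0
for n in [1, 3, 5, 7, 15]:
    print(f"P_{n:2d}(5) = {taylor_sin(x, n):10.4f}   sin(5) = {np.sin(x):.4f}")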

+
+
+

Floating Point Representation of Numbers

+

Unlike a mathematician, who can deal with real numbers having infinite precision, a computer can represent numbers with only a finite number of digits. The best way to understand how a computer stores a number is to look at its floating-point form, in which a number is written as

+
+\[\pm 0.d_1 d_2 d_3 \ldots d_k \times 10^n,\]
+

where each digit, \(d_i\) is between 0 and 9 (except \(d_1\), which must be non-zero). Floating point form is commonly used in the physical sciences to represent numerical values; for example, the Earth’s radius is approximately 6,400,000 metres, which is more conveniently written in floating point form as \(0.64\times 10^7\) (compare this to the general form above).

+

Computers actually store numbers in binary form (i.e. in base-2 floating point form, as compared to the decimal or base-10 form shown above). However, it is more convenient to use the decimal form in order to illustrate the basic idea of computer arithmetic. For a good discussion of the binary representation of numbers, see Burden & Faires [sec. 1.2] or Newman section 4.2.

+

For the remainder of this discussion, assume that we’re dealing with a computer that can store numbers with up to 8 significant digits (i.e. \(k=8\)) and exponents in the range \(-38 \leq n \leq 38\). Based on these values, we can make a few observations regarding the numbers that can be represented:

+
  • The largest number that can be represented is about \(1.0\times 10^{+38}\), while the smallest is \(1.0\times 10^{-38}\).

  • These numbers have a lot of holes, where real numbers are missed. For example, consider the two consecutive floating point numbers

    \[0.13391482 \times 10^5 \;\;\; {\rm and} \;\;\; 0.13391483 \times 10^5,\]

    or 13391.482 and 13391.483. Our floating-point number system cannot represent any numbers between these two values, and hence any number in between 13391.482 and 13391.483 must be approximated by one of the two values. Another way of thinking of this is to observe that \(0.13391482 \times 10^5\) does not represent just a single real number, but a whole range of numbers.

  • Notice that the same number of floating-point numbers can be represented between \(10^{-6}\) and \(10^{-5}\) as between \(10^{20}\) and \(10^{21}\). Consequently, the density of floating-point numbers increases as their magnitude becomes smaller. That is, there are more floating-point numbers close to zero than there are far away. This is illustrated in the figure below: the floating-point numbers (each represented by a \(\times\)) are more dense near the origin.
+


+

The values \(k=8\) and \(-38\leq n \leq 38\) correspond to what is known as single precision arithmetic, in which 4 bytes (or units of memory in a computer) are used to store each number. It is typical in many programming languages, including \(C++\), to allow the use of higher precision, or double precision, using 8 bytes for each number, corresponding to values of \(k=16\) and \(-308\leq n \leq 308\), thereby greatly increasing the range and density of numbers that can be represented. When doing numerical computations, it is customary to use double-precision arithmetic, in order to minimize the effects of round-off error (in a \(C++\) program, you can define a variable x to be double precision using the declaration double x;).
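In Python you can query the analogous limits of the binary single and double precision formats directly with numpy (a quick sketch, not part of the lab code):

import numpy as np

for dtype in (np.float32, np.float64):
    info = np.finfo(dtype)
    # eps is the gap between 1.0 and the next representable number
    print(dtype.__name__, " max:", info.max, " smallest normal:", info.tiny, " eps:", info.eps)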

+

Sometimes, double precision arithmetic may help in eliminating round-off error problems in a computation. On the minus side, double precision numbers require more storage than their single precision counterparts, and it is sometimes (but not always) more costly to compute in double precision. Ultimately, though, using double precision should not be expected to be a cure-all against the difficulties of round-off errors. The best approach is to use an algorithm that is not unstable with respect to round-off error. For an example where increasing precision will not help, see the section on Gaussian elimination in Lab #3.

+
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab2/01-lab2.ipynb b/notebooks/lab2/01-lab2.ipynb new file mode 100644 index 0000000..50353ce --- /dev/null +++ b/notebooks/lab2/01-lab2.ipynb @@ -0,0 +1,1294 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Lab 2: Stability and accuracy\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important - before you start**\n", + "\n", + "Before starting on a new lab, you should \"fetch\" any changes that we have made to the labs in our repository (we are continually trying to improve them). Follow the instructions here: https://rhwhite.github.io/numeric_2024/getting_started/python.html#Pulling-changes-from-the-github-repository \n", + "\n", + "Caution - if you have made changes to, for example, lab 1, but you didn't duplicate and rename the file first, this can write over your changes! Follow the instructions above carefully to not lose your work, and remember to always create a copy of each lab for you to do your work in. \n", + "\n", + "This step will be smoother if you haven't saved any changes to the default files (sometimes even opening it, and saving it, counts as a change!)\n", + "\n", + "\n", + "**Some notes about navigating in Jupyter Lab**\n", + "\n", + "In Jupyter Lab, once you have opened a lab, if you click on the symbol that looks like three bullet points over on the far left, this brings up a contents page that allows you to jump immediately to any particular section in the lab you are currently looking at.\n", + "\n", + "The jigsaw puzzle icon below this allows you to install extensions, if you want extra functionality in your notebooks (you may need to go to Settings, Enable Extension Manager to access this).\n", + "\n", + "Also, in Jupyter Lab you can click on links in the 'List of Problems' to take you to directly to each problem. Remember to check canvas for which problems you need to submit for the weekly assignments. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems\n", + "\n", + "There are probems throughout this lab, as you might find at the end of a textbook chapter. Some of these problems will be set as an assignment for you to hand in to be graded - see the Lab 2: Assignment on the canvas site for which problems you should hand in. However, if you have time, you are encouraged to work through all of these problems to help you better understand the material.\n", + "\n", + "- [Problem Accuracy](#Problem-Accuracy) \n", + "- [Problem Stability](#Problem-Stability) \n", + "- [Problem Backward-Euler](#Problem-Backward-Euler)\n", + "- [Problem Taylor Series](#Problem-Taylor-Series)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "\n", + "In Lab \\#1 you were introduced to the concept of discretization, and saw that\n", + "there were many different ways to approximate a given problem. This Lab\n", + "will delve further into the concepts of accuracy and stability of\n", + "numerical schemes, in order that we can compare the many possible\n", + "discretizations.\n", + "\n", + "At the end of this Lab, you will have seen where the error in a\n", + "numerical scheme comes from, and how to quantify the error in terms of\n", + "*order*. 
The stability of several examples will be demonstrated, so that\n", + "you can recognize when a scheme is unstable, and how one might go about\n", + "modifying the scheme to eliminate the instability.\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- Define the term and identify: Implicit numerical scheme and Explicit\n", + " numerical scheme.\n", + "\n", + "- Define the term, identify, or write down for a given equation:\n", + " Backward Euler method and Forward Euler method.\n", + "\n", + "- Explain the difference in terminology between: Forward difference\n", + " discretization and Forward Euler method.\n", + "\n", + "- Define: truncation error, local truncation error, global truncation\n", + " error, and stiff equation.\n", + "\n", + "- Explain: a predictor-corrector method.\n", + "\n", + "- Identify from a plot: an unstable numerical solution.\n", + "\n", + "- Be able to: find the order of a scheme, use the test equation to find\n", + " the stability of a scheme, find the local truncation error from a\n", + " graph of the exact solution and the numerical solution.\n", + "\n", + "- Evaluate and compare the accuracy and stability of at least 3 different discretization methods." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "\n", + "This lab is designed to be self-contained. If you would like\n", + "additional background on any of the following topics, I'd recommend this book:\n", + "[Finite difference computing with PDES](https://link-springer-com.ezproxy.library.ubc.ca/book/10.1007/978-3-319-55456-3) by Hans Petter Langtangen and Svein Linge\n", + "The entire book is available on [github](http://hplgit.github.io/fdm-book/doc/pub/book/html/decay-book.html) with the python code [here](https://github.com/hplgit/fdm-book/tree/master/src). Much of the content of this lab is summarized in [Appendix B -- truncation analysis](https://link-springer-com.ezproxy.library.ubc.ca/content/pdf/bbm%3A978-3-319-55456-3%2F1.pdf)\n", + "\n", + "\n", + "### Other recommended books\n", + "\n", + "- **Differential Equations:**\n", + "\n", + " - Strang (1986), Chapter 6 (ODE’s).\n", + "\n", + "- **Numerical Methods:**\n", + "\n", + " - Strang (1986), Section 6.5 (a great overview of difference methods\n", + " for initial value problems)\n", + "\n", + " - Burden and Faires (1981), Chapter 5 (a more in-depth analysis of the\n", + " numerical methods and their accuracy and stability).\n", + " \n", + " - Newman (2013) Derivatives, round-off and truncation errors, Section 5.10 pp. 188-198.\n", + " Forward Euler, mid-point and leap-frog methods, Chapter 8 pp. 327-335.\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction \n", + "\n", + "\n", + "Remember from Lab \\#1  that you were introduced to three approximations\n", + "to the first derivative of a function, $T^\\prime(t)$. 
If the independent\n", + "variable, $t$, is discretized at a sequence of N points,\n", + "$t_i=t_0+i \\Delta t$, where $i\n", + "= 0,1,\\ldots, N$ and $\\Delta t= 1/N$, then we can write the three\n", + "approximations as follows:\n", + "\n", + " **Forward difference formula:**\n", + "\n", + " $$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_i}{\\Delta t}$$\n", + "\n", + " **Backward difference formula:**\n", + "\n", + " $$T^\\prime(t_i) \\approx \\frac{T_{i}-T_{i-1}}{\\Delta t}$$\n", + "\n", + " **Centered difference formula** (add together the forwards and backwards formula):\n", + "\n", + " $$T^\\prime(t_i) \\approx \\frac{T_{i+1}-T_{i-1}}{2 \\Delta t}$$\n", + "\n", + "In fact, there are many other possible methods to approximate the\n", + "derivative (some of which we will see later in this Lab). With this\n", + "large choice we have in the choice of approximation scheme, it is not at\n", + "all clear at this point which, if any, of the schemes is the “best”. It\n", + "is the purpose of this Lab to present you with some basic tools that\n", + "will help you to decide on an appropriate discretization for a given\n", + "problem. There is no generic “best” method, and the choice of\n", + "discretization will always depend on the problem that is being dealt\n", + "with.\n", + "\n", + "In an example from Lab \\#1, the forward difference formula was used to\n", + "compute solutions to the saturation development equation, and you saw\n", + "two important results:\n", + "\n", + "- reducing the grid spacing, $\\Delta t$, seemed to improve the\n", + " accuracy of the approximate solution; and\n", + "\n", + "- if $\\Delta t$ was taken too large (that is, the grid was not fine\n", + " enough), then the approximate solution exhibited non-physical\n", + " oscillations, or a *numerical instability*.\n", + "\n", + "There are several questions that arise from this example:\n", + "\n", + "1. Is it always true that reducing $\\Delta t$ will improve the discrete\n", + " solution?\n", + "\n", + "2. Is it possible to improve the accuracy by using another\n", + " approximation scheme (such as one based on the backward or centered\n", + " difference formulas)?\n", + "\n", + "3. Are these numerical instabilities something that always appear when\n", + " the grid spacing is too large?\n", + "\n", + "4. By using another difference formula for the first derivative, is it\n", + " possible to improve the stability of the approximate solution, or to\n", + " eliminate the stability altogether?\n", + "\n", + "The first two questions, related to *accuracy*, will be dealt with in\n", + "the next section, [Section 5 (1.5)](#Accuracy), and the last two will have to wait\n", + "until [Section 6 (1.6)](#Stability) when *stability* is discussed." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Accuracy of Difference Approximations \n", + "\n", + "Before moving on to the details of how to measure the error in a scheme,\n", + "let’s take a closer look at another example which we’ve seen already …\n", + "\n", + "\n", + "\n", + "### Accuracy Example\n", + "\n", + "Let’s go back to the heat conduction equation from\n", + "Lab \\#1, where the temperature, $T(t)$, of a rock immersed in water or\n", + "air, evolves in time according to the first order ODE:\n", + "$$\\frac{dT}{dt} = \\lambda(T,t) \\, (T-T_a) $$ with initial condition $T(0)$. 
We saw\n", + "in the section on the **forward Euler method** that one way to discretize\n", + "this equation was using the forward difference formula  for the\n", + "derivative, leading to\n", + "\n", + "$T_{i+1} = T_i + \\Delta t \\, \\lambda(T_i,t_i) \\, (T_i-T_a).$ (**eq: euler**)\n", + "\n", + "Similarly, we could apply either of the other two difference formulae to obtain other difference schemes, namely what we called the\n", + "**backward Euler method**\n", + "\n", + "$T_{i+1} = T_i + \\Delta t \\, \\lambda(T_{i+1},t_{i+1}) \\, (T_{i+1}-T_a),$ (**eq: beuler**)\n", + "\n", + "and the **mid-point** or **leap-frog** centered method\n", + "\n", + "$T_{i+1} = T_{i-1} + 2 \\Delta t \\, \\lambda(T_{i},t_{i}) \\, (T_{i}-T_a).$ (**eq: midpoint**)\n", + "\n", + "The forward Euler and mid-point schemes are called *explicit methods*,\n", + "since they allow the temperature at any new time to be computed in terms\n", + "of the solution values at *previous* time steps only, i.e. it does not require \n", + "any information from current or future time steps. The backward Euler\n", + "scheme, on the other hand, is called an *implicit scheme*, since it\n", + "gives an equation defining $T_{i+1}$ implicitly, that is, the function \n", + "$\\lambda$ takes the value $T_{i+1}$ as an input, in order to calculate $T_{i+1}$. \n", + "If $\\lambda$ depends non-linearly on $T$, then this equation may require an additional step,\n", + "involving the iterative solution of a non-linear equation. We will pass\n", + "over this case for now, and refer you to a reference such as\n", + "Burden and Faires (1981) for the details on non-linear solvers such as *Newton’s\n", + "method*." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Important point**: Note that **eq: midpoint** requires the value of the temperature at two points: $T_{i-1}$ and \n", + "$T_{i}$ to calculate the temperature $T_{i+1}$. This requires an approximate guess for $T_i$, which we will discuss\n", + "in more detail below." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "For now, let’s assume that the function $\\lambda$ is a constant, and thus it is independent of $T$\n", + "and $t$. 
Plots of the numerical results from each of these schemes,\n", + "along with the exact solution, are given in Figure 1\n", + "(with the “unphysical” parameter value $\\lambda=0.8$ chosen to enhance\n", + "the show the growth of numerical errors, even though in a real material\n", + "this would violate conservation of energy).\n", + "\n", + "The functions used in make the following figure are imported from [lab2_functions.py](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/lab2_functions.py)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2023-08-15T20:57:12.556777Z", + "start_time": "2023-08-15T20:57:09.233103Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# import and define functions\n", + "%matplotlib inline\n", + "import context\n", + "import matplotlib.pyplot as plt\n", + "from numlabs.lab2.lab2_functions import euler,beuler,leapfrog\n", + "import numpy as np\n", + "plt.style.use('ggplot')\n", + "\n", + "#\n", + "# save our three functions to a dictionary, keyed by their names\n", + "#\n", + "theFuncs={'euler':euler,'beuler':beuler,'leapfrog':leapfrog}\n", + "#\n", + "# store the results in another dictionary\n", + "#\n", + "output={}\n", + "#\n", + "#end time = 10 seconds\n", + "#\n", + "tend=10.\n", + "#\n", + "# start at 30 degC, air temp of 20 deg C\n", + "#\n", + "Ta=20.\n", + "To=30.\n", + "#\n", + "# note that lambda is a reserved keyword in python so call this\n", + "# thelambda\n", + "#\n", + "theLambda=0.8 #units have to be per minute if time in seconds\n", + "#\n", + "# dt = 10/npts = 10/30 = 1/3\n", + "#\n", + "npts=30\n", + "for name,the_fun in theFuncs.items():\n", + " output[name]=the_fun(npts,tend,To,Ta,theLambda)\n", + "#\n", + "# calculate the exact solution for comparison\n", + "#\n", + "exactTime=np.linspace(0,tend,npts)\n", + "exactTemp=Ta + (To-Ta)*np.exp(theLambda*exactTime)\n", + "#\n", + "# now plot all four curves\n", + "#\n", + "fig,ax=plt.subplots(1,1,figsize=(8,8))\n", + "ax.plot(exactTime,exactTemp,label='exact',lw=2)\n", + "for fun_name in output.keys():\n", + " the_time,the_temp=output[fun_name]\n", + " ax.plot(the_time,the_temp,label=fun_name,lw=2)\n", + "ax.set_xlim([0,2.])\n", + "ax.set_ylim([30.,90.])\n", + "ax.grid(True)\n", + "ax.set(xlabel='time (seconds)',ylabel='bar temp (deg C)')\n", + "out=ax.legend(loc='upper left')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure 1** A plot of the exact and computed solutions for the temperature of a\n", + "rock, with parameters: $T_a=20$, $T(0)=30$, $\\lambda= +0.8$,\n", + "$\\Delta t=\\frac{1}{3}$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice from these results that the mid-point/leap-frog scheme is the most accurate, and backward Euler the least accurate.\n", + "\n", + "The next section explains why some schemes are more accurate than others, and introduces a means to quantify the accuracy of a numerical approximation.\n", + "\n", + "### Round-off Error and Discretization Error \n", + "\n", + "From [Accuracy Example](#ex_accuracy) and the example in the Forward Euler\n", + "section of the previous lab,  it is obvious that a numerical\n", + "approximation is exactly that - **an approximation**. The process of\n", + "discretizing a differential equation inevitably leads to errors. 
In this\n", + "section, we will tackle two fundamental questions related to the\n", + "accuracy of a numerical approximation:\n", + "\n", + "- Where does the error come from (and how can we measure it)?\n", + "\n", + "- How can the error be controlled?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "### Where does the error come from? \n", + "\n", + "#### Round-off error:\n", + "\n", + "When attempting to solve differential equations on a computer, there are\n", + "two main sources of error. The first, *round-off error*, derives from\n", + "the fact that a computer can only represent real numbers by *floating\n", + "point* approximations, which have only a finite number of digits of\n", + "accuracy.\n", + "\n", + "* Mathematical note [floating point notation](#floating-point)\n", + "\n", + "\n", + "For example, we all know that the number $\\pi$ is a non-repeating\n", + "decimal, which to the first twenty significant digits is\n", + "$3.1415926535897932385\\dots$ Imagine a computer which stores only eight\n", + "significant digits, so that the value of $\\pi$ is rounded to\n", + "$3.1415927$.\n", + "\n", + "In many situations, these five digits of accuracy may be sufficient.\n", + "However, in some cases, the results can be catastrophic, as shown in the\n", + "following example: $$\\frac{\\pi}{(\\pi + 0.00000001)-\\pi}.$$ Since the\n", + "computer can only “see” 8 significant digits, the addition\n", + "$\\pi+0.00000001$ is simply equal to $\\pi$ as far as the computer is\n", + "concerned. Hence, the computed result is $\\frac{1}{0}$ - an undefined\n", + "expression! The exact answer $100000000\\pi$, however, is a very\n", + "well-defined non-zero value.\n", + "\n", + "A side note: round-off errors played a key role in Edward Lorenz's exploration of chaos theory in physics, see https://www.aps.org/publications/apsnews/200301/history.cfm" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Truncation error:\n", + "\n", + "The second source of error stems from the discretization of the problem,\n", + "and hence is called *discretization error* or *truncation error*. In\n", + "comparison, round-off error is always present, and is independent of the\n", + "discretization being used. The simplest and most common way to analyse\n", + "the truncation error in a scheme is using *Taylor series expansions*.\n", + "\n", + "Let us begin with the forward difference formula for the first\n", + "derivative, , which involves the discrete solution at times $t_{i+1}$\n", + "and $t_{i}$. Since only continuous functions can be written as Taylor\n", + "series, we expand the exact solution (instead of the discrete values\n", + "$T_i$) at the discrete point $t_{i+1}$:\n", + "\n", + "$$T(t_{i+1}) = T(t_i+\\Delta t) = T(t_i) + (\\Delta t) T^\\prime(t_i) + \n", + " \\frac{1}{2}(\\Delta t)^2 T^{\\prime\\prime}(t_i) +\\cdots$$\n", + "\n", + "\n", + "Rewriting to clean this up slightly gives **eq: feuler**\n", + "\n", + "$$\\begin{aligned}\n", + "T(t_{i+1}) &= T(t_i) + \\Delta t T^{\\prime}(t_i,T(t_i)) +\n", + " \\underbrace{\\frac{1}{2}(\\Delta t)^2T^{\\prime\\prime}(t_i) + \\cdots}\n", + "_{\\mbox{ truncation error}} \\\\ \\; \n", + " &= T(t_i) + \\Delta t T^{\\prime}(t_i) + {\\cal O}(\\Delta t^2).\n", + " \\end{aligned}$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "This second expression writes the truncation error term in terms of\n", + "*order notation*. 
If we write $y = {\\cal O}(\\Delta t)$, then we mean\n", + "simply that $y < c \\cdot \\Delta t$ for some constant $c$, and we say that\n", + "“ $y$ is first order in $\\Delta t$ ” (since it depends on $\\Delta t$ to\n", + "the first power) or “ $y$ is big-oh of $\\Delta t$.” As $\\Delta t$ is\n", + "assumed small, the next term in the series, $\\Delta t^2$ is small\n", + "compared to the $\\Delta t$ term. In words, we say that forward euler is\n", + "*first order accurate* with errors of second order.\n", + "\n", + "It is clear from this that as $\\Delta t$ is reduced in size (as the\n", + "computational grid is refined), the error is also reduced. If you\n", + "remember that we derived the approximation from the limit definition of\n", + "derivative, then this should make sense. This dependence of the error on\n", + "powers of the grid spacing $\\Delta t$ is an underlying characteristic of\n", + "difference approximations, and we will see approximations with higher\n", + "orders in the coming sections …\n", + "\n", + "There is one more important distinction to be made here. The “truncation\n", + "error” we have been discussing so far is actually what is called *local\n", + "truncation error*. It is “local” in the sense that we have expanded the\n", + "Taylor series *locally* about the exact solution at the point $t_i$.\n", + "\n", + "There is also a *global truncation error* (or, simply, *global error*),\n", + "which is the error made during the course of the entire computation,\n", + "from time $t_0$ to time $t_n$. The difference between local and global\n", + "truncation error is illustrated in Figure 2. If the local error stays approximately\n", + "constant, then the global error will be approximately the local error times\n", + "the number of timesteps. For a fixed simulation length of $t$, the number of\n", + "timesteps required is $t/\\Delta t$, thus the global truncation error will be\n", + "approximately of the order of $1/\\Delta t$ times the local error, or about one \n", + "order of $\\Delta t$ *worse* (lower order) than the local error." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure Error:** Local and global truncation error. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "It is easy to get a handle on the order of the local truncation error\n", + "using Taylor series, regardless of whether the exact solution is known,\n", + "but no similar analysis is available for the global error. We can write\n", + "\n", + "$$\\text{global error} = |T(t_n)-T_n|$$\n", + "\n", + "but this expression can only be\n", + "evaluated if the exact solution is known ahead of time (which is not the\n", + "case in most problems we want to compute, since otherwise we wouldn’t be\n", + "computing it in the first place!). Therefore, when we refer to\n", + "truncation error, we will always be referring to the local truncation\n", + "error." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "#### Second order accuracy\n", + "\n", + "Above we mentioned a problem with evaluating the mid-point method. 
If we start with three points $(t_0,t_1,t_2)$, \n", + "each separated by $\\Delta t/2$ so that $t_2 - t_0=\\Delta t$\n", + "\n", + "\\begin{align}\n", + "y(t_2)&=y(t_1) + y^\\prime (t_1,y(t_1))(t_2 - t_1) + \\frac{y^{\\prime \\prime}(t_1,y(t_1))}{2} (t_2 - t_1)^2 + \\frac{y^{\\prime \\prime \\prime}(t_1,y(t_1))}{6} (t_2 - t_1)^3 + h.o.t. \\ (eq.\\ a)\\\\\n", + "y(t_0)&=y(t_1) + y^\\prime (t_1,y(t_1))(t_0 - t_1) + \\frac{y^{\\prime \\prime}(t_1)}{2} (t_0 - t_1)^2 + \\frac{y^{\\prime \\prime \\prime}(t_1)}{6} (t_0 - t_1)^3 + h.o.t. \\ (eq.\\ b)\n", + "\\end{align}\n", + "\n", + "\n", + "where h.o.t. stands for \"higher order terms\". Rewriting in terms of $\\Delta t$:\n", + "\n", + "\n", + "\\begin{align}\n", + "y(t_2)&=y(t_1) + \\frac{\\Delta t}{2}y^\\prime (t_1,y(t_1)) + \\frac{\\Delta t^2}{8} y^{\\prime \\prime}(t_1,y(t_1)) + \\frac{\\Delta t^3}{48} y^{\\prime \\prime \\prime}(t_1,y(t_1)) + h.o.t. \\ (eq.\\ a)\\\\\n", + "y(t_0)&=y(t_1) - \\frac{\\Delta t}{2}y^\\prime (t_1,y(t_1)) + \\frac{\\Delta t^2}{8} y^{\\prime \\prime}(t_1,y(t_1)) - \\frac{\\Delta t^3}{48} y^{\\prime \\prime \\prime}(t_1,y(t_1)) + h.o.t. \\ (eq.\\ b)\n", + "\\end{align}\n", + "\n", + "\n", + "and subtracting:\n", + "\n", + "\\begin{align}\n", + "y(t_2)&=y(t_0) + \\Delta t y^\\prime (t_1,y(t_1)) + \\frac{\\Delta t^3}{24} y^{\\prime \\prime \\prime}(t_1,y(t_1)) + h.o.t. \\ (eq.\\ c)\n", + "\\end{align}\n", + "\n", + "where $t_1=t_0 + \\Delta t/2$\n", + "\n", + "Comparing with [eq: feuler](#eq:feuler) we can see that we've canceled the $\\Delta t^2$ terms, so that\n", + "if we drop the $\\frac{\\Delta t^3}{24} y^{\\prime \\prime \\prime}(t_1,y(t_1))$\n", + "and higher order terms we're doing one order better that foward euler, as long as we can solve the problem of\n", + "estimating y at the midpoint: $y(t_1) = y(t_0 + \\Delta t/2)$\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Mid-point and leap-frog\n", + "\n", + "The mid-point and leap-frog methods take two slightly different approaches to estimating $y(t_0 + \\Delta t/2)$. \n", + "For the [explicit mid-point method](https://en.wikipedia.org/wiki/Midpoint_method), we estimate $y$ at\n", + "the midpoint by taking a half-step:\n", + "\n", + "\n", + "\\begin{align}\n", + "k_1 & = \\Delta t y^\\prime(t_0,y(t_0)) \\\\\n", + "k_2 & = \\Delta t y^\\prime(t_0 + \\Delta t/2,y(t_0) + k_1/2) \\\\\n", + "y(t_0 + \\Delta t) &= y(t_0) + k_2\n", + "\\end{align}\n", + "\n", + "\n", + "Compare this to the [leap-frog method](https://en.wikipedia.org/wiki/Leapfrog_integration), which uses the results\n", + "from one half-interval to calculate the results for the next half-interval:\n", + "\n", + "\n", + "\\begin{align}\n", + "y(t_0 + \\Delta t/2) & = y(t_0) + \\frac{\\Delta t}{2} y^\\prime(t_0,y(t_0))\\ (i) \\\\\n", + "y(t_0 + \\Delta t) & = y(t_0) + \\Delta t y^\\prime(t_0 + \\Delta t/2,y(t_0 + \\Delta t/2)\\ (ii)\\\\\n", + "y(t_0 + 3 \\Delta t/2) & = y(t_0 + \\Delta t/2) + \\Delta t y^\\prime(t_0 + \\Delta t,y(t_0 + \\Delta t))\\ (iii) \\\\\n", + "y(t_0 + 2 \\Delta t) & = y(t_0 + \\Delta t) + \\Delta t y^\\prime(t_0 + 3\\Delta t/2,y(t_0 + 3 \\Delta t/2))\\ (iv) \\\\\n", + "\\end{align}\n", + "\n", + "\n", + "Comparing (iii) and (iv) shows how the method gets its name: the half-interval and whole interval values\n", + "are calculated by leaping over each other. 
Once the first half and whole steps are done, the rest of the\n", + "integration is completed by repeating (iii) and (iv) as until the endpoint is reached.\n", + "\n", + "The leap-frog scheme has the advantage that it is *time reversible* or as the Wikipedia article says *sympletic*. \n", + "This means that estimating $y(t_1)$ and then using that value to go backwards by $-\\Delta t$ yields $y(t_0)$\n", + "exactly, which the mid-point method does not. The mid-point method, however, is one member (the 2nd order member)\n", + "of a family of *Runge Kutta* integrators, which will be covered in more detail in Lab 4.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### How can we control the error? \n", + "\n", + "Now that we’ve determined the source of the error in numerical methods,\n", + "we would like to find a way to control it; that is, we would like to be\n", + "able to compute and be confident that our approximate solution is\n", + "“close” to the exact solution. Round-off error is intrinsic to all\n", + "numerical computations, and cannot be controlled (except to develop\n", + "methods that do not magnify the error unduly … more on this later).\n", + "Truncation error, on the other hand, *is* under our control.\n", + "\n", + "In the simple ODE examples that we’re dealing with in this lab, the\n", + "round-off error in a calculation is much smaller than the truncation\n", + "error. Furthermore, the schemes being used are *stable with respect to\n", + "round-off error* in the sense that round-off errors are not magnified in\n", + "the course of a computation. So, we will restrict our discussion of\n", + "error control in what follows to the truncation error.\n", + "\n", + "However, there are many numerical algorithms in which the round-off\n", + "error can dominate the the result of a computation (Gaussian elimination\n", + "is one example, which you will see in Lab \\#3 ), and so we must always\n", + "keep it in mind when doing numerical computations.\n", + "\n", + "There are two fundamental ways in which the truncation error in an\n", + "approximation  can be reduced:\n", + "\n", + "1. **Decrease the grid spacing**. Provided that the second derivative of\n", + " the solution is bounded, it is clear from the error term in **eq: feuler** that as\n", + " $\\Delta t$ is reduced, the error will also get smaller. This principle was\n", + " demonstrated in an example from Lab \\#1 using the Forward Euler method. The disadvantage to\n", + " decreasing $\\Delta t$ is that the cost of the computation increases since more\n", + " steps must be taken. Also, there is a limit to how small $\\Delta t$ can be,\n", + " beyond which round-off errors will start polluting the computation.\n", + "\n", + "2. **Increase the order of the approximation**. We saw above that the\n", + " forward difference approximation of the first derivative is first\n", + " order accurate in the grid spacing. It is also possible to derive\n", + " higher order difference formulas which have a leading error term of\n", + " the form $(\\Delta t)^p$, with $p>1$. As noted above in [Section Second Order](#sec_secondOrder)\n", + " the midpoint formula\n", + " is a second order scheme, and some further examples will be given in\n", + " [Section Higher order Taylor](#sec_HigherOrderTaylor). 
The main disadvantage to using\n", + " very high order schemes is that the error term depends on higher\n", + " derivatives of the solution, which can sometimes be very large – in\n", + " this case, the stability of the scheme can be adversely affected\n", + " (for more on this, see [Section Stability](#Stability).\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-16T06:27:59.383864Z", + "start_time": "2022-01-16T06:27:59.341911Z" + } + }, + "source": [ + "### Problem Accuracy\n", + "\n", + "In order to investigate these two approaches to improving the accuracy of an approximation, you can use the code in\n", + "[terror.ipynb](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/terror2.ipynb)\n", + "to play with the solutions to the heat conduction equation. You will need the additional functions provided for this lab. These can be found on your local computer: numeric_2024/numlabs/lab2 (you will need to fetch upstream from github to get recent changes from our version to your clone before pulling those changes to your local machine; don't forget to commit your previous labs!). For a given function $\\lambda(T)$, and specified parameter values, you should experiment with various time steps and schemes, and compare the computed results (Note: only the answers to the assigned questions need to be handed in). Look at the different schemes (euler, leap-frog, midpoint, 4th order runge kutta) run them for various total times (tend) and step sizes (dt=tend/npts).\n", + "\n", + "The three schemes that will be used here are forward Euler (first order), leap-frog (second order) and the fourth order Runge-Kutta scheme (which will be introduced more thoroughly in Lab 4).\n", + "\n", + "Try three different step sizes for all three schemes for a total of 9 runs. It’s helpful to be able to change the axis limits to look at various parts of the plot." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Use your 9 results to answer parts a and b below.\n", + "\n", + "- a\\) Does increasing the order of the scheme, or decreasing the time step\n", + " always improve the solution?\n", + "\n", + "- b\\) How would you compute the local truncation error from the error plot?\n", + " And the global error? Do this on a plot for one set of parameters.\n", + "\n", + "- c\\) Similarly, how might you estimate the *order* of the local truncation\n", + " error? The order of the global error? ( **Hint:** An order $p$ scheme\n", + " has truncation error that looks like $c\\cdot(\\Delta t)^p$. Read the\n", + " error off the plots for several values of the grid spacing and use this\n", + " to find $p$.) Are the local and global error significantly different?\n", + " Why or why not?\n", + "\n", + "### Other Approximations to the First Derivative \n", + "\n", + "The Taylor series method of deriving difference formulae for the first\n", + "derivative is the simplest, and can be used to obtain approximations\n", + "with even higher order than two. There are also many other ways to\n", + "discretize the derivatives appearing in ODE’s, as shown in the following\n", + "sections…" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Higher Order Taylor Methods \n", + "\n", + "As mentioned earlier, there are many other possible approximations to\n", + "the first derivative using the Taylor series approach. The basic\n", + "approach in these methods is as follows:\n", + "\n", + "1. 
expand the solution in a Taylor series at one or more points\n", + " surrounding the point where the derivative is to be approximated\n", + " (for example, for the centered scheme, you used two points,\n", + " $T(t_i+\\Delta t)$ and $T(t_i-\\Delta t)$. You also have to make sure that you\n", + " expand the series to a high enough order …\n", + "\n", + "2. take combinations of the equations until the $T_i$ (and possibly\n", + " some other derivative) terms are eliminated, and all you’re left\n", + " with is the first derivative term.\n", + "\n", + "One example is the fourth-order centered difference formula for the\n", + "first derivative:\n", + "$$\\frac{-T(t_{i+2})+8T(t_{i+1})-8T(t_{i-1})+T(t_{i-2})}{12\\Delta t} =\n", + " T^\\prime(t_i) + {\\cal O}((\\Delta t)^4)$$\n", + "\n", + "**Quiz:** Try the quiz at [this\n", + "link](https://phaustin.github.io/numeric/quizzes2/order.html)\n", + "related to this higher order scheme." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Predictor-Corrector Methods \n", + "\n", + "Another class of discretizations are called *predictor-corrector\n", + "methods*. Implicit methods can be difficult or expensive to use because\n", + "of the solution step, and so they are seldom used to integrate ODE’s.\n", + "Rather, they are often used as the basis for predictor-corrector\n", + "algorithms, in which a “prediction” for $T_{i+1}$ based only on an\n", + "explicit method is then “corrected” to give a better value by using this\n", + "precision in an implicit method.\n", + "\n", + "To see the basic idea behind these methods, let’s go back (once again)\n", + "to the backward Euler method for the heat conduction problem which\n", + "reads:\n", + "$$T_{i+1} = T_{i} + \\Delta t \\, \\lambda( T_{i+1}, t_{i+1} ) \\, ( T_{i+1}\n", + "- T_a ).$$ Note that after applying the backward difference formula ,\n", + "all terms in the right hand side are evaluated at time $t_{i+1}$.\n", + "\n", + "Now, $T_{i+1}$ is defined implicitly in terms of itself, and unless\n", + "$\\lambda$ is a very simple function, it may be very difficult to solve\n", + "this equation for the value of $T$ at each time step. One alternative,\n", + "mentioned already, is the use of a non-linear equation solver such as\n", + "Newton’s method to solve this equation. However, this is an iterative\n", + "scheme, and can lead to a lot of extra expense. A cheaper alternative is\n", + "to realize that we could try estimating or *predicting* the value of\n", + "$T_{i+1}$ using the simple explicit forward Euler formula and then use\n", + "this in the right hand side, to obtain a *corrected* value of $T_{i+1}$.\n", + "The resulting scheme, $$\\begin{array}{ll}\n", + " \\mathbf{Prediction}: & \\widetilde{T}_{i+1} = T_i + \\Delta t \\,\n", + " \\lambda(T_i,t_i) \\, (T_i-T_a), \\\\ \\; \\\\\n", + " \\mathbf{Correction}: & T_{i+1} = T_i + \\Delta t \\,\n", + " \\lambda(\\widetilde{T}_{i+1},t_{i+1}) \\, (\\widetilde{T}_{i+1}-T_a).\n", + "\\end{array}$$\n", + "\n", + "This method is an explicit scheme, which can also be shown to be second\n", + "order accurate in . It is the simplest in a whole class of schemes\n", + "called *predictor-corrector schemes* (more information is available on\n", + "these methods in a numerical analysis book such as  @burden-faires)." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Other Methods \n", + "\n", + "The choice of methods is made even greater by two other classes of\n", + "schemes:\n", + "\n", + " **Runge-Kutta methods:**\n", + "\n", + "- We have already seen two examples of the Runge-Kutta family of integrators: Forward Euler is a first order Runge-Kutta, and the midpoint method is second order Runge-Kutta. Fourth and fifth order Runge-Kutta algorithms will be described in Labs #4 and #5\n", + "\n", + "**Multi-step methods:**\n", + " \n", + "- These use values of the solution at more than one previous time step in order to increase the accuracy. Compare these to one-step schemes, such as forward Euler, which use the solution only at one previous step.\n", + "\n", + "More can be found on these (and other) methods in  Burden and Faires (1981) and Newman (2013)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Accuracy Summary\n", + "\n", + "In this section, you’ve been given a short overview of the accuracy of\n", + "difference schemes for first order ordinary differential equations.\n", + "We’ve seen that accuracy can be improved by either decreasing the grid\n", + "spacing, or by choosing a higher order scheme from one of several\n", + "classes of methods. When using a higher order scheme, it is important to\n", + "realize that the cost of the computation usually rises due to an added\n", + "number of function evaluations (especially for multi-step and\n", + "Runge-Kutta methods). When selecting a numerical scheme, it is important\n", + "to keep in mind this trade-off between accuracy and cost.\n", + "\n", + "However, there is another important aspect of discretization that we\n", + "have pretty much ignored. The next section will take a look at schemes\n", + "of various orders from a different light, namely that of *stability*." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-16T05:49:29.341551Z", + "start_time": "2022-01-16T05:49:29.266183Z" + } + }, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Stability of Difference Approximations \n", + "\n", + "The easiest way to introduce the concept of stability is for you to see\n", + "it yourself.\n", + "\n", + "### Problem Stability\n", + "\n", + "This example is a slight modification of [Problem accuracy](#Problem-Accuracy) from the previous section on accuracy. We will add one scheme (backward euler) and drop the 4th order Runge-Kutta, and change the focus from error to stability. The value of $\\lambda$ is assumed a constant, so that the backward Euler scheme results in an explicit method, and we’ll also compute a bit further in time, so that any instability manifests itself more clearly. Run the [stability2.ipynb](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/stability2.ipynb) notebook in numlabs/lab2 with $\\lambda= -8\\ s^{-1}$, with $\\Delta t$ values that just straddle the stability condition for the forward euler scheme\n", + "($\\Delta t < \\frac{-2}{\\lambda}$, derived below). Create plots that show that \n", + "1) the stability condition does in fact predict the onset of the instablity in the euler scheme, and \n", + "\n", + "2) determine whether the backward euler and leap-frog are stable or unstable for the same $\\Delta t$ values. 
(you should run out to longer than tend=10 seconds to see if there is a delayed instability.)\n", + "\n", + "and provide comments/markdown code explaining what you see in the plots.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Determining Stability Properties\n", + "The heat conduction problem, as you saw in Lab \\#1, has solutions that are stable when $\\lambda<0$. It is clear from\n", + "[Problem stability](#Problem-Stability) above that some higher order schemes (namely, the leap-frog scheme) introduce a spurious oscillation not present in the continuous solution. This is called a *computational* or *numerical instability*, because it is an artifact of the discretization process only. This instability is not a characteristic of the heat conduction problem alone, but is present in other problems where such schemes are used. Furthermore, as we will see below, even a scheme such as forward Euler can be unstable for certain problems and choices of the time step.\n", + "\n", + "There is a way to determine the stability properties of a scheme, and that is to apply the scheme to the *test equation* $$\\frac{dz}{dt} = \\lambda z$$ where $\\lambda$ is a complex constant.\n", + "\n", + "The reason for using this equation may not seem very clear. But if you think in terms of $\\lambda z$ as being the linearization of some more complex right hand side, then the solution to is $z=e^{\\lambda t}$, and so $z$ represents, in some sense, a Fourier mode of the solution to the linearized ODE problem. We expect that the behaviour of the simpler, linearized problem should mimic that of the original problem.\n", + "\n", + "Applying the forward Euler scheme to this test equation, results in the following difference formula $$z_{i+1} = z_i+(\\lambda \\Delta t)z_i$$ which is a formula that we can apply iteratively to $z_i$ to obtain\n", + "$$\\begin{aligned}\n", + "z_{i+1} &=& (1+\\lambda \\Delta t)z_{i} \\\\\n", + " &=& (1+\\lambda \\Delta t)^2 z_{i-1} \\\\\n", + " &=& \\cdots \\\\\n", + " &=& (1+\\lambda \\Delta t)^{i+1} z_{0}.\\end{aligned}$$ The value of $z_0$ is fixed by the initial conditions, and so this difference equation for $z_{i+1}$ will “blow up” as $i$ gets bigger, if the factor in front of $z_0$ is greater than 1 in magnitude – this is a sign of instability. Hence, this analysis has led us to the conclusion that if\n", + "$$|1+\\lambda\\Delta t| < 1,$$ then the forward Euler method is stable. For *real* values of $\\lambda<0$, this inequality can be shown to be equivalent to the *stability condition* $$\\Delta t < \\frac{-2}{\\lambda},$$ which is a restriction on how large the time step can be so that the numerical solution is stable." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2023-08-15T23:56:36.411227Z", + "start_time": "2023-08-15T23:56:36.256588Z" + } + }, + "source": [ + "### Problem Backward Euler\n", + "\n", + "Perform a similar analysis for the backward Euler formula, and show that it is *always stable* when $\\lambda$ is real and negative. Confirm this using plots using similar code to Problem: Stability (i.e. 
using stability2.ipynb if you haven't gone through Problem: Stability yet)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### Example: leap-frog\n", + "\n", + "*Now, what about the leap-frog scheme?*\n", + "\n", + "Applying the test equation to the leap-frog scheme results in the\n", + "difference equation $$z_{i+1} = z_{i-1} + 2 \\lambda \\Delta t z_i.$$\n", + "Difference formulas such as this one are typically solved by looking for\n", + "a solution of the form $z_i = w^i$ which, when substituted into this\n", + "equation, yields $$w^2 - 2\\lambda\\Delta t w - 1 = 0,$$ a quadratic\n", + "equation with solution\n", + "$$w = \\lambda \\Delta t \\left[ 1 \\pm \\sqrt{1+\\frac{1}{(\\lambda\n", + " \\Delta t)^2}} \\right].$$ The solution to the original difference\n", + "equation, $z_i=w^i$ is stable only if all solutions to this quadratic\n", + "satisfy $|w|<1$, since otherwise, $z_i$ will blow up as $i$ gets large.\n", + "\n", + "The mathematical details are not important here – what is important is\n", + "that there are two (possibly complex) roots to the quadratic equation\n", + "for $w$, and one is *always* greater than 1 in magnitude *unless*\n", + "$\\lambda$ is pure imaginary ( has real part equal to zero), *and*\n", + "$|\\lambda \\Delta t|<1$. For the heat conduction equation in [Problem Stability](#Stability) (which is already of the same form as the test equation ), $\\lambda$ is clearly not imaginary, which explains the presence of the instability for the leap-frog scheme.\n", + "\n", + "Nevertheless, the leap-frog scheme is still useful for computations. In fact, it is often used in geophysical applications, as you will see later on when discretizing." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "An example of where the leap-frog scheme is superior to the other first\n", + "order schemes is for undamped periodic motion (which arose in the\n", + "weather balloon example from Lab \\#1 ). This corresponds to the system\n", + "of ordinary differential equations (with the damping parameter, $\\beta$,\n", + "taken to be zero): $$\\frac{dy}{dt} = u,$$\n", + "$$\\frac{du}{dt} = - \\frac{\\gamma}{m} y.$$ You’ve already discretized\n", + "this problem using the forward difference formula, and the same can be\n", + "done with the second order centered formula. We can then compare the\n", + "forward Euler and leap-frog schemes applied to this problem. We code this\n", + "in the module\n", + "\n", + "Solution plots are given in [Figure oscilator](#fig_oscillator) below, for\n", + "parameters $\\gamma/m=1$, $\\Delta t=0.25$, $y(0)=0.0$ and $u(0)=1.0$, and\n", + "demonstrate that the leap-frog scheme is stable, while forward Euler is\n", + "unstable. This can easily be explained in terms of the stability\n", + "criteria we derived for the two schemes when applied to the test\n", + "equation. The undamped oscillator problem is a linear problem with pure\n", + "imaginary eigenvalues, so as long as $|\\sqrt{\\gamma/m}\\Delta t|<1$, the\n", + "leap-frog scheme is stable, which is obviously true for the parameter\n", + "values we are given. Furthermore, the forward Euler stability condition\n", + "$|1+\\lambda\\Delta\n", + " t|<1$ is violated for any choice of time step (when $\\lambda$ is pure\n", + "imaginary) and so this scheme is always unstable for the undamped\n", + "oscillator. 
The github link to the oscillator module is [oscillator.py](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab2/oscillator.py)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-16T08:14:30.453763Z", + "start_time": "2022-01-16T08:14:30.035238Z" + } + }, + "outputs": [], + "source": [ + "import numlabs.lab2.oscillator as os\n", + "the_times=np.linspace(0,20.,80)\n", + "yvec_init=[0,1]\n", + "output_euler=os.euler(the_times,yvec_init)\n", + "output_mid=os.midpoint(the_times,yvec_init)\n", + "output_leap=os.leapfrog(the_times,yvec_init)\n", + "answer=np.sin(the_times)\n", + "plt.style.use('ggplot')\n", + "fig,ax=plt.subplots(1,1,figsize=(7,7))\n", + "ax.plot(the_times,(output_euler[0,:]-answer),label='euler')\n", + "ax.plot(the_times,(output_mid[0,:]-answer),label='midpoint')\n", + "ax.plot(the_times,(output_leap[0,:]-answer),label='leapfrog')\n", + "ax.set(ylim=[-2,2],xlim=[0,20],title='global error between sin(t) and approx. for three schemes',\n", + " xlabel='time',ylabel='exact - approx')\n", + "ax.legend(loc='best');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure numerical**: Numerical solution to the undamped harmonic oscillator problem, using the forward Euler and leap-frog schemes. Parameter values: $\\gamma / m=1.0$, $\\Delta t=0.25$, $y(0)=0$, $u(0)=1.0$. The exact solution is a sinusoidal wave.\n", + "\n", + "Had we taken a larger time step (such as $\\Delta t=2.0$, for example), then even the leap-frog scheme is unstable. Furthermore, if we add damping ($\\beta\\neq 0$), then the eigenvalues are no longer pure imaginary, and the leap-frog scheme is unstable no matter what time step we use." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Stiff Equations \n", + "\n", + "This Lab has dealt only with ODE’s (and systems of\n", + "ODE’s) that are *non-stiff*. *Stiff equations* are equations that have\n", + "solutions with at least two widely varying times scales over which the\n", + "solution changes. An example of stiff solution behaviour is a problem\n", + "with solutions that have rapid, transitory oscillations, which die out\n", + "over a short time scale, after which the solution slowly decays to an\n", + "equilibrium. A small time step is required in the initial transitory\n", + "region in order to capture the rapid oscillations. However, a larger\n", + "time step can be taken in the non-oscillatory region where the solution\n", + "is smoother. Hence, using a very small time step will result in very\n", + "slow and inefficient computations.\n", + "\n", + "There are also many other numerical schemes designed specifically for\n", + "stiff equations, most of which are implicit schemes. We will not\n", + "describe any of them here – you can find more information in a numerical\n", + "analysis text such as  @burden-faires." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Difference Approximations of Higher Derivatives\n", + "\n", + "Higher derivatives can be discretized in a similar way to what we did\n", + "for first derivatives. 
Let’s consider for now only the second\n", + "derivative, for which one possible approximation is the second order\n", + "centered formula: $$\\frac{y(t_{i+1})-2y(t_i)+y(t_{i-1})}{(\\Delta t)^2} = \n", + " y^{\\prime\\prime}(t_i) + {\\cal O}((\\Delta t)^2),$$ There are, of course,\n", + "many other possible formulae that we might use, but this is the most\n", + "commonly used." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Taylor Series \n", + "(Hand in png file)\n", + "\n", + "- Use Taylor series to derive the second order centered formula above\n", + "\n", + "- For more practice (although not required), try deriving a higher order approximation as well. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Summary \n", + "\n", + "This lab has discussed the accuracy and stability of difference schemes\n", + "for simple first order ODEs. The results of the problems should have\n", + "made it clear to you that choosing an accurate and stable discretization\n", + "for even a very simple problem is not straightforward. One must take\n", + "into account not only the considerations of accuracy and stability, but\n", + "also the cost or complexity of the scheme. Selecting a numerical method\n", + "for a given problem can be considered as an art in itself.\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes\n", + "\n", + "### Taylor Polynomials and Taylor Series\n", + "\n", + "\n", + "Taylor Series are of fundamental importance in numerical analysis. They\n", + "are the most basic tool for talking about the approximation of\n", + "functions. Consider a function $f(x)$ that is smooth – when we say\n", + "“smooth”, what we mean is that its derivatives exist and are bounded\n", + "(for the following discussion, we need $f$ to have $(n+1)$ derivatives).\n", + "We would like to approximate $f(x)$ near the point $x=x_0$, and we can\n", + "do it as follows:\n", + "$$f(x) = \\underbrace{P_n(x)}_{\\mbox{Taylor polynomial}} +\n", + " \\underbrace{R_n(x)}_{\\mbox{remainder term}},$$ where\n", + "$$P_n(x)=f(x_0)+ f^\\prime(x_0)(x-x_0) +\n", + " \\frac{f^{\\prime\\prime}(x_0)}{2!}(x-x_0)^2 + \\cdots + \n", + " \\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n$$ is the *$n$th order Taylor\n", + "polynomial* of $f$ about $x_0$, and\n", + "$$R_n(x)=\\frac{f^{(n+1)}(\\xi(x))}{(n+1)!}(x-x_0)^{n+1}$$ is the\n", + "*remainder term* or *truncation error*. The point $\\xi(x)$ in the error\n", + "term lies somewhere between the points $x_0$ and $x$. If we look at the\n", + "infinite sum ( let $n\\rightarrow\\infty$), then the resulting infinite\n", + "sum is called the *Taylor series of $f(x)$ about $x=x_0$*. This result\n", + "is also know as *Taylor’s Theorem*.\n", + "\n", + "Remember that we assumed that $f(x)$ is smooth (in particular, that its\n", + "derivatives up to order $(n+1)$ exist and are finite). That means that\n", + "all of the derivatives appearing in $P_n$ and $R_n$ are bounded.\n", + "Therefore, there are two ways in which we can think of the Taylor\n", + "polynomial $P_n(x)$ as an approximation of $f(x)$:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. First of all, let us fix $n$. Then, we can improve the approximation\n", + " by letting $x$ approach $x_0$, since as $(x-x_0)$ gets small, the\n", + " error term $R_n(x)$ goes to zero ($n$ is considered fixed and all\n", + " terms depending on $n$ are thus constant). 
Therefore, the\n", + " approximation improves when $x$ gets closer and closer to $x_0$.\n", + "\n", + "2. Alternatively, we can think of fixing $x$. Then, we can improve the\n", + " approximation by taking more and more terms in the series. When $n$\n", + " is increased, the factorial in the denominator of the error term\n", + " will eventually dominate the $(x-x_0)^{n+1}$ term (regardless of how\n", + " big $(x-x_0)$ is), and thus drive the error to zero.\n", + "\n", + "In summary, we have two ways of improving the Taylor polynomial\n", + "approximation to a function: by evaluating it at points closer to the\n", + "point $x_0$; and by taking more terms in the series.\n", + "\n", + "This latter property of the Taylor expansion can be seen by a simple example.\n", + "Consider the Taylor polynomial for the function $f(x)=\\sin(x)$ about the\n", + "point $x_0=0$. All of the even terms are zero since they involve $sin(0)$, \n", + "so that if we take $n$\n", + "odd ( $n=2k+1$), then the $n$th order Taylor polynomial for $sin(x)$ is\n", + "\n", + "$$P_{2k+1}(x)=x - \\frac{x^3}{3!}+\\frac{x^5}{5!} -\\frac{x^7}{7!}+\\cdots\n", + " +\\frac{x^{2k+1}}{(2k+1)!}.\\ eq: taylor$$\n", + "\n", + "The plot in Figure: Taylor illustrates quite clearly\n", + "how the approximation improves both as $x$ approaches 0, and as $n$ is\n", + "increased." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure: Taylor** -- Plot of $\\sin(x)$ compared to its Taylor polynomial approximations\n", + "about $x_0=0$, for various values of $n=2k +1$ in eq: taylor." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Consider a specific Taylor polynomial, say $P_3(x)$ ( fix $n=3$). Notice\n", + "that for $x$ far away from the origin, the polynomial is nowhere near\n", + "the function $\\sin(x)$. However, it approximates the function quite well\n", + "near the origin. On the other hand, we could take a specific point,\n", + "$x=5$, and notice that the Taylor series of orders 1 through 7 do not\n", + "approximate the function very well at all. Nevertheless the\n", + "approximation improves as $n$ increases, as is shown by the 15th order\n", + "Taylor polynomial." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Floating Point Representation of Numbers\n", + "\n", + "Unlike a mathematician, who can deal with real numbers having infinite\n", + "precision, a computer can represent numbers with only a finite number of\n", + "digits. The best way to understand how a computer stores a number is to\n", + "look at its *floating-point form*, in which a number is written as\n", + "$$\\pm 0.d_1 d_2 d_3 \\ldots d_k \\times 10^n,$$ where each digit, $d_i$ is\n", + "between 0 and 9 (except $d_1$, which must be non-zero). Floating point\n", + "form is commonly used in the physical sciences to represent numerical\n", + "values; for example, the Earth’s radius is approximately 6,400,000\n", + "metres, which is more conveniently written in floating point form as\n", + "$0.64\\times 10^7$ (compare this to the general form above).\n", + "\n", + "Computers actually store numbers in *binary form* (i.e. in base-2\n", + "floating point form, as compared to the decimal or base-10 form shown\n", + "above). However, it is more convenient to use the decimal form in order\n", + "to illustrate the basic idea of computer arithmetic. 
For a good\n", + "discussion of the binary representation of numbers, see Burden & Faires\n", + " [sec. 1.2] or Newman section 4.2.\n", + "\n", + "For the remainder of this discussion, assume that we’re dealing with a\n", + "computer that can store numbers with up to 8 *significant digits*\n", + "(i.e. $k=8$) and exponents in the range $-38 \\leq n \\leq 38$. Based on\n", + "these values, we can make a few observations regarding the numbers that\n", + "can be represented:\n", + "\n", + "- The largest number that can be represented is about $1.0\\times\n", + " 10^{+38}$, while the smallest is $1.0\\times 10^{-38}$.\n", + "\n", + "- These numbers have a lot of *holes*, where real numbers are missed.\n", + " For example, consider the two consecutive floating point numbers\n", + " $$0.13391482 \\times 10^5 \\;\\;\\; {\\rm and} \\;\\;\\; 0.13391483 \\times 10^5,$$\n", + " or 13391.482 and 13391.483. Our floating-point number system cannot\n", + " represent any numbers between these two values, and hence any number\n", + " in between 13391.482 and 13391.483 must be approximated by one of\n", + " the two values. Another way of thinking of this is to observe that\n", + " $0.13391482 \\times 10^5$ does not represent just a single real\n", + " number, but a whole *range* of numbers.\n", + "\n", + "- Notice that the same amount of floating-point numbers can be\n", + " represented between $10^{-6}$ and $10^{-5}$ as are between $10^{20}$\n", + " and $10^{21}$. Consequently, the density of floating points numbers\n", + " increases as their magnitude becomes smaller. That is, there are\n", + " more floating-point numbers close to zero than there are far away.\n", + " This is illustrated in the figure below.\n", + "\n", + " The floating-point numbers (each represented by a $\\times$) are\n", + " more dense near the origin.\n", + " \n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The values $k=8$ and $-38\\leq n \\leq 38$ correspond to what is known as\n", + "*single precision arithmetic*, in which 4 bytes (or units of memory in a\n", + "computer) are used to store each number. It is typical in many\n", + "programming languages, including $C++$, to allow the use of higher\n", + "precision, or *double precision*, using 8 bytes for each number,\n", + "corresponding to values of $k=16$ and $-308\\leq n \\leq 308$, thereby\n", + "greatly increasing the range and density of numbers that can be\n", + "represented. When doing numerical computations, it is customary to use\n", + "double-precision arithmetic, in order to minimize the effects of\n", + "round-off error (in a $C++$ program, you can define a variable ` x` to\n", + "be double precision using the declaration ` double x;`).\n", + "\n", + "Sometimes, double precision arithmetic may help in eliminating round-off\n", + "error problems in a computation. On the minus side, double precision\n", + "numbers require more storage than their single precision counterparts,\n", + "and it is sometimes (but not always) more costly to compute in double\n", + "precision. Ultimately, though, using double precision should not be\n", + "expected to be a cure-all against the difficulties of round-off errors.\n", + "The best approach is to use an algorithm that is not unstable with\n", + "respect to round-off error. For an example where increasing precision\n", + "will not help, see the section on Gaussian elimination in Lab \\#3." 
+ ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "218.7px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab3/01-lab3.html b/notebooks/lab3/01-lab3.html new file mode 100644 index 0000000..ddb9a4e --- /dev/null +++ b/notebooks/lab3/01-lab3.html @@ -0,0 +1,1700 @@ + + + + + + + + Laboratory 3: Linear Algebra (Sept. 12, 2017) — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Laboratory 3: Linear Algebra (Sept. 12, 2017)

+

Grace Yung

+
+

List of Problems

+ +
+
+

Objectives

+

The object of this lab is to familiarize you with some of the common techniques used in linear algebra. You can use the Python software package to solve some of the problems presented in the lab. There are examples of using the Python commands as you go along in the lab. In particular, after finishing the lab, you will be able to

+
    +
  • Define: condition number, ill-conditioned matrix, singular matrix, LU decomposition, Dirichlet, Neumann and periodic boundary conditions.

  • +
  • Find by hand or using Python: eigenvalues, eigenvectors, transpose, inverse of a matrix, determinant.

  • +
  • Find using Python: condition numbers.

  • +
  • Explain: pivoting.

  • +
+

There is a description of using Numpy and Python for Matrices at the end of the lab. It includes a brief description of how to use the built-in functions introduced in this lab. Just look for the paw prints:

+

🐾

+

Numpy and Python with Matrices

+

when you are not sure what functions to use, and this will lead you to the mini-manual.

+
+
+

Prerequisites

+

You should have had an introductory course in linear algebra.

+
+
[ ]:
+
+
+
# import the context to find files
+import context
+# import the quiz script
+from numlabs.lab3 import quiz3
+# import numpy
+import numpy as np
+
+
+
+
+
+

Linear Systems

+

In this section, the concept of a matrix will be reviewed. The basic operations and methods in solving a linear system are introduced as well.

+

Note: Whenever you see a term you are not familiar with, you can find a definition in the glossary.

+
+

What is a Matrix?

+

Before going any further on how to solve a linear system, you need to know what a linear system is. A set of m linear equations with n unknowns:

+

(System of Equations)

+
+\[\begin{split}\begin{array}{ccccccc} +a_{11}x_{1} & + & \ldots & + & a_{1n}x_{n} & = & b_{1} \\ +a_{21}x_{1} & + & \ldots & + & a_{2n}x_{n} & = & b_{2} \\ + & & \vdots & & & & \vdots \\ +a_{m1}x_{1} & + & \ldots & + & a_{mn}x_{n} & = & b_{m} +\end{array}\end{split}\]
+

can be represented as an augmented matrix:

+
+\[\begin{split}\left[ +\begin{array}{cc} + \begin{array}{ccccc} + a_{11} & & \ldots & & a_{1n} \\ + \vdots & & \ddots & & \vdots \\ + a_{m1} & & \ldots & & a_{mn} + \end{array} +& + \left| + \begin{array}{rc} + & b_{1} \\ & \vdots \\ & b_{m} + \end{array} + \right. +\end{array} +\right]\end{split}\]
+

Columns 1 through n of this matrix contain the coefficients \(a_{ij}\) of the unknowns in the set of linear equations. The rightmost column is the augmented column, which is made up of the right hand side values \(b_i\).
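If you want to experiment with a small system in Python, a minimal sketch (using a made-up 2 by 2 example, not one from this lab; the names A, b and aug are just placeholders) of entering the coefficients \(a_{ij}\) and the right hand side \(b_i\) with numpy is:

import numpy as np

# made-up example: x + 2y = 5, 3x + 4y = 6
A = np.array([[1., 2.],
              [3., 4.]])   # coefficient matrix (the a_ij)
b = np.array([5., 6.])     # right hand side (the b_i)

# the augmented matrix is A with b appended as an extra column
aug = np.hstack([A, b[:, None]])
print(aug)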

+
+
+

Quiz on Matrices

+

Which matrix matches this system of equations?

+
+\[\begin{split}\begin{array}{lcr} +2x + 3y + 6z &=& 19\\ +3x + 6y + 9z &=& 21\\ +x + 5y + 10z &=& 0 +\end{array}\end{split}\]
+
    +
  A.
+
+\[\begin{split}\left[ \begin{array}{ccc} +2 & 3 & 1 \\ +3 & 6 & 5 \\ +6 & 9 & 10\\ +19 & 21 & 0 +\end{array} +\right]\end{split}\]
+
    +
  B.
+
+\[\begin{split}\left[ \begin{array}{ccc} +2 & 3 & 6\\ +3 & 6 & 9\\ +1 & 5 & 10 +\end{array} +\right]\end{split}\]
+
    +
  C.
+
+\[\begin{split}\left[ \begin{array}{ccc|c} +1 & 5 & 10 & 0\\ +2 & 3 & 6 & 19\\ +3 & 6 & 9 & 21 +\end{array} +\right]\end{split}\]
+
    +
  D.
+
+\[\begin{split}\left[ \begin{array}{ccc|c} +2 & 3 & 6 & -19\\ +3 & 6 & 9 & -21\\ +1 & 5 & 10 & 0 +\end{array} +\right]\end{split}\]
+

In the following cell, replace ‘xxx’ with ‘A’, ‘B’, ‘C’, or ‘D’ and run it.

+
+
[ ]:
+
+
+
print(quiz3.matrix_quiz(answer = 'xxx'))
+
+
+
+
+
+

Quick Review

+

Here is a review on the basic matrix operations, including addition, subtraction and multiplication. These are important in solving linear systems.

+

Here is a short exercise to see how much you remember:

+
+
Let

\[\begin{split}x = \left[ \begin{array}{r} 2 \\ 2 \\ 7 \end{array} \right] , \quad
y = \left[ \begin{array}{r} -5 \\ 1 \\ 3 \end{array} \right] , \quad
A = \left[ \begin{array}{rrr} 3 & -2 & 10 \\
-6 & 7 & -4 \end{array} \right] , \quad
B = \left[ \begin{array}{rr} -6 & 4 \\
7 & -1 \\
2 & 9 \end{array} \right]\end{split}\]

+
+
+
+
+

Calculate the following:

+
    +
  1. \(x + y\)

  2. \(x^{T}y\)

  3. \(y-x\)

  4. \(Ax\)

  5. \(y^{T}A\)

  6. \(AB\)

  7. \(BA\)

  8. \(AA\)
+

The solutions to these exercises are available here

+

After solving the questions by hand, you can also use Python to check your answers.

+

🐾

+

Numpy and Python with Matrices

+
+
[ ]:
+
+
+
## A cell to use Python to check your answers
+x = np.array([2, 2, 7])
+y = np.array([-5, 1, 3])
+A = np.array([[3, -2, 10],
+             [-6, 7, -4]])
+B = np.array([[-6, 4],
+            [7, -1],
+            [2, 9]])
+print(f'(x+y) is {x+y}')
+print(f'x^T y is {np.dot(x, y)}')
+
+## you do the rest!
+
+
+
+
+
+

Gaussian Elimination

+

The simplest method for solving a linear system is Gaussian elimination, which uses three types of elementary row operations:

+
    +
  • Multiplying a row by a non-zero constant (\(kE_{ij}\))

  • +
  • Adding a multiple of one row to another (\(E_{ij} + kE_{kj}\))

  • +
  • Exchanging two rows (\(E_{ij} \leftrightarrow E_{kj}\))

  • +
+

Each row operation corresponds to a step in the solution of the System of Equations in which the equations are combined together. It is important to note that none of these operations changes the solution. There are two parts to this method: elimination and back-substitution. The purpose of elimination is to eliminate the matrix entries below the main diagonal, using row operations, to obtain an upper triangular matrix with the augmented column. Then you can proceed with back-substitution to find the values of the unknowns.

+

Try to solve this set of linear equations:

+
+\[\begin{split}\begin{array}{lrcrcrcr} + E_{1j}: & 2x_{1} & + & 8x_{2} & - & 5x_{3} & = & 53 \\ + E_{2j}: & 3x_{1} & - & 6x_{2} & + & 4x_{3} & = & -48 \\ + E_{3j}: & x_{1} & + & 2x_{2} & - & x_{3} & = & 13 +\end{array}\end{split}\]
+

The solution to this problem is available here

+

After solving the system by hand, you can use Python to check your answer.
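For example, a minimal numpy sketch for checking your hand calculation (np.linalg.solve is one possible solver; it factors the matrix for you):

import numpy as np

A = np.array([[2., 8., -5.],
              [3., -6., 4.],
              [1., 2., -1.]])
b = np.array([53., -48., 13.])
x = np.linalg.solve(A, b)   # solves A x = b
print(x)                    # compare with your back-substitution result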

+

🐾

+

Numpy and Python with Matrices

+
+
[ ]:
+
+
+

+
+
+
+
+
+

Decomposition

+

Any invertible, square matrix \(A\) (provided its elimination requires no row exchanges) can be factored into the product of a lower triangular matrix \(L\) and an upper triangular matrix \(U\), so that \(A\) = \(LU\). The \(LU\)-decomposition is closely linked to the process of Gaussian elimination.

+
+

Example One

+
+

Using the matrix from the system of the previous section (Sec Gaussian Elimination), we have:

+
+
+
+\[\begin{split}A = \left[ \begin{array}{rrr} 2 & 8 & -5 \\ +3 & -6 & 4 \\ +1 & 2 & -1 \end{array} \right]\end{split}\]
+

The upper triangular matrix \(U\) can easily be calculated by applying Gaussian elimination to \(A\):

+
+
+
+\[\begin{split}\begin{array}{cl} +\begin{array}{c} E_{2j}-\frac{3}{2}E_{1j} \\ +E_{3j}-\frac{1}{2}E_{1j} \\ +\rightarrow \end{array} +& +\left[ \begin{array}{rrr} 2 & 8 & -5 \\ +0 & -18 & \frac{23}{2} \\ +0 & -2 & \frac{3}{2} +\end{array} \right] \\ \\ +\begin{array}{c} E_{3j}-\frac{1}{9}E_{2j} \\ +\rightarrow \end{array} +& +\left[ \begin{array}{rrr} 2 & 8 & -5 \\ +0 & -18 & \frac{23}{2} \\ +0 & 0 & \frac{2}{9} +\end{array} \right] = U +\end{array}\end{split}\]
+

Note that there is no row exchange.

+
+
+

The lower triangular matrix \(L\) is calculated with the steps which lead us from the original matrix to the upper triangular matrix, i.e.:

+
+\[\begin{split}\begin{array}{c} E_{2j}-\frac{3}{2}E_{1j} \\ \\ +E_{3j}-\frac{1}{2}E_{1j} \\ \\ +E_{3j}-\frac{1}{9}E_{2j} +\end{array}\end{split}\]
+
+
+

Note that each step is a multiple \(\ell\) of equation \(m\) subtracted from equation \(n\). Each of these steps, in fact, can be represented by an elementary matrix. \(U\) can be obtained by multiplying \(A\) by this sequence of elementary matrices.

+

Each of the elementary matrices is composed of an identity matrix the size of \(A\) with \(-\ell\) in the (\(n,m\)) entry. So the steps become:

+
+
+
+\[\begin{split}\begin{array}{ccc} +E_{2j}-\frac{3}{2}E_{1j} & +\rightarrow & +\left[ \begin{array}{rcc} 1 & 0 & 0 \\ +-\frac{3}{2} & 1 & 0 \\ +0 & 0 & 1 +\end{array} \right] = R \\ \\ +E_{3j}-\frac{1}{2}E_{1j} & +\rightarrow & +\left[ \begin{array}{rcc} 1 & 0 & 0 \\ +0 & 1 & 0 \\ +-\frac{1}{2} & 0 & 1 +\end{array} \right] = S \\ \\ +E_{3j}-\frac{1}{9}E_{2j} & +\rightarrow & +\left[ \begin{array}{crc} 1 & 0 & 0 \\ +0 & 1 & 0 \\ +0 & -\frac{1}{9} & 1 +\end{array} \right] = T +\end{array}\end{split}\]
+

and \(TSRA\) = \(U\). Check this with Python.

+
+
+

To get back from \(U\) to \(A\), the inverses of \(R\), \(S\) and \(T\) are multiplied onto \(U\):

+
+\[\begin{split}\begin{array}{rcl} +T^{-1}TSRA & = & T^{-1}U \\ +S^{-1}SRA & = & S^{-1}T^{-1}U \\ +R^{-1}RA & = & R^{-1}S^{-1}T^{-1}U \\ +\end{array}\end{split}\]
+
+
+

So \(A\) = \(R^{-1}S^{-1}T^{-1}U\). Recall that \(A\) = \(LU\). If \(R^{-1}S^{-1}T^{-1}\) is a lower triangular matrix, then it is \(L\).

+

The inverse of each elementary matrix is the same matrix with only one difference: \(\ell\) is in the \(a_{nm}\) entry instead of \(-\ell\). So:

+
+
+
+\[\begin{split}\begin{array}{rcl} +R^{-1} & = & \left[ \begin{array}{rrr} 1 & 0 & 0 \\ +\frac{3}{2} & 1 & 0 \\ +0 & 0 & 1 +\end{array} \right] \\ \\ +S^{-1} & = & \left[ \begin{array}{rrr} 1 & 0 & 0 \\ +0 & 1 & 0 \\ +\frac{1}{2} & 0 & 1 +\end{array} \right] \\ \\ +T^{-1} & = & \left[ \begin{array}{rrr} 1 & 0 & 0 \\ +0 & 1 & 0 \\ +0 & \frac{1}{9} & 1 +\end{array} \right] +\end{array}\end{split}\]
+

Multiplying \(R^{-1}S^{-1}T^{-1}\) together, we have:

+
+
+
+\[\begin{split}\begin{array}{rcl} R^{-1}S^{-1}T^{-1} +& = & +\left[ \begin{array}{ccc} 1 & 0 & 0 \\ +\frac{3}{2} & 1 & 0 \\ +\frac{1}{2} & \frac{1}{9} & 1 +\end{array} \right] = L +\end{array}\end{split}\]
+

So \(A\) is factored into two matrices \(L\) and \(U\), where

+
+
+
+\[\begin{split}\begin{array}{ccc} +L = \left[ \begin{array}{ccc} 1 & 0 & 0 \\ +\frac{3}{2} & 1 & 0 \\ +\frac{1}{2} & \frac{1}{9} & 1 +\end{array} \right] +& \mbox{ and } & +U = \left[ \begin{array}{ccc} 2 & 8 & -5 \\ +0 & -18 & \frac{23}{2} \\ +0 & 0 & \frac{2}{9} +\end{array} \right] +\end{array}\end{split}\]
+

Use Python to confirm that \(LU\) = \(A\).
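One possible way to do this check in Python is sketched below: build \(L\) and \(U\) from the entries above and verify that their product reproduces \(A\). (scipy.linalg.lu would also factor \(A\), but it applies partial pivoting, so its factors need not match the ones derived by hand.)

import numpy as np

A = np.array([[2., 8., -5.],
              [3., -6., 4.],
              [1., 2., -1.]])
L = np.array([[1.,  0.,  0.],
              [3/2, 1.,  0.],
              [1/2, 1/9, 1.]])
U = np.array([[2., 8.,   -5.],
              [0., -18., 23/2],
              [0., 0.,   2/9]])
print(np.allclose(L @ U, A))   # should print True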

+
+

The reason decomposition is introduced here is not because of Gaussian elimination itself; one seldom explicitly computes the \(LU\) decomposition of a matrix. However, the idea of factoring a matrix is important for other direct methods of solving linear systems (of which Gaussian elimination is only one) and for methods of finding eigenvalues (Characteristic Equation).

+

🐾

+

Numpy and Python with Matrices

+
+
+
+

Round-off Error

+

When a number is represented in its floating point form, i.e. an approximation of the number, the resulting error is the round-off error. The floating-point representation of numbers  and the consequent effects of round-off error were discussed already in Lab #2.

+

When round-off errors are present in the matrix \(A\) or the right hand side \(b\), the linear system \(Ax = b\) may or may not give a solution that is close to the real answer. When a matrix \(A\) “magnifies” the effects of round-off errors in this way, we say that \(A\) is an ill-conditioned matrix.

+
+

Example Two

+
+

Let’s see an example:

+
+
+

Suppose

+
+\[\begin{split}A = \left[ \begin{array}{cc} 1 & 1 \\ 1 & 1.0001 +\end{array} \right]\end{split}\]
+
+
+

and consider the system:

+

(Ill-conditioned version one):

+
+\[\begin{split}\left[ \begin{array}{cc} +\begin{array}{cc} 1 & 1 \\ 1 & 1.0001 \end{array} +& +\left| \begin{array}{c} 2 \\ 2 \end{array} \right] +\end{array} \right.\end{split}\]
+
+
+

The condition number, \(K\), of a matrix, defined in Condition Number, is a measure of how well-conditioned a matrix is. If \(K\) is large, then the matrix is ill-conditioned, and Gaussian elimination will magnify the round-off errors. The condition number of \(A\) is 40002. You can use Python to check this number.

+

The solution to this is \(x_1\) = 2 and \(x_2\) = 0. However, if the system is altered a little as follows:

+
+
+

(Ill-conditioned version two):

+
+\[\begin{split}\left[ \begin{array}{cc} +\begin{array}{cc} 1 & 1 \\ 1 & 1.0001 \end{array} +& +\left| \begin{array}{c} 2 \\ 2.0001 \end{array} \right] +\end{array} \right.\end{split}\]
+

Then, the solution becomes \(x_1\) = 1 and \(x_2\) = 1. A change in the fifth significant digit was amplified to the point where the solution is not even accurate to the first significant digit. \(A\) is an ill-conditioned matrix. You can set up the systems Ill-conditioned version one and Ill-conditioned version two in Python, and check the answers yourself.
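A short numpy sketch for setting up this check (np.linalg.cond uses the 2-norm by default, which gives a value close to the one quoted above):

import numpy as np

A = np.array([[1., 1.],
              [1., 1.0001]])
print(np.linalg.cond(A))                            # large: A is ill-conditioned
print(np.linalg.solve(A, np.array([2., 2.])))       # version one
print(np.linalg.solve(A, np.array([2., 2.0001])))   # version two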

+
+
+
[ ]:
+
+
+

+
+
+
+
+
+

Example Three

+
+

Use Python to try the following example. First solve the system \(A^{\prime}x = b\); then solve \(A^{\prime}x = b2\). Find the condition number of \(A^{\prime}\).

+
+
+
+\[\begin{split}\begin{array}{ccccc} +A^{\prime} = \left[ \begin{array}{cc} 0.0001 & 1 \\ 1 & 1 +\end{array} \right] +& , & +b = \left[ \begin{array}{c} 1 \\ 2 \end{array} \right] +& \mbox{and} & +b2 = \left[ \begin{array}{c} 1 \\ 2.0001 \end{array} \right] . +\end{array}\end{split}\]
+

You will find that the solution for \(A^{\prime}x = b\) is \(x_1\) = 1.0001 and \(x_2\) = 0.9999, and the solution for \(A^{\prime}x = b2\) is \(x_1\) = 1.0002 and \(x_2\) = 0.9999 . So a change in \(b\) did not result in a large change in the solution. Therefore, \(A^{\prime}\) is a well-conditioned matrix. In fact, the condition number is approximately 2.6.
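A sketch of the same check for \(A^{\prime}\) (the variable names are just placeholders):

import numpy as np

A_prime = np.array([[0.0001, 1.],
                    [1., 1.]])
b = np.array([1., 2.])
b2 = np.array([1., 2.0001])
print(np.linalg.cond(A_prime))       # small: A' is well-conditioned
print(np.linalg.solve(A_prime, b))
print(np.linalg.solve(A_prime, b2))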

+
+
+

Nevertheless, even a well-conditioned system like \(A^{\prime}x = b\) leads to inaccuracy if the wrong solution method is used, that is, an algorithm which is sensitive to round-off error. If you use Gaussian elimination to solve this system, you might be misled into thinking that \(A^{\prime}\) is ill-conditioned. Using Gaussian elimination to solve \(A^{\prime}x=b\):

+
+\[\begin{split}\begin{array}{cl} +& \left[ +\begin{array}{cc} +\begin{array}{cc} 0.0001 & 1 \\ 1 & 1 \end{array} +& +\left| +\begin{array}{c} 1 \\ 2 \end{array} \right] +\end{array} \right. \\ \\ +\begin{array}{c} 10,000E_{1j} \\ \rightarrow \end{array} & +\left[ +\begin{array}{cc} +\begin{array}{cc} 1 & 10,000 \\ 1 & 1 \end{array} +& +\left| +\begin{array}{c} 10,000 \\ 2 \end{array} \right] +\end{array} \right. \\ \\ +\begin{array}{c} E_{2j}-E_{1j} \\ \rightarrow \end{array} & +\left[ +\begin{array}{cc} +\begin{array}{cc} 1 & 10,000 \\ 0 & -9,999 \end{array} +& +\left| +\begin{array}{c} 10,000 \\ -9,998 \end{array} \right] +\end{array} \right. +\end{array}\end{split}\]
+
+
+

At this point, if you continue to solve the system as is, you will get the expected answers. You can check this with Python. However, if you make changes to the matrix here by rounding -9,999 and -9,998 to -10,000, the final answers will be different:

+
+\[\begin{split}\begin{array}{cl} +& \left[ +\begin{array}{cc} +\begin{array}{cc} 1 & 10,000 \\ 0 & -10,000 \end{array} +& +\left| +\begin{array}{c} 10,000 \\ -10,000 \end{array} \right] +\end{array} \right. +\end{array}\end{split}\]
+
+
+

The result is \(x_1\) = 0 and \(x_2\) = 1, which is quite different from the correct answers. So Gaussian elimination might mislead you to think that a matrix is ill-conditioned by giving an inaccurate solution to the system. In fact, the problem is that Gaussian elimination on its own is a method that is unstable in the presence of round-off error, even for well-conditioned matrices. Can this be fixed?

+

You can try the example with Python.

+
+

🐾

+

Numpy and Python with Matrices

+
+
[ ]:
+
+
+

+
+
+
+
+
+

Partial Pivoting

+

There are a number of ways to avoid inaccuracy, one of which is applying partial pivoting to the Gaussian elimination.

+

Consider the example from the previous section. In order to avoid multiplying by 10,000, another pivot is desired in place of 0.0001. The goal is to examine all the entries in the first column, find the entry that has the largest magnitude, and exchange the first row with the row that contains this element. So this entry becomes the pivot. This is partial pivoting. Keep in mind that switching rows is an elementary operation and has no effect on the solution.

+

In the original Gaussian elimination algorithm, row exchange is done only if the pivot is zero. In partial pivoting, row exchange is done so that the entry of largest magnitude in the column is the pivot. This helps to reduce the amplification of round-off error.

+
+
+

Example Four

+
+

In the matrix \(A^{\prime}\) from Example Three, 0.0001 in column one is the first pivot. Looking at this column, the entry 1 in the second row is the only other choice in this column. Obviously, 1 is greater than 0.0001, so the two rows are exchanged.

+
+
+
+\[\begin{split}\begin{array}{cl} +& \left[ \begin{array}{cc} +\begin{array}{cc} 0.0001 & 1 \\ 1 & 1 \end{array} +& \left| +\begin{array}{c} 1 \\ 2 \end{array} \right] +\end{array} \right. \\ \\ +\begin{array}{c} E_{1j} \leftrightarrow E_{2j} \\ \rightarrow +\end{array} +& \left[ \begin{array}{cc} +\begin{array}{cc} 1 & 1 \\ 0.0001 & 1 \end{array} +& \left| +\begin{array}{c} 2 \\ 1 \end{array} \right] +\end{array} \right. \\ \\ +\begin{array}{c} E_{2j}-0.0001E_{1j} \\ \rightarrow \end{array} +& \left[ \begin{array}{cc} +\begin{array}{cc} 1 & 1 \\ 0 & 0.9999 \end{array} +& \left| +\begin{array}{c} 2 \\ 0.9998 \end{array} \right] +\end{array} \right. +\end{array}\end{split}\]
+

The same entries are rounded off:

+
+
+
+\[\begin{split}\begin{array}{cl} +& \left[ \begin{array}{cc} +\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array} +& +\left| +\begin{array}{c} 2 \\ 1 \end{array} \right] +\end{array} \right. \\ \\ +\begin{array}{c} E_{1j}-E_{2j} \\ \rightarrow \end{array} +& \left[ \begin{array}{cc} +\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} +& +\left| +\begin{array}{c} 1 \\ 1 \end{array} \right] +\end{array} \right. +\end{array}\end{split}\]
+

So the solution is \(x_1\) = 1 and \(x_2\) = 1, and this is a close approximation to the original solution, \(x_1\) = 1.0001 and \(x_2\) = 0.9999.

+
+
+

You can try the example with Python.
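One way to see the row exchange that partial pivoting performs is to look at the permutation matrix returned by scipy's LU factorization (a sketch; scipy.linalg.lu factors with partial pivoting, so \(A^{\prime} = PLU\)):

import numpy as np
from scipy.linalg import lu

A_prime = np.array([[0.0001, 1.],
                    [1., 1.]])
P, L, U = lu(A_prime)
print(P)   # the permutation matrix shows that rows one and two were exchanged
print(L)
print(U)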

+
+

🐾

+

Numpy and Python with Matrices

+

Note: This section has described row pivoting. The same process can be applied to columns, with the resulting procedure being called column pivoting.

+
+
[ ]:
+
+
+

+
+
+
+
+
+

Full Pivoting

+

Another way to get around inaccuracy (Example Three) is to use Gaussian elimination with full pivoting. Sometimes, even partial pivoting can lead to problems. With full pivoting, in addition to row exchange, columns will be exchanged as well. The purpose is to use the largest (in magnitude) entries in the whole remaining matrix as the pivots.

+
+
+

Example Five

+
+

Given the following:

+
+
+
+\[\begin{split} \begin{array}{cccc} +& A^{''} = \left[ \begin{array}{ccc} 0.0001 & 0.0001 & 0.5 \\ +0.5 & 1 & 1 \\ +0.0001 & 1 & 0.0001 +\end{array} +\right] +& \ \ \ & +b^{'} = \left[ \begin{array}{c} 1 \\ 0 \\ 1 +\end{array} +\right] \\ \\ +\rightarrow & +\left[ \begin{array}{cc} +\begin{array}{ccc} 0.0001 & 0.0001 & 0.5 \\ +0.5 & 1 & 1 \\ +0.0001 & 1 & 0.0001 +\end{array} +& \left| \begin{array}{c} 1 \\ 0 \\ 1 +\end{array} +\right] +\end{array} +\right. & & +\end{array}\end{split}\]
+

Use Python to find the condition number of \(A^{''}\) and the solution to this system.

+
+
+

Looking at the system, if no rows are exchanged, then taking 0.0001 as the pivot will magnify any errors made in the elements in column 1 by a factor of 10,000. With partial pivoting, the first two rows can be exchanged (as below):

+
+\[\begin{split}\begin{array}{cl} +\begin{array}{c} \\ E_{1j} \leftrightarrow E_{2j} \\ \rightarrow +\end{array} +& \begin{array}{ccccc} +& x_1 & x_2 & x_3 & \\ +\left[ \begin{array}{c} \\ \\ \\ \end{array} \right. +& \begin{array}{c} 0.5 \\ 0.0001 \\ 0.0001 \end{array} +& \begin{array}{c} 1 \\ 0.0001 \\ 1 \end{array} +& \begin{array}{c} 1 \\ 0.5 \\ 0.0001 \end{array} +& \left| \begin{array}{c} 0 \\ 1 \\ 1 \end{array} \right] +\end{array} +\end{array}\end{split}\]
+
+
+

and the magnification by 10,000 is avoided. With 0.5 as the pivot, however, the multipliers are computed by dividing by 0.5, so errors can still be magnified by a factor of 2. If the entry 1 is used as the pivot instead, even this factor of 2 is avoided. The only way to put 1 in the position of the first pivot is to perform a column exchange between columns one and two, or between columns one and three. This is full pivoting.

+

Note that when columns are exchanged, the variables represented by the columns are switched as well, i.e. when columns one and two are exchanged, the new column one represents \(x_2\) and the new column two represents \(x_1\). So, we must keep track of the columns when performing column pivoting.

+
+
+

So the columns one and two are exchanged, and the matrix becomes:

+
+\[\begin{split}\begin{array}{cl} +\begin{array}{c} \\ E_{i1} \leftrightarrow E_{i2} \\ \rightarrow +\end{array} +& \begin{array}{ccccc} +& x_2 & x_1 & x_3 & \\ +\left. \begin{array}{c} \\ \\ \\ \end{array} \right[ +& \begin{array}{c} 1 \\ 0.0001 \\ 1 \end{array} +& \begin{array}{c} 0.5 \\ 0.0001 \\ 0.0001 \end{array} +& \begin{array}{c} 1 \\ 0.5 \\ 0.0001 \end{array} +& \left| \begin{array}{c} 0 \\ 1 \\ 1 \end{array} \right] +\end{array} \\ \\ +\begin{array}{c} \\ E_{2j}-0.0001E_{1j} \\ E_{3j}-E_{1j} \\ \rightarrow +\end{array} +& \begin{array}{ccccc} +& x_2 & x_1 & x_3 & \\ +\left. \begin{array}{c} \\ \\ \\ \end{array} \right[ +& \begin{array}{c} 1 \\ 0 \\ 0 \end{array} +& \begin{array}{c} 0.5 \\ 0.00005 \\ -0.4999 \end{array} +& \begin{array}{c} 1 \\ 0.4999 \\ -0.9999 \end{array} +& \left| \begin{array}{c} 0 \\ 1 \\ 1 \end{array} \right] +\end{array} +\end{array}\end{split}\]
+
+
+

If we assume rounding is performed, then the entries are rounded off:

+
+\[\begin{split}\begin{array}{cl} +& \begin{array}{ccccc} +& x_2 & x_1 & x_3 & \\ +\left. \begin{array}{c} \\ \\ \\ \end{array} \right[ +& \begin{array}{c} 1 \\ 0 \\ 0 \end{array} +& \begin{array}{c} 0.5 \\ 0.00005 \\ -0.5 \end{array} +& \begin{array}{c} 1 \\ 0.5 \\ -1 \end{array} +& \left| \begin{array}{c} 0 \\ 1 \\ 1 \end{array} \right] +\end{array} \\ \\ +\begin{array}{c} \\ -E_{3j} \\ +E_{2j} \leftrightarrow E_{3j} \\ +E_{i2} \leftrightarrow E_{i3} \\ +\rightarrow +\end{array} +& \begin{array}{ccccc} +& x_2 & x_3 & x_1 & \\ +\left. \begin{array}{c} \\ \\ \\ \end{array} \right[ +& \begin{array}{c} 1 \\ 0 \\ 0 \end{array} +& \begin{array}{c} 1 \\ 1 \\ 0.5 \end{array} +& \begin{array}{c} 0.5 \\ 0.5 \\ 0.00005 \end{array} +& \left| \begin{array}{r} 0 \\ -1 \\ 1 \end{array} \right] +\end{array} \\ \\ +\begin{array}{c} E_{1j}-E_{2j} \\ +E_{3j}-0.5 E_{2j} \\ +\rightarrow +\end{array} +& \begin{array}{ccccc} +& x_2 & x_3 & x_1 & \\ +\left. \begin{array}{c} \\ \\ \\ \end{array} \right[ +& \begin{array}{c} 1 \\ 0 \\ 0 \end{array} +& \begin{array}{c} 0 \\ 1 \\ 0 \end{array} +& \begin{array}{c} 0 \\ 0.5 \\ -0.24995 \end{array} +& \left| \begin{array}{r} 1 \\ -1 \\ 1.5 \end{array} \right] +\end{array} +\end{array}\end{split}\]
+
+
+

Rounding off the matrix again:

+
+\[\begin{split}\begin{array}{cl} +\begin{array}{c} \rightarrow \end{array} +& \begin{array}{ccccc} +& x_2 & x_3 & x_1 & \\ +\left. \begin{array}{c} \\ \\ \\ \end{array} \right[ +& \begin{array}{c} 1 \\ 0 \\ 0 \end{array} +& \begin{array}{c} 0 \\ 1 \\ 0 \end{array} +& \begin{array}{c} 0 \\ 0.5 \\ -0.25 \end{array} +& \left| \begin{array}{r} 1 \\ -1 \\ 1.5 \end{array} \right] +\end{array} \\ \\ +\begin{array}{c} E_{2j}-2E_{3j} \\ +4E_{3j} \\ +\rightarrow \end{array} +& \begin{array}{ccccc} +& x_2 & x_3 & x_1 & \\ +\left. \begin{array}{c} \\ \\ \\ \end{array} \right[ +& \begin{array}{c} 1 \\ 0 \\ 0 \end{array} +& \begin{array}{c} 0 \\ 1 \\ 0 \end{array} +& \begin{array}{c} 0 \\ 0 \\ 1 \end{array} +& \left| \begin{array}{r} 1 \\ 2 \\ -6 \end{array} \right] +\end{array} +\end{array}\end{split}\]
+
+
+

So reading from the matrix, \(x_1\) = -6, \(x_2\) = 1 and \(x_3\) = 2. Compare this with the answer you get with Python, which is \(x_1 +\approx\) -6.0028, \(x_2 \approx\) 1.0004 and \(x_3 \approx\) 2.0010 .

+

Using full pivoting with Gaussian elimination, expansion of the error by large factors is avoided. In addition, the approximated solution, using rounding (which is analogous to the use of floating point approximations), is close to the correct answer.

+
+
+

You can try the example with Python.
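A minimal sketch for this check (the matrix and right hand side are the ones defined in Example Five; the names are just placeholders):

import numpy as np

A_pp = np.array([[0.0001, 0.0001, 0.5],
                 [0.5,    1.,     1.],
                 [0.0001, 1.,     0.0001]])
b_p = np.array([1., 0., 1.])
print(np.linalg.cond(A_pp))
print(np.linalg.solve(A_pp, b_p))   # compare with the rounded hand calculation above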

+
+

🐾

+

Numpy and Python with Matrices

+
+
[ ]:
+
+
+

+
+
+
+
+
+

Summary

+

In a system \(Ax = b\), if round-off errors in \(A\) or \(b\) affect the system such that it may not give a solution that is close to the real answer, then \(A\) is ill-conditioned, and it has a very large condition number.

+

Sometimes, due to a poor algorithm, such as Gaussian elimination without pivoting, a matrix may appear to be ill-conditioned, even though it is not. By applying partial pivoting, this problem is reduced, but partial pivoting will not always eliminate the effects of round-off error. An even better way is to apply full pivoting. Of course, the drawback of this method is that the computation is more expensive than plain Gaussian elimination.

+

An important point to remember is that partial and full pivoting minimize the effects of round-off error for well-conditioned matrices. If a matrix is ill-conditioned, these methods will not provide a solution that approximates the real answer. As an exercise, you can try to apply full pivoting to the ill-conditioned matrix \(A\) seen at the beginning of this section (Example Two). You will find that the solution is still inaccurate.

+
+
+
+
+

Matrix Inversion

+

Given a square matrix \(A\), if there is a matrix that will “cancel” \(A\), then that matrix is the inverse of \(A\). In other words, the matrix, multiplied by its inverse, gives the identity matrix \(I\).

+

Try to find the inverse for the following matrices:

+
    +
  1. +\[\begin{split}A = \left[ \begin{array}{rrr} 1 & -2 & 1 \\ + 3 & 1 & -1 \\ + -1 & 9 & -5 \end{array} \right]\end{split}\]
    +
  2. \[\begin{split}B = \left[ \begin{array}{rrr} 5 & -2 & 4 \\ -3 & 1 & -5 \\ 2 & -1 & 3 \end{array} \right]\end{split}\]
+

The solutions to these exercises are available here

+

After solving the questions by hand, you can use Python to check your answer.
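A possible numpy check (np.linalg.inv raises an error for a singular matrix, so it can be useful to look at the determinant first):

import numpy as np

A = np.array([[ 1., -2.,  1.],
              [ 3.,  1., -1.],
              [-1.,  9., -5.]])
B = np.array([[ 5., -2.,  4.],
              [-3.,  1., -5.],
              [ 2., -1.,  3.]])
for name, M in [('A', A), ('B', B)]:
    d = np.linalg.det(M)
    print(name, 'determinant:', d)
    if abs(d) > 1e-12:
        print(np.linalg.inv(M))   # check: M @ inv(M) should give the identity
    else:
        print(name, 'is singular, so it has no inverse')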

+

🐾

+

Numpy and Python with Matrices

+
+
[ ]:
+
+
+

+
+
+
+
+

Determinant

+

Every square matrix \(A\) has a scalar associated with it. This number is the determinant of the matrix, represented as \(\det(A)\). Its absolute value is the volume of the parallelepiped (a parallelogram in two dimensions) that can be generated from the rows of \(A\).

+

A few special properties to remember about determinants:

+
    +
  1. \(A\) must be a square matrix.

  2. If \(A\) is singular, \(\det(A) = 0\), i.e. \(A\) does not have an inverse.

  3. The determinant of a \(2 \times 2\) matrix is just the difference between the products of the diagonals, i.e.

     \[\begin{split}\det \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] = ad - bc\end{split}\]

  4. For any diagonal, upper triangular or lower triangular matrix \(A\), \(\det(A)\) is the product of all the entries on the diagonal.
+
+

Example Six

+
+
+\[\begin{split}\begin{array}{cl} +& \det \left[ +\begin{array}{rrr} +2 & 5 & -8 \\ +0 & 1 & 7 \\ +0 & 0 & -4 +\end{array} \right] \\ \\ += & 2 \times 1 \times -4 \\ += & -8 +\end{array}\end{split}\]
+
+
+

Graphically, the parallelepiped looks as follows:

+
+

[figure: the parallelepiped generated by the rows of the matrix in Example Six]

+

The basic procedure in finding the determinant of a matrix larger than 2 \(\times\) 2 is to calculate the product of the non-zero entries in each of the permutations of the matrix, and then add these products together with the appropriate signs. A permutation of a matrix \(A\) is a matrix of the same size with one element from each row and column of \(A\). The sign of each permutation is \(+\) or \(-\) depending on whether the permutation is even or odd. This is illustrated in the following example …

+
+
+

Example Seven

+
+
\[\begin{split}A = \left[ \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{array} \right]\end{split}\]
+
+
+

will have the following permutations:

+
\[\begin{split}\begin{array}{cccccc}
+\left[ \begin{array}{ccc} 1 & & \\ & 5 & \\ & & 9 \end{array} \right] & , &
+\left[ \begin{array}{ccc} & 2 & \\ & & 6 \\ 7 & & \end{array} \right] & , &
+\left[ \begin{array}{ccc} & & 3 \\ 4 & & \\ & 8 & \end{array} \right] & , \\ \\
-\left[ \begin{array}{ccc} 1 & & \\ & & 6 \\ & 8 & \end{array} \right] & , &
-\left[ \begin{array}{ccc} & 2 & \\ 4 & & \\ & & 9 \end{array} \right] & , &
-\left[ \begin{array}{ccc} & & 3 \\ & 5 & \\ 7 & & \end{array} \right] & .
\end{array}\end{split}\]
+
+
+

The determinant of the above matrix is then given by

+
\[\begin{split}\begin{array}{ccl}
\det(A) & = & 1\cdot 5\cdot 9 + 2 \cdot 6 \cdot 7 + 3 \cdot 4 \cdot 8 - 1 \cdot 6 \cdot 8 - 2 \cdot 4 \cdot 9 - 3 \cdot 5 \cdot 7 \\
& = & 0
\end{array}\end{split}\]
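As a side note (not part of the lab), the permutation expansion above can be checked numerically with a brute-force function; the name det_by_permutations is only illustrative, and the test matrix is the example just computed:

import itertools
import numpy as np

def det_by_permutations(M):
    """Sum of sign(p) * product of the entries M[i, p[i]] over all permutations p."""
    n = M.shape[0]
    total = 0.0
    for perm in itertools.permutations(range(n)):
        # the sign is +1 for an even permutation, -1 for an odd one
        inversions = sum(perm[i] > perm[j] for i in range(n) for j in range(i + 1, n))
        sign = -1 if inversions % 2 else 1
        total += sign * np.prod([M[i, perm[i]] for i in range(n)])
    return total

A = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
print(det_by_permutations(A), np.linalg.det(A))   # both essentially zero (up to round-off)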
+
+

For each of the following matrices, determine whether or not it has an inverse:

+
  1. \[\begin{split}A = \left[ \begin{array}{rrr} 3 & -2 & 1 \\ 1 & 5 & -1 \\ -1 & 0 & 0 \end{array} \right]\end{split}\]

  2. \[\begin{split}B = \left[ \begin{array}{rrr} 4 & -6 & 1 \\ 1 & -3 & 1 \\ 2 & 0 & -1 \end{array} \right]\end{split}\]

  3. Try to solve this by yourself first, and use Python to check your answer:

     \[\begin{split}C = \left[ \begin{array}{rrrr} 4 & -2 & -7 & 6 \\ -3 & 0 & 1 & 0 \\ -1 & -1 & 5 & -1 \\ 0 & 1 & -5 & 3 \end{array} \right]\end{split}\]

The solutions to these exercises are available here

+

After solving the questions by hand, you can use Python to check your answer.

+

🐾

+

Numpy and Python with Matrices
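A quick way to do that check (a sketch, assuming numpy is imported as np) is:

import numpy as np

# Matrices from the exercise above
A = np.array([[3, -2, 1], [1, 5, -1], [-1, 0, 0]])
B = np.array([[4, -6, 1], [1, -3, 1], [2, 0, -1]])
C = np.array([[4, -2, -7, 6], [-3, 0, 1, 0], [-1, -1, 5, -1], [0, 1, -5, 3]])

for name, M in [("A", A), ("B", B), ("C", C)]:
    # a determinant of (essentially) zero means the matrix has no inverse
    print(f"det({name}) = {np.linalg.det(M):.4f}")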

+
+
[ ]:
+
+
+

+
+
+
+
+
+
+

Computational cost of Gaussian elimination

+

Although Gaussian elimination is a basic and relatively simple technique to find the solution of a linear system, it is a costly algorithm. Here is an operation count of this method:

+

For an \(n \times n\) matrix, there are two kinds of operations to consider:

+
  1. division (div) - to find the multiplier from a chosen pivot

  2. multiplication-subtraction (mult/sub) - to calculate new entries for the matrix

Note that an addition or subtraction operation has negligible cost in relation to a multiplication or division operation, so each subtraction can be counted together with its multiplication as a single mult/sub operation. The first pivot is selected from the first row in the matrix. For each of the remaining rows, one div and \((n-1)\) mult/sub operations are used to find the new entries, so there are \(n\) operations performed on each row. With \((n-1)\) rows, there are a total of \(n(n-1) = n^{2}-n\) operations associated with this pivot.

+
+
(Since the subtraction is counted together with the multiplication, each row above required \((n-1)\) mult/sub operations rather than \(2(n-1)\).) For the second pivot, which is selected from the second row of the matrix, a similar analysis applies. In the remaining \((n-1) \times (n-1)\) matrix, each row requires one div and \((n-2)\) mult/sub operations, so this pivot contributes a total of \((n-1)(n-2) = (n-1)^{2} - (n-1)\) operations.
+
+
+
For the rest of the pivots, the number of operations for a remaining \(k \times k\) matrix is \(k^{2} - k\).
+
+

The following is obtained when all the operations are added up:

+
\[\begin{split}\begin{array}{l}
(1^{2}+\ldots +n^{2}) - (1+\ldots +n) \\ \\
= \frac{n(n+1)(2n+1)}{6} - \frac{n(n+1)}{2} \\ \\
= \frac{n^{3}-n}{3} \\ \\
\approx O(n^{3})
\end{array}\end{split}\]
+

As one can see, Gaussian elimination is an \(O(n^{3})\) algorithm. For large matrices, this can be prohibitively expensive. There are other methods which are more efficient, e.g. see Iterative Methods.
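To see the \(O(n^{3})\) growth empirically, one can time a library solve for a few matrix sizes. This is only a rough illustration (np.linalg.solve uses an LU factorization, which has the same \(O(n^{3})\) cost as Gaussian elimination, and the timings depend on your machine):

import time
import numpy as np

for n in [200, 400, 800]:
    A = np.random.rand(n, n)
    b = np.random.rand(n)
    start = time.perf_counter()
    np.linalg.solve(A, b)
    elapsed = time.perf_counter() - start
    print(f"n = {n:4d}: {elapsed:.4f} s")
# doubling n should increase the time by very roughly a factor of 8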

+
+

Problem One

+

Consider a very simple three-box model of the movement of a pollutant in the atmosphere, fresh water and ocean. The mass of the atmosphere is MA (5600 x 10\(^{12}\) tonnes), the mass of the fresh water is MF (360 x 10\(^{12}\) tonnes) and the mass of the upper layers of the ocean is MO (50,000 x 10\(^{12}\) tonnes). The amount of pollutant in the atmosphere is A, the amount in the fresh water is F and the amount in the ocean is O. So A, F, and O have units of tonnes.

+

The pollutant is going directly into the atmosphere at a rate P1 = 1000 tonnes/year and into the fresh-water system at a rate P2 = 2000 tonnes/year. The pollutant diffuses between the atmosphere and ocean at a rate depending linearly on the difference in concentration, with a diffusion constant L1 = 200 tonnes/year. The diffusion between the fresh-water system and the atmosphere is faster as the fresh water is shallower, L2 = 500 tonnes/year. The fresh-water system empties into the ocean at the rate of Q = 36 x 10\(^{12}\) tonnes/year. Lastly, the pollutant decays (like radioactivity) at a rate L3 = 0.05 /year.

+

See the graphical presentation of the cycle described above in Figure Box Model Schematic for Problem 1. Set up a notebook to answer this question. When you have finished, you can print it to pdf.

+
  a. Consider the steady state. There is no change in A, O, or F. Write down the three linear governing equations in a text cell. Write the equations as an augmented matrix in a text cell. Then use a computational cell to find the solution.

  b. Show mathematically that there is no solution to this problem with L3 = 0. Explain in a text cell why, physically, there is no solution.

  c. Show mathematically that there is an infinite number of solutions if L3 = 0 and P1 = P2 = 0. Explain in a text cell why this is true from a physical argument.

  d. For part c) above, explain in a text cell what needs to be specified in order to determine a single physical solution. Explain in a text cell how you would put this in the matrix equation.


+

Figure Box Model: Schematic for Problem One.

+
+
+
+
+

Eigenvalue Problems

+

This section is a review of eigenvalues and eigenvectors.

+
+

Characteristic Equation

+

The basic equation for eigenvalue problems is the characteristic equation, which is:

+
+\[\det( A - \lambda I ) = 0\]
+

where \(A\) is a square matrix, \(I\) is an identity matrix the same size as \(A\), and \(\lambda\) is an eigenvalue of the matrix \(A\).

+

In order for a number to be an eigenvalue of a matrix, it must satisfy the characteristic equation, for example:

+
+

Example Eight

+
+

Given

+
+
+
+\[\begin{split}A = \left[ \begin{array}{rr} 3 & -2 \\ -4 & 5 \end{array} \right]\end{split}\]
+

To find the eigenvalues of \(A\), you need to solve the characteristic equation for all possible \(\lambda\).

+
+
+
\[\begin{split}\begin{array}{ccl}
0 & = & \det (A - \lambda I) \\
& = & \det \left( \left[ \begin{array}{rr} 3 & -2 \\ -4 & 5 \end{array} \right] - \lambda \left[ \begin{array}{rr} 1 & 0 \\ 0 & 1 \end{array} \right] \right) \\ \\
& = & \det \left[ \begin{array}{cc} 3-\lambda & -2 \\ -4 & 5-\lambda \end{array} \right] \\ \\
& = & (3-\lambda)(5-\lambda) - (-2)(-4) \\ \\
& = & (\lambda - 1)(\lambda - 7)
\end{array}\end{split}\]
+

So, \(\lambda = 1 \mbox{ or } 7\), i.e. the eigenvalues of the matrix \(A\) are 1 and 7.

+
+
+

You can use Python to check this answer.
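For instance (a one-line check, assuming numpy is imported as np):

import numpy as np

A = np.array([[3, -2], [-4, 5]])
print(np.linalg.eigvals(A))   # expect the eigenvalues 1 and 7, possibly in a different order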

+
+

Find the eigenvalues of the following matrix:

+
\[\begin{split}B = \left[ \begin{array}{ccc} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{array} \right]\end{split}\]
+

The solution to this problem is available here

+

After solving the questions by hand, you can use Python to check your answer.

+

🐾

+

Numpy and Python with Matrices
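Again, a minimal check with Python might look like this (the variable name B simply matches the exercise above):

import numpy as np

B = np.array([[3, 2, 4], [2, 0, 2], [4, 2, 3]])
print(np.linalg.eigvals(B))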

+
+
[ ]:
+
+
+

+
+
+
+
+
+

Condition Number

+

The eigenvalues of a matrix \(A\) can be used to calculate an approximation to the condition number \(K\) of the matrix, i.e.

+
+\[K \approx \left| \frac{\lambda_{max}}{\lambda_{min}} \right|\]
+

where \(\lambda_{max}\) and \(\lambda_{min}\) are the maximum and minimum eigenvalues of \(A\). When \(K\) is large, i.e. when \(\lambda_{max}\) and \(\lambda_{min}\) are far apart, \(A\) is ill-conditioned.

+

The mathematical definition of \(K\) is

+
+\[K = \|A\|\|A^{-1}\|\]
+

where \(\|\cdot\|\) represents the norm of a matrix.

+

There are a few norms which can be chosen for the formula. The default one used in Python for finding \(K\) is the 2-norm of the matrix. To see how to compute the norm of a matrix, see a linear algebra text. The main point here, though, is the formula itself, and the fact that it can be very expensive to compute; in particular, computing \(A^{-1}\) is the costly operation.

+

Note: in Python, the result returned by np.linalg.cond(\(A\)) is itself subject to round-off error.

+

For the matrices in this section (A from Example 8 and B just below it) for which you have found the eigenvalues, use the built-in Python function np.linalg.cond(\(A\)) to find \(K\), and compare this result with the \(K\) approximated from the eigenvalues.
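One possible sketch of that comparison (assuming numpy is imported as np; the loop and variable names are ours):

import numpy as np

A = np.array([[3, -2], [-4, 5]])
B = np.array([[3, 2, 4], [2, 0, 2], [4, 2, 3]])

for name, M in [("A", A), ("B", B)]:
    lamb = np.linalg.eigvals(M)
    approx = abs(lamb).max() / abs(lamb).min()   # |lambda_max / lambda_min|
    exact = np.linalg.cond(M)                    # default 2-norm condition number
    print(f"{name}: eigenvalue estimate = {approx:.3f}, np.linalg.cond = {exact:.3f}")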

+
+
+
+

Eigenvectors

+

Another way to look at the characteristic equation is in terms of vectors instead of the determinant. For a number to be an eigenvalue of a matrix, it must satisfy this equation:

+
+\[( A - \lambda I ) x = 0\]
+

where \(A\) is a \(n \times n\) square matrix, \(I\) is an identity matrix the same size as \(A\), \(\lambda\) is an eigenvalue of \(A\), and \(x\) is a non-zero vector associated with the particular eigenvalue that will make this equation true. This vector is the eigenvector. The eigenvector is not necessarily unique for an eigenvalue. This will be further discussed below after the example.

+

The above equation can be rewritten as:

+
+\[A x = \lambda x\]
+

For each eigenvalue of \(A\), there is a corresponding eigenvector. Below is an example.

+
+

Example Nine

+
+

Following the example from the previous section:

+
+
+
+\[\begin{split}A = \left[ \begin{array}{rr} 3 & -2 \\ -4 & 5 \end{array} \right]\end{split}\]
+

The eigenvalues, \(\lambda\), for this matrix are 1 and 7. To find the eigenvectors for the eigenvalues, you need to solve the equation:

+
+
+
+\[( A - \lambda I ) x = 0.\]
+

This is just a linear system \(A^{\prime}x = b\), where \(A^{\prime} = ( A - \lambda I )\), \(b = 0\). To find the eigenvectors, you need to find the solution to this augmented matrix for each \(\lambda\) respectively,

+
+
+
\[\begin{split}\begin{array}{cl}
& ( A - \lambda I ) x = 0 \ \ \ \ \ \ \ \ {\rm where} \ \ \ \ \ \ \lambda = 1 \\
\; & \; \\
\rightarrow &
\left( \begin{array}{ccc}
\left[ \begin{array}{rr} 3 & -2 \\ -4 & 5 \end{array} \right]
& - &
1 \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right]
\end{array} \right) x = 0 \\ \\
\rightarrow &
\left[ \begin{array}{cc}
\begin{array}{rr} 2 & -2 \\ -4 & 4 \end{array}
& \left| \begin{array}{c} 0 \\ 0 \end{array} \right]
\end{array} \right. \\ \\
\rightarrow &
\left[ \begin{array}{cc}
\begin{array}{rr} 1 & -1 \\ 0 & 0 \end{array}
& \left| \begin{array}{c} 0 \\ 0 \end{array} \right]
\end{array} \right.
\end{array}\end{split}\]
+

Reading from the matrix,

+
+
+
\[\begin{split}\begin{array}{ccccc} x_1 & - & x_2 & = & 0 \\
& & 0 & = & 0 \end{array}\end{split}\]
+
+
As mentioned before, the eigenvector is not unique for a given eigenvalue. As seen here, the solution to the matrix is a description of the direction of the vectors that will satisfy \(Ax = \lambda x\). Letting \(x_1 = 1\), then \(x_2 = 1\). So the vector (1, 1) is an eigenvector for the matrix \(A\) when \(\lambda = 1\). (So is (-1,-1), (2, 2), etc)
+
+
+
+

In the same way for \(\lambda = 7\), the solution is

+
\[\begin{split}\begin{array}{ccccc} 2 x_1 & + & x_2 & = & 0 \\
& & 0 & = & 0 \end{array}\end{split}\]
+
+
+

So an eigenvector here is x = (1, -2).

+
+
+
[ ]:

# Using Python:
import numpy as np

A = np.array([[3, -2], [-4, 5]])
lamb, x = np.linalg.eig(A)   # eigenvalues in lamb, unit eigenvectors in the columns of x
print(lamb)
print(x)
+
+
+
+
+

Matrix \(x\) is the same size as \(A\), and the vector \(lamb\) has length equal to one dimension of \(A\). Each column of \(x\) is a unit eigenvector of \(A\), and the \(lamb\) values are the eigenvalues of \(A\). Reading from the result, for \(\lambda = 1\), the corresponding unit eigenvector is (-0.70711, -0.70711). The answer from working out the example by hand is (1, 1), which is a multiple of the unit eigenvector from Python.

+
+
+

(The unit eigenvector is found by dividing the eigenvector by its magnitude. In this case, \(\mid\)(1,1)\(\mid\) = \(\sqrt{1^2 + 1^2}\) = \(\sqrt{2}\), and so the unit eigenvector is (\(\frac{1}{\sqrt{2}}, \frac{1}{\sqrt{2}}\)) ).

+

Remember that the solution for an eigenvector is not the unique answer; it only represents a direction for an eigenvector corresponding to a given eigenvalue.

+
+

What are the eigenvectors for the matrix \(B\) from the previous section?

+
\[\begin{split}B = \left[ \begin{array}{ccc} 3 & 2 & 4 \\ 2 & 0 & 2 \\ 4 & 2 & 3 \end{array} \right]\end{split}\]
+

The solution to this problem is available here

+

After solving the questions by hand, you can use Python to check your answer.

+

🐾

+

Numpy and Python with Matrices
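For example (a small check, reusing the matrix \(B\) above):

import numpy as np

B = np.array([[3, 2, 4], [2, 0, 2], [4, 2, 3]])
lamb, x = np.linalg.eig(B)
print(lamb)      # the eigenvalues
print(x)         # column x[:, i] is a unit eigenvector for lamb[i]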

+

Although the method used here to find the eigenvalues is a direct way to find the solution, it is not very efficient, especially for large matrices. Typically, iterative methods such as the Power Method or the QR algorithm are used (see a linear algebra text such as Strang (1988) for more details).

+
+
[ ]:
+
+
+

+
+
+
+
+
+
+
+

Iterative Methods

+

So far, the only method we’ve seen for solving systems of linear equations is Gaussian Elimination (with its pivoting variants), which is only one of a class of direct methods. This name derives from the fact that the Gaussian Elimination algorithm computes the exact solution directly, in a finite number of steps. Other commonly-used direct methods are based on matrix decomposition or factorizations different from the \(LU\) decomposition (see Section Decomposition); for example, the \(LDL^T\) and Choleski factorizations of a matrix. When the system to be solved is not too large, it is usually most efficient to employ a direct technique that minimizes the effects of round-off error (for example, Gaussian elimination with full pivoting).

+

However, the matrices that occur in the discretization of differential equations are typically very large and sparse – that is, a large proportion of the entries in the matrix are zero. In this case, a direct algorithm, which has a cost on the order of \(N^3\) multiplicative operations, will be spending much of its time inefficiently, operating on zero entries. In fact, there is another class of solution algorithms called iterative methods which exploit the sparsity of such systems to reduce the cost of the solution procedure, down to order \(N^2\) for Jacobi’s method, the simplest of the iterative methods (see Lab #8), and as low as order \(N\) (the optimal order) for multigrid methods (which we will not discuss here).

+

Iterative methods are based on the principle of computing an approximate solution to a system of equations, where an iterative procedure is applied to improve the approximation at every iteration. While the exact answer is never reached, it is hoped that the iterative method will approach the answer more rapidly than a direct method. For problems arising from differential equations, this is often possible since these methods can take advantage of the presence of a large number of zeroes in the matrix. Even more importantly, most differential equations are only approximate models of real physical systems in the first place, and so in many cases, an approximation of the solution is sufficient!
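The details are left to Lab #8, but purely to make the idea of improving an approximation at every iteration concrete, here is a minimal Jacobi-style sketch of our own (not the lab’s code; it assumes a small, diagonally dominant matrix so that the iteration converges):

import numpy as np

def jacobi(A, b, iterations=50):
    """Very small Jacobi iteration sketch: x_new = D^{-1} (b - (A - D) x)."""
    D = np.diag(A)                     # diagonal entries of A
    R = A - np.diag(D)                 # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)  # initial guess
    for _ in range(iterations):
        x = (b - R @ x) / D            # update every component simultaneously
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # the two answers should agree closely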

+

None of the details of iterative methods will be discussed in this Lab. For now it is enough to know that they exist, and what type of problems they are used for. Neither will we address the questions: How quickly does an iterative method converge to the exact solution?, Does it converge at all?, and When are they more efficient than a direct method? Iterative methods will be discussed in more detail in Lab #8, when a large, sparse system appears in the discretization of a PDE describing the flow of water in the oceans.

+

For even more details on iterative methods, you can also look at Strang (1988) [p. 403ff.], or one of the several books listed in the Readings section from Lab #8 .

+
+
+

Solution of an ODE Using Linear Algebra

+

So far, we’ve been dealing mainly with matrices of dimension at most \(4\times 4\). If this were the largest linear system we ever had to solve, there would be no need for computers – everything could be done by hand! Nevertheless, even very simple differential equations lead to large systems of equations.

+

Consider the problem of finding the steady state heat distribution in a one-dimensional rod, lying along the \(x\)-axis between 0 and 1. We saw in Lab #1 that the temperature, \(u(x)\), can be described by a boundary value problem, consisting of the ordinary differential equation

+
+\[u_{xx} = f(x),\]
+

along with boundary values

+
+\[u(0)=u(1) = 0.\]
+

The only difference between this example and the one from Lab #1 is that the right hand side function, \(f(x)\), is non-zero, which corresponds to a heat source being applied along the length of the rod. The boundary conditions correspond to the ends of the rod being held at constant (zero) temperature – this type of condition is known as a fixed or Dirichlet boundary condition.

+

If we discretize this equation at the discrete points \(x_i=id\), \(i=0,1,\dots,N\), where \(d = 1/N\) is the grid spacing, then the ordinary differential equation can be approximated at an interior point \(x_i\) by the following difference equation:

+

(Discrete Differential Equation)

+
+\[\frac{u_{i+1} - 2u_i+u_{i-1}}{d^2} = f_i,\]
+

where \(f_i=f(x_i)\), and \(u_i\approx u(x_i)\) is an approximation of the steady state temperature at the discrete points. If we write out all of the equations, for the unknown values \(i=1,\dots,N-1\), along with the boundary conditions at \(i=0,N\), we obtain the following set of \(N+1\) equations in \(N+1\) unknowns:

+

(Differential System)

+
\[\begin{split}\begin{array}{ccccccccccc}
u_0 & & & & & & & & &=& 0 \\
u_0 &-&2u_1 &+& u_2 & & & & &=& d^2 f_1\\
& & u_1 &-& 2u_2 &+& u_3 & & &=& d^2f_2\\
& & & & & & \dots & & &=& \\
& & & &u_{N-2}&-& 2u_{N-1}&+& u_N &=& d^2f_{N-1}\\
& & & & & & & & u_N &=& 0
\end{array}\end{split}\]
+

Remember that this system, like any other linear system, can be written in matrix notation as

+

(Differential System Matrix)

+
\[\begin{split}\underbrace{\left[
\begin{array}{ccccccccc}
1 & 0 & & \dots & & & & & 0 \\
1 & -2 & 1 & 0 & \dots & & & & \\
0 & 1 & -2 & 1 & 0 & \dots & & & \\
& 0 & 1 & -2 & 1 & 0 & \dots & & \\
& & & & & & & & \\
\vdots & & & \ddots & \ddots & \ddots & \ddots & \ddots & \vdots \\
& & & & & & & & \\
& & & \dots & 0 & 1 & -2 & 1 & 0 \\
& & & & \dots & 0 & 1 & -2 & 1 \\
0 & & & & & \dots & & 0 & 1
\end{array}
\right]}_{A_1}
\underbrace{\left[
\begin{array}{c}
u_0 \\ u_1 \\ u_2 \\ u_3 \\ \ \\ \vdots \\ \ \\ u_{N-2} \\ u_{N-1} \\ u_N
\end{array}
\right]}_{U}
=
\underbrace{\left[
\begin{array}{c}
0 \\ d^2 f_1 \\ d^2 f_2 \\ d^2 f_3 \\ \ \\ \vdots \\ \ \\ d^2 f_{N-2} \\ d^2 f_{N-1} \\ 0
\end{array}
\right]}_{F}\end{split}\]
+

or, simply \(A_1 U = F\).

+

One question we might ask is: how well-conditioned is the matrix \(A_1\)? Or, in other words, how easy is this system to solve? To answer this question, we set aside the right hand side and consider only the matrix and its condition number. The size of the condition number is a measure of how expensive it will be to invert the matrix and hence solve the discrete boundary value problem.

+
+

Problem Two

+
+
  1. Using Python, compute the condition number for the matrix \(A_1\) from Equation Differential System Matrix for several values of \(N\) between 5 and 50. (Hint: This will be much easier if you write a small Python function that outputs the matrix \(A_1\) for a given value of \(N\); a sketch of such a function is given after this problem.)

  2. Can you conjecture how the condition number of \(A_1\) depends on \(N\)?

  3. Another way to write the system of equations is to substitute the boundary conditions into the equations, and thereby reduce the size of the problem to one of \(N-1\) equations in \(N-1\) unknowns. The corresponding matrix is simply the \(N-1\) by \(N-1\) submatrix of \(A_1\) from Equation Differential System Matrix
+
\[\begin{split}A_2 = \left[
\begin{array}{ccccccc}
-2 & 1 & 0 & \dots & & & 0 \\
1 & -2 & 1 & 0 & \dots & & \\
0 & 1 & -2 & 1 & 0 & \dots & \\
& & & & & & \\
\vdots & & \ddots & \ddots & \ddots & \ddots & \vdots\\
& & & & & & 0 \\
& & \dots & 0 & 1 & -2 & 1 \\
0 & & & \dots & 0 & 1 & -2 \\
\end{array}
\right]\end{split}\]
+

Does this change in the matrix make a significant difference in the condition number?
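As promised in the hint for Problem Two, here is one possible shape for such a helper; the function name dirichlet_matrix and the particular values of \(N\) shown are only illustrative, not prescribed by the lab:

import numpy as np

def dirichlet_matrix(N):
    """Assemble the (N+1) x (N+1) matrix A_1 for the Dirichlet problem."""
    A = np.zeros((N + 1, N + 1))
    A[0, 0] = 1.0                      # boundary row: u_0 = 0
    A[N, N] = 1.0                      # boundary row: u_N = 0
    for i in range(1, N):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    return A

for N in (5, 10, 20, 50):
    print(N, np.linalg.cond(dirichlet_matrix(N)))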

+
+

So far, we’ve only considered zero Dirichlet boundary values, \(u_0=0=u_N\). Let’s look at a few more types of boundary values …

+

Fixed (non-zero) boundary conditions:

+
+

If we fix the solution at the boundary to be some non-zero values, say by holding one end at temperature \(u_0=a\), and the other at temperature \(u_N=b\), then the matrix itself is not affected. The only thing that changes in Equation Differential System is that a term \(a\) is subtracted from the right hand side of the second equation, and a term \(b\) is subtracted from the RHS of the second-to-last equation. It is clear from what we’ve just said that non-zero Dirichlet boundary conditions have no effect at all on the matrix \(A_1\) (or \(A_2\)) since they modify only the right hand side.

+
+

No-flow boundary conditions:

+
+

These are conditions on the first derivative of the temperature

+
+\[u_x(0) = 0,\]
+

+
+\[u_x(1) = 0,\]
+

which are also known as Neumann boundary conditions. The requirement that the first derivative of the temperature be zero at the ends corresponds physically to the situation where the ends of the rod are insulated; that is, rather than fixing the temperature at the ends of the rod (as we did with the Dirichlet problem), we require instead that there is no heat flow in or out of the rod through the ends.

+
+
+

There is still one thing that is missing in the mathematical formulation of this problem: since only derivatives of \(u\) appear in the equations and boundary conditions, the solution is determined only up to a constant, and for there to be a unique solution, we must add an extra condition. For example, we could set

+
+\[u \left(\frac{1}{2} \right) = constant,\]
+

or, more realistically, say that the total heat contained in the rod is constant, or

+
+\[\int_0^1 u(x) dx = constant.\]
+

Now, let us look at the discrete formulation of the above problem …

+
+
+

The discrete equations do not change, except that discrete equations at \(i=0,N\) replace the Dirichlet conditions in Equation Differential System: (Neumann Boundary Conditions)

+
\[u_{-1} - 2u_0 + u_{1} = d^2 f_0 \quad {\rm and} \quad u_{N-1} - 2u_N + u_{N+1} = d^2 f_N\]
+

where we have introduced the additional ghost points, or fictitious points \(u_{-1}\) and \(u_{N+1}\), lying outside the boundary. The temperature at these ghost points can be determined in terms of values in the interior using the discrete version of the Neumann boundary conditions

+
+\[\frac{u_{-1} - u_1}{2d} = 0 \;\; \Longrightarrow \;\; u_{-1} = u_1,\]
+
+\[\frac{u_{N+1} - u_{N-1}}{2d} = 0 \;\; \Longrightarrow \;\; u_{N+1} = u_{N-1}.\]
+

Substitute these back into the Neumann Boundary Conditions to obtain

+
\[-2u_0 + 2u_1 = d^2 f_0 \quad {\rm and} \quad 2u_{N-1} - 2u_N = d^2 f_N .\]
+

In this case, the matrix is an \(N+1\) by \(N+1\) matrix, almost identical to Equation Differential System Matrix, but with the first and last rows slightly modified

+
\[\begin{split}A_3 = \left[
\begin{array}{ccccccc}
-2 & 2 & 0 & \dots & & & 0 \\
1 & -2 & 1 & 0 & \dots & & 0\\
0 & 1 & -2 & 1 & 0 & \dots & 0\\
& & & & & & \\
& & & \ddots & \ddots & \ddots & \\
& & & & & & \\
0 & & \dots & 0 & 1 & -2 & 1 \\
0 & & & \dots & 0 & 2 & -2
\end{array}
\right]\end{split}\]
+

This system is not solvable; that is, the matrix \(A_3\) above is singular (try it in Python to check for yourself; this should be easy by modifying the code from Problem Two). This is a discrete analogue of the fact that the continuous solution is not unique. The only way to overcome this problem is to add another equation for the unknown temperatures.
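A quick numerical check (again only a sketch; the helper name neumann_matrix is ours) confirms the singularity:

import numpy as np

def neumann_matrix(N):
    """Assemble the (N+1) x (N+1) matrix A_3 for the insulated-ends problem."""
    A = np.zeros((N + 1, N + 1))
    A[0, 0], A[0, 1] = -2.0, 2.0       # modified first row
    A[N, N - 1], A[N, N] = 2.0, -2.0   # modified last row
    for i in range(1, N):
        A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0
    return A

A3 = neumann_matrix(10)
print(np.linalg.det(A3))    # essentially zero: A_3 is singular
print(np.linalg.cond(A3))   # a very large condition number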

+
+

Physically the reason the problem is not unique is that we don’t know how hot the rod is. If we think of the full time dependent problem:

+
  1. Given fixed temperatures at the end points of the rod (Dirichlet), whatever the starting temperature of the rod, eventually the rod will reach equilibrium, with a temperature smoothly varying between the values (given) at the end points.

  2. However, if the rod is insulated (Neumann), no heat can escape and the final temperature will be related to the initial temperature. To solve this problem for the steady state we need to know one of:

     a. the initial temperature of the rod,

     b. the total heat of the rod,

     c. or a final temperature at some point of the rod.
+
+
+

Problem Three

+
+

How can we make the discrete Neumann problem solvable? Think in terms of discretizing the solvability conditions \(u(\frac{1}{2}) = c\) (condition c) above), or \(\int_0^1 u(x) dx = c\) (condition b) above), (the integral condition can be thought of as an average over the domain, in which case we can approximate it by the discrete average \(\frac{1}{N}(u_0+u_1+\dots+u_N)=c\)).

+
+
+
  1. Derive the matrix corresponding to the linear system to be solved in both of these cases.

  2. How does the conditioning of the resulting matrix depend on the size of the system?

  3. Is it better or worse than for Dirichlet boundary conditions?
+
+

Periodic boundary conditions:

+
+

This refers to the requirement that the temperature at both ends remains the same:

+
+\[u(0) = u(1).\]
+

Physically, you can think of this as joining the ends of the rod together, so that it is like a ring. From what we’ve seen already with the other boundary conditions, it is not hard to see that the discrete form of the one-dimensional diffusion problem with periodic boundary conditions leads to an \(N\times N\) matrix of the form

+
\[\begin{split}A_4 = \left[
\begin{array}{ccccccc}
-2 & 1 & 0 & \dots & & & 1 \\
1 & -2 & 1 & 0 & \dots & & 0\\
0 & 1 & -2 & 1 & 0 & \dots & 0\\
& & & & & & \\
& & & \ddots & \ddots & \ddots & \\
& & & & & & \\
0 & & \dots & 0 & 1 & -2 & 1 \\
1 & & & \dots & 0 & 1 & -2
\end{array}
\right],\end{split}\]
+

where the unknown temperatures are now \(u_i\), \(i=0,1,\dots, N-1\). The major change to the form of the matrix is that the elements in the upper right and lower left corners are now 1 instead of 0. Again the same problem of the invertibility of the matrix comes up. This is a symptom of the fact that the continuous problem does not have a unique solution. It can also be remedied by tacking on an extra condition, such as in the Neumann problem above.

+
+
+
+

Problem Four

+
+
  1. Derive the matrix \(A_4\) above using the discrete form Discrete Differential Equation of the differential equation and the periodic boundary condition.

  2. For the periodic problem (with the extra integral condition on the temperature) how does the conditioning of the matrix compare to that for the other two discrete problems?
+
+
+
+

Summary

+

As you will have found in these problems, the boundary conditions can have an influence on the conditioning of a discrete problem. Furthermore, the method of discretizing the boundary conditions may or may not have a large effect on the condition number. Consequently, we must take care when discretizing a problem in order to obtain an efficient numerical scheme.

+
+
+
+

References

+

Strang, G., 1986: Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA.

+

Strang, G., 1988: Linear Algebra and its Applications. Harcourt Brace Jovanovich, San Diego, CA, 2nd edition.

+
+
+

Numpy and Python with Matrices

+

To start, import numpy,

+

Enter:

+
+

import numpy as np

+
+

To enter a matrix,

+
\[\begin{split}A = \left[ \begin{array}{ccc} a, & b, & c \\ d, & e, & f \end{array} \right]\end{split}\]
+

Enter:

+
+

A = np.array([[a, b, c], [d, e, f]])

+
+

To add two matrices,

+
+\[C = A + B\]
+

Enter:

+
+

C = A + B

+
+

To multiply two matrices,

+
+\[C = A \cdot B\]
+

Enter:

+
+

C = np.dot(A, B)

+
+

To find the transpose of a matrix,

+
+\[C = A^{T}\]
+

Enter:

+
+

C = A.transpose()

+
+

To find the condition number of a matrix,

+
+

K = np.linalg.cond(A)

+
+

To find the inverse of a matrix,

+
+\[C = A^{-1}\]
+

Enter:

+
+

C = np.linalg.inv(A)

+
+

To find the determinant of a matrix,

+
+\[K = |A|\]
+

Enter:

+
+

K = np.linalg.det(A)

+
+

To find the eigenvalues of a matrix,

+

Enter:

+
+

lamb = np.linalg.eigvals(A)

+
+

To find the eigenvalues (lamb) and eigenvectors (x) of a matrix,

+

Enter:

+
+

lamb, x = np.linalg.eig(A)

+
+

lamb[i] are the eigenvalues and x[:, i] are the eigenvectors.

+

To print a matrix,

+
+\[C\]
+

Enter:

+
+

print(C)

+
+
+
+

Glossary

+

A

+

augmented matrix

+
+

The \(m \times (n+1)\) matrix representing a linear system, \(Ax = b\), with the right hand side vector appended to the coefficient matrix:

+
+\[\begin{split}\left[ +\begin{array}{cc} +\begin{array}{ccccc} +a_{11} & & \ldots & & a_{1n} \\ +\vdots & & \ddots & & \vdots \\ +a_{m1} & & \ldots & & a_{mn} +\end{array} +& +\left| +\begin{array}{rc} +& b_{1} \\ & \vdots \\ & b_{m} +\end{array} +\right. +\end{array} +\right]\end{split}\]
+
+
+

The rightmost column is the right hand side vector or augmented column.

+
+

C

+

characteristic equation

+
+

The equation:

+
+\[\det(A - \lambda I) = 0 , \ \ \ \ or \ \ \ \ Ax = \lambda x\]
+

where \(A\) is a square matrix, \(I\) is the identity matrix, \(\lambda\) is an eigenvalue of \(A\), and \(x\) is the corresponding eigenvector of \(\lambda\).

+
+

coefficient matrix

+
+

An \(m \times n\) matrix made up of the coefficients \(a_{ij}\) of the \(n\) unknowns from the \(m\) equations of a set of linear equations, where \(i\) is the row index and \(j\) is the column index:

+
+\[\begin{split}\left[ +\begin{array}{ccccccc} +& a_{11} & & \ldots & & a_{1n} & \\ +& \vdots & & \ddots & & \vdots & \\ +& a_{m1} & & \ldots & & a_{mn} & +\end{array} +\right]\end{split}\]
+
+

condition number

+
+

A number, \(K\), that measures the sensitivity of a nonsingular matrix, \(A\); i.e. given a system \(Ax = b\), \(K\) reflects how strongly small changes in \(A\) and \(b\) affect the solution. The matrix is well-conditioned if \(K\) is close to one. The number is defined as:

+
\[K(A) = \|A\| \|A^{-1}\| \ \ \ \ or \ \ \ \ K(A) = \frac{\lambda_{max}}{\lambda_{min}}\]
+

where \(\lambda_{max}\) and \(\lambda_{min}\) are the largest and smallest eigenvalues of \(A\) respectively.

+
+

D

+

decomposition

+
+

Factoring a matrix, \(A\), into two factors, e.g., the Gaussian elimination amounts to factoring \(A\) into a product of two matrices. One is the lower triangular matrix, \(L\), and the other is the upper triangular matrix, \(U\).

+
+

diagonal matrix

+
+

A square matrix with the entries \(a_{ij} = 0\) whenever \(i \neq j\).

+
+

E

+

eigenvalue

+
+

A number, \(\lambda\), that must satisfy the characteristic equation \(\det(A - \lambda I) = 0.\)

+
+

eigenvector

+
+

A vector, \(x\), which corresponds to an eigenvalue of a square matrix \(A\), satisfying the characteristic equation:

+
+\[Ax = \lambda x .\]
+
+

H

+

homogeneous equations

+
+

A set of linear equations, \(Ax = b\) with the zero vector on the right hand side, i.e. \(b = 0\).

+
+

I

+

inhomogeneous equations

+
+

A set of linear equations, \(Ax = b\) such that \(b \neq 0\).

+
+

identity matrix

+
+

A diagonal matrix with the entries \(a_{ii} = 1\):

+
+\[\begin{split}\left[ \begin{array}{ccccccc} +& 1 & 0 & \ldots & \ldots & 0 & \\ +& 0 & 1 & \ddots & & \vdots & \\ +& \vdots & \ddots & \ddots & \ddots & \vdots \\ +& \vdots & & \ddots & 1 & 0 & \\ +& 0 & \ldots & \ldots & 0 & 1 & +\end{array} \right]\end{split}\]
+
+

ill-conditioned matrix

+
+

A matrix with a large condition number, i.e., the matrix is not well-behaved, and small errors in the matrix can have large effects on the solution.

+
+

invertible matrix

+
+

A square matrix, \(A\), such that there exists another matrix, \(A^{-1}\), which satisfies:

+
+\[AA^{-1} = I \ \ \ \ and \ \ \ \ A^{-1}A = I\]
+
+
+

The matrix, \(A^{-1}\), is the inverse of \(A\). An invertible matrix is nonsingular.

+
+

L

+

linear system

+
+

A set of \(m\) equations in \(n\) unknowns:

+
+\[\begin{split}\begin{array}{ccccccc} +a_{11}x_{1} & + & \ldots & + & a_{1n}x_{n} & = & b_{1} \\ +a_{21}x_{1} & + & \ldots & + & a_{2n}x_{n} & = & b_{2} \\ +& & \vdots & & & & \vdots \\ +a_{m1}x_{1} & + & \ldots & + & a_{mn}x_{n} & = & b_{m} +\end{array}\end{split}\]
+

with unknowns \(x_{i}\) and coefficients \(a_{ij}, b_{j}\).

+
+

lower triangular matrix

+
+

A square matrix, \(L\), with the entries \(l_{ij} = 0\), whenever \(j > i\):

+
+\[\begin{split}\left[ +\begin{array}{ccccccc} +& * & 0 & \ldots & \ldots & 0 & \\ +& * & * & \ddots & & \vdots & \\ +& \vdots & & \ddots & \ddots & \vdots & \\ +& \vdots & & & * & 0 & \\ +& * & \ldots & \ldots & \ldots & * & +\end{array} +\right]\end{split}\]
+
+

N

+

nonsingular matrix

+
+

A square matrix, \(A\), that is invertible, i.e. the system \(Ax = b\) has a unique solution.

+
+

S

+

singular matrix

+
+

An \(n \times n\) matrix that is degenerate and does not have an inverse (refer to invertible), i.e., the system \(Ax = b\) does not have a unique solution.

+
+

sparse matrix

+
+

A matrix with a high percentage of zero entries.

+
+

square matrix

+
+

A matrix with the same number of rows and columns.

+
+

T

+

transpose

+
+

An \(n \times m\) matrix, \(A^{T}\), that has the columns of an \(m \times n\) matrix, \(A\), as its rows, and the rows of \(A\) as its columns, i.e. the entry \(a_{ij}\) in \(A\) becomes \(a_{ji}\) in \(A^{T}\), e.g.

+
+
+
+\[\begin{split}A = +\left[ \begin{array}{ccc} 1 & 2 & 3 \\ 4 & 5 & 6 \end{array} \right] +\ \ \rightarrow \ \ +A^{T} = +\left[ \begin{array}{cc} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{array} \right]\end{split}\]
+
+

tridiagonal matrix

+
+

A square matrix with the entries \(a_{ij} = 0\) whenever \(|i-j| > 1\):

+
+
+
+\[\begin{split}\left[ +\begin{array}{cccccccc} +& * & * & 0 & \ldots & \ldots & 0 & \\ +& * & * & * & \ddots & & \vdots & \\ +& 0 & * & \ddots & \ddots & \ddots & \vdots & \\ +& \vdots & \ddots & \ddots & \ddots & * & 0 & \\ +& \vdots & & \ddots & * & * & * & \\ +& 0 & \ldots & \ldots & 0 & * & * & +\end{array} +\right]\end{split}\]
+
+

U

+

unique solution

+
+

There is only one solution, \(x\), that satisfies a particular linear system, \(Ax = b\), for the given \(A\). That is, this linear system has exactly one solution. The matrix \(A\) of the system is invertible or nonsingular.

+
+

upper triangular matrix

+
+

A square matrix, \(U\), with the entries \(u_{ij} = 0\) whenever \(i > j\):

+
+
+
+\[\begin{split}\left[ +\begin{array}{ccccccc} +& * & \ldots & \ldots & \ldots & * & \\ +& 0 & * & & & \vdots & \\ +& \vdots & \ddots & \ddots & & \vdots & \\ +& \vdots & & \ddots & * & * & \\ +& 0 & \ldots & \ldots & 0 & * & +\end{array} +\right]\end{split}\]
+
+
+
[ ]:
+
+
+

+
+
+
+
+ + + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab3/01-lab3.ipynb b/notebooks/lab3/01-lab3.ipynb new file mode 100644 index 0000000..97420e6 --- /dev/null +++ b/notebooks/lab3/01-lab3.ipynb @@ -0,0 +1,2303 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 3: Linear Algebra (Sept. 12, 2017)\n", + "\n", + "Grace Yung" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## List of Problems\n", + "\n", + "- [Problem One](#Problem-One): Pollution Box Model\n", + "- [Problem Two](#Problem-Two): Condition number for Dirichlet problem\n", + "- [Problem Three](#Problem-Three): Condition number for Neumann problem\n", + "- [Problem Four](#Problem-Four): Condition number for periodic problem" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "The object of this lab is to familiarize you with some of the common\n", + "techniques used in linear algebra. You can use the Python software\n", + "package to solve some of the problems presented in the lab. There are\n", + "examples of using the Python commands as you go along in the lab. In\n", + "particular, after finishing the lab, you will be able to\n", + "\n", + "- Define: condition number, ill-conditioned matrix, singular matrix,\n", + " LU decomposition, Dirichlet, Neumann and periodic boundary\n", + " conditions.\n", + "\n", + "- Find by hand or using Python: eigenvalues, eigenvectors, transpose,\n", + " inverse of a matrix, determinant.\n", + "\n", + "- Find using Python: condition numbers.\n", + "\n", + "- Explain: pivoting.\n", + "\n", + "There is a description of using Numpy and Python for Matrices at the end of the lab. It includes a brief description of how to use the built-in functions introduced in\n", + "this lab. Just look for the paw prints:\n", + "\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n", + "\n", + "when you are not sure what\n", + "functions to use, and this will lead you to the mini-manual." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Prerequisites\n", + "\n", + "You should have had an introductory course in linear algebra." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# import the context to find files\n", + "import context\n", + "# import the quiz script\n", + "from numlabs.lab3 import quiz3\n", + "# import numpy\n", + "import numpy as np" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "## Linear Systems\n", + "\n", + "In this section, the concept of a matrix will be reviewed. The basic\n", + "operations and methods in solving a linear system are introduced as\n", + "well.\n", + "\n", + "Note: **Whenever you see a term you are not familiar with, you can find\n", + "a definition in the [glossary](#Glossary).**\n", + "\n", + "### What is a Matrix?\n", + "\n", + "Before going any further on how to solve a linear system, you need to\n", + "know what a linear system is. A set of m linear equations with n\n", + "unknowns: \n", + "\n", + "
(System of Equations)
\n", + "\n", + "$$\\begin{array}{ccccccc}\n", + "a_{11}x_{1} & + & \\ldots & + & a_{1n}x_{n} & = & b_{1} \\\\\n", + "a_{21}x_{1} & + & \\ldots & + & a_{2n}x_{n} & = & b_{2} \\\\\n", + " & & \\vdots & & & & \\vdots \\\\\n", + "a_{m1}x_{1} & + & \\ldots & + & a_{mn}x_{n} & = & b_{m}\n", + "\\end{array}\n", + "$$\n", + "\n", + "\n", + "can be represented as an augmented matrix:\n", + "\n", + "$$\\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{ccccc}\n", + " a_{11} & & \\ldots & & a_{1n} \\\\\n", + " \\vdots & & \\ddots & & \\vdots \\\\\n", + " a_{m1} & & \\ldots & & a_{mn}\n", + " \\end{array}\n", + "&\n", + " \\left|\n", + " \\begin{array}{rc}\n", + " & b_{1} \\\\ & \\vdots \\\\ & b_{m}\n", + " \\end{array}\n", + " \\right.\n", + "\\end{array}\n", + "\\right]$$\n", + "\n", + "Column 1 through n of this matrix contain the coefficients $a_{ij}$ of\n", + "the unknowns in the set of linear equations. The right most column is\n", + "the *augmented column*, which is made up of the coefficients of the\n", + "right hand side $b_i$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Quiz on Matrices\n", + "Which matrix matches this system of equations?\n", + "\n", + "$$\\begin{array}{lcr}\n", + "2x + 3y + 6z &=& 19\\\\\n", + "3x + 6y + 9z &=& 21\\\\\n", + "x + 5y + 10z &=& 0\n", + "\\end{array}$$\n", + "(A)\n", + "$$\\left[ \\begin{array}{ccc}\n", + "2 & 3 & 1 \\\\\n", + "3 & 6 & 5 \\\\\n", + "6 & 9 & 10\\\\\n", + "19 & 21 & 0\n", + "\\end{array}\n", + "\\right]$$\n", + "(B)\n", + "$$\\left[ \\begin{array}{ccc}\n", + "2 & 3 & 6\\\\\n", + "3 & 6 & 9\\\\\n", + "1 & 5 & 10\n", + "\\end{array}\n", + "\\right]$$\n", + "(C)\n", + "$$\\left[ \\begin{array}{ccc|c}\n", + "1 & 5 & 10 & 0\\\\\n", + "2 & 3 & 6 & 19\\\\\n", + "3 & 6 & 9 & 21\n", + "\\end{array}\n", + "\\right]$$\n", + "(D)\n", + "$$\\left[ \\begin{array}{ccc|c}\n", + "2 & 3 & 6 & -19\\\\\n", + "3 & 6 & 9 & -21\\\\\n", + "1 & 5 & 10 & 0 \n", + "\\end{array}\n", + "\\right]$$\n", + "\n", + "In the following, replace 'x' by 'A', 'B', 'C', or 'D' and run the cell." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print(quiz3.matrix_quiz(answer = 'xxx'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Quick Review\n", + "\n", + "[lab3:sec:quick]: (#Quick-Review)\n", + "\n", + "Here is a review on the basic matrix operations, including addition,\n", + "subtraction and multiplication. These are important in solving linear\n", + "systems.\n", + "\n", + "Here is a short exercise to see how much you remember:\n", + "\n", + "Let $x = \\left[ \\begin{array}{r} 2 \\\\ 2 \\\\ 7 \\end{array} \\right] , \n", + " y = \\left[ \\begin{array}{r} -5 \\\\ 1 \\\\ 3 \\end{array} \\right] , \n", + " A = \\left[ \\begin{array}{rrr} 3 & -2 & 10 \\\\ \n", + " -6 & 7 & -4 \n", + " \\end{array} \\right],\n", + " B = \\left[ \\begin{array}{rr} -6 & 4 \\\\ 7 & -1 \\\\ 2 & 9 \n", + " \\end{array} \\right]$\n", + "\n", + "Calculate the following:\n", + "\n", + "1. $x + y$\n", + "\n", + "2. $x^{T}y$\n", + "\n", + "3. $y-x$\n", + "\n", + "4. $Ax$\n", + "\n", + "5. $y^{T}A$\n", + "\n", + "6. $AB$\n", + "\n", + "7. $BA$\n", + "\n", + "8. $AA$\n", + "\n", + "The solutions to these exercises are available [here](lab3_files/quizzes/quick/quick.html)\n", + "\n", + "After solving the questions by hand, you can also use Python to check\n", + "your answers.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "## A cell to use Python to check your answers\n", + "x = np.array([2, 2, 7])\n", + "y = np.array([-5, 1, 3])\n", + "A = np.array([[3, -2, 10],\n", + " [-6, 7, -4]])\n", + "B = np.array([[-6, 4],\n", + " [7, -1],\n", + " [2, 9]])\n", + "print(f'(x+y) is {x+y}')\n", + "print(f'x^T y is {np.dot(x, y)}')\n", + "\n", + "## you do the rest!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Gaussian Elimination\n", + "\n", + "[lab3:sec:gaus]: <#Gaussian-Elimination> \"Gaussian Elimination\"\n", + "\n", + "The simplest method for solving a linear system is Gaussian elimination,\n", + "which uses three types of *elementary row operations*:\n", + "\n", + "- Multiplying a row by a non-zero constant ($kE_{ij}$)\n", + "\n", + "- Adding a multiple of one row to another ($E_{ij} + kE_{kj}$)\n", + "\n", + "- Exchanging two rows ($E_{ij} \\leftrightarrow E_{kj}$)\n", + "\n", + "Each row operation corresponds to a step in the solution of the [System\n", + "of Equations](#lab3:eq:system) where the equations are combined\n", + "together. *It is important to note that none of those operations changes\n", + "the solution.* There are two parts to this method: elimination and\n", + "back-substitution. The purpose of the process of elimination is to\n", + "eliminate the matrix entries below the main diagonal, using row\n", + "operations, to obtain a upper triangular matrix with the augmented\n", + "column. Then, you will be able to proceed with back-substitution to find\n", + "the values of the unknowns.\n", + "\n", + "Try to solve this set of linear equations:\n", + "\n", + "$$\\begin{array}{lrcrcrcr}\n", + " E_{1j}: & 2x_{1} & + & 8x_{2} & - & 5x_{3} & = & 53 \\\\\n", + " E_{2j}: & 3x_{1} & - & 6x_{2} & + & 4x_{3} & = & -48 \\\\\n", + " E_{3j}: & x_{1} & + & 2x_{2} & - & x_{3} & = & 13\n", + "\\end{array}$$\n", + "\n", + "The solution to this problem is available [here](lab3_files/quizzes/gaus/gaus.html)\n", + "\n", + "After solving the system by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Decomposition \n", + "[lab3:sec:decomp]: <#Decomposition> \"Decomposition\"\n", + "\n", + "Any invertible, square matrix, $A$, can be factored out into a product\n", + "of a lower and an upper triangular matrices, $L$ and $U$, respectively,\n", + "so that $A$ = $LU$. The $LU$- *decomposition* is closely linked to the\n", + "process of Gaussian elimination.\n", + "\n", + "#### Example One\n", + "\n", + "> Using the matrix from the system of the previous section (Sec\n", + "[Gaussian Elimination](#Gaussian-Elimination)), we have:\n", + "\n", + "> $$A = \\left[ \\begin{array}{rrr} 2 & 8 & -5 \\\\\n", + " 3 & -6 & 4 \\\\\n", + " 1 & 2 & -1 \\end{array} \\right]$$\n", + "\n", + "> The upper triangular matrix $U$ can easily be calculated by applying\n", + "Gaussian elimination to $A$:\n", + "\n", + "> $$\\begin{array}{cl}\n", + " \\begin{array}{c} E_{2j}-\\frac{3}{2}E_{1j} \\\\\n", + " E_{3j}-\\frac{1}{2}E_{1j} \\\\\n", + " \\rightarrow \\end{array}\n", + " & \n", + " \\left[ \\begin{array}{rrr} 2 & 8 & -5 \\\\\n", + " 0 & -18 & \\frac{23}{2} \\\\\n", + " 0 & -2 & \\frac{3}{2} \n", + " \\end{array} \\right] \\\\ \\\\\n", + " \\begin{array}{c} E_{3j}-\\frac{1}{9}E_{2j} \\\\\n", + " \\rightarrow \\end{array}\n", + " &\n", + " \\left[ \\begin{array}{rrr} 2 & 8 & -5 \\\\\n", + " 0 & -18 & \\frac{23}{2} \\\\\n", + " 0 & 0 & \\frac{2}{9}\n", + " \\end{array} \\right] = U\n", + "\\end{array}$$\n", + "\n", + "> Note that there is no row exchange.\n", + "\n", + "> The lower triangular matrix $L$ is calculated with the steps which lead\n", + "us from the original matrix to the upper triangular matrix, i.e.:\n", + "\n", + "> $$\\begin{array}{c} E_{2j}-\\frac{3}{2}E_{1j} \\\\ \\\\\n", + " E_{3j}-\\frac{1}{2}E_{1j} \\\\ \\\\\n", + " E_{3j}-\\frac{1}{9}E_{2j} \n", + "\\end{array}$$\n", + "\n", + "> Note that each step is a multiple $\\ell$ of equation $m$ subtracted from\n", + "equation $n$. Each of these steps, in fact, can be represented by an\n", + "elementary matrix. $U$ can be obtained by multiplying $A$ by this\n", + "sequence of elementary matrices.\n", + "\n", + "> Each of the elementary matrices is composed of an identity matrix the\n", + "size of $A$ with $-\\ell$ in the ($m,n$) entry. So the steps become:\n", + "\n", + "> $$\\begin{array}{ccc} \n", + " E_{2j}-\\frac{3}{2}E_{1j} & \n", + " \\rightarrow &\n", + " \\left[ \\begin{array}{rcc} 1 & 0 & 0 \\\\\n", + " -\\frac{3}{2} & 1 & 0 \\\\\n", + " 0 & 0 & 1\n", + " \\end{array} \\right] = R \\\\ \\\\\n", + " E_{3j}-\\frac{1}{2}E_{1j} &\n", + " \\rightarrow &\n", + " \\left[ \\begin{array}{rcc} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " -\\frac{1}{2} & 0 & 1 \n", + " \\end{array} \\right] = S \\\\ \\\\\n", + " E_{3j}-\\frac{1}{9}E_{2j} &\n", + " \\rightarrow &\n", + " \\left[ \\begin{array}{crc} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " 0 & -\\frac{1}{9} & 1\n", + " \\end{array} \\right] = T\n", + "\\end{array}$$\n", + "\n", + "> and $TSRA$ = $U$. 
Check this with Python.\n", + "\n", + "> To get back from $U$ to $A$, the inverse of $R$, $S$ and $T$ are\n", + "multiplied onto $U$:\n", + "\n", + "> $$\\begin{array}{rcl}\n", + " T^{-1}TSRA & = & T^{-1}U \\\\\n", + " S^{-1}SRA & = & S^{-1}T^{-1}U \\\\ \n", + " R^{-1}RA & = & R^{-1}S^{-1}T^{-1}U \\\\\n", + "\\end{array}$$\n", + "\n", + "> So $A$ = $R^{-1}S^{-1}T^{-1}U$. Recall that $A$ = $LU$. If\n", + "$R^{-1}S^{-1}T^{-1}$ is a lower triangular matrix, then it is $L$.\n", + "\n", + "> The inverse of the elementary matrix is the same matrix with only one\n", + "difference, and that is, $\\ell$ is in the $a_{mn}$ entry instead of\n", + "$-\\ell$. So:\n", + "\n", + "> $$\\begin{array}{rcl}\n", + " R^{-1} & = & \\left[ \\begin{array}{rrr} 1 & 0 & 0 \\\\\n", + " \\frac{3}{2} & 1 & 0 \\\\\n", + " 0 & 0 & 1\n", + " \\end{array} \\right] \\\\ \\\\\n", + " S^{-1} & = & \\left[ \\begin{array}{rrr} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " \\frac{1}{2} & 0 & 1\n", + " \\end{array} \\right] \\\\ \\\\\n", + " T^{-1} & = & \\left[ \\begin{array}{rrr} 1 & 0 & 0 \\\\\n", + " 0 & 1 & 0 \\\\\n", + " 0 & \\frac{1}{9} & 1\n", + " \\end{array} \\right] \n", + "\\end{array}$$\n", + "\n", + "> Multiplying $R^{-1}S^{-1}T^{-1}$ together, we have:\n", + "\n", + "> $$\\begin{array}{rcl} R^{-1}S^{-1}T^{-1} \n", + " & = &\n", + " \\left[ \\begin{array}{ccc} 1 & 0 & 0 \\\\\n", + " \\frac{3}{2} & 1 & 0 \\\\\n", + " \\frac{1}{2} & \\frac{1}{9} & 1\n", + " \\end{array} \\right] = L\n", + "\\end{array}$$\n", + "\n", + "> So $A$ is factored into two matrices $L$ and $U$, where\n", + "\n", + "> $$\\begin{array}{ccc}\n", + " L = \\left[ \\begin{array}{ccc} 1 & 0 & 0 \\\\\n", + " \\frac{3}{2} & 1 & 0 \\\\\n", + " \\frac{1}{2} & \\frac{1}{9} & 1\n", + " \\end{array} \\right]\n", + "& \\mbox{ and } &\n", + " U = \\left[ \\begin{array}{ccc} 2 & 8 & -5 \\\\\n", + " 0 & -18 & \\frac{23}{2} \\\\\n", + " 0 & 0 & \\frac{2}{9}\n", + " \\end{array} \\right]\n", + "\\end{array}$$\n", + "\n", + "> Use Python to confirm that $LU$ = $A$.\n", + "\n", + "The reason decomposition is introduced here is not because of Gaussian\n", + "elimination $-$ one seldom explicitly computes the $LU$ decomposition of\n", + "a matrix. However, the idea of factoring a matrix is important for other\n", + "direct methods of solving linear systems (of which Gaussian elimination\n", + "is only one) and for methods for finding eigenvalues ([Characteristic Equation](#Characteristic-Equation)).\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Round-off Error\n", + "\n", + "[lab3:sec:round-off-error]: <#Round-off-Error> \"Round-off-Error\"\n", + "\n", + "When a number is represented in its floating point form, i.e. an\n", + "approximation of the number, the resulting error is the *round-off\n", + "error*. The floating-point representation of numbers  and the consequent\n", + "effects of round-off error were discussed already in Lab \\#2.\n", + "\n", + "When round-off errors are present in the matrix $A$ or the right hand\n", + "side $b$, the linear system $Ax = b$ may or may not give a solution that\n", + "is close to the real answer. When a matrix $A$ “magnifies” the effects\n", + "of round-off errors in this way, we say that $A$ is an ill-conditioned\n", + "matrix.\n", + "\n", + "#### Example Two\n", + "[lab3:eg:round]: <#Example-Two> \"Example Two\"\n", + "\n", + "> Let’s see an example:\n", + "\n", + "> Suppose\n", + "\n", + "> $$A = \\left[ \\begin{array}{cc} 1 & 1 \\\\ 1 & 1.0001 \n", + " \\end{array} \\right]$$\n", + "\n", + "> and consider the system:\n", + "\n", + ">
\n", + "(Ill-conditioned version one):\n", + " $$\\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 1 & 1.0001 \\end{array} \n", + "&\n", + " \\left| \\begin{array}{c} 2 \\\\ 2 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "$$\n", + "
\n", + "\n", + "> The condition number, $K$, of a matrix, defined in [Condition Number](#Condition-Number), is a measure of how well-conditioned a matrix is. If\n", + "$K$ is large, then the matrix is ill-conditioned, and Gaussian\n", + "elimination will magnify the round-off errors. The condition number of\n", + "$A$ is 40002. You can use Python to check this number.\n", + "\n", + "> The solution to this is $x_1$ = 2 and $x_2$ = 0. However, if the system\n", + "is altered a little as follows:\n", + "\n", + ">
\n", + "(Ill-conditioned version two):\n", + " $$\\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 1 & 1.0001 \\end{array}\n", + "&\n", + " \\left| \\begin{array}{c} 2 \\\\ 2.0001 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "$$\n", + "
\n", + "\n", + "> Then, the solution becomes $x_1$ = 1 and $x_2$ = 1. A change in the\n", + "fifth significant digit was amplified to the point where the solution is\n", + "not even accurate to the first significant digit. $A$ is an\n", + "ill-conditioned matrix. You can set up the systems [Ill-conditioned version one](#lab3:eq:illbefore)\n", + "and [Ill-conditioned version two](#lab3:eq:illafter) in Python, and check the answers yourself." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Three\n", + "\n", + "[lab3:eg:inacc]: <#Example-Three> \"Example Three\"\n", + "\n", + "> Use Python to try the following example. First solve the\n", + "system $A^{\\prime}x = b$; then solve $A^{\\prime}x = b2$. Find the\n", + "condition number of $A^{\\prime}$.\n", + "\n", + ">$$\\begin{array}{ccccc}\n", + "A^{\\prime} = \\left[ \\begin{array}{cc} 0.0001 & 1 \\\\ 1 & 1 \n", + " \\end{array} \\right]\n", + "& , & \n", + "b = \\left[ \\begin{array}{c} 1 \\\\ 2 \\end{array} \\right]\n", + "& \\mbox{and} &\n", + "b2 = \\left[ \\begin{array}{c} 1 \\\\ 2.0001 \\end{array} \\right] .\n", + "\\end{array}$$\n", + "\n", + "> You will find that the solution for $A^{\\prime}x = b$ is $x_1$ = 1.0001\n", + "and $x_2$ = 0.9999, and the solution for $A^{\\prime}x = b2$ is $x_1$ =\n", + "1.0002 and $x_2$ = 0.9999 . So a change in $b$ did not result in a large\n", + "change in the solution. Therefore, $A^{\\prime}$ is a well-conditioned\n", + "matrix. In fact, the condition number is approximately 2.6.\n", + "\n", + "> Nevertheless, even a well conditioned system like $A^{\\prime}x =\n", + "b$ leads to inaccuracy if the wrong solution method is used, that is, an\n", + "algorithm which is sensitive to round-off error. If you use Gaussian\n", + "elimination to solve this system, you might be misled that $A^{\\prime}$\n", + "is ill-conditioned. Using Gaussian elimination to solve $A^{\\prime}x=b$:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& \\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 0.0001 & 1 \\\\ 1 & 1 \\end{array} \n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 1 \\\\ 2 \\end{array} \\right]\n", + "\\end{array} \\right. \\\\ \\\\\n", + "\\begin{array}{c} 10,000E_{1j} \\\\ \\rightarrow \\end{array} &\n", + "\\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 10,000 \\\\ 1 & 1 \\end{array} \n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 10,000 \\\\ 2 \\end{array} \\right]\n", + "\\end{array} \\right. \\\\ \\\\\n", + "\\begin{array}{c} E_{2j}-E_{1j} \\\\ \\rightarrow \\end{array} &\n", + "\\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 10,000 \\\\ 0 & -9,999 \\end{array} \n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 10,000 \\\\ -9,998 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> At this point, if you continue to solve the system as is, you will get\n", + "the expected answers. You can check this with Python. However, if you\n", + "make changes to the matrix here by rounding -9,999 and -9,998 to -10,000, \n", + "the final answers will be different:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& \\left[\n", + "\\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 10,000 \\\\ 0 & -10,000 \\end{array}\n", + "&\n", + " \\left|\n", + " \\begin{array}{c} 10,000 \\\\ -10,000 \\end{array} \\right]\n", + "\\end{array} \\right. 
\n", + "\\end{array}$$\n", + "\n", + "> The result is $x_1$ = 0 and $x_2$ = 1, which is quite different from the\n", + "correct answers. So Gaussian elimination might mislead you to think that\n", + "a matrix is ill-conditioned by giving an inaccurate solution to the\n", + "system. In fact, the problem is that Gaussian elimination on its own is\n", + "a method that is unstable in the presence of round-off error, even for\n", + "well-conditioned matrices. Can this be fixed?\n", + "\n", + "> You can try the example with Python.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Partial Pivoting\n", + "\n", + "There are a number of ways to avoid inaccuracy, one of which is applying\n", + "partial pivoting to the Gaussian elimination.\n", + "\n", + "\n", + "Consider the example from the previous section. In order to avoid\n", + "multiplying by 10,000, another pivot is desired in place of 0.0001. The\n", + "goal is to examine all the entries in the first column, find the entry\n", + "that has the largest value, and exchange the first row with the row that\n", + "contains this element. So this entry becomes the pivot. This is partial\n", + "pivoting. Keep in mind that switching rows is an elementary operation\n", + "and has no effect on the solution.\n", + "\n", + "\n", + "In the original Gaussian elimination algorithm, row exchange is done\n", + "only if the pivot is zero. In partial pivoting, row exchange is done so\n", + "that the largest value in a certain column is the pivot. This helps to\n", + "reduce the amplification of round-off error." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Four\n", + "\n", + "> In the matrix $A^{\\prime}$ from [Example Two](#Example-Two), 0.0001 from\n", + "column one is the first pivot. Looking at this column, the entry, 1, in\n", + "the second row is the only other choice in this column. Obviously, 1 is\n", + "greater than 0.0001. So the two rows are exchanged.\n", + "\n", + "> $$\\begin{array}{cl}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 0.0001 & 1 \\\\ 1 & 1 \\end{array} \n", + " & \\left|\n", + " \\begin{array}{c} 1 \\\\ 2 \\end{array} \\right]\n", + " \\end{array} \\right. \\\\ \\\\\n", + " \\begin{array}{c} E_{1j} \\leftrightarrow E_{2j} \\\\ \\rightarrow\n", + " \\end{array}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 0.0001 & 1 \\end{array} \n", + " & \\left|\n", + " \\begin{array}{c} 2 \\\\ 1 \\end{array} \\right] \n", + " \\end{array} \\right. \\\\ \\\\\n", + " \\begin{array}{c} E_{2j}-0.0001E_{1j} \\\\ \\rightarrow \\end{array} \n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 0 & 0.9999 \\end{array} \n", + " & \\left|\n", + " \\begin{array}{c} 2 \\\\ 0.9998 \\end{array} \\right]\n", + " \\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> The same entries are rounded off:\n", + "\n", + "> $$\\begin{array}{cl}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 1 \\\\ 0 & 1 \\end{array}\n", + " &\n", + " \\left|\n", + " \\begin{array}{c} 2 \\\\ 1 \\end{array} \\right]\n", + " \\end{array} \\right. \\\\ \\\\\n", + " \\begin{array}{c} E_{1j}-E_{2j} \\\\ \\rightarrow \\end{array}\n", + " & \\left[ \\begin{array}{cc}\n", + " \\begin{array}{cc} 1 & 0 \\\\ 0 & 1 \\end{array}\n", + " &\n", + " \\left|\n", + " \\begin{array}{c} 1 \\\\ 1 \\end{array} \\right]\n", + " \\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> So the solution is $x_1$ = 1 and $x_2$ = 1, and this is a close\n", + "approximation to the original solution, $x_1$ = 1.0001 and $x_2$ =\n", + "0.9999.\n", + "\n", + "\n", + "> You can try the example with Python.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n", + "\n", + "\n", + "\n", + "Note: This section has described row pivoting. The same process can be\n", + "applied to columns, with the resulting procedure being called column\n", + "pivoting." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Full Pivoting\n", + "\n", + "Another way to get around inaccuracy ([Example Three](#Example-Three)) is to\n", + "use Gaussian elimination with full pivoting. Sometimes, even partial\n", + "pivoting can lead to problems. With full pivoting, in addition to row\n", + "exchange, columns will be exchanged as well. The purpose is to use the\n", + "largest entries in the whole matrix as the pivots.\n", + "\n", + "#### Example Five\n", + "\n", + "> Given the following:\n", + "\n", + "> $$ \\begin{array}{cccc}\n", + "& A^{''} = \\left[ \\begin{array}{ccc} 0.0001 & 0.0001 & 0.5 \\\\\n", + " 0.5 & 1 & 1 \\\\\n", + " 0.0001 & 1 & 0.0001 \n", + " \\end{array} \n", + " \\right] \n", + "& \\ \\ \\ &\n", + "b^{'} = \\left[ \\begin{array}{c} 1 \\\\ 0 \\\\ 1 \n", + " \\end{array} \n", + " \\right] \\\\ \\\\\n", + "\\rightarrow &\n", + "\\left[ \\begin{array}{cc} \n", + " \\begin{array}{ccc} 0.0001 & 0.0001 & 0.5 \\\\\n", + " 0.5 & 1 & 1 \\\\\n", + " 0.0001 & 1 & 0.0001\n", + " \\end{array}\n", + " & \\left| \\begin{array}{c} 1 \\\\ 0 \\\\ 1 \n", + " \\end{array} \n", + " \\right]\n", + " \\end{array} \n", + "\\right. & & \n", + "\\end{array}$$\n", + "\n", + "> Use Python to find the condition number of $A^{''}$ and the solution to\n", + "this system.\n", + "\n", + "> Looking at the system, if no rows are exchanged, then taking 0.0001 as\n", + "the pivot will magnify any errors made in the elements in column 1 by a\n", + "factor of 10,000. With partial pivoting, the first two rows can be\n", + "exchanged (as below):\n", + "\n", + "> $$\\begin{array}{cl}\n", + "\\begin{array}{c} \\\\ E_{1j} \\leftrightarrow E_{2j} \\\\ \\rightarrow\n", + "\\end{array} \n", + "& \\begin{array}{ccccc}\n", + " & x_1 & x_2 & x_3 & \\\\\n", + " \\left[ \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right. \n", + " & \\begin{array}{c} 0.5 \\\\ 0.0001 \\\\ 0.0001 \\end{array} \n", + " & \\begin{array}{c} 1 \\\\ 0.0001 \\\\ 1 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 0.5 \\\\ 0.0001 \\end{array} \n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + " \\end{array}\n", + "\\end{array} $$ \n", + "\n", + "\n", + "> and the magnification by 10,000 is avoided. Now the matrix will be\n", + "expanded by a factor of 2. However, if the entry 1 is used as the pivot,\n", + "then the matrix does not need to be expanded by 2 either. The only way\n", + "to put 1 in the position of the first pivot is to perform a column\n", + "exchange between columns one and two, or between columns one and three.\n", + "This is full pivoting.\n", + "\n", + "> Note that when columns are exchanged, the variables represented by the\n", + "columns are switched as well, i.e. when columns one and two are\n", + "exchanged, the new column one represents $x_2$ and the new column two\n", + "represents $x_1$. 
So, we must keep track of the columns when performing\n", + "column pivoting.\n", + "\n", + ">So the columns one and two are exchanged, and the matrix becomes:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "\\begin{array}{c} \\\\ E_{i1} \\leftrightarrow E_{i2} \\\\ \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_1 & x_3 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0.0001 \\\\ 1 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.0001 \\\\ 0.0001 \\end{array} \n", + " & \\begin{array}{c} 1 \\\\ 0.5 \\\\ 0.0001 \\end{array} \n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\\n", + "\\begin{array}{c} \\\\ E_{2j}-0.0001E_{1j} \\\\ E_{3j}-E_{1j} \\\\ \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_1 & x_3 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.00005 \\\\ -0.4999 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 0.4999 \\\\ -0.9999 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array}\n", + "\\end{array}$$\n", + "\n", + "> If we assume rounding is performed, then the entries are rounded off:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_1 & x_3 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.00005 \\\\ -0.5 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 0.5 \\\\ -1 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\\n", + "\\begin{array}{c} \\\\ -E_{3j} \\\\\n", + " E_{2j} \\leftrightarrow E_{3j} \\\\ \n", + " E_{i2} \\leftrightarrow E_{i3} \\\\ \n", + " \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 1 \\\\ 1 \\\\ 0.5 \\end{array}\n", + " & \\begin{array}{c} 0.5 \\\\ 0.5 \\\\ 0.00005 \\end{array}\n", + " & \\left| \\begin{array}{r} 0 \\\\ -1 \\\\ 1 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\\n", + "\\begin{array}{c} E_{1j}-E_{2j} \\\\ \n", + " E_{3j}-0.5 E_{2j} \\\\\n", + " \\rightarrow\n", + "\\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 0.5 \\\\ -0.24995 \\end{array}\n", + " & \\left| \\begin{array}{r} 1 \\\\ -1 \\\\ 1.5 \\end{array} \\right]\n", + "\\end{array}\n", + "\\end{array}$$\n", + "\n", + "> Rounding off the matrix again:\n", + "\n", + "> $$\\begin{array}{cl}\n", + "\\begin{array}{c} \\rightarrow \\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. 
\\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 0.5 \\\\ -0.25 \\end{array}\n", + " & \\left| \\begin{array}{r} 1 \\\\ -1 \\\\ 1.5 \\end{array} \\right]\n", + "\\end{array} \\\\ \\\\ \n", + "\\begin{array}{c} E_{2j}-2E_{3j} \\\\ \n", + " 4E_{3j} \\\\\n", + " \\rightarrow \\end{array}\n", + "& \\begin{array}{ccccc}\n", + " & x_2 & x_3 & x_1 & \\\\\n", + " \\left. \\begin{array}{c} \\\\ \\\\ \\\\ \\end{array} \\right[\n", + " & \\begin{array}{c} 1 \\\\ 0 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 1 \\\\ 0 \\end{array}\n", + " & \\begin{array}{c} 0 \\\\ 0 \\\\ 1 \\end{array}\n", + " & \\left| \\begin{array}{r} 1 \\\\ 2 \\\\ -6 \\end{array} \\right]\n", + "\\end{array}\n", + "\\end{array}$$\n", + "\n", + "> So reading from the matrix, $x_1$ = -6, $x_2$ = 1 and $x_3$ = 2. Compare\n", + "this with the answer you get with Python, which is $x_1\n", + "\\approx$ -6.0028, $x_2 \\approx$ 1.0004 and $x_3 \\approx$ 2.0010 .\n", + "\n", + "> Using full pivoting with Gaussian elimination, expansion of the error by\n", + "large factors is avoided. In addition, the approximated solution, using\n", + "rounding (which is analogous to the use of floating point\n", + "approximations), is close to the correct answer.\n", + "\n", + "> You can try the example with Python.\n", + "\n", + "
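A minimal sketch of the Python check for Example Five (the array names are illustrative only):

```python
import numpy as np

A_pp = np.array([[0.0001, 0.0001, 0.5],
                 [0.5,    1.0,    1.0],
                 [0.0001, 1.0,    0.0001]])
b_p = np.array([1.0, 0.0, 1.0])

print("condition number of A'':", np.linalg.cond(A_pp))
print("solution of A'' x = b' :", np.linalg.solve(A_pp, b_p))
# the solution should be close to x1 = -6.0028, x2 = 1.0004, x3 = 2.0010 quoted above
```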
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Summary\n", + "\n", + "In a system $Ax = b$, if round-off errors in $A$ or $b$ affect the\n", + "system such that it may not give a solution that is close to the real\n", + "answer, then $A$ is ill-conditioned, and it has a very large condition\n", + "number.\n", + "\n", + "Sometimes, due to a poor algorithm, such as Gaussian elimination without\n", + "pivoting, a matrix may appear to be ill-conditioned, even though it is\n", + "not. By applying partial pivoting, this problem is reduced, but partial\n", + "pivoting will not always eliminate the effects of round-off error. An\n", + "even better way is to apply full pivoting. Of course, the drawback of\n", + "this method is that the computation is more expensive than plain\n", + "Gaussian elimination.\n", + "\n", + "An important point to remember is that partial and full pivoting\n", + "minimize the effects of round-off error for well-conditioned matrices.\n", + "If a matrix is ill-conditioned, these methods will not provide a\n", + "solution that approximates the real answer. As an exercise, you can try\n", + "to apply full pivoting to the ill-conditioned matrix $A$ seen at the\n", + "beginning of this section (Example Two](#Example-Two)). You will find that\n", + "the solution is still inaccurate." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Matrix Inversion\n", + "\n", + "Given a square matrix $A$. If there is a matrix that will cancel $A$,\n", + "then it is the *inverse* of $A$. In other words, the matrix, multiplied\n", + "by its inverse, will give the identity matrix $I$.\n", + "\n", + "Try to find the inverse for the following matrices:\n", + "\n", + "1. $$A = \\left[ \\begin{array}{rrr} 1 & -2 & 1 \\\\\n", + " 3 & 1 & -1 \\\\\n", + " -1 & 9 & -5 \\end{array} \\right]$$\n", + "\n", + "2. $$B = \\left[ \\begin{array}{rrr} 5 & -2 & 4 \\\\\n", + " -3 & 1 & -5 \\\\\n", + " 2 & -1 & 3 \\end{array} \\right]$$\n", + "\n", + "The solutions to these exercises are available [here](lab3_files/quizzes/inverse/inverse.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Determinant\n", + "\n", + "Every square matrix $A$ has a scalar associated with it. This number is\n", + "the determinant of the matrix, represented as $\\det(A)$. Its absolute\n", + "value is the volume of the parallelogram that can be generated from the\n", + "rows of $A$.\n", + "\n", + "A few special properties to remember about determinants:\n", + "\n", + "1. $A$ must be a square matrix.\n", + "\n", + "2. If $A$ is singular, $\\det(A) =0$, i.e. $A$ does not have an inverse.\n", + "\n", + "3. The determinant of a $2 \\times 2$ matrix is just the difference\n", + " between the products of the diagonals, i.e.\n", + "\n", + " $$\\begin{array}{ccc}\n", + " \\left[ \\begin{array}{cc} a & b \\\\ c & d \\end{array} \\right] &\n", + " = &\n", + " \\begin{array}{ccc} ad & - & bc \\end{array}\n", + " \\end{array}$$\n", + "\n", + "4. For any diagonal, upper triangular or lower triangular matrix $A$,\n", + " $\\det(A)$ is the product of all the entries on the diagonal,\n", + " \n", + "#### Example Six\n", + "\n", + "> $$\\begin{array}{cl}\n", + " & \\det \\left[\n", + " \\begin{array}{rrr}\n", + " 2 & 5 & -8 \\\\\n", + " 0 & 1 & 7 \\\\\n", + " 0 & 0 & -4 \n", + " \\end{array} \\right] \\\\ \\\\\n", + " = & 2 \\times 1 \\times -4 \\\\\n", + " = & -8\n", + " \\end{array}$$ \n", + " \n", + "> Graphically, the parallelogram looks as follows:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The basic procedure in finding a determinant of a matrix larger than 2\n", + "$\\times$ 2 is to calculate the product of the non-zero entries in each\n", + "of the *permutations* of the matrix, and then add them together. A\n", + "permutation of a matrix $A$ is a matrix of the same size with one\n", + "element from each row and column of $A$. The sign of each permutation is\n", + "$+$ or $-$ depending on whether the permutation is odd or even. 
This is\n", + "illustrated in the following example …" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Example Seven\n", + "\n", + "> $$I = \\left[ \\begin{array}{ccc} 1 & 2 & 3 \\\\\n", + " 4 & 5 & 6 \\\\\n", + " 7 & 8 & 9 \\end{array} \\right]$$\n", + "\n", + "> will have the following permutations:\n", + "\n", + "> $$\\begin{array}{cccccc}\n", + "+\\left[ \\begin{array}{ccc} 1 & & \\\\\n", + " & 5 & \\\\\n", + " & & 9 \\end{array} \\right]\n", + "& , &\n", + "+\\left[ \\begin{array}{ccc} & 2 & \\\\\n", + " & & 6 \\\\\n", + " 7 & & \\end{array} \\right]\n", + "& , &\n", + "+\\left[ \\begin{array}{ccc} & & 3 \\\\\n", + " 4 & & \\\\\n", + " & 8 & \\end{array} \\right]\n", + "& , \\\\ \\\\\n", + "-\\left[ \\begin{array}{ccc} 1 & & \\\\\n", + " & & 6 \\\\\n", + " & 8 & \\end{array} \\right]\n", + "& , &\n", + "-\\left[ \\begin{array}{ccc} & 2 & \\\\\n", + " 4 & & \\\\\n", + " & & 9 \\end{array} \\right]\n", + "& , &\n", + "-\\left[ \\begin{array}{ccc} & & 3 \\\\\n", + " & 5 & \\\\\n", + " 7 & & \\end{array} \\right]\n", + "& .\n", + "\\end{array}$$\n", + "\n", + "> The determinant of the above matrix is then given by \n", + "$$\\begin{array}{ccl}\n", + " det(A) & = & +1\\cdot 5\\cdot 9 + 2 \\cdot 6 \\cdot 7 + 3 \\cdot 4 \\cdot 8 - 1\n", + "\\cdot 6 \\cdot 8 - 2 \\cdot 4 \\cdot 9 - 3 \\cdot 5 \\cdot 7 \\\\\n", + "& = & 0\\end{array}$$\n", + "\n", + "For each of the following matrices, determine whether or not it has an\n", + "inverse:\n", + "\n", + "1. $$A = \\left[ \\begin{array}{rrr} 3 & -2 & 1 \\\\\n", + " 1 & 5 & -1 \\\\\n", + " -1 & 0 & 0 \\end{array} \\right]$$\n", + "\n", + "2. $$B = \\left[ \\begin{array}{rrr} 4 & -6 & 1 \\\\\n", + " 1 & -3 & 1 \\\\\n", + " 2 & 0 & -1 \\end{array} \\right]$$\n", + "\n", + "3. Try to solve this by yourself first, and use Python to check your\n", + " answer:\n", + "\n", + " $$C = \\left[ \\begin{array}{rrrr} 4 & -2 & -7 & 6 \\\\\n", + " -3 & 0 & 1 & 0 \\\\\n", + " -1 & -1 & 5 & -1 \\\\\n", + " 0 & 1 & -5 & 3\n", + " \\end{array} \\right]$$\n", + "\n", + "The solutions to these exercises are available [here](lab3_files/quizzes/det/det.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
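A short sketch checking Examples Six and Seven with `np.linalg.det` (the variable names are arbitrary); the same call lets you verify your answers for the three exercise matrices above:

```python
import numpy as np

# Example Six: for a triangular matrix the determinant is the product of the diagonal
T = np.array([[2, 5, -8],
              [0, 1,  7],
              [0, 0, -4]], dtype=float)
print("det =", np.linalg.det(T))   # expect -8

# Example Seven: the permutation expansion gave a determinant of zero
M = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]], dtype=float)
print("det =", np.linalg.det(M))   # expect 0, up to round-off
```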
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Computational cost of Gaussian elimination\n", + "\n", + "[lab3:sec:cost]: <#Computational-cost-of-Gaussian-elimination> \"Computational Cost\"\n", + "\n", + "Although Gaussian elimination is a basic and relatively simple technique\n", + "to find the solution of a linear system, it is a costly algorithm. Here\n", + "is an operation count of this method:\n", + "\n", + "For a $n \\times n$ matrix, there are two kinds of operations to\n", + "consider:\n", + "\n", + "1. division ( *div*) - to find the multiplier from a\n", + " chosen pivot\n", + "\n", + "2. multiplication-subtraction ( *mult/sub* ) - to\n", + " calculate new entries for the matrix\n", + "\n", + "Note that an addition or subtraction operation has negligible cost in\n", + "relation to a multiplication or division operation. So the subtraction\n", + "in this case can be treated as one with the multiplication operation.\n", + "The first pivot is selected from the first row in the matrix. For each\n", + "of the remaining rows, one div and $(n-1)$ mult/sub operations are used\n", + "to find the new entries. So there are $n$ operations performed on each\n", + "row. With $(n-1)$ rows, there are a total of $n(n-1) = n^{2}-n$\n", + "operations associated with this pivot.\n", + "\n", + "Since the subtraction operation has negligible cost in relation to the\n", + "multiplication operation, there are $(n-1)$ operations instead of\n", + "$2(n-1)$ operations.\n", + "For the second pivot, which is selected from the second row of the\n", + "matrix, similar analysis is applied. With the remaining $(n-1)\n", + "\\times (n-1)$ matrix, each row has one div and $(n-2)$ mult/sub\n", + "operations. For the whole process, there are a total of $(n-1)(n-2) =\n", + "(n-1)^{2} - (n-1)$ operations.\\\n", + "\n", + "For the rest of the pivots, the number of operations for a remaining\n", + "$k \\times k$ matrix is $k^{2} - k$.\\\n", + "\n", + "The following is obtained when all the operations are added up:\n", + "\n", + "$$\\begin{array}{l} \n", + "(1^{2}+\\ldots +n^{2}) - (1+\\ldots +n) \\\\ \\\\\n", + "= \\frac{n(n+1)(2n+1)}{6} - \\frac{n(n+1)}{2} \\\\ \\\\\n", + "= \\frac{n^{3}-n}{3} \\\\ \\\\\n", + "\\approx O(n^{3}) \n", + "\\end{array}$$\n", + "\n", + "As one can see, the Gaussian elimination is an $O(n^{3})$ algorithm. For\n", + "large matrices, this can be prohibitively expensive. There are other\n", + "methods which are more efficient, e.g. see [Iterative Methods](#Iterative-Methods). " + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "#### Problem One\n", + "\n", + "[lab3:sec2:carbon]:<#Problem-One>\n", + "\n", + "Consider a very simple three box model of the movement of a pollutant in\n", + "the atmosphere, fresh-water and ocean. The mass of the atmosphere is MA\n", + "(5600 x 10$^{12}$ tonnes), the mass of the fresh-water is MF (360 x\n", + "10$^{12}$tonnes) and the mass of the upper layers of the ocean is MO\n", + "(50,000 x 10$^{12}$ tonnes). The amount of pollutant in the atmosphere\n", + "is A, the amount in the fresh water is F and the amount in the ocean is\n", + "O. 
So A, F, and O have units of tonnes.\n", + "\n", + "The pollutant is going directly into the atmosphere at a rate P1 = 1000\n", + "tonnes/year and into the fresh-water system at a rate P2 = 2000\n", + "tonnes/year. The pollutant diffuses between the atmosphere and ocean at\n", + "a rate depending linearly on the difference in concentration with a\n", + "diffusion constant L1 = 200 tonnes/year. The diffusion between the\n", + "fresh-water system and the atmosphere is faster as the fresh water is\n", + "shallower, L2 = 500 tonnes/year. The fresh-water system empties into the\n", + "ocean at the rate of Q = 36 x 10$^{12}$ tonnes/year. Lastly the\n", + "pollutant decays (like radioactivity) at a rate L3 = 0.05 /year.\n", + "\n", + "See the graphical presentation of the cycle described above in\n", + "Figure [Box Model](#Figure-Box-Model) Schematic for Problem 1.\n", + "Set up a notebook to answer this question. When you have finished, you can print it to pdf.\n", + "\n", + "- a\\) Consider the steady state. There is no change in A, O, or F. Write\n", + " down the three linear governing equations in a text cell. Write the equations as an\n", + " augmented matrix in a text cell. Then use a computational cell to find the solution.\n", + "\n", + "- b\\) Show mathematically that there is no solution to this problem with L3\n", + " = 0. Explain in a text file why, physically, is there no solution.\n", + "\n", + "- c\\) Show mathematically that there is an infinite number of solutions if\n", + " L3 = 0 and P1 = P2 = 0. Explain in a text file why this is true from a physical argument.\n", + "\n", + "- d\\) For part c) above, explain in a text cell what needs to be specified in order to determine a\n", + " single physical solution. Explain in a text cell how would you put this in the matrix equation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Box Model: Schematic for Problem One.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Eigenvalue Problems\n", + "\n", + "This section is a review of eigenvalues and eigenvectors.\n", + "\n", + "### Characteristic Equation\n", + "[lab3:sec:eigval]: <#Characteristic-Equation>\n", + "\n", + "The basic equation for eigenvalue problems is the characteristic\n", + "equation, which is:\n", + "\n", + "$$\\det( A - \\lambda I ) = 0$$\n", + "\n", + "where $A$ is a square matrix, I is an identity the same size as $A$, and\n", + "$\\lambda$ is an eigenvalue of the matrix $A$.\n", + "\n", + "In order for a number to be an eigenvalue of a matrix, it must satisfy\n", + "the characteristic equation, for example:\n", + "\n", + "#### Example Eight\n", + "\n", + "> Given\n", + "\n", + "> $$A = \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right]$$\n", + "\n", + "> To find the eigenvalues of $A$, you need to solve the characteristic\n", + "equation for all possible $\\lambda$.\n", + "\n", + "> $$\\begin{array}{ccl}\n", + "0 & = & \\det (A - \\lambda I) \\\\\n", + "& = & \\begin{array}{cccc}\n", + " \\det & \\left( \n", + " \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right] \\right. &\n", + " - &\n", + " \\lambda \\left. \\left[ \\begin{array}{rr} 1 & 0 \\\\ 0 & 1 \\end{array} \\right] \\right)\n", + " \\end{array} \\\\ \\\\\n", + "& = & \\begin{array}{cc}\n", + " \\det & \n", + " \\left[ \\begin{array}{cc} 3-\\lambda & -2 \\\\ -4 & 5-\\lambda\n", + " \\end{array} \\right]\n", + " \\end{array} \\\\ \\\\\n", + "& = & \\begin{array}{ccc} (3-\\lambda)(5-\\lambda) & - & (-2)(-4) \n", + " \\end{array} \\\\ \\\\\n", + "& = & (\\lambda - 1)(\\lambda - 7) \\\\ \\\\\n", + "\\end{array}$$\n", + "\n", + "> So, $\\lambda = 1 \\mbox{ or } 7$, i.e. the eigenvalues of the matrix $A$\n", + "are 1 and 7.\n", + "\n", + "> You can use Python to check this answer.\n", + "\n", + "Find the eigenvalues of the following matrix:\n", + "\n", + "$$B = \\left[\n", + " \\begin{array}{ccc} 3 & 2 & 4 \\\\ 2 & 0 & 2 \\\\ 4 & 2 & 3\n", + " \\end{array} \\right]$$\n", + "\n", + "The solution to this problem is available [here](lab3_files/quizzes/char/char.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Condition Number\n", + "[lab3.sec.cond]: \n", + "\n", + "The eigenvalues of a matrix $A$ can be used to calculate an\n", + "approximation to the condition number $K$ of the matrix, i.e.\n", + "\n", + "$$K \\approx \\left| \\frac{\\lambda_{max}}{\\lambda_{min}} \\right|$$\n", + "\n", + "where $\\lambda_{max}$ and $\\lambda_{min}$ are the maximum and minimum\n", + "eigenvalues of $A$. When $K$ is large, i.e. the $\\lambda_{max}$ and\n", + "$\\lambda_{min}$ are far apart, then $A$ is ill-conditioned.\n", + "\n", + "The mathematical definition of $K$ is\n", + "\n", + "$$K = \\|A\\|\\|A^{-1}\\|$$\n", + "\n", + "where $\\|\\cdot\\|$ represents the norm of a matrix.\n", + "\n", + "There are a few norms which can be chosen for the formula. The default one used\n", + "in Python for finding $K$ is the 2-norm of the matrix. To see how to\n", + "compute the norm of a matrix, see a linear algebra text. Nevertheless,\n", + "the main concern here is the formula, and the fact that this can be very\n", + "expensive to compute. Actually, the computing of $A^{-1}$ is the costly\n", + "operation.\n", + "\n", + "Note: In Python, the results from the function *cond*($A$) can have\n", + "round-off errors.\n", + "\n", + "For the matrices in this section (A from Example 8 and B just below it) for which you have \n", + "found the\n", + "eigenvalues, use the built-in Python function *np.linalg.cond*($A$) to find $K$,\n", + "and compare this result with the $K$ approximated from the eigenvalues.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Eigenvectors\n", + "\n", + "Another way to look at the characteristic equation is using vectors\n", + "instead of determinant. For a number to be an eigenvalue of a matrix, it\n", + "must satisfy this equation:\n", + "\n", + "$$( A - \\lambda I ) x = 0$$\n", + "\n", + "where $A$ is a $n \\times n$ square matrix, $I$ is an identity matrix the\n", + "same size as $A$, $\\lambda$ is an eigenvalue of $A$, and $x$ is a\n", + "non-zero vector associated with the particular eigenvalue that will make\n", + "this equation true. This vector is the eigenvector. The eigenvector is\n", + "not necessarily unique for an eigenvalue. This will be further discussed\n", + "below after the example.\n", + "\n", + "The above equation can be rewritten as:\n", + "\n", + "$$A x = \\lambda x$$\n", + "\n", + "For each eigenvalue of $A$, there is a corresponding eigenvector. Below\n", + "is an example.\n", + "\n", + "#### Example Nine\n", + "\n", + "> Following the example from the previous section:\n", + "\n", + "> $$A = \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right]$$\n", + "\n", + "> The eigenvalues, $\\lambda$, for this matrix are 1 and 7. To find the\n", + "eigenvectors for the eigenvalues, you need to solve the equation:\n", + "\n", + "> $$( A - \\lambda I ) x = 0.$$ \n", + "\n", + "> This is just a linear system $A^{\\prime}x = b$, where\n", + "$A^{\\prime} = ( A - \\lambda I )$, $b = 0$. 
To find the eigenvectors, you\n", + "need to find the solution to this augmented matrix for each $\\lambda$\n", + "respectively,\n", + "\n", + "> $$\\begin{array}{cl}\n", + "& ( A - \\lambda I ) x = 0 \\ \\ \\ \\ \\ \\ \\ \\ {\\rm where} \\ \\ \\ \\ \\ \\ \\lambda = 1 \\\\\n", + "\\; & \\; \\\\\n", + "\\rightarrow & \n", + "\\left( \\begin{array}{ccc} \n", + " \\left[ \\begin{array}{rr} 3 & -2 \\\\ -4 & 5 \\end{array} \\right] \n", + " & - &\n", + " 1 \\left[ \\begin{array}{cc} 1 & 0 \\\\ 0 & 1 \\end{array} \\right]\n", + "\\end{array} \\right) x = 0 \\\\ \\\\\n", + "\\rightarrow &\n", + "\\left[ \\begin{array}{cc}\n", + " \\begin{array}{rr} 2 & -2 \\\\ -4 & 4 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 0 \\end{array} \\right]\n", + "\\end{array} \\right. \\\\ \\\\ \n", + "\\rightarrow &\n", + "\\left[ \\begin{array}{cc}\n", + " \\begin{array}{rr} 1 & -1 \\\\ 0 & 0 \\end{array}\n", + " & \\left| \\begin{array}{c} 0 \\\\ 0 \\end{array} \\right]\n", + "\\end{array} \\right.\n", + "\\end{array}$$\n", + "\n", + "> Reading from the matrix,\n", + "\n", + "> $$\\begin{array}{ccccc} x_1 & - & x_2 & = & 0 \\\\\n", + " & & 0 & = & 0 \\end{array}$$\n", + "\n", + "> As mentioned before, the eigenvector is not unique for a given\n", + "eigenvalue. As seen here, the solution to the matrix is a description of\n", + "the direction of the vectors that will satisfy $Ax = \\lambda x$. Letting\n", + "$x_1 = 1$, then $x_2 = 1$. So the vector (1, 1) is an eigenvector for\n", + "the matrix $A$ when $\\lambda = 1$. (So is (-1,-1), (2, 2), etc)\\\n", + "\n", + "> In the same way for $\\lambda = 7$, the solution is\n", + "\n", + "> $$\\begin{array}{ccccc} 2 x_1 & + & x_2 & = & 0 \\\\\n", + " & & 0 & = & 0 \\end{array}$$\n", + "\n", + "> So an eigenvector here is x = (1, -2)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# Using Python:\n", + "\n", + "A = np.array([[3, -2], [-4, 5]])\n", + "lamb, x = np.linalg.eig(A)\n", + "print(lamb)\n", + "print (x)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> Matrix $x$ is the same size a $A$ and vector $lamb$ is the size of one dimension of $A$. Each column of\n", + "$x$ is a unit eigenvector of $A$, and $lamb$ values\n", + "are the eigenvalues of $A$. Reading from the result, for $\\lambda$ = 1,\n", + "the corresponding unit eigenvector is (-0.70711, -0.70711). The answer\n", + "from working out the example by hand is (1, 1), which is a multiple of\n", + "the unit eigenvector from Python.\n", + "\n", + "> (The unit eigenvector is found by dividing the eigenvector by its\n", + "magnitude. In this case, $\\mid$(1,1)$\\mid$ = $\\sqrt{1^2 +\n", + " 1^2}$ = $\\sqrt{2}$, and so the unit eigenvector is\n", + "($\\frac{1}{\\sqrt{2}}, \\frac{1}{\\sqrt{2}}$) ).\n", + "\n", + "> Remember that the solution for an eigenvector is not the unique answer;\n", + "it only represents a *direction* for an eigenvector corresponding to a\n", + "given eigenvalue.\n", + "\n", + "What are the eigenvectors for the matrix $B$ from the previous section?\n", + "\n", + "$$B = \\left[\n", + " \\begin{array}{ccc} 3 & 2 & 4 \\\\ 2 & 0 & 2 \\\\ 4 & 2 & 3\n", + " \\end{array} \\right]$$\n", + "\n", + "The solution to this problem is available [here](lab3_files/quizzes/eigvec/eigvec.html)\n", + "\n", + "After solving the questions by hand, you can use Python to check your\n", + "answer.\n", + "\n", + "
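A sketch of the same check for $B$, which also compares the built-in condition number with the eigenvalue estimate from the Condition Number section (remember that each *column* of `x` is a unit eigenvector):

```python
import numpy as np

B = np.array([[3, 2, 4],
              [2, 0, 2],
              [4, 2, 3]], dtype=float)

lamb, x = np.linalg.eig(B)
print("eigenvalues           :", lamb)
print("eigenvectors (columns):\n", x)

# compare cond(B) with the estimate |lambda_max / lambda_min|
print("np.linalg.cond(B)        :", np.linalg.cond(B))
print("|lambda_max / lambda_min|:", abs(lamb).max() / abs(lamb).min())
```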
\n", + "🐾 \n", + "
\n", + "\n", + "[Numpy and Python with Matrices](#Numpy-and-Python-with-Matrices)\n", + "\n", + "Although the method used here to find the eigenvalues is a direct way to\n", + "find the solution, it is not very efficient, especially for large\n", + "matrices. Typically, iterative methods such as the Power Method or the\n", + "QR algorithm are used (see a linear algebra text such as [Strang (1988)](#Ref:Strang88)\n", + "for more details)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Iterative Methods\n", + "[lab3:sec:iter]:<#Interative-Methods>\n", + "\n", + "So far, the only method we’ve seen for solving systems of linear\n", + "equations is Gaussian Elimination (with its pivoting variants), which is\n", + "only one of a class of *direct methods*. This name derives from the fact\n", + "that the Gaussian Elimination algorithm computes the exact solution\n", + "*directly*, in a finite number of steps. Other commonly-used direct\n", + "methods are based on matrix decomposition or factorizations different\n", + "from the $LU$ decomposition (see Section [Decomposition](#Decomposition)); for\n", + "example, the $LDL^T$ and Choleski factorizations of a matrix. When the\n", + "system to be solved is not too large, it is usually most efficient to\n", + "employ a direct technique that minimizes the effects of round-off error\n", + "(for example, Gaussian elimination with full pivoting).\n", + "\n", + "However, the matrices that occur in the discretization of differential\n", + "equations are typically *very large* and *sparse* – that is, a large\n", + "proportion of the entries in the matrix are zero. In this case, a direct\n", + "algorithm, which has a cost on the order of $N^3$ multiplicative\n", + "operations, will be spending much of its time inefficiently, operating\n", + "on zero entries. In fact, there is another class of solution algorithms\n", + "called *iterative methods* which exploit the sparsity of such systems to\n", + "reduce the cost of the solution procedure, down to order $N^2$ for\n", + "*Jacobi’s method*, the simplest of the iterative methods (see Lab \\#8 )\n", + "and as low as order $N$ (the optimal order) for *multigrid methods*\n", + "(which we will not discuss here).\n", + "\n", + "Iterative methods are based on the principle of computing an approximate\n", + "solution to a system of equations, where an iterative procedure is\n", + "applied to improve the approximation at every iteration. While the exact\n", + "answer is never reached, it is hoped that the iterative method will\n", + "approach the answer more rapidly than a direct method. For problems\n", + "arising from differential equations, this is often possible since these\n", + "methods can take advantage of the presence of a large number of zeroes\n", + "in the matrix. Even more importantly, most differential equations are\n", + "only approximate models of real physical systems in the first place, and\n", + "so in many cases, an approximation of the solution is sufficient!!!\n", + "\n", + "None of the details of iterative methods will be discussed in this Lab.\n", + "For now it is enough to know that they exist, and what type of problems\n", + "they are used for. 
Neither will we address the questions: *How quickly\n", + "does an iterative method converge to the exact solution?*, *Does it\n", + "converge at all?*, and *When are they more efficient than a direct\n", + "method?* Iterative methods will be discussed in more detail in Lab \\#8 ,\n", + "when a large, sparse system appears in the discretization of a PDE\n", + "describing the flow of water in the oceans.\n", + "\n", + "For even more details on iterative methods, you can also look at [Strang (1988)](#Ref:Strang88) [p. 403ff.], or one of the several books listed in the\n", + "Readings section from Lab \\#8 ." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Solution of an ODE Using Linear Algebra\n", + "[lab3:sec:prob]:<#Solution-of-an-ODE-Using-Linear-Algebra>\n", + "\n", + "So far, we’ve been dealing mainly with matrices with dimensions\n", + "$4\\times 4$ at the greatest. If this was the largest linear system that\n", + "we ever had to solve, then there would be no need for computers –\n", + "everything could be done by hand! Nevertheless, even very simple\n", + "differential equations lead to large systems of equations.\n", + "\n", + "Consider the problem of finding the steady state heat distribution in a\n", + "one-dimensional rod, lying along the $x$-axis between 0 and 1. We saw in\n", + "Lab \\#1 that the temperature, $u(x)$, can be described by a boundary\n", + "value problem, consisting of the ordinary differential equation\n", + "$$u_{xx} = f(x),$$ along with boundary values $$u(0)=u(1) = 0.$$ The\n", + "only difference between this example and the one from Lab \\#1 is that\n", + "the right hand side function, $f(x)$, is non-zero, which corresponds to\n", + "a heat source being applied along the length of the rod. The boundary\n", + "conditions correspond to the ends of the rod being held at constant\n", + "(zero) temperature – this type of condition is known as a fixed or\n", + "*Dirichlet* boundary condition.\n", + "\n", + "If we discretize this equation at $N$ discrete points, $x_i=id$,\n", + "$i=0,1,\\dots,N$, where $d = 1/N$ is the grid spacing, then the ordinary\n", + "differential equation can be approximated at a point $x_i$ by the\n", + "following system of linear equations:\n", + "
\n", + "(Discrete Differential Equation)\n", + "$$\\frac{u_{i+1} - 2u_i+u_{i-1}}{d^2} = f_i,$$ \n", + "
\n", + "where $f_i=f(x_i)$, and $u_i\\approx u(x_i)$\n", + "is an approximation of the steady state temperature at the discrete\n", + "points. If we write out all of the equations, for the unknown values\n", + "$i=1,\\dots,N-1$, along with the boundary conditions at $i=0,N$, we\n", + "obtain the following set of $N+1$ equations in $N+1$ unknowns:\n", + "
\n", + "(Differential System)\n", + "$$\\begin{array}{ccccccccccc}\n", + " u_0 & & & & & & & & &=& 0 \\\\\n", + " u_0 &-&2u_1 &+& u_2 & & & & &=& d^2 f_1\\\\\n", + " & & u_1 &-& 2u_2 &+& u_3 & & &=& d^2f_2\\\\\n", + " & & & & & & \\dots & & &=& \\\\\n", + " & & & &u_{N-2}&-& 2u_{N-1}&+& u_N &=& d^2f_{N-1}\\\\\n", + " & & & & & & & & u_N &=& 0\n", + "\\end{array}$$\n", + "
\n", + "\n", + "Remember that this system, like any other linear system, can be written\n", + "in matrix notation as\n", + "\n", + "
\n", + "(Differential System Matrix)\n", + "$$\\underbrace{\\left[\n", + " \\begin{array}{ccccccccc}\n", + " 1& 0 & & \\dots & & & & & 0 \\\\\n", + " 1& {-2} & {1} & {0} & {\\dots} & && & \\\\\n", + " 0& {1} & {-2} & {1} & {0} & {\\dots} & & & \\\\\n", + " & {0} & {1} & {-2} & {1} & {0} & {\\dots} & & \\\\\n", + " & & & & & & & & \\\\\n", + " \\vdots & & & {\\ddots} & {\\ddots} & {\\ddots} & {\\ddots} & {\\ddots} & \\vdots \\\\\n", + " & & & & & & & & \\\\\n", + " & & & {\\dots} & {0} & {1} & {-2} & {1} & 0 \\\\\n", + " & & & &{\\dots} & {0} & {1} & {-2} & 1 \\\\\n", + " 0& & & & & \\dots & & 0 & 1 \n", + " \\end{array}\n", + " \\right]\n", + " }_{A_1}\n", + " \\underbrace{\\left[\n", + " \\begin{array}{c}\n", + " u_0 \\\\ {u_1} \\\\ {u_2} \\\\ {u_3} \\\\ \\ \\\\ {\\vdots} \\\\ \\\n", + " \\\\ {u_{N-2}} \\\\ {u_{N-1}} \\\\ u_N\n", + " \\end{array}\n", + " \\right]\n", + " }_{U}\n", + " = \n", + " \\underbrace{\\left[\n", + " \\begin{array}{c}\n", + " 0 \\\\ {d^2 f_1} \\\\ {d^2 f_2} \\\\ {d^2 f_3} \\\\ \\ \\\\\n", + " {\\vdots} \\\\ \\ \\\\ {d^2 f_{N-2}} \\\\ {d^2 f_{N-1}} \\\\ 0 \n", + " \\end{array}\n", + " \\right] \n", + " }_{F}$$\n", + "
\n", + "\n", + "or, simply $A_1 U = F$.\n", + "\n", + "One question we might ask is: *How well-conditioned is the matrix\n", + "$A_1$?* or, in other words, *How easy is this system to solve?* To\n", + "answer this question, we leave the right hand side, and consider only\n", + "the matrix and its condition number. The size of the condition number is\n", + "a measure of how expensive it will be to invert the matrix and hence\n", + "solve the discrete boundary value problem." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "tags": [] + }, + "source": [ + "### Problem Two\n", + "[lab3:prob:dirichlet]:<#Problem-Two> \n", + "\n", + "> a) Using Python, compute the condition number for\n", + "the matrix $A_1$ from Equation [Differential System Matrix](#lab3:eq:dir-system) for several values of $N$\n", + "between 5 and 50. ( **Hint:** This will be much easier if you write a\n", + "small Python function that outputs the matrix $A$ for a given value of\n", + "$N$.)\n", + "\n", + "> b\\) Can you conjecture how the condition number of $A_1$ depends on $N$?\n", + "\n", + "> c\\) Another way to write the system of equations is to substitute the\n", + "boundary conditions into the equations, and thereby reduce size of the\n", + "problem to one of $N-1$ equations in $N-1$ unknowns. The corresponding\n", + "matrix is simply the $N-1$ by $N-1$ submatrix of $A_1$\n", + "from Equation [Differential System Matrix](#lab3:eq:dir-system) $$A_2 = \\left[\n", + " \\begin{array}{ccccccc}\n", + " -2 & 1 & 0 & \\dots & && 0 \\\\\n", + " 1 & -2 & 1 & 0 & \\dots & & \\\\\n", + " 0 & 1 & -2 & 1 & 0 & \\dots & \\\\\n", + " & & & & & & \\\\\n", + " \\vdots & & \\ddots & \\ddots& \\ddots & \\ddots & \\vdots\\\\\n", + " & & & & & & 0 \\\\\n", + " & & \\dots & 0 & 1 & -2 & 1 \\\\\n", + " 0& & &\\dots & 0 & 1 & -2 \\\\\n", + " \\end{array}\n", + " \\right]\n", + "$$ Does this change in the matrix make a significant difference in the\n", + "condition number?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "So far, we’ve only considered zero Dirichlet boundary values,\n", + "$u_0=0=u_N$. Let’s look at a few more types of boundary values …\n", + "\n", + " **Fixed (non-zero) boundary conditions:**\n", + "\n", + "> If we fixed the solution at the boundary to be some non-zero values,\n", + " say by holding one end at temperature $u_0=a$, and the other at\n", + " temperature $u_N=b$, then the matrix itself is not affected. The\n", + " only thing that changes in Equation [Differential System](#lab3:eq:dir-system) \n", + " is that\n", + " a term $a$ is subtracted from the right hand side of the second\n", + " equation, and a term $b$ is subtracted from the RHS of the\n", + " second-to-last equation. It is clear from what we’ve just said that\n", + " non-zero Dirichlet boundary conditions have no effect at all on the\n", + " matrix $A_1$ (or $A_2$) since they modify only the right hand side.\n", + "\n", + " **No-flow boundary conditions:**\n", + "\n", + "> These are conditions on the first derivative of the temperature\n", + " $$u_x(0) = 0,$$ $$u_x(1) = 0,$$\n", + " which are also known as *Neumann*\n", + " boundary conditions. 
The requirement that the first derivative of\n", + " the temperature be zero at the ends corresponds physically to the\n", + " situation where the ends of the rod are *insulated*; that is, rather\n", + " than fixing the temperature at the ends of the rod (as we did with\n", + " the Dirichlet problem), we require instead that there is no heat\n", + " flow in or out of the rod through the ends.\n", + "\n", + "> There is still one thing that is missing in the mathematical\n", + " formulation of this problem: since only derivatives of $u$ appear in\n", + " the equations and boundary conditions, the solution is determined\n", + " only up to a constant, and for there to be a unique solution, we\n", + " must add an extra condition. For example, we could set\n", + " $$u \\left(\\frac{1}{2} \\right) = constant,$$\n", + " or, more realistically,\n", + " say that the total heat contained in the rod is constant, or\n", + " $$\\int_0^1 u(x) dx = constant.$$\n", + " \n", + "> Now, let us look at the discrete formulation of the above problem …\n", + "\n", + "> The discrete equations do not change, except for that discrete\n", + " equations at $i=0,N$ replace the Dirichlet conditions in Equation [Differential System](#lab3:eq:diff-system):\n", + " \n", + " (Neumann Boundary Conditions)\n", + " $$u_{-1} - 2u_0 +u_{1} = d^2f_0 \\quad {\\rm and} \\quad\n", + " u_{N-1} - 2u_N +u_{N+1} = d^2f_N $$ \n", + " where we have introduced the\n", + " additional *ghost points*, or *fictitious points* $u_{-1}$ and\n", + " $u_{N+1}$, *lying outside the boundary*. The temperature at these\n", + " ghost points can be determined in terms of values in the interior\n", + " using the discrete version of the Neumann boundary conditions\n", + " $$\\frac{u_{-1} - u_1}{2d} = 0 \\;\\; \\Longrightarrow \\;\\; u_{-1} = u_1,$$\n", + " $$\\frac{u_{N+1} - u_{N-1}}{2d} = 0 \\;\\; \\Longrightarrow \\;\\; u_{N+1} = u_{N-1}.$$\n", + " Substitute these back into the [Neumann Boundary Conditions](#lab3:eq:neumann-over) to obtain\n", + " $$- 2u_0 + 2 u_1 =d^2 f_0 \\quad {\\rm and} \\quad\n", + " + 2u_{N-1} - 2 u_N =d^2 f_N .$$\n", + " In this case, the matrix is an\n", + " $N+1$ by $N+1$ matrix, almost identical to Equation [Differential System Matrix](#lab3:eq:dir-system),\n", + " but with the first and last rows slightly modified $$A_3 = \\left[\n", + " \\begin{array}{ccccccc}\n", + " -2 & 2 & 0 & \\dots & && 0 \\\\\n", + " 1 & -2 & 1 & 0 & \\dots & & 0\\\\\n", + " 0 & 1 & -2 & 1 & 0 & \\dots & 0\\\\\n", + " & & & & & & \\\\\n", + " & & & \\ddots& \\ddots & \\ddots & \\\\ \n", + " & & & & & & \\\\\n", + " 0 & & \\dots & 0 & 1 & -2 & 1 \\\\\n", + " 0 & & &\\dots & 0 & 2 & -2\n", + " \\end{array}\n", + " \\right]$$ \n", + " This system is *not solvable*; that is, the $A_3$ above is\n", + " singular ( *try it in Python to check for yourself … this should be\n", + " easy by modifying the code from [Problem 2](#Problem-Two)).\n", + " This is a discrete analogue of the fact that the continuous solution\n", + " is not unique. The only way to overcome this problem is to add\n", + " another equation for the unknown temperatures." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Physically the reason the problem is not unique is that we don’t know\n", + "how hot the rod is. If we think of the full time dependent problem:\n", + "\n", + "1\\) given fixed temperatures at the end points of the rod (Dirichlet),\n", + "whatever the starting temperature of the rod, eventually the rod will\n", + "reach equilibrium. 
with a temperature smoothly varying between the\n", + "values (given) at the end points.\n", + "\n", + "2\\) However, if the rod is insulated (Neumann), no heat can escape and\n", + "the final temperature will be related to the initial temperature. To\n", + "solve this problem we need to know the steady state,\n", + "\n", + "a\\) the initial temperature of the rod,\n", + "\n", + "b\\) the total heat of the rod,\n", + "\n", + "c\\) or a final temperature at some point of the rod.\n", + "\n", + "### Problem Three\n", + "\n", + "[lab3:prob:neumann]:<#Problem-Three>\n", + "\n", + "> How can we make the discrete Neumann problem\n", + "solvable? Think in terms of discretizing the *solvability conditions*\n", + "$u(\\frac{1}{2}) = c$ (condition c) above), or $\\int_0^1 u(x) dx = c$\n", + "(condition b) above), (the integral condition can be thought of as an\n", + "*average* over the domain, in which case we can approximate it by the\n", + "discrete average $\\frac{1}{N}(u_0+u_1+\\dots+u_N)=c$). \n", + "\n", + "> a) Derive the\n", + "matrix corresponding to the linear system to be solved in both of these\n", + "cases.\n", + "\n", + "> b\\) How does the conditioning of the resulting matrix depend on the the\n", + "size of the system?\n", + "\n", + "> c\\) Is it better or worse than for Dirichlet boundary conditions?\n", + "\n", + " **Periodic boundary conditions:**\n", + "\n", + "> This refers to the requirement that the temperature at both ends\n", + " remains the same: $$u(0) = u(1).$$ Physically, you can think of this\n", + " as joining the ends of the rod together, so that it is like a\n", + " *ring*. From what we’ve seen already with the other boundary\n", + " conditions, it is not hard to see that the discrete form of the\n", + " one-dimensional diffusion problem with periodic boundary conditions\n", + " leads to an $N\\times N$ matrix of the form $$A_4 = \\left[\n", + " \\begin{array}{ccccccc}\n", + " -2 & 1 & 0 & \\dots & && 1 \\\\\n", + " 1 & -2 & 1 & 0 & \\dots & & 0\\\\\n", + " 0 & 1 & -2 & 1 & 0 & \\dots & 0\\\\\n", + " & & & & & & \\\\\n", + " & & & \\ddots& \\ddots & \\ddots & \\\\ \n", + " & & & & & & \\\\\n", + " 0 & & \\dots & 0 & 1 & -2 & 1 \\\\\n", + " 1 & & &\\dots & 0 & 1 & -2\n", + " \\end{array}\n", + " \\right],\n", + " $$ where the unknown temperatures are now $u_i$, $i=0,1,\\dots, N-1$.\n", + " The major change to the form of the matrix is that the elements in\n", + " the upper right and lower left corners are now 1 instead of 0. Again\n", + " the same problem of the invertibility of the matrix comes up. This\n", + " is a symptom of the fact that the continuous problem does not have a\n", + " unique solution. It can also be remedied by tacking on an extra\n", + " condition, such as in the Neumann problem above.\n", + "\n", + "### Problem Four\n", + "[lab3:prob:periodic]: <#Problem-Four> \n", + "\n", + "> a) Derive the matrix $A_4$ above using the discrete\n", + "form [Discrete Differential Equation](#lab3:eq:diff-ode) of the differential equation and the periodic\n", + "boundary condition.\n", + "\n", + "> b) For the periodic problem (with the extra integral condition on the\n", + "temperature) how does the conditioning of the matrix compare to that for\n", + "the other two discrete problems?" 
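As a starting point for Problems Two to Four, here is one possible version of the small helper function suggested in the hint to Problem Two. It builds the Dirichlet matrix $A_1$ for a given $N$ and prints its condition number; the function name and the values of $N$ are illustrative, and the matrices $A_2$, $A_3$ and $A_4$ still need to be built in the same spirit.

```python
import numpy as np

def dirichlet_matrix(N):
    # (N+1) x (N+1) matrix A_1: boundary rows impose u_0 = u_N = 0,
    # interior rows carry the 1, -2, 1 finite-difference stencil
    A = np.zeros((N + 1, N + 1))
    A[0, 0] = 1.0
    A[N, N] = 1.0
    for i in range(1, N):
        A[i, i - 1] = 1.0
        A[i, i] = -2.0
        A[i, i + 1] = 1.0
    return A

for N in (5, 10, 20, 40):
    A1 = dirichlet_matrix(N)
    print(f"N = {N:3d}   cond(A_1) = {np.linalg.cond(A1):8.1f}")
```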
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Summary\n", + "\n", + "As you will have found in these problems, the boundary conditions can\n", + "have an influence on the conditioning of a discrete problem.\n", + "Furthermore, the method of discretizing the boundary conditions may or\n", + "may not have a large effect on the condition number. Consequently, we\n", + "must take care when discretizing a problem in order to obtain an\n", + "efficient numerical scheme." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References\n", + "
\n", + "Strang, G., 1986: Introduction to Applied Mathematics. Wellesley-Cambridge Press, Wellesley, MA.\n", + "
\n", + "
\n", + "Strang, G., 1988: Linear Algebra and its Applications. Harcourt Brace Jovanovich, San Diego, CA, 2nd edition.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Numpy and Python with Matrices\n", + "\n", + "**To start, import numpy,**\n", + "\n", + "Enter:\n", + "\n", + "> import numpy as np\n", + "\n", + "**To enter a matrix,** \n", + "$$ A = \\left[ \\begin{array}{ccc} a, & b, & c \\\\\n", + " d, & e, & f \\end{array} \\right] $$\n", + "Enter:\n", + "\n", + "> A = np.array([[a, b, c], [d, e, f]])\n", + "\n", + "**To add two matrices,**\n", + "$$ C = A + B$$\n", + "\n", + "Enter:\n", + "\n", + "> C = A + B\n", + "\n", + "**To multiply two matrices,**\n", + "$$ C = A \\cdot B $$\n", + "\n", + "Enter:\n", + "\n", + "> C = np.dot(A, B)\n", + "\n", + "**To find the tranpose of a matrix,**\n", + "$$ C = A^{T} $$\n", + "\n", + "Enter:\n", + "\n", + "> C = A.tranpose()\n", + "\n", + "**To find the condition number of a matrix,**\n", + "\n", + "> K = np.linalg.cond(A)\n", + "\n", + "**To find the inverse of a matrix,**\n", + "$$ C = A^{-1} $$\n", + "\n", + "Enter:\n", + "\n", + "> C = np.linalg.inv(A)\n", + "\n", + "**To find the determinant of a matrix,**\n", + "$$ K = |A|$$\n", + "\n", + "Enter:\n", + "\n", + "> K = np.linalg.det(A)\n", + "\n", + "**To find the eigenvalues of a matrix,**\n", + "\n", + "Enter:\n", + "\n", + "> lamb = np.linalg.eigvals(A)\n", + "\n", + "**To find the eigenvalues (lamb) and eigenvectors (x) of a matrix,**\n", + "\n", + "Enter:\n", + "\n", + "> lamb, x = np.linalg.eig(A)\n", + "\n", + "lamb[i] are the eigenvalues and x[:, i] are the eigenvectors.\n", + "\n", + "**To print a matrix,**\n", + "$$C$$\n", + "\n", + "Enter:\n", + "\n", + "> print (C)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Glossary\n", + "[glossary.unnumbered]:<#Glossary>\n", + "\n", + "**A** \n", + "\n", + "augmented matrix\n", + "\n", + "> The $m \\times (n+1)$ matrix representing a linear system,\n", + " $Ax = b$, with the right hand side vector appended to the\n", + " coefficient matrix: $$\\left[ \n", + " \\begin{array}{cc}\n", + " \\begin{array}{ccccc} \n", + " a_{11} & & \\ldots & & a_{1n} \\\\\n", + " \\vdots & & \\ddots & & \\vdots \\\\\n", + " a_{m1} & & \\ldots & & a_{mn} \n", + " \\end{array}\n", + " &\n", + " \\left| \n", + " \\begin{array}{rc}\n", + " & b_{1} \\\\ & \\vdots \\\\ & b_{m}\n", + " \\end{array} \n", + " \\right. \n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "> The right most column is the right hand side vector or augmented\n", + " column.\n", + "\n", + "**C** \n", + "\n", + "characteristic equation\n", + "\n", + "> The equation:\n", + " $$\\det(A - \\lambda I) = 0 , \\ \\ \\ \\ or \\ \\ \\ \\ Ax = \\lambda x$$\n", + " where $A$ is a *square matrix*, $I$ is the *identity matrix*,\n", + " $\\lambda$ is an *eigenvalue* of $A$, and $x$ is the corresponding\n", + " *eigenvector* of $\\lambda$.\n", + "\n", + "coefficient matrix\n", + "\n", + "> A $m \\times n$ matrix made up with the coefficients $a_{ij}$ of the\n", + " $n$ unknowns from the $m$ equations of a set of linear equations,\n", + " where $i$ is the row index and $j$ is the column index: $$\\left[\n", + " \\begin{array}{ccccccc}\n", + " & a_{11} & & \\ldots & & a_{1n} & \\\\\n", + " & \\vdots & & \\ddots & & \\vdots & \\\\\n", + " & a_{m1} & & \\ldots & & a_{mn} &\n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "condition number\n", + "\n", + "> A number, $K$, that refers to the sensitivity of a *nonsingular*\n", + " matrix, $A$, i.e. 
given a system $Ax = b$, $K$ reflects whether\n", + " small changes in $A$ and $b$ will have any effect on the solution.\n", + " The matrix is well-conditioned if $K$ is close to one. The number is\n", + " described as: $$K(A) = \\|A\\| \\|A^{-1}\\| \n", + " \\ \\ \\ \\ or \\ \\ \\ \\\n", + " K(A) = \\frac{\\lambda_{max}}{\\lambda_{min}}$$ where $\\lambda_{max}$\n", + " and $\\lambda_{min}$ are largest and smallest *eigenvalues* of $A$\n", + " respectively.\n", + "\n", + "**D** \n", + "\n", + "decomposition\n", + "\n", + "> Factoring a matrix, $A$, into two factors, e.g., the Gaussian\n", + " elimination amounts to factoring $A$ into a product of two matrices.\n", + " One is the lower triangular matrix, $L$, and the other is the upper\n", + " triangular matrix, $U$.\n", + "\n", + "diagonal matrix\n", + "\n", + "> A square matrix with the entries $a_{ij} = 0 $ whenever $i \\neq j$.\n", + "\n", + "**E**\n", + "\n", + "eigenvalue\n", + "\n", + "> A number, $\\lambda$, that must satisfy the *characteristic equation*\n", + " $\\det(A - \\lambda I) = 0.$\n", + "\n", + "eigenvector\n", + "\n", + "> A vector, $x$, which corresponds to an *eigenvalue* of a *square\n", + " matrix* $A$, satisfying the characteristic equation:\n", + " $$Ax = \\lambda x .$$\n", + "\n", + "**H** \n", + "\n", + "homogeneous equations\n", + "\n", + "> A set of linear equations, $Ax = b$ with the zero vector on the\n", + " right hand side, i.e. $b = 0$.\n", + "\n", + "**I** \n", + "\n", + "inhomogeneous equations\n", + "\n", + "> A set of linear equations, $Ax = b$ such that $b \\neq 0$.\n", + "\n", + "identity matrix\n", + "\n", + "> A *diagonal matrix* with the entries $a_{ii} = 1$:\n", + " $$\\left[ \\begin{array}{ccccccc}\n", + " & 1 & 0 & \\ldots & \\ldots & 0 & \\\\\n", + " & 0 & 1 & \\ddots & & \\vdots & \\\\\n", + " & \\vdots & \\ddots & \\ddots & \\ddots & \\vdots \\\\\n", + " & \\vdots & & \\ddots & 1 & 0 & \\\\\n", + " & 0 & \\ldots & \\ldots & 0 & 1 &\n", + " \\end{array} \\right]$$\n", + "\n", + "ill-conditioned matrix\n", + "\n", + "> A matrix with a large *condition number*, i.e., the matrix is not\n", + " well-behaved, and small errors to the matrix will have great effects\n", + " to the solution.\n", + "\n", + "invertible matrix\n", + "\n", + "> A square matrix, $A$, such that there exists another matrix,\n", + " $A^{-1}$, which satisfies:\n", + " $$AA^{-1} = I \\ \\ \\ \\ and \\ \\ \\ \\ A^{-1}A = I$$\n", + "\n", + "> The matrix, $A^{-1}$, is the *inverse* of $A$. 
An invertible matrix\n", + " is *nonsingular*.\n", + "\n", + "**L** \n", + "\n", + "linear system\n", + "\n", + "> A set of $m$ equations in $n$ unknowns: $$\\begin{array}{ccccccc}\n", + " a_{11}x_{1} & + & \\ldots & + & a_{1n}x_{n} & = & b_{1} \\\\\n", + " a_{21}x_{1} & + & \\ldots & + & a_{2n}x_{n} & = & b_{2} \\\\\n", + " & & \\vdots & & & & \\vdots \\\\\n", + " a_{m1}x_{1} & + & \\ldots & + & a_{mn}x_{n} & = & b_{m} \n", + " \\end{array}$$ with unknowns $x_{i}$ and coefficients\n", + " $a_{ij}, b_{j}$.\n", + "\n", + "lower triangular matrix\n", + "\n", + "> A square matrix, $L$, with the entries $l_{ij} = 0$, whenever\n", + " $j > i$: $$\\left[\n", + " \\begin{array}{ccccccc}\n", + " & * & 0 & \\ldots & \\ldots & 0 & \\\\\n", + " & * & * & \\ddots & & \\vdots & \\\\\n", + " & \\vdots & & \\ddots & \\ddots & \\vdots & \\\\\n", + " & \\vdots & & & * & 0 & \\\\\n", + " & * & \\ldots & \\ldots & \\ldots & * &\n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "**N** \n", + "\n", + "nonsingular matrix\n", + "\n", + "> A square matrix,$A$, that is invertible, i.e. the system $Ax = b$\n", + " has a *unique solution*.\n", + "\n", + "**S** \n", + "\n", + "singular matrix\n", + "\n", + "> A $n \\times n$ matrix that is degenerate and does not have an\n", + " inverse (refer to *invertible*), i.e., the system $Ax = b$ does not\n", + " have a *unique solution*.\n", + "\n", + "sparse matrix\n", + "\n", + "> A matrix with a high percentage of zero entries.\n", + "\n", + "square matrix\n", + "\n", + "> A matrix with the same number of rows and columns.\n", + "\n", + "**T** \n", + "\n", + "transpose\n", + "\n", + "> A $n \\times m$ matrix, $A^{T}$, that has the columns of a\n", + " $m \\times n$ matrix, $A$, as its rows, and the rows of $A$ as its\n", + " columns, i.e. the entry $a_{ij}$ in $A$ becomes $a_{ji}$ in $A^{T}$,\n", + " e.g.\n", + "\n", + "> $$A = \n", + " \\left[ \\begin{array}{ccc} 1 & 2 & 3 \\\\ 4 & 5 & 6 \\end{array} \\right] \n", + " \\ \\ \\rightarrow \\ \\ \n", + " A^{T} = \n", + " \\left[ \\begin{array}{cc} 1 & 4 \\\\ 2 & 5 \\\\ 3 & 6 \\end{array} \\right]$$\n", + "\n", + "tridiagonal matrix\n", + "\n", + "> A square matrix with the entries $a_{ij} = 0$, $| i-j | > 1 $:\n", + "\n", + "> $$\\left[\n", + " \\begin{array}{cccccccc}\n", + " & * & * & 0 & \\ldots & \\ldots & 0 & \\\\\n", + " & * & * & * & \\ddots & & \\vdots & \\\\\n", + " & 0 & * & \\ddots & \\ddots & \\ddots & \\vdots & \\\\\n", + " & \\vdots & \\ddots & \\ddots & \\ddots & * & 0 & \\\\\n", + " & \\vdots & & \\ddots & * & * & * & \\\\\n", + " & 0 & \\ldots & \\ldots & 0 & * & * &\n", + " \\end{array}\n", + " \\right]$$\n", + "\n", + "**U** \n", + "\n", + "unique solution\n", + "\n", + "> There is only solution, $x$, that satisfies a particular linear\n", + " system, $Ax = b$, for the given $A$. That is, this linear system has\n", + " exactly one solution. 
The matrix $A$ of the system is *invertible*\n", + " or *nonsingular*.\n", + "\n", + "upper triangular matrix\n", + "\n", + "> A square matrix, $U$, with the entries $u_{ij} = 0$ whenever\n", + " $i > j$:\n", + "\n", + "> $$\\left[\n", + " \\begin{array}{ccccccc}\n", + " & * & \\ldots & \\ldots & \\ldots & * & \\\\\n", + " & 0 & * & & & \\vdots & \\\\\n", + " & \\vdots & \\ddots & \\ddots & & \\vdots & \\\\\n", + " & \\vdots & & \\ddots & * & * & \\\\\n", + " & 0 & \\ldots & \\ldots & 0 & * &\n", + " \\end{array}\n", + " \\right]$$\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "header_metadata": { + "chead": "Feb., 2020", + "lhead": "Numeric Lab 2" + }, + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "324.176px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab3/lab3_files/quizzes/char/char.html b/notebooks/lab3/lab3_files/quizzes/char/char.html new file mode 100644 index 0000000..f0bc7de --- /dev/null +++ b/notebooks/lab3/lab3_files/quizzes/char/char.html @@ -0,0 +1,40 @@ + + + +Solution to Characteristic Equation + + + + + + +

Solution to Characteristic Equation

+

+

+

+ + Given +

+

+

+ Substituting the matrix into the characteristic equation: +

+

+

+ The eigenvalues of B are -1 and 8. +

+ Check this answer with Octave:
+

+ octave:1> B = [3 2 4; 2 0 2; 4 2 3]
+ octave:2> eig(B)
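+ The same check in Python/numpy (not part of the original page; shown for readers following the labs in Python):
+ import numpy as np
+ B = np.array([[3, 2, 4], [2, 0, 2], [4, 2, 3]])
+ np.linalg.eigvals(B)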
+

+ +

+ +

+


+

+John M. Stockie
+Fri Sep 8 14:03:35 PDT 1995
+
+ diff --git a/notebooks/lab3/lab3_files/quizzes/det/det.html b/notebooks/lab3/lab3_files/quizzes/det/det.html new file mode 100644 index 0000000..54f4afe --- /dev/null +++ b/notebooks/lab3/lab3_files/quizzes/det/det.html @@ -0,0 +1,84 @@ + + + +Solution to Determinant + + + + + + +

+

Solution to Determinant

+

+

+

+ +

    +
  1. Given +

    +

    + The third row has the most zero entries, so the determinant will
+be expanded along this row. The reason for
+choosing this row will become clear below.
    +

    + To find the determinant, you need to first find the
+components that make up the determinant of the matrix. Each
+element a_ij in the chosen row i of the matrix
+is multiplied by the determinant of the matrix without
+row i and column j, i.e. an (n-1) x (n-1) matrix.
    +

    + In this case, the first element is -1, and the matrix is the original matrix without row 3 and column 1: +

    +

    +

    + This is the first component for calculating det(A). +

    +The other components are derived the same way: +

    +

    +

    +

    +

    + Before adding the components together to find the
+determinant, there is one extra factor that is necessary. Each
+component has to be multiplied by the number (-1)^(i+j), where i and
+j are the row and column, respectively, that contain the element from
+the chosen row.
    +

    + So when everything is put together: +

    +

    +

    + So A has an inverse. +

    + Note that since two of the elements in row three are 0, in calculating the determinant, two of the components can easily be removed, because they are 0. +

    + +

  2. Given +

    +

    + The third row should be chosen since it has the most zero +entries. In the same way as the previous question, the various +components are derived, so: +

    +

    +

    + B is singular, i.e. it does not have an inverse. +

    + +

  3. Given +

    + With Octave,
    +

    +octave:1> C = [4 -2 -7 6; -3 0 1 0; -1 -1 5 -1; 0 1 -5 3]
    +octave:2> det(C)

    +

    + you should get det(C) = -82. C is non-singular, since its determinant is not zero. So it has an inverse.
+
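    +The same check in Python/numpy (not part of the original page):
+import numpy as np
+C = np.array([[4, -2, -7, 6], [-3, 0, 1, 0], [-1, -1, 5, -1], [0, 1, -5, 3]])
+np.linalg.det(C)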

    +



+

+John M. Stockie
+Fri Sep 8 14:05:45 PDT 1995
+
+ diff --git a/notebooks/lab3/lab3_files/quizzes/eigvec/eigvec.html b/notebooks/lab3/lab3_files/quizzes/eigvec/eigvec.html new file mode 100644 index 0000000..a70e003 --- /dev/null +++ b/notebooks/lab3/lab3_files/quizzes/eigvec/eigvec.html @@ -0,0 +1,86 @@ + + + +Solution to Eigenvectors + + + + + + +

+

Press BACK to return to the lab.



+

+

Solution to Eigenvectors

+

+

+

+ +

+The matrix given is: +

+

+

+ From the previous section, you should have found that the +eigenvalues for this matrix are = -1 or 8. +

+ Using Octave, +

+ octave:1> B = [3 2 4; 2 0 2;4 2 3]
+octave:2> [x, lambda] = eig(B)
+x =

+ 0.74536 -0.66667 -0.34487
+ -0.29814 -0.33333 -0.65498
+ -0.59628 -0.66667 0.67236

+lambda =

+ -1.00000 0.00000 0.00000
+ 0.00000 8.00000 0.00000
+ 0.00000 0.00000 -1.00000
+

+ So the answer from Octave gives two unit eigenvectors for
+lambda = -1, and one solution for lambda = 8.
+

+ Calculation by hand, with = -1, will be: +

+

+

+ Reading from the matrix, +

+

+

+ +

+ Remember that = -1 appears as two of the three +eigenvalues, so there are two degrees of freedom for the solution, +i.e. the solution is actually a plane defined by this equation. + +

+ So any vector on this plane is an eigenvector of B: +

+ +

+ The answers, (0.74536, -0.29814, -0.59628) and
+(-0.34487, -0.65498, 0.67236), shown in Octave for lambda = -1 are
+only two of the possible unit eigenvectors for the solution. They
+should satisfy the equation of the plane.
+

+ For = 8, the same method can be used to find the +eigenvector: +

+

+

+ Reading from the matrix: +

+

+

+ Letting = 2, then = 1 and = 2. This +answer is a multiple of the solution given by Octave, which is (-0.66667, -0.33333, -0.66667). Note that = 8 has +only one degree of freedom. +

+


+

+John M. Stockie
+Mon Sep 18 11:40:39 PDT 1995
+
+ diff --git a/notebooks/lab3/lab3_files/quizzes/gaus/gaus.html b/notebooks/lab3/lab3_files/quizzes/gaus/gaus.html new file mode 100644 index 0000000..c763f72 --- /dev/null +++ b/notebooks/lab3/lab3_files/quizzes/gaus/gaus.html @@ -0,0 +1,104 @@ + + + +Solution to Gaussian Elimination + + + + + + +

+

Solution to Gaussian Elimination

+

+

+

+ +

+Given the set of linear equations: +

+

+

+Let's look at the solution graphically first.

+

+Figure 1:

+Figure 2:

+Figure 3:
+

+Figure 1 is the plane of , Figure 2 is the plane of , +and Figure 3 is the plane of . +

+The final result will be the intersection of the three planes: +

+ +

+ +

+One way of solving the equations: +

+

    +
  1. Divide by 2; multiply by 3 and subtract it + from to cancel from ; subtract the new + from to cancel from : +

    +

    + +

  2. Divide by -18; multiply by 2 and add it to + to cancel from :

    +

    + +

  3. Now, solve for , and substitute it into to solve + for ; then substitute and into and solve + for : +

+

+So, = -2, = 9 and = 3. +

+ +

+Solving the equations in a matrix: +

+ Two main steps are involved in this solution. The Gaussian +elimination is performed first, followed by the Back-substitution: +

+

    +
  • Reducing the matrix with Gaussian elimination +

    + +

    +

    +

    + +

  • Back-substitution +

    +Reading from the matrix:
    +

    +

      +
    • finding +

      +

      + +

    • finding +

      +

      + +

    • finding +

+

+ So, = -2, = 9 and = 3. As expected, the two different methods give the same answer. +

+ +

+ +

+ +


+

+John M. Stockie
+Fri Sep 8 14:25:44 PDT 1995
+
+ diff --git a/notebooks/lab3/lab3_files/quizzes/inverse/inverse.html b/notebooks/lab3/lab3_files/quizzes/inverse/inverse.html new file mode 100644 index 0000000..7246287 --- /dev/null +++ b/notebooks/lab3/lab3_files/quizzes/inverse/inverse.html @@ -0,0 +1,60 @@ + + + +Solution to Matrix Inversion + + + + + + +

+

Solution to Matrix Inversion

+

+

+

+ +

  1. +

    +The given matrix: +

    +

    +

    +does not have an inverse. It is actually a singular matrix, +i.e. there is more than one solution to the system Ax = b, for some +b. +

    + Suppose b = [2 -6 1], then solving for : +

    +

    +

    + The system has more than one solution, i.e. it does not have +a unique solution. Therefore, the matrix A does not have an +inverse.

    +

    + +

  2. +

    +The given matrix: +

    +

    +

    +has an inverse, since the system Bx = c has a unique solution for some c. +

    + To find the inverse of matrix B: +

    +

    +

    +i.e. +

    +

+

+Use Octave to confirm the answers. +

+

+


+

+John M. Stockie
+Mon Sep 18 11:38:45 PDT 1995
+
+ diff --git a/notebooks/lab3/lab3_files/quizzes/quick/quick.html b/notebooks/lab3/lab3_files/quizzes/quick/quick.html new file mode 100644 index 0000000..9c48eb7 --- /dev/null +++ b/notebooks/lab3/lab3_files/quizzes/quick/quick.html @@ -0,0 +1,130 @@ + + + +Solution to Quick Review + + + + + + +

+

Solution to Quick Review

+

+

+

+ +

+

    +
  1. x + y +

    +

    + Click on the picture for a three-dimensional look at the solution. + +

    + +

  2. +

    +

    + +

  3. y-x +

    +

    + Click on the picture for a three-dimensional look of the solution. + +

    + +

  4. Ax i.e. times +

    +

    +Note that in this multiplication, the values of the same subscript match, i.e.: +

    +

    +

    + +

  5. i.e. times +

    +

    +As you can see, A is "too small" for y. There is nothing for + to multiply by. Another way to think of this is that the size +of A and y do not match, i.e. +

    +

    +

    +Unlike in question 4, where the subscript j in both A and x have +the same values, the subscript i that appears in y and A in this +question do not have the same values. +

    + +

  6. AB i.e. times +

    +

    +In this question, a 2 x 3 matrix is multiplied by a 3 x 2 matrix. Consider the subscripts:
+

    +

    +

    +Note that: +

    • the inner matrix dimensions (pointed by arrows) agree +
    • the outer matrix dimensions determine the size of the resulting + matrix, i.e. the answer is a 2 x 2 matrix. +
    +

    + +

  7. BA i.e. times +

    +

    +In this question, a 3 x 2 matrix is multiplied by a 2 x 3 matrix. Consider the subscripts:
+

    +

    +

    +Note that: +

    • the inner matrix dimensions (pointed by arrows) agree +
    • the outer matrix dimensions determine the size of the resulting + matrix, i.e. the answer is a 3 x 3 matrix. +
    +

    +Also note that BA is not equal to AB. Usually, matrix multiplication is not
+commutative, as in this case. However, there are situations where two
+matrices are commutative, e.g.,
+

    +

    +

    +In this case, XY = YX: +

    +

    +

    +Also note that if two matrices are inverse of each other, they are commutative as well. +

    + +

  8. AA i.e. times +

    +

    +Here, a 2 x 3 matrix is multiplied by a 2 x 3 matrix. Looking at the
+matrix, you can see that there is nothing to be multiplied by -4.
+Consider the subscripts:
+

    +

    +

    +Note that the inner matrix dimensions (pointed by arrows) do not +agree. Therefore, the answer is undefined. +
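    +A quick way to check these shape rules in Python/numpy (an illustration only; the
+entries below are placeholders, not the matrices from the quiz):
+import numpy as np
+A = np.ones((2, 3)); B = np.ones((3, 2))
+(A @ B).shape   # (2, 2): inner dimensions agree
+(B @ A).shape   # (3, 3)
+# A @ A would raise a ValueError, since the inner dimensions (3 and 2) do not agree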

    + +

    +

+

+ +

+ +

+ +

+ +


+

+John M. Stockie
+Fri Sep 8 14:43:09 PDT 1995
+
+ diff --git a/notebooks/lab4/01-lab4.html b/notebooks/lab4/01-lab4.html new file mode 100644 index 0000000..644fcd9 --- /dev/null +++ b/notebooks/lab4/01-lab4.html @@ -0,0 +1,773 @@ + + + + + + + + Solving Ordinary Differential Equations with the Runge-Kutta Methods — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Solving Ordinary Differential Equations with the Runge-Kutta Methods

+
+

List of Problems

+ +

Assignment: see canvas for the problems you should hand-in.

+
+
+

Objectives

+

In this lab, you will explore Runge-Kutta methods for solving ordinary differential equations. The goal is to gain a better understanding of some of the more popular Runge-Kutta methods and the corresponding numerical code.

+

Specifically you will be able to:

+
    +
  • describe the mid-point method

  • +
  • construct a Runge-Kutta tableau from equations or equations from a tableau

  • +
  • describe how a Runge-Kutta method estimates truncation error

  • +
  • edit a working python code to use a different method or solve a different problem

  • +
+
+
+

Readings

+

There is no required reading for this lab, beyond the contents of the lab itself. However, if you would like additional background on any of the following topics, then refer to the sections indicated below.

+

Runge-Kutta Methods:

+
-   Newman, Chapter 8
+
+-   Press, et al., Section 16.1
+
+-   Burden & Faires, Section 5.4
+
+
+
+
+

Introduction

+

Ordinary differential equations (ODEs) arise in many physical situations. For example, there is the first-order Newton cooling equation discussed in Lab 1, and perhaps the most famous equation of all, the second-order Newton’s Second Law of Mechanics \(F=ma\) .

+

In general, higher-order equations, such as Newton’s force equation, can be rewritten as a system of first-order equations . So the generic problem in ODEs is a set of N coupled first-order differential equations of the form,

+
+\[\frac{d{\bf y}}{dt} = f({\bf y},t)\]
+

where \({\bf y}\) is a vector of variables.
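For example, Newton's second law for a damped spring, \(m x'' = -c x' - k x\), becomes a two-component first-order system with \({\bf y} = (x, x')\). A minimal sketch of the corresponding derivative function is shown below (illustrative only; the constants m, c, k and the function name are assumptions, not part of the lab library):
+import numpy as np
+
+def derivs(y, t, m=1.0, c=0.5, k=2.0):
+    # y[0] = x, y[1] = dx/dt
+    return np.array([y[1], (-c * y[1] - k * y[0]) / m])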

+

For a complete specification of the solution, boundary conditions for the problem must be given. Typically, the problems are broken up into two classes:

+
    +
  • Initial Value Problem (IVP): the initial values of \({\bf y}\) are specified.

  • +
  • Boundary Value Problem (BVP): \({\bf y}\) is specified at the initial and final times.

  • +
+

For this lab, we are concerned with IVPs. BVPs tend to be much more difficult to solve and involve techniques which will not be dealt with in this set of labs.

+

Now as was pointed out in Lab 2, in general, it will not be possible to find exact, analytic solutions to the ODE. However, it is possible to find an approximate solution with a finite difference scheme such as the forward Euler method. This is a simple first-order, one-step scheme which is easy to implement. However, this method is rarely used in practice as it is neither very stable nor accurate.

+

The higher-order Taylor methods discussed in Lab 2 are one alternative but involve higher-order derivatives that must be calculated by hand or worked out numerically in a multi-step scheme. Like the forward Euler method, stability is a concern.

+

The Runge-Kutta methods are higher-order, one-step schemes that make use of information at different stages between the beginning and end of a step. They are more stable and accurate than the forward Euler method and are still relatively simple compared to schemes such as the multi-step predictor-corrector methods or the Bulirsch-Stoer method. Though they lack the accuracy and efficiency of these more sophisticated schemes, they are still powerful methods that almost always succeed for +non-stiff IVPs.

+
+
+

Runge-Kutta methods

+
+

The Midpoint Method: A Two-Stage Runge-Kutta Method

+

The forward Euler method takes the solution at time \(t_n\) and advances it to time \(t_{n+1}\) using the value of the derivative \(f(y_n,t_n)\) at time \(t_n\)

+
+\[y_{n+1} = y_n + h f(y_n,t_n)\]
+

where \(h \equiv \Delta t\).

+

fig1

+

Figure Euler: The forward Euler method is essentially a straight-line approximation to the solution, over the interval of one step, using the derivative at the starting point as the slope.

+

The idea of the Runge-Kutta schemes is to take advantage of derivative information at the times between \(t_n\) and \(t_{n+1}\) to increase the order of accuracy.

+

For example, in the midpoint method, the derivative at the initial time is used to approximate the derivative at the midpoint of the interval, \(f(y_n+\frac{1}{2}hf(y_n,t_n), t_n+\frac{1}{2}h)\). The derivative at the midpoint is then used to advance the solution to the next step.

+

The method can be written in two stages \(k_i\),

+

eq:midpoint

+
+\[\begin{split}\begin{aligned} + \begin{array}{l} + k_1 = h f(y_n,t_n)\\ + k_2 = h f(y_n+\frac{1}{2}k_1, t_n+\frac{1}{2}h)\\ + y_{n+1} = y_n + k_2 + \end{array} +\end{aligned}\end{split}\]
+

The midpoint method is known as a 2-stage Runge-Kutta formula.

+

fig2

+

Figure midpoint: The midpoint method again uses the derivative at the starting point to approximate the solution at the midpoint. The derivative at the midpoint is then used as the slope of the straight-line approximation.
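To make the difference between the two schemes concrete, here is a minimal sketch of a single forward Euler step and a single midpoint step (illustrations only, not the lab's eulerinter41 and midpointinter41 routines):
+def euler_step(f, y, t, h):
+    # forward Euler: use the slope at the start of the interval
+    return y + h * f(y, t)
+
+def midpoint_step(f, y, t, h):
+    # midpoint (2-stage Runge-Kutta): use the slope estimated at the midpoint
+    k1 = h * f(y, t)
+    k2 = h * f(y + 0.5 * k1, t + 0.5 * h)
+    return y + k2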

+
+
+

Second-Order Runge-Kutta Methods

+

As was shown in Lab 2, the local error in the forward Euler method is proportional to \(h^2\). In other words, the forward Euler method has an accuracy which is first order in \(h\).

+

The advantage of the midpoint method is that the extra derivative information at the midpoint results in the first order error term cancelling out, making the method second order accurate. This can be shown by a Taylor expansion of equation eq:midpoint

+
+
+

ProblemMidpoint

+

Even though the midpoint method is second-order accurate, it may still be less accurate than the forward Euler method. In the demo below, compare the accuracy of the two methods on the initial value problem

+

eq:linexp

+

\begin{equation} +\frac{dy}{dt} = -y +t +1, \;\;\;\; y(0) =1 +\end{equation}

+

which has the exact solution \begin{equation} +y(t) = t + e^{-t} +\end{equation}

+
    +
  1. Why is it possible that the midpoint method may be less accurate than the forward Euler method, even though it is a higher order method?

  2. +
  3. Based on the numerical solutions of eq:linexp, which method appears more accurate?

  4. +
  5. Cut the stepsize in half and check the error at a given time. Repeat a couple of more times. How does the error drop relative to the change in stepsize?

  6. +
  7. How do the numerical solutions compare to \(y(t) = t + e^{-t}\) when you change the initial time? Why?

  8. +
+

Note: the original lab code (below) contains a bug: the Euler method is initialized at each timestep with the previous value from the midpoint method, NOT the previous value from the Euler method!

+
+
[ ]:
+
+
+
# original demo - with bug
+import context
+from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41
+import numpy as np
+from matplotlib import pyplot as plt
+
+initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.25,'c1':-1.,'c2':1.,'c3':1.}
+coeff = initinter41(initialVals)
+timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)
+nsteps=len(timeVec)
+ye=[]
+ym=[]
+y=coeff.yinitial
+ye.append(coeff.yinitial)
+ym.append(coeff.yinitial)
+for i in np.arange(1,nsteps):
+    ynew=eulerinter41(coeff,y,timeVec[i-1])
+    ye.append(ynew)
+    ynew=midpointinter41(coeff,y,timeVec[i-1])
+    ym.append(ynew)
+    y=ynew
+analytic=timeVec + np.exp(-timeVec)
+theFig,theAx=plt.subplots(1,1)
+l1=theAx.plot(timeVec,analytic,'b-',label='analytic')
+theAx.set_xlabel('time (seconds)')
+l2=theAx.plot(timeVec,ye,'r-',label='euler')
+l3=theAx.plot(timeVec,ym,'g-',label='midpoint')
+theAx.legend(loc='best')
+theAx.set_title('interactive 4.1');
+
+
+
+

Note: this bug has been fixed in the code below, by calling each method with the previous value from that method!

+
+
[ ]:
+
+
+
# original demo
+import context
+from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41
+import numpy as np
+from matplotlib import pyplot as plt
+
+initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.25,'c1':-1.,'c2':1.,'c3':1.}
+coeff = initinter41(initialVals)
+timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)
+nsteps=len(timeVec)
+ye=[]
+ym=[]
+y=coeff.yinitial
+ye.append(coeff.yinitial)
+ym.append(coeff.yinitial)
+for i in np.arange(1,nsteps):
+    ynew=eulerinter41(coeff,ye[i-1],timeVec[i-1]) ## here we use ye[i-1] instead of y
+    ye.append(ynew)
+
+    ynew=midpointinter41(coeff,ym[i-1],timeVec[i-1]) ## here we use ym[i-1] instead of y
+    ym.append(ynew)
+analytic=timeVec + np.exp(-timeVec)
+theFig,theAx=plt.subplots(1,1)
+l1=theAx.plot(timeVec,analytic,'b-',label='analytic')
+theAx.set_xlabel('time (seconds)')
+l2=theAx.plot(timeVec,ye,'r-',label='euler')
+l3=theAx.plot(timeVec,ym,'g-',label='midpoint')
+theAx.legend(loc='best')
+theAx.set_title('interactive 4.1');
+
+
+
+

In general, an explicit 2-stage Runge-Kutta method can be written as

+

eq:explicitrk1

+

\begin{align} +k_1 =& h f(y_n,t_n)\\ +k_2 =& h f(y_n+b_{21}k_1, t_n+a_2h) \\ +y_{n+1} =& y_n + c_1k_1 +c_2k_2 +\end{align}

+

The scheme is said to be explicit since a given stage does not depend implicitly on itself, as in the backward Euler method, or on a later stage.

+

Other explicit second-order schemes can be derived by comparing the formula eq:explicitrk1 to the second-order Taylor method and matching terms to determine the coefficients \(a_2\), \(b_{21}\), \(c_1\) and \(c_2\).

+

See Appendix midpoint for the derivation of the midpoint method.

+
+
+

The Runge-Kutta Tableau

+

A general s-stage Runge-Kutta method can be written as,

+
+\[\begin{split}\begin{array}{l} + k_i = h f(y_n+ {\displaystyle \sum_{j=1}^{s} } b_{ij}k_j, t_n+a_ih), + \;\;\; i=1,..., s\\ + y_{n+1} = y_n + {\displaystyle \sum_{j=1}^{s}} c_jk_j +\end{array}\end{split}\]
+

An explicit Runge-Kutta method has \(b_{ij}=0\) for \(i\leq j\), i.e. a given stage \(k_i\) does not depend on itself or a later stage \(k_j\).

+

The coefficients can be expressed in a tabular form known as the Runge-Kutta tableau.

+
+\[\begin{split}\begin{array}{|c|c|cccc|c|} \hline +i & a_i &{b_{ij}} & & && c_i \\ \hline +1 & a_1 & b_{11} & b_{12} & ... & b_{1s} & c_1\\ +2 & a_2 & b_{21} & b_{22} & ... & b_{2s} & c_2\\ +\vdots & \vdots & \vdots & \vdots & & \vdots & \vdots\\ +s &a_s & b_{s1} & b_{s2} & ... & b_{ss} & c_s\\\hline +{j=} & & 1 \ 2 & ... & s & \\ \hline +\end{array}\end{split}\]
+

For an explicit scheme, the tableau of \(b_{ij}\) coefficients is strictly lower-triangular.

+

For example, a general 2-stage Runge-Kutta method,

+
+\[\begin{split} \begin{array}{l} + k_1 = h f(y_n+b_{11}k_1+b_{12}k_2,t_n+a_1h)\\ + k_2 = h f(y_n+b_{21}k_1+b_{22}k_2, t_n+a_2h)\\ + y_{n+1} = y_n + c_1k_1 +c_2k_2 +\end{array}\end{split}\]
+

has the coefficients,

+
+\[\begin{split}\begin{array}{|c|c|cc|c|} \hline +i & a_i & {b_{ij}} & & c_i \\ \hline +1 & a_1 & b_{11} & b_{12} & c_1\\ +2 & a_2 & b_{21} & b_{22} & c_2\\ \hline +{j=} & & 1 & 2 & \\ \hline +\end{array}\end{split}\]
+

In particular, the midpoint method is given by the tableau,

+
+\[\begin{split}\begin{array}{|c|c|cc|c|} \hline +i & a_i & {b_{ij}} & & c_i \\ \hline +1 & 0 & 0 & 0 & 0\\ +2 & \frac{1}{2} & \frac{1}{2} & 0 & 1\\ \hline +{j=} & & 1 & 2 & \\ \hline +\end{array}\end{split}\]
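To see how a tableau encodes a method, the sketch below drives a generic explicit Runge-Kutta step directly from arrays of the coefficients \(a_i\), \(b_{ij}\) and \(c_i\) (a sketch for illustration only; the lab library does not use this form), using the midpoint tableau above as the example:
+def explicit_rk_step(f, y, t, h, a, b, c):
+    # one explicit Runge-Kutta step defined by tableau arrays a, b, c
+    k = []
+    for i in range(len(c)):
+        # explicit: only earlier stages j < i contribute (b is strictly lower-triangular)
+        y_stage = y + sum(b[i][j] * k[j] for j in range(i))
+        k.append(h * f(y_stage, t + a[i] * h))
+    return y + sum(c[j] * k[j] for j in range(len(c)))
+
+# midpoint tableau from above
+a = [0.0, 0.5]
+b = [[0.0, 0.0], [0.5, 0.0]]
+c = [0.0, 1.0]
+# y_next = explicit_rk_step(f, y, t, h, a, b, c)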
+
+
+

ProblemTableau

+

Write out the tableau for

+
    +
  1. Heun’s/Ralston method:

    +
    +\[\begin{split} \begin{array}{l} +k_1 = h f(y_n,t_n)\\ +k_2 = h f(y_n+\frac{2}{3}k_1, t_n+\frac{2}{3}h)\\ +y_{n+1} = y_n + \frac{1}{4}k_1 + \frac{3}{4}k_2 + \end{array}\end{split}\]
    +
  2. +
  3. the fourth-order Runge-Kutta method (eq:rk4) (discussed further in the next section):

    +
    +\[\begin{split} \begin{array}{l} +k_1 = h f(y_n,t_n)\\ +k_2 = h f(y_n+\frac{k_1}{2}, t_n+\frac{h}{2})\\ +k_3 = h f(y_n+\frac{k_2}{2}, t_n+\frac{h}{2})\\ +k_4 = h f(y_n+k_3, t_n+h)\\ +y_{n+1} = y_n + \frac{k_1}{6}+ \frac{k_2}{3}+ \frac{k_3}{3} + \frac{k_4}{6} + \end{array}\end{split}\]
    +
  4. +
+
+
+

Explicit Fourth-Order Runge-Kutta Method

+

Explicit Runge-Kutta methods are popular as each stage can be calculated with one function evaluation. In contrast, implicit Runge-Kutta methods usually involves solving a non-linear system of equations in order to evaluate the stages. As a result, explicit schemes are much less expensive to implement than implicit schemes.

+

However, there are cases in which implicit schemes are necessary and that is in the case of stiff sets of equations. See section 16.6 of Press et al. for a discussion. For these labs, we will focus on non-stiff equations and on explicit Runge-Kutta methods.

+

The higher-order Runge-Kutta methods can be derived in a manner similar to the midpoint formula. An s-stage method is compared to a Taylor method and the terms are matched up to the desired order.

+

Methods of order \(M > 4\) require \(M+1\) or \(M+2\) function evaluations or stages, in the case of explicit Runge-Kutta methods. As a result, fourth-order Runge-Kutta methods have achieved great popularity over the years as they require only four function evaluations per step. In particular, there is the classic fourth-order Runge-Kutta formula:

+

eq:rk4

+
+\[\begin{split}\begin{array}{l} + k_1 = h f(y_n,t_n)\\ + k_2 = h f(y_n+\frac{k_1}{2}, t_n+\frac{h}{2})\\ + k_3 = h f(y_n+\frac{k_2}{2}, t_n+\frac{h}{2})\\ + k_4 = h f(y_n+k_3, t_n+h)\\ + y_{n+1} = y_n + \frac{k_1}{6}+ \frac{k_2}{3}+ \frac{k_3}{3} + \frac{k_4}{6} +\end{array}\end{split}\]
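Transcribed directly into code, one step of this classic formula looks like the sketch below (for illustration only; the lab's rk4ODEinter41 plays this role in the demos):
+def rk4_step(f, y, t, h):
+    k1 = h * f(y, t)
+    k2 = h * f(y + k1 / 2, t + h / 2)
+    k3 = h * f(y + k2 / 2, t + h / 2)
+    k4 = h * f(y + k3, t + h)
+    return y + k1 / 6 + k2 / 3 + k3 / 3 + k4 / 6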
+
+
+

ProblemRK4

+

In the cell below, compare solutions to the test problem

+

eq:test

+
+\[\frac{dy}{dt} = -y +t +1, \;\;\;\; y(0) =1\]
+

generated with the fourth-order Runge-Kutta method to solutions generated by the forward Euler and midpoint methods.

+
    +
  1. Based on the numerical solutions of (eq:test), which of the three methods appears more accurate?

  2. +
  3. Again determine how the error changes relative to the change in stepsize, as the stepsize is halved.

  4. +
+
+
[ ]:
+
+
+
from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41,\
+                                        rk4ODEinter41
+initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.05,'c1':-1.,'c2':1.,'c3':1.}
+coeff = initinter41(initialVals)
+timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)
+nsteps=len(timeVec)
+ye=[]
+ym=[]
+yrk=[]
+y=coeff.yinitial
+ye.append(coeff.yinitial)
+ym.append(coeff.yinitial)
+yrk.append(coeff.yinitial)
+for i in np.arange(1,nsteps):
+    ynew=eulerinter41(coeff,ye[i-1],timeVec[i-1])
+    ye.append(ynew)
+    ynew=midpointinter41(coeff,ym[i-1],timeVec[i-1])
+    ym.append(ynew)
+    ynew=rk4ODEinter41(coeff,yrk[i-1],timeVec[i-1])
+    yrk.append(ynew)
+analytic=timeVec + np.exp(-timeVec)
+theFig=plt.figure(0)
+theFig.clf()
+theAx=theFig.add_subplot(111)
+l1=theAx.plot(timeVec,analytic,'b-',label='analytic')
+theAx.set_xlabel('time (seconds)')
+l2=theAx.plot(timeVec,ye,'r-',label='euler')
+l3=theAx.plot(timeVec,ym,'g-',label='midpoint')
+l4=theAx.plot(timeVec,yrk,'m-',label='rk4')
+theAx.legend(loc='best')
+theAx.set_title('interactive 4.2');
+
+
+
+
+
+

Embedded Runge-Kutta Methods: Estimate of the Truncation Error

+

It is possible to find two methods of different order which share the same stages \(k_i\) and differ only in the way they are combined, i.e. the coefficients \(c_i\). For example, the original so-called embedded Runge-Kutta scheme was discovered by Fehlberg and consisted of a fourth-order scheme and fifth-order scheme which shared the same six stages.

+

In general, a fourth-order scheme embedded in a fifth-order scheme will share the stages

+
+\[\begin{split}\begin{array}{l} + k_1 = h f(y_n,t_n)\\ + k_2 = h f(y_n+b_{21}k_1, t_n+a_2h)\\ + \vdots \\ + k_6 = h f(y_n+b_{61}k_1+ ...+b_{66}k_6, t_n+a_6h) +\end{array}\end{split}\]
+

The fifth-order formula takes the step:

+
+\[y_{n+1}=y_n+c_1k_1+c_2k_2+c_3k_3+c_4k_4+c_5k_5+c_6k_6\]
+

while the embedded fourth-order formula takes a different step:

+
+\[y_{n+1}^*=y_n+c^*_1k_1+c^*_2k_2+c^*_3k_3+c^*_4k_4+c^*_5k_5+c^*_6k_6\]
+

If we now take the difference between the two numerical estimates, we get an estimate \(\Delta_{\rm est}\) of the truncation error for the fourth-order method,

+
+\[ \Delta_{\rm est}(i)=y_{n+1}(i) - y_{n+1}^{*}(i) += \sum^{6}_{i=1}(c_i-c_{i}^{*})k_i\]
+

This will prove to be very useful in the next lab where we provide the Runge-Kutta algorithms with adaptive stepsize control. The error estimate is used as a guide to an appropriate choice of stepsize.
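In code, the estimate is just a different weighted sum of the same stages. The sketch below (hypothetical names, not the lab's rkckODEinter41) forms the fifth-order step, the embedded fourth-order step and their difference from the six shared stages:
+import numpy as np
+
+def embedded_step(y_n, k, c, cstar):
+    # k: the six shared stages k_1..k_6; c, cstar: 5th- and 4th-order weights
+    k = np.asarray(k)
+    y5 = y_n + np.dot(c, k)          # fifth-order estimate
+    y4 = y_n + np.dot(cstar, k)      # embedded fourth-order estimate
+    delta_est = y5 - y4              # = sum_i (c_i - c*_i) k_i
+    return y5, delta_est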

+

An example of an embedded Runge-Kutta scheme was found by Cash and Karp and has the tableau:

+
+\[\begin{split}\begin{array}{|c|c|cccccc|c|c|} \hline +i & a_i & {b_{ij}} & & & & & & c_i & c^*_i \\ \hline +1 & & & & & & & & \frac{37}{378} & \frac{2825}{27648}\\ +2 & \frac{1}{5} & \frac{1}{5}& & & & & & 0 &0 \\ +3 & \frac{3}{10} & \frac{3}{40}&\frac{9}{40}& & & & &\frac{250}{621}&\frac{18575}{48384}\\ +4 & \frac{3}{5}&\frac{3}{10}& -\frac{9}{10}&\frac{6}{5}& & & &\frac{125}{594}& \frac{13525}{55296}\\ +5 & 1 & -\frac{11}{54}&\frac{5}{2}&-\frac{70}{27}&\frac{35}{27}& & & 0 & \frac{277}{14336}\\ +6 & \frac{7}{8}& \frac{1631}{55296}& \frac{175}{512}&\frac{575}{13824}& \frac{44275}{110592}& \frac{253}{4096}& & \frac{512}{1771} & \frac{1}{4}\\\hline +{j=} & & 1 & 2 & 3 & 4 & 5 & 6 & & \\ \hline +\end{array}\end{split}\]
+
+
+

ProblemEmbedded

+

Though the error estimate is for the embedded fourth-order Runge-Kutta method, the fifth-order method can be used in practice for calculating the solution, the assumption being the fifth-order method should be at least as accurate as the fourth-order method. In the demo below, compare solutions of the test problem eq:test2

+

eq:test2

+
+\[\frac{dy}{dt} = -y +t +1, \;\;\;\; y(0) =1\]
+

generated by the fifth-order method with solutions generated by the standard fourth-order Runge-Kutta method. Which method is more accurate? For each method, quantitatively analyse how the error decreases as you halve the stepsizes - discuss whether this is the expected behaviour given what you know about the order of the methods?

+

Optional extra part: adapt the rkckODEinter41 code to return both the 5th order (as it currently does) AND the embedded 4th order scheme. Compare the accuracy of the embedded 4th order scheme to the standard 4th order scheme.

+
+
[ ]:
+
+
+
import numpy as np
+from matplotlib import pyplot as plt
+
+from numlabs.lab4.lab4_functions import initinter41,rk4ODEinter41,rkckODEinter41
+initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.2,'c1':-1.,'c2':1.,'c3':1.}
+coeff = initinter41(initialVals)
+
+timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)
+nsteps=len(timeVec)
+ye=[]
+ym=[]
+yrk=[]
+yrkck=[]
+y1=coeff.yinitial
+y2=coeff.yinitial
+yrk.append(coeff.yinitial)
+yrkck.append(coeff.yinitial)
+for i in np.arange(1,nsteps):
+    ynew=rk4ODEinter41(coeff,y1,timeVec[i-1])
+    yrk.append(ynew)
+    y1=ynew
+    ynew=rkckODEinter41(coeff,y2,timeVec[i-1])
+    yrkck.append(ynew)
+    y2=ynew
+analytic=timeVec + np.exp(-timeVec)
+theFig,theAx=plt.subplots(1,1)
+l1=theAx.plot(timeVec,analytic,'b-',label='analytic')
+theAx.set_xlabel('time (seconds)')
+l2=theAx.plot(timeVec,yrkck,'g-',label='rkck')
+l3=theAx.plot(timeVec,yrk,'m-',label='rk')
+theAx.legend(loc='best')
+theAx.set_title('interactive 4.3');
+
+
+
+
+
+
+

Python: moving from a notebook to a library

+
+

Managing problem configurations

+

So far we’ve hardcoded our initialVals dictionary into a cell. We need a strategy for saving this information into a file that we can keep track of using git, and modify for various runs. In Python the fundamental data type is the dictionary. It’s very flexible, but that comes at a cost – there are other data structures that are better suited to storing this type of information.

+
+

Mutable vs. immutable data types

+

Python dictionaries and lists are mutable, which means they can be modified after they are created. Python tuples, on the other hand, are immutable – there is no way of changing them without creating a copy. Why does this matter? One reason is efficiency and safety, an immutable object is easier to reason about. Another reason is that immutable objects are hashable, that is, they can be turned into a unique string that can be guaranteed to represent that exact instance of the +datatype. Hashable data structures can be used as dictionary keys, mutable data structures can’t. Here’s an illustration – this cell works:

+
+
[ ]:
+
+
+
test_dict=dict()
+the_key = (0,1,2,3) # this is a tuple, i.e. immutable - it uses curved parentheses ()
+test_dict[the_key]=5
+print(test_dict)
+
+
+
+

this cell fails:

+
+
[ ]:
+
+
+
import traceback, sys
+test_dict=dict()
+the_key = [0,1,2,3] # this is a list - it uses square parentheses []
+try:
+    test_dict[the_key]=5
+except TypeError as e:
+    tb = sys.exc_info()
+    traceback.print_exception(*tb)
+
+
+
+
+
Named tuples
+

One particular tuple flavor that bridges the gap between tuples and dictionaries is the namedtuple. It has the ability to look up values by attribute instead of numerical index (unlike a tuple), but it’s immutable and so can be used as a dictionary key. The cell below show how to convert from a dictionary to a namedtuple for our case:

+
+
[ ]:
+
+
+
from collections import namedtuple
+initialDict={'yinitial': 1,'t_beg':0.,'t_end':1.,
+                    'dt':0.2,'c1':-1.,'c2':1.,'c3':1.}
+inittup=namedtuple('inittup','dt c1 c2 c3 t_beg t_end yinitial')
+initialCond=inittup(**initialDict)
+print(f"values are {initialCond.c1} and {initialCond.yinitial}")
+
+
+
+

Comment on the cell above:

+
    +
  1. inittup=namedtuple('inittup','dt c1 c2 c3 t_beg t_end yinitial') creates a new data type with a type name (inittup) and properties (the attributes we will need like dt, c1 etc.)

  2. +
  3. initialCond=inittup(**initialDict) uses “keyword expansion” via the “doublesplat” operator ** to expand the initialDict into a set of key=value pairs for the inittup constructor which makes an instance of our new datatype called initialCond

  4. +
  5. we access these readonly members of the instance using attributes like this: newc1 = initialCond.c1

  6. +
+

Note the other big benefit for namedtuples – “initialCond.c1” is self-documenting: you don’t have to explain that the tuple value initialCond[1] holds c1, and you never have to worry about changes to the order of the tuple changing the results of your code.

+
+
+
+
+

Saving named tuples to a file

+

One drawback to namedtuples is that there’s no one anointed way to serialize them, i.e. we are in charge of figuring out how to write our namedtuple out to a file for future use. Contrast this with lists, strings, scalar numbers and dictionaries, which all have a builtin json representation in text form.

+

So here’s how to turn our named tuple back into a dictionary:

+
+
[ ]:
+
+
+
#
+# make the named tuple a dictionary
+#
+initialDict = initialCond._asdict()
+print(initialDict)
+
+
+
+

Why does _asdict start with an underscore? It’s to keep the fundamental methods and attributes of the namedtuple class separate from the attributes we added when we created the new inittup class. For more information, see the collections docs

+
+
[ ]:
+
+
+
outputDict = dict(initialconds = initialDict)
+import json
+outputDict['history'] = 'written Jan. 28, 2020'
+outputDict['plot_title'] = 'simple damped oscillator run 1'
+with open('run1.json', 'w') as jsonout:
+    json.dump(outputDict,jsonout,indent=4)
+
+
+
+

After running this cell, you should see the following json output in the file run1.json:

+
{
+    "initialconds": {
+        "dt": 0.2,
+        "c1": -1.0,
+        "c2": 1.0,
+        "c3": 1.0,
+        "t_beg": 0.0,
+        "t_end": 1.0,
+        "yinitial": 1
+    },
+    "history": "written Jan. 26, 2022",
+    "plot_title": "simple damped oscillator run 1"
+}
+
+
+
+
+

Reading a json file back into python

+

To recover your conditions read the file back in as a dictionary:

+
+
[ ]:
+
+
+
with open("run1.json",'r') as jsonin:
+    inputDict = json.load(jsonin)
+initial_conds = inittup(**inputDict['initialconds'])
+print(f"values are {initial_conds.c1} and {initial_conds.yinitial}")
+
+
+
+
+
+

Passing a derivative function to an integrator

+

In python, functions are first class objects, which means you can pass them around like any other datatype, no need to get function handles as in matlab or Fortran. The integrators in do_example.py have been written to accept a derivative function of the form:

+
def derivs4(coeff, y):
+
+
+

i.e. as long as the derivative can be written in terms of coefficients and the previous value of y, the integrator will move the ODE ahead one timestep. If we wanted coefficients that were a function of time, we would need to also include those functions in the coeff namedtuple, and keep track of the timestep through the integration.

+

Here’s an example using forward euler to integrate the harmonic oscillator

+

Note that you can also run this from the terminal by doing:

+
cd numlabs/lab4/example
+python do_example.py
+
+
+
+
[ ]:
+
+
+
import json
+from numlabs.lab4.example.do_example import get_init, euler4
+#
+# specify the derivs function
+#
+def derivs(coeff, y):
+  f=np.empty_like(y) #create a 2 element vector to hold the derivative
+  f[0]=y[1]
+  f[1]= -1.*coeff.c1*y[1] - coeff.c2*y[0]
+  return f
+#
+# first make sure we have an input file in this directory
+#
+
+coeff=get_init()
+
+#
+# integrate and save the result in savedata
+#
+time=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)
+y=coeff.yinitial
+nsteps=len(time)
+savedata=np.empty([nsteps],np.float64)
+
+for i in range(nsteps):
+    y=euler4(coeff,y,derivs)
+    savedata[i]=y[0]
+
+theFig,theAx=plt.subplots(1,1,figsize=(8,8))
+theAx.plot(time,savedata,'o-')
+theAx.set_title(coeff.plot_title)
+theAx.set_xlabel('time (seconds)')
+theAx.set_ylabel('y0');
+

+
+
+
+
+

ProblemCodingA

+

As set up above, do_example.py solves the damped, harmonic oscillator with the (unstable) forward Euler method.

+
    +
  1. Write a new routine that solves the harmonic oscillator using Heun’s method, along the lines of the routines in lab4_functions.py

  2. +
+
+
+

ProblemCodingB

+
    +
  1. Now solve the following test equation by both the midpoint and Heun’s method and compare.

    +
    +\[f(y,t) = t - y + 1.0\]
    +

    Choose two sets of initial conditions and determine if there is any difference between the two methods when applied to this problem. Should there be? Explain by analyzing the steps that each method is taking.

    +
  2. +
+
+
+

ProblemCodingC

+
    +
  1. Solve the Newtonian cooling equation of lab 1 by any of the above methods.

  2. +
  3. Add cells that do this and also generate some plots, showing your results along with the parameter values and initial conditions.

  4. +
+
+
+
+

Mathematical Notes

+
+

Note on the Derivation of the Second-Order Runge-Kutta Methods

+

A general s-stage Runge-Kutta method can be written as,

+
+\[\begin{split} \begin{array}{l} + k_i = h f(y_n+ {\displaystyle \sum_{j=1}^{s} } b_{ij}k_j, t_n+a_ih), + \;\;\; i=1,..., s\\ + y_{n+1} = y_n + {\displaystyle \sum_{j=1}^{s}} c_jk_j +\end{array}\end{split}\]
+

where

+

\({\displaystyle \sum_{j=1}^{s} } b_{ij} = a_i\).

+

In particular, an explicit 2-stage Runge-Kutta method can be written as,

+
+\[\begin{split}\begin{array}{l} + k_1 = h f(y_n,t_n)\\ + k_2 = h f(y_n+ak_1, t_n+ah)\\ + y_{n+1} = y_n + c_1k_1 +c_2k_2 +\end{array}\end{split}\]
+

where

+

\(b_{21} = a_2 \equiv a\).

+

So we want to know what values of \(a\), \(c_1\) and \(c_2\) leads to a second-order method, i.e. a method with an error proportional to \(h^3\).

+

To find out, we compare the method against a second-order Taylor expansion,

+
+\[y(t_n+h) = y(t_n) + hy^\prime(t_n) + \frac{h^2}{2}y^{\prime \prime}(t_n) + + O(h^3)\]
+

So for the \(y_{n+1}\) to be second-order accurate, it must match the Taylor method. In other words, \(c_1k_1 +c_2k_2\) must match \(hy^\prime(t_n) + \frac{h^2}{2}y^{\prime \prime}\). To do this, we need to express \(k_1\) and \(k_2\) in terms of derivatives of \(y\) at time \(t_n\).

+

First note, \(k_1 = hf(y_n, t_n) = hy^\prime(t_n)\).

+

Next, we can expand \(k_2\) about \((y_n, t_n)\),

+
+\[k_2 = hf(y_n+ak_1, t_n+ah) = h(f + haf_t + haf_yy^\prime + O(h^2))\]
+

However, we can write \(y^{\prime \prime}\) as,

+
+\[y^{\prime \prime} = \frac{df}{dt} = f_t + f_yy^\prime\]
+

This allows us to rewrite \(k_2\) in terms of \(y^{\prime \prime}\),

+
+\[k_2 = h(y^\prime + hay^{\prime \prime}+ O(h^2))\]
+

Substituting these expressions for \(k_i\) back into the Runge-Kutta formula gives us,

+
+\[y_{n+1} = y_n + c_1hy^\prime +c_2h(y^\prime + hay^{\prime \prime})\]
+

or

+
+\[y_{n+1} = y_n + h(c_1 +c_2)y^\prime + h^2(c_2a)y^{\prime \prime}\]
+

If we compare this against the second-order Taylor method, we see that we need,

+
+\[\begin{split} \begin{array}{l} +c_1 + c_2 = 1\\ +a c_2 = \frac{1}{2} + \end{array}\end{split}\]
+

for the Runge-Kutta method to be second-order.

+

If we choose \(a = 1/2\), this implies \(c_2 = 1\) and \(c_1=0\). This gives us the midpoint method.
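A quick numerical check of this result, using the test problem \(dy/dt = -y + t + 1\), \(y(0)=1\) from earlier in the lab (a sketch only): the one-step error of the midpoint method should fall by roughly a factor of 8 each time \(h\) is halved, consistent with a local error of order \(h^3\).
+import numpy as np
+
+def midpoint_step(f, y, t, h):
+    k1 = h * f(y, t)
+    k2 = h * f(y + 0.5 * k1, t + 0.5 * h)
+    return y + k2
+
+f = lambda y, t: -y + t + 1.0
+exact = lambda t: t + np.exp(-t)          # exact solution for y(0) = 1
+for h in (0.1, 0.05, 0.025):
+    err = abs(midpoint_step(f, 1.0, 0.0, h) - exact(h))
+    print(h, err)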

+

However, note that other choices are possible. In fact, we have a one-parameter family of second-order methods. For example, if we choose \(a=1\) and \(c_1=c_2=\frac{1}{2}\), we get the modified Euler method,

+
+\[\begin{split} \begin{array}{l} + k_1 = h f(y_n,t_n)\\ + k_2 = h f(y_n+k_1, t_n+h)\\ + y_{n+1} = y_n + \frac{1}{2}(k_1 +k_2) +\end{array}\end{split}\]
+

while the choice \(a=\frac{2}{3}\), \(c_1=\frac{1}{4}\) and \(c_2=\frac{3}{4}\), gives us

+

Heun’s Method (also referred to as Ralston’s method)

+
(Note: you may find a different definition of Heun's Method depending on the textbook you are reading.)
+
+
+
+\[\begin{split}\begin{array}{l} + k_1 = h f(y_n,t_n)\\ + k_2 = h f(y_n+\frac{2}{3}k_1, t_n+\frac{2}{3}h)\\ + y_{n+1} = y_n + \frac{1}{4}k_1 + \frac{3}{4}k_2 +\end{array}\end{split}\]
+
+
+
+

Glossary

+
    +
  • driver A routine that calls the other routines to solve the problem.

  • +
  • embedded Runge-Kutta methods: Two Runge-Kutta methods that share the same stages. The difference between the solutions give an estimate of the local truncation error.

  • +
  • explicit In an explicit numerical scheme, the calculation of the solution at a given step or stage does not depend on the value of the solution at that step or on a later step or stage.

  • +
  • fourth-order Runge-Kutta method A popular fourth-order, four-stage, explicit Runge-Kutta method.

  • +
  • implicit: In an implicit numerical scheme, the calculation of the solution at a given step or stage does depend on the value of the solution at that step or on a later step or stage. Such methods are usually more expensive than explicit schemes but are better for handling stiff ODEs.

  • +
  • midpoint method : A two-stage, second-order Runge-Kutta method.

  • +
  • stages: The approximations to the derivative made in a Runge-Kutta method between the start and end of a step.

  • +
  • tableau The tableau for a Runge-Kutta method organizes the coefficients for the method in tabular form.

  • +
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab4/01-lab4.ipynb b/notebooks/lab4/01-lab4.ipynb new file mode 100644 index 0000000..c76129c --- /dev/null +++ b/notebooks/lab4/01-lab4.ipynb @@ -0,0 +1,1346 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Solving Ordinary Differential Equations with the Runge-Kutta Methods " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems \n", + "\n", + "\n", + "- [Problem Midpoint](#ProblemMidpoint)\n", + "\n", + "- [Problem Tableau](#ProblemTableau)\n", + "\n", + "- [Problem Runge Kutta4](#ProblemRK4)\n", + "\n", + "- [Problem embedded](#ProblemEmbedded)\n", + "\n", + "- [Problem coding A](#ProblemCodingA)\n", + "\n", + "- [Problem coding B](#ProblemCodingB)\n", + "\n", + "- [Problem coding C](#ProblemCodingC)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Assignment: see canvas for the problems you should hand-in.**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "In this lab, you will explore Runge-Kutta methods for solving ordinary\n", + "differential equations. The goal is to gain a better understanding of\n", + "some of the more popular Runge-Kutta methods and the corresponding\n", + "numerical code.\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- describe the mid-point method\n", + "\n", + "- construct a Runge-Kutta tableau from equations or equations from a\n", + " tableau\n", + "\n", + "- describe how a Runge-Kutta method estimates truncation error\n", + "\n", + "- edit a working python code to use a different method or solve a\n", + " different problem" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. However, if you would like additional background on any of\n", + "the following topics, then refer to the sections indicated below.\n", + "\n", + "**Runge-Kutta Methods:**\n", + "\n", + " - Newman, Chapter 8\n", + "\n", + " - Press, et al.  Section 16.1\n", + "\n", + " - Burden & Faires  Section 5.4\n", + " " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction\n", + "\n", + "Ordinary differential equations (ODEs) arise in many physical situations. For example, there is the first-order Newton cooling equation discussed in Lab 1, and perhaps the most famous equation of all, the second-order Newton’s Second Law of Mechanics $F=ma$ .\n", + "\n", + "In general, higher-order equations, such as Newton’s force equation, can be rewritten as a system of first-order equations . So the generic problem in ODEs is a set of N coupled first-order differential equations of the form, \n", + "\n", + "$$\n", + " \\frac{d{\\bf y}}{dt} = f({\\bf y},t)\n", + "$$ \n", + " \n", + "where ${\\bf y}$ is a vector of\n", + "variables.\n", + "\n", + "For a complete specification of the solution, boundary conditions for the problem must be given. Typically, the problems are broken up into two classes:\n", + "\n", + "- **Initial Value Problem (IVP)**: the initial values of\n", + " ${\\bf y}$ are specified.\n", + "\n", + "- **Boundary Value Problem (BVP)**: ${\\bf y}$ is\n", + " specified at the initial and final times.\n", + "\n", + "For this lab, we are concerned with the IVP’s. 
BVP’s tend to be much more difficult to solve and involve techniques which will not be dealt with in this set of labs.\n", + "\n", + "Now as was pointed out in Lab 2, in general, it will not be possible to find exact, analytic solutions to the ODE. However, it is possible to find an approximate solution with a finite difference scheme such as the forward Euler method. This is a simple first-order, one-step scheme which is easy to implement. However, this method is rarely used in practice as it is neither very stable nor accurate.\n", + "\n", + "The higher-order Taylor methods discussed in Lab 2 are one alternative but involve higher-order derivatives that must be calculated by hand or worked out numerically in a multi-step scheme. Like the forward Euler method, stability is a concern.\n", + "\n", + "The Runge-Kutta methods are higher-order, one-step schemes that make use of information at different *stages* between the beginning and end of a step. They are more stable and accurate than the forward Euler method and are still relatively simple compared to schemes such as the multi-step predictor-corrector methods or the Bulirsch-Stoer method. Though they lack the accuracy and efficiency of these more sophisticated schemes, they are still powerful methods that almost always succeed for non-stiff IVPs." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Runge-Kutta methods" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The Midpoint Method: A Two-Stage Runge-Kutta Method \n", + "\n", + "The forward Euler method takes the solution at time $t_n$ and advances\n", + "it to time $t_{n+1}$ using the value of the derivative $f(y_n,t_n)$ at\n", + "time $t_n$ \n", + "\n", + "$$y_{n+1} = y_n + h f(y_n,t_n)$$ \n", + "\n", + "where $h \\equiv \\Delta t$." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "title": "[markdown" + }, + "source": [ + "![fig1](images/euler.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Figure Euler: The forward Euler method is essentially a straight-line approximation to the solution, over the interval of one step, using the derivative at the starting point as the slope. \n", + "\n", + "The idea of the Runge-Kutta schemes is to take advantage of derivative information at the times between $t_n$ and $t_{n+1}$ to increase the order of accuracy.\n", + "\n", + "For example, in the midpoint method, the derivative at the initial time is used to approximate the derivative at the midpoint of the interval, $f(y_n+\\frac{1}{2}hf(y_n,t_n), t_n+\\frac{1}{2}h)$. The derivative at the midpoint is then used to advance the solution to the next step. \n", + "\n", + "The method can be written in two *stages* $k_i$,\n", + "\n", + "
eq:midpoint
\n", + "$$\n", + "\\begin{aligned}\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{1}{2}k_1, t_n+\\frac{1}{2}h)\\\\\n", + " y_{n+1} = y_n + k_2\n", + " \\end{array}\n", + "\\end{aligned}\n", + "$$ \n", + "\n", + "The midpoint method is known as a 2-stage Runge-Kutta formula.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "![fig2](images/midpoint.png)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Figure midpoint: The midpoint method again uses the derivative at the starting point to\n", + "approximate the solution at the midpoint. The derivative at the midpoint\n", + "is then used as the slope of the straight-line approximation." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Second-Order Runge-Kutta Methods\n", + "\n", + "As was shown in Lab 2, the local error in the forward Euler method is proportional to $h^2$. In other words, the forward Euler method has an accuracy which is *first order* in $h$.\n", + "\n", + "The advantage of the midpoint method is that the extra derivative information at the midpoint results in the first order error term cancelling out, making the method *second order* accurate. This can be shown by a Taylor expansion of equation\n", + "[eq:midpoint](#eq:midpoint)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-24T03:40:10.198195Z", + "start_time": "2022-01-24T03:40:10.179554Z" + } + }, + "source": [ + "### ProblemMidpoint\n", + "\n", + "Even though the midpoint method is second-order\n", + "accurate, it may still be less accurate than the forward Euler method.\n", + "In the demo below, compare the accuracy of the two methods on the\n", + "initial value problem \n", + "\n", + "
eq:linexp
\n", + "\\begin{equation}\n", + "\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1\n", + "\\end{equation}\n", + "\n", + "which has the exact\n", + "solution \n", + "\\begin{equation}\n", + "y(t) = t + e^{-t}\n", + "\\end{equation}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "1. Why is it possible that the midpoint method may be less accurate\n", + " than the forward Euler method, even though it is a higher order\n", + " method?\n", + "\n", + "2. Based on the numerical solutions of [eq:linexp](#eq:linexp), which method\n", + " appears more accurate?\n", + "\n", + "3. Cut the stepsize in half and check the error at a given time. Repeat\n", + " a couple of more times. How does the error drop relative to the\n", + " change in stepsize?\n", + "\n", + "4. How do the numerical solutions compare to $y(t) = t + e^{-t}$ when\n", + " you change the initial time? Why?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "_Note: in the original lab code (below) there is a bug in the code, such that the euler method is being initialized at each timestep with the previous value from the midpoint method, NOT the previous value from the euler method!_" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# original demo - with bug\n", + "import context\n", + "from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41\n", + "import numpy as np\n", + "from matplotlib import pyplot as plt\n", + "\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.25,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "y=coeff.yinitial\n", + "ye.append(coeff.yinitial)\n", + "ym.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=eulerinter41(coeff,y,timeVec[i-1])\n", + " ye.append(ynew) \n", + " ynew=midpointinter41(coeff,y,timeVec[i-1])\n", + " ym.append(ynew)\n", + " y=ynew\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig,theAx=plt.subplots(1,1)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,ye,'r-',label='euler')\n", + "l3=theAx.plot(timeVec,ym,'g-',label='midpoint')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.1');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "_Note: this bug has been fixed in the code below, by calling each method with the previous value from that method!_" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-24T03:40:30.477370Z", + "start_time": "2022-01-24T03:40:29.188538Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# original demo\n", + "import context\n", + "from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41\n", + "import numpy as np\n", + "from matplotlib import pyplot as plt\n", + "\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.25,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "y=coeff.yinitial\n", + "ye.append(coeff.yinitial)\n", + "ym.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=eulerinter41(coeff,ye[i-1],timeVec[i-1]) ## here we use ye[i-1] instead of 
y\n", + " ye.append(ynew)\n", + " \n", + " ynew=midpointinter41(coeff,ym[i-1],timeVec[i-1]) ## here we use ym[i-1] instead of y\n", + " ym.append(ynew)\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig,theAx=plt.subplots(1,1)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,ye,'r-',label='euler')\n", + "l3=theAx.plot(timeVec,ym,'g-',label='midpoint')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.1');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In general, an *explicit* 2-stage Runge-Kutta method can be\n", + "written as\n", + "\n", + "\n", + "\n", + "
eq:explicitrk2
\n", + "\n", + "\\begin{align}\n", + "k_1 =& h f(y_n,t_n)\\\\\n", + "k_2 =& h f(y_n+b_{21}k_1, t_n+a_2h) \\\\\n", + "y_{n+1} =& y_n + c_1k_1 +c_2k_2\n", + "\\end{align}" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The scheme is said to be *explicit* since a given stage does not depend *implicitly* on itself, as in the backward Euler method, or on a later stage.\n", + "\n", + "Other explicit second-order schemes can be derived by comparing the formula [eq: explicitrk2](#eq:explicitrk2) to the second-order Taylor method and matching terms to determine the coefficients $a_2$, $b_{21}$, $c_1$ and $c_2$.\n", + "\n", + "See [Appendix midpoint](#app_midpoint) for the derivation of the midpoint method." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### The Runge-Kutta Tableau \n", + "\n", + "A general s-stage Runge-Kutta method can be written as,\n", + "\n", + "$$\n", + "\\begin{array}{l}\n", + " k_i = h f(y_n+ {\\displaystyle \\sum_{j=1}^{s} } b_{ij}k_j, t_n+a_ih), \n", + " \\;\\;\\; i=1,..., s\\\\\n", + " y_{n+1} = y_n + {\\displaystyle \\sum_{j=1}^{s}} c_jk_j \n", + "\\end{array}\n", + "$$\n", + "\n", + "\n", + "\n", + "\n", + "An *explicit* Runge-Kutta method has $b_{ij}=0$ for\n", + "$i\\leq j$, i.e. a given stage $k_i$ does not depend on itself or a later\n", + "stage $k_j$.\n", + "\n", + "The coefficients can be expressed in a tabular form known as the\n", + "Runge-Kutta tableau. \n", + "\n", + "$$\n", + "\\begin{array}{|c|c|cccc|c|} \\hline\n", + "i & a_i &{b_{ij}} & & && c_i \\\\ \\hline\n", + "1 & a_1 & b_{11} & b_{12} & ... & b_{1s} & c_1\\\\\n", + "2 & a_2 & b_{21} & b_{22} & ... & b_{2s} & c_2\\\\ \n", + "\\vdots & \\vdots & \\vdots & \\vdots & & \\vdots & \\vdots\\\\\n", + "s &a_s & b_{s1} & b_{s2} & ... & b_{ss} & c_s\\\\\\hline\n", + "{j=} & & 1 \\ 2 & ... & s & \\\\ \\hline\n", + "\\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "An explicit scheme will be strictly lower-triangular.\n", + "\n", + "For example, a general 2-stage Runge-Kutta method, \n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n+b_{11}k_1+b_{12}k_2,t_n+a_1h)\\\\\n", + " k_2 = h f(y_n+b_{21}k_1+b_{22}k_2, t_n+a_2h)\\\\\n", + " y_{n+1} = y_n + c_1k_1 +c_2k_2\n", + " \\end{array}\n", + "$$\n", + " \n", + " \n", + " has the coefficients,\n", + "\n", + "$$\n", + "\\begin{array}{|c|c|cc|c|} \\hline\n", + "i & a_i & {b_{ij}} & & c_i \\\\ \\hline\n", + "1 & a_1 & b_{11} & b_{12} & c_1\\\\\n", + "2 & a_2 & b_{21} & b_{22} & c_2\\\\ \\hline\n", + "{j=} & & 1 & 2 & \\\\ \\hline\n", + "\\end{array}\n", + "$$\n", + "\n", + "\n", + "\n", + "In particular, the midpoint method is given by the tableau,\n", + "\n", + "$$\n", + "\\begin{array}{|c|c|cc|c|} \\hline\n", + "i & a_i & {b_{ij}} & & c_i \\\\ \\hline\n", + "1 & 0 & 0 & 0 & 0\\\\\n", + "2 & \\frac{1}{2} & \\frac{1}{2} & 0 & 1\\\\ \\hline\n", + "{j=} & & 1 & 2 & \\\\ \\hline\n", + "\\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemTableau \n", + "\n", + "Write out the tableau for\n", + "\n", + "1. [Heun’s/Ralston method](#eq:heuns):\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{2}{3}k_1, t_n+\\frac{2}{3}h)\\\\\n", + " y_{n+1} = y_n + \\frac{1}{4}k_1 + \\frac{3}{4}k_2\n", + " \\end{array}\n", + "$$\n", + "\n", + "3. 
the fourth-order Runge-Kutta method ([eq:rk4](#eq:rk4)) (discussed further in the\n", + "    next section):\n", + "$$\n", + "  \\begin{array}{l}\n", + "  k_1 = h f(y_n,t_n)\\\\\n", + "  k_2 = h f(y_n+\\frac{k_1}{2}, t_n+\\frac{h}{2})\\\\\n", + "  k_3 = h f(y_n+\\frac{k_2}{2}, t_n+\\frac{h}{2})\\\\\n", + "  k_4 = h f(y_n+k_3, t_n+h)\\\\\n", + "  y_{n+1} = y_n + \\frac{k_1}{6}+ \\frac{k_2}{3}+ \\frac{k_3}{3} + \\frac{k_4}{6}\n", + "  \\end{array}\n", + "$$\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Explicit Fourth-Order Runge-Kutta Method \n", + "\n", + "\n", + "\n", + "\n", + "Explicit Runge-Kutta methods are popular as each stage can be calculated\n", + "with one function evaluation. In contrast, implicit Runge-Kutta methods\n", + "usually involve solving a non-linear system of equations in order to\n", + "evaluate the stages. As a result, explicit schemes are much less\n", + "expensive to implement than implicit schemes.\n", + "\n", + "However, there are cases in which implicit schemes are necessary and\n", + "that is in the case of *stiff* sets of equations. See\n", + "section 16.6 of Press et al. for a discussion. For these labs, we will\n", + "focus on non-stiff equations and on explicit Runge-Kutta methods.\n", + "\n", + "The higher-order Runge-Kutta methods can be derived in a manner similar\n", + "to the midpoint formula. An s-stage method is compared to a Taylor\n", + "method and the terms are matched up to the desired order.\n", + "\n", + "Methods of order $M > 4$ require $M+1$ or $M+2$ function evaluations or\n", + "stages, in the case of explicit Runge-Kutta methods. As a result,\n", + "fourth-order Runge-Kutta methods have achieved great popularity over the\n", + "years as they require only four function evaluations per step. In\n", + "particular, there is the classic fourth-order Runge-Kutta formula:\n", + "\n", + "
eq:rk4
\n", + "\n", + "$$\n", + "  \\begin{array}{l}\n", + "  k_1 = h f(y_n,t_n)\\\\\n", + "  k_2 = h f(y_n+\\frac{k_1}{2}, t_n+\\frac{h}{2})\\\\\n", + "  k_3 = h f(y_n+\\frac{k_2}{2}, t_n+\\frac{h}{2})\\\\\n", + "  k_4 = h f(y_n+k_3, t_n+h)\\\\\n", + "  y_{n+1} = y_n + \\frac{k_1}{6}+ \\frac{k_2}{3}+ \\frac{k_3}{3} + \\frac{k_4}{6}\n", + "  \\end{array}\n", + "$$\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemRK4\n", + " \n", + "In the cell below, compare solutions to the test\n", + "problem\n", + "\n", + "
eq:test
\n", + "$$\n", + "\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1\n", + "$$ \n", + "\n", + "generated with the\n", + "fourth-order Runge-Kutta method to solutions generated by the forward\n", + "Euler and midpoint methods.\n", + "\n", + "1. Based on the numerical solutions of ([eq:test](#eq:test)), which of the\n", + " three methods appears more accurate?\n", + "\n", + "2. Again determine how the error changes relative to the change in\n", + " stepsize, as the stepsize is halved." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:33:20.675166Z", + "start_time": "2022-01-27T04:33:16.987738Z" + } + }, + "outputs": [], + "source": [ + "from numlabs.lab4.lab4_functions import initinter41,eulerinter41,midpointinter41,\\\n", + " rk4ODEinter41\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.05,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "yrk=[]\n", + "y=coeff.yinitial\n", + "ye.append(coeff.yinitial)\n", + "ym.append(coeff.yinitial)\n", + "yrk.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=eulerinter41(coeff,ye[i-1],timeVec[i-1])\n", + " ye.append(ynew)\n", + " ynew=midpointinter41(coeff,ym[i-1],timeVec[i-1])\n", + " ym.append(ynew)\n", + " ynew=rk4ODEinter41(coeff,yrk[i-1],timeVec[i-1])\n", + " yrk.append(ynew)\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig=plt.figure(0)\n", + "theFig.clf()\n", + "theAx=theFig.add_subplot(111)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,ye,'r-',label='euler')\n", + "l3=theAx.plot(timeVec,ym,'g-',label='midpoint')\n", + "l4=theAx.plot(timeVec,yrk,'m-',label='rk4')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.2');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Embedded Runge-Kutta Methods: Estimate of the Truncation Error \n", + "\n", + "\n", + "\n", + "It is possible to find two methods of different order which share the\n", + "same stages $k_i$ and differ only in the way they are combined, i.e. the\n", + "coefficients $c_i$. 
For example, the original so-called embedded\n", + "Runge-Kutta scheme was discovered by Fehlberg and consisted of a\n", + "fourth-order scheme and fifth-order scheme which shared the same six\n", + "stages.\n", + "\n", + "In general, a fourth-order scheme embedded in a fifth-order scheme will\n", + "share the stages \n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+b_{21}k_1, t_n+a_2h)\\\\\n", + " \\vdots \\\\\n", + " k_6 = h f(y_n+b_{61}k_1+ ...+b_{66}k_6, t_n+a_6h)\n", + " \\end{array}\n", + "$$\n", + "\n", + " \n", + "\n", + "\n", + "\n", + "\n", + "The fifth-order formula takes the step: \n", + "\n", + "$$\n", + " y_{n+1}=y_n+c_1k_1+c_2k_2+c_3k_3+c_4k_4+c_5k_5+c_6k_6\n", + "$$ \n", + "\n", + "while the\n", + "embedded fourth-order formula takes a different step:\n", + "\n", + "\n", + "\n", + "$$\n", + " y_{n+1}^*=y_n+c^*_1k_1+c^*_2k_2+c^*_3k_3+c^*_4k_4+c^*_5k_5+c^*_6k_6\n", + "$$\n", + "\n", + "If we now take the difference between the two numerical estimates, we\n", + "get an estimate $\\Delta_{\\rm spec}$ of the truncation error for the\n", + "fourth-order method, \n", + "\n", + "\n", + "$$\n", + " \\Delta_{\\rm est}(i)=y_{n+1}(i) - y_{n+1}^{*}(i) \n", + "= \\sum^{6}_{i=1}(c_i-c_{i}^{*})k_i\n", + "$$ \n", + "\n", + "This will prove to be very useful\n", + "in the next lab where we provide the Runge-Kutta algorithms with\n", + "adaptive stepsize control. The error estimate is used as a guide to an\n", + "appropriate choice of stepsize.\n", + "\n", + "An example of an embedded Runge-Kutta scheme was found by Cash and Karp\n", + "and has the tableau: " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "$$\n", + "\\begin{array}{|c|c|cccccc|c|c|} \\hline\n", + "i & a_i & {b_{ij}} & & & & & & c_i & c^*_i \\\\ \\hline\n", + "1 & & & & & & & & \\frac{37}{378} & \\frac{2825}{27648}\\\\\n", + "2 & \\frac{1}{5} & \\frac{1}{5}& & & & & & 0 &0 \\\\\n", + "3 & \\frac{3}{10} & \\frac{3}{40}&\\frac{9}{40}& & & & &\\frac{250}{621}&\\frac{18575}{48384}\\\\\n", + "4 & \\frac{3}{5}&\\frac{3}{10}& -\\frac{9}{10}&\\frac{6}{5}& & & &\\frac{125}{594}& \\frac{13525}{55296}\\\\\n", + "5 & 1 & -\\frac{11}{54}&\\frac{5}{2}&-\\frac{70}{27}&\\frac{35}{27}& & & 0 & \\frac{277}{14336}\\\\\n", + "6 & \\frac{7}{8}& \\frac{1631}{55296}& \\frac{175}{512}&\\frac{575}{13824}& \\frac{44275}{110592}& \\frac{253}{4096}& & \\frac{512}{1771} & \\frac{1}{4}\\\\\\hline\n", + "{j=} & & 1 & 2 & 3 & 4 & 5 & 6 & & \\\\ \\hline\n", + "\\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemEmbedded\n", + "\n", + "Though the error estimate is for the embedded\n", + "fourth-order Runge-Kutta method, the fifth-order method can be used in\n", + "practice for calculating the solution, the assumption being the\n", + "fifth-order method should be at least as accurate as the fourth-order\n", + "method. In the demo below, compare solutions of the test problem\n", + "[eq:test2](#eq:test2]) \n", + "\n", + "
eq:test2
\n", + "$$\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1$$\n", + "\n", + "generated by the fifth-order method with solutions generated by the\n", + "standard fourth-order Runge-Kutta method. Which method\n", + "is more accurate? For each method, quantitatively analyse how the error decreases as you halve\n", + "the stepsizes - discuss whether this is the expected behaviour given what you know about the order of the methods?\n", + "\n", + "Optional extra part: adapt the rkckODEinter41 code to return both the 5th order (as it currently does) AND the embedded 4th order scheme. Compare the accuracy of the embedded 4th order scheme to the standard 4th order scheme. " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import numpy as np\n", + "from matplotlib import pyplot as plt\n", + "\n", + "from numlabs.lab4.lab4_functions import initinter41,rk4ODEinter41,rkckODEinter41\n", + "initialVals={'yinitial': 1,'t_beg':0.,'t_end':1.,'dt':0.2,'c1':-1.,'c2':1.,'c3':1.}\n", + "coeff = initinter41(initialVals)\n", + "\n", + "timeVec=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "nsteps=len(timeVec)\n", + "ye=[]\n", + "ym=[]\n", + "yrk=[]\n", + "yrkck=[]\n", + "y1=coeff.yinitial\n", + "y2=coeff.yinitial\n", + "yrk.append(coeff.yinitial)\n", + "yrkck.append(coeff.yinitial)\n", + "for i in np.arange(1,nsteps):\n", + " ynew=rk4ODEinter41(coeff,y1,timeVec[i-1])\n", + " yrk.append(ynew)\n", + " y1=ynew\n", + " ynew=rkckODEinter41(coeff,y2,timeVec[i-1])\n", + " yrkck.append(ynew)\n", + " y2=ynew\n", + "analytic=timeVec + np.exp(-timeVec)\n", + "theFig,theAx=plt.subplots(1,1)\n", + "l1=theAx.plot(timeVec,analytic,'b-',label='analytic')\n", + "theAx.set_xlabel('time (seconds)')\n", + "l2=theAx.plot(timeVec,yrkck,'g-',label='rkck')\n", + "l3=theAx.plot(timeVec,yrk,'m-',label='rk')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('interactive 4.3');" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Python: moving from a notebook to a library\n", + "\n", + "### Managing problem configurations\n", + "\n", + "So far we've hardcoded our initialVars file into a cell. We need a strategy for saving\n", + "this information into a file that we can keep track of using git, and modify for\n", + "various runs. In python the fundamental data type is the dictionary. It's very\n", + "flexible, but that comes at a cost -- there are other data structures that are better\n", + "suited to storing this type of information.\n", + "\n", + "##### Mutable vs. immutable data types\n", + "\n", + "Python dictionaries and lists are **mutable**, which means they can be modified after they\n", + "are created. Python tuples, on the other hand, are **immutable** -- there is no way of changing\n", + "them without creating a copy. Why does this matter? One reason is efficiency and safety, an\n", + "immutable object is easier to reason about. Another reason is that immutable objects are **hashable**,\n", + "that is, they can be turned into a unique string that can be guaranteed to represent that exact\n", + "instance of the datatype. Hashable data structures can be used as dictionary keys, mutable\n", + "data structures can't. 
Here's an illustration -- this cell works:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:38:47.145511Z", + "start_time": "2022-01-27T04:38:47.141507Z" + } + }, + "outputs": [], + "source": [ + "test_dict=dict()\n", + "the_key = (0,1,2,3) # this is a tuple, i.e. immutable - it uses curved parentheses ()\n", + "test_dict[the_key]=5\n", + "print(test_dict)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "this cell fails:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:38:45.037053Z", + "start_time": "2022-01-27T04:38:45.016737Z" + } + }, + "outputs": [], + "source": [ + "import traceback, sys\n", + "test_dict=dict()\n", + "the_key = [0,1,2,3] # this is a list - it uses square parentheses []\n", + "try:\n", + " test_dict[the_key]=5\n", + "except TypeError as e:\n", + " tb = sys.exc_info()\n", + " traceback.print_exception(*tb)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Named tuples\n", + "\n", + "One particular tuple flavor that bridges the gap between tuples and dictionaries\n", + "is the [namedtuple](https://docs.python.org/3/library/collections.html#collections.namedtuple).\n", + "It has the ability to look up values by attribute instead of numerical index (unlike\n", + "a tuple), but it's immutable and so can be used as a dictionary key. The cell\n", + "below show how to convert from a dictionary to a namedtuple for our case:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:44:58.906667Z", + "start_time": "2022-01-27T04:44:58.893321Z" + } + }, + "outputs": [], + "source": [ + "from collections import namedtuple\n", + "initialDict={'yinitial': 1,'t_beg':0.,'t_end':1.,\n", + " 'dt':0.2,'c1':-1.,'c2':1.,'c3':1.}\n", + "inittup=namedtuple('inittup','dt c1 c2 c3 t_beg t_end yinitial')\n", + "initialCond=inittup(**initialDict)\n", + "print(f\"values are {initialCond.c1} and {initialCond.yinitial}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Comment on the cell above:\n", + "\n", + "1) `inittup=namedtuple('inittup','dt c1 c2 c3 t_beg t_end yinitial')`\n", + " creats a new data type with a type name (inittup) and properties\n", + " (the attributes we wlll need like dt, c1 etc.)\n", + " \n", + "2) `initialCond=inittup(**initialDict)`\n", + " uses \"keyword expansion\" via the \"doublesplat\" operator `**` to expand\n", + " the initialDict into a set of key=value pairs for the inittup constructor\n", + " which makes an instance of our new datatype called initialCond\n", + " \n", + "3) we access these readonly members of the instance using attributes like this:\n", + " `newc1 = initialCond.c1`\n", + "\n", + " \n", + "Note the other big benefit for namedtuples -- \"initialCond.c1\" is self-documenting,\n", + "you don't have to explain that the tuple value initialCond[3] holds c1,\n", + "and you never have to worry about changes to the order of the tuple changing the \n", + "results of your code." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "### Saving named tuples to a file\n", + "\n", + "One drawback to namedtuples is that there's no one annointed way to **serialize** them\n", + "i.e. we are in charge of trying to figure out how to write our namedtuple out\n", + "to a file for future use. 
Contrast this with lists, strings, and scalar numbers and\n", + "dictionaries, which all have a builtin **json** representation in text form.\n", + "\n", + "So here's how to turn our named tuple back into a dictionary:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:45:01.144390Z", + "start_time": "2022-01-27T04:45:01.138679Z" + } + }, + "outputs": [], + "source": [ + "#\n", + "# make the named tuple a dictionary\n", + "#\n", + "initialDict = initialCond._asdict()\n", + "print(initialDict)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Why does `_asdict` start with an underscore? It's to keep the fundamental\n", + "methods and attributes of the namedtuple class separate from the attributes\n", + "we added when we created the new `inittup` class. For more information, see\n", + "the [collections docs](https://docs.python.org/3/library/collections.html#module-collections)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:45:02.889690Z", + "start_time": "2022-01-27T04:45:02.883888Z" + } + }, + "outputs": [], + "source": [ + "outputDict = dict(initialconds = initialDict)\n", + "import json\n", + "outputDict['history'] = 'written Jan. 28, 2020'\n", + "outputDict['plot_title'] = 'simple damped oscillator run 1'\n", + "with open('run1.json', 'w') as jsonout:\n", + " json.dump(outputDict,jsonout,indent=4)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "After running this cell, you should see the following [json output](https://en.wikipedia.org/wiki/JSON) in the file `run1.json`:\n", + "\n", + "```\n", + "{\n", + " \"initialconds\": {\n", + " \"dt\": 0.2,\n", + " \"c1\": -1.0,\n", + " \"c2\": 1.0,\n", + " \"c3\": 1.0,\n", + " \"t_beg\": 0.0,\n", + " \"t_end\": 1.0,\n", + " \"yinitial\": 1\n", + " },\n", + " \"history\": \"written Jan. 26, 2022\",\n", + " \"plot_title\": \"simple damped oscillator run 1\"\n", + "}\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Reading a json file back into python\n", + "\n", + "To recover your conditions read the file back in as a dictionary:\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T04:48:15.111091Z", + "start_time": "2022-01-27T04:48:15.087970Z" + } + }, + "outputs": [], + "source": [ + "with open(\"run1.json\",'r') as jsonin:\n", + " inputDict = json.load(jsonin)\n", + "initial_conds = inittup(**inputDict['initialconds'])\n", + "print(f\"values are {initial_conds.c1} and {initial_conds.yinitial}\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Passing a derivative function to an integrator\n", + "\n", + "In python, functions are first class objects, which means you can pass them around like any\n", + "other datatype, no need to get function handles as in matlab or Fortran. The integrators\n", + "in [do_example.py](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab4/example/do_example.py)\n", + "have been written to accept a derivative function of the form:\n", + "\n", + "```python\n", + " def derivs4(coeff, y):\n", + "```\n", + "\n", + "i.e. as long as the derivative can be written in terms of coefficients\n", + "and the previous value of y, the integrator will move the ode ahead one\n", + "timestep. 
If we wanted coefficients that were a function of time, we would\n", + "need to also include those functions the coeff namedtuple, and add keep track of the\n", + "timestep through the integration.\n", + "\n", + "Here's an example using forward euler to integrate the harmonic oscillator\n", + "\n", + "Note that you can also run this from the terminal by doing:\n", + "\n", + "```\n", + "cd numlabs/lab4/example\n", + "python do_example.py\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-01-27T05:01:11.730184Z", + "start_time": "2022-01-27T05:01:11.592970Z" + } + }, + "outputs": [], + "source": [ + "import json\n", + "from numlabs.lab4.example.do_example import get_init, euler4\n", + "#\n", + "# specify the derivs function\n", + "#\n", + "def derivs(coeff, y):\n", + " f=np.empty_like(y) #create a 2 element vector to hold the derivative\n", + " f[0]=y[1]\n", + " f[1]= -1.*coeff.c1*y[1] - coeff.c2*y[0]\n", + " return f\n", + "#\n", + "# first make sure we have an input file in this directory\n", + "#\n", + "\n", + "coeff=get_init()\n", + "\n", + "#\n", + "# integrate and save the result in savedata\n", + "#\n", + "time=np.arange(coeff.t_beg,coeff.t_end,coeff.dt)\n", + "y=coeff.yinitial\n", + "nsteps=len(time)\n", + "savedata=np.empty([nsteps],np.float64)\n", + "\n", + "for i in range(nsteps):\n", + " y=euler4(coeff,y,derivs)\n", + " savedata[i]=y[0]\n", + "\n", + "theFig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "theAx.plot(time,savedata,'o-')\n", + "theAx.set_title(coeff.plot_title)\n", + "theAx.set_xlabel('time (seconds)')\n", + "theAx.set_ylabel('y0');\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### ProblemCodingA\n", + "\n", + "As set up above, do_example.py\n", + "solves the damped, harmonic oscillator with the (unstable) forward Euler method.\n", + "\n", + "1. Write a new routine that solves the harmonic oscilator using [Heun’s method](#eq:heuns)\n", + " along the lines of the routines in [lab4_functions.py](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab4/lab4_functions.py)\n", + "\n", + "\n", + "### ProblemCodingB\n", + "\n", + "1. Now solve the following test equation by both the midpoint and\n", + " Heun’s method and compare. \n", + " \n", + " $$f(y,t) = t - y + 1.0$$ \n", + " \n", + " Choose two sets of initial conditions and determine if there is any difference\n", + " between the two methods when applied to this problem. Should there be? Explain by\n", + " analyzing the steps that each method is taking.\n", + " \n", + "\n", + "### ProblemCodingC\n", + "\n", + "1. Solve the Newtonian cooling equation of lab 1 by any of the above\n", + " methods. \n", + "\n", + "2. Add cells that do this and also generate some plots, showing your along with the parameter values and\n", + " initial conditions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes \n", + "\n", + "\n", + "\n", + "\n", + "\n", + "\n", + "### Note on the Derivation of the Second-Order Runge-Kutta Methods\n", + "\n", + "A general s-stage Runge-Kutta method can be written as,\n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_i = h f(y_n+ {\\displaystyle \\sum_{j=1}^{s} } b_{ij}k_j, t_n+a_ih), \n", + " \\;\\;\\; i=1,..., s\\\\\n", + " y_{n+1} = y_n + {\\displaystyle \\sum_{j=1}^{s}} c_jk_j \n", + "\\end{array}\n", + "$$ \n", + " \n", + " where\n", + "\n", + "${\\displaystyle \\sum_{j=1}^{s} } b_{ij} = a_i$." 
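As a concrete illustration of the general formula above, here is a minimal sketch (not part of the original lab code) of a single explicit Runge-Kutta step driven by an arbitrary tableau; the helper name `explicit_rk_step` and the `f(y, t)` call signature are illustrative choices rather than anything defined in `lab4_functions`.

```python
def explicit_rk_step(f, y, t, h, a, b, c):
    """One explicit Runge-Kutta step for dy/dt = f(y, t).

    a, b, c follow the tableau notation above; b must be strictly
    lower triangular for the method to be explicit.
    """
    s = len(c)
    k = [None] * s
    for i in range(s):
        # only stages j < i contribute for an explicit method
        y_stage = y + sum(b[i][j] * k[j] for j in range(i))
        k[i] = h * f(y_stage, t + a[i] * h)
    return y + sum(c[j] * k[j] for j in range(s))

# the midpoint tableau from the text: a = (0, 1/2), b21 = 1/2, c = (0, 1)
a = [0.0, 0.5]
b = [[0.0, 0.0], [0.5, 0.0]]
c = [0.0, 1.0]
f = lambda y, t: -y + t + 1.0          # the test problem used in this lab
print(explicit_rk_step(f, 1.0, 0.0, 0.25, a, b, c))   # one midpoint step from y(0)=1
```

Swapping in other tableaus (for example the ones you write out in ProblemTableau) gives the corresponding methods.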
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "In particular, an *explicit* 2-stage Runge-Kutta method can be written as, \n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+ak_1, t_n+ah)\\\\\n", + " y_{n+1} = y_n + c_1k_1 +c_2k_2\n", + " \\end{array}\n", + "$$\n", + "\n", + "where \n", + " \n", + "$b_{21} = a_2 \\equiv a$. \n", + " \n", + "So we want to know what values of $a$, $c_1$ and $c_2$ leads to a second-order method, i.e. a method with an error proportional to $h^3$.\n", + "\n", + "To find out, we compare the method against a second-order Taylor expansion,\n", + "\n", + "\n", + "\n", + "$$\n", + " y(t_n+h) = y(t_n) + hy^\\prime(t_n) + \\frac{h^2}{2}y^{\\prime \\prime}(t_n)\n", + " + O(h^3)\n", + "$$\n", + "\n", + "So for the $y_{n+1}$ to be second-order accurate, it must match the Taylor method. In other words, $c_1k_1 +c_2k_2$ must match $hy^\\prime(t_n) + \\frac{h^2}{2}y^{\\prime \\prime}$. To do this, we need to express $k_1$ and $k_2$ in terms of derivatives of $y$ at time $t_n$.\n", + "\n", + "First note, $k_1 = hf(y_n, t_n) = hy^\\prime(t_n)$.\n", + "\n", + "Next, we can expand $k_2$ about $(y_n.t_n)$, \n", + "\n", + "\n", + "\n", + "$$\n", + "k_2 = hf(y_n+ak_1, t_n+ah) = h(f + haf_t + haf_yy^\\prime + O(h^2))\n", + "$$\n", + "\n", + "\n", + "\n", + "However, we can write $y^{\\prime \\prime}$ as, \n", + "\n", + "$$\n", + " y^{\\prime \\prime} = \\frac{df}{dt} = f_t + f_yy^\\prime\n", + "$$ \n", + "This allows us\n", + "to rewrite $k_2$ in terms of $y^{\\prime \\prime}$,\n", + "\n", + "$$k_2 = h(y^\\prime + hay^{\\prime \\prime}+ O(h^2))$$\n", + "\n", + "Substituting these expressions for $k_i$ back into the Runge-Kutta formula gives us,\n", + "$$y_{n+1} = y_n + c_1hy^\\prime +c_2h(y^\\prime + hay^{\\prime \\prime})$$\n", + "or \n", + "$$y_{n+1} = y_n + h(c_1 +c_2)y^\\prime + h^2(c_2a)y^{\\prime \\prime}$$\n", + "\n", + "If we compare this against the second-order Taylor method, we see that we need, \n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " c_1 + c_2 = 1\\\\\n", + " a c_2 = \\frac{1}{2}\n", + " \\end{array}\n", + "$$\n", + " \n", + "for the Runge-Kutta method to be\n", + "second-order.\n", + "\n", + "
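As a quick numerical check of this result (again a sketch, separate from the lab code), any member of the family with $c_2 = 1/(2a)$ and $c_1 = 1 - c_2$ should show the error at a fixed time dropping by roughly a factor of four each time the stepsize is halved when applied to the test problem:

```python
import numpy as np

def rk2_step(f, y, t, h, a):
    # any member of the one-parameter family: c2 = 1/(2a), c1 = 1 - c2
    c2 = 1.0 / (2.0 * a)
    c1 = 1.0 - c2
    k1 = h * f(y, t)
    k2 = h * f(y + a * k1, t + a * h)
    return y + c1 * k1 + c2 * k2

f = lambda y, t: -y + t + 1.0                  # test problem, exact solution t + exp(-t)
for h in (0.1, 0.05, 0.025):
    y, t = 1.0, 0.0
    for _ in range(round(1.0 / h)):
        y = rk2_step(f, y, t, h, a=0.5)        # a = 1/2 is the midpoint method
        t += h
    print(f"h = {h:5.3f}   error at t=1: {abs(y - (1.0 + np.exp(-1.0))):.2e}")
```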
\n", + "If we choose $a = 1/2$, this implies $c_2 = 1$ and $c_1=0$. This gives us the midpoint method.\n", + "\n", + "However, note that other choices are possible. In fact, we have a *one-parameter family* of second-order methods. For example if we choose, $a=1$ and $c_1=c_2=\\frac{1}{2}$, we get the *modified Euler method*,\n", + "\n", + "\n", + "\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+k_1, t_n+h)\\\\\n", + " y_{n+1} = y_n + \\frac{1}{2}(k_1 +k_2)\n", + " \\end{array}\n", + "$$\n", + " \n", + "while the choice\n", + "$a=\\frac{2}{3}$, $c_1=\\frac{1}{4}$ and $c_2=\\frac{3}{4}$, gives us\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
Heun's Method (also referred to as Ralston's method)
\n", + " Note: you may find a different definition of Heun's Method depending on the textbook you are reading)\n", + "\n", + "$$\n", + " \\begin{array}{l}\n", + " k_1 = h f(y_n,t_n)\\\\\n", + " k_2 = h f(y_n+\\frac{2}{3}k_1, t_n+\\frac{2}{3}h)\\\\\n", + " y_{n+1} = y_n + \\frac{1}{4}k_1 + \\frac{3}{4}k_2\n", + " \\end{array}\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 3 + }, + "source": [ + "## Glossary \n", + "\n", + "\n", + "- **driver** A routine that calls the other routines to solve the\n", + " problem.\n", + "\n", + "- **embedded Runge-Kutta methods**: Two Runge-Kutta\n", + " methods that share the same stages. The difference between the solutions\n", + " give an estimate of the local truncation error.\n", + "\n", + "- **explicit** In an explicit numerical scheme, the calculation of the solution at a given\n", + " step or stage does not depend on the value of the solution at that step\n", + " or on a later step or stage.\n", + " \n", + "- **fourth-order Runge-Kutta method** A popular fourth-order, four-stage, explicit Runge-Kutta\n", + " method.\n", + "\n", + "- **implicit**: In an implicit numerical scheme, the\n", + " calculation of the solution at a given step or stage does depend on the\n", + " value of the solution at that step or on a later step or stage. Such\n", + " methods are usually more expensive than implicit schemes but are better\n", + " for handling stiff ODEs.\n", + "\n", + "- **midpoint method** : A two-stage,\n", + " second-order Runge-Kutta method.\n", + "\n", + "- **stages**: The approximations\n", + " to the derivative made in a Runge-Kutta method between the start and end\n", + " of a step.\n", + "\n", + "- **tableau** The tableau for a Runge-Kutta method\n", + " organizes the coefficients for the method in tabular form.\n", + "\n" + ] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": true, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "165px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab5/01-lab5.html b/notebooks/lab5/01-lab5.html new file mode 100644 index 0000000..16f246d --- /dev/null +++ b/notebooks/lab5/01-lab5.html @@ -0,0 +1,1309 @@ + + + + + + + + Lab 5: Daisyworld — Numeric course 22.1 
documentation + + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Lab 5: Daisyworld

+
+

List of Problems

+

Problem Constant Growth: Daisyworld with a constant growth rate

+

Problem Coupling: Daisyworld of neutral daisies coupled to the temperature

+

Problem Conduction: Daisyworld steady states and the effect of the conduction parameter R

+

Problem Initial: Daisyworld steady states and initial conditions

+

Problem Temperature: Add temperature retrieval code

+

Problem Estimate: Compare the error estimate to the true error

+

Problem Adaptive: Adaptive Timestep Code

+

Problem Predators: Adding predators to Daisyworld

+
+
+

Assignment

+

See the canvas site for which problems you should hand in. Your answers should all be within a jupyter notebook. Use subheadings to organise your notebook by question, and markdown cells to describe what you’ve done and to answer the questions. You will be asked to upload: 1. a pdf of your jupyter notebook answering all questions; 2. the jupyter notebook itself (ipynb file) - if you want to import your own module code, include that with the notebook in a zipfile.

+
+
+

Objectives

+

In this lab, you will use the tools you have learnt in the previous labs to explore a simple environmental model, Daisyworld, with the help of a Runge-Kutta method with adaptive stepsize control.

+

The goal is for you to gain some experience using a Runge-Kutta integrator and to see the advantages of applying error control to the algorithm. As well, you will discover the possible insights one can get from studying numerical solutions of a physical model.

+

In particular you will be able to:

+
    +
  • explain how the daisies affect the climate in the daisy world model

  • +
  • define an adaptive step-size model

  • +
  • explain the reasons why an adaptive step-size model may be faster for a given accuracy

  • +
  • explain why white daisies (alone) can survive at a higher solar constant than black daisies

  • +
  • define hysteresis

  • +
+
+
+

Readings

+

There is no required reading for this lab, beyond the contents of the lab itself. However, if you would like additional background on any of the following topics, then refer to the sections indicated below:

+
    +
  • Daisy World:

    +
      +
The original article by Watson and Lovelock, 1983, which derives the equations used here.

    • +
    • A 2008 Reviews of Geophysics article by Wood et al. with more recent developments (species competition, etc.)

    • +
    +
  • +
  • Runge-Kutta Methods with Adaptive Stepsize Control:

    +
      +
    • Newman, Section 8.4

    • +
    • Press, et al. Section 16.2: these are equations we implemented in Python, scanned pdf here

    • +
    • Burden & Faires Section 5.5

    • +
    +
  • +
+
+
+

Introduction

+

It is obvious that life on earth is highly sensitive to the planet’s atmospheric and climatic conditions. What is less obvious, but of great interest, is the role biology plays in the sensitivity of the climate. This is dramatically illustrated by the concern over the possible contribution to global warming by the loss of the rain forests in Brazil, and shifts from forests to croplands over many regions of the Earth.

+

The fact that each may affect the other implies that the climate and life on earth are interlocked in a complex series of feedbacks, i.e. the climate affects the biosphere which, when altered, then changes the climate and so on. A fascinating question arises as to whether or not this would eventually lead to a stable climate. This scenario is exploited to its fullest in the Gaia hypothesis which postulates that the biosphere, atmosphere, ocean and land are all part of some totality, dubbed Gaia, which is essentially an elaborate feedback system which optimizes the conditions of life here on earth.

+

It would be hopeless to attempt to mathematically model such a large, complex system. What can be done instead is to construct a ’toy model’ of the system in which much of the complexity has been stripped away and only some of the relevant characteristics retained. The resulting system will then be tractable but, unfortunately, may bear little connection with the original physical system.

+

Daisyworld is such a model. Life on Daisyworld has been reduced to just two species of daisies of different colors. The only environmental condition that affects the daisy growth rate is temperature. The temperature in turn is modified by the varying amounts of radiation absorbed by the daisies.

+

Daisyworld is obviously a gross simplification of the real earth. However, it does retain the central feature of interest: a feedback loop between the climate and life on the planet. Since the equations governing the system will be correspondingly simplified, it will allow us to investigate under what conditions, if any, can equilibrium be reached. The hope is that this will then gain us some insight into how life on the real earth may lead to a stable (or unstable) climate.

+
+
+

The Daisyworld Model

+

Daisyworld is populated by two types of daisies, one darker than bare ground and the other lighter than bare ground. As with life on earth, the daisies will not grow at extreme temperatures and will have optimum growth at moderate temperatures.

+

The darker, ’black’ daisies absorb more radiation than the lighter, ’white’ daisies. If the black daisy population grows and spreads over more area, an increased amount of solar energy will be absorbed, which will ultimately raise the temperature of the planet. Conversely, an increase in the white daisy population will result in more radiation being reflected away, lowering the planet’s temperature.

+

The question to be answered is:

+

Under what conditions, if any, will the daisy population and temperature reach equilibrium?

+
+

The Daisy Population

+

The daisy population will be modeled along the lines of standard population ecology models where the net growth depends upon the current population. For example, the simplest model assumes the rate of growth is proportional to the population, i.e.

+
+\[\frac{dA_w}{dt} = k_w A_w\]
+
+\[\frac{dA_b}{dt} = k_b A_b\]
+

where \(A_w\) and \(A_b\) are fractions of the total planetary area covered by the white and black daisies, respectively, and \(k_i\), \(i=w,b\), are the white and black daisy growth rates per unit time, respectively. If we assume the growth rates \(k_i\) are (positive) constants, we would have exponential growth like the bunny rabbits of lore.

+

We can make the model more realistic by letting the daisy birthrate depend on the amount of available land, i.e.

+
+\[k_i = \beta_i x\]
+

where \(\beta_i\) are the white and black daisy growth rates per unit time and area, respectively, and \(x\) is the fractional area of free fertile ground not colonized by either species. We can also add a daisy death rate per unit time, \(\chi\), to get

+

\(\textbf{eq: constantgrowth}\)

+
+\[\frac{dA_w}{dt} = A_w ( \beta_w x - \chi)\]
+
+\[\frac{dA_b}{dt} = A_b ( \beta_b x - \chi)\]
+

However, even these small modifications are non-trivial mathematically as the available fertile land is given by,

+
+\[x = 1 - A_w - A_b\]
+

(assuming all the land mass is fertile) which makes the equations non-linear.
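Before introducing the lab's Integrator class, the structure of eq: constantgrowth can be seen in a minimal standalone sketch (using scipy.integrate.solve_ivp and illustrative growth and death rates, not the values set in fixed_growth.yaml):

```python
# minimal sketch of eq: constantgrowth with illustrative parameter values
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

beta_w, beta_b, chi = 0.8, 0.7, 0.3        # illustrative growth and death rates

def derivs(t, A):
    A_w, A_b = A
    x = 1.0 - A_w - A_b                    # fraction of bare fertile ground
    return [A_w * (beta_w * x - chi), A_b * (beta_b * x - chi)]

sol = solve_ivp(derivs, (0.0, 50.0), [0.01, 0.01], dense_output=True)
t = np.linspace(0.0, 50.0, 200)
A_w, A_b = sol.sol(t)
plt.plot(t, A_w, label='white daisies')
plt.plot(t, A_b, label='black daisies')
plt.xlabel('time')
plt.ylabel('fractional coverage')
plt.legend();
```

Because the growth term is multiplied by the shrinking bare-ground fraction \(x\) while the death rate stays constant, the coverage saturates rather than growing exponentially.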

+
+
+

Problem constant growth

+

Note that though the daisy growth rate per unit time depends on the amount of available fertile land, it is not otherwise coupled to the environment (i.e. \(\beta_i\) is not a function of temperature). Making the growth a function of bare ground, however, keeps the daisy population bounded and the daisy population will eventually reach some steady state. The next python cell has a script that runs a fixed timestep Runge-Kutta routine that calculates area coverage of white and black daisies for fixed growth rates \(\beta_w\) and \(\beta_b\). Try changing these growth rates (specified in the derivs5 routine) and the initial white and black concentrations (specified in the fixed_growth.yaml file discussed next). To hand in: plot graphs to illustrate how these changes have affected the fractional coverage of black and white daisies over time compared to the original. Comment on the changes that you see.

+
    +
  1. For a given set of growth rates try various (non-zero) initial daisy populations.

  2. +
  3. For a given set of initial conditions try various growth rates. In particular, try rates that are both greater than and less than the death rate.

  4. +
  5. Can you determine when non-zero steady states are achieved? Explain. Connect what you see here with the discussion of hysteresis towards the end of this lab - what determines which steady state is reached?

  6. +
+
+

Running the constant growth rate demo

+

In the appendix we discuss the design of the integrator class and the adaptive Runge-Kutta routine. For this demo, we need to be able to change variables in the configuration file. For this assignment problem you are asked to:

+
    +
  1. Change the initial white and black daisy concentrations by changing these lines in the fixed_growth.yaml input file (you can find this file in this lab directory):

    +
    initvars:
    +   whiteconc: 0.2
    +   blackconc: 0.7
    +
    +
    +
  2. +
  3. Change the white and black daisy growth rates by editing the variables beta_w and beta_b in the derivs5 routine in the next cell

  4. +
+

The Integrator class contains two different timeloops, both of which use the embedded Runge-Kutta Cash-Karp code given in Lab 4 and coded here as rkckODE5. The simplest way to loop through the timesteps is just to call the integrator with a specified set of times. This is done in timeloop5fixed. Below we will describe how to use the error estimates returned by rkckODE5 to tune the size of the timesteps, which is done in timeloop5Err.
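Schematically, an error-controlled time loop does something like the sketch below; this is an illustration of the idea only, not the timeloop5Err implementation in lab5_funs.py, and the helper step_with_error stands in for a single embedded (e.g. Cash-Karp) step that returns both the new value and a scalar error estimate.

```python
# schematic error-controlled time loop -- an illustration only,
# NOT the timeloop5Err code in lab5_funs.py
def adaptive_loop(step_with_error, y, t, t_end, dt, tol=1.0e-6, order=4):
    """step_with_error(y, t, dt) is assumed to return (ynew, err), where
    err is a scalar estimate of the truncation error for that step."""
    times, yvals = [t], [y]
    while t < t_end:
        dt = min(dt, t_end - t)
        ynew, err = step_with_error(y, t, dt)
        if abs(err) <= tol:                  # error small enough: accept the step
            t, y = t + dt, ynew
            times.append(t)
            yvals.append(y)
        # shrink or grow the next step based on the error estimate
        dt = 0.9 * dt * (tol / max(abs(err), 1.0e-15)) ** (1.0 / (order + 1))
    return times, yvals
```

Steps whose estimated error exceeds the tolerance are redone with a smaller dt, while well-resolved stretches of the solution are crossed with larger steps, which is why an adaptive loop can be faster for a given accuracy.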

+
+
[ ]:
+
+
+
#
+# 4.1  integrate constant growth rates with fixed timesteps
+#
+import context
+from numlabs.lab5.lab5_funs import Integrator
+from collections import namedtuple
+import numpy as np
+import matplotlib.pyplot as plt
+
+
+class Integ51(Integrator):
+    def set_yinit(self):
+        #
+        # read in 'albedo_white chi S0 L albedo_black R albedo_ground'
+        #
+        uservars = namedtuple('uservars', self.config['uservars'].keys())
+        self.uservars = uservars(**self.config['uservars'])
+        #
+        # read in 'whiteconc blackconc'
+        #
+        initvars = namedtuple('initvars', self.config['initvars'].keys())
+        self.initvars = initvars(**self.config['initvars'])
+        self.yinit = np.array(
+            [self.initvars.whiteconc, self.initvars.blackconc])
+        self.nvars = len(self.yinit)
+        return None
+
+    #
+    # Construct an Integ51 class by first initializing the parent
+    # Integrator class (via super().__init__).  Then do the extra
+    # initialization in the set_yinit function
+    #
+    def __init__(self, coeffFileName):
+        super().__init__(coeffFileName)
+        self.set_yinit()
+
+    def derivs5(self, y, t):
+        """y[0]=fraction white daisies
+           y[1]=fraction black daisies
+
+           Constant growth rates for white
+           and black daisies beta_w and beta_b
+
+           returns dy/dt
+        """
+        user = self.uservars
+        #
+        # bare ground
+        #
+        x = 1.0 - y[0] - y[1]
+
+        # growth rates don't depend on temperature
+        beta_b = 0.7  # growth rate for black daisies
+        beta_w = 0.7  # growth rate for white daisies
+
+        # create a 1 x 2 element vector to hold the derivative
+        f = np.empty([self.nvars], 'float')
+        f[0] = y[0] * (beta_w * x - user.chi)
+        f[1] = y[1] * (beta_b * x - user.chi)
+        return f
+
+
+theSolver = Integ51('fixed_growth.yaml')
+timeVals, yVals, errorList = theSolver.timeloop5fixed()
+
+plt.close('all')
+thefig, theAx = plt.subplots(1, 1)
+theLines = theAx.plot(timeVals, yVals)
+theLines[0].set_marker('+')
+theLines[1].set_linestyle('--')
+theLines[1].set_color('k')
+theLines[1].set_marker('*')
+theAx.set_title('lab 5 interactive 1  constant growth rate')
+theAx.set_xlabel('time')
+theAx.set_ylabel('fractional coverage')
+theAx.legend(theLines, ('white daisies', 'black daisies'), loc='best')
+
+thefig, theAx = plt.subplots(1, 1)
+theLines = theAx.plot(timeVals, errorList)
+theLines[0].set_marker('+')
+theLines[1].set_linestyle('--')
+theLines[1].set_color('k')
+theLines[1].set_marker('*')
+theAx.set_title('lab 5 interactive 1 errors')
+theAx.set_xlabel('time')
+theAx.set_ylabel('error')
+out = theAx.legend(theLines, ('white errors', 'black errors'), loc='best')
+
+
+
+
+
+
+

The Daisy Growth Rate - Coupling to the Environment

+

We now want to couple the Daisy growth rate to the climate, which we do by making the growth rate a function of the local temperature \(T_i\),

+
+\[\beta_i = \beta_i(T_i)\]
+

The growth rate should drop to zero at extreme temperatures and be optimal at moderate temperatures. In Daisyworld this means the daisy population ceases to grow if the temperature drops below \(5^o\)C or goes above \(40^o\)C. The simplest model for the growth rate would then be a parabolic function of temperature, peaking at \(22.5^o\)C:

+
+\[\beta_i = 1.0 - 0.003265(295.5 K - T_i)^2\]
+

where the \(i\) subscript denotes the type of daisy: grey (i=y), white (i=w) or black (i=b). (We’re reserving \(\alpha_g\) for the bare ground albedo)

+

[Figure: the parabolic daisy growth rate \(\beta_i\) as a function of local temperature \(T_i\)]
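A few lines of python reproduce this growth-rate curve (a sketch using the constants exactly as written in the equation above; note that the code cells later in this lab centre the parabola at 295.0 K rather than 295.5 K):

```python
# sketch: the parabolic growth-rate curve, zero outside roughly 5-40 deg C
import numpy as np
import matplotlib.pyplot as plt

T = np.linspace(270.0, 320.0, 300)            # local temperature (K)
beta = 1.0 - 0.003265 * (295.5 - T) ** 2
beta[(T < 278.0) | (T > 313.0)] = 0.0         # no growth outside the window
plt.plot(T, beta)
plt.xlabel('local temperature $T_i$ (K)')
plt.ylabel(r'growth rate $\beta_i$');
```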

+

Before specifying the local temperature, and its dependence on the daisy population, first consider the emission temperature \(T_e\), which is the mean temperature of the planet,

+
+\[T^4_e = L \frac{S_0}{4\sigma}(1-\alpha_p)\]
+

where \(S_0\) is a solar flux density constant, \(L\) is the fraction of \(S_0\) received at Daisyworld, and \(\alpha_p\) is the planetary albedo. The greater the planetary albedo \(\alpha_p\), i.e. the more solar radiation the planet reflects, the lower the emission temperature.

+

Mathematical note: The emission temperature is derived on the assumption that the planet is in global energy balance and is behaving as a blackbody radiator. See the appendix for more information.
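As a quick sanity check on the emission-temperature formula, here is a short calculation with illustrative Earth-like numbers (these are not the Daisyworld values set in the yaml files):

```python
# rough sanity check of T_e with Earth-like numbers (not the lab's yaml values)
sigma = 5.67e-8        # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1368.0            # solar constant, W m^-2
L = 1.0                # fraction of S0 received
albedo_p = 0.3         # planetary albedo
Te = (L * S0 * (1.0 - albedo_p) / (4.0 * sigma)) ** 0.25
print(f"emission temperature = {Te:.1f} K")   # about 255 K
```

Raising the albedo in this snippet lowers \(T_e\), which is exactly the lever the daisies pull in the full model.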

+
+
+

Problem Coupling

+

Consider daisies with the same albedo as the planet, i.e. ’grey’ or neutral daisies, as specified in derivs5 routine below.

+
    +
  1. For the current value of L (0.2) in the file coupling.yaml, the final daisy steady state is zero. Why is it zero?

  2. +
  3. Find a value of L which leads to a non-zero steady state.

  4. +
  5. What happens to the emission temperature as L is varied? Make a plot of \(L\) vs. \(T_E\) for 10-15 values of \(L\). To do this, I overrode the value of L from the init file by passing a new value into the IntegCoupling constructor (see Appendix A). This allowed me to put

    +
    theSolver = IntegCoupling("coupling.yaml",newL)
    +timeVals, yVals, errorList = theSolver.timeloop5fixed()
    +
    +
    +

    inside a loop that varied the L value and saved the steady state concentration for plotting (one way to set up such a constructor is sketched just after this list)

    +
  6. +
+
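One way to set up such a constructor is the following sketch; it assumes the IntegCoupling class defined in the next cell and the timeloop5fixed interface used elsewhere in this lab, and the subclass name IntegCouplingL is purely illustrative (the lab text instead modified the IntegCoupling constructor itself):

```python
# sketch: vary L without editing coupling.yaml
# (assumes the IntegCoupling class defined below; IntegCouplingL is hypothetical)
import numpy as np
import matplotlib.pyplot as plt

class IntegCouplingL(IntegCoupling):
    def __init__(self, coeffFileName, newL):
        super().__init__(coeffFileName)
        # uservars is a namedtuple, so build a copy with the new L value
        self.uservars = self.uservars._replace(L=newL)

Lvals = np.linspace(0.2, 1.4, 13)
steady = []
for newL in Lvals:
    theSolver = IntegCouplingL('coupling.yaml', newL)
    timeVals, yVals, errorList = theSolver.timeloop5fixed()
    steady.append(yVals[-1])          # final (steady-state) grey daisy fraction
plt.plot(Lvals, steady, 'o-')
plt.xlabel('L')
plt.ylabel('steady-state grey daisy fraction');
```

From the saved steady states you can also recompute the emission temperature for each L to make the \(L\) vs. \(T_e\) plot asked for above.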

After reading the next section on the local temperature,

+
    +
  1. Do you see any difference between the daisy temperature and emission temperature? Plot both and explain. (Hint: I modified derivs5 to save these variables to self so I could compare their values at the end of the simulation. You could also override timeloop5fixed to do the same thing at each timestep.)

  2. +
  3. How (i.e. through what mechanism) does the makeup of the global daisy population affect the local temperature?

  4. +
+
+
[ ]:
+
+
+
# define functions
+import matplotlib.pyplot as plt
+
+
+class IntegCoupling(Integrator):
+    """rewrite the init and derivs5 methods to
+       work with a single (grey) daisy
+    """
+    def set_yinit(self):
+        #
+        # read in 'albedo_grey chi S0 L  R albedo_ground'
+        #
+        uservars = namedtuple('uservars', self.config['uservars'].keys())
+        self.uservars = uservars(**self.config['uservars'])
+        #
+        # read in 'greyconc'
+        #
+        initvars = namedtuple('initvars', self.config['initvars'].keys())
+        self.initvars = initvars(**self.config['initvars'])
+        self.yinit = np.array([self.initvars.greyconc])
+        self.nvars = len(self.yinit)
+        return None
+
+    def __init__(self, coeffFileName):
+        super().__init__(coeffFileName)
+        self.set_yinit()
+
+    def derivs5(self, y, t):
+        """
+           Make the growth rate depend on the ground temperature
+           using the quadratic function of temperature
+
+           y[0]=fraction grey daisies
+           t = time
+           returns f[0] = dy/dt
+        """
+        sigma = 5.67e-8  # Stefan-Boltzmann constant W/m^2/K^4
+        user = self.uservars
+        x = 1.0 - y[0]
+        albedo_p = x * user.albedo_ground + y[0] * user.albedo_grey
+        Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma
+        eta = user.R * user.L * user.S0 / (4.0 * sigma)
+        temp_y = (eta * (albedo_p - user.albedo_grey) + Te_4)**0.25
+        if (temp_y >= 277.5 and temp_y <= 312.5):
+            beta_y = 1.0 - 0.003265 * (295.0 - temp_y)**2.0
+        else:
+            beta_y = 0.0
+
+        # create a 1 x 1 element vector to hold the derivative
+        f = np.empty([self.nvars], np.float64)
+        f[0] = y[0] * (beta_y * x - user.chi)
+        return f
+
+
+
+
+
[ ]:
+
+
+
# Solve and plot for grey daisies
+import matplotlib.pyplot as plt
+
+theSolver = IntegCoupling('coupling.yaml')
+timeVals, yVals, errorList = theSolver.timeloop5fixed()
+
+thefig, theAx = plt.subplots(1, 1)
+theLines = theAx.plot(timeVals, yVals)
+theAx.set_title('lab 5: interactive 2 Coupling with grey daisies')
+theAx.set_xlabel('time')
+theAx.set_ylabel('fractional coverage')
+out = theAx.legend(theLines, ('grey daisies', ), loc='best')
+
+
+
+
+
+
+

The Local Temperature - Dependence on Surface Heat Conductivity

+

If we now allow for black and white daisies, the local temperature will differ according to the albedo of the region. The regions with white daisies will tend to be cooler than the ground and the regions with black daisies will tend to be hotter. To determine what the temperature is locally, we need to decide how readily the planet surface thermalises, i.e. how easily large-scale weather patterns redistribute the surface heat.

+
    +
  • If there is perfect heat ‘conduction’ between the different regions of the planet then the local temperature will equal the mean temperature given by the emission temperature \(T_e\).

    +
    +\[T^4_i \equiv T^4_e = L \frac{S_0}{4\sigma}(1-\alpha_p)\]
    +
  • +
  • If there is no conduction, or perfect ‘insulation’, between regions then the temperature will be the emission temperature due to the albedo of the local region.

    +
    +\[T^4_i= L \frac{S_0}{4\sigma}(1-\alpha_i)\]
    +

    where \(\alpha_i\) indicates either \(\alpha_g\), \(\alpha_w\) or \(\alpha_b\).

    +
  • +
+

The local temperature can be chosen to lie between these two values,

+
+\[T^4_i = R L \frac{S_0}{4\sigma}(\alpha_p-\alpha_i) + T^4_e\]
+

where \(R\) is a parameter that interpolates between the two extreme cases i.e. \(R=0\) means perfect conduction and \(R=1\) implies perfect insulation between regions.
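A small numerical illustration of how \(R\) interpolates between these two limits (a sketch with illustrative, Daisyworld-like albedo and flux values rather than the yaml settings):

```python
# sketch: local white/black daisy temperatures for a few conduction values R
sigma = 5.67e-8                          # Stefan-Boltzmann constant
S0, L = 1368.0, 1.0                      # illustrative flux values
albedo_ground, albedo_white, albedo_black = 0.5, 0.75, 0.25
albedo_p = albedo_ground                 # bare planet for this illustration

Te_4 = L * S0 * (1.0 - albedo_p) / (4.0 * sigma)
for R in (0.0, 0.2, 1.0):
    eta = R * L * S0 / (4.0 * sigma)
    temp_w = (eta * (albedo_p - albedo_white) + Te_4) ** 0.25
    temp_b = (eta * (albedo_p - albedo_black) + Te_4) ** 0.25
    print(f"R = {R:.1f}:  T_white = {temp_w:6.1f} K,  T_black = {temp_b:6.1f} K")
```

At \(R=0\) both temperatures collapse to the emission temperature, while at \(R=1\) the white patches are colder and the black patches warmer than \(T_e\), as the two limiting formulas above imply.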

+
+

Problem Conduction

+

The conduction parameter R will determine the temperature differential between the bare ground and the regions with black or white daisies. The code in the next cell specifies the derivatives for this situation, removing the feedback between the daisies and the planetary albedo but introducing conduction. Use it to investigate these two questions:

+
    +
  1. Change the value of R and observe the effects on the daisy and emission temperature.

  2. +
  3. What are the effects on the daisy growth rate and the final steady states?

  4. +
+
+
[ ]:
+
+
+
# 5.2  keep the albedo constant at alpha_p and vary the conductivity R
+#
+from numlabs.lab5.lab5_funs import Integrator
+
+
+class Integ53(Integrator):
+    def set_yinit(self):
+        #
+        # read in 'albedo_white chi S0 L albedo_black R albedo_ground'
+        #
+        uservars = namedtuple('uservars', self.config['uservars'].keys())
+        self.uservars = uservars(**self.config['uservars'])
+        #
+        # read in 'whiteconc blackconc'
+        #
+        initvars = namedtuple('initvars', self.config['initvars'].keys())
+        self.initvars = initvars(**self.config['initvars'])
+        self.yinit = np.array(
+            [self.initvars.whiteconc, self.initvars.blackconc])
+        self.nvars = len(self.yinit)
+        return None
+
+    def __init__(self, coeffFileName):
+        super().__init__(coeffFileName)
+        self.set_yinit()
+
+    def derivs5(self, y, t):
+        """y[0]=fraction white daisies
+           y[1]=fraction black daisies
+           no feedback between daisies and
+           albedo_p (set to ground albedo)
+        """
+        sigma = 5.67e-8  # Stefan-Boltzmann constant W/m^2/K^4
+        user = self.uservars
+        x = 1.0 - y[0] - y[1]
+        #
+        # hard wire the albedo to that of the ground -- no daisy feedback
+        #
+        albedo_p = user.albedo_ground
+        Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma
+        eta = user.R * user.L * user.S0 / (4.0 * sigma)
+        temp_b = (eta * (albedo_p - user.albedo_black) + Te_4)**0.25
+        temp_w = (eta * (albedo_p - user.albedo_white) + Te_4)**0.25
+
+        if (temp_b >= 277.5 and temp_b <= 312.5):
+            beta_b = 1.0 - 0.003265 * (295.0 - temp_b)**2.0
+        else:
+            beta_b = 0.0
+
+        if (temp_w >= 277.5 and temp_w <= 312.5):
+            beta_w = 1.0 - 0.003265 * (295.0 - temp_w)**2.0
+        else:
+            beta_w = 0.0
+
+        # create a 1 x 2 element vector to hold the derivative
+        f = np.empty([self.nvars], 'float')
+        f[0] = y[0] * (beta_w * x - user.chi)
+        f[1] = y[1] * (beta_b * x - user.chi)
+        return f
+
+
+
+
+
[ ]:
+
+
+
# Solve and plot conduction problem
+import matplotlib.pyplot as plt
+
+theSolver = Integ53('conduction.yaml')
+timeVals, yVals, errorList = theSolver.timeloop5fixed()
+
+plt.close('all')
+thefig, theAx = plt.subplots(1, 1)
+theLines = theAx.plot(timeVals, yVals)
+theLines[1].set_linestyle('--')
+theLines[1].set_color('k')
+theAx.set_title('lab 5 interactive 3 -- conduction problem')
+theAx.set_xlabel('time')
+theAx.set_ylabel('fractional coverage')
+out = theAx.legend(theLines, ('white daisies', 'black daisies'),
+                   loc='center right')
+
+
+
+
+
+
+

The Feedback Loop - Feedback Through the Planetary Albedo

+

The amount of solar radiation the planet reflects will depend on the daisy population since the white daisies will reflect more radiation than the bare ground and the black daisies will reflect less. So a reasonable estimate of the planetary albedo \(\alpha_p\) is an average of the albedos of the white and black daisies and the bare ground, weighted by the amount of area covered by each, i.e.

+
+\[\alpha_p = A_w\alpha_w + A_b\alpha_b + A_g\alpha_g\]
+

A greater population of white daisies will tend to increase planetary albedo and decrease the emission temperature, as is apparent from the emission temperature equation above, while the reverse is true for the black daisies.

+

To summarize: The daisy population is controlled by its growth rate \(\beta_i\) which is a function of the local temperature \(T_i\)

+
+\[\beta_i = 1.0 - 0.003265(295.5 K -T_i)^2\]
+

If the conductivity \(R\) is nonzero, the local temperature is a function of planetary albedo \(\alpha_p\)

+
+\[T_i = \left[ R L \frac{S_0}{4\sigma}(\alpha_p-\alpha_i) + T^4_e \right]^{\frac{1}{4}}\]
+

which is determined by the daisy population.

+
    +
  • Physically, this provides the feedback from the daisy population back to the temperature, completing the loop between the daisies and temperature.

  • +
  • Mathematically, this introduces a rather nasty non-linearity into the equations which, as pointed out in lab 1, usually makes it difficult, if not impossible, to obtain exact analytic solutions.

  • +
+
+

Problem Initial

+

The feedback means a stable daisy population (a steady state) and the environmental conditions are in a delicate balance. The code below produces a steady state which arises from a given initial daisy population that starts with only white daisies.

+
    +
  1. Add a relatively small (5%, blackconc = 0.05) initial fraction of black daisies to the value in initial.yaml and see what effect this has on the temperature and final daisy populations. Do you still have a final non-zero daisy population?

  2. +
  3. Set the initial black daisy population to 0.05. Attempt to adjust the initial white daisy population to obtain a non-zero steady state. What value of initial white daisy population gives you a non-zero steady state for blackconc=0.05? Do you have to increase or decrease the initial fraction? What is your explanation for this behavior?

  4. +
  5. Experiment with other initial fractions of daisies and look for non-zero steady states. Describe and explain your results. Connect what you see here with the discussion of hysteresis towards the end of this lab - what determines which steady state is reached?

  6. +
+
+
[ ]:
+
+
+
# functions for problem initial
+from numlabs.lab5.lab5_funs import Integrator
+
+
+class Integ54(Integrator):
+    def set_yinit(self):
+        #
+        # read in 'albedo_white chi S0 L albedo_black R albedo_ground'
+        #
+        uservars = namedtuple('uservars', self.config['uservars'].keys())
+        self.uservars = uservars(**self.config['uservars'])
+        #
+        # read in 'whiteconc blackconc'
+        #
+        initvars = namedtuple('initvars', self.config['initvars'].keys())
+        self.initvars = initvars(**self.config['initvars'])
+        self.yinit = np.array(
+            [self.initvars.whiteconc, self.initvars.blackconc])
+        self.nvars = len(self.yinit)
+        return None
+
+    def __init__(self, coeff_file_name):
+        super().__init__(coeff_file_name)
+        self.set_yinit()
+
+    def find_temp(self, yvals):
+        """
+            Calculate the temperatures over the white and black daisies
+            and the planetary equilibrium temperature given the daisy fractions
+
+            input:  yvals -- array of dimension [2] with the white [0] and black [1]
+                    daisy fraction
+            output:  white temperature (K), black temperature (K), equilibrium temperature (K)
+        """
+        sigma = 5.67e-8  # Stefan Boltzman constant W/m^2/K^4
+        user = self.uservars
+        bare = 1.0 - yvals[0] - yvals[1]
+        albedo_p = bare * user.albedo_ground + \
+            yvals[0] * user.albedo_white + yvals[1] * user.albedo_black
+        Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma
+        temp_e = Te_4**0.25
+        eta = user.R * user.L * user.S0 / (4.0 * sigma)
+        temp_b = (eta * (albedo_p - user.albedo_black) + Te_4)**0.25
+        temp_w = (eta * (albedo_p - user.albedo_white) + Te_4)**0.25
+        return (temp_w, temp_b, temp_e)
+
+    def derivs5(self, y, t):
+        """y[0]=fraction white daisies
+           y[1]=fraction black daisies
+           albedo_p is now computed from the daisy
+           fractions via find_temp (feedback included)
+        """
+        temp_w, temp_b, temp_e = self.find_temp(y)
+
+        if (temp_b >= 277.5 and temp_b <= 312.5):
+            beta_b = 1.0 - 0.003265 * (295.0 - temp_b)**2.0
+        else:
+            beta_b = 0.0
+
+        if (temp_w >= 277.5 and temp_w <= 312.5):
+            beta_w = 1.0 - 0.003265 * (295.0 - temp_w)**2.0
+        else:
+            beta_w = 0.0
+        user = self.uservars
+        bare = 1.0 - y[0] - y[1]
+        # create a 1 x 2 element vector to hold the derivative
+        f = np.empty_like(y)
+        f[0] = y[0] * (beta_w * bare - user.chi)
+        f[1] = y[1] * (beta_b * bare - user.chi)
+        return f
+
+
+
+
+
[ ]:
+
+
+
# Solve and plot for problem initial
+import matplotlib.pyplot as plt
+import pandas as pd
+
+theSolver = Integ54('initial.yaml')
+timevals, yvals, errorlist = theSolver.timeloop5fixed()
+daisies = pd.DataFrame(yvals, columns=['white', 'black'])
+
+thefig, theAx = plt.subplots(1, 1)
+line1, = theAx.plot(timevals, daisies['white'])
+line2, = theAx.plot(timevals, daisies['black'])
+line1.set(linestyle='--', color='r', label='white')
+line2.set(linestyle='--', color='k', label='black')
+theAx.set_title('lab 5 interactive 4, initial conditions')
+theAx.set_xlabel('time')
+theAx.set_ylabel('fractional coverage')
+out = theAx.legend(loc='center right')
+
+
+
+
+
+

Problem Temperature

+

The code above in Problem Initial adds a new method, find_temp, that takes the white/black daisy fractions and calculates local and planetary temperatures.

+
    +
  1. Override timeloop5fixed so that it saves these three temperatures, plus the daisy growth rates, to new variables in the Integ54 instance

  2. +
  3. Make plots of (temp_w, temp_b) and (beta_w, beta_b) vs. time for a case with non-zero equilibrium concentrations of both black and white daisies

  4. +
+
+
+
+

Adaptive Stepsize in Runge-Kutta

+
+

Why Adaptive Stepsize?

+

As a rule of thumb, accuracy increases in Runge-Kutta methods as stepsize decreases. At the same time, the number of function evaluations performed increases. This tradeoff between accuracy of the solution and computational cost always exists, but in the ODE solution algorithms presented earlier it often appears to be unnecessarily large. To see this, consider the solution to a problem in two different time intervals - in the first time interval, the solution is close to steady, whereas in the second one it changes quickly. For acceptable accuracy with a non-adaptive method the step size will have to be adjusted so that the approximate solution is close to the actual solution in the second interval. The stepsize will be fairly small, so that the approximate solution is able to follow the changes in the solution here. However, as there is no change in stepsize throughout the solution process, the same step size will be applied to approximate the solution in the first time interval, where clearly a much larger stepsize would suffice to achieve the same accuracy. Thus, in a region where the solution behaves nicely a lot of function evaluations are wasted because the stepsize is chosen in accordance with the most quickly changing part of the solution.

+

The way to address this problem is the use of adaptive stepsize control. This class of algorithms adjusts the stepsize taken in a time interval according to the properties of the solution in that interval, making it useful for producing a solution that has a given accuracy in the minimum number of steps.

+
+
+

Designing Adaptive Stepsize Control

+

Now that the goal is clear, the question remains of how to close in on it. As mentioned above, an adaptive algorithm is usually asked to solve a problem to a desired accuracy. To be able to adjust the stepsize in Runge-Kutta the algorithm must therefore calculate some estimate of how far its solution deviates from the actual solution. If with its initial stepsize this estimate is already well within the desired accuracy, the algorithm can proceed with a larger stepsize. If the error estimate is larger than the desired accuracy, the algorithm decreases the stepsize at this point and attempts to take a smaller step. Calculating this error estimate will always increase the amount of work done at a step compared to non-adaptive methods. Thus, the remaining problem is to devise a method of calculating this error estimate that is both inexpensive and accurate.

+
+
+

Error Estimate by Step Doubling

+

The first and simplest approach to arriving at an error estimate is simply to take every step twice: the second time, the step is divided into two half-steps, producing a different estimate of the solution. The difference between the two solutions can be used to produce an estimate of the truncation error for this step.

+

How expensive is this method of estimating the error? A single step of fourth-order Runge-Kutta always takes four function evaluations. Since the second pass takes the step as two half-steps, it requires 8 evaluations. However, the first function evaluation is identical for both passes, so the overall cost of one step with step doubling is \(12 - 1 = 11\) function evaluations. This should be compared to taking two normal half-steps, as this corresponds to the overall accuracy achieved. So we are looking at 3 extra function evaluations per step, or an increase in computational cost by a factor of \(1.375\).

+

Step doubling works in practice, but the next section presents a slicker way of arriving at an error estimate that is less computationally expensive; it is the one commonly used today.

+
+
+

Error Estimate using Embedded Runge-Kutta

+

Another way of estimating the truncation error of a step is due to the existence of the special fifth-order Runge-Kutta methods discussed earlier. These methods use six function evaluations which can be recombined to produce a fourth-order method. Again, the difference between the fifth and the fourth order solution is used to calculate an estimate of the truncation error. Obviously this method requires fewer function evaluations than step doubling, as the two estimates use the same evaluation points. Originally this method was found by Fehlberg, and later Cash and Karp produced the set of constants presented earlier that produce an efficient and accurate error estimate.

+
+
+

Problem Estimate

+

In the demo below, compare the error estimate to the true error for the initial value problem from Lab 4,

+
+\[\frac{dy}{dt} = -y +t +1, \;\;\;\; y(0) =1\]
+

which has the exact solution

+
+\[y(t) = t + e^{-t}\]
+
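
(A quick check that this is indeed the solution: \(dy/dt = 1 - e^{-t}\), while \(-y + t + 1 = -(t + e^{-t}) + t + 1 = 1 - e^{-t}\), and \(y(0) = 0 + e^{0} = 1\).)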
    +
  1. Play with the time step and final time, attempting small changes at first. How reasonable is the error estimate?

  2. +
  3. Keep decreasing the time step. Does the error estimate diverge from the computed error? Why?

  4. +
  5. Keep increasing the time step. Does the error estimate diverge? What is happening with the numerical solution?

  6. +
+
+
[ ]:
+
+
+
# Functions for problem estimate
+from numlabs.lab5.lab5_funs import Integrator
+
+
+class Integ55(Integrator):
+    def set_yinit(self):
+        #
+        # read in 'c1 c2 c3'
+        #
+        uservars = namedtuple('uservars', self.config['uservars'].keys())
+        self.uservars = uservars(**self.config['uservars'])
+        #
+        # read in initial yinit
+        #
+        initvars = namedtuple('initvars', self.config['initvars'].keys())
+        self.initvars = initvars(**self.config['initvars'])
+        self.yinit = np.array([self.initvars.yinit])
+        self.nvars = len(self.yinit)
+        return None
+
+    def __init__(self, coeff_file_name):
+        super().__init__(coeff_file_name)
+        self.set_yinit()
+
+    def derivs5(self, y, theTime):
+        """
+           y[0] = the dependent variable of the test ODE dy/dt = c1*y + c2*t + c3
+        """
+        user = self.uservars
+        f = np.empty_like(self.yinit)
+        f[0] = user.c1 * y[0] + user.c2 * theTime + user.c3
+        return f
+
+
+
+
+
[ ]:
+
+
+
# Solve and plot for problem estimate
+import matplotlib.pyplot as plt
+
+theSolver = Integ55('expon.yaml')
+
+timeVals, yVals, yErrors = theSolver.timeloop5Err()
+timeVals = np.array(timeVals)
+exact = timeVals + np.exp(-timeVals)
+yVals = np.array(yVals)
+yVals = yVals.squeeze()
+yErrors = np.array(yErrors)
+
+thefig, theAx = plt.subplots(1, 1)
+line1 = theAx.plot(timeVals, yVals, label='adapt')
+line2 = theAx.plot(timeVals, exact, 'r+', label='exact')
+theAx.set_title('lab 5 interactive 5')
+theAx.set_xlabel('time')
+theAx.set_ylabel('y value')
+theAx.legend(loc='center right')
+
+#
+# yVals came back as a list of length-1 arrays; the np.array(...).squeeze()
+# call above unpacked it into a plain 1-D array of numbers for plotting
+#
+
+thefig, theAx = plt.subplots(1, 1)
+realestError = yVals - exact
+actualErrorLine = theAx.plot(timeVals, realestError, label='actual error')
+estimatedErrorLine = theAx.plot(timeVals, yErrors, label='estimated error')
+theAx.legend(loc='best')
+
+timeVals, yVals, yErrors = theSolver.timeloop5fixed()
+
+np_yVals = np.array(yVals).squeeze()
+yErrors = np.array(yErrors)
+np_exact = timeVals + np.exp(-timeVals)
+
+thefig, theAx = plt.subplots(1, 1)
+line1 = theAx.plot(timeVals, np_yVals, label='fixed')
+line2 = theAx.plot(timeVals, np_exact, 'r+', label='exact')
+theAx.set_title('lab 5 interactive 5 -- fixed')
+theAx.set_xlabel('time')
+theAx.set_ylabel('y value')
+theAx.legend(loc='center right')
+
+thefig, theAx = plt.subplots(1, 1)
+realestError = np_yVals - np_exact
+actualErrorLine = theAx.plot(timeVals, realestError, label='actual error')
+estimatedErrorLine = theAx.plot(timeVals, yErrors, label='estimated error')
+theAx.legend(loc='best')
+theAx.set_title('lab 5 interactive 5 -- fixed errors')
+
+
+
+
+
+

Using Error to Adjust the Stepsize

+

Both step doubling and embedded methods leave us with the difference between two solutions of different order for the same step. A desired accuracy, \(\Delta_{des}\), is also provided. The way this accuracy is specified depends on the problem. It can be relative to the solution at step \(i\),

+
+\[\Delta_{des}(i) = RTOL\cdot |y(i)|\]
+

where \(RTOL\) is the relative tolerance desired. An absolute part should be added to this so that the desired accuracy does not become zero. There are more ways to adjust the error specification to the problem, but the overall goal of the algorithm always is to make \(\Delta_{est}(i)\), the estimated error for a step, satisfy

+
+\[|\Delta_{est}(i)|\leq\Delta_{des}(i)\]
+

Note also that for a system of ODEs \(\Delta_{des}\) is of course a vector, and it is wise to replace the componentwise comparison by a vector norm.

+

Note now that the calculated error term is \(O(h^{5})\) as it was found as an error estimate to fourth-order Runge-Kutta methods. This makes it possible to scale the stepsize as

+

eq:hnew

+
+\[h_{new} = h_{old}\left[\frac{\Delta_{des}}{\Delta_{est}}\right]^{1/5}\]
+

or, to give an example of the suggested use of vector norms above, the new stepsize is given by

+

eq:hnewnorm

+
+\[h_{new} = S\, h_{old}\left\{\left[\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\Delta_{est}(i)}{\Delta_{des}(i)}\right)^{2}\right]^{1/2}\right\}^{-1/5}\]
+

using the root-mean-square norm. \(S\) appears as a safety factor (\(0<S<1\)) to counteract the inaccuracy in the use of estimates.

+

The coefficients for the adaptive tolerances are set in the adaptvars section of adapt.yaml:

+
adaptvars:
+  dtpassmin: 0.1
+  dtfailmax: 0.5
+  dtfailmin: 0.1
+  s: 0.9
+  rtol: 1.0e-05
+  atol: 1.0e-05
+  maxsteps: 2000.0
+  maxfail: 60.0
+  dtpassmax: 5.0
+
+
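
These values can be inspected directly with the same yaml/namedtuple pattern used for uservars elsewhere in the lab (a sketch, assuming adapt.yaml sits in the working directory and has a top-level adaptvars section as shown above):

+
from collections import namedtuple
+import yaml
+
+with open('adapt.yaml', 'r') as f:
+    config = yaml.safe_load(f)
+
+adaptvars = namedtuple('adaptvars', config['adaptvars'].keys())(**config['adaptvars'])
+print(adaptvars.rtol, adaptvars.atol, adaptvars.s, adaptvars.maxsteps)
+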
+
+
[ ]:
+
+
+
# Solve and plot for adaptive timestep
+import matplotlib.pyplot as plt
+import pandas as pd
+
+theSolver = Integ54('adapt.yaml')
+timeVals, yVals, errorList = theSolver.timeloop5Err()
+
+yvals = pd.DataFrame.from_records(yVals, columns=['white', 'black'])
+
+thefig, theAx = plt.subplots(1, 1)
+
+points, = theAx.plot(timeVals, yvals['white'], '-b+', label='white daisies')
+points.set_markersize(12)
+theLine1, = theAx.plot(timeVals, yvals['black'], '--ko', label='black daisies')
+theAx.set_title('lab 5 interactive 6')
+theAx.set_xlabel('time')
+theAx.set_ylabel('fractional coverage')
+out = theAx.legend(loc='best')
+
+# timeVals,yVals,errorList=theSolver.timeloop5fixed()
+# whiteDaisies=[frac[0] for frac in yVals]
+
+
+
+
+
+
+

Coding Runge-Kutta Adaptive Stepsize Control

+

The Runge-Kutta code developed in Lab 4 solves the given ODE system with fixed timesteps. It is now necessary to exert adaptive timestep control over the solution. The Python code for this is given in these lines.

+

In principle, this is pretty simple:

+
    +
  1. As before, take a step specified by the Runge-Kutta algorithm.

  2. +
  3. Determine whether the estimated error lies within the user-specified tolerance.

  4. +
  5. If the error is too large, calculate the new stepsize using the equations above, e.g. \(h_{new} = S\, h_{old}\left\{\left[\frac{1}{N}\sum_{i=1}^{N}\left(\frac{\Delta_{est}(i)}{\Delta_{des}(i)}\right)^{2}\right]^{1/2}\right\}^{-1/5}\) and retake the step.

  6. +
+

This can be accomplished by writing a new timeloop5Err method which evaluates each Runge-Kutta step. This routine must now also return the estimate of the truncation error.

+

In practice, it is prudent to take a number of safeguards. This involves defining a number of variables that place limits on the change in stepsize:

+
    +
  • A safety factor (\(0<S<1\)) is used when a new step is calculated to further ensure that a small enough step is taken.

  • +
  • When a step fails, i.e. the error bound equation is not satisfied,

    +
      +
    • dtfailmin: The new step must change by some minimum factor.

    • +
    • dtfailmax: The step cannot change by more than some maximum factor

    • +
    • maxattempts: A limit is placed on the number of times a step is retried.

    • +
    • A check must be made to ensure that the new step is larger than machine roundoff. (Check if \(t+dt == t\).)

    • +
    +
  • +
  • When a step passes, i.e. \(|\Delta_{est}(i)|\leq\Delta_{des}(i)\) is satisfied,

    +
      +
    • dtpassmin: The step is not changed unless it is by some minimum factor.

    • +
    • dtpassmax: The step is not changed by more than some maximum factor.

    • +
    +
  • +
+

The only remaining question is what to take for the initial step. In theory, any step can be taken and the stepper will adjust the step accordingly. In practice, if the step is too far off, and the error is much larger than the given tolerance, the stepper will have difficulty. So some care must be taken in choosing the initial step.

+

Some safeguards can also be taken during the integration by defining,

+
    +
  • dtmin: A limit placed on the smallest possible stepsize

  • +
  • maxsteps: A limit placed on the total number of steps taken.

  • +
+
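
The flavour of this accept/reject logic is sketched below. This is a simplified illustration, not the actual timeloop5Err code: step is a placeholder for a single embedded Runge-Kutta step that returns (y_new, error_estimate), and the comments only mark where the dtpass/dtfail limits from adapt.yaml would be applied.

+
import numpy as np
+
+def adaptive_loop(step, y, t, t_end, dt, rtol, atol, s=0.9, maxsteps=2000):
+    """Minimal accept/reject skeleton for adaptive stepsize control (illustrative)."""
+    times, values = [t], [np.copy(y)]
+    nsteps = 0
+    while t < t_end and nsteps < maxsteps:          # maxsteps guards the whole integration
+        y_new, err = step(y, t, dt)                 # one embedded RK step plus error estimate
+        delta_des = rtol * np.abs(y_new) + atol     # desired accuracy for this step
+        ratio = np.sqrt(np.mean((err / delta_des) ** 2))
+        factor = s * ratio ** (-0.2) if ratio > 0 else 2.0
+        if ratio <= 1.0:
+            # step passes: accept it, then (cautiously) enlarge dt;
+            # this is where dtpassmin/dtpassmax limits would clamp factor
+            t, y = t + dt, y_new
+            times.append(t)
+            values.append(np.copy(y))
+            dt = dt * min(factor, 5.0)
+        else:
+            # step fails: keep t and y, shrink dt and retry;
+            # this is where dtfailmin/dtfailmax limits would clamp factor
+            if t + dt * factor == t:
+                raise RuntimeError('stepsize underflow: dt below machine roundoff')
+            dt = dt * factor
+        nsteps += 1
+    return np.array(times), np.array(values)
+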

The Python code for the adaptive stepsize control is discussed further in Appendix Organization.

+
+

Problem adaptive

+

The demos in the previous section solved the Daisyworld equations using the embedded Runge-Kutta methods with adaptive timestep control.

+
    +
  1. Run the code and find solutions of Daisyworld with the default settings found in adapt.yaml using the timeloop5Err adaptive code

  2. +
  3. Find the solutions again but this time with fixed stepsizes (you can copy and paste code for this if you don’t want to code your own - be sure to read the earlier parts of the lab before attempting this question if you are stuck on how to do this!) and compare the solutions, the size of the timesteps, and the number of timesteps between the fixed and adaptive timestep code.

  4. +
  5. Given the difference in the number of timesteps, how much faster would the fixed timeloop need to be to give the same performance as the adaptive timeloop for this case?

  6. +
+
+
+

Daisyworld Steady States

+

We can now use the Runge-Kutta code with adaptive timestep control to find some steady states of Daisyworld by varying the luminosity \(LS_0\) in the uservars section of adapt.yaml and recording the daisy fractions at the end of the integration. The code was used in the earlier sections to find some ad hoc steady state solutions and the effect of altering some of the model parameters. What is of interest now is the effect of the daisy feedback on the range of parameter values for which non-zero steady states exist. That the feedback does have an effect on the steady states was readily seen in Problem initial.

+

If we fix all other Daisyworld parameters, we find that non-zero steady states will exist for a range of solar luminosities which we characterize by the parameter L. Recall that L is the multiple of the solar constant \(S_0\) that Daisyworld receives. What we will investigate in the next few sections is the effect of the daisy feedback on the range of L for which non-zero steady states can exist.

+

We accomplish this by fixing the Daisyworld parameters and finding the resulting steady state daisy population for a given value of L. A plot is then made of the steady-state daisy populations versus various values of L.

+
+
+

Neutral Daisies

+

The first case we consider is the case investigated in a previous demo where the albedo of the daisies and the ground are set to the same value. This means the daisy population has no effect on the planetary temperature, i.e. there is no feedback (Problem coupling).

+


+

Daisy fraction – daisies have ground albedo (figure)

+

Emission temperature (figure)

+
+
+

Black Daisies

+

Now consider a population of black daisies. Note the sharp jump in the graph when the first non-zero daisy steady states appear and the corresponding rise in the planetary temperature. The appearance of the black daisies results in a strong positive feedback on the temperature. Note as well that the graph drops back to zero at a lower value of L than in the case of neutral daisies.

+

Daisies darker than ground (figure)

+

Temperature (figure)

+
+
+

White Daisies

+

Consider now a population of purely white daisies. In this case there is an abrupt drop in the daisy steady state when it approaches zero, with a corresponding jump in the emission temperature. Another interesting feature is the appearance of hysteresis (the dependence of the state of a system on its history). For example, at an L of 1.4, there are two steady state solutions:

+
    +
  1. fractional coverage of white daisies of about 0.7, and an emission temperature of about 20C (look at the direction of the arrows on each plot to determine which emission temperature is linked to which fractional coverage)

  2. +
  3. no white daisies, and an emission temperature of about 55C.

  4. +
+

This hysteresis arises since the plot of steady states is different when solar luminosity is lowered as opposed to being raised incrementally. So which steady state the planet will be in will depend on the value of the solar constant before it was 1.4 - was it lower, in which case we’d be in state a (white daisies at about 0.7; see the arrows for direction); or was it higher (state b, no white daisies). This is what we mean when we say the state depends on its history - it matters what state it was in before, even though we’re studying steady states.

+

Daisies brighter than ground (figure)

+

Temperature (figure)

+
+
+

Black and White Daisies

+

Finally, consider a population of both black and white daisies. This blends in features from the cases where the daisy population was purely white or black. Note how the appearance of a white daisy population initially causes the planetary temperature to actually drop even though the solar luminosity has been increased.

+

fraction of black and white daisies (figure)

+

note extended temperature range with stabilizing feedbacks (figure)

+
+
+
+

Conclusion

+

Black daisies can survive at lower mean temperatures than white daisies, and white daisies at higher mean temperatures than black daisies. The end result is that the range of L for which non-zero daisy steady states exist is greater than in the case of neutral (or no) daisies. In other words, the feedback from the daisies provides a stabilizing effect that extends the set of environmental conditions in which life on Daisyworld can exist.

+
+

Problem Predator

+

To make life a little more interesting on Daisyworld, add a population of rabbits that feed upon the daisies. The rabbit birth rate will be proportional to the area covered by the daisies while, conversely, the daisy death rate will be proportional to the rabbit population.

+

Add another equation to the Daisyworld model which governs the rabbit population and make the appropriate modifications to the existing daisy equations. Modify the set of equations and solve them with the Runge-Kutta method with adaptive timesteps. Use it to look for steady states and to determine their dependence on the initial conditions and model parameters.

+

Hand in notebook cells that:

+
    +
  1. Show your modified Daisyworld equations and your new integrator class.

  2. +
  3. At least one set of parameter values and initial conditions that leads to the steady state and a plot of the timeseries for the daisies and rabbits.

  4. +
  5. A discussion of the steady state’s dependence on these values, i.e. what happens when they are altered. Include a few plots for illustration.

  6. +
  7. Does adding this feedback extend the range of habitable L values for which non-zero populations exist?

  8. +
+
+
+
+

Appendix: Note on Global Energy Balance

+

The statement that the earth is in energy balance follows from the First Law of Thermodynamics, i.e.

+

The energy absorbed by a closed system is equal to the change in its internal energy plus the work extracted

+

which itself is an expression of the conservation of energy.

+

For the earth, the primary source of energy is radiation from the sun. The power emitted by the sun, known as the solar luminosity, is \(L_0=3.9 \times 10^{26}W\) while the energy flux received at the mean distance of the earth from the sun (\(1.5\times 10^{11}m\)) is called the solar constant, \(S_0=1367\ W m^{-2}\). For Daisy World the solar constant is taken to be \(S_0=3668\ W m^{-2}\).

+

The emission temperature of a planet is the temperature the planet would be at if it emitted energy like a blackbody. A blackbody, so-called because it is a perfect absorber of radiation, obeys the Stefan-Boltzmann Law:

+

eq: Stefan-Boltzmann

+
+\[F_B\ (Wm^{-2}) = \sigma T^4_e\]
+

where \(F_B\) is the emitted flux density and \(\sigma = 5.67\times 10^{-8}\ Wm^{-2}K^{-4}\) is the Stefan-Boltzmann constant. Given the energy absorbed, it is easy to calculate the emission temperature \(T_e\) with the Stefan-Boltzmann equation.

+

In general, a planet will reflect some of the radiation it receives, with the fraction reflected known as the albedo \(\alpha_p\). So the total energy absorbed by the planet is actually the flux density received, times the fraction absorbed, times the area perpendicular to the sun (the ’shadow area’), i.e.

+
+\[E_{\rm absorbed}=S_0(1-\alpha_p)\pi r_p^2\]
+

where \(r_p\) is the planet’s radius.

+

If we still assume the planet emits like a blackbody, we can calculate the corresponding blackbody emission temperature. The total power emitted would be the flux \(F_B\) of the blackbody times its surface area, i.e.

+
+\[E_{\rm blackbody} = \sigma T^4_e 4\pi r_p^2\]
+

Equating the energy absorbed with the energy emitted by a blackbody we can calculate the emission temperature,

+
+\[T^4_e = L \frac{S_0}{4\sigma}(1-\alpha_p)\]
+
+
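
For a quick sense of scale, using the Daisyworld value \(S_0 = 3668\ Wm^{-2}\) quoted above with \(L = 1\) and an assumed planetary albedo of 0.5:

+
sigma = 5.67e-8                          # Stefan-Boltzmann constant, W m^-2 K^-4
+S0, L, albedo_p = 3668.0, 1.0, 0.5       # albedo_p = 0.5 is an illustrative value
+Te = (L * S0 / (4.0 * sigma) * (1.0 - albedo_p)) ** 0.25
+print(f'emission temperature: {Te:.1f} K')   # roughly 300 K
+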
+

Summary: Daisy World Equations

+
+\[\frac{dA_w}{dt} = A_w ( \beta_w x - \chi)\]
+
+\[\frac{dA_b}{dt} = A_b ( \beta_b x - \chi)\]
+
+\[x = 1 - A_w - A_b\]
+
+\[\beta_i = 1.0 - 0.003265(295.5 K -T_i)^2\]
+
+\[T^4_i = R L \frac{S_0}{4\sigma}(\alpha_p-\alpha_i) + T^4_e\]
+
+\[\alpha_p = A_w\alpha_w + A_b\alpha_b + A_g\alpha_g\]
+
+\[T^4_e = L \frac{S_0}{4\sigma}(1-\alpha_p)\]
+
+
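
For reference, these equations transcribe directly into a single right-hand-side function. This is a sketch only (the parameter names are illustrative and the quadratic uses 295.0 K, matching the lab code above); the Integrator classes earlier in the lab remain the working implementation:

+
import numpy as np
+
+def daisyworld_rhs(y, t, L, S0, R, chi, albedo_w, albedo_b, albedo_g):
+    """y[0] = white daisy fraction A_w, y[1] = black daisy fraction A_b."""
+    sigma = 5.67e-8                               # Stefan-Boltzmann constant
+    x = 1.0 - y[0] - y[1]                         # bare fertile ground
+    albedo_p = y[0] * albedo_w + y[1] * albedo_b + x * albedo_g
+    Te_4 = L * S0 / (4.0 * sigma) * (1.0 - albedo_p)
+    eta = R * L * S0 / (4.0 * sigma)
+    temp_w = (eta * (albedo_p - albedo_w) + Te_4) ** 0.25
+    temp_b = (eta * (albedo_p - albedo_b) + Te_4) ** 0.25
+    beta_w = max(1.0 - 0.003265 * (295.0 - temp_w) ** 2, 0.0)   # zero outside 277.5-312.5 K
+    beta_b = max(1.0 - 0.003265 * (295.0 - temp_b) ** 2, 0.0)
+    f = np.empty(2)
+    f[0] = y[0] * (beta_w * x - chi)
+    f[1] = y[1] * (beta_b * x - chi)
+    return f
+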
+

Appendix: Organization of the adaptive Runge Kutta routines

+
    +
  • The coding follows Press et al., with the adaptive Runge Kutta defined in the Integrator base class here

  • +
  • The step size choice is made in timeloop5Err

  • +
  • To set up a specific problem, you need to override two methods as demonstrated in the example code: the member function that initializes the concentrations, set_yinit, and the derivatives routine derivs5

  • +
  • In Problem Initial we define a new member function:

  • +
+
def find_temp(self, yvals):
+        """
+            Calculate the temperatures over the white and black daisies
+            and the planetary equilibrium temperature given the daisy fractions
+
+            input:  yvals -- array of dimension [2] with the white [0] and black [1]
+                    daisy fraction
+            output:  white temperature (K), black temperature (K), equilibrium temperature (K)
+        """
+
+
+

which gives an example of how to use the instance variable data (self.uservars) in additional calculations.

+
+
+

Appendix: 2 minute intro to object oriented programming

+

For a very brief introduction to python classes take a look at these scipy lecture notes that define some of the basic concepts. For perhaps more detail than you want/need to know, see this 2 part series on object oriented programming and inheritance (supercharge your classes with super()). Briefly, we need a way to store a lot of information, for example the Runge-Kutta coefficients, in an organized way that is accessible to multiple functions, without having to pass all that information through the function arguments. Python solves this problem by putting both the data and the functions together into a class, as in the Integrator class below.

+
+

Classes and constructors

+
+
[ ]:
+
+
+
class Integrator:
+    def __init__(self, first, second, third):
+        print('Constructing Integrator')
+        self.a = first
+        self.b = second
+        self.c = third
+
+    def dumpit(self, the_name):
+        printlist = [self.a, self.b, self.c]
+        print(f'dumping arguments for {the_name}: {printlist}')
+
+
+
+
    +
  • __init__() is called the class constructor

  • +
  • a,b,c are called class attributes

  • +
  • dumpit() is called a member function or method

  • +
  • We construct an instance of the class by passing the required arguments to __init__

  • +
+
+
[ ]:
+
+
+
the_integ = Integrator(1, 2, 3)
+print(dir(the_integ))
+#note that the_integ now has a, b, c, and dumpit
+
+
+
+
    +
  • and we call the member function like this:

  • +
+
+
[ ]:
+
+
+
the_integ.dumpit('Demo object')
+
+
+
+

What does this buy us? Member functions only need arguments specific to them, and can use any attribute or other member function attached to the self variable, which doesn’t need to be part of the function call.

+
+
+

finding the attributes and methods of a class instance

+

Python has a couple of functions that allow you to see the methods and attributes of objects

+

To get a complete listing of builtin and user-defined methods and attributes use

+
dir
+
+
+
+
[ ]:
+
+
+
dir(the_integ)
+
+
+
+

To see just the attributes, use

+
vars
+
+
+
+
[ ]:
+
+
+
vars(the_integ)
+
+
+
+

The inspect.getmembers function gives you everything as a list of (name,object) tuples so you can filter the items you’re interested in. See:

+

https://docs.python.org/3/library/inspect.html

+
+
[ ]:
+
+
+
import inspect
+all_info_the_integ = inspect.getmembers(the_integ)
+only_methods = [
+    item[0] for item in all_info_the_integ if inspect.ismethod(item[1])
+]
+print('methods for the_integ: ', only_methods)
+
+
+
+
+
+

Inheritance

+

We can also specialize a class by deriving from a base class and then adding more data or members, or overriding existing values. For example:

+
+
[ ]:
+
+
+
import numpy as np
+class Trig(Integrator):
+
+    def __init__(self, one, two, three, four):
+        print('constructing Trig')
+        #
+        # first construct the base class
+        #
+        super().__init__(one, two, three)
+        self.d = four
+
+    def calc_trig(self):
+        self.trigval = np.sin(self.c * self.d)
+
+    def print_trig(self, the_date):
+        print(f'on {the_date} the value of sin(c*d) = {self.trigval:5.3f}')
+

+
+
+
+
[ ]:
+
+
+
sample = Trig(1, 2, 3, 4)
+sample.calc_trig()
+sample.print_trig('July 5')
+
+
+
+
+
+

Initializing using yaml

+

To specify the initial values for the class, we use a plain text format called yaml. To write a yaml file, start with a dictionary that contains entries that are themselves dictionaries:

+
+
[ ]:
+
+
+
import yaml
+out_dict = dict()
+out_dict['vegetables'] = dict(carrots=5, eggplant=7, corn=2)
+out_dict['fruit'] = dict(apples='Out of season', strawberries=8)
+with open('groceries.yaml', 'w') as f:
+    yaml.safe_dump(out_dict, f)
+
+
+
+
+
[ ]:
+
+
+
#what's in the yaml file?
+#each toplevel dictionary key became a category
+import sys  #output to sys.stdout because print adds blank lines
+with open('groceries.yaml', 'r') as f:
+    for line in f.readlines():
+        sys.stdout.write(line)
+
+
+
+
+
[ ]:
+
+
+
#read into a dictionary
+with open('groceries.yaml', 'r') as f:
+    init_dict = yaml.safe_load(f)
+print(init_dict)
+
+
+
+
+
+

Overriding initial values in a derived class

+

Suppose we want to change a value like the strength of the sun, \(L\), after it has been read in from the initial yaml file. Since a derived class can override the set_yinit method of the Integrator class, we are free to overwrite any variable by reassigning the new value to self in the child constructor.

+

Here’s a simple example showing this kind of reinitialization:

+
+
[ ]:
+
+
+
import numpy as np
+
+
+class Base:
+    #
+    # this constructor is called first
+    #
+    def __init__(self, basevar):
+        self.L = basevar
+
+
+class Child(Base):
+    #
+    # this class changes the initialization
+    # to add a new variable
+    #
+    def __init__(self, a, L):
+        super().__init__(a)
+        #
+        # change the L in the child class
+        #
+        self.L = L
+
+
+
+

Now we can use Child(a,Lval) to construct instances with any value of L we want:

+
+
[ ]:
+
+
+
Lvals = np.linspace(0, 100, 11)
+
+#
+# now make 11 children, each with a different value of L
+#
+a = 5
+for theL in Lvals:
+    newItem = Child(a, theL)
+    print(f'set L value in child class to {newItem.L:3.0f}')
+
+
+
+

To change L in the IntegCoupling class in Problem Coupling, look at changing the value above these lines:

+
initvars = namedtuple('initvars', self.config['initvars'].keys())
+self.initvars = initvars(**self.config['initvars'])
+
+
+
+
+

Specific example

+

So to use this technique for Problem Coupling, override set_yinit so that it will take a new luminosity value newL, and add it to uservars, like this:

+
class IntegCoupling(Integrator):
+    """rewrite the set_yinit method
+       to work with luminosity
+    """
+
+    def set_yinit(self, newL):
+        #
+        # change the luminosity
+        #
+        self.config["uservars"]["L"] = newL # change solar incidence fraction
+        #
+        # make a new namedtuple factory called uservars that includes newL
+        #
+        uservars_fac = namedtuple('uservars', self.config['uservars'].keys())
+        #
+        # use the factory to make the augmented uservars named tuple
+        #
+        self.uservars = uservars_fac(**self.config['uservars'])
+        #
+
+
+    def __init__(self, coeffFileName, newL):
+        super().__init__(coeffFileName)
+        self.set_yinit(newL)
+
+    ...
+
+
+

then construct a new instance with a value of newL like this:

+
theSolver = IntegCoupling("coupling.yaml", newL)
+
+
+

The IntegCoupling constructor first constructs an instance of the Integrator class by calling super() and passing it the name of the yaml file. Once this is done, it calls the IntegCoupling.set_yinit method, which takes the Integrator class instance (called “self” by convention) and modifies it by adding newL to the uservars attribute.

+

Try executing

+
newL = 50
+theSolver = IntegCoupling("coupling.yaml", newL)
+
+
+

and verify that:

+

theSolver.uservars.L is indeed 50

+
+

Check your understanding

+

To see if you’re really getting the zeitgeist, try an alternative design where you leave the constructor as is, and instead add a new method called:

+
def reset_L(self,newL)
+
+
+

so that you could do this:

+
newL = 50
+theSolver = IntegCoupling("coupling.yaml")
+theSolver.reset_L(newL)
+
+
+

and get theSolver.uservars.L set to 50.

+
+
+
+

Why bother?

+

What does object oriented programming buy us? The dream was that companies/coders could ship standard base classes, thoroughly tested and documented, and then users could adapt those classes to their special needs using inheritance. This turned out to be too ambitious, but a dialed-back version of this is definitely now part of many major programming languages.

+
+
[ ]:
+
+
+

+
+
+
+
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab5/01-lab5.ipynb b/notebooks/lab5/01-lab5.ipynb new file mode 100644 index 0000000..f8f496d --- /dev/null +++ b/notebooks/lab5/01-lab5.ipynb @@ -0,0 +1,2251 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Lab 5: Daisyworld" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems\n", + "\n", + "\n", + "[Problem Constant Growth](#prob_constant): Daisyworld with a constant growth rate\n", + "\n", + "[Problem Coupling](#prob_coupling): Daisyworld of neutral daisies coupled to\n", + "the temperature\n", + "\n", + "[Problem Conduction](#prob_conduction): Daisyworld steady states and the effect\n", + "of the conduction parameter R\n", + "\n", + "[Problem Initial](#prob_initial): Daisyworld steady states and initial\n", + "conditions\n", + "\n", + "[Problem Temperature](#prob_temperature): Add temperature retrieval code\n", + "\n", + "[Problem Estimate](#prob_estimate): Compare the error estimate to the true\n", + "error\n", + "\n", + "[Problem Adaptive](#prob_adaptive): Adaptive Timestep Code\n", + "\n", + "[Problem Predators](#prob_predator): Adding predators to Daisyworld\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "## Assignment\n", + "\n", + "See canvas site for which problems you should hand in. Your answers should all be within a jupyter notebook. Use subheadings to organise your notebook by question, and markdown cells to describe what you've done and to answer the questions \n", + "You will be asked to upload:\n", + "1. a pdf of your jupyter notebook answering all questions\n", + "2. the jupyter notebook itself (ipynb file) - if you want to import your own module code, include that with the notebook in a zipfile\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Objectives\n", + "\n", + "In this lab, you will use the tools you have learnt in the previous labs to explore a simple environmental model,\n", + "*Daisyworld*, with the help of a Runge-Kutta method with\n", + "adaptive stepsize control.\n", + "\n", + "The goal is for you to gain some experience using a Runge-Kutta\n", + "integrator and to see the advantages of applying error control to the\n", + "algorithm. As well, you will discover the the possible insights one can\n", + "get from studying numerical solutions of a physical model.\n", + "\n", + "In particular you will be able to:\n", + "\n", + "- explain how the daisies affect the climate in the daisy world model\n", + "\n", + "- define an adaptive step-size model\n", + "\n", + "- explain the reasons why an adaptive step-size model may be faster\n", + " for a given accuracy\n", + "\n", + "- explain why white daisies (alone) can survive at a higher solar\n", + " constant than black daisies\n", + "\n", + "- define hysteresis" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Readings\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. 
However, if you would like additional background on any of\n", + "the following topics, then refer to the sections indicated below:\n", + "\n", + "- **Daisy World:**\n", + "\n", + " - The original article by [Watson and Lovelock, 1983](http://ezproxy.library.ubc.ca/login?url=http://onlinelibrary.wiley.com/enhanced/doi/10.1111/j.1600-0889.1983.tb00031.x) which derive the equations used here.\n", + "\n", + " - A 2008 Reviews of Geophysics article by [Wood et al.](http://ezproxy.library.ubc.ca/login?url=http://doi.wiley.com/10.1029/2006RG000217) with more recent developments (species competition, etc.)\n", + "\n", + "- **Runge-Kutta Methods with Adaptive Stepsize Control:**\n", + "\n", + " - Newman, Section 8.4\n", + "\n", + " - Press, et al. Section 16.2: these are equations we implemented in Python,\n", + " [scanned pdf here](adapt_ode.pdf)\n", + "\n", + " - Burden & Faires Section 5.5" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Introduction\n", + "\n", + "It is obvious that life on earth is highly sensitive to the planet’s\n", + "atmospheric and climatic conditions. What is less obvious, but of great\n", + "interest, is the role biology plays in the sensitivity of the climate.\n", + "This is dramatically illustrated by the concern over the possible\n", + "contribution to global warming by the loss of the rain forests in\n", + "Brazil, and shifts from forests to croplands over many regions of the Earth. \n", + "\n", + "The fact that each may affect the other implies that the climate and\n", + "life on earth are interlocked in a complex series of feedbacks, i.e. the\n", + "climate affects the biosphere which, when altered, then changes the\n", + "climate and so on. A fascinating question arises as to whether or not\n", + "this would eventually lead to a stable climate. This scenerio is\n", + "exploited to its fullest in the *Gaia* hypothesis which\n", + "postulates that the biosphere, atmosphere, ocean and land are all part\n", + "of some totality, dubbed *Gaia*, which is essentially an\n", + "elaborate feedback system which optimizes the conditions of life here on\n", + "earth.\n", + "\n", + "It would be hopeless to attempt to mathematically model such a large,\n", + "complex system. What can be done instead is to construct a ’toy model’\n", + "of the system in which much of the complexity has been stripped away and\n", + "only some of the relevant characteristics retained. The resulting system\n", + "will then be tractable but, unfortunately, may bear little connection\n", + "with the original physical system.\n", + "\n", + "Daisyworld is such a model. Life on Daisyworld has been reduced to just\n", + "two species of daisies of different colors. The only environmental\n", + "condition that affects the daisy growth rate is temperature. The\n", + "temperature in turn is modified by the varying amounts of radiation\n", + "absorbed by the daisies.\n", + "\n", + "Daisyworld is obviously a gross simplification of the real earth.\n", + "However, it does retain the central feature of interest: a feedback loop\n", + "between the climate and life on the planet. Since the equations\n", + "governing the system will be correspondingly simplified, it will allow\n", + "us to investigate under what conditions, if any, can equilibrium be\n", + "reached. The hope is that this will then gain us some insight into how\n", + "life on the real earth may lead to a stable (or unstable) climate." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## The Daisyworld Model\n", + "\n", + "Daisyworld is populated by two types of daisies, one darker than bare ground and the other lighter than bare ground. As with life on earth, the daisies will not grow at extreme temperatures and will have optimum growth at moderate temperatures.\n", + "\n", + "The darker, ’black’ daisies absorb more radiation than the lighter, ’white’ daisies. If the black daisy population grows and spreads over more area, an increased amount of solar energy will be absorbed, which will ultimately raise the temperature of the planet. Conversely, an increase in the white daisy population will result in more radiation being reflected away, lowering the planet’s temperature.\n", + "\n", + "The question to be answered is:\n", + "\n", + "**Under what conditions, if any, will the daisy population and temperature reach equilibrium?**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "### The Daisy Population\n", + "\n", + "The daisy population will be modeled along the lines of standard\n", + "population ecology models where the net growth depends upon the current\n", + "population. For example, the simplest model assumes the rate of growth\n", + "is proportional to the population, i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + "\\frac{dA_w}{dt} = k_w A_w\n", + "$$\n", + "\n", + "$$\n", + "\\frac{dA_b}{dt} = k_b A_b\n", + "$$\n", + "\n", + "where $A_w$\n", + "and $A_b$ are fractions of the total planetary area covered by the white\n", + "and black daisies, respectively, and $k_i$, $i=w,b$, are the white and\n", + "black daisy growth rates per unit time, respectively. If assume the the\n", + "growth rates $k_i$ are (positive) constants we would have exponential\n", + "growth like the bunny rabbits of lore.\n", + "\n", + "We can make the model more realistic by letting the daisy birthrate\n", + "depend on the amount of available land, i.e. $$k_i = \\beta_i x$$ where\n", + "$\\beta_i$ are the white and black daisy growth rates per unit time and\n", + "area, respectively, and $x$ is the fractional area of free fertile\n", + "ground not colonized by either species. We can also add a daisy death\n", + "rate per unit time, $\\chi$, to get\n", + "\n", + "\n", + "$\\textbf{eq: constantgrowth}$\n", + "$$\n", + "\\frac{dA_w}{dt} = A_w ( \\beta_w x - \\chi)\n", + "$$\n", + "\n", + "\n", + "$$\n", + "\\frac{dA_b}{dt} = A_b ( \\beta_b x - \\chi)\n", + "$$\n", + "\n", + "However, even these small modifications are non-trivial mathematically\n", + "as the available fertile land is given by,\n", + "$$\n", + " x = 1 - A_w - A_b\n", + "$$\n", + "\n", + "(assuming all the land mass is fertile) which\n", + "makes the equations non-linear." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem constant growth\n", + "\n", + "\n", + "\n", + "Note that though the daisy growth rate per unit time depends on the amount of available fertile land, it is not\n", + "otherwise coupled to the environment (i.e. $\\beta_i$ is not a function of temperature. Making the growth a function of bare ground, however, keeps the daisy population bounded and the daisy population will eventually reach some steady state. The next python cell has a script that runs a fixed timestep Runge Kutte routine that calculates area coverage of white and black daisies for fixed growth rates $\\beta_w$ and $\\beta_b$. Try changing these growth rates (specified in the derivs5 routine) and the initial white and black concentrations (specified in the fixed_growth.yaml file\n", + "discussed next). To hand in: plot graphs to illustrate how these changes have affected the fractional coverage of black and white daisies over time compared to the original. Comment on the changes that you see.\n", + "\n", + "1. For a given set of growth rates try various (non-zero) initial daisy\n", + " populations.\n", + "\n", + "2. For a given set of initial conditions try various growth rates. In\n", + " particular, try rates that are both greater than and less than the\n", + " death rate.\n", + "\n", + "3. Can you determine when non-zero steady states are achieved? Explain. Connect what you see here with the discussion of hysteresis towards the end of this lab - what determines which steady state is reached? \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "#### Running the constant growth rate demo\n", + "\n", + "In the appendix we discuss the design of the integrator class and the adaptive Runge-Kutta routine. For this demo, we need to be able to change variables in the configuration file. For this assignment problem you are asked to:\n", + "\n", + "a. Change the inital white and black daisy concentrations by changing these lines in the [fixed_growth.yaml](https://github.com/rhwhite/numeric_2022/blob/main/notebooks/lab5/fixed_growth.yaml#L13-L15) input file (you can find this file in this lab directory):\n", + "\n", + " ```yaml\n", + "\n", + " initvars:\n", + " whiteconc: 0.2\n", + " blackconc: 0.7\n", + " ```\n", + "\n", + "b. Change the white and black daisy growth rates by editing the variables beta_w and beta_b in the derivs5 routine in the next cell\n", + "\n", + "The Integrator class contains two different timeloops, both of which use embedded Runge Kutta Cash Carp\n", + "code given in Lab 4 and coded here as [rkckODE5](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L70). The simplest way to loop through the timesteps is just to call the integrator with a specified set of times. This is done in [timeloop5fixed](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L244). Below we will describe how to use the error extimates returned by rkckODE5 to tune the size of the timesteps, which is done in [timeloop5Err](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L244)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.493997Z", + "start_time": "2022-02-17T03:40:22.213290Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "#\n", + "# 4.1 integrate constant growth rates with fixed timesteps\n", + "#\n", + "import context\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "from collections import namedtuple\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "\n", + "\n", + "class Integ51(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_white chi S0 L albedo_black R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'whiteconc blackconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.whiteconc, self.initvars.blackconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " #\n", + " # Construct an Integ51 class by inheriting first intializing\n", + " # the parent Integrator class (called super). Then do the extra\n", + " # initialization in the set_yint function\n", + " #\n", + " def __init__(self, coeffFileName):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"y[0]=fraction white daisies\n", + " y[1]=fraction black daisies\n", + "\n", + " Constant growty rates for white\n", + " and black daisies beta_w and beta_b\n", + "\n", + " returns dy/dt\n", + " \"\"\"\n", + " user = self.uservars\n", + " #\n", + " # bare ground\n", + " #\n", + " x = 1.0 - y[0] - y[1]\n", + "\n", + " # growth rates don't depend on temperature\n", + " beta_b = 0.7 # growth rate for black daisies\n", + " beta_w = 0.7 # growth rate for white daisies\n", + "\n", + " # create a 1 x 2 element vector to hold the derivitive\n", + " f = np.empty([self.nvars], 'float')\n", + " f[0] = y[0] * (beta_w * x - user.chi)\n", + " f[1] = y[1] * (beta_b * x - user.chi)\n", + " return f\n", + "\n", + "\n", + "theSolver = Integ51('fixed_growth.yaml')\n", + "timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + "\n", + "plt.close('all')\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, yVals)\n", + "theLines[0].set_marker('+')\n", + "theLines[1].set_linestyle('--')\n", + "theLines[1].set_color('k')\n", + "theLines[1].set_marker('*')\n", + "theAx.set_title('lab 5 interactive 1 constant growth rate')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "theAx.legend(theLines, ('white daisies', 'black daisies'), loc='best')\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, errorList)\n", + "theLines[0].set_marker('+')\n", + "theLines[1].set_linestyle('--')\n", + "theLines[1].set_color('k')\n", + "theLines[1].set_marker('*')\n", + "theAx.set_title('lab 5 interactive 1 errors')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('error')\n", + "out = theAx.legend(theLines, ('white errors', 'black errors'), loc='best')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### The Daisy Growth Rate - Coupling to the Environment\n", + "\n", + "We now want to couple the Daisy growth rate to the climate, which we do by making the growth 
rate a function of the local temperature $T_i$,\n", + "$$\\beta_i = \\beta_i(T_i)$$\n", + "The growth rate should drop to zero at extreme temperatures and be optimal at moderate temperatures. In Daisyworld this means the daisy population ceases to grow if the temperature drops below $5^o$C or goes above $40^o $C. The simplest model for the growth rate would then be parabolic function of temperature, peaking at $22.5^o$C:\n", + "\n", + "\n", + "$$\\beta_i = 1.0 - 0.003265(295.5 K - T_i)^2$$\n", + "where the $i$ subscript denotes the type of daisy: grey (i=y), white (i=w) or black (i=b). (We're reserving $\\alpha_g$ for the bare ground albedo)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Before specifying the local temperature, and its dependence on the daisy\n", + "population, first consider the emission temperature $T_e$, which is the\n", + "mean temperature of the planet,\n", + "\n", + "\n", + "\n", + "$$ T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)$$\n", + "\n", + "where $S_0$ is a solar\n", + "flux density constant, $L$ is the fraction of $S_0$ received at\n", + "Daisyworld, and $\\alpha_p$ is the planetary albedo. The greater the\n", + "planetary albedo $\\alpha_p$, i.e. the more solar radiation the planet\n", + "reflects, the lower the emission temperature.\n", + "\n", + "**Mathematical note**: The emission temperature is derived on the assumption that the planet is\n", + "in global energy balance and is behaving as a blackbody radiator. See\n", + "the appendix for more information." + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-05T01:50:27.578068Z", + "start_time": "2022-02-05T01:50:27.573440Z" + } + }, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Coupling\n", + "\n", + "Consider daisies with the same albedo as the planet, i.e. ’grey’ or neutral daisies, as specified in derivs5 routine below.\n", + "\n", + "1. For the current value of L (0.2) in the file coupling.yaml, the final daisy steady state is zero. Why is it zero?\n", + "\n", + "2. Find a value of L which leads to a non-zero steady state.\n", + "\n", + "3. What happens to the emission temperature as L is varied? Make a plot of $L$ vs. $T_E$ for 10-15 values of $L$. To do this, I overrode the value of L from the init file by passing a new value into the IntegCoupling constructor (see [Appendix A](#sec_override)). This allowed me to put\n", + "\n", + " ```python\n", + " theSolver = IntegCoupling(\"coupling.yaml\",newL)\n", + " timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + " ```\n", + "\n", + " inside a loop that varied the L value and saved the steady state concentration\n", + " for plotting\n", + "\n", + "After reading the the next section on the local temperature,\n", + "\n", + "4. Do you see any difference between the daisy temperature and emission temperature? Plot both and explain. (Hint: I modified derivs5 to save these variables to self so I could compare their values at the end of the simulation. You could also override timeloop5fixed to do the same thing at each timestep.)\n", + "\n", + "5. How (i.e. through what mechanism) does the makeup of the global daisy population affect the local temperature?\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.504206Z", + "start_time": "2022-02-17T03:40:23.496511Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# define functions\n", + "import matplotlib.pyplot as plt\n", + "\n", + "\n", + "class IntegCoupling(Integrator):\n", + " \"\"\"rewrite the init and derivs5 methods to\n", + " work with a single (grey) daisy\n", + " \"\"\"\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_grey chi S0 L R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'greyconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array([self.initvars.greyconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeffFileName):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"\n", + " Make the growth rate depend on the ground temperature\n", + " using the quadratic function of temperature\n", + "\n", + " y[0]=fraction grey daisies\n", + " t = time\n", + " returns f[0] = dy/dt\n", + " \"\"\"\n", + " sigma = 5.67e-8 # Stefan Boltzman constant W/m^2/K^4\n", + " user = self.uservars\n", + " x = 1.0 - y[0]\n", + " albedo_p = x * user.albedo_ground + y[0] * user.albedo_grey\n", + " Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma\n", + " eta = user.R * user.L * user.S0 / (4.0 * sigma)\n", + " temp_y = (eta * (albedo_p - user.albedo_grey) + Te_4)**0.25\n", + " if (temp_y >= 277.5 and temp_y <= 312.5):\n", + " beta_y = 1.0 - 0.003265 * (295.0 - temp_y)**2.0\n", + " else:\n", + " beta_y = 0.0\n", + "\n", + " # create a 1 x 1 element vector to hold the derivative\n", + " f 
= np.empty([self.nvars], np.float64)\n", + " f[0] = y[0] * (beta_y * x - user.chi)\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.707043Z", + "start_time": "2022-02-17T03:40:23.506823Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot for grey daisies\n", + "import matplotlib.pyplot as plt\n", + "\n", + "theSolver = IntegCoupling('coupling.yaml')\n", + "timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, yVals)\n", + "theAx.set_title('lab 5: interactive 2 Coupling with grey daisies')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "out = theAx.legend(theLines, ('grey daisies', ), loc='best')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## The Local Temperature - Dependence on Surface Heat Conductivity\n", + "\n", + "If we now allow for black and white daisies, the local temperature will\n", + "differ according to the albedo of the region. The regions with white\n", + "daisies will tend to be cooler than the ground and the regions with\n", + "black daisies will tend to be hotter. To determine what the temperature\n", + "is locally, we need to decide how readily the planet surface\n", + "thermalises, i.e. how easily large-scale weather patterns redistributes\n", + "the surface heat.\n", + "\n", + "- If there is perfect heat ‘conduction’ between the different regions\n", + " of the planet then the local temperature will equal the mean\n", + " temperature given by the emission temperature $T_e$.\n", + "\n", + " \n", + " $$\n", + " T^4_i \\equiv T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)\n", + " $$\n", + "\n", + "- If there is no conduction, or perfect ‘insulation’, between regions\n", + " then the temperature will be the emission temperature due to the\n", + " albedo of the local region.\n", + "\n", + " \n", + " $$\n", + " T^4_i= L \\frac{S_0}{4\\sigma}(1-\\alpha_i)\n", + " $$\n", + "where $\\alpha_i$ indicates either $\\alpha_g$, $\\alpha_w$ or $\\alpha_b$.\n", + "\n", + "The local temperature can be chosen to lie between these two values,\n", + "\n", + "\n", + "\n", + "$$\n", + " T^4_i = R L \\frac{S_0}{4\\sigma}(\\alpha_p-\\alpha_i) + T^4_e\n", + "$$\n", + "\n", + "where $R$\n", + "is a parameter that interpolates between the two extreme cases i.e.\n", + "$R=0$ means perfect conduction and $R=1$ implies perfect insulation\n", + "between regions.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
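To make the role of $R$ concrete, here is a minimal sketch that evaluates the local-temperature formula above for the two limiting cases and one intermediate value of $R$. The albedo values and $L$ are illustrative assumptions; $S_0 = 3668\,W\,m^{-2}$ and $\sigma$ are the values used throughout this lab.

```python
# Sketch: local daisy temperature as a function of the conduction parameter R.
# Albedo values and L are illustrative assumptions; S0 and sigma are the lab's values.
sigma = 5.67e-8    # Stefan-Boltzmann constant (W m^-2 K^-4)
S0 = 3668.0        # Daisyworld solar constant (W m^-2)
L = 1.0            # solar luminosity multiplier (assumed)
albedo_p = 0.30    # planetary albedo (assumed)
albedo_w = 0.75    # white-daisy albedo (assumed)

Te_4 = L * S0 / (4.0 * sigma) * (1.0 - albedo_p)   # emission temperature to the 4th power
for R in (0.0, 0.5, 1.0):
    # R = 0 -> perfect conduction (T_i = T_e); R = 1 -> perfect insulation
    Ti = (R * L * S0 / (4.0 * sigma) * (albedo_p - albedo_w) + Te_4) ** 0.25
    print(f"R = {R:3.1f}: T_white = {Ti:5.1f} K, T_e = {Te_4 ** 0.25:5.1f} K")
```

With $R=0$ the daisy patch sits at the emission temperature, while $R=1$ reproduces the blackbody temperature implied by the patch's own albedo.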
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-07T19:31:15.932451Z", + "start_time": "2022-02-07T19:31:15.926836Z" + } + }, + "source": [ + "### Problem Conduction\n", + "The conduction parameter R will determine the temperature differential between the bare ground and the regions with black or white daisies. The code in the next cell specifies the derivatives for this situation, removing the feedback between the daisies and the planetary albedo but introducint conduction. Use it to investigate these two questions:\n", + "\n", + "1. Change the value of R and observe the effects on the daisy and emission temperature.\n", + "\n", + "2. What are the effects on the daisy growth rate and the final steady states?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.719383Z", + "start_time": "2022-02-17T03:40:23.710150Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# 5.2 keep the albedo constant at alpha_p and vary the conductivity R\n", + "#\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "\n", + "\n", + "class Integ53(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_white chi S0 L albedo_black R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'whiteconc blackconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.whiteconc, self.initvars.blackconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeffFileName):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"y[0]=fraction white daisies\n", + " y[1]=fraction black daisies\n", + " no feedback between daisies and\n", + " albedo_p (set to ground albedo)\n", + " \"\"\"\n", + " sigma = 5.67e-8 # Stefan Boltzman constant W/m^2/K^4\n", + " user = self.uservars\n", + " x = 1.0 - y[0] - y[1]\n", + " #\n", + " # hard wire the albedo to that of the ground -- no daisy feedback\n", + " #\n", + " albedo_p = user.albedo_ground\n", + " Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma\n", + " eta = user.R * user.L * user.S0 / (4.0 * sigma)\n", + " temp_b = (eta * (albedo_p - user.albedo_black) + Te_4)**0.25\n", + " temp_w = (eta * (albedo_p - user.albedo_white) + Te_4)**0.25\n", + "\n", + " if (temp_b >= 277.5 and temp_b <= 312.5):\n", + " beta_b = 1.0 - 0.003265 * (295.0 - temp_b)**2.0\n", + " else:\n", + " beta_b = 0.0\n", + "\n", + " if (temp_w >= 277.5 and temp_w <= 312.5):\n", + " beta_w = 1.0 - 0.003265 * (295.0 - temp_w)**2.0\n", + " else:\n", + " beta_w = 0.0\n", + "\n", + " # create a 1 x 2 element vector to hold the derivitive\n", + " f = np.empty([self.nvars], 'float')\n", + " f[0] = y[0] * (beta_w * x - user.chi)\n", + " f[1] = y[1] * (beta_b * x - user.chi)\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.925942Z", + "start_time": "2022-02-17T03:40:23.721788Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot conduction problem\n", + "import matplotlib.pyplot as plt\n", + "\n", + "theSolver = 
Integ53('conduction.yaml')\n", + "timeVals, yVals, errorList = theSolver.timeloop5fixed()\n", + "\n", + "plt.close('all')\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "theLines = theAx.plot(timeVals, yVals)\n", + "theLines[1].set_linestyle('--')\n", + "theLines[1].set_color('k')\n", + "theAx.set_title('lab 5 interactive 3 -- conduction problem')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "out = theAx.legend(theLines, ('white daisies', 'black daisies'),\n", + " loc='center right')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## The Feedback Loop - Feedback Through the Planetary Albedo\n", + "\n", + "The amount of solar radiation the planet reflects will depend on the\n", + "daisy population since the white daisies will reflect more radiation\n", + "than the bare ground and the black daisies will reflect less. So a\n", + "reasonable estimate of the planetary albedo $\\alpha_p$ is an average of\n", + "the albedo’s of the white and black daisies and the bare ground,\n", + "weighted by the amount of area covered by each, i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + " \\alpha_p = A_w\\alpha_w + A_b\\alpha_b + A_g\\alpha_g\n", + "$$\n", + "\n", + "A greater\n", + "population of white daisies will tend to increase planetary albedo and\n", + "decrease the emission temperature, as is apparent from equation\n", + "([lab5:eq:tempe]), while the reverse is true for the black daisies.\n", + "\n", + "To summarize: The daisy population is controlled by its growth rate\n", + "$\\beta_i$ which is a function of the local\n", + "temperature $T_i$ $$\\beta_i = 1.0 - 0.003265(295.5 K -T_i)^2$$ If the\n", + "conductivity $R$ is nonzero, the local temperature is a function of\n", + "planetary albedo $\\alpha_p$\n", + "\n", + "$$T_i = \\left[ R L \\frac{S_0}{4\\sigma}(\\alpha_p-\\alpha_i)\n", + " + T^4_e \\right]^{\\frac{1}{4}}$$\n", + "\n", + "which is determined by the daisy\n", + "population.\n", + "\n", + "- Physically, this provides the feedback from the daisy population\n", + " back to the temperature, completing the loop between the daisies and\n", + " temperature.\n", + "\n", + "- Mathematically, this introduces a rather nasty non-linearity into\n", + " the equations which, as pointed out in the lab 1, usually makes it\n", + " difficult, if not impossible, to obtain exact analytic solutions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
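The loop just described can be traced in a few lines: daisy fractions give a planetary albedo, the albedo gives local temperatures, and the temperatures give new growth rates. The sketch below uses assumed albedo, $L$ and $R$ values (the lab's runs read these from the yaml configuration files) and the 295.0 K constant that appears in the `derivs5` code above.

```python
# Sketch of the Daisyworld feedback chain: daisy fractions -> planetary albedo
# -> local temperatures -> growth rates.  Albedo, L and R values are assumed here;
# the lab's runs read them from the yaml configuration files.
sigma, S0 = 5.67e-8, 3668.0
L, R = 1.0, 0.2
alb_w, alb_b, alb_g = 0.75, 0.25, 0.50      # assumed white / black / ground albedos

def growth_rates(frac_w, frac_b):
    frac_g = 1.0 - frac_w - frac_b
    alb_p = frac_w * alb_w + frac_b * alb_b + frac_g * alb_g    # planetary albedo
    Te_4 = L * S0 / (4.0 * sigma) * (1.0 - alb_p)               # emission temperature**4
    betas = []
    for alb_i in (alb_w, alb_b):
        Ti = (R * L * S0 / (4.0 * sigma) * (alb_p - alb_i) + Te_4) ** 0.25
        # quadratic growth curve (295.0 K, as in derivs5 above), zero outside 277.5-312.5 K
        betas.append(1.0 - 0.003265 * (295.0 - Ti) ** 2 if 277.5 <= Ti <= 312.5 else 0.0)
    return betas

print(growth_rates(0.3, 0.2))   # [beta_white, beta_black] for 30% white, 20% black cover
```

Because the growth rates returned here feed straight back into $dA_w/dt$ and $dA_b/dt$, the system is non-linear in exactly the way just described.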
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Initial\n", + "The feedback means a stable daisy population (a\n", + "steady state) and the environmental conditions are in a delicate\n", + "balance. The code below produces a steady state which arises from a given initial daisy\n", + "population that starts with only white daisies.\n", + "\n", + "1. Add a relatively small (5\\%, blackconc = 0.05) initial fraction of black daisies to the\n", + " value in initial.yaml and see\n", + " what effect this has on the temperature and final daisy populations.\n", + " Do you still have a final non-zero daisy population?\n", + "\n", + "2. Set the initial black daisy population to 0.05 Attempt to adjust the initial white daisy population to obtain a\n", + " non-zero steady state. What value of initial white daisy population gives you a non-zero steady state for blackconc=0.05? Do you have to increase or decrease the initial fraction? What is your explanation for this behavior?\n", + "\n", + "3. Experiment with other initial fractions of daisies and look for\n", + " non-zero steady states. Describe and explain your results. Connect what you see here with the discussion of hysteresis towards the end of this lab - what determines which steady state is reached? " + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:23.936568Z", + "start_time": "2022-02-17T03:40:23.927965Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# functions for problem initial\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "\n", + "\n", + "class Integ54(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'albedo_white chi S0 L albedo_black R albedo_ground'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'whiteconc blackconc'\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.whiteconc, self.initvars.blackconc])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeff_file_name):\n", + " super().__init__(coeff_file_name)\n", + " self.set_yinit()\n", + "\n", + " def find_temp(self, yvals):\n", + " \"\"\"\n", + " Calculate the temperatures over the white and black daisies\n", + " and the planetary equilibrium temperature given the daisy fractions\n", + "\n", + " input: yvals -- array of dimension [2] with the white [0] and black [1]\n", + " daisy fractiion\n", + " output: white temperature (K), black temperature (K), equilibrium temperature (K)\n", + " \"\"\"\n", + " sigma = 5.67e-8 # Stefan Boltzman constant W/m^2/K^4\n", + " user = self.uservars\n", + " bare = 1.0 - yvals[0] - yvals[1]\n", + " albedo_p = bare * user.albedo_ground + \\\n", + " yvals[0] * user.albedo_white + yvals[1] * user.albedo_black\n", + " Te_4 = user.S0 / 4.0 * user.L * (1.0 - albedo_p) / sigma\n", + " temp_e = Te_4**0.25\n", + " eta = user.R * user.L * user.S0 / (4.0 * sigma)\n", + " temp_b = (eta * (albedo_p - user.albedo_black) + Te_4)**0.25\n", + " temp_w = (eta * (albedo_p - user.albedo_white) + Te_4)**0.25\n", + " return (temp_w, temp_b, temp_e)\n", + "\n", + " def derivs5(self, y, t):\n", + " \"\"\"y[0]=fraction white daisies\n", + " y[1]=fraction black daisies\n", + " no 
feedback between daisies and\n", + " albedo_p (set to ground albedo)\n", + " \"\"\"\n", + " temp_w, temp_b, temp_e = self.find_temp(y)\n", + "\n", + " if (temp_b >= 277.5 and temp_b <= 312.5):\n", + " beta_b = 1.0 - 0.003265 * (295.0 - temp_b)**2.0\n", + " else:\n", + " beta_b = 0.0\n", + "\n", + " if (temp_w >= 277.5 and temp_w <= 312.5):\n", + " beta_w = 1.0 - 0.003265 * (295.0 - temp_w)**2.0\n", + " else:\n", + " beta_w = 0.0\n", + " user = self.uservars\n", + " bare = 1.0 - y[0] - y[1]\n", + " # create a 1 x 2 element vector to hold the derivitive\n", + " f = np.empty_like(y)\n", + " f[0] = y[0] * (beta_w * bare - user.chi)\n", + " f[1] = y[1] * (beta_b * bare - user.chi)\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:24.818994Z", + "start_time": "2022-02-17T03:40:23.938484Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot for problem initial\n", + "import matplotlib.pyplot as plt\n", + "import pandas as pd\n", + "\n", + "theSolver = Integ54('initial.yaml')\n", + "timevals, yvals, errorlist = theSolver.timeloop5fixed()\n", + "daisies = pd.DataFrame(yvals, columns=['white', 'black'])\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "line1, = theAx.plot(timevals, daisies['white'])\n", + "line2, = theAx.plot(timevals, daisies['black'])\n", + "line1.set(linestyle='--', color='r', label='white')\n", + "line2.set(linestyle='--', color='k', label='black')\n", + "theAx.set_title('lab 5 interactive 4, initial conditions')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('fractional coverage')\n", + "out = theAx.legend(loc='center right')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
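As a stepping stone to the next problem, the `find_temp` method defined in `Integ54` can be applied directly to the final daisy fractions from the run above. This is a sketch that assumes `theSolver` and `yvals` from the previous cell are still in scope.

```python
# Sketch: evaluate the local and emission temperatures at the end of the run,
# reusing the Integ54 instance and the yvals array from the previous cell.
final_fracs = yvals[-1]            # [white fraction, black fraction] at the last timestep
temp_w, temp_b, temp_e = theSolver.find_temp(final_fracs)
print(f"final white-daisy temperature: {temp_w:6.2f} K")
print(f"final black-daisy temperature: {temp_b:6.2f} K")
print(f"final emission temperature:    {temp_e:6.2f} K")
```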
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Temperature\n", + "The code above in Problem Initial adds a new method, ```find_temp``` that takes the white/black daisy fractions and calculates local and planetary temperatures.\n", + "\n", + "1. override ```timeloop5fixed``` so that it saves these three temperatures, plus the daisy growth rates\n", + " to new variables in the Integ54 instance\n", + "\n", + "2. Make plots of (temp_w, temp_b) and (beta_w, beta_b) vs. time for a case with non-zero equilibrium\n", + " concentrations of both black and white daisies" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Adaptive Stepsize in Runge-Kutta\n", + "\n", + "\n", + "\n", + "### Why Adaptive Stepsize?\n", + "\n", + "As a rule of thumb, accuracy increases in Runge-Kutta methods as stepsize decreases. At the same time, the number of function evaluations performed increases. This tradeoff between accuracy of the solution and computational cost always exists, but in the ODE solution algorithms presented earlier it often appears to be unnecessarily large. To see this, consider the solution to a problem in two different time intervals - in the first time interval, the solution is close to steady, whereas in the second one it changes quickly. For acceptable accuracy with a non-adaptive method the step size will have to be adjusted so that the approximate solution is close to the actual solution in the second interval. The stepsize will be fairly small, so that the approximate solution is able to follow the changes in the solution here. However, as there is no change in stepsize throughout the solution process, the same step size will be applied to approximate the solution in the first time interval, where clearly a much larger stepsize would suffice to achieve the same accuracy. Thus, in a region where the solution behaves nicely a lot of function evaluations are wasted because the stepsize is chosen in accordance with the most quickly changing part of the solution.\n", + "\n", + "The way to address this problem is the use of adaptive stepsize control. This class of algorithms adjusts the stepsize taken in a time interval according to the properties of the solution in that interval, making it useful for producing a solution that has a given accuracy in the minimum number of steps.\n", + "\n", + "\n", + "\n", + "### Designing Adaptive Stepsize Control\n", + "\n", + "Now that the goal is clear, the question remains of how to close in on it. As mentioned above, an adaptive algorithm is usually asked to solve a problem to a desired accuracy. To be able to adjust the stepsize in Runge-Kutta the algorithm must therefore calculate some estimate of how far its solution deviates from the actual solution. If with its initial stepsize this estimate is already well within the desired accuracy, the algorithm can proceed with a larger stepsize. If the error estimate is larger than the desired accuracy, the algorithm decreases the stepsize at this point and attempts to take a smaller step. Calculating this error estimate will always increase the amount of work done at a step compared to non-adaptive methods. Thus, the remaining problem is to devise a method of calculating this error estimate that is both\n", + "inexpensive and accurate.\n", + "\n", + "\n", + "\n", + "### Error Estimate by Step Doubling\n", + "\n", + "The first and simple approach to arriving at an error estimate is to simply take every step twice. 
The second time the step is divided up into two steps, producing a different estimate of the solution. The difference in the two solutions can be used to produce an estimate of the truncation error for this step.\n", + "\n", + "How expensive is this method to estimate the error? A single step of fourth order Runge-Kutta always takes four function evaluations. As the second time the step is taken in half-steps, it will take 8 evaluations.\n", + "However, the first function evaluation in taking a step twice is identical to both steps, and thus the overall cost for one step with step doubling is $12 - 1 = 11$ function evaluations. This should be compared to taking two normal half-steps as this corresponds to the overall accuracy achieved. So we are looking at 3 function evaluations more per step, or an increase of computational cost by a factor of $1.375$.\n", + "\n", + "Step doubling works in practice, but the next section presents a slicker way of arriving at an error estimate that is less computationally expensive. It is the commmonly used one today.\n", + "\n", + "\n", + "\n", + "### Error Estimate using Embedded Runge-Kutta\n", + "\n", + "Another way of estimating the truncation error of a step is due to the existence of the special fifth-order Runge-Kutta methods discussed earlier. These methods use six function evaluations which can be recombined to produce a fourth-order method . Again, the difference between the fifth and the fourth order solution is used to calculate an\n", + "estimate of the truncation error. Obviously this method requires fewer function evaluations than step doubling, as the two estimates use the same evaluation points. Originally this method was found by Fehlberg, and later Cash and Karp produced the set of constants presented earlier that produce an efficient and accurate error estimate." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
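The step-doubling idea is easy to try on the test equation used in the next problem, $dy/dt = -y + t + 1$ with $y(0)=1$. The sketch below uses a classical fourth-order Runge-Kutta step rather than the lab's embedded Cash-Karp scheme, and compares one full step against two half steps.

```python
# Sketch: step-doubling error estimate with classical RK4 on dy/dt = -y + t + 1, y(0) = 1.
import numpy as np

def derivs(y, t):
    return -y + t + 1.0

def rk4_step(y, t, h):
    k1 = derivs(y, t)
    k2 = derivs(y + 0.5 * h * k1, t + 0.5 * h)
    k3 = derivs(y + 0.5 * h * k2, t + 0.5 * h)
    k4 = derivs(y + h * k3, t + h)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

y0, t0, h = 1.0, 0.0, 0.2
big = rk4_step(y0, t0, h)                                          # one full step
half = rk4_step(rk4_step(y0, t0, h / 2.0), t0 + h / 2.0, h / 2.0)  # two half steps
exact = t0 + h + np.exp(-(t0 + h))   # exact solution y = t + exp(-t)
est_error = half - big               # step-doubling estimate of the full-step truncation error
true_error = exact - big             # actual error of the full step
print(f"estimated error {est_error: .3e}, actual error {true_error: .3e}")
```

The difference between the two numerical answers tracks the actual error of the full step closely, which is all a stepsize controller needs.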
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Estimate\n", + "In the demo below, compare the error estimate to the true error, on the initial value problem from Lab 4,\n", + "\n", + "$$\\frac{dy}{dt} = -y +t +1, \\;\\;\\;\\; y(0) =1$$\n", + "\n", + "which has the exact solution\n", + "\n", + "$$y(t) = t + e^{-t}$$\n", + "\n", + "1. Play with the time step and final time, attempting small changes at first. How reasonable is the error estimate?\n", + "\n", + "2. Keep decreasing the time step. Does the error estimate diverge from the computed error? Why?\n", + "\n", + "3. Keep increasing the time step. Does the error estimate diverge? What is happening with the numerical solution?" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:24.826024Z", + "start_time": "2022-02-17T03:40:24.821203Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Functions for problem estimate\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "\n", + "\n", + "class Integ55(Integrator):\n", + " def set_yinit(self):\n", + " #\n", + " # read in 'c1 c2 c3'\n", + " #\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in initial yinit\n", + " #\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " self.yinit = np.array([self.initvars.yinit])\n", + " self.nvars = len(self.yinit)\n", + " return None\n", + "\n", + " def __init__(self, coeff_file_name):\n", + " super().__init__(coeff_file_name)\n", + " self.set_yinit()\n", + "\n", + " def derivs5(self, y, theTime):\n", + " \"\"\"\n", + " y[0]=fraction white daisies\n", + " \"\"\"\n", + " user = self.uservars\n", + " f = np.empty_like(self.yinit)\n", + " f[0] = user.c1 * y[0] + user.c2 * theTime + user.c3\n", + " return f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.701593Z", + "start_time": "2022-02-17T03:40:24.827927Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Solve and plot for problem estimate\n", + "import matplotlib.pyplot as plt\n", + "\n", + "theSolver = Integ55('expon.yaml')\n", + "\n", + "timeVals, yVals, yErrors = theSolver.timeloop5Err()\n", + "timeVals = np.array(timeVals)\n", + "exact = timeVals + np.exp(-timeVals)\n", + "yVals = np.array(yVals)\n", + "yVals = yVals.squeeze()\n", + "yErrors = np.array(yErrors)\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "line1 = theAx.plot(timeVals, yVals, label='adapt')\n", + "line2 = theAx.plot(timeVals, exact, 'r+', label='exact')\n", + "theAx.set_title('lab 5 interactive 5')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('y value')\n", + "theAx.legend(loc='center right')\n", + "\n", + "#\n", + "# we need to unpack yvals (a list of arrays of length 1\n", + "# into an array of numbers using a list comprehension\n", + "#\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "realestError = yVals - exact\n", + "actualErrorLine = theAx.plot(timeVals, realestError, label='actual error')\n", + "estimatedErrorLine = theAx.plot(timeVals, yErrors, label='estimated error')\n", + "theAx.legend(loc='best')\n", + "\n", + "timeVals, yVals, yErrors = theSolver.timeloop5fixed()\n", + "\n", + "np_yVals = np.array(yVals).squeeze()\n", + "yErrors = 
np.array(yErrors)\n", + "np_exact = timeVals + np.exp(-timeVals)\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "line1 = theAx.plot(timeVals, np_yVals, label='fixed')\n", + "line2 = theAx.plot(timeVals, np_exact, 'r+', label='exact')\n", + "theAx.set_title('lab 5 interactive 5 -- fixed')\n", + "theAx.set_xlabel('time')\n", + "theAx.set_ylabel('y value')\n", + "theAx.legend(loc='center right')\n", + "\n", + "thefig, theAx = plt.subplots(1, 1)\n", + "realestError = np_yVals - np_exact\n", + "actualErrorLine = theAx.plot(timeVals, realestError, label='actual error')\n", + "estimatedErrorLine = theAx.plot(timeVals, yErrors, label='estimated error')\n", + "theAx.legend(loc='best')\n", + "theAx.set_title('lab 5 interactive 5 -- fixed errors')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### Using Error to Adjust the Stepsize\n", + "\n", + "Both step doubling and embedded methods leave us with the difference\n", + "between two different order solutions to the same step. Provided is a\n", + "desired accuracy, $\\Delta_{des}$. The way this accuracy is specified\n", + "depends on the problem. It can be relative to the solution at step $i$,\n", + "\n", + "$$\\Delta_{des}(i) = RTOL\\cdot |y(i)|$$\n", + "\n", + "where $RTOL$ is the relative\n", + "tolerance desired. An absolute part should be added to this so that the\n", + "desired accuracy does not become zero. There are more ways to adjust the\n", + "error specification to the problem, but the overall goal of the\n", + "algorithm always is to make $\\Delta_{est}(i)$, the estimated error for a\n", + "step, satisfy\n", + "\n", + "$$|\\Delta_{est}(i)|\\leq\\Delta_{des}(i)|$$\n", + "\n", + "Note also that\n", + "for a system of ODEs $\\Delta_{des}$ is of course a vector, and it is\n", + "wise to replace the componentwise comparison by a vector norm.\n", + "\n", + "Note now that the calculated error term is $O(h^{5})$ as it was found as\n", + "an error estimate to fourth-order Runge-Kutta methods. This makes it\n", + "possible to scale the stepsize as\n", + "\n", + "
eq:hnew
\n", + "$$h_{new} = h_{old}[{\\Delta_{des}\\over \\Delta_{est}}]^{1/5}$$\n", + "\n", + "or,\n", + "to give an example of the suggested use of vector norms above, the new\n", + "stepsize is given by\n", + "\n", + "
eq:hnewnorm
\n", + "$$h_{new} = S h_{old}\\{[{1\\over N}\\sum_{i=1}^{N}({\\Delta_{est}(i)\\over \\Delta_{des}(i)})^{2}]^{1/2}\\}^{-1/5}\\}$$\n", + "\n", + "using the\n", + "root-mean-square norm. $S$ appears as a safety factor ($0
\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "\n", + "## Coding Runge-Kutta Adaptive Stepsize Control\n", + "\n", + "The Runge-Kutta code developed in Lab 4 solves the given ODE system in fixed timesteps. It is now necessary to exert adaptive timestep control over the solution. The python code for this is at given in\n", + "[these lines.](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab5/lab5_funs.py#L145-L197)\n", + "\n", + "In principle, this is pretty simple:\n", + "\n", + "1. As before, take a step specified by the Runge-Kutta algorithm.\n", + "\n", + "2. Determine whether the estimated error lies within the user specified tolerance\n", + "\n", + "3. If the error is too large, calculate the new stepsize using the equations above, e.g. $h_{new} = S h_{old}\\{[{1\\over N}\\sum_{i=1}^{N}({\\Delta_{est}(i)\\over \\Delta_{des}(i)})^{2}]^{1/2}\\}^{-1/5}\\}$ and retake the step.\n", + "\n", + "This can be accomplished by writing a new [timeloop5Err](https://github.com/rhwhite/numeric_2024/blob/main/numlabs/lab5/lab5_funs.py#L115-L117) method which evaluates each Runge-Kutta step. This routine must now also return the estimate of the truncation error.\n", + "\n", + "In practice, it is prudent to take a number of safeguards. This involves defining a number of variables that place limits on the change in stepsize:\n", + "\n", + "- A safety factor ($0
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem adaptive\n", + "The demos in the previous section solved the Daisyworld equations using the embedded Runge-Kutta methods with adaptive timestep control.\n", + "\n", + "1. Run the code and find solutions of Daisyworld with the default settings found in adapt.yaml using the timeloop5Err adaptive code\n", + "\n", + "2. Find the solutions again but this time with fixed stepsizes (you can copy and paste code for this if you don't want to code your own - be sure to read the earlier parts of the lab before attempting this question if you are stuck on how to do this!) and compare the solutions, the size of the timesteps, and the number of the timesteps between the fixed and adaptive timestep code.\n", + "\n", + "3. Given the difference in the number of timesteps, how much faster would the fixed timeloop need to be to give the same performance as the adaptive timeloop for this case?\n", + "\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Daisyworld Steady States\n", + "\n", + "We can now use the Runge-Kutta code with adaptive timestep control to find some steady states of Daisyworld by varying the luminosity $LS_0$ in the uservars section of adapt.yaml and recording the daisy fractions at the end of the integration. The code was used in the earlier sections to find some adhoc steady state solutions and the effect of altering some of the model parameters. What is of interest now is the effect of the daisy feedback on the range of parameter values for which non-zero steady states exist. That the feedback does have an effect on the steady states was readily seen in [Problem initial](#prob_initial).\n", + "\n", + "If we fix all other Daisyworld parameters, we find that non-zero steady states will exist for a range of solar luminosities which we characterize by the parameter L. Recall, that L is the multiple of the solar constant $S_0$ that Daisyworld receives. What we will investigate in the next few sections is the effect of the daisy feedback on the range of L for which non-zero steady states can exist.\n", + "\n", + "We accomplish this by fixing the Daisyworld parameters and finding the resulting steady state daisy population for a given value of L. A plot is then made of the steady-state daisy populations versus various values of L.\n", + "\n", + "\n", + "\n", + "### Neutral Daisies\n", + "\n", + "The first case we consider is the case investigated in a previous demo where the albedo of the daisies and the ground are set to the same value. This means the daisy population has no effect on the planetary temperature, i.e. there is no feedback ([Problem coupling](#prob_coupling)).\n", + "\n", + "$~$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Daisy fraction -- daisies have ground albedo\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Emission temperature\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### Black Daisies\n", + "\n", + "Now consider a population of black daisies. Note the sharp jump in the\n", + "graph when the first non-zero daisy steady states appear and the\n", + "corresponding rise in the planetary temperature. 
The appearance of the\n", + "black daisies results in a strong positive feedback on the temperature.\n", + "Note as well that the graph drops back to zero at a lower value of L\n", + "than in the case of neutral daisies." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Daisies darker than ground\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "Temperature\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "### White Daisies\n", + "\n", + "Consider now a population of purely white daisies. In this case there is\n", + "an abrupt drop in the daisy steady state when it approaches zero with a\n", + "corresponding jump in the emission temperature. Another interesting\n", + "feature is the appearance of hysteresis (the dependence of the state of a system on its history). I.e. at an L of 1.4, there are two steady state solutions:\n", + "\n", + "a. fractional coverage of white daisies of about 0.7, and an emission temperature of about 20C (look at the direction of the arrows on each plot to determine which emission temperature is linked to which fractional coverage)\n", + "\n", + "b. no white daisies, and an emission temperature of about 55C. \n", + "\n", + "This hysteresis arises since the plot of steady states is different when solar luminosity is lowered as opposed to being raised incrementally. So which steady state the planet will be in will depend on the value of the solar constant before it was 1.4 - was it lower, in which case we'd be in state a (white daisies at about 0.7; see the arrows for direction); or was it higher (state b, no white daisies). This is what we mean when we say the state depends on its _history_ - it matters what state it was in before, even though we're studying steady states." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Daisies brighter than ground\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "Temperature\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "\n", + "\n", + "### Black and White Daisies\n", + "\n", + "Finally, consider a population of both black and white daisies. This\n", + "blends in features from the cases where the daisy population was purely\n", + "white or black. Note how the appearance of a white daisy population\n", + "initially causes the planetary temperature to actually drop even though\n", + "the solar luminosity has been increased." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "fraction of black and white daisies\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "note extended temperature range with stabilizing feedbacks\n", + "" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## Conclusion\n", + "\n", + "Black daisies can survive at lower mean temperatures than the white daisies and the reverse is true for white daisies. The end result is that the range of L for which the non-zero daisy steady states exist is greater than the case of neutral (or no) daisies . 
In other words, the feedback from the daisies provide a stabilizing effect that extends the set of environmental conditions in which life on Daisyworld can exist.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Predator\n", + "To make life a little more interesting on Daisyworld, add a population of rabbits that feed upon the daisies. The\n", + "rabbit birth rate will be proportional to the area covered by the daisies while, conversely, the daisy *death rate* will be proportional to the rabbit population.\n", + "\n", + "Add another equation to the Daisyworld model which governs the rabbit population and make the appropriate modifications to the existing daisy equations. Modify the set of equations and solve it with the Runge-Kutta method with adaptive timesteps. Use it to look for steady states and to determine their dependence on the initial conditions and model parameters.\n", + "\n", + "Hand in notebook cells that:\n", + "\n", + "1. Show your modified Daisyworld equations and your new integrator class.\n", + "\n", + "2. At least one set of parameter values and initial conditions that leads to the steady state and a plot of the timeseries for the daisies and rabbits.\n", + "\n", + "3. A discussion of the steady state’s dependence on these values, i.e. what happens when they are altered. Include a few plots for illustration.\n", + "\n", + "4. Does adding this feedback extend the range of habital L values for which non-zero populations exist?\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Appendix: Note on Global Energy Balance\n", + "\n", + "The statement that the earth is in energy balance follows from the First\n", + "Law of Thermodynamics, i.e.\n", + "\n", + "**The energy absorbed by an isolated system is equal to the\n", + " change in the internal energy minus the work extracted**\n", + "\n", + "which itself is an expression of the conservation of energy.\n", + "\n", + "For the earth, the primary source of energy is radiation from the sun.\n", + "The power emitted by the sun, known as the solar luminosity, is\n", + "$L_0=3.9 \\times 10^{26}W$ while the energy flux received at the\n", + "mean distance of the earth from the sun ($1.5\\times 10^{11}m$) is called\n", + "the solar constant, $S_0=1367\\ W m^{-2}$. For Daisy World the solar\n", + "constant is taken to be $S_0=3668\\ W m^{-2}$.\n", + "\n", + "The emission temperature of a planet is the temperature the planet would\n", + "be at if it emitted energy like a blackbody. A blackbody, so-called\n", + "because it is a perfect absorber of radiation, obeys the\n", + "Stefan-Boltzmann Law:\n", + "\n", + "\n", + "\n", + "\\textbf{eq: Stefan-Boltzman}\n", + "$$ F_B\\ (Wm^{-2}) = \\sigma T^4_e$$\n", + "\n", + " where $\\epsilon$ is the energy density and\n", + "$\\sigma = 5.67\\times 10^{-8}Wm^{-2}K^{-4}$. Given the energy absorbed,\n", + "it is easy to calculate the emission temperature $T_e$ with\n", + "Stefan-Boltzman equation.\n", + "\n", + "In general, a planet will reflect some of the radiation it receives,\n", + "with the fraction reflected known as the albedo $\\alpha_p$. 
So the total\n", + "energy absorbed by the planet is actually flux density received times\n", + "the fraction absorbed times the perpendicular area to the sun ( the\n", + "’shadow area’), i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + " E_{\\rm absorbed}=S_0(1-\\alpha_p)\\pi r_p^2$$\n", + "\n", + "where $r^2_p$ is the\n", + "planet’s radius.\n", + "\n", + "If we still assume the planet emits like a blackbody, we can calculate\n", + "the corresponding blackbody emission temperature. The total power\n", + "emitted would be the flux $F_B$ of the blackbody times its\n", + "surface area, i.e.\n", + "\n", + "\n", + "\n", + "$$\n", + " E_{\\rm blackbody} = \\sigma T^4_e 4\\pi r_p^2$$\n", + "\n", + "Equating the energy absorbed with the energy emitted by a blackbody we\n", + "can calculate the emission temperature,\n", + "\n", + "\n", + "\n", + "$$\n", + " T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)$$\n", + "\n", + "\n", + "\n", + "## Summary: Daisy World Equations\n", + "\n", + "$$\\frac{dA_w}{dt} = A_w ( \\beta_w x - \\chi)$$\n", + "\n", + "$$\\frac{dA_b}{dt} = A_b ( \\beta_b x - \\chi)$$\n", + "\n", + "$$x = 1 - A_w - A_b$$\n", + "\n", + "$$\\beta_i = 1.0 - 0.003265(295.5 K -T_i)^2$$\n", + "\n", + "$$T^4_i = R L \\frac{S_0}{4\\sigma}(\\alpha_p-\\alpha_i) + T^4_e$$\n", + "\n", + "$$\\alpha_p = A_w\\alpha_w + A_b\\alpha_b + A_g\\alpha_g$$\n", + "\n", + "$$T^4_e = L \\frac{S_0}{4\\sigma}(1-\\alpha_p)$$" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Appendix: Organization of the adaptive Runge Kutta routines\n", + "\n", + "* The coding follows [Press et al.](adapt_ode.pdf), with the adaptive Runge Kutta defined\n", + " in the Integrator base class [here](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L43-L59)\n", + "\n", + "* The step size choice is made in [timeloop5err](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L115-L118)\n", + "\n", + "* To set up a specific problem, you need to overide two methods as demonstrated in the example code:\n", + "the member function that initalizes the concentrations: [yinit](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L115-L118) and the derivatives routine [derivs5](https://github.com/rhwhite/numeric_2022/blob/main/numlabs/lab5/lab5_funs.py#L66-L68)\n", + "\n", + "* In [Problem Initial](#prob_initial) we define a new member function:\n", + "\n", + "```python\n", + "\n", + "def find_temp(self, yvals):\n", + " \"\"\"\n", + " Calculate the temperatures over the white and black daisies\n", + " and the planetary equilibrium temperature given the daisy fractions\n", + "\n", + " input: yvals -- array of dimension [2] with the white [0] and black [1]\n", + " daisy fraction\n", + " output: white temperature (K), black temperature (K), equilibrium temperature (K)\n", + " \"\"\"\n", + "```\n", + "which give an example of how to use the instance variable data (self.uservars) in additional calculations." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Appendix: 2 minute intro to object oriented programming\n", + "\n", + "For a very brief introduction to python classes take a look at [these scipy lecture notes](http://www.scipy-lectures.org/intro/language/oop.html)\n", + "that define some of the basic concepts. For perhaps more detail than you want/need to know, see this 2 part\n", + "series on [object oriented programming](https://realpython.com/python3-object-oriented-programming) and inheritence ([supercharge your classes with super()](https://realpython.com/python-super/))\n", + "Briefly, we need a way to store a lot of information, for\n", + "example the Runge-Kutta coefficients, in an organized way that is accessible to multiple functions,\n", + "without having to pass all that information through the function arguments. 
Python solves this problem\n", + "by putting both the data and the functions together into an class, as in the Integrator class below.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "### Classes and constructors" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.864965Z", + "start_time": "2022-02-17T03:40:25.861468Z" + } + }, + "outputs": [], + "source": [ + "class Integrator:\n", + " def __init__(self, first, second, third):\n", + " print('Constructing Integrator')\n", + " self.a = first\n", + " self.b = second\n", + " self.c = third\n", + "\n", + " def dumpit(self, the_name):\n", + " printlist = [self.a, self.b, self.c]\n", + " print(f'dumping arguments for {the_name}: {printlist}')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "* ```__init__()``` is called the class constructor\n", + "\n", + "* a,b,c are called class attributes\n", + "\n", + "* ```dumpit()``` is called a member function or method\n", + "\n", + "* We construct and instance of the class by passing the required arguments to ```__init__```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.870243Z", + "start_time": "2022-02-17T03:40:25.866862Z" + } + }, + "outputs": [], + "source": [ + "the_integ = Integrator(1, 2, 3)\n", + "print(dir(the_integ))\n", + "#note that the_integ now has a, b, c, and dumpit" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "* and we call the member function like this:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.874862Z", + "start_time": "2022-02-17T03:40:25.872305Z" + } + }, + "outputs": [], + "source": [ + "the_integ.dumpit('Demo object')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What does this buy us? Member functions only need arguments specific to them, and can use any\n", + "attribute or other member function attached to the self variable, which doesn't need to be\n", + "part of the function call." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### finding the attributes and methods of a class instance\n", + "\n", + "Python has a couple of functions that allow you to see the methods and\n", + "attributes of objects\n", + "\n", + "To get a complete listing of builtin and user-defined methods and attributes use\n", + "\n", + "```\n", + " dir\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.880876Z", + "start_time": "2022-02-17T03:40:25.877080Z" + } + }, + "outputs": [], + "source": [ + "dir(the_integ)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To see just the attributes, use\n", + "\n", + "```\n", + " vars\n", + "```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.886283Z", + "start_time": "2022-02-17T03:40:25.882379Z" + } + }, + "outputs": [], + "source": [ + "vars(the_integ)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The inspect.getmembers function gives you everything as a list of (name,object) tuples\n", + "so you can filter the items you're interested in. 
See:\n", + "\n", + "https://docs.python.org/3/library/inspect.html" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.892154Z", + "start_time": "2022-02-17T03:40:25.888471Z" + } + }, + "outputs": [], + "source": [ + "import inspect\n", + "all_info_the_integ = inspect.getmembers(the_integ)\n", + "only_methods = [\n", + " item[0] for item in all_info_the_integ if inspect.ismethod(item[1])\n", + "]\n", + "print('methods for the_integ: ', only_methods)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Inheritance" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 2 + }, + "source": [ + "We can also specialize a class by driving from a base and then adding more data or members,\n", + "or overriding existing values. For example:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.898142Z", + "start_time": "2022-02-17T03:40:25.894004Z" + } + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "class Trig(Integrator):\n", + "\n", + " def __init__(self, one, two, three, four):\n", + " print('constructing Trig')\n", + " #\n", + " # first construct the base class\n", + " #\n", + " super().__init__(one, two, three)\n", + " self.d = four\n", + "\n", + " def calc_trig(self):\n", + " self.trigval = np.sin(self.c * self.d)\n", + "\n", + " def print_trig(self, the_date):\n", + " print(f'on {the_date} the value of sin(a*b)=: {self.trigval:5.3f}')\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.903481Z", + "start_time": "2022-02-17T03:40:25.899897Z" + } + }, + "outputs": [], + "source": [ + "sample = Trig(1, 2, 3, 4)\n", + "sample.calc_trig()\n", + "sample.print_trig('July 5')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Initializing using yaml\n", + "\n", + "To specify the intial values for the class, we use a plain text\n", + "format called [yaml](http://www.yaml.org/spec/1.2/spec.html). 
To write a yaml\n", + "file, start with a dictionary that contains entries that are themselves dictionaries:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.911033Z", + "start_time": "2022-02-17T03:40:25.905692Z" + } + }, + "outputs": [], + "source": [ + "import yaml\n", + "out_dict = dict()\n", + "out_dict['vegetables'] = dict(carrots=5, eggplant=7, corn=2)\n", + "out_dict['fruit'] = dict(apples='Out of season', strawberries=8)\n", + "with open('groceries.yaml', 'w') as f:\n", + " yaml.safe_dump(out_dict, f)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.916796Z", + "start_time": "2022-02-17T03:40:25.912866Z" + } + }, + "outputs": [], + "source": [ + "#what's in the yaml file?\n", + "#each toplevel dictionary key became a category\n", + "import sys #output to sys.stdout because print adds blank lines\n", + "with open('groceries.yaml', 'r') as f:\n", + " for line in f.readlines():\n", + " sys.stdout.write(line)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.924798Z", + "start_time": "2022-02-17T03:40:25.919000Z" + } + }, + "outputs": [], + "source": [ + "#read into a dictionary\n", + "with open('groceries.yaml', 'r') as f:\n", + " init_dict = yaml.safe_load(f)\n", + "print(init_dict)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "### Overriding initial values in a derived class" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Suppose we want to change a value like the strength of the sun, $L$, after it's been\n", + "read in from the initail yaml file? Since a derived class can override the yinit function\n", + "in the Integrator class, we are free to change it to overwrite any variable by reassigning\n", + "the new value to self in the child constructor.\n", + "\n", + "Here's a simple example showing this kind of reinitialization:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.932658Z", + "start_time": "2022-02-17T03:40:25.929740Z" + } + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "\n", + "\n", + "class Base:\n", + " #\n", + " # this constructor is called first\n", + " #\n", + " def __init__(self, basevar):\n", + " self.L = basevar\n", + "\n", + "\n", + "class Child(Base):\n", + " #\n", + " # this class changes the initialization\n", + " # to add a new variable\n", + " #\n", + " def __init__(self, a, L):\n", + " super().__init__(a)\n", + " #\n", + " # change the L in the child class\n", + " #\n", + " self.L = L" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now we can use Child(a,Lval) to construct instances with any value of L we want:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-17T03:40:25.938863Z", + "start_time": "2022-02-17T03:40:25.934853Z" + } + }, + "outputs": [], + "source": [ + "Lvals = np.linspace(0, 100, 11)\n", + "\n", + "#\n", + "# now make 10 children, each with a different value of L\n", + "#\n", + "a = 5\n", + "for theL in Lvals:\n", + " newItem = Child(a, theL)\n", + " print(f'set L value in child class to {newItem.L:3.0f}')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To change L in the IntegCoupling class in [Problem Conduction](#prob_conduction) look at\n", + "changing the value above these lines:" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "```python\n", + "initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + "self.initvars = initvars(**self.config['initvars'])\n", + "```" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Specific example\n", + "\n", + "So to use this technique for [Problem Conduction](#prob_conduction), override `set_yinit` so that\n", + "it will take a new luminosity value newL, and add it to uservars, like this:\n", + "\n", + "```python\n", + "class IntegCoupling(Integrator):\n", + " \"\"\"rewrite the set_yinit method\n", + " to work with luminosity\n", + " \"\"\"\n", + "\n", + " def set_yinit(self, newL):\n", + " #\n", + " # change the luminocity\n", + " #\n", + " self.config[\"uservars\"][\"L\"] = newL # change solar incidence fraction\n", + " #\n", + " # make a new namedtuple factory called uservars that includes newL \n", + " #\n", + " uservars_fac = namedtuple('uservars', self.config['uservars'].keys())\n", + " #\n", + " # use the factory to make the augmented uservars named tuple\n", + " #\n", + " self.uservars = uservars_fac(**self.config['uservars'])\n", + " #\n", + "\n", + "\n", + " def __init__(self, coeffFileName, newL):\n", + " super().__init__(coeffFileName)\n", + " self.set_yinit(newL)\n", + " \n", + " ...\n", + "```\n", + "\n", + "then construct a new instance with a value of newL like this:\n", + 
"\n", + "```python\n", + "theSolver = IntegCoupling(\"coupling.yaml\", newL)\n", + "```\n", + "\n", + "The IntegCoupling constructor first constructs an instance of the Integrator\n", + "class by calling `super()` and passing it the name of the yaml file. Once this\n", + "is done then it\n", + "calls the `IntegCoupling.set_yinit` method which takes the Integrator class instance\n", + "(called \"self\" by convention) and modifies it by adding newL to the usersvars\n", + "attribute.\n", + "\n", + "Try executing\n", + "\n", + "```python\n", + "newL = 50\n", + "theSolver = IntegCoupling(\"coupling.yaml\", newL)\n", + "```\n", + "\n", + "and verify that:\n", + "\n", + "`theSolver.uservars.L` is indeed 50\n", + "\n", + "#### Check your understanding\n", + "\n", + "To see if you're really getting the zeitgeist, try an alternative design where\n", + "you leave the constructor as is, and instead add a new method called:\n", + "\n", + "```python\n", + "def reset_L(self,newL)\n", + "```\n", + "\n", + "so that you could do this:\n", + "\n", + "```python\n", + "newL = 50\n", + "theSolver = IntegCoupling(\"coupling.yaml\")\n", + "theSolver.reset_L(newL)\n", + "```\n", + "\n", + "and get `theSolver.uservars.L` set to 50." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Why bother?\n", + "\n", + "What does object oriented programming buy us? The dream was that companies/coders could ship\n", + "standard base classes, thoroughly tested and documented, and then users could adapt those\n", + "classes to their special needs using inheritence. This turned out to be too ambitous,\n", + "but a dialed-back version of this is definitely now part of many major programming languages." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": false, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "679.091px", + "left": "0px", + "top": "66.2926px", + "width": "207.145px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab5/adapt_ode.pdf b/notebooks/lab5/adapt_ode.pdf new file mode 100644 index 0000000..2d0956b Binary files /dev/null and b/notebooks/lab5/adapt_ode.pdf differ diff --git a/notebooks/lab6/01-lab6.html b/notebooks/lab6/01-lab6.html new file 
mode 100644 index 0000000..e0a916f --- /dev/null +++ b/notebooks/lab6/01-lab6.html @@ -0,0 +1,582 @@ + Lab 6: The Lorenz equations — Numeric course 22.1 documentation
Lab 6: The Lorenz equations


List of Problems

Problem Experiment: Investigation of the behaviour of solutions

Problem Steady-states: Find the stationary points of the Lorenz system

Problem Eigenvalues: Find the eigenvalues of the stationary point (0,0,0)

Problem Stability: Discuss the effect of r on the stability of the solution

Problem Adaptive: Adaptive time-stepping for the Lorenz equations

Problem Sensitivity: Sensitivity to initial conditions

Objectives

+

In this lab, you will investigate the transition to chaos in the Lorenz equations – a system of non-linear ordinary differential equations. Using interactive examples, and analytical and numerical techniques, you will determine the stability of the solutions to the system, and discover a rich variety in their behaviour. You will program both an adaptive and a non-adaptive Runge-Kutta code for the problem, and determine the relative merits of each.


Readings

+

There is no required reading for this lab, beyond the contents of the lab itself. Nevertheless, the original 1963 paper by Lorenz  is worthwhile reading from a historical standpoint.

+

If you would like additional background on any of the following topics, then refer to Appendix B for the following:

+
    +
  • Easy Reading:

    +
      +
    • Gleick  (1987) [pp. 9-31], an interesting overview of the science of chaos (with no mathematical details), and a look at its history.

    • +
    • Palmer (1993) has a short article on Lorenz’ work, concentrating on its consequences for weather prediction.

    • +
    +
  • +
  • Mathematical Details:

    +
      +
    • Sparrow (1982), an in-depth treatment of the mathematics behind the Lorenz equations, including some discussion of numerical methods.

    • +
    • The original equations by Saltzman (1962) and the first Lorenz (1963) paper on the computation.

    • +
    +
  • +
+
+
+

Introduction

+

For many people working in the physical sciences, the butterfly effect is a well-known phrase. But even if you are unacquainted with the term, its consequences are something you are intimately familiar with. Edward Lorenz investigated the feasibility of performing accurate, long-term weather forecasts, and came to the conclusion that even something as seemingly insignificant as the flap of a butterfly’s wings can have an influence on the weather on the other side of the globe. This implies +that global climate modelers must take into account even the tiniest of variations in weather conditions in order to have even a hope of being accurate. Some of the models used today in weather forecasting have up to a million unknown variables!

+

With the advent of modern computers, many people believed that accurate predictions of systems as complicated as the global weather were possible. Lorenz’ studies (Lorenz, 1963), both analytical and numerical, were concerned with simplified models for the flow of air in the atmosphere. He found that even for systems with considerably fewer variables than the weather, the long-term behaviour of solutions is intrinsically unpredictable. He found that this type of non-periodic, or chaotic +behaviour, appears in systems that are described by non-linear differential equations.

+

The atmosphere is just one of many hydrodynamical systems, which exhibit a variety of solution behaviour: some flows are steady; others oscillate between two or more states; and still others vary in an irregular or haphazard manner. This last class of behaviour in a fluid is known as turbulence, or in more general systems as chaos. Examples of chaotic behaviour in physical systems include

+
    +
  • thermal convection in a tank of fluid, driven by a heated plate on the bottom, which displays an irregular pattern of “convection rolls” for certain ranges of the temperature gradient;

  • +
  • a rotating cylinder, filled with fluid, that exhibits regularly-spaced waves or irregular, nonperiodic flow patterns under different conditions;

  • +
  • the Lorenzian water wheel, a mechanical system, described in Appendix A.

  • +
+

One of the simplest systems to exhibit chaotic behaviour is a system of three ordinary differential equations, studied by Lorenz, and which are now known as the Lorenz equations (see equations (eq:lorenz)). They are an idealization of a more complex hydrodynamical system of twelve equations describing turbulent flow in the atmosphere, but which are still able to capture many of the important aspects of the behaviour of atmospheric flows. The Lorenz equations determine the evolution of a system described by three time-dependent state variables, \(x(t)\), \(y(t)\) and \(z(t)\). The state in Lorenz’ idealized climate at any time, \(t\), can be given by a single point, \((x,y,z)\), in phase space. As time varies, this point moves around in the phase space, and traces out a curve, which is also called an orbit or trajectory.

+

The video below shows an animation of the 3-dimensional phase space trajectories of \(x, y, z\) for the Lorenz equations presented below. It is calculated with the python script written by Jake VanderPlas: lorenz_ode.py

+
+
[ ]:
+
+
+
from IPython.display import YouTubeVideo
+YouTubeVideo('DDcCiXLAk2U')
+
+
+
+
+
+

Using the Integrator class

+

lorenz_ode.py uses the odeint package from scipy. That’s fine if we are happy with a black box, but we can also use the Integrator class from lab 5. Here is the sub-class Integ61 that is specified for the Lorenz equations:

+
+
[ ]:
+
+
+
import context
+from numlabs.lab5.lab5_funs import Integrator
+from collections import namedtuple
+import numpy as np
+
+
+
+class Integ61(Integrator):
+
+    def __init__(self, coeff_file_name,initvars=None,uservars=None,
+                timevars=None):
+        super().__init__(coeff_file_name)
+        self.set_yinit(initvars,uservars,timevars)
+
+    def set_yinit(self,initvars,uservars,timevars):
+        #
+        # read in 'sigma beta rho', override if uservars not None
+        #
+        if uservars:
+            self.config['uservars'].update(uservars)
+        uservars = namedtuple('uservars', self.config['uservars'].keys())
+        self.uservars = uservars(**self.config['uservars'])
+        #
+        # read in 'x y z'
+        #
+        if initvars:
+            self.config['initvars'].update(initvars)
+        initvars = namedtuple('initvars', self.config['initvars'].keys())
+        self.initvars = initvars(**self.config['initvars'])
+        #
+        # set dt, tstart, tend if overriding base class values
+        #
+        if timevars:
+            self.config['timevars'].update(timevars)
+            timevars = namedtuple('timevars', self.config['timevars'].keys())
+            self.timevars = timevars(**self.config['timevars'])
+        self.yinit = np.array(
+            [self.initvars.x, self.initvars.y, self.initvars.z])
+        self.nvars = len(self.yinit)
+
+    def derivs5(self, coords, t):
+        x,y,z = coords
+        u=self.uservars
+        f=np.empty_like(coords)
+        f[0] = u.sigma * (y - x)
+        f[1] = x * (u.rho - z) - y
+        f[2] = x * y - u.beta * z
+        return f
+
+
+
+

The main difference from daisyworld is that I’ve changed the __init__ function to take optional arguments initvars, uservars and timevars, to give us more flexibility in overriding the default configuration specified in lorenz.yaml
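For example (a sketch only; the parameter value here is illustrative and assumes lorenz.yaml sits in the working directory), you can override just one group of settings and keep the yaml defaults for the rest:

```python
# override only rho; sigma, beta, the initial conditions and the
# time variables keep their defaults from lorenz.yaml
theSolver = Integ61('lorenz.yaml', uservars=dict(rho=14))
print(theSolver.uservars.rho)   # -> 14
```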

+

I also want to be able to plot the trajectories in 3d, which means that I need the Axes3D class from matplotlib. I’ve written a convenience function called plot_3d that sets start and stop points and the viewing angle:

+
+
[ ]:
+
+
+
import warnings
+warnings.simplefilter(action = "ignore", category = FutureWarning)
+%matplotlib inline
+from matplotlib import pyplot as plt
+from mpl_toolkits.mplot3d import Axes3D
+plt.style.use('ggplot')
+
+def plot_3d(ax,xvals,yvals,zvals):
+    """
+        plot a 3-d trajectory with start and stop markers
+    """
+    line,=ax.plot(xvals,yvals,zvals,'r-')
+    ax.set_xlim((-20, 20))
+    ax.set_ylim((-30, 30))
+    ax.set_zlim((5, 55))
+    ax.grid(True)
+    #
+    # look down from a 30 degree elevation and an azimuth of 5 degrees
+    #
+    ax.view_init(30,5)
+    line,=ax.plot(xvals,yvals,zvals,'r-')
+    ax.plot([-20,15],[-30,-30],[0,0],'k-')
+    ax.scatter(xvals[0],yvals[0],zvals[0],marker='o',c='green',s=75)
+    ax.scatter(xvals[-1],yvals[-1],zvals[-1],marker='^',c='blue',s=75)
+    out=ax.set(xlabel='x',ylabel='y',zlabel='z')
+    line.set(alpha=0.2)
+    return ax
+

+
+
+

In the code below I set timevars, uservars and initvars to illustrate a sample orbit in phase space (with initial value \((5,5,5)\)). Notice that the orbit appears to be lying in a surface composed of two “wings”. In fact, for the parameter values used here, all orbits, no matter the initial conditions, are eventually attracted to this surface; such a surface is called an attractor, and this specific one is termed the butterfly attractor … a very fitting name, both for its appearance, +and for the fact that it is a visualization of solutions that exhibit the “butterfly effect.” The individual variables are plotted versus time in Figure xyz-vs-t.

+
+
[ ]:
+
+
+
#
+# make a nested dictionary to hold parameters
+#
+timevars=dict(tstart=0,tend=27,dt=0.01)
+uservars=dict(sigma=10,beta=2.6666,rho=28)
+initvars=dict(x=5,y=5,z=5)
+params=dict(timevars=timevars,uservars=uservars,initvars=initvars)
+#
+# expand the params dictionary into key,value pairs for
+# the Integ61 constructor using dictionary expansion
+#
+theSolver = Integ61('lorenz.yaml',**params)
+timevals, coords, errorlist = theSolver.timeloop5fixed()
+xvals,yvals,zvals=coords[:,0],coords[:,1],coords[:,2]
+
+
+fig = plt.figure(figsize=(6,6))
+ax = fig.add_axes([0, 0, 1, 1], projection='3d')
+ax=plot_3d(ax,xvals,yvals,zvals)
+out=ax.set(title='starting point: {},{},{}'.format(*coords[0,:]))
+#help(ax.view_init)
+
+
+
+

Figure fixed-plot: A plot of the solution to the Lorenz equations as an orbit in phase space. Parameters: \(\sigma=10\), \(\beta=\frac{8}{3}\), \(\rho=28\); initial values: \((x,y,z)=(5,5,5)\).

+
+
[ ]:
+
+
+
fig,ax = plt.subplots(1,1,figsize=(8,6))
+ax.plot(timevals,xvals,label='x')
+ax.plot(timevals,yvals,label='y')
+ax.plot(timevals,zvals,label='z')
+ax.set(title='x, y, z for trajectory',xlabel='time')
+out=ax.legend()
+
+
+
+

Figure xyz-vs-t: A plot of the solution to the Lorenz equations versus time. Parameters: \(\sigma=10\), \(\beta=\frac{8}{3}\), \(\rho=28\); initial values: \((x,y,z)=(5,5,5)\).

+

As you saw in the movie, the behaviour of the solution, even though it seems to be confined to a specific surface, is anything but regular. The solution seems to loop around and around forever, oscillating around one of the wings, and then jump over to the other one, with no apparent pattern to the number of revolutions. This example is computed for just one choice of parameter values, and you will see in the problems later on in this lab, that there are many other types of solution behaviour. +In fact, there are several very important characteristics of the solution to the Lorenz equations which parallel what happens in much more complicated systems such as the atmosphere:

+
    +
  1. The solution remains within a bounded region (that is, none of the values of the solution “blow up”), which means that the solution will always be physically reasonable.

  2. +
  3. The solution flips back and forth between the two wings of the butterfly diagram, with no apparent pattern. This “strange” way that the solution is attracted towards the wings gives rise to the name strange attractor.

  4. +
  5. The resulting solution depends very heavily on the given initial conditions. Even a very tiny change in one of the initial values can lead to a solution which follows a totally different trajectory, if the system is integrated over a long enough time interval.

  6. +
  7. The solution is irregular or chaotic, meaning that it is impossible, based on parameter values and initial conditions (which may contain small measurement errors), to predict the solution at any future time.

  8. +
+
+
+

The Lorenz Equations

+

As mentioned in the previous section, the equations we will be considering in this lab model an idealized hydrodynamical system: two-dimensional convection in a tank of water which is heated at the bottom (as pictured in Figure Convection below).

+
+
[ ]:
+
+
+
from IPython.display import Image
+Image(filename="images/convection.png")
+
+
+
+

Figure Convection: Lorenz studied the flow of fluid in a tank heated at the bottom, which results in “convection rolls”, where the warm fluid rises, and the cold fluid drops to the bottom.

+

Lorenz wrote the equations in the form

+
+\[\begin{split}\begin{aligned} \frac{dx}{dt} &= \sigma(y-x) \\ \frac{dy}{dt} &= \rho x-y-xz \\ \frac{dz}{dt} &= xy-\beta z \end{aligned}\end{split}\]
+

where \(\sigma\), \(\rho\) and \(\beta\) are real, positive parameters. The variables in the problem can be interpreted as follows:

+
    +
  • \(x\) is proportional to the intensity of the convective motion (positive for clockwise motion, and a larger magnitude indicating more vigorous circulation),

  • +
  • \(y\) is proportional to the temperature difference between the ascending and descending currents (it’s positive if the warm water is on the bottom),

  • +
  • \(z\) is proportional to the distortion of the vertical temperature profile from linearity (a value of 0 corresponds to a linear gradient in temperature, while a positive value indicates that the temperature is more uniformly mixed in the middle of the tank and the strongest gradients occur near the boundaries),

  • +
  • \(t\) is the dimensionless time,

  • +
  • \(\sigma\) is called the Prandtl number (it involves the viscosity and thermal conductivity of the fluid),

  • +
  • \(\rho\) is a control parameter, representing the temperature difference between the top and bottom of the tank, and

  • +
  • \(\beta\) measures the width-to-height ratio of the convection layer.

  • +
+

Notice that these equations are non-linear in \(x\), \(y\) and \(z\), which is a result of the non-linearity of the fluid flow equations from which this simplified system is obtained.

+

Mathematical Note: This system of equations is derived by Saltzman (1962) for the thermal convection problem. However, the same equations (eq:lorenz) arise in other physical systems as well. One example is the Lorenzian water wheel model (described in Appendix A), whose advantage over the original derivation by Saltzman (which is also used in Lorenz’ 1963 paper) is that the system of ODEs is obtained directly from the physics, rather than as an approximation to a partial differential equation.

+

Remember from Section Introduction that the Lorenz equations exhibit nonperiodic solutions which behave in a chaotic manner. Using analytical techniques, it is actually possible to make some qualitative predictions about the behaviour of the solution before doing any computations. However, before we move on to a discussion of the stability of the problem in Section [lab6:sec:stability], you should do the following exercise, which will give you a hands-on introduction to +the behaviour of solutions to the Lorenz equations.

+
+
+

Boundedness of the Solution

+

The easiest way to see that the solution is bounded in time is by looking at the motion of the solution in phase space, \((x,y,z)\), as the flow of a fluid, with velocity \((\dot{x}, \dot{y}, \dot{z})\) (the “dot” is used to represent a time derivative, in order to simplify notation in what follows). The divergence of this flow is given by

+
+\[\frac{\partial \dot{x}}{\partial x} + \frac{\partial \dot{y}}{\partial y} + \frac{\partial \dot{z}}{\partial z},\]
+

and measures how the volume of a fluid particle or parcel changes – a positive divergence means that the fluid volume is increasing locally, and a negative divergence means that the fluid volume is shrinking locally (zero divergence signifies an incompressible fluid, which you will see more of in later labs). If you look back to the Lorenz equations (eq:lorenz), and take partial derivatives, it is clear that the divergence of this flow is given by

+
+\[\frac{\partial \dot{x}}{\partial x} + \frac{\partial \dot{y}}{\partial y} + \frac{\partial \dot{z}}{\partial z} = -(\sigma + \beta + 1).\]
+

Since \(\sigma\) and \(\beta\) are both positive, real constants, the divergence is a negative number, which is always less than \(-1\). Therefore, each small volume shrinks to zero as the time \(t\rightarrow\infty\), at a rate which is independent of \(x\), \(y\) and \(z\). The consequence for the solution, \((x,y,z)\), is that every trajectory in phase space is eventually confined to a region of zero volume. As you saw in Problem experiment, this region, or attractor, need not be a point – in fact, the two wings of the “butterfly diagram” are a surface with zero volume.
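If you want to check this divergence computation symbolically, here is a minimal sketch (assuming the sympy package is available; it is not otherwise used in this lab):

```python
import sympy as sp

x, y, z, sigma, rho, beta = sp.symbols('x y z sigma rho beta', real=True)
# right hand sides of the Lorenz equations (eq:lorenz)
xdot = sigma * (y - x)
ydot = rho * x - y - x * z
zdot = x * y - beta * z
# divergence of the phase-space flow
div = sp.diff(xdot, x) + sp.diff(ydot, y) + sp.diff(zdot, z)
print(sp.simplify(div))  # -beta - sigma - 1
```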

+

The most important consequence of the solution being bounded is that none of the physical variables, \(x\), \(y\), or \(z\) “blows up.” Consequently, we can expect that the solution will remain within physically reasonable limits.

+

Problem Experiment

+

Lorenz’ results are based on the following values of the physical parameters taken from Saltzman’s paper (Saltzman, 1962):

+
+\[\sigma=10 \quad \mathrm{and} \quad \beta=\frac{8}{3}.\]
+

As you will see in Section stability, there is a critical value of the parameter \(\rho\), \(\rho^\ast=470/19\approx 24.74\) (for these values of \(\sigma\) and \(\beta\)); it is critical in the sense that for any value of \(\rho>\rho^\ast\), the flow is unstable.

+

To allow you to investigate the behaviour of the solution to the Lorenz equations, you can try out various parameter values in the following interactive example. Initially, leave \(\sigma\) and \(\beta\) alone, and modify only \(\rho\) and the initial conditions. If you have time, you can try varying the other two parameters, and see what happens. Here are some suggestions:

+
    +
  • Fix the initial conditions at \((5,5,5)\) and vary \(\rho\) between \(0\) and \(100\).

  • +
  • Fix \(\rho=28\), and vary the initial conditions; for example, try \((0,0,0)\), \((0.1,0.1,0.1)\), \((0,0,20)\), \((100,100,100)\), \((8.5,8.5,27)\), etc.

  • +
  • Anything else you can think of …

  • +
+
    +
  1. Describe the different types of behaviour you see and compare them to what you saw in Figure fixed-plot. Also, discuss the results in terms of what you read in Section Introduction regarding the four properties of the solution.

  2. +
  3. One question you should be sure to ask yourself is: Does changing the initial condition affect where the solution ends up? The answer to this question will indicate whether there really is an attractor which solutions approach as \(t\rightarrow\infty\).

  4. +
  5. Finally, for the different types of solution behaviour, can you interpret the results physically in terms of the thermal convection problem?

  6. +
+

Now, we’re ready to find out why the solution behaves as it does. In Section Introduction, you were told about four properties of solutions to the Lorenz equations that are also exhibited by the atmosphere, and in the problem you just worked through, you saw these properties for yourself. In the remainder of this section, you will see mathematical reasons for two of those characteristics, namely the boundedness and stability (or instability) of solutions.

+
+
+

Steady States

+

A steady state of a system is a point in phase space from which the system will not change in time, once that state has been reached. In other words, it is a point, \((x,y,z)\), such that the solution does not change, or where

+
+\[\frac{dx}{dt} = 0 \quad\ \mathrm{and}\ \quad \frac{dy}{dt} = 0 \quad \ \mathrm{and}\ \quad \frac{dz}{dt} = 0.\]
+

This point is usually referred to as a stationary point of the system.

+

Problem Steady-states

+

Set the time derivatives equal to zero in the Lorenz equations (eq:lorenz), and solve the resulting system to show that there are three possible steady states, namely the points

+
    +
  • \((0,0,0)\),

  • +
  • \((\sqrt{\beta(\rho-1)},\sqrt{\beta(\rho -1)},\rho -1)\), and

  • +
  • \((-\sqrt{\beta (\rho -1)},-\sqrt{\beta(\rho-1)},\rho-1)\).

  • +
+

Remember that \(\rho\) is a positive real number, so that there is only one stationary point when \(0\leq \rho \leq 1\), but all three stationary points are present when \(\rho >1\).
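If you want to check your algebra afterwards, a short symbolic computation can recover the same three points (a sketch, assuming sympy; it is not needed for the derivation itself):

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
sigma, rho, beta = sp.symbols('sigma rho beta', positive=True)
# set the three right hand sides of (eq:lorenz) to zero and solve
stationary = sp.solve([sigma * (y - x),
                       rho * x - y - x * z,
                       x * y - beta * z], [x, y, z], dict=True)
for point in stationary:
    print(point)
```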

+

While working through Problem experiment, did you notice the change in behaviour of the solution as \(\rho\) passes through the value 1? If not, then go back to the interactive example and try out some values of \(\rho\) both less than and greater than 1 to see how the solution changes.

+

A steady state tells us the behaviour of the solution only at a single point. But what happens to the solution if it is perturbed slightly away from a stationary point? Will it return to the stationary point; or will it tend to move away from the point; or will it oscillate about the steady state; or something else … ? All of these questions are related to the long-term, asymptotic behaviour or stability of the solution near a given point. You already should have seen some examples of +different asymptotic solution behaviour in the Lorenz equations for different parameter values. The next section describes a general method for determining the stability of a solution near a given stationary point.

+
+
+

Linearization about the Steady States

+

The difficult part of doing any theoretical analysis of the Lorenz equations is that they are non-linear. So, why not approximate the non-linear problem by a linear one?

+

This idea should remind you of what you read about Taylor series in Lab #2. There, we were approximating a function, \(f(x)\), around a point by expanding the function in a Taylor series, and the first order Taylor approximation was simply a linear function in \(x\). The approach we will take here is similar, but will get into Taylor series of functions of more than one variable: \(f(x,y,z,\dots)\).

+

The basic idea is to replace the right hand side functions in (eq:lorenz) with a linear approximation about a stationary point, and then solve the resulting system of linear ODE’s. Hopefully, we can then say something about the non-linear system at values of the solution close to the stationary point (remember that the Taylor series is only accurate close to the point we’re expanding about).

+

So, let us first consider the stationary point \((0,0,0)\). If we linearize a function \(f(x,y,z)\) about \((0,0,0)\) we obtain the approximation:

+
+\[f(x,y,z) \approx f(0,0,0) + f_x(0,0,0) \cdot (x-0) + f_y(0,0,0) \cdot (y-0) + f_z(0,0,0) \cdot (z-0).\]
+

If we apply this formula to the right hand side function for each of the ODE’s in (eq:lorenz), then we obtain the following linearized system about \((0,0,0)\):

+\[\begin{split}\begin{aligned} \frac{dx}{dt} &= -\sigma x + \sigma y \\ \frac{dy}{dt} &= \rho x - y \\ \frac{dz}{dt} &= -\beta z \end{aligned}\end{split}\]

(note that each right hand side is now a linear function of \(x\), \(y\) and \(z\)). It is helpful to write this system in matrix form as

+\[\frac{d}{dt}\begin{pmatrix} x \\ y \\ z \end{pmatrix} = \begin{pmatrix} -\sigma & \sigma & 0 \\ \rho & -1 & 0 \\ 0 & 0 & -\beta \end{pmatrix} \begin{pmatrix} x \\ y \\ z \end{pmatrix}, \qquad \text{(eq:lorenz_linear_matrix)}\]
the reason for this being that the eigenvalues of the matrix give us valuable information about the solution to the linear system. In fact, it is a well-known result from the study of dynamical systems is that if the matrix in eq:lorenz_linear_matrix has distinct eigenvalues \(\lambda_1\), \(\lambda_2\) and \(\lambda_3\), then the solution to this equation is given by

+
+\[x(t) = c_1 e^{\lambda_1 t} + c_2 e^{\lambda_2 t} + c_3 e^{\lambda_3 t},\]
+

and similarly for the other two solution components, \(y(t)\) and \(z(t)\) (the \(c_i\)’s are constants that are determined by the initial conditions of the problem). This should not seem too surprising, if you think that the solution to the scalar equation \(dx/dt=\lambda x\) is \(x(t) = e^{\lambda t}\).

+

Problem eigenvalues:

+

Remember from Lab #3 that the eigenvalues of a matrix, \(A\), are given by the roots of the characteristic equation, \(det(A-\lambda I)=0\). Determine the characteristic equation of the matrix in eq:lorenz_linear_matrix, and show that the eigenvalues of the linearized problem are

+

eq_eigen0:
+\[\lambda_1 = -\beta, \quad \mathrm{and} \quad \lambda_2, \lambda_3 = \frac{1}{2} \left( -\sigma - 1 \pm \sqrt{(\sigma-1)^2 + 4 \sigma \rho} \right).\]

+

When \(\rho>1\), the same linearization process can be applied at the remaining two stationary points, which have eigenvalues that satisfy another characteristic equation:

+

eq_eigen01:
+\[\lambda^3+(\sigma+\beta +1)\lambda^2+(\rho+\sigma)\beta \lambda+2\sigma \beta(\rho-1)=0.\]

+

If you need a reminder about ODEs and eigenvalues, the following resources may be useful: Linear ODE review, Link between eigenvectors and ODEs, and Stability theory for ODEs.
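As a numerical sanity check on eq_eigen0, you can compare the eigenvalues of the matrix in eq:lorenz_linear_matrix with the formula above (a sketch, assuming numpy; the parameter values are examples only):

```python
import numpy as np

sigma, beta, rho = 10.0, 8.0 / 3.0, 28.0
# matrix of the Lorenz system linearized about (0, 0, 0)
A = np.array([[-sigma, sigma, 0.0],
              [rho, -1.0, 0.0],
              [0.0, 0.0, -beta]])
numeric = np.sort(np.linalg.eigvals(A).real)
# eigenvalues from eq_eigen0
disc = np.sqrt((sigma - 1) ** 2 + 4 * sigma * rho)
analytic = np.sort([-beta, 0.5 * (-sigma - 1 + disc), 0.5 * (-sigma - 1 - disc)])
print(numeric, analytic)  # the two sets should agree
```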

+
+

Stability of the Linearized Problem

+

Now that we know the eigenvalues of the system around each stationary point, we can write down the solution to the linearized problem. However, it is not the exact form of the linearized solution that we’re interested in, but rather its stability. In fact, the eigenvalues give us all the information we need to know about how the linearized solution behaves in time, and so we’ll only talk about the eigenvalues from now on.

+

It is possible that two of the eigenvalues in the characteristic equations above can be complex numbers – what does this mean for the solution? The details are a bit involved, but the important thing to realize is that if \(\lambda_2,\lambda_3=a\pm ib\) are complex (remember that complex roots always occur in conjugate pairs) then the solutions can be rearranged so that they are of the form

+
+\[x(t) = c_1 e^{\lambda_1 t} + c_2 e^{a t} \cos(bt) + c_3 e^{a t} \sin(bt).\]
+

In terms of the asymptotic stability of the problem, we need to look at the asymptotic behaviour of the solution as \(t\rightarrow \infty\), from which several conclusions can be drawn:

+
    +
  1. If the eigenvalues are real and negative, then the solution will go to zero as \(t \rightarrow\infty\). In this case the linearized solution is stable.

  2. +
  3. If the eigenvalues are real, and at least one is positive, then the solution will blow up as \(t \rightarrow\infty\). In this case the linearized solution is unstable.

  4. +
  5. If there is a complex conjugate pair of eigenvalues, \(a\pm ib\), then the solution exhibits oscillatory behaviour (with the appearance of the terms \(\sin{bt}\) and \(\cos{bt}\)). If the real part, \(a\), of all eigenvalues is negative, the oscillations will decay in time and the solution is stable; if the real part is positive, then the oscillations will grow, and the solution is unstable. If the complex eigenvalues have zero real part, then the oscillations will neither +decay nor increase in time – the resulting linearized problem is periodic, and we say the solution is marginally stable.

  6. +
+
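A small helper that encodes the three cases above can be handy when you examine eigenvalues numerically (a sketch only, not part of the lab code):

```python
import numpy as np

def classify_stability(eigvals, tol=1e-12):
    """Classify a stationary point of the linearized system from its eigenvalues."""
    real_parts = np.real(np.asarray(eigvals, dtype=complex))
    if np.all(real_parts < -tol):
        return "stable"          # all eigenvalues have negative real part
    if np.any(real_parts > tol):
        return "unstable"        # at least one eigenvalue has positive real part
    return "marginally stable"   # largest real part is (numerically) zero
```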

Now, an important question:

+

Does the stability of the non-linear system parallel that of the linearized systems near the stationary points?

+

The answer is “almost always”. We won’t go into why, or why not, but just remember that you can usually expect the non-linear system to behave just as the linearized system near the stationary states.

+

The discussion of stability of the stationary points for the Lorenz equations will be divided up based on values of the parameter \(\rho\) (assuming \(\sigma=10\) and \(\beta=\frac{8}{3}\)). You’ve already seen that the behaviour of the solution changes significantly, by the appearance of two additional stationary points, when \(\rho\) passes through the value 1. You’ll also see an explanation for the rest of the behaviour you observed:

+

\(0<\rho<1\):

+
    +
  • there is only one stationary state, namely the point \((0,0,0)\). You can see from eq:eigen0 that for these values of \(\rho\), there are three, real, negative roots. The origin is a stable stationary point; that is, it attracts nearby solutions to itself.

  • +
+

\(\rho>1\):

+
    +
  • The origin has one positive, and two negative, real eigenvalues. Hence, the origin is unstable. Now, we need only look at the other two stationary points, whose behaviour is governed by the roots of eq:eigen01

  • +
+

\(1<\rho<\frac{470}{19}\):

+
    +
  • The other two stationary points have eigenvalues that have negative real parts. So these two points are stable.

    +

    It’s also possible to show that two of these eigenvalues are real when \(\rho<1.346\), and they are complex otherwise (see Sparrow 1982 for a more complete discussion). Therefore, the solution begins to exhibit oscillatory behaviour for values of \(\rho\) greater than about 1.346.

    +
  • +
+

\(\rho>\frac{470}{19}\):

+
    +
  • The other two stationary points have one real, negative eigenvalue, and two complex eigenvalues with positive real part. Therefore, these two points are unstable. In fact, all three stationary points are unstable for these values of \(\rho\).

  • +
+

The stability of the stationary points is summarized in the table below.

\(\rho\) range | \((0,0,0)\) | \((\pm\sqrt{\beta(\rho-1)},\pm\sqrt{\beta(\rho-1)},\rho-1)\)
\(0<\rho<1\) | stable | \(-\)
\(1<\rho<\frac{470}{19}\) | unstable | stable
\(\rho>\frac{470}{19}\) | unstable | unstable

+

Summary of the stability of the stationary points for the Lorenz equations; parameters \(\sigma=10\), \(\beta=\frac{8}{3}\).

+

This “critical value” of \(\rho^\ast= \frac{470}{19}\) is actually found using the formula

+
+\[\rho^\ast= \frac{\sigma(\sigma+\beta+3)}{\sigma-\beta-1}.\]
+

See Sparrow (1982) for more details.
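You can see this critical value at work by finding the roots of eq_eigen01 numerically just below and just above \(\rho^\ast\) (a sketch, assuming numpy; it could equally well feed the classify_stability helper above):

```python
import numpy as np

sigma, beta = 10.0, 8.0 / 3.0
rho_star = sigma * (sigma + beta + 3) / (sigma - beta - 1)   # 470/19, about 24.74
for rho in (rho_star - 1.0, rho_star + 1.0):
    # coefficients of eq_eigen01 in descending powers of lambda
    coeffs = [1.0, sigma + beta + 1.0, (rho + sigma) * beta, 2.0 * sigma * beta * (rho - 1.0)]
    roots = np.roots(coeffs)
    print(f"rho = {rho:6.2f}  largest real part = {roots.real.max():+.4f}")
# the largest real part changes sign as rho crosses rho*
```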

+

A qualitative change in the behaviour of the solution when a parameter is varied is called a bifurcation. Bifurcations occur at:

+
    +
  • \(\rho=1\), when the origin switches from stable to unstable, and two more stationary points appear.

  • +
  • \(\rho=\rho^\ast\), where the remaining two stationary points switch from being stable to unstable.

  • +
+

Remember that the linear results apply only near the stationary points, and do not apply to all of the phase space. Nevertheless, the behaviour of the orbits near these points can still say quite a lot about the behaviour of the solutions.

+

Problem Stability

+

Based on the analytical results from this section, you can now go back to your results from Problem Experiment and look at them in a new light. Write a short summary of your results (including a few plots or sketches), describing how the solution changes with \(\rho\) in terms of the existence and stability of the stationary points.

+

There have already been hints at problems with the linear stability analysis. One difficulty that hasn’t been mentioned yet is that for values of \(\rho>\rho^\ast\), the problem has oscillatory solutions, which are unstable. Linear theory does not reveal what happens when these oscillations become large! In order to study more closely the long-time behaviour of the solution, we must turn to numerical integration (in fact, all of the plots you produced in Problem [lab6:prob:experiment] were +generated using a numerical code).

+
+
+
+

Numerical Integration

+

In Lorenz’ original paper, he discusses the application of the forward Euler and leap frog time-stepping schemes, but his actual computations are done using the second order Heun’s method (you were introduced to this method in Lab #4). Since we already have a lot of experience with Runge-Kutta methods for systems of ODE’s from earlier labs, you’ll be using this approach to solve the Lorenz equations as well. You already have a code from Lab #5 that solves the Daisy World equations, so you can jump right into the programming for the Lorenz equations with the following exercises …

+

Problem Adaptive

+

You saw in Lab #5 that adaptive time-stepping saved a considerable amount of computing time for the Daisy World problem. In this problem, you will be investigating whether or not an adaptive Runge-Kutta code is the best choice for the Lorenz equations.

+

Use the Integ61 object to compute both adaptive and fixed timeloop solutions for an extended integration. Compare the number of time steps taken (plot the time step vs. the integration time for both methods). Which method is more efficient? Which is fastest? A simple way to time a portion of a script is to use the time module to calculate the elapsed time:

+
import time
+tic = time.time()
+##program here
+elapsed = time.time() - tic
+
+
+

To answer this last question, you will have to consider the cost of the adaptive scheme, compared to the non-adaptive one. The adaptive scheme is obviously more expensive, but by how much? You should think in terms of the number of multiplicative operations that are required in every time step for each method. You don’t have to give an exact operation count, round figures will do.

+

Optional extra: Finally, we mentioned that the code that produced the animation uses a compiled module called odeint. It is called there using derivatives defined in lorenz_deriv. Use odeint to solve the same problem you did for the fixed and adaptive timeloops. What is the speed increase you see by using the compiled module?
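A minimal sketch of the scipy call, assuming the same parameter values and initial condition as the fixed time-step example above (the lorenz_deriv function here is written out rather than imported):

```python
import numpy as np
from scipy.integrate import odeint

def lorenz_deriv(coords, t, sigma=10.0, beta=8.0 / 3.0, rho=28.0):
    x, y, z = coords
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

tvals = np.arange(0.0, 27.0, 0.01)
traj = odeint(lorenz_deriv, [5.0, 5.0, 5.0], tvals)   # shape (len(tvals), 3)
```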

+

Problem Sensitivity

+

One property of chaotic systems such as the Lorenz equations is their sensitivity to initial conditions – a consequence of the “butterfly effect.” Modify your code from Problem adaptive to compute two trajectories (in the chaotic regime \(\rho>\rho^\ast\)) with different initial conditions simultaneously. Use two initial conditions that are very close to each other, say \((1,1,20)\) and \((1,1,20.001)\). Use your “method of choice” (adaptive/non-adaptive), and plot the distance between the two trajectories as a function of time. What do you see?
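For the distance plot, a small helper like the following may be useful (a sketch; it assumes each trajectory is an array of shape (ntimes, 3), such as the coords array returned by timeloop5fixed):

```python
import numpy as np

def trajectory_distance(coords_a, coords_b):
    """Euclidean distance between two trajectories at each time step."""
    return np.linalg.norm(coords_a - coords_b, axis=1)

# e.g. plt.semilogy(timevals, trajectory_distance(coords_a, coords_b))
```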

+

One important limitation of numerical methods is immediately evident when approximating non-periodic dynamical systems such as the Lorenz equations: namely, every computed solution is periodic. That is, when we’re working in floating point arithmetic, there are only finitely many numbers that can be represented, and the solution must eventually repeat itself. When using single precision arithmetic, a typical computer can represent many more floating point numbers than we could ever perform +integration steps in a numerical scheme. However, it is still possible that round-off error might introduce a periodic orbit in the numerical solution where one does not really exist. In our computations, this will not be a factor, but it is something to keep in mind.

+
+
+

Other Chaotic Systems

+

There are many other ODE systems that exhibit chaos. An example is one studied by Rössler, which obeys a similar-looking system of three ODE’s:

+
+\[\begin{split}\begin{aligned} \dot{x} &= -y-z \\ \dot{y} &= x+ay \\ \dot{z} &= b+z(x-c) \end{aligned}\end{split}\]
+

Suppose that \(b=2\), \(c=4\), and consider the behaviour of the attractor as \(a\) is varied. When \(a\) is small, the attractor is a simple closed curve. As \(a\) is increased, however, this splits into a double loop, then a quadruple loop, and so on. Thus, a type of period-doubling takes place, and when \(a\) reaches about 0.375, there is a fractal attractor in the form of a band, that looks something like what is known in mathematical circles as a Möbius strip.

+

If you’re really keen on this topic, you might be interested in using your code to investigate the behaviour of this system of equations, though you are not required to hand anything in for this!

+

First, you could perform a stability analysis for ([lab6:eq:rossler]), like you saw above for the Lorenz equations. Then, modify your code to study the Rössler attractor. Use the code to compare your analytical stability results to what you actually see in the computations.
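If you do try this, the only substantive change to the integration code is the right hand side. Here is a sketch written as a plain function in the same style as derivs5 above (the parameter values follow the suggestions in this section, with \(a\) chosen in the chaotic range; they are examples only):

```python
import numpy as np

def rossler_derivs(coords, t, a=0.375, b=2.0, c=4.0):
    """Right hand side of the Rossler system."""
    x, y, z = coords
    f = np.empty_like(coords)
    f[0] = -y - z
    f[1] = x + a * y
    f[2] = b + z * (x - c)
    return f
```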

+
+
+

Summary

+

In this lab, you have had the chance to investigate the solutions to the Lorenz equations and their stability in quite some detail. You saw that for certain parameter values, the solution exhibits non-periodic, chaotic behaviour. The question to ask ourselves now is: What does this system tell us about the dynamics of flows in the atmosphere? In fact, this system has been simplified so much that it is no longer an accurate model of the physics in the atmosphere. However, we have seen that the +four characteristics of flows in the atmosphere (mentioned in the Introduction) are also present in the Lorenz equations.

+

Each state in Lorenz’ idealized “climate” is represented by a single point in phase space. For a given set of initial conditions, the evolution of a trajectory describes how the weather varies in time. The butterfly attractor embodies all possible weather conditions that can be attained in the Lorenzian climate. By changing the value of the parameter \(\rho\) (and, for that matter, \(\sigma\) or \(\beta\)), the shape of the attractor changes. Physically, we can interpret this as a +change in some global property of the weather system resulting in a modification of the possible weather states.

+

The same methods of analysis can be applied to more complicated models of the weather. One can imagine a model where the depletion of ozone and the increased concentration of greenhouse gases in the atmosphere might be represented by certain parameters. Changes in these parameters result in changes in the shape of the global climate attractor for the system. By studying the attractor, we could determine whether any new, and possibly devastating, weather states are present in this new +ozone-deficient atmosphere.

+

We began by saying in the Introduction that the butterfly effect made accurate long-term forecasting impossible. Nevertheless, it is still possible to derive meaningful qualitative information from such a chaotic dynamical system.

+
+
+

A. Mathematical Notes

+
+

A.1 The Lorenzian Water Wheel Model

+

This derivation is adapted from Sparrow  [Appendix B].

+
+
[ ]:
+
+
+
Image(filename="images/water-wheel.png")
+
+
+
+

Figure: The Lorenzian water wheel.

+

Imagine a wheel which is free to rotate about a horizontal axis, as depicted in Figure water-wheel.

+

To the circumference of the wheel is attached a series of leaky buckets. Water flows into the buckets at the top of the wheel, and as the buckets are filled with water, the wheel becomes unbalanced and begins to rotate. Depending on the physical parameters in this system, the wheel may remain motionless, rotate steadily in a clockwise or counter-clockwise direction, or reverse its motion at irregular intervals. This should begin to remind you of the type of behaviour exhibited in the Lorenz system for various parameters.

+

The following are the variables and parameters in the system:

+

\(r\): the radius of the wheel (constant),

+

\(g\): the acceleration due to gravity (constant),

+

\(\theta(t)\): the angular displacement (not a fixed point on the wheel) (unknown),

+

\(m(\theta,t)\): the mass of the water per unit arc, which we assume is a continuous function of the angle (unknown),

+

\(\Omega(t)\): the angular velocity of the wheel,

+

We also make the following assumptions:

+
    +
  • water is added to the wheel at a constant rate.

  • +
  • the points on the circumference of the wheel gain water at a rate proportional to their height.

  • +
  • water leaks out at a rate proportional to \(m\).

  • +
  • there is frictional damping in the wheel proportional to the angular velocity, \(k \Omega\),

  • +
  • \(A\), \(B\), \(h\) are additional positive constants.

  • +
+

We’ll pass over some details here, and go right to the equations of motion. The equation describing the evolution of the angular momentum is

+
+\[\frac{d\Omega}{dt} = -k \Omega - \left( \frac{gh}{2\pi aA} \right) m \cos\theta.\]
+

The requirement of conservation of mass in the system leads to two equations

+
+\[\frac{d (m \sin\theta)}{dt} = \Omega m \cos\theta - h m \sin\theta + 2\pi B\]
+

and

+
+\[\frac{d (m \cos\theta)}{dt} = -\Omega m \sin\theta - h m \cos\theta,\]
+

(where all variables dependent on the angle have been averaged over \(\theta\)).

+

Using a suitable change of variables, these three equations can be written in the same form as the Lorenz equations (with \(\beta=1\)).

+
+
+
+

B. References

+

Gleick, J., 1987: Chaos: Making a New Science. Penguin Books.

+

Lorenz, E. N., 1963: Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20, 130–141.

+

Palmer, T., 1993: A weather eye on unpredictability. in N. Hall, editor, Exploring Chaos: A Guide to the New Science of Disorder, chapter 6. W. W. Norton & Co.

+

Saltzman, B., 1962: Finite amplitude free convection as an initial value problem – I. Journal of the Atmospheric Sciences, 19, 329–341.

+

Sparrow, C., 1982: The Lorenz Equations: Bifurcations, Chaos, and Strange Attractors. volume 41 of Applied Mathematical Sciences. Springer-Verlag.

+
+
[ ]:
+
+
+

+
+
+
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab6/01-lab6.ipynb b/notebooks/lab6/01-lab6.ipynb new file mode 100644 index 0000000..74f0c14 --- /dev/null +++ b/notebooks/lab6/01-lab6.ipynb @@ -0,0 +1,1241 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "lines_to_next_cell": 0 + }, + "source": [ + "# Lab 6: The Lorenz equations" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems \n", + "\n", + "[Problem Experiment: Investigation of the behaviour of solutions](#prob_experiment)\n", + "\n", + "[Problem Steady-states: Find the stationary points of the Lorenz system](#prob_steady-states)\n", + "\n", + "[Problem Eigenvalues: Find the eigenvalues of the stationary point (0,0,0)](#prob_eigenvalues)\n", + "\n", + "[Problem Stability: Discuss the effect of r on the stability of the solution](#prob_stability)\n", + "\n", + "[Problem Adaptive: Adaptive time-stepping for the Lorenz equations](#prob_adaptive)\n", + "\n", + "[Problem Sensitivity: Sensitivity to initial conditions](#prob_sensitivity)\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "## Objectives \n", + "\n", + "In this lab, you will investigate the transition to chaos in the Lorenz\n", + "equations – a system of non-linear ordinary differential equations.\n", + "Using interactive examples, and analytical and numerical techniques, you\n", + "will determine the stability of the solutions to the system, and\n", + "discover a rich variety in their behaviour. You will program both an\n", + "adaptive and non-adaptive Runge-Kuttan code for the problem, and\n", + "determine the relative merits of each.\n", + "\n", + "
\n", + "\n", + "## Readings\n", + "\n", + "There is no required reading for this lab, beyond the contents of the\n", + "lab itself. Nevertheless, the original 1963 paper by Lorenz  is\n", + "worthwhile reading from a historical standpoint.\n", + "\n", + "If you would like additional background on any of the following topics,\n", + "then refer to Appendix B for the following:\n", + "\n", + "- **Easy Reading:**\n", + "\n", + " - Gleick  (1987) [pp. 9-31], an interesting overview of the\n", + " science of chaos (with no mathematical details), and a look at\n", + " its history.\n", + "\n", + " - Palmer (1993) has a short article on Lorenz’ work and\n", + " concentrating on its consequences for weather prediction.\n", + "\n", + "- **Mathematical Details:**\n", + "\n", + " - Sparrow (1982), an in-depth treatment of the mathematics\n", + " behind the Lorenz equations, including some discussion of\n", + " numerical methods.\n", + " \n", + " - The original equations by Saltzman (1962) and the\n", + " first Lorentz (1963) paper on the computation.\n", + "\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "
\n", + "\n", + "## Introduction \n", + "\n", + "\n", + "For many people working in the physical sciences, the *butterfly\n", + "effect* is a well-known phrase. But even if you are unacquainted\n", + "with the term, its consequences are something you are intimately\n", + "familiar with. Edward Lorenz investigated the feasibility of performing\n", + "accurate, long-term weather forecasts, and came to the conclusion that\n", + "*even something as seemingly insignificant as the flap of a\n", + "butterfly’s wings can have an influence on the weather on the other side\n", + "of the globe*. This implies that global climate modelers must\n", + "take into account even the tiniest of variations in weather conditions\n", + "in order to have even a hope of being accurate. Some of the models used\n", + "today in weather forecasting have up to *a million unknown\n", + "variables!*\n", + "\n", + "With the advent of modern computers, many people believed that accurate\n", + "predictions of systems as complicated as the global weather were\n", + "possible. Lorenz’ studies (Lorenz, 1963), both analytical and numerical, were\n", + "concerned with simplified models for the flow of air in the atmosphere.\n", + "He found that even for systems with considerably fewer variables than\n", + "the weather, the long-term behaviour of solutions is intrinsically\n", + "unpredictable. He found that this type of non-periodic, or\n", + "*chaotic* behaviour, appears in systems that are described\n", + "by non-linear differential equations.\n", + "\n", + "The atmosphere is just one of many hydrodynamical systems, which exhibit\n", + "a variety of solution behaviour: some flows are steady; others oscillate\n", + "between two or more states; and still others vary in an irregular or\n", + "haphazard manner. This last class of behaviour in a fluid is known as\n", + "*turbulence*, or in more general systems as\n", + "*chaos*. Examples of chaotic behaviour in physical systems\n", + "include\n", + "\n", + "- thermal convection in a tank of fluid, driven by a heated plate on\n", + " the bottom, which displays an irregular patter of “convection rolls”\n", + " for certain ranges of the temperature gradient;\n", + "\n", + "- a rotating cylinder, filled with fluid, that exhibits\n", + " regularly-spaced waves or irregular, nonperiodic flow patterns under\n", + " different conditions;\n", + "\n", + "- the Lorenzian water wheel, a mechanical system, described in\n", + " [Appendix A](#sec_water-wheel).\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "One of the simplest systems to exhibit chaotic behaviour is a system of\n", + "three ordinary differential equations, studied by Lorenz, and which are\n", + "now known as the *Lorenz equations* (see\n", + "equations ([eq: lorentz](#eq_lorentz)). They are an idealization of\n", + "a more complex hydrodynamical system of twelve equations describing\n", + "turbulent flow in the atmosphere, but which are still able to capture\n", + "many of the important aspects of the behaviour of atmospheric flows. The\n", + "Lorenz equations determine the evolution of a system described by three\n", + "time-dependent state variables, $x(t)$, $y(t)$ and $z(t)$. The state in\n", + "Lorenz’ idealized climate at any time, $t$, can be given by a single\n", + "point, $(x,y,z)$, in *phase space*. As time varies, this\n", + "point moves around in the phase space, and traces out a curve, which is\n", + "also called an *orbit* or *trajectory*. 
\n", + "\n", + "The video below shows an animation of the 3-dimensional phase space trajectories\n", + "of $x, y, z$ for the Lorenz equations presented below. It is calculated with\n", + "the python script by written by Jake VanderPlas: [lorenz_ode.py](https://github.com/rhwhite/numeric_2022/blob/main/notebooks/lab6/lorenz_ode.py)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:19:12.919866Z", + "start_time": "2022-02-28T17:19:12.683509Z" + } + }, + "outputs": [], + "source": [ + "from IPython.display import YouTubeVideo\n", + "YouTubeVideo('DDcCiXLAk2U')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "\n", + "## Using the Integrator class" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " [lorenz_ode.py](https://github.com/rhwhite/numeric_2022/blob/main/notebooks/lab6/lorenz_ode.py) uses the\n", + " odeint package from scipy. That's fine if we are happy with a black box, but we\n", + " can also use the Integrator class from lab 5. Here is the sub-class Integrator61 \n", + " that is specified for the Lorenz equations:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:19:32.024832Z", + "start_time": "2022-02-28T17:19:31.720112Z" + } + }, + "outputs": [], + "source": [ + "import context\n", + "from numlabs.lab5.lab5_funs import Integrator\n", + "from collections import namedtuple\n", + "import numpy as np\n", + "\n", + "\n", + "\n", + "class Integ61(Integrator):\n", + "\n", + " def __init__(self, coeff_file_name,initvars=None,uservars=None,\n", + " timevars=None):\n", + " super().__init__(coeff_file_name)\n", + " self.set_yinit(initvars,uservars,timevars)\n", + "\n", + " def set_yinit(self,initvars,uservars,timevars):\n", + " #\n", + " # read in 'sigma beta rho', override if uservars not None\n", + " #\n", + " if uservars:\n", + " self.config['uservars'].update(uservars)\n", + " uservars = namedtuple('uservars', self.config['uservars'].keys())\n", + " self.uservars = uservars(**self.config['uservars'])\n", + " #\n", + " # read in 'x y z'\n", + " #\n", + " if initvars:\n", + " self.config['initvars'].update(initvars)\n", + " initvars = namedtuple('initvars', self.config['initvars'].keys())\n", + " self.initvars = initvars(**self.config['initvars'])\n", + " #\n", + " # set dt, tstart, tend if overiding base class values\n", + " #\n", + " if timevars:\n", + " self.config['timevars'].update(timevars)\n", + " timevars = namedtuple('timevars', self.config['timevars'].keys())\n", + " self.timevars = timevars(**self.config['timevars'])\n", + " self.yinit = np.array(\n", + " [self.initvars.x, self.initvars.y, self.initvars.z])\n", + " self.nvars = len(self.yinit)\n", + "\n", + " def derivs5(self, coords, t):\n", + " x,y,z = coords\n", + " u=self.uservars\n", + " f=np.empty_like(coords)\n", + " f[0] = u.sigma * (y - x)\n", + " f[1] = x * (u.rho - z) - y\n", + " f[2] = x * y - u.beta * z\n", + " return f" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The main difference with daisyworld is that I've changed the ```__init__``` function to\n", + "take optional arguments to take initvars, uservars and timevars, to give us\n", + "more flexibility in overriding the default configuration specified in\n", + "[lorenz.yaml](./lorenz.yaml)\n", + "\n", + "I also want to be able to plot the trajectories in 3d, which means that I\n", + "need the Axes3D class from matplotlib. 
I've written a convenience function\n", + "called plot_3d that sets start and stop points and the viewing angle:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:20:22.889548Z", + "start_time": "2022-02-28T17:20:21.819508Z" + } + }, + "outputs": [], + "source": [ + "import warnings\n", + "warnings.simplefilter(action = \"ignore\", category = FutureWarning)\n", + "%matplotlib inline\n", + "from matplotlib import pyplot as plt\n", + "from mpl_toolkits.mplot3d import Axes3D\n", + "plt.style.use('ggplot')\n", + "\n", + "def plot_3d(ax,xvals,yvals,zvals):\n", + " \"\"\"\n", + " plot a 3-d trajectory with start and stop markers\n", + " \"\"\"\n", + " line,=ax.plot(xvals,yvals,zvals,'r-')\n", + " ax.set_xlim((-20, 20))\n", + " ax.set_ylim((-30, 30))\n", + " ax.set_zlim((5, 55))\n", + " ax.grid(True)\n", + " #\n", + " # look down from 30 degree elevation and an azimuth of\n", + " #\n", + " ax.view_init(30,5)\n", + " line,=ax.plot(xvals,yvals,zvals,'r-')\n", + " ax.plot([-20,15],[-30,-30],[0,0],'k-')\n", + " ax.scatter(xvals[0],yvals[0],zvals[0],marker='o',c='green',s=75)\n", + " ax.scatter(xvals[-1],yvals[-1],zvals[-1],marker='^',c='blue',s=75)\n", + " out=ax.set(xlabel='x',ylabel='y',zlabel='z')\n", + " line.set(alpha=0.2)\n", + " return ax\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the code below I set timevars, uservars and initvars\n", + "to illustrate a sample orbit in phase\n", + "space (with initial value $(5,5,5)$). Notice that the orbit appears to\n", + "be lying in a surface composed of two “wings”. In fact, for the\n", + "parameter values used here, all orbits, no matter the initial\n", + "conditions, are eventually attracted to this surface; such a surface is\n", + "called an *attractor*, and this specific one is termed the\n", + "*butterfly attractor* … a very fitting name, both for its\n", + "appearance, and for the fact that it is a visualization of solutions\n", + "that exhibit the “butterfly effect.” The individual variables are\n", + "plotted versus time in [Figure xyz-vs-t](#fig_xyz-vs-t)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:20:28.951870Z", + "start_time": "2022-02-28T17:20:28.258149Z" + } + }, + "outputs": [], + "source": [ + "#\n", + "# make a nested dictionary to hold parameters\n", + "#\n", + "timevars=dict(tstart=0,tend=27,dt=0.01)\n", + "uservars=dict(sigma=10,beta=2.6666,rho=28)\n", + "initvars=dict(x=5,y=5,z=5)\n", + "params=dict(timevars=timevars,uservars=uservars,initvars=initvars)\n", + "#\n", + "# expand the params dictionary into key,value pairs for\n", + "# the Integ61 constructor using dictionary expansion\n", + "#\n", + "theSolver = Integ61('lorenz.yaml',**params)\n", + "timevals, coords, errorlist = theSolver.timeloop5fixed()\n", + "xvals,yvals,zvals=coords[:,0],coords[:,1],coords[:,2]\n", + "\n", + "\n", + "fig = plt.figure(figsize=(6,6))\n", + "ax = fig.add_axes([0, 0, 1, 1], projection='3d')\n", + "ax=plot_3d(ax,xvals,yvals,zvals)\n", + "out=ax.set(title='starting point: {},{},{}'.format(*coords[0,:]))\n", + "#help(ax.view_init)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "A plot of the solution to the Lorenz equations as an orbit in phase\n", + "space. Parameters: $\\sigma=10$, $\\beta=\\frac{8}{3}$, $\\rho=28$; initial values:\n", + "$(x,y,z)=(5,5,5)$." 
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:20:33.127958Z", + "start_time": "2022-02-28T17:20:32.922941Z" + } + }, + "outputs": [], + "source": [ + "fig,ax = plt.subplots(1,1,figsize=(8,6))\n", + "ax.plot(timevals,xvals,label='x')\n", + "ax.plot(timevals,yvals,label='y')\n", + "ax.plot(timevals,zvals,label='z')\n", + "ax.set(title='x, y, z for trajectory',xlabel='time')\n", + "out=ax.legend()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "**Figure xyz-vs-t**: A plot of the solution to the Lorenz equations versus time.\n", + "Parameters: $\\sigma=10$, $\\beta=\\frac{8}{3}$, $\\rho=28$; initial values:\n", + "$(x,y,z)=(5,5,5)$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "As you saw in the movie, the behaviour of the solution, even though it\n", + "seems to be confined to a specific surface, is anything but regular. The\n", + "solution seems to loop around and around forever, oscillating around one\n", + "of the wings, and then jump over to the other one, with no apparent\n", + "pattern to the number of revolutions. This example is computed for just\n", + "one choice of parameter values, and you will see in the problems later\n", + "on in this lab, that there are many other types of solution behaviour.\n", + "In fact, there are several very important characteristics of the\n", + "solution to the Lorenz equations which parallel what happens in much\n", + "more complicated systems such as the atmosphere:\n", + "\n", + "1. The solution remains within a bounded region (that is, none of the\n", + " values of the solution “blow up”), which means that the solution\n", + " will always be physically reasonable.\n", + "\n", + "2. The solution flips back and forth between the two wings of the\n", + " butterfly diagram, with no apparent pattern. This “strange” way that\n", + " the solution is attracted towards the wings gives rise to the name\n", + " *strange attractor*.\n", + "\n", + "3. The resulting solution depends very heavily on the given initial\n", + " conditions. Even a very tiny change in one of the initial values can\n", + " lead to a solution which follows a totally different trajectory, if\n", + " the system is integrated over a long enough time interval.\n", + "\n", + "4. The solution is irregular or *chaotic*, meaning that it\n", + " is impossible, based on parameter values and initial conditions\n", + " (which may contain small measurement errors), to predict the\n", + " solution at any future time.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "
\n", + "## The Lorenz Equations \n", + "\n", + "\n", + "As mentioned in the previous section, the equations we will be\n", + "considering in this lab model an idealized hydrodynamical system:\n", + "two-dimensional convection in a tank of water which is heated at the\n", + "bottom (as pictured in [Figure Convection](#fig_convection) below).\n", + "\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T17:31:16.352041Z", + "start_time": "2022-02-28T17:31:16.339061Z" + } + }, + "outputs": [], + "source": [ + "from IPython.display import Image\n", + "Image(filename=\"images/convection.png\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Figure Convection** Lorenz studied the flow of fluid in a tank heated at the bottom, which\n", + "results in “convection rolls”, where the warm fluid rises, and the cold\n", + "fluid is drops to the bottom.\n", + "\n", + "Lorenz wrote the equations in the form\n", + "\n", + "\n", + "\n", + "
\n", + "$$\n", + "\\begin{aligned}\n", + " \\frac{dx}{dt} &=& \\sigma(y-x) \\\\\n", + " \\frac{dy}{dt} &=& \\rho x-y-xz \\\\\n", + " \\frac{dz}{dt} &=& xy-\\beta z \n", + "\\end{aligned}\n", + "$$\n", + "where $\\sigma$, $\\rho$\n", + "and $\\beta$ are real, positive parameters. The variables in the problem can\n", + "be interpreted as follows:\n", + "\n", + "- $x$ is proportional to the intensity of the convective motion (positive\n", + " for clockwise motion, and a larger magnitude indicating more\n", + " vigorous circulation),\n", + "\n", + "- $y$ is proportional to the temperature difference between the ascending\n", + " and descending currents (it’s positive if the warm water is on the\n", + " bottom),\n", + "\n", + "- $z$ is proportional to the distortion of the vertical temperature\n", + " profile from linearity (a value of 0 corresponds to a linear\n", + " gradient in temperature, while a positive value indicates that the\n", + " temperature is more uniformly mixed in the middle of the tank and\n", + " the strongest gradients occur near the boundaries),\n", + "\n", + "- $t$ is the dimensionless time,\n", + "\n", + "- $\\sigma$ is called the Prandtl number (it involves the viscosity and thermal\n", + " conductivity of the fluid),\n", + "\n", + "- $\\rho$ is a control parameter, representing the temperature difference\n", + " between the top and bottom of the tank, and\n", + "\n", + "- $\\beta$ measures the width-to-height ratio of the convection layer." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that these equations are *non-linear* in $x$, $y$\n", + "and $z$, which is a result of the non-linearity of the fluid flow\n", + "equations from which this simplified system is obtained.\n", + "\n", + "**Mathematical Note**: This system of equations is derived by Saltzman (1962) for the\n", + "thermal convection problem. However, the same\n", + "equations ([eq:lorenz](#eq_lorenz)) arise in other physical\n", + "systems as well. One example is the whose advantage over the original\n", + "derivation by Saltzman (which is also used in Lorenz’ 1963 paper ) is\n", + "that the system of ODEs is obtained *directly from the\n", + "physics*, rather than as an approximation to a partial\n", + "differential equation.\n", + "\n", + "Remember from [Section Introduction](#sec_introduction) that the Lorenz equations exhibit\n", + "nonperiodic solutions which behave in a chaotic manner. Using analytical\n", + "techniques, it is actually possible to make some qualitative predictions\n", + "about the behaviour of the solution before doing any computations.\n", + "However, before we move on to a discussion of the stability of the\n", + "problem in Section [lab6:sec:stability], you should do the following\n", + "exercise, which will give you a hands-on introduction to the behaviour\n", + "of solutions to the Lorenz equations.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Boundedness of the Solution\n", + "\n", + "\n", + "The easiest way to see that the solution is bounded in time is by\n", + "looking at the motion of the solution in phase space, $(x,y,z)$, as the\n", + "flow of a fluid, with velocity $(\\dot{x}, \\dot{y}, \\dot{z})$ (the “dot”\n", + "is used to represent a time derivative, in order to simplify notation in\n", + "what follows). The *divergence of this flow* is given by\n", + "$$\\frac{\\partial \\dot{x}}{\\partial x} +\n", + " \\frac{\\partial \\dot{y}}{\\partial y} +\n", + " \\frac{\\partial \\dot{z}}{\\partial z},$$ \n", + "and measures how the volume of\n", + "a fluid particle or parcel changes – a positive divergence means that\n", + "the fluid volume is increasing locally, and a negative volume means that\n", + "the fluid volume is shrinking locally (zero divergence signifies an\n", + "*incompressible fluid*, which you will see more of in and\n", + "). If you look back to the Lorenz\n", + "equations ([eq:lorenz](#eq_lorenz)), and take partial derivatives,\n", + "it is clear that the divergence of this flow is given by\n", + "$$\n", + "\\frac{\\partial \\dot{x}}{\\partial x} +\n", + "\\frac{\\partial \\dot{y}}{\\partial y} +\n", + "\\frac{\\partial \\dot{z}}{\\partial z} = -(\\sigma + b + 1).\n", + "$$\n", + "Since\n", + "$\\sigma$ and $b$ are both positive, real constants, the divergence is a\n", + "negative number, which is always less than $-1$. Therefore, each small\n", + "volume shrinks to zero as the time $t\\rightarrow\\infty$, at a rate which\n", + "is independent of $x$, $y$ and $z$. The consequence for the solution,\n", + "$(x,y,z)$, is that every trajectory in phase space is eventually\n", + "confined to a region of zero volume. As you saw in\n", + "[Problem experiment](#prob_experiment), this region, or\n", + "*attractor*, need not be a point – in fact, the two wings\n", + "of the “butterfly diagram” are a surface with zero volume.\n", + "\n", + "The most important consequence of the solution being bounded is that\n", + "none of the physical variables, $x$, $y$, or $z$ “blows up.”\n", + "Consequently, we can expect that the solution will remain with\n", + "physically reasonable limits.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "[Problem Experiment](#prob_experiment) \n", + "\n", + "Lorenz’ results are based on the following values of the physical parameters taken from Saltzman’s paper (Saltzman, 1962): $$\\sigma=10 \\quad \\mathrm{and} \\quad b=\\frac{8}{3}.$$ \n", + "As you will see in [Section stability](#sec_stability), there is a *critical value of the parameter $\\rho$*, $\\rho^\\ast=470/19\\approx 24.74$ (for these values of $\\sigma$ and $\\beta$); it is *critical* in the sense that for\n", + "any value of $\\rho>\\rho^\\ast$, the flow is unstable.\n", + "\n", + "To allow you to investigate the behaviour of the solution to the Lorenz\n", + "equations, you can try out various parameter values in the following\n", + "interactive example. *Initially, leave $\\sigma$ and $\\beta$ alone, and\n", + "modify only $\\rho$ and the initial conditions.* If you have time,\n", + "you can try varying the other two parameters, and see what happens. Here\n", + "are some suggestions:\n", + "\n", + "- Fix the initial conditions at $(5,5,5)$ and vary $\\rho$ between $0$ and\n", + " $100$.\n", + "\n", + "- Fix $\\rho=28$, and vary the initial conditions; for example, try\n", + " $(0,0,0)$, $(0.1,0.1,0.1)$, $(0,0,20)$, $(100,100,100)$,\n", + " $(8.5,8.5,27)$, etc.\n", + "\n", + "- Anything else you can think of …\n", + "\n", + "1. Describe the different types of behaviour you see and compare them\n", + " to what you saw in [Figure fixed-plot](#fig_fixed-plot). Also, discuss the\n", + " results in terms of what you read in [Section Introduction](#sec_introduction)\n", + " regarding the four properties of the solution.\n", + "\n", + "2. One question you should be sure to ask yourself is: *Does\n", + " changing the initial condition affect where the solution ends\n", + " up?* The answer to this question will indicate whether there\n", + " really is an attractor which solutions approach as\n", + " $t\\rightarrow\\infty$.\n", + "\n", + "3. Finally, for the different types of solution behaviour, can you\n", + " interpret the results physically in terms of the thermal convection\n", + " problem?\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we’re ready to find out why the solution behaves as it does. In [Section Intro](#sec_introduction), you were told about four properties of solutions to the Lorenz equations that are also exhibited by the atmosphere, and in the problem you just worked though, you saw that these were also exhibited by solutions to the Lorenz equations. In the remainder of this section, you will see mathematical reasons for two of those characteristics, namely the boundedness and stability (or instability) of solutions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "
\n", + "\n", + "## Steady States \n", + "\n", + "\n", + "A *steady state* of a system is a point in phase space from which the system will not change in time, once that state has been reached. In other words, it is a point, $(x,y,z)$, such that the solution does not change, or where\n", + "$$\\frac{dx}{dt} = 0 \\quad\\ \\mathrm{and}\\ \\quad \\frac{dy}{dt} = 0 \\quad \\ \\mathrm{and}\\ \\quad \\frac{dz}{dt} = 0.$$ \n", + "This point is usually referred to as a *stationary point* of the system.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "**[Problem_steady-states](#prob_steady-states)**\n", + "\n", + "Set the time derivatives equal to zero in the Lorenz equations ([eq:lorenz](#eq_lorenz)), and solve the resulting system to show that there are three possible steady states, namely the points\n", + "\n", + "- $(0,0,0)$,\n", + "\n", + "- $(\\sqrt{\\beta(\\rho-1)},\\sqrt{\\beta(\\rho -1)},\\rho -1)$, and\n", + "\n", + "- $(-\\sqrt{\\beta (\\rho -1)},-\\sqrt{\\beta(\\rho-1)},\\rho-1)$.\n", + "\n", + "Remember that $\\rho$ is a positive real number, so that that there is *only one* stationary point when $0\\leq \\rho \\leq 1$, but all three stationary points are present when $\\rho >1$.\n", + "\n", + "While working through [Problem experiment](#prob_experiment), did you notice the change in behaviour of the solution as $\\rho$ passes through the value 1? If not, then go back to the interactive example and try out some values\n", + "of $\\rho$ both less than and greater than 1 to see how the solution changes.\n", + "\n", + "A steady state tells us the behaviour of the solution only at a single point. *But what happens to the solution if it is perturbed slightly away from a stationary point? Will it return to the stationary point; or will it tend to move away from the point; or will it oscillate about the steady state; or something else … ?* All of these questions are related to the long-term, *asymptotic* behaviour or *stability* of the solution near a given point. You already should have seen some examples of different asymptotic solution behaviour in the Lorenz equations for different parameter values. The next section describes a general method for determining the stability of a solution near a given stationary point." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "\n", + "
\n", + "\n", + "## Linearization about the Steady States \n", + "\n", + "\n", + "The difficult part of doing any theoretical analysis of the Lorenz equations is that they are *non-linear*. *So, why not approximate the non-linear problem by a linear one?*\n", + "\n", + "This idea should remind you of what you read about Taylor series in Lab \\#2. There, we were approximating a function, $f(x)$, around a point by expanding the function in a Taylor series, and the first order Taylor approximation was simply a linear function in $x$. The approach we will take here is similar, but will get into Taylor series of functions of more than one variable: $f(x,y,z,\\dots)$.\n", + "\n", + "The basic idea is to replace the right hand side functions in ([eq:lorenz](#eq_lorenz)) with a linear approximation about a stationary point, and then solve the resulting system of *linear ODE’s*. Hopefully, we can then say something about the non-linear system at values of the solution *close to the stationary point* (remember that the Taylor series is only accurate close to the point we’re expanding about).\n", + "\n", + "So, let us first consider the stationary point $(0,0,0)$. If we linearize a function $f(x,y,z)$ about $(0,0,0)$ we obtain the approximation: \n", + "\n", + "$$f(x,y,z) \\approx f(0,0,0) + f_x(0,0,0) \\cdot (x-0) + \n", + "f_y(0,0,0) \\cdot (y-0) + f_z(0,0,0) \\cdot (z-0).$$ \n", + "\n", + "If we apply this formula to the right hand side function for each of the ODE’s in ([eq: lorenz](#eq_lorenz)), then we obtain the following linearized system about $(0,0,0)$: \n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "\\begin{aligned}\n", + " \\frac{dx}{dt} &= -\\sigma x + \\sigma y \\\\\n", + " \\frac{dy}{dt} &= \\rho x-y \\\\\n", + " \\frac{dz}{dt} &= -\\beta z \n", + "\\end{aligned}\n", + "\n", + "\n", + "(note that each right hand side is now a linear function of $x$, $y$ and $z$). It is helpful to write this system in matrix form as\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "\\begin{aligned}\n", + " \\frac{d}{dt} \\left(\n", + " \\begin{array}{c} x \\\\ y \\\\ z \\end{array} \\right) = \n", + " \\left( \\begin{array}{ccc}\n", + " -\\sigma & \\sigma & 0 \\\\\n", + " \\rho & -1 & 0 \\\\\n", + " 0 & 0 & -\\beta \n", + " \\end{array} \\right) \\;\n", + " \\left(\\begin{array}{c} x \\\\ y \\\\ z \\end{array} \\right)\n", + "\\end{aligned}\n", + "\n", + "\n", + "the reason for this being that the *eigenvalues* of the matrix give us valuable information about the solution to the linear system. In fact, it is a well-known result from the study of dynamical systems is that if the matrix in [eq:lorenz_linear_matrix](#eq:lorenz_linear_matrix) has *distinct* eigenvalues $\\lambda_1$, $\\lambda_2$ and $\\lambda_3$, then the solution to this equation is given by\n", + "\n", + "\n", + "\n", + "$$\n", + "x(t) = c_1 e^{\\lambda_1 t} + c_2 e^{\\lambda_2 t} + c_3 e^{\\lambda_3 t},\n", + "$$\n", + "and similarly for the other two solution components, $y(t)$ and $z(t)$ (the $c_i$’s are constants that are determined by the initial conditions of the problem). This should not seem too surprising, if you think that the solution to the scalar equation $dx/dt=\\lambda x$ is $x(t) = e^{\\lambda t}$.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "**[Problem eigenvalues:](prob_eigenvalues)**\n", + "\n", + "Remember from Lab \\#3 that the eigenvalues of a matrix, $A$, are given by the roots of the characteristic equation, $det(A-\\lambda I)=0$. Determine the characteristic equation of the matrix in [eq:lorenz_linear_matrix](#eq:lorenz_linear_matrix), and show that the eigenvalues of the linearized problem are\n", + "\n", + "**eq_eigen0**\n", + "\\begin{equation}\n", + "\\lambda_1 = -\\beta, \\quad \\mathrm{and} \\quad \\lambda_2, \\lambda_3 =\n", + "\\frac{1}{2} \\left( -\\sigma - 1 \\pm \\sqrt{(\\sigma-1)^2 + 4 \\sigma \\rho}\n", + "\\right). \n", + "\\end{equation}\n", + "\n", + "\n", + "When $\\rho>1$, the same linearization process can be applied at the remaining two stationary points, which have eigenvalues that satisfy another characteristic equation:\n", + "\n", + "**eq_eigen01**\n", + "\\begin{equation}\n", + "\\lambda^3+(\\sigma+\\beta +1)\\lambda^2+(\\rho+\\sigma)\\beta \\lambda+2\\sigma \\beta(\\rho-1)=0.\n", + "\\end{equation}\n", + "\n", + "\n", + "If you need a reminder about odes and eignevalues, the following resources may be useful:\n", + "- [Linear ODE review](http://tutorial.math.lamar.edu/Classes/DE/SolutionsToSystems.aspx)\n", + "- [Link between eigenvectors and ODEs](https://math.stackexchange.com/questions/23312/what-is-the-importance-of-eigenvalues-eigenvectors)\n", + "- [Stability theory for ODEs](https://en.wikipedia.org/wiki/Stability_theory)\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "### Stability of the Linearized Problem\n", + "\n", + "\n", + "Now that we know the eigenvalues of the system around each stationary point, we can write down the solution to the linearized problem. However, it is not the exact form of the linearized solution that we’re interested in, but rather its *stability*. In fact, the eigenvalues give us all the information we need to know about how the linearized solution behaves in time, and so we’ll only talk about the eigenvalues from now on.\n", + "\n", + "It is possible that two of the eigenvalues in the characteristic equations above can be complex numbers – *what does this mean for the solution?* The details are a bit involved, but the important thing to realize is that if $\\lambda_2,\\lambda_3=a\\pm i\\beta$ are complex (remember that complex roots always occur in conjugate pairs) then the solutions can be rearranged so that they are of the form\n", + "\n", + "\n", + "$$\n", + "x(t) = c_1 e^{\\lambda_1 t} + c_2 e^{a t} \\cos(bt) + c_3 e^{a t}\n", + " \\sin(bt). \n", + " $$ \n", + "In terms of the asymptotic stability of the problem, we need to look at the asymptotic behaviour of the solution as $t\\rightarrow \\infty$, from which several conclusions can be drawn:\n", + "\n", + "1. If the eigenvalues are *real and negative*, then the solution will go to zero as $t \\rightarrow\\infty$. In this case the linearized solution is *stable*.\n", + "\n", + "2. If the eigenvalues are real, and *at least one* is positive, then the solution will blow up as $t \\rightarrow\\infty$. In this case the linearized solution is *unstable*.\n", + "\n", + "3. If there is a complex conjugate pair of eigenvalues, $a\\pm ib$, then the solution exhibits oscillatory behaviour (with the appearance of the terms $\\sin{bt}$ and $\\cos{bt}$). If the real part, $a$, of all eigenvalues is negative, the oscillations will decay in time and the solution is *stable*; if the real part is positive, then the oscillations will grow, and the solution is *unstable*. If the complex eigenvalues have zero real part, then the oscillations will neither decay nor increase in time – the resulting linearized problem is periodic, and we say the solution is *marginally stable*.\n", + "\n", + "Now, an important question:\n", + "\n", + "*Does the stability of the non-linear system parallel that of the linearized systems near the stationary points?*\n", + "\n", + "The answer is “almost always”. We won’t go into why, or why not, but just remember that you can usually expect the non-linear system to behave just as the linearized system near the stationary states.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The discussion of stability of the stationary points for the Lorenz equations will be divided up based on values of the parameter $\\rho$ (assuming $\\sigma=10$ and $\\beta=\\frac{8}{3}$). You’ve already seen that the behaviour of the solution changes significantly, by the appearance of two additional stationary points, when $r$ passes through the value 1. You’ll also see an explanation for the rest of the behaviour you observed:\n", + "\n", + "$0<\\rho<1$:\n", + "\n", + "- there is only one stationary state, namely the point $(0,0,0)$. You can see from [eq:eigen0](eq_eigen0) that for these values of $\\rho$, there are three, real, negative roots. 
The origin is a *stable* stationary point; that is, it attracts nearby solutions to itself.

$\rho>1$:

- The origin has one positive and two negative, real eigenvalues. Hence, the origin is *unstable*. Now, we need only look at the other two stationary points, whose behaviour is governed by the roots of [eq:eigen01](#eq_eigen01).

$1<\rho<\frac{470}{19}$:

- The other two stationary points have eigenvalues with negative real parts. So these two points are *stable*.

    It’s also possible to show that two of these eigenvalues are real when $\rho<1.346$, and they are complex otherwise (see Sparrow 1982 for a more complete discussion). Therefore, the solution begins to exhibit oscillatory behaviour for values of $\rho$ greater than 1.346.

$\rho>\frac{470}{19}$:

- The other two stationary points have one real, negative eigenvalue, and two complex eigenvalues with positive real part. Therefore, these two points are *unstable*. In fact, all three stationary points are unstable for these values of $\rho$.

The stability of the stationary points is summarized in the table below.

| | $(0,0,0)$ | $(\pm\sqrt{\beta(\rho-1)},\pm\sqrt{\beta(\rho-1)},\rho-1)$ |
|------------------------|----------|-------------------------------------------------------------|
| $0<\rho<1$ | stable | $-$ |
| $1<\rho<\frac{470}{19}$ | unstable | stable |
| $\rho>\frac{470}{19}$ | unstable | unstable |

_Summary of the stability of the stationary points for the Lorenz equations; parameters $\sigma=10$, $\beta=\frac{8}{3}$_

This “critical value” of $\rho^\ast= \frac{470}{19}$ is actually found using the formula
$$\rho^\ast= \frac{\sigma(\sigma+\beta+3)}{\sigma-\beta-1}.$$
See Sparrow (1982) for more details.

A qualitative change in behaviour of the solution when a parameter is varied is called a *bifurcation*. Bifurcations occur at:

- $\rho=1$, when the origin switches from stable to unstable, and two more stationary points appear.

- $\rho=\rho^\ast$, where the remaining two stationary points switch from being stable to unstable.

Remember that the linear results apply only near the stationary points, and do not apply to all of the phase space. Nevertheless, the behaviour of the orbits near these points can still say quite a lot about the behaviour of the solutions.
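As a quick arithmetic check of the critical value quoted above, the formula can be evaluated directly (plain Python; nothing here is specific to the course code):

```python
sigma, beta = 10., 8. / 3.
rho_crit = sigma * (sigma + beta + 3.) / (sigma - beta - 1.)
print(rho_crit, 470. / 19.)   # both print 24.7368...
```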
\n", + "\n", + "**[Problem Stability](#prob_stability)** \n", + "\n", + "Based on the analytical results from this section, you can now go back to your results from [Problem Experiment](#prob_experiment) and look at them in a new light. Write a short summary of your results (including a few plots or sketches), describing how the solution changes with $\\rho$ in terms of the existence and stability of the stationary points.\n", + "\n", + "There have already been hints at problems with the linear stability analysis. One difficulty that hasn’t been mentioned yet is that for values of $\\rho>\\rho^\\ast$, the problem has oscillatory solutions, which are unstable. *Linear theory does not reveal what happens when these oscillations become large!* In order to study more closely the\n", + "long-time behaviour of the solution, we must turn to numerical integration (in fact, all of the plots you produced in\n", + "Problem [lab6:prob:experiment] were generated using a numerical code).\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Numerical Integration \n", + "\n", + "In Lorenz’ original paper, he discusses the application of the forward Euler and leap frog time-stepping schemes, but his actual computations are done using the second order *Heun’s method* (you were introduced to this method in Lab \\#4. Since we already have a lot of experience with Runge-Kutta methods for systems of ODE’s from earlier labs, you’ll be using this approach to solve the Lorenz equations as well. You already have a code from that solves the Daisy World equations, so you can jump right into the programming for the Lorenz equations with the following exercises …\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "ExecuteTime": { + "end_time": "2022-02-28T19:16:33.115638Z", + "start_time": "2022-02-28T19:16:33.036595Z" + } + }, + "source": [ + "
\n", + "\n", + "**[Problem Adaptive](#prob_adaptive)** \n", + " \n", + "You saw in that adaptive time-stepping saved a considerable amount of computing time for the Daisy World problem. In\n", + "this problem, you will be investigating whether or not an adaptive Runge-Kutta code is the best choice for the Lorenz equations.\n", + "\n", + "Use the Integrator61 object to compute in both adaptive and fixed timeloop solutions for an extended integration. \n", + "Compare the number of time steps taken (plot the time step vs. the integration time for both methods). Which method is\n", + "more efficient? Which is fastest? A simple way to time a portion of a script is to use the ```time``` module to calculate the elapsed time:\n", + "\n", + "```\n", + "import time\n", + "tic = time.time()\n", + "##program here\n", + "elapsed = time.time() - tic\n", + "```\n", + "\n", + "To answer this last question, you will have to consider the cost of the adaptive scheme, compared to the non-adaptive one. The adaptive scheme is obviously more expensive, but by how much? You should think in terms of the number of multiplicative operations that are required in every time step for each method. You don’t have to give an exact operation count, round figures will do.\n", + "\n", + "Optional extra: Finally, we mentioned that the code that produced the animation uses a C module called odeint. It is called [here](https://github.com/rhwhite/numeric_2024/blob/main/notebooks/lab6/lorenz_ode.py#L22-L23) using derivatives defined in \n", + "[lorenz_deriv](https://github.com/rhwhite/numeric_2024/blob/main/notebooks/lab6/lorenz_ode.py#L11-L14).\n", + "Use odeint to solve the same problem you did for the fixed and adaptive timeloops. What is the speed increase you see by using the compiled module?" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "**[Problem Sensitivity](#prob_sensitivity)** \n", + " \n", + "One property of chaotic systems such as the Lorenz equations is their *sensitivity to initial\n", + "conditions* – a consequence of the “butterfly effect.” Modify your code from [Problem adaptive](#prob_adaptive) to compute two trajectories (in the chaotic regime $r>r^\\ast$) with different initial conditions *simultaneously*. Use two initial conditions that are very close to each other, say $(1,1,20)$ and $(1,1,20.001)$. Use your “method of choice” (adaptive/non-adaptive), and plot the distance between the two trajectories as a function of time. What do you see?\n", + "\n", + "One important limitation of numerical methods is immediately evident when approximating non-periodic dynamical systems such as the Lorenz equations: namely, *every computed solution is periodic*. That is, when we’re working in floating point arithmetic, there are only finitely many numbers that can be represented, and the solution must eventually repeat itself. When using single precision arithmetic, a typical computer can represent many more floating point numbers than we could ever perform integration steps in a numerical scheme. However, it is still possible that round-off error might introduce a periodic orbit in the numerical solution where one does not really exist. In our computations, this will not be a factor, but it is something to keep in mind.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Other Chaotic Systems\n", + "\n", + "\n", + "There are many other ODE systems that exhibit chaos. An example is one\n", + "studied by Rössler, which obeys a similar-looking system of three ODE’s:\n", + "\n", + "\n", + "\n", + "$$\n", + "\\begin{aligned}\n", + " \\dot{x}&=&-y-z \\\\ \n", + " \\dot{y}&=&x+ay \\\\\n", + " \\dot{z}&=&b+z(x-c) \n", + " \\end{aligned}\n", + " $$ \n", + "\n", + "Suppose that $b=2$, $c=4$,\n", + "and consider the behaviour of the attractor as $a$ is varied. When $a$\n", + "is small, the attractor is a simple closed curve. As $a$ is increased,\n", + "however, this splits into a double loop, then a quadruple loop, and so\n", + "on. Thus, a type of *period-doubling* takes place, and when\n", + "$a$ reaches about 0.375, there is a fractal attractor in the form of a\n", + "band, that looks something like what is known in mathematical circles as\n", + "a *Möbius strip*.\n", + "\n", + "If you’re really keen on this topic, you might be interested in using\n", + "your code to investigate the behaviour of this system of equations,\n", + "*though you are not required to hand anything in for this!*\n", + "\n", + "First, you could perform a stability analysis for\n", + "([lab6:eq:rossler]), like you saw above for the Lorenz\n", + "equations. Then, modify your code to study the Rössler attractor. Use\n", + "the code to compare your analytical stability results to what you\n", + "actually see in the computations.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "
\n", + "\n", + "## Summary \n", + "\n", + "In this lab, you have had the chance to investigate the solutions to the Lorenz equations and their stability in quite some detail. You saw that for certain parameter values, the solution exhibits non-periodic, chaotic behaviour. The question to ask ourselves now is: *What does this system tell us about the dynamics of flows in the atmosphere?* In fact, this system has been simplified so much that it is no longer an accurate model of the physics in the atmosphere.\n", + "However, we have seen that the four characteristics of flows in the atmosphere (mentioned in [the Introduction](#sec_intro)) are also present in the Lorenz equations.\n", + "\n", + "Each state in Lorenz’ idealized “climate” is represented by a single point in phase space. For a given set of initial conditions, the evolution of a trajectory describes how the weather varies in time. The butterfly attractor embodies all possible weather conditions that can be attained in the Lorenzian climate. By changing the value of the parameter $\\rho$ (and, for that matter, $\\sigma$ or $\\beta$), the shape of the attractor changes. Physically, we can interpret this as a change in some global property of the weather system resulting in a modification of the possible weather states.\n", + "\n", + "The same methods of analysis can be applied to more complicated models of the weather. One can imagine a model where the depletion of ozone and the increased concentration of greenhouse gases in the atmosphere might be represented by certain parameters. Changes in these parameters result in changes in the shape of the global climate attractor for the system. By studying the attractor, we could determine whether any new, and possibly devastating, weather states are present in this new ozone-deficient atmosphere.\n", + "\n", + "We began by saying in the Introduction that the butterfly effect made accurate long-term forecasting impossible. Nevertheless, it is still possible to derive meaningful qualitative information from such a chaotic dynamical system.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + " \n", + "
\n", + "\n", + "## A. Mathematical Notes\n", + "\n", + "\n", + "\n", + "
\n", + "\n", + "### A.1 The Lorenzian Water Wheel Model\n", + "\n", + "\n", + "*This derivation is adapted from Sparrow \n", + "[Appendix B].*\n", + "\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename=\"images/water-wheel.png\")" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "**Figure: The Lorenzian water wheel.**\n", + "\n", + "Imagine a wheel which is free to rotate about a horizontal axis, as\n", + "depicted in [Figure water-wheel](#fig_water-wheel).\n", + "\n", + "\n", + "\n", + "To the circumference of the wheel is attached a series of leaky buckets.\n", + "Water flows into the buckets at the top of the wheel, and as the buckets\n", + "are filled with water, the wheel becomes unbalanced and begins to\n", + "rotate. Depending on the physical parameters in this system, the wheel\n", + "may remain motionless, rotate steadily in a clockwise or\n", + "counter-clockwise direction, or reverese its motion in irregular\n", + "intervals. This should begin to remind you of the type of behaviour\n", + "exhibited in the Lorenz system for various parameters.\n", + "\n", + "The following are the variables and parameters in the system:\n", + "\n", + "$r$: the radius of the wheel (constant),\n", + "\n", + "$g$: the acceleration due to gravity (constant),\n", + "\n", + "$\\theta(t)$: is the angular displacement (not a fixed point on the wheel)\n", + " (unknown),\n", + "\n", + "$m(\\theta,t)$: the mass of the water per unit arc, which we assume is a continuous\n", + " function of the angle (unknown),\n", + "\n", + "$\\Omega(t)$: the angular velocity of the wheel,\n", + "\n", + "We also make the following assumptions:\n", + "\n", + "- water is added to the wheel at a constant rate.\n", + "\n", + "- the points on the circumference of the wheel gain water at a rate\n", + " proportional to their height.\n", + "\n", + "- water leaks out at a rate proportional to $m$.\n", + "\n", + "- there is frictional damping in the wheel proportional to the angular\n", + " velocity, $k \\Omega$,\n", + "\n", + "- $A$, $B$, $h$ are additional positive constants.\n", + "\n", + "We’ll pass over some details here, and go right to the equations of\n", + "motion. The equation describing the evloution of the angular momentum is\n", + "\n", + "\n", + "$$\n", + "\\frac{d\\Omega}{dt} = -k \\Omega - \\left( \\frac{gh}{2\\pi aA} \\right)\n", + " m \\cos\\theta.\n", + " $$ \n", + " \n", + " The requirement of conservation\n", + " \n", + "of mass in the system leads to two equations\n", + "$$\\frac{d (m \\sin\\theta)}{dt} = \\Omega m \\cos\\theta - h m \\sin\\theta +\n", + " 2\\pi B\n", + "$$ \n", + "\n", + "and\n", + "$$\n", + "\\frac{d (m \\cos\\theta)}{dt} = -\\Omega m \\sin\\theta - h m \\cos\\theta,\n", + "$$ \n", + "(where all variables dependent on the angle\n", + "have been averaged over $\\theta$).\n", + "\n", + "Using a suitable change of variables, these three equations\n", + "can be written in the same form as the\n", + "Lorenz equations (with $\\beta=1$)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "\n", + "## B. References\n", + "\n", + "\n", + "Gleick, J., 1987: *Chaos: Making a New Science*. Penguin\n", + "Books.\n", + "\n", + "Lorenz, E. N., 1963: Deterministic nonperiodic flow. *Journal of\n", + "the Atmospheric Sciences*, **20**, 130–141.\n", + "\n", + "Palmer, T., 1993: A weather eye on unpredictability. in N. Hall, editor,\n", + "*Exploring Chaos: A Guide to the New Science of Disorder*,\n", + "chapter 6. W. W. Norton & Co.\n", + "\n", + "Saltzman, B., 1962: Finite amplitude free convection as an initial value\n", + "problem – I. *Journal of the Atmospheric\n", + "Sciences*, **19**, 329–341.\n", + "\n", + "Sparrow, C., 1982: *The Lorenz Equations: Bifurcations, Chaos, and\n", + "Strange Attractors*. volume 41 of *Applied Mathematical\n", + "Sciences*. Springer-Verlag.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": true, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "333.636px", + "left": "10px", + "top": "150px", + "width": "165px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab6/lorenz.yaml b/notebooks/lab6/lorenz.yaml new file mode 100644 index 0000000..399b8b7 --- /dev/null +++ b/notebooks/lab6/lorenz.yaml @@ -0,0 +1,22 @@ +timevars: + dt: 0.02 + tstart: 0.0 + tend: 50.0 +uservars: + sigma: 10 + beta: 2.666666 + rho: 28. +initvars: + x: -20.1 + y: -10.1 + z: 10.1 +adaptvars: + dtpassmin: 0.1 + dtfailmax: 0.5 + dtfailmin: 0.01 + s: 0.9 + rtol: 1.0e-08 + atol: 1.0e-08 + maxsteps: 2000 + maxfail: 60 + dtpassmax: 5.0 diff --git a/notebooks/lab7/01-lab7.html b/notebooks/lab7/01-lab7.html new file mode 100644 index 0000000..cb6fd71 --- /dev/null +++ b/notebooks/lab7/01-lab7.html @@ -0,0 +1,1022 @@ + + + + + + + + Laboratory 7: Solving partial differential equations using an explicit, finite difference method. — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +
+

Laboratory 7: Solving partial differential equations using an explicit, finite difference method.

+

Lin Yang & Susan Allen & Carmen Guo

+

This laboratory is long and is typically assigned in two halves. See the break noted after Problem 5 and before the Full Equations section.

+
+

List of Problems

+
    +
  • Problem 1: Numerical solution on a staggered grid.

  • +
  • Problem 2: Stability of the difference scheme

  • +
  • Problem 3: Dispersion relation for grid 2

  • +
  • Problem 4: Choosing most accurate grid

  • +
  • Problem 5: Numerical solution for no y variation

  • +
  • Problem 6: Stability on the 2-dimensional grids

  • +
  • Problem 7: Finite difference form of equations

  • +
  • Problem 8: Dispersion relation for D-grid

  • +
  • Problem 9: Accuracy of the approximation on various grids

  • +
+
+
+

Learning Objectives

+

When you have completed reading and working through this lab you will be able to:

+
    +
  • find the dispersion relation for a set of differential equations (the “real” dispersion relation).

  • +
  • find the dispersion relation for a set of difference equations (the numerical dispersion relation).

  • +
  • describe a leap-frog scheme

  • +
  • construct a predictor-corrector method

  • +
  • use the given differential equations to determine unspecified boundary conditions as necessary

  • +
  • describe a staggered grid

  • +
  • state one reason why staggered grids are used

  • +
  • explain the physical principle behind the CFL condition

  • +
  • find the CFL condition for a linear, explicit, numerical scheme

  • +
  • state one criteria that should be considered when choosing a grid

  • +
+
+
+

Readings

+

These are the suggested readings for this lab. For more details about the books and papers, click on the reference link.

+
    +
  • Rotating Navier Stokes Equations

    +
      +
    •  Pond and Pickard, 1983, Chapters 3,4 and 6

    • +
    +
  • +
  • Shallow Water Equations

    +
      +
    •  Gill, 1982, Section 5.6 and 7.2 (not 7.2.1 etc)

    • +
    +
  • +
  • Poincaré Waves

    +
      +
    •  Gill, 1982, Section 7.3 to just after equation (7.3.8), section 8.2 and 8.3

    • +
    +
  • +
  • Introduction to Numerical Solution of PDE’s

    +
      +
    •  Press et al, 1992, Section 17.0

    • +
    +
  • +
  • Waves

    +
      +
    •  Cushman-Roisin, 1994, Appendix A

    • +
    +
  • +
+
+
[ ]:
+
+
+
import context
+from IPython.display import Image
+import IPython.display as display
+# import plotting package and numerical python package for use in examples later
+import matplotlib.pyplot as plt
+# make the plots happen inline
+%matplotlib inline
+# import the numpy array handling library
+import numpy as np
+# import the quiz script
+from numlabs.lab7 import quiz7 as quiz
+# import the pde solver for a simple 1-d tank of water with a drop of rain
+from numlabs.lab7 import rain
+# import the dispersion code plotter
+from numlabs.lab7 import accuracy2d
+# import the 2-dimensional drop solver
+from numlabs.lab7 import interactive1
+# import the 2-dimensional dispersion relation plotter
+from numlabs.lab7 import dispersion_2d
+
+
+
+
+
+

Physical Example, Poincaré Waves

+

One of the obvious examples of a physical phenomenon governed by a partial differential equation is waves. Consider a shallow layer of water and the waves on the surface of that layer. If the depth of the water is much smaller than the wavelength of the waves, the velocity of the water will be the same throughout the depth. We can then describe the state of the water by three variables: \(u(x,y,t)\), the east-west velocity of the water, \(v(x,y,t)\), the north-south velocity of the water, and \(h(x,y,t)\), the height by which the surface of the water is deflected. As specified, each of these variables is a function of the horizontal position, \((x,y)\), and time \(t\) but, under the assumption of shallow water, not a function of \(z\).

+

In oceanographic and atmospheric problems, the effect of the earth’s rotation is often important. We will first introduce the governing equations including the Coriolis force (Full Equations). However, most of the numerical concepts can be considered without all the complications in these equations. We will also consider two simpler sets: one where we assume there is no variation of the variables in the y-direction (No variation in y) and one where, in addition, we assume that the Coriolis force is negligible (Simple Equations).

+

The solutions of the equations including the Coriolis force are Poincaré waves, whereas without the Coriolis force the resulting waves are called shallow water gravity waves.

+

The remainder of this section will present the equations and discuss the dispersion relation for the two simpler sets of equations. If your wave theory is rusty, consider reading Appendix A in Cushman-Roisin, 1994.

+
+

Introduce Full Equations

+

The linear shallow water equations on an f-plane over a flat bottom are

+

(Full Equations, Eqn 1)

+
+\[\frac{\partial u}{\partial t} - fv = -g\frac{\partial h}{\partial x}\]
+

(Full Equations, Eqn 2)

+
+\[\frac{\partial v}{\partial t} + fu = -g\frac{\partial h}{\partial y}\]
+

(Full Equations, Eqn 3)

+
+\[\frac{\partial h}{\partial t} + H\frac{\partial u}{\partial x} + H\frac{\partial v}{\partial y} = 0\]
+

where

+
    +
  • \(\vec{u} = (u,v)\) is the horizontal velocity,

  • +
  • \(f\) is the Coriolis frequency,

  • +
  • \(g\) is the acceleration due to gravity,

  • +
  • \(h\) is the surface elevation, and

  • +
  • \(H\) is the undisturbed depth of the fluid.

  • +
+

We will return to these equations in section Full Equations.

+
+
+

No variation in y

+

To simplify the problem assume there is no variation in y. This simplification gives:

+

(No variation in y, first eqn)

+
+\[\frac{\partial u}{\partial t} - fv = -g\frac{\partial h}{\partial x}\]
+

(No variation in y, second eqn)

+
+\[\frac{\partial v}{\partial t} + fu = 0\]
+

(No variation in y, third eqn)

+
+\[\frac{\partial h}{\partial t} + H\frac{\partial u}{\partial x} = 0\]
+
+
+

Introduce Simple Equations

+

If we consider waves in the absence of the earth’s rotation, \(f=0\), which implies \(v=0\) and we get

+
+\[\frac{\partial u}{\partial t} = -g\frac{\partial h}{\partial x}\]
+
+\[\frac{\partial h}{\partial t} + H\frac{\partial u}{\partial x} = 0\]
+

These simplified equations give shallow water gravity waves. For example, a solution is a simple sinusoidal wave:

+

(wave solution- h)

+
+\[h = h_{0}\cos{(kx - \omega t)}\]
+

(wave solution- u)

+
+\[u = \frac{h_{0}\omega}{kH}\cos{(kx - \omega t)}\]
+

where \(h_{0}\) is the amplitude, \(k\) is the wavenumber and \(\omega\) is the frequency (See Cushman-Roisin, 1994 for a nice review of waves in Appendix A).

+

Substitution of (wave solution- h) and (wave solution- u) back into the differential equations gives a relation between \(\omega\) and k. Confirm that

+

(Analytic Dispersion Relation)

+
+\[\omega^2 = gHk^2,\]
+

which is the dispersion relation for these waves.

+
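If you would like the computer to do the algebra, the substitution can be checked symbolically. The sketch below assumes the sympy package, which is not otherwise used in this lab; it confirms that the continuity equation is satisfied identically and that the momentum equation forces \(\omega^2 = gHk^2\).

```python
import sympy as sp

x, t, k, omega, g, H, h0 = sp.symbols('x t k omega g H h_0', positive=True)
h = h0 * sp.cos(k * x - omega * t)
u = (h0 * omega / (k * H)) * sp.cos(k * x - omega * t)

momentum = sp.diff(u, t) + g * sp.diff(h, x)      # du/dt + g dh/dx
continuity = sp.diff(h, t) + H * sp.diff(u, x)    # dh/dt + H du/dx

print(sp.simplify(continuity))                               # 0 for any omega and k
print(sp.simplify(momentum.subs(omega, sp.sqrt(g * H) * k)))  # 0 once omega**2 = g*H*k**2
```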
+
+

No variation in y

+

Now consider \(f\not = 0\).

+

By assuming

+
+\[h= h_{0}e^{i(kx - \omega t)}\]
+
+\[u= u_{0}e^{i(kx - \omega t)}\]
+
+\[v= v_{0}e^{i(kx - \omega t)}\]
+

and substituting into the differential equations, eg, for (No variation in y, first eqn)

+
+\[-i\omega u_{0}e^{i(kx - \omega t)} - fv_{0}e^{i(kx - \omega t)} + ikgh_{0}e^{i(kx - \omega t)} = 0\]
+

and cancelling the exponential terms gives 3 homogeneous equations for \(u_{0}\), \(v_{0}\) and \(h_{0}\). If the determinant of the matrix derived from these three equations is non-zero, the only solution is \(u_{0} = v_{0} = h_{0} = 0\), NO WAVE! Therefore the determinant must be zero.

+
+
+

Quiz: Find the Dispersion Relation

+

What is the dispersion relation for 1-dimensional Poincaré waves?

+
    +
  1. \(\omega^2 = f^2 + gH (k^2 + \ell^2)\)

  2. +
  3. \(\omega^2 = gH k^2\)

  4. +
  5. \(\omega^2 = f^2 + gH k^2\)

  6. +
  7. \(\omega^2 = -f^2 + gH k^2\)

  8. +
+

In the following, replace ‘XXX’ by ‘A’, ‘B’, ‘C’ or ‘D’ and run the cell.

+
+
[ ]:
+
+
+
print (quiz.dispersion_quiz(answer = 'XXX'))
+
+
+
+
+
+
+

Numerical Solution

+
+

Simple Equations

+

Consider first the simple equations with \(f = 0\). In order to solve these equations numerically, we need to discretize in 2 dimensions, one in space and one in time. Consider first the most obvious choice, shown in Figure Unstaggered Grid.

+
+
[ ]:
+
+
+
Image(filename='images/nonstagger.png',width='40%')
+
+
+
+

Figure Unstaggered Grid.

+

We will use centred difference schemes in both \(x\) and \(t\). The equations become:

+

(Non-staggered, Eqn One)

+
+\[\frac {u(t+dt, x)-u(t-dt, x)}{2 dt} + g \frac {h(t, x+dx) - h(t, x-dx)}{2dx} = 0\]
+

(Non-staggered, Eqn Two)

+
+\[\frac {h(t+dt, x)-h(t-dt, x)}{2 dt} + H \frac {u(t, x+dx) - u(t, x-dx)}{2dx} = 0\]
+

We can rearrange these equations to give \(u(t+dt, x)\) and \(h(t+dt, x)\). For a small number of points, the resulting problem is simple enough to solve in a notebook.

+
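As a concrete illustration, the rearranged update for the interior points takes only a few lines of numpy. This is a sketch of the scheme, not the course's rain.py; the function name and the cgs default values (chosen to match the dish example described next) are ours.

```python
import numpy as np

def leapfrog_step(u_prev, u_now, h_prev, h_now, dt, dx, g=980., H=1.):
    """One leap-frog step of (Non-staggered, Eqn One) and (Non-staggered, Eqn Two).

    u_prev, h_prev hold the solution at t - dt; u_now, h_now hold it at t.
    Units are cgs here (g in cm/s^2, H in cm), matching the 1 cm deep dish."""
    u_new = u_prev.copy()
    h_new = h_prev.copy()
    # centred differences in x, interior points only
    u_new[1:-1] = u_prev[1:-1] - g * dt / dx * (h_now[2:] - h_now[:-2])
    h_new[1:-1] = h_prev[1:-1] - H * dt / dx * (u_now[2:] - u_now[:-2])
    # walls: no flow through the ends and dh/dx = 0 (see Boundary Conditions below)
    u_new[0] = u_new[-1] = 0.0
    h_new[0], h_new[-1] = h_new[1], h_new[-2]
    return u_new, h_new
```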

For a specific example, consider a dish, 40 cm long, of water 1 cm deep. Although the numerical code presented later allows you to vary the number of grid points, in the discussion here we will use only 5 spatial points, a distance of 10 cm apart. The lack of spatial resolution means the wave will have a triangular shape. At \(t=0\) a large drop of water lands in the centre of the dish. So at \(t=0\), all points have zero velocity and zero elevation except at \(x=3dx\), where we have

+
+\[h(0, 3dx) = h_{0} = 0.01 cm\]
+

A centred difference scheme in time, such as defined by equations (Non-staggered, Eqn One) and (Non-staggered, Eqn Two), is usually referred to as a leap-frog scheme. The new values, \(h(t+dt)\) and \(u(t+dt)\), are equal to the values two time steps back, \(h(t-dt)\) and \(u(t-dt)\), plus a correction based on values calculated one time step back. Hence the time scheme “leap-frogs” ahead. More on the consequences of this process can be found in section Computational Mode.

+

As a leap-frog scheme requires two previous time steps, the given conditions at \(t=0\) are not sufficient to solve (Non-staggered, Eqn One) and (Non-staggered, Eqn Two). We need the solutions at two time steps in order to step forward.

+
+
+

Predictor-Corrector to Start

+

In section 4.2.2 of Lab 2, predictor-corrector methods were introduced. We will use a predictor-corrector based on the forward Euler scheme, to find the solution at the first time step, \(t=dt\). Then the second order scheme (Non-staggered, Eqn One), (Non-staggered, Eqn Two) can be used.

+

Using the forward Euler Scheme, the equations become

+
+\[\frac {u(t+dt, x)-u(t, x)}{dt} + g \frac {h(t, x+dx) - h(t, x-dx)}{2dx} = 0\]
+
+\[\frac {h(t+dt, x)-h(t, x)}{dt} + H \frac {u(t, x+dx) - u(t, x-dx)}{2dx} = 0\]
+
    +
  1. Use this scheme to predict \(u\) and \(h\) at \(t=dt\).

  2. +
  3. Average the solution at \(t=0\) and that predicted for \(t=dt\), to estimate the solution at \(t=\frac{1}{2}dt\). You should confirm that this procedure gives:

    +
    +\[\begin{split}u(\frac{dt}{2}) = \left\{ \begin{array}{ll} +0 & { x = 3dx}\\ +\left({-gh_{0}dt}\right)/\left({4dx}\right) & { x = 2dx}\\ +\left({gh_{0}dt}\right)/\left({4dx}\right) & { x = 4dx} +\end{array} +\right.\end{split}\]
    +
    +\[\begin{split}h(\frac{dt}{2}) = \left\{ \begin{array}{ll} +h_{0} & { x = 3dx}\\ +0 & { x \not= 3dx} +\end{array} +\right.\end{split}\]
    +
  4. +
  5. The corrector step uses the centred difference scheme in time (the leap-frog scheme) with a time step of \({dt}/{2}\) rather than dt. You should confirm that this procedure gives:

    +
    +\[\begin{split}u(dt) = \left\{ \begin{array}{ll} +0 & { x = 3dx}\\ +\left({-gh_{0}dt}\right)/\left({2dx}\right) & { x = 2dx}\\ +\left({gh_{0}dt}\right)/\left({2dx}\right) & { x = 4dx} +\end{array} +\right.\end{split}\]
    +
    +\[\begin{split}h(dt) = \left\{ \begin{array}{ll} +0 & { x = 2dx, 4dx}\\ +h_{0} - \left({gHdt^2 h_{0}}\right)/\left({4dx^2}\right) & { x = 3dx} +\end{array} +\right.\end{split}\]
    +
  6. +
+

Note that the values at \(x=dx\) and \(x=5dx\) have not been specified. These are boundary points and to determine these values we must consider the boundary conditions.

+
+
+

Boundary Conditions

+

If we are considering a dish of water, the boundary conditions at \(x=dx, 5dx\) are those of a wall. There must be no flow through the wall.

+
+\[u(dx) = 0\]
+
+\[u(5dx) = 0\]
+

But these two conditions are not sufficient; we also need \(h\) at the walls. If \(u=0\) at the wall for all time then \(\partial u/\partial t=0\) at the wall, so \(\partial h/\partial x=0\) at the wall. Using a one-sided difference scheme this gives

+
+\[\frac {h(2dx) - h(dx)}{dx} = 0\]
+

or

+
+\[h(dx) = h(2dx)\]
+

and

+
+\[\frac {h(4dx) - h(5dx)}{dx} = 0\]
+

or

+
+\[h(5dx) = h(4dx)\]
+

which gives the required boundary conditions on \(h\) at the wall.

+
+
+

Simple Equations on a Non-staggered Grid

+
    +
  1. Given the above equations and boundary conditions, we can find the values of \(u\) and \(h\) at all 5 points when \(t = 0\) and \(t = dt\).

  2. +
  3. From (Non-staggered, Eqn One) and (Non-staggered, Eqn Two), we can find the values of \(u\) and \(h\) for \(t = 2dt\) using \(u(0, x)\), \(u(dt, x)\), \(h(0, x)\), and \(h(dt, x)\).

  4. +
  5. Then we can find the values of \(u\) and \(h\) at \(t = 3dt\) using \(u(dt, x)\), \(u(2dt, x)\), \(h(dt, x)\), and \(h(2dt, x)\).

  6. +
+

We can use this approach recursively to determine the values of \(u\) and \(h\) at any time \(t = n * dt\). The python code that solves this problem is provided in the file rain.py. It takes a single list of two values: the first is the number of time steps and the second is the number of horizontal grid points.

+

The output is two coloured graphs. The color represents time with black the earliest times and red later times. The upper plot shows the water velocity (u) and the lower plot shows the water surface. To start with the velocity is 0 (black line at zero across the whole domain) and the water surface is up at the mid point and zero at all other points (black line up at midpoint and zero elsewhere)

+

Not much happens in 6 time-steps. Do try longer runs and more grid points.

+
+
[ ]:
+
+
+
rain.rain([6, 5])
+
+
+
+
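For example, a longer run on a finer grid might look like the following (these particular values are just a suggestion):

```python
rain.rain([60, 9])   # 60 time steps on 9 grid points
```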

If you want to change something in the script (say the colormap I’ve chosen, viridis, doesn’t work for you), you can edit rain.py in an editor or in spyder. To make the change take effect here, though, you have to reload rain; see the next cell for how. You will also need to do this if you work on Problem One, or make other changes to rain.py, while running in a notebook.

+
+
[ ]:
+
+
+
import importlib
+importlib.reload(rain)
+
+
+
+
+
+

Staggered Grids

+

After running the program with different numbers of spatial points, you will discover that the values of \(u\) are always zero at the odd numbered points, and that the values of \(h\) are always zero at the even numbered points. In other words, the values of \(u\) and \(h\) are zero in every other column starting from \(u(t, dx)\) and \(h(t, 2dx)\), respectively.

+

A look at (Non-staggered, Eqn One) and (Non-staggered, Eqn Two) can help us understand why this is the case:

+

\(u(t + dt, x)\) is dependent on \(h(t , x + dx)\) and \(h(t , x - dx)\),

+

but \(h(t , x + dx)\) is in turn dependent on \(u\) at \(x + 2dx\) and at \(x\),

+

and \(h(t , x - dx)\) is in turn dependent on \(u\) at \(x - 2dx\) and at \(x\).

+

Thus, if we just look at \(u\) at a particular \(x\), \(u(x)\) will depend on \(u(x + 2dx)\), \(u(x - 2dx)\), \(u(x + 4dx)\), \(u(x - 4dx)\), \(u(x + 6dx)\), \(u(x - 6dx),\) … but not on \(u(x + dx)\) or \(u(x - dx)\). Therefore, the problem is actually decoupled and consists of two independent problems: one problem for all the \(u\)’s at odd numbered points and all the \(h\)’s at even numbered points, and the other problem for all the +\(u\)’s at even numbered points and all the \(h\)’s at odd numbered points, as shown in Figure Unstaggered Dependency.

+
+
[ ]:
+
+
+
Image(filename='images/dependency.png',width='50%')
+
+
+
+

Figure Unstaggered Dependency

+

In either problem, only the variable that is relevant to that problem will be considered at each point. So for one problem, if at point \(x\) we consider the \(u\) variable, at \(x + dx\) and \(x -dx\) we consider \(h\). In the other problem, at the same point \(x\), we consider the variable \(h\).

+

Now we can see why every second \(u\) point and \(h\) point are zero for rain. We start with all of \(u(dx), h(2dx), u(3dx), h(4dx), u(5dx) = 0\), which means they remain at zero.

+

Since the original problem can be decoupled, we can solve for \(u\) and \(h\) on each decoupled grid separately. But why solve two problems? Instead, we solve for \(u\) and \(h\) on a single staggered grid; whereas before we solved for \(u\) and \(h\) on the complete, non-staggered grid. Figure Decoupling shows the decoupling of the grids.

+
+
[ ]:
+
+
+
Image(filename='images/decoupling.png',width='50%')
+
+
+
+

Figure Decoupling: The two staggered grids and the unstaggered grid. Note that the unstaggered grid has two variables at each grid/time point whereas the staggered grids only have one.

+

Now consider the solution of the same problem on a staggered grid. The set-up of the problem is slightly different this time; we are considering 4 spatial points in our discussion instead of 5, shown in Figure Staggered Grid. We will also be using \(h_{i}\) and \(u_{i}\) to denote the spatial points instead of \(x = dx * i\).

+
+
[ ]:
+
+
+
Image(filename='images/stagger.png',width='50%')
+
+
+
+

Figure Staggered Grid: The staggered grid for the drop in the pond problem.

+

The original equations, boundary and initial conditions are changed to reflect the staggered case. The equations are changed to the following:

+

(Staggered, Eqn 1)

+
+\[\frac {u_{i}(t+dt)-u_{i}(t-dt)}{2 dt} + g \frac {h_{i + 1}(t) - h_{i}(t)}{dx} = 0\]
+

(Staggered, Eqn 2)

+
+\[\frac {h_{i}(t+dt)-h_{i}(t-dt)}{2 dt} + H \frac {u_{i}(t) - u_{i - 1}(t)}{dx} = 0\]
+

The initial conditions are: At \(t = 0\) and \(t = dt\), all points have zero elevation except at \(h_{3}\), where

+
+\[h_{3}(0) = h_{0}\]
+
+\[h_{3}(dt) = h_{3}(0) - h_{0} Hg \frac{dt^2}{dx^2}\]
+

At \(t = 0\) and \(t = dt\), all points have zero velocity except at \(u_{2}\) and \(u_{3}\), where

+
+\[u_{2}(dt) = - h_{0} g \frac{dt}{dx}\]
+
+\[u_{3}(dt) = - u_{2}(dt)\]
+

This time we assume there is a wall at \(u_{1}\) and \(u_{4}\), so we will ignore the value of \(h_{1}\). The boundary conditions are:

+
+\[u_{1}(t) = 0\]
+
+\[u_{4}(t) = 0\]
+
+
+

Problem One

+

Modify rain.py to solve this problem (Simple equations on a staggered grid). Submit your code and a final plot for one case.

+
+
+
+

Stability: the CFL condition

+

In the previous problem, \(dt = 0.001\) s is used to find \(u\) and \(h\). If you increase \(dt\) by a factor of 100 and run your staggered grid code (rain.py as modified in Problem One) again, you can see that the magnitude of \(u\) has increased from around 0.04 to \(10^8\)! Try this by changing \(dt = 0.001\) to \(dt = 0.1\) in the code and compare the values of \(u\) when run with different values of \(dt\). This tells us that the scheme we have been using so far is unstable for large values of \(dt\).

+

To understand this instability, consider a spoked wagon wheel in an old western movie (or a car wheel with a pattern in a modern TV movie) such as that shown in Figure Wheel.

+
+
[ ]:
+
+
+
Image(filename='images/wheel_static.png',width='35%')
+
+
+
+

Figure Wheel: A spoked wagon wheel.

+

Sometimes the wheels appear to be going backwards. Both TV and movies come in frames and are shown at something like 30 frames a second. So a movie discretizes time. If the wheel moves just a little in the time step between frames, your eye connects the old position with the new position and the wheel moves forward \(-\) a single frame is shown in Figure Wheel Left.

+
+
[ ]:
+
+
+
Image(filename='images/wheel_left.png',width='35%')
+
+
+
+

Figure Wheel Left: The wheel appears to rotate counter-clockwise if its speed is slow enough.

+
+
[ ]:
+
+
+
vid = display.YouTubeVideo("hgQ66frbBEs", modestbranding=1, rel=0, width=500)
+display.display(vid)
+
+
+
+

However, if the wheel is moving faster, your eye connects each spoke with the next spoke and the wheel seems to move backwards \(-\) a single frame is depicted in Figure Wheel Right.

+
+
[ ]:
+
+
+
Image(filename='images/wheel_right.png',width='35%')
+
+
+
+

Figure Wheel Right: When the wheel spins quickly enough, it appears to rotate clockwise!

+
+
[ ]:
+
+
+
vid = display.YouTubeVideo("w8iQIwX-ek8", modestbranding=1, rel=0, width=500)
+display.display(vid)
+
+
+
+

In a similar manner, the time discretization of any wave must be small enough that a given crest moves less than half a grid point in a time step. Consider the wave pictured in Figure Wave.

+
+
[ ]:
+
+
+
Image(filename='images/wave_static.png',width='65%')
+
+
+
+

Figure Wave: A single frame of the wave.

+

If the wave moves slowly, it seems to move in the correct direction (i.e. to the left), as shown in Figure Wave Left.

+
+
[ ]:
+
+
+
Image(filename='images/wave_left.png',width='65%')
+
+
+
+

Figure Wave Left: The wave moving to the left also appears to be moving to the left if its speed is slow enough.

+
+
[ ]:
+
+
+
vid = display.YouTubeVideo("CVybMbfYRXM", modestbranding=1, rel=0, width=500)
+display.display(vid)
+
+
+
+

However if the wave moves quickly, it seems to move backward, as in Figure Wave Right.

+
+
[ ]:
+
+
+
Image(filename='images/wave_right.png',width='65%')
+
+
+
+

Figure Wave Right: If the wave moves too rapidly, then it appears to be moving in the opposite direction.

+
+
[ ]:
+
+
+
vid = display.YouTubeVideo("al2VrnkYyD0", modestbranding=1, rel=0, width=500)
+display.display(vid)
+
+
+
+

In summary, an explicit numerical scheme is unstable if the time step is too large. Such a large time step does not resolve the process being modelled. Now we need to calculate, for our problem, the maximum value of the time step for which the scheme remains stable. To do this calculation we will derive the dispersion relation of the waves in the numerical scheme. The maximum time step for stability will be the maximum time step for which all waves either maintain their magnitude or decay.

+

Mathematically, consider the equations (Staggered, Eqn 1) and (Staggered, Eqn 2). Let \(x=md\) and \(t=p\, dt\) (writing \(d\) for the grid spacing \(dx\)) and consider a solution

+

(u-solution)

+
+\[\begin{split}\begin{aligned} +u_{mp} &=& {\cal R}e \{ {\cal U} \exp [i(kx - \omega t)] \}\\ +&=& {\cal R}e \{ {\cal U} \exp [i(kmd - \omega p\, dt)] \} \nonumber\end{aligned}\end{split}\]
+
+\[\begin{split}\begin{aligned} +h_{mp} &=& {\cal R}e \{ {\cal H} \exp [i(k[x - dx/2] - \omega t)] \}\\ +&=& {\cal R}e \{ {\cal H} \exp [i(k[m - 1/2]d - \omega p\, dt)] \} \nonumber\end{aligned}\end{split}\]
+

where \({\cal R}e\) means take the real part and \({\cal U}\) and \({\cal H}\) are constants. Substitution into (Staggered, Eqn 1) and (Staggered, Eqn 2) gives two algebraic equations in \({\cal U}\) and \({\cal H}\) which can be written:

+
+\[\begin{split}\left[ +\begin{array}{cc} - \sin(\omega dt)/ dt & 2 g \sin(kd/2)/d \\ +2 H \sin(kd/2)/d & -\sin(\omega \, dt)/ dt \\ +\end{array} +\right] +\left[ +\begin{array}{c} {\cal U}\\ {\cal H}\\ +\end{array} \right] += 0.\end{split}\]
+

where \(\exp(ikd)-\exp(-ikd)\) has been written \(2 i \sin(kd)\) etc. In order for there to be a non-trivial solution, the determinant of the matrix must be zero. This determinant gives the dispersion relation

+

(Numerical Dispersion Relation)

+
+\[\frac{\sin^2(\omega \, dt)}{dt^2} = 4 gH \frac {\sin^2(kd/2)}{d^2}\]
+

This can be compared to (Analytic Dispersion Relation), the “real” dispersion relation. In particular, if we decrease both the time step and the space step, \(dt \rightarrow 0\) and \(d \rightarrow 0\), (Numerical Dispersion Relation) approaches (Analytic Dispersion Relation). The effect of just the discretization in space can be found by letting just \(dt \rightarrow 0\), which gives

+

(Continuous Time, Discretized Space Dispersion Relation)

+
+\[\omega^2 = 4 gH \frac {\sin^2(kd/2)}{d^2}\]
+

The “real” dispersion relation, (Analytic Dispersion Relation), and the numerical dispersion relation with continuous time, (Continuous Time, Discretized Space Dispersion Relation), both give \(\omega^2\) positive and therefore \(\omega\) real. However, this is not necessarily true for the numerical dispersion relation (Numerical Dispersion Relation). What does a complex \(\omega\) mean? Well, go back to (u-solution). A complex \(\omega = a + ib\) gives \(u \propto \exp(-iat)\exp(bt)\). The first exponential is oscillatory (like a wave) but the second gives exponential growth if \(b\) is positive or exponential decay if \(b\) is negative. Obviously, for a stable solution we must have \(b \le 0\). So, using (Numerical Dispersion Relation) we must find \(\omega\) and determine if it is real.

+

Now, because (Numerical Dispersion Relation) is a transcendental equation, how to determine \(\omega\) is not obvious. The following works:

+
    +
  • Re-expand \(\sin(\omega\,dt)\) as \((\exp(i \omega\,dt)-\exp(-i\omega\,dt))/2i\).

  • +
  • Write \(\exp(-i\omega\,dt)\) as \(\lambda\) and note that this implies \(\exp(i\omega\, dt) = 1/\lambda\). If \(\omega\, dt = a + ib\) then \(b = \ln |\lambda|\). For stability the magnitude of \(\lambda\) must be less than or equal to one.

  • +
  • Write \(4 gH \sin^2(kd/2)/d^2\) as \(q^2\), for brevity.

  • +
  • Substitute in (Numerical Dispersion Relation) which gives:

    +
    +\[-(\lambda-1/\lambda)^2 = 4 q^2 dt^2\]
    +

    or

    +
    +\[\lambda^4 - 2 (1 - 2 q^2 dt^2) \lambda^2 + 1 = 0\]
    +

    or

    +

    (Lambda Eqn)

    +
    +\[\lambda = \pm \left(1-2q^2 dt^2 \pm 2 q\, dt \left( q^2 dt^2 - 1 \right)^{1/2} \right)^{1/2}\]
    +
  • +
+

A plot of the four roots for \(\lambda\) is shown below in Figure Roots.
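
The magnitudes plotted in Figure Roots can be reproduced directly from (Lambda Eqn). The following is a minimal sketch (not the code that generated the figure), using complex arithmetic so the square roots are well defined on both sides of \(q\,dt = 1\).

[ ]:

import numpy as np
import matplotlib.pyplot as plt

s = np.linspace(0, 2, 401).astype(complex)   # s = q*dt
inner = 1 - 2 * s**2                         # 1 - 2 q^2 dt^2
root = 2 * s * np.sqrt(s**2 - 1)             # 2 q dt (q^2 dt^2 - 1)^(1/2), imaginary for s < 1
lam_sq = np.array([inner + root, inner - root])
lams = np.concatenate([np.sqrt(lam_sq), -np.sqrt(lam_sq)])   # the four roots of (Lambda Eqn)

for lam in lams:
    plt.plot(s.real, np.abs(lam))
plt.xlabel('q dt')
plt.ylabel(r'$|\lambda|$')
plt.title('Magnitude of the four roots (cf. Figure Roots)')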

+
+
[ ]:
+
+
+
Image(filename='images/allmag.png',width='45%')
+
+
+
+

Figure Roots: Magnitude of the four roots of \(\lambda\) as a function of \(q dt\) (not \(\omega dt\)).

+

The four roots correspond to the “real” waves travelling to the right and left, as well as two computational modes (see Section Computational Mode for more information). The plots for the four roots overlap, so it is most helpful to view separate plots for each of the roots.

+
+
[ ]:
+
+
+
Image(filename='images/multimag.png',width='60%')
+
+
+
+

Figure Separate Roots: Magnitude of the four roots of \(\lambda\) as a function of \(q dt\) (not \(\omega dt\)).

+

Now for stability \(\lambda\) must have a magnitude less than or equal to one. From Figure Separate Roots, it is easy to see that this is the same as requiring that \(|q dt|\) be less than 1.0.

+

Substituting for \(q\)

+
+\[1 > q^2 dt^2 = \frac {4gH}{d^2} \sin^2(kd/2) dt^2\]
+

for all \(k\).

+

The maximum wavenumber that can be resolved by a grid of size \(d\) is \(k = \pi/d\). At this wavenumber, the sine takes its maximum value of 1. So the time step must satisfy

+
+\[dt^2 < \frac { d^2}{4 g H}\]
+

For this case (\(f = 0\)) the ratio of the space step to the time step must be greater than twice the wave speed \(\sqrt{gH}\), or

+
+\[d / dt > 2 \sqrt{gH}.\]
+

This stability condition is known as the CFL condition (named after Courant, Friedrichs and Lewy).
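
A quick way to use this result is to compute the largest stable time step from \(dt < d/(2\sqrt{gH})\). The sketch below uses assumed, ocean-like numbers for illustration only; Problem Two asks you to work out and test the condition for the parameters actually used in your own code.

[ ]:

import numpy as np

def cfl_dt_max(d, g, H):
    """Largest stable time step for this scheme: dt < d / (2 sqrt(g H))."""
    return d / (2 * np.sqrt(g * H))

# assumed values: grid spacing 10 km, g = 9.8 m/s^2, depth 4000 m
print(cfl_dt_max(d=10.e3, g=9.8, H=4000.))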

+

On a historical note, the first attempts at weather prediction were organized by Richardson using a room full of human calculators. Each person was responsible for one grid point and passed their values to neighbouring grid points. The exercise failed dismally, and until the theory of CFL, the exact reason was unknown. The equations Richardson used included fast sound waves, so the CFL condition was

+
+\[d/dt > 2 \times 300 {\rm m/s}.\]
+

Richardson’s spatial step, \(d\), was too small compared to \(dt\) and the problem was unstable.

+
+

Problem Two

+
+
    +
  1. Find the CFL condition (in seconds) for \(dt\) for the Python example in Problem One. Test your value.

  2. Find the CFL condition (in seconds) for \(dt\) for the Python example in rain.py, i.e., for the non-staggered grid. Test your value.
+
+
+
+

Accuracy

+

A good way to determine the accuracy of a scheme is to compare the numerical solution to an analytic solution. The equations we are considering are wave equations, and so we will compare the properties of the waves. Wave properties are determined by the dispersion relation, so we will compare the numerical dispersion relation to the exact, continuous analytic dispersion relation. Both the time step and the space step (and, as we'll see below, the grid) affect the accuracy. Here we will only consider the effect of the space step. So, compare the numerical dispersion relation assuming \(dt \rightarrow 0\) (reproduced here from (Continuous Time, Discretized Space Dispersion Relation))

+
+\[\omega^2 = 4 gH \frac {\sin^2(kd/2)}{d^2}\]
+

with the exact, continuous dispersion relation (Analytic Dispersion Relation)

+
+\[\omega^2 = gHk^2\]
+

We can plot the two dispersion relations as functions of \(kd\). The graph is shown in Figure Simple Accuracy.
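
A minimal sketch of how such a comparison might be plotted is given below; the parameter values are arbitrary, since only the shape of the two curves as functions of \(kd\) matters.

[ ]:

import numpy as np
import matplotlib.pyplot as plt

g, H, d = 9.8, 1.0, 1.0            # arbitrary values for illustration
kd = np.linspace(0.01, np.pi, 200)
k = kd / d
omega_exact = np.sqrt(g * H) * k                       # from omega^2 = g H k^2
omega_grid = 2 * np.sqrt(g * H) * np.sin(kd / 2) / d   # from omega^2 = 4 g H sin^2(kd/2) / d^2

plt.plot(kd, omega_exact, label='analytic')
plt.plot(kd, omega_grid, label='discretized in space')
plt.xlabel('kd')
plt.ylabel(r'$\omega$')
plt.legend()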

+
+
[ ]:
+
+
+
Image(filename='images/simple_accuracy.png',width='50%')
+
+
+
+

Figure Simple Accuracy

+

We can see that the accuracy is good for long waves (\(k\) small) but for short waves, near the limit of the grid resolution, the discrete frequency is too small. As the phase speed is \({\omega}/{k}\), the phase speed is also too small and, most worrying, the group speed \({\partial \omega}/{\partial k}\) goes to zero!

+
+

Choosing a Grid

+
+

No variation in y

+

For the simple case above, there is little choice in grid. Let’s consider the more complicated case, \(f \not = 0\). Then \(v \not = 0\) and we have to choose where on the grid we wish to put \(v\). There are two choices:

+
+
[ ]:
+
+
+
Image(filename='images/simple_grid1.png',width='50%')
+
+
+
+
+
[ ]:
+
+
+
Image(filename='images/simple_grid2.png',width='50%')
+
+
+
+

Figure Two Simple Grids

+

For each of these, we can calculate the discrete dispersion relation discussed above.

+

For grid 1

+
+\[\omega ^2 = f^2 \cos^2(\frac{kd}{2}) + \frac{4gH \sin^2(\frac{kd}{2})}{d^2}\]
+
+
+
+

Problem Three

+

Show that for grid 2

+
+\[\omega ^2 = f^2 + \frac{4gH \sin^2(\frac{kd}{2})}{d^2}\]
+

We can plot the two dispersion relations as a function of \(k\) given the ratio of \(d/R\), where \(d\) is the grid size and \(R\) is the Rossby radius which is given by

+
+\[R = \frac {\sqrt{gH}}{f}.\]
+
+
[ ]:
+
+
+
accuracy2d.main(0.5)
+
+
+
+
+
+

Problem Four

+

Which grid gives the best accuracy for \(d=R/2\)? Explain in what ways it is more accurate. Consider the accuracy of the frequency, but also the accuracy of the group speed (the gradient of the frequency with respect to wavenumber). Describe the ranges of wavenumber for which the accuracy is good and those for which it is less good.

+
+
+

Problem Five

+

Modify rain.py to solve equations (No variation in y, first eqn), (No variation in y, second eqn) and (No variation in y, third eqn) on the most accurate grid.

+
+
+
+
+

End of Lab 7a and Beginning of Lab 7b

+
+

Full Equations

+

In order to solve the full equations (Full Equations, Eqn 1), (Full Equations, Eqn 2) and (Full Equations, Eqn 3) numerically, we need to discretize in 3 dimensions, two in space and one in time. Mesinger and Arakawa, 1976 introduced five different spatial discretizations.

+

Consider first the most obvious choice, an unstaggered grid or Arakawa A grid, shown in Figure Arakawa A Grid. We might expect, from the studies above, that an unstaggered grid may not be the best choice. Grid A does not decouple into two independent grids, because of the weak coupling through the Coriolis force. However, we will see that this grid is not as accurate as some of the staggered grids (B, C, D and E).

+
+
[ ]:
+
+
+
Image(filename='images/grid1.png',width='50%')
+
+
+
+

Figure Arakawa A Grid.

+

As the problem becomes more complicated, we need to simplify the notation; hence define a discretization operator:

+
+\[(\delta_x \alpha)_{(m,n)} \equiv \frac 1 {2d} ( \alpha_{m+1,n} - \alpha_{m-1,n} )\]
+

where \(d\) is the grid spacing in both the \(x\) and \(y\) directions. Note that this discretization is the same centered difference we have used throughout this lab.
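
As an illustration, the operator can be written as a short numpy function. This is only a sketch: it assumes \(x\) varies along axis 0 and \(y\) along axis 1, and uses np.roll so that the ends wrap around, consistent with the periodic boundary conditions used later in this lab.

[ ]:

import numpy as np

def delta_x(alpha, d):
    """(delta_x alpha)_{m,n} = (alpha_{m+1,n} - alpha_{m-1,n}) / (2 d), with periodic wrap."""
    return (np.roll(alpha, -1, axis=0) - np.roll(alpha, 1, axis=0)) / (2 * d)

def delta_y(alpha, d):
    """Same centred difference along y (axis 1)."""
    return (np.roll(alpha, -1, axis=1) - np.roll(alpha, 1, axis=1)) / (2 * d)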

+

The finite difference approximation to the full shallow water equations on the A grid becomes:

+
+\[\frac{\partial u}{\partial t} = -g \delta_x h + fv\]
+
+\[\frac{\partial v}{\partial t} = -g \delta_y h - fu\]
+
+\[\frac{\partial h}{\partial t} = -H (\delta_x u + \delta_y v)\]
+

As before, consider a centre difference scheme (leap-frog method) for the time step as well, so that

+
+\[\frac{\partial u}{\partial t}(t) = \frac {u(t+1)-u(t-1)}{2 dt}\]
+

Putting this together with the spatial scheme we have: (Numerical Scheme: Grid A)

+
+\[\frac {u(t+1)-u(t-1)}{2 dt} = -g \delta_x h(t) + fv(t)\]
+
+\[\frac{v(t+1)-v(t-1)}{2 dt} = -g \delta_y h(t) - fu(t)\]
+
+\[\frac{h(t+1)-h(t-1)}{2 dt} = -H (\delta_x u(t) + \delta_y v(t))\]
+

Each of these equations can be rearranged to give \(u(t+1)\), \(v(t+1)\) and \(h(t+1)\), respectively. Then, given the values of the three variables at every grid point at two times (\(t\) and \(t-1\)), these three equations allow you to calculate the values of the variables at \(t+1\); a minimal sketch of such an update appears after the list below. Once again, the following questions arise regarding the scheme:

+
    +
  • Is it stable?

  • +
  • Is it accurate?

  • +
+
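
The sketch referred to above is given here: one leap-frog update of (Numerical Scheme: Grid A), reusing the \(\delta_x\), \(\delta_y\) centred differences sketched earlier. It assumes periodic wrap-around in both directions and is not the solver used in the interactive examples.

[ ]:

import numpy as np

def delta_x(a, d):
    # centred difference along x (axis 0); np.roll wraps the ends (periodic assumption)
    return (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / (2 * d)

def delta_y(a, d):
    # centred difference along y (axis 1)
    return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / (2 * d)

def leapfrog_gridA(u_old, v_old, h_old, u_now, v_now, h_now, f, g, H, d, dt):
    """One leap-frog step of (Numerical Scheme: Grid A); a sketch, not the lab's own solver."""
    u_new = u_old + 2 * dt * (-g * delta_x(h_now, d) + f * v_now)
    v_new = v_old + 2 * dt * (-g * delta_y(h_now, d) - f * u_now)
    h_new = h_old - 2 * dt * H * (delta_x(u_now, d) + delta_y(v_now, d))
    return u_new, v_new, h_new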
+

Stability

+

To determine the stability of the scheme, we need to repeat the analysis of section Stability for the 2 spatial dimensions used here. The first step is to assume a form of the solutions:

+
+\[\begin{split}\begin{aligned} +z_{mnp} &=& {\cal R}e \{ {\cal Z} \exp [i(kx + \ell y - \omega t)] \}\\ +&=& {\cal R}e \{ {\cal Z} \exp [i(kmd + \ell n d- \omega p\, dt)] \} \nonumber\end{aligned}\end{split}\]
+

where \(z\) represents any of \(u\), \(v\) and \(h\) and we have let \(x=md\), \(y=nd\) and \(t=p\,dt\). Substitution into (Numerical Scheme: Grid A) gives three algebraic equations in \({\cal U}\), \({\cal V}\) and \({\cal H}\):

+
+\[\begin{split}\left[ +\begin{array}{ccc} - i \sin(\omega dt)/ dt & - f & i g \sin(kd)/d \\ +f & - i \sin(\omega dt)/ dt & i g \sin(\ell d)/d \\ +i H \sin(kd)/d & i H \sin(\ell d)/d & -i \sin(\omega \, dt)/ dt \\ +\end{array} +\right] +\left[ +\begin{array}{c} {\cal U}\\ {\cal V} \\ {\cal H}\\ +\end{array} \right] += 0.\end{split}\]
+

Setting the determinant to zero gives the dispersion relation:

+

(Full Numerical Dispersion Relation)

+
+\[\frac {\sin^2(\omega\,dt)}{dt^2} = f^2 + \frac{gH}{d^2} \left( \sin^2(kd) + \sin^2(\ell d) \right)\]
+

Still following section Stability, let \(\lambda = \exp (i \omega\, dt)\) and let \(q^2 = f^2 + {gH}/{d^2} \left( \sin^2(kd) + \sin^2(\ell d) \right)\); substitution into (Full Numerical Dispersion Relation) gives

+
+\[-(\lambda-1/\lambda)^2 = 4 q^2 dt^2\]
+

or equation (Lambda Eqn) again. For stability the magnitude of \(\lambda\) must be less than or equal to one, so

+
+\[1 > q^2 dt^2 = {dt^2} \left(f^2 + {gH}/{d^2} \left( \sin^2(kd) + \sin^2(\ell d) \right) \right)\]
+

The sines take their maximum values at \(k=\pi/(2d)\) and \(\ell=\pi/(2d)\) giving

+
+\[dt^2 < \frac{1}{f^2 + 2 gH/d^2}\]
+

This is the CFL condition for the full equations on the Arakawa A grid. Note that the most unstable mode moves at \(45^\circ\) to the grid.
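
For a quick check of this condition, the limiting time step can be evaluated numerically. The parameter values below are assumed for illustration (they match the initialization section, with \(d\) chosen arbitrarily) and are not those of Problem Six.

[ ]:

import numpy as np

def dt_max_gridA(f, g, H, d):
    """Largest stable time step on the A grid: dt < 1 / sqrt(f^2 + 2 g H / d^2)."""
    return 1.0 / np.sqrt(f**2 + 2 * g * H / d**2)

# assumed values: f = 1e-4 /s, g = 9.8 m/s^2, H = 400 m, d = 10 km
print(dt_max_gridA(f=1e-4, g=9.8, H=400., d=10.e3))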

+
+
[ ]:
+
+
+
Image(filename='images/fourgrids.png', width='80%')
+
+
+
+

Figure Four More Grids. Note that E is simply a rotation of grid B by \(45^\circ\).

+
+
+

Problem Six

+

Use the interactive example below to investigate the stability of the various grids. Calculate the stability for each of grids A, B, and C. Find the \(dt\) for stability (to one significant figure) given \(f = 1 \times 10^{-4}\)s\(^{-1}\), \(g = 10\) m s\(^{-2}\), \(H = 4000\) m and \(dx = 20\) km. Is it the same for all three grids? Why not?

+
+
[ ]:
+
+
+
# grid A is grid 1, grid B is grid 2 and grid C is grid 3
+# ngrid is the number of grid points in x and y
+# dt is the time step in seconds
+# T is the total time plotted, in seconds; 4*3600 s is 4 hours
+interactive1.interactive1(grid=3, ngrid=11, dt=150, T=4*3600)
+
+
+
+
+
+

Accuracy

+

To determine the accuracy of the spatial discretization, we will compare the numerical dispersion relation (Full Numerical Dispersion Relation) for \(dt \rightarrow 0\)

+
+\[\omega^2 = f^2 + gH \frac {\sin^2(kd)}{d^2} + gH \frac {\sin^2 (\ell d)}{d^2}\]
+

with the exact, continuous dispersion relation

+

(Full Analytic Dispersion Relation)

+
+\[\omega^2 = f^2 + gH(k^2+\ell^2)\]
+

We can plot the two dispersion relations as functions of \(k\) and \(\ell\), given the ratio of \(d/R\), where \(d\) is the grid size and \(R\) is the Rossby radius defined in the previous section. For example, the exact \(\omega\) and its discrete approximation, using Grid A and \(d/R = 1/2\), can be compared in Figure Accuracy Comparison.

+
+
[ ]:
+
+
+
Image(filename='images/accuracy_demo.png',width='60%')
+
+
+
+

Figure Accuracy Comparison: A comparison of the exact \(\omega\) and the discrete approximation using Grid A and with \(d/R=1/2\).

+

It is easy to see that the Grid A approximation is not accurate enough. There are a number of other possibilities for grids, all of which stagger the unknowns; that is, different variables are placed at different spatial positions as discussed in section Staggered Grids.

+

Four other grids, known as the Mesinger and Arakawa B, C, D and E grids, are shown above in Figure Four More Grids.

+

To work with these grids, we must introduce an averaging operator, defined in terms of half-points on the grid:

+
+\[\overline{\alpha}_{mn}^{x} = \frac{\alpha_{m+\frac{1}{2},n} + \alpha_{m-\frac{1}{2},n}}{2}\]
+

and modify the difference operator

+
+\[(\delta_{x}\alpha)_{mn} = \frac{\alpha_{m+\frac{1}{2},n} - \alpha_{m-\frac{1}{2},n}}{d}\]
+
+
+

Problem Seven

+
    +
  1. For grid B, write down the finite difference form of the shallow water equations.

  2. For grid C, write down the finite difference form of the shallow water equations.

  3. For grid D, write down the finite difference form of the shallow water equations.
+

The dispersion relation for each grid can be found in a manner analogous to that for the A grid. For the B grid the dispersion relation is

+
+\[\left( \frac {\omega}{f} \right)^2 = 1 + 4 \left( \frac {R}{d} \right)^2 \left( \sin^2 \frac {k d}{2}\cos^2 \frac{\ell d}{2} + \cos^2 \frac {k d}{2}\sin^2 \frac{\ell d}{2} \right)\]
+

and for the C grid it is

+
+\[\left( \frac {\omega}{f} \right)^2 = \cos^2 \frac {k d}{2} \cos^2 \frac{\ell d}{2} + 4 \left( \frac {R}{d} \right)^2 \left( \sin^2 \frac {k d}{2} + \sin^2 \frac {\ell d}{2} \right)\]
+
+
+

Problem Eight

+

Find the dispersion relation for the D grid.

+

In the interactive exercise below, you will enter the dispersion relation for each of the grids. Study each plot carefully for the accuracy of the phase and group speeds.

+
+
[ ]:
+
+
+
def disp_analytic(kd, ld, Rod=0.5):
+    Omegaof = 1 + Rod**2 * (kd**2 + ld**2)
+    return Omegaof
+# define disp_A, disp_B, disp_C here and run the cell
+
+
+
+
+
[ ]:
+
+
+
# replace the second disp_analytic with one of your numerical dispersion functions, e.g. disp_A
+dispersion_2d.dispersion_2d(disp_analytic, disp_analytic, Rod=0.5)
+
+
+
+
+
+

Problem Nine

+
    +
  1. For \(R/d = 2\) which grid gives the most accurate solution? As well as the closeness of fit of \(\omega/f\), also consider the group speed (gradient of the curve). The group speed is the speed at which wave energy propagates.

  2. For \(R/d = 0.2\) which grid gives the most accurate solution? As well as the closeness of fit of \(\omega/f\), also consider the group speed (gradient of the curve). The group speed is the speed at which wave energy propagates.
+
+
+
+

Details

+
+

Starting the Simulation Full Equations

+

The leap-frog scheme requires values of \(u\), \(v\) and \(h\) at step 1 as well as at step 0. However, the initial conditions only provide starting values at step 0, so we must find some other way to obtain values at step 1. There are various methods of obtaining the second set of starting values, and the code used in this laboratory uses a predictor/corrector method to obtain values at step 1 from the values at step 0. For the simple equations this process was discussed in section Predictor-Corrector to Start. For the full equations the procedure goes as follows (a code sketch is given after the list):

+
    +
  • the solution at step \(1\) is predicted using a forward Euler step:

    +
    +\[\begin{split}\begin{array}{l} + u(1) = u(0) + dt (f v(0)-g \delta_x h(0))\\ + v(1) = v(0) + dt (-f u(0)-g \delta_y h(0))\\ + h(1) = h(0) + dt (-H(\delta_x u(0) + \delta_y v(0))) +\end{array}\end{split}\]
    +
  • +
  • then, step \(\frac{1}{2}\) is estimated by averaging step \(0\) and the predicted step \(1\):

    +
    +\[\begin{split}\begin{array}{l} + u(1/2) = 1/2 (u(0)+u(1)) \\ + v(1/2) = 1/2 (v(0)+v(1)) \\ + h(1/2) = 1/2 (h(0)+h(1)) +\end{array}\end{split}\]
    +
  • +
  • finally, the step \(1\) approximation is corrected using leap frog from \(0\) to \(1\) (here, we use only a half time-step \(\frac{1}{2} dt\)):

    +
    +\[\begin{split}\begin{array}{l} + u(1) = u(0) + dt (f v(1/2)-g\delta_x h(1/2)) \\ + v(1) = v(0) + dt(-f u(1/2)-g\delta_y h(1/2)) \\ + h(1) = h(0) + dt(-H (\delta_x u(1/2)+\delta_y v(1/2))) +\end{array}\end{split}\]
    +
  • +
+
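
The code sketch referred to above is given here. It follows the three steps in the list, reusing the \(\delta_x\), \(\delta_y\) centred differences sketched earlier (periodic wrap assumed); it is a minimal illustration, not the lab's own start-up code.

[ ]:

import numpy as np

def delta_x(a, d):
    # centred difference along x (axis 0); np.roll wraps the ends (periodic assumption)
    return (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / (2 * d)

def delta_y(a, d):
    # centred difference along y (axis 1)
    return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / (2 * d)

def start_step(u0, v0, h0, f, g, H, d, dt):
    """Predictor/corrector start-up described in the list above (a sketch only)."""
    # 1. predict step 1 with a forward Euler step
    u1 = u0 + dt * (f * v0 - g * delta_x(h0, d))
    v1 = v0 + dt * (-f * u0 - g * delta_y(h0, d))
    h1 = h0 - dt * H * (delta_x(u0, d) + delta_y(v0, d))
    # 2. estimate step 1/2 by averaging step 0 and the predicted step 1
    u_half, v_half, h_half = 0.5 * (u0 + u1), 0.5 * (v0 + v1), 0.5 * (h0 + h1)
    # 3. correct step 1 using values centred on step 1/2
    u1 = u0 + dt * (f * v_half - g * delta_x(h_half, d))
    v1 = v0 + dt * (-f * u_half - g * delta_y(h_half, d))
    h1 = h0 - dt * H * (delta_x(u_half, d) + delta_y(v_half, d))
    return u1, v1, h1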
+
+

Initialization

+

The initial conditions used for the stability demo for the full equations are Poincaré waves, as described in the physical example in Section Physical Example, Poincaré Waves.

+

Assuming a surface height elevation

+
+\[h = \cos (kx+\ell y)\]
+

equations (Full Equations, Eqn 1) and (Full Equations, Eqn 2) give

+
+\[\begin{split}\begin{aligned} + u &=& \frac {-1} {H(k^2+\ell^2)} \, \left( k \omega \cos(kx+\ell + y)+f \ell \sin(kx+\ell y) \right) \nonumber \\ + v &=& \frac 1 {H(k^2+\ell^2)} \, \left( -\ell \omega \cos(kx+\ell + y)+f k \sin(kx+\ell y)\right) \nonumber \end{aligned}\end{split}\]
+

where \(\ell\) and \(k\) are selected by the user. It is assumed \(g = 9.8\) m/s\(^2\), \(H = 400\) m and \(f = 10^{-4}\) s\(^{-1}\). The value of the frequency \(\omega\) is given by the dispersion relation, (Full Analytic Dispersion Relation).
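
A minimal sketch of building these initial fields is given below. The wavenumbers, domain size and grid are assumed for illustration, chosen so that an integral number of waves fits the domain, as the periodic boundary conditions described next require.

[ ]:

import numpy as np

g, H, f = 9.8, 400., 1e-4
k, ell = 2 * np.pi / 200e3, 2 * np.pi / 200e3      # assumed wavelengths of 200 km in x and y
omega = np.sqrt(f**2 + g * H * (k**2 + ell**2))    # (Full Analytic Dispersion Relation)

x = np.linspace(0, 400e3, 101)                     # assumed 400 km domain (two full waves)
y = np.linspace(0, 400e3, 101)
X, Y = np.meshgrid(x, y, indexing='ij')
phase = k * X + ell * Y

h = np.cos(phase)
u = -(k * omega * np.cos(phase) + f * ell * np.sin(phase)) / (H * (k**2 + ell**2))
v = (-ell * omega * np.cos(phase) + f * k * np.sin(phase)) / (H * (k**2 + ell**2))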

+
+
+

Boundary Conditions

+

The boundary conditions used for the stability demo for the full equations are “periodic” in both x and y. Anything which propagates out of the left hand side comes in the right hand side, etc. This condition forces a periodicity on the flow: the wavelengths of the simulated waves must be sized so that an integral number of waves fits in the domain.

+

Specifically, for an \(m \times n\) grid, the boundary conditions for the variable \(h\) along the right and left boundaries are

+
+\[\begin{split}\begin{aligned} + h(i=1,j) &=& h(i=m-1,j)\,\,\, {\rm for}\,\,\, j=2 \,\, {\rm to} \,\,n-1\\ \nonumber + h(i=m,j) &=& h(i=2,j) \,\,\, {\rm for}\,\,\, j=2 \,\, {\rm to} \,\,n-1 + \end{aligned}\end{split}\]
+

and along the top and bottom boundaries

+
+\[\begin{split}\begin{aligned} + h(i,j=1) &=& h(i,j=n-1)\,\,\, {\rm for} \,\,\, i=2 \,\, {\rm to} \,\,m-1\\ \nonumber + h(i,j=n) &=& h(i,j=2) \,\,\, {\rm for} \,\,\, i=2 \,\, {\rm to} \,\,m-1 . + \end{aligned}\end{split}\]
+

The conditions for \(u\) and \(v\) are identical.
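
A sketch of how these conditions might be applied to a numpy array is shown below; the 0-based index mapping (column 0 corresponding to \(i=1\), column \(m-1\) to \(i=m\)) is an assumption about how the array is laid out.

[ ]:

import numpy as np

def apply_periodic(field):
    """Apply the periodic boundary conditions above to an (m, n) array (a sketch)."""
    field[0, 1:-1] = field[-2, 1:-1]    # h(i=1, j)  = h(i=m-1, j)
    field[-1, 1:-1] = field[1, 1:-1]    # h(i=m, j)  = h(i=2, j)
    field[1:-1, 0] = field[1:-1, -2]    # h(i, j=1)  = h(i, j=n-1)
    field[1:-1, -1] = field[1:-1, 1]    # h(i, j=n)  = h(i, j=2)
    return field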

+
+
+

Computational Mode

+

In section Stability it was determined that there are four oscillatory modes. Consider letting \(dt \rightarrow 0\) in (Lambda Eqn). Two of the roots give \(+1\) and two roots give \(-1\).

+

Consider the variable \(u\) at time \(p \, dt\) and one time step later at time \((p+1) \, dt\).

+
+\[\frac{u_{m(p+1)}}{u_{mp}} = \frac {{\cal R}e \{ {\cal U} \exp [i(kmd - \omega (p+1)\, dt)] \} } {{\cal R}e \{ {\cal U} \exp [i(kmd - \omega p\, dt)] \} } = \exp(-i \omega \, dt) = \lambda\]
+

Thus, \(\lambda\) gives the ratio of the velocity at the next time step to its value at the current time step. Therefore, as the time step gets very small, physically we expect the system not to change much between steps. In the limit of zero time step, the value of \(u\) or \(h\) should not change between steps; that is \(\lambda\) should be \(1\).

+

A value of \(\lambda = -1\) implies \(u\) changes sign between each step, no matter how small the step. This mode is not a physical mode of oscillation, but rather a computational mode, which is entirely non-physical. It arises from using a centre difference scheme in time and not staggering the grid in time. There are schemes that avoid introducing such spurious modes (by staggering in time), but we won't discuss them here (for more information, see Mesinger and Arakawa, 1976 [Ch. II]). However, the usual practice in geophysical fluid dynamics is to use the leap-frog scheme anyway (since it is second order in time) and find a way to keep the computational modes “small”, in some sense.

+

For a reasonably small value of \(dt\), the computational modes have \(\lambda \approx -1\). Therefore, these modes can be eliminated almost completely by averaging two adjacent time steps. To understand why this is so, think of a computational mode \(\hat{u}_{mp}\) at time level \(p\), which is added to its counterpart, \(\hat{u}_{m(p+1)} \approx -\hat{u}_{mp}\) at the next time step: their sum is approximately zero! For the code in this lab, it is adequate to average the solution in this fashion only every 101 time steps (though larger models may need to be averaged more often). After the averaging is performed, the code must be restarted; see the section Starting the Simulation above.
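
The averaging itself is trivial; a sketch is given below. The point is that a mode with \(\lambda \approx -1\) nearly cancels in the average while the physical mode with \(\lambda \approx +1\) is almost unchanged.

[ ]:

def suppress_computational_mode(field_now, field_old):
    """Average two adjacent time levels (a sketch); the lab's code does this every 101 steps."""
    return 0.5 * (field_now + field_old)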

+
+
+
+

Glossary

+

Poincaré waves: These are waves that obey the dispersion relation \(\omega ^2=f^2+k^2 c^2\), where \(c\) is the wave speed, \(k\) is the magnitude of the wavenumber vector, \(f\) is the Coriolis frequency, and \(\omega\) is the wave frequency.

+

CFL condition: named after Courant, Friedrichs and Lewy, who first derived the relationship. This is a stability condition for finite difference schemes (for propagation problems) that corresponds physically to the idea that the domain of dependence of the discrete problem must contain the corresponding domain of dependence of the continuous problem.

+

dispersive wave: Any wave whose speed varies with the wavenumber. As a consequence, waves of different wavelengths that start at the same location will move away at different speeds, and hence will spread out, or disperse.

+

dispersion relation: For dispersive waves, this relation links the frequency (and hence also the phase speed) to the wavenumber, for a given wave. See also, dispersive wave.

+

staggered grid: This refers to a problem with several unknown functions, where the discrete unknowns are not located at the same grid points; rather, they are staggered from each other. For example, it is often the case that in addition to the grid points themselves, some unknowns will be placed at the center of grid cells, or at the center of the sides of the grid cells.

+

leap-frog scheme: This term is used to refer to time discretization schemes that use the centered difference formula to discretize the first time derivative in PDE problems. The resulting difference scheme relates the solution at the next time step to the solution two time steps previous. Hence, the even- and odd-numbered time steps are linked together, with the resulting computation performed in a leap-frog fashion.

+

periodic boundary conditions: Spatial boundary conditions where the value of the solution at one end of the domain is required to be equal to the value at the other end (compare to Dirichlet boundary conditions, where the solution at both ends is fixed at a specific value or values). This enforces periodicity on the solution, and in terms of a fluid flow problem, these conditions can be thought of more intuitively as requiring that any flow out of one boundary must return through the corresponding boundary on the other side.

+

computational mode: When performing a modal analysis of a numerical scheme, this is a mode in the solution that does not correspond to any of the “real” (or physical) modes in the continuous problem. It is an artifact of the discretization process only, and can sometimes lead to spurious computational results (for example, with the leap-frog time stepping scheme).

+
+
+

References

+

Cushman-Roisin, B., 1994: Introduction to Geophysical Fluid Dynamics, Prentice Hall.

+

Gill, A.E., 1982: Atmosphere-Ocean Dynamics, Academic Press, Vol. 30 International Geophysics Series, New York.

+

Mesinger, F. and A. Arakawa, 1976: Numerical Methods Used in Atmospheric Models, GARP Publications Series No. 17, Global Atmospheric Research Programme.

+

Pond, G.S. and G.L. Pickard, 1983: Introductory Dynamic Oceanography, Pergamon, Great Britain, 2nd Edition.

+

Press, W.H., S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, 1992: Numerical Recipes in FORTRAN: The Art of Scientific Computing, Cambridge University Press, Cambridge, 2nd Edition.

+
+
[ ]:
+
+
+

+
+
+
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab7/01-lab7.ipynb b/notebooks/lab7/01-lab7.ipynb new file mode 100644 index 0000000..020b6b5 --- /dev/null +++ b/notebooks/lab7/01-lab7.ipynb @@ -0,0 +1,1755 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 7: Solving partial differential equations using an explicit, finite difference method.\n", + "\n", + "Lin Yang & Susan Allen & Carmen Guo" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "*This laboratory is long and is typically assigned in two halves. See break after Problem 5 and before Full Equations*" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems ##\n", + "- [Problem 1](#Problem-One): Numerical solution on a staggered grid.\n", + "- [Problem 2](#Problem-Two): Stability of the difference scheme\n", + "- [Problem 3](#Problem-Three): Dispersion relation for grid 2\n", + "- [Problem 4](#Problem-Four): Choosing most accurate grid\n", + "- [Problem 5](#Problem-Five): Numerical solution for no y variation\n", + "- [Problem 6](#Problem-Six): Stability on the 2-dimensional grids\n", + "- [Problem 7](#Problem-Seven): Finite difference form of equations\n", + "- [Problem 8](#Problem-Eight): Dispersion relation for D-grid\n", + "- [Problem 9](#Problem-Nine): Accuracy of the approximation on various grids" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Learning Objectives ##\n", + "\n", + "When you have completed reading and working through this lab you will be able to:\n", + "\n", + "- find the dispersion relation for a set of differential equations\n", + " (the “real” dispersion relation).\n", + "\n", + "- find the dispersion relation for a set of difference equations (the\n", + " numerical dispersion relation).\n", + "\n", + "- describe a leap-frog scheme\n", + "\n", + "- construct a predictor-corrector method\n", + "\n", + "- use the given differential equations to determine unspecified\n", + " boundary conditions as necessary\n", + "\n", + "- describe a staggered grid\n", + "\n", + "- state one reason why staggered grids are used\n", + "\n", + "- explain the physical principle behind the CFL condition\n", + "\n", + "- find the CFL condition for a linear, explicit, numerical scheme\n", + "\n", + "- state one criteria that should be considered when choosing a grid\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "\n", + "These are the suggested readings for this lab. 
For more details about\n", + "the books and papers, click on the reference link.\n", + "\n", + "- **Rotating Navier Stokes Equations**\n", + "\n", + " -  [Pond and Pickard, 1983](#Ref:PondPickard), Chapters 3,4 and 6\n", + "\n", + "- **Shallow Water Equations**\n", + "\n", + " -  [Gill, 1982](#Ref:Gill), Section 5.6 and 7.2 (not 7.2.1 etc)\n", + "\n", + "- **Poincaré Waves**\n", + "\n", + " -  [Gill, 1982](#Ref:Gill), Section 7.3 to just after equation (7.3.8), section 8.2\n", + " and 8.3\n", + "\n", + "- **Introduction to Numerical Solution of PDE’s**\n", + "\n", + " -  [Press et al, 1992](#Ref:Pressetal), Section 17.0\n", + "\n", + "- **Waves**\n", + "\n", + " -  [Cushman-Roision, 1994](#Ref:Cushman-Roisin), Appendix A" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import context\n", + "from IPython.display import Image\n", + "import IPython.display as display\n", + "# import plotting package and numerical python package for use in examples later\n", + "import matplotlib.pyplot as plt\n", + "# make the plots happen inline\n", + "%matplotlib inline\n", + "# import the numpy array handling library\n", + "import numpy as np\n", + "# import the quiz script\n", + "from numlabs.lab7 import quiz7 as quiz\n", + "# import the pde solver for a simple 1-d tank of water with a drop of rain\n", + "from numlabs.lab7 import rain\n", + "# import the dispersion code plotter\n", + "from numlabs.lab7 import accuracy2d\n", + "# import the 2-dimensional drop solver\n", + "from numlabs.lab7 import interactive1\n", + "# import the 2-dimensional dispersion relation plotter\n", + "from numlabs.lab7 import dispersion_2d" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Physical Example, Poincaré Waves\n", + "\n", + "One of the obvious examples of a physical phenomena governed by a\n", + "partial differential equation is waves. Consider a shallow layer of\n", + "water and the waves on the surface of that layer. If the depth of the\n", + "water is much smaller than the wavelength of the waves, the velocity of\n", + "the water will be the same throughout the depth. So then we can describe\n", + "the state of the water by three variables: $u(x,y,t)$, the east-west\n", + "velocity of the water, $v(x,y,t)$, the north-south velocity of the water\n", + "and $h(x,y,t)$, the height the surface of the water is deflected. As\n", + "specified, each of these variables are functions of the horizontal\n", + "position, $(x,y)$ and time $t$ but, under the assumption of shallow\n", + "water, not a function of $z$.\n", + "\n", + "In oceanographic and atmospheric problems, the effect of the earth’s\n", + "rotation is often important. We will first introduce the governing\n", + "equations including the Coriolis force ([Full Equations](#Full-Equations)). However,\n", + "most of the numerical concepts can be considered without all the\n", + "complications in these equations. 
We will also consider two simplier\n", + "sets; one where we assume there is no variation of the variables in the\n", + "y-direction ([No variation in y](#No-variation-in-y)) and one where, in addition, we assume\n", + "that the Coriolis force is negligible ([Simple Equations](#Simple-Equations)).\n", + "\n", + "The solution of the equations including the Coriolis force are Poincaré\n", + "waves whereas without the Coriolis force, the resulting waves are called\n", + "shallow water gravity waves.\n", + "\n", + "The remainder of this section will present the equations and discuss the\n", + "dispersion relation for the two simplier sets of equations. If your wave\n", + "theory is rusty, consider reading Appendix A in [Cushman-Roisin, 1994](#Ref:Cushman-Roisin)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Introduce Full Equations \n", + "[full-equations]:(#Introduce-Full-Equations)\n", + "\n", + "The linear shallow water equations on an f-plane over a flat bottom are\n", + "
\n", + "(Full Equations, Eqn 1)\n", + "$$\\frac{\\partial u}{\\partial t} - fv = -g\\frac{\\partial h}{\\partial x}$$\n", + "
\n", + "(Full Equations, Eqn 2)\n", + "$$\\frac{\\partial v}{\\partial t} + fu = -g\\frac{\\partial h}{\\partial y} $$\n", + "
\n", + "(Full Equations, Eqn 3)\n", + "$$\\frac{\\partial h}{\\partial t} + H\\frac{\\partial u}{\\partial x} + H\\frac{\\partial v}{\\partial y} = 0$$ \n", + "
\n", + "where\n", + "\n", + "- $\\vec{u} = (u,v)$ is the horizontal velocity,\n", + "\n", + "- $f$ is the Coriolis frequency,\n", + "\n", + "- $g$ is the acceleration due to gravity,\n", + "\n", + "- $h$ is the surface elevation, and\n", + "\n", + "- $H$ is the undisturbed depth of the fluid.\n", + "\n", + "We will return to these equations in section [Full Equations](#Full-Equations)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### No variation in y\n", + "[no-variation-in-y.unnumbered]: (#No-variation-in-y)\n", + "\n", + "To simplify the problem assume there is no variation in y. This\n", + "simplification gives:\n", + "
\n", + "(No variation in y, first eqn)\n", + "$$\\frac{\\partial u}{\\partial t} - fv = -g\\frac{\\partial h}{\\partial x}$$ \n", + "
\n", + "(No variation in y, second eqn)\n", + "$$\\frac{\\partial v}{\\partial t} + fu = 0$$\n", + "
\n", + "(No variation in y, third eqn)\n", + "$$\\frac{\\partial h}{\\partial t} + H\\frac{\\partial u}{\\partial x} = 0$$\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Introduce Simple Equations\n", + "[simple-equations]:(#Simple-Equations)\n", + "\n", + "If we consider waves in the absence of the earth’s rotation, $f=0$,\n", + "which implies $v=0$ and we get\n", + "
\n", + "$$\\frac{\\partial u}{\\partial t} = -g\\frac{\\partial h}{\\partial x}$$\n", + "
\n", + "$$\\frac{\\partial h}{\\partial t} + H\\frac{\\partial u}{\\partial x} = 0$$\n", + "
\n", + "\n", + "These simplified equations give shallow water gravity waves. For\n", + "example, a solution is a simple sinusoidal wave:\n", + "
\n", + "(wave solution- h)\n", + "$$h = h_{0}\\cos{(kx - \\omega t)}$$\n", + "
\n", + "(wave solution- u)\n", + "$$u = \\frac{h_{0}\\omega}{kH}\\cos{(kx - \\omega t)}$$ \n", + "
\n", + "where $h_{0}$ is the amplitude, $k$ is the\n", + "wavenumber and $\\omega$ is the frequency (See [Cushman-Roisin, 1994](#Ref:Cushman-Roisin) for a nice\n", + "review of waves in Appendix A).\n", + "\n", + "Substitution of ([wave solution- h](#lab7:sec:hwave)) and ([wave solution- u](#lab7:sec:uwave)) back into\n", + "the differential equations gives a relation between $\\omega$ and k.\n", + "Confirm that \n", + "
\n", + "(Analytic Dispersion Relation)\n", + "$$\\omega^2 = gHk^2,$$\n", + "
\n", + "which is the dispersion relation for these waves." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### No variation in y\n", + "[no-variation-in-y-1.unnumbered]:(#No-variation-in-y)\n", + "\n", + "Now consider $f\\not = 0$.\n", + "\n", + "By assuming\n", + "$$h= h_{0}e^{i(kx - \\omega t)}$$\n", + "$$u= u_{0}e^{i(kx - \\omega t)}$$\n", + "$$v= v_{0}e^{i(kx - \\omega t)}$$\n", + "\n", + "and substituting into the differential equations, eg, for [(No variation in y, first eqn)](#lab7:sec:firsteq)\n", + "$$-iwu_{0}e^{i(kx - \\omega t)} - fv_{0}e^{i(kx - \\omega t)} + ikgh_{0}e^{i(kx - \\omega t)} = 0$$\n", + "and cancelling the exponential terms gives 3 homogeneous equations for\n", + "$u_{0}$, $v_{0}$ and $h_{0}$. If the determinant of the matrix derived\n", + "from these three equations is non-zero, the only solution is\n", + "$u_{0} = v_{0} = h_{0} = 0$, NO WAVE! Therefore the determinant must be\n", + "zero." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Quiz: Find the Dispersion Relation\n", + "\n", + "What is the dispersion relation for 1-dimensional Poincare waves?\n", + "\n", + "A) $\\omega^2 = f^2 + gH (k^2 + \\ell^2)$\n", + "\n", + "B) $\\omega^2 = gH k^2$\n", + "\n", + "C) $\\omega^2 = f^2 + gH k^2$\n", + "\n", + "D) $\\omega^2 = -f^2 + gH k^2$\n", + "\n", + "In the following, replace 'x' by 'A', 'B', 'C' or 'D' and run the cell." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.dispersion_quiz(answer = 'XXX'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Numerical Solution\n", + "\n", + "### Simple Equations\n", + "[simple-equations]:(#Simple-Equations)\n", + "\n", + "Consider first the simple equations with $f = 0$. In order to solve\n", + "these equations numerically, we need to discretize in 2 dimensions, one\n", + "in space and one in time. Consider first the most obvious choice, shown\n", + "in Figure [Unstaggered Grid](#lab7:fig:nonstagger)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/nonstagger.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Unstaggered Grid.\n", + "
\n", + "\n", + "We will use centred difference schemes in both $x$ and $t$. The\n", + "equations become:\n", + "
\n", + "(Non-staggered, Eqn One)\n", + "$$\\frac {u(t+dt, x)-u(t-dt, x)}{2 dt} + g \\frac {h(t, x+dx) - h(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "(Non-staggered, Eqn Two)\n", + "$$\\frac {h(t+dt, x)-h(t-dt, x)}{2 dt} + H \\frac {u(t, x+dx) - u(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "We can rearrange these equations to\n", + "give $u(t+dt, x)$ and $h(t+dt, x)$. For a small number of points, the\n", + "resulting problem is simple enough to solve in a notebook.\n", + "\n", + "For a specific example, consider a dish, 40 cm long, of water 1 cm deep.\n", + "Although the numerical code presented later allows you to vary the\n", + "number of grid points, in the discussion here we will use only 5 spatial\n", + "points, a distance of 10 cm apart. The lack of spatial resolution means\n", + "the wave will have a triangular shape. At $t=0$ a large drop of water\n", + "lands in the centre of the dish. So at $t=0$, all points have zero\n", + "velocity and zero elevation except at $x=3dx$, where we have\n", + "$$h(0, 3dx) = h_{0} = 0.01 cm$$\n", + "\n", + "A centred difference scheme in time, such as defined by equations\n", + "([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)), is\n", + "usually refered to as a *Leap frog scheme*. The new values, $h(t+dt)$\n", + "and $u(t+dt)$ are equal to values two time steps back $h(t-dt)$ and\n", + "$u(t-dt)$ plus a correction based on values calculated one time step\n", + "back. Hence the time scheme “leap-frogs” ahead. More on the consequences\n", + "of this process can be found in section [Computational Mode](#Computational-Mode).\n", + "\n", + "As a leap-frog scheme requires two previous time steps, the given\n", + "conditions at $t=0$ are not sufficient to solve\n", + "([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)). We\n", + "need the solutions at two time steps in order to step forward." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Predictor-Corrector to Start\n", + "[lab7:sec:pred-cor]:(#Predictor-Corrector-to-Start)\n", + "\n", + "In section 4.2.2 of Lab 2, predictor-corrector methods were introduced.\n", + "We will use a predictor-corrector based on the forward Euler scheme, to\n", + "find the solution at the first time step, $t=dt$. Then the second order\n", + "scheme ([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)), ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)) can be used.\n", + "\n", + "Using the forward Euler Scheme, the equations become\n", + "
\n", + "$$\\frac {u(t+dt, x)-u(t, x)}{dt} + g \\frac {h(t, x+dx) - h(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "$$\\frac {h(t+dt, x)-h(t, x)}{dt} + H \\frac {u(t, x+dx) - u(t, x-dx)}{2dx} = 0$$\n", + "
\n", + "\n", + "1. Use this scheme to predict $u$ and $h$ at $t=dt$.\n", + "\n", + "2. Average the solution at $t=0$ and that predicted for $t=dt$, to\n", + " estimate the solution at $t=\\frac{1}{2}dt$. You should confirm that\n", + " this procedure gives: $$u(\\frac{dt}{2}) = \\left\\{ \\begin{array}{ll}\n", + " 0 & { x = 3dx}\\\\\n", + " \\left({-gh_{0}dt}\\right)/\\left({4dx}\\right) & { x = 2dx}\\\\\n", + " \\left({gh_{0}dt}\\right)/\\left({4dx}\\right) & { x = 4dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + " $$h(\\frac{dt}{2}) = \\left\\{ \\begin{array}{ll}\n", + " h_{0} & { x = 3dx}\\\\\n", + " 0 & { x \\not= 3dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + "3. The corrector step uses the centred difference scheme in time (the\n", + " leap-frog scheme) with a time step of ${dt}/{2}$ rather than dt. You\n", + " should confirm that this procedure gives:\n", + " $$u(dt) = \\left\\{ \\begin{array}{ll}\n", + " 0 & { x = 3dx}\\\\\n", + " \\left({-gh_{0}dt}\\right)/\\left({2dx}\\right) & { x = 2dx}\\\\\n", + " \\left({gh_{0}dt}\\right)/\\left({2dx}\\right) & { x = 4dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + " $$h(dt) = \\left\\{ \\begin{array}{ll}\n", + " 0 & { x = 2dx, 4dx}\\\\\n", + " h_{0} - \\left({gHdt^2 h_{0}}\\right)/\\left({4dx^2}\\right) & { x = 3dx}\n", + " \\end{array}\n", + " \\right.$$\n", + "\n", + "Note that the values at $x=dx$ and $x=5dx$ have not been specified.\n", + "These are boundary points and to determine these values we must consider\n", + "the boundary conditions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Boundary Conditions\n", + "\n", + "If we are considering a dish of water, the boundary conditions at\n", + "$x=dx, 5dx$ are those of a wall. There must be no flow through the wall.\n", + "$$u(dx) = 0$$\n", + "$$u(5dx) = 0$$\n", + "But these two conditions are not\n", + "sufficient; we also need $h$ at the walls. If $u=0$ at the wall for all\n", + "time then $\\partial u/\\partial t=0$ at the wall, so $\\partial h/\\partial x=0$ at the wall. Using a\n", + "one-sided difference scheme this gives\n", + "$$\\frac {h(2dx) - h(dx)}{dx} = 0$$\n", + "or\n", + "$$h(dx) = h(2dx)$$\n", + "and\n", + "$$\\frac {h(4dx) - h(5dx)}{dx} = 0$$\n", + "or\n", + "$$h(5dx) = h(4dx)$$\n", + "which gives the required boundary conditions on $h$ at the wall." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Simple Equations on a Non-staggered Grid\n", + "\n", + "1. Given the above equations and boundary conditions, we can find the\n", + " values of $u$ and $h$ at all 5 points when $t = 0$ and $t = dt$.\n", + "\n", + "2. From ([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)), we can find the values of $u$ and\n", + " $h$ for $t = 2dt$ using $u(0, x)$, $u(dt, x)$, $h(0, x)$, and\n", + " $h(dt, x)$.\n", + "\n", + "3. Then we can find the values of $u$ and $h$ at $t = 3dt$ using\n", + " $u(dt, x)$, $u(2dt, x)$, $h(dt, x)$, and $h(2dt, x)$.\n", + "\n", + "We can use this approach recursively to determine the values of $u$ and\n", + "$h$ at any time $t = n * dt$. The python code that solves this problem\n", + "is provided in the file rain.py. It takes two arguments, the first is the\n", + "number of time steps and the second is the number of horizontal grid\n", + "points. \n", + "\n", + "The output is two coloured graphs. The color represents time with black\n", + "the earliest times and red later times. 
The upper plot shows the water\n", + "velocity (u) and the lower plot shows the water surface. To start with\n", + "the velocity is 0 (black line at zero across the whole domain) and the\n", + "water surface is up at the mid point and zero at all other points (black\n", + "line up at midpoint and zero elsewhere)\n", + "\n", + "Not much happens in 6 time-steps. Do try longer and more grid points." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "rain.rain([6, 5])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "If you want to change something in the script (say the colormap I've chosen, viridis, doesn't work for you), you can edit rain.py in an editor or spyder. To make it take effect here though, you have to reload rain. See next cell for how to. You will also need to do this if you do problem one or other tests changing rain.py but running in a notebook." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import importlib\n", + "importlib.reload(rain)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Staggered Grids\n", + "[lab7:sec:staggered]:(#Staggered-Grids)\n", + "\n", + "After running the program with different numbers of spatial points, you\n", + "will discover that the values of $u$ are always zero at the odd numbered\n", + "points, and that the values of $h$ are always zero at the even numbered\n", + "points. In other words, the values of $u$ and $h$ are zero in every\n", + "other column starting from $u(t, dx)$ and $h(t, 2dx)$, respectively.\n", + "\n", + "A look at ([Non-staggered, Eqn One](#lab7:eq:nonstaggerGrid1)) and ([Non-staggered, Eqn Two](#lab7:eq:nonstaggerGrid2)) can help us understand why this is the\n", + "case:\n", + "\n", + "$u(t + dt, x)$ is dependent on $h(t , x + dx)$ and $h(t , x - dx)$,\n", + "\n", + "but $h(t , x + dx)$ is in turn dependent on $u$ at $x + 2dx$ and at\n", + "$x$,\n", + "\n", + "and $h(t , x - dx)$ is in turn dependent on $u$ at $x - 2dx$ and at\n", + "$x$.\n", + "\n", + "Thus, if we just look at $u$ at a particular $x$, $u(x)$ will depend on\n", + "$u(x + 2dx)$, $u(x - 2dx)$, $u(x + 4dx)$, $u(x - 4dx)$, $u(x + 6dx)$,\n", + "$u(x - 6dx),$ ... but not on $u(x + dx)$ or $u(x - dx)$. Therefore, the\n", + "problem is actually decoupled and consists of two independent problems:\n", + "one problem for all the $u$’s at odd numbered points and all the $h$’s\n", + "at even numbered points, and the other problem for all the $u$’s at even\n", + "numbered points and all the $h$’s at odd numbered points, as shown in\n", + "Figure [Unstaggered Dependency](#lab7:fig:dependency)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/dependency.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Unstaggered Dependency\n", + "
\n", + "\n", + "In either problem, only the variable that is relevant to that problem\n", + "will be considered at each point. So for one problem, if at point $x$ we\n", + "consider the $u$ variable, at $x + dx$ and $x -dx$ we consider $h$. In\n", + "the other problem, at the same point $x$, we consider the variable $h$.\n", + "\n", + "Now we can see why every second $u$ point and $h$ point are zero for\n", + "*rain*. We start with all of\n", + "$u(dx), h(2dx), u(3dx), h(4dx), u(5dx) = 0$, which means they remain at\n", + "zero.\n", + "\n", + "Since the original problem can be decoupled, we can solve for $u$ and\n", + "$h$ on each decoupled grid separately. But why solve two problems?\n", + "Instead, we solve for $u$ and $h$ on a single staggered grid; whereas\n", + "before we solved for $u$ and $h$ on the complete, non-staggered grid.\n", + "Figure [Decoupling](#lab7:fig:decoupling) shows the decoupling of the grids." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/decoupling.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Decoupling: The two staggered grids and the unstaggered grid. Note that the\n", + "unstaggered grid has two variables at each grid/time point whereas the\n", + "staggered grids only have one.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now consider the solution of the same problem on a staggered grid. The\n", + "set-up of the problem is slightly different this time; we are\n", + "considering 4 spatial points in our discussion instead of 5, shown in\n", + "Figure [Staggered Grid](#lab7:fig:stagger). We will also be using $h_{i}$ and $u_{i}$ to\n", + "denote the spatial points instead of $x = dx * i$." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/stagger.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Staggered Grid: The staggered grid for the drop in the pond problem.\n", + "
\n", + "\n", + "The original equations, boundary and initial conditions are changed to\n", + "reflect the staggered case. The equations are changed to the following:\n", + "
\n", + "(Staggered, Eqn 1)\n", + "$$\\frac {u_{i}(t+dt)-u_{i}(t-dt)}{2 dt} + g \\frac {h_{i + 1}(t) - h_{i}(t)}{dx} = 0$$\n", + "
\n", + "(Staggered, Eqn 2)\n", + "$$\\frac {h_{i}(t+dt)-h_{i}(t-dt)}{2 dt} + H \\frac {u_{i}(t) - u_{i - 1}(t)}{dx} = 0$$\n", + "
\n", + "\n", + "The initial conditions are: At $t = 0$ and $t = dt$, all points have\n", + "zero elevation except at $h_{3}$, where \n", + "$$h_{3}(0) = h_{0}$$\n", + "$$h_{3}(dt) = h_{3}(0) - h_{0} Hg \\frac{dt^2}{dx^2}$$ \n", + "At $t = 0$ and\n", + "$t = dt$, all points have zero velocity except at $u_{2}$ and $u_{3}$,\n", + "where \n", + "$$u_{2}(dt) = - h_{0} g \\frac{dt}{dx}$$\n", + "$$u_{3}(dt) = - u_{2}(dt)$$ \n", + "This time we assume there is a wall at\n", + "$u_{1}$ and $u_{4}$, so we will ignore the value of $h_{1}$. The\n", + "boundary conditions are: \n", + "$$u_{1}(t) = 0$$ \n", + "$$u_{4}(t) = 0$$\n", + "\n", + "### Problem One\n", + "[lab7:prob:staggered]:(#Problem-One)\n", + "\n", + "Modify *rain.py* to solve this problem (Simple\n", + "equations on a staggered grid). Submit your code and a final plot for\n", + "one case.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Stability: the CFL condition\n", + "[lab7:stability]:(#Stability:-the-CFL-condition)\n", + "\n", + "In the previous problem, $dt = 0.001$s is used to find $u$ and $h$. If\n", + "you increase $dt$ by 100 times, and run *rain.py* on your staggered grid\n", + "code again, you can see that the magnitude of $u$ has increased from\n", + "around 0.04 to $10^8$! Try this by changing $dt = 0.001$ to\n", + "$dt = 0.1$ in the code and compare the values of $u$ when run with\n", + "different values of $dt$. This tells us that the scheme we have been\n", + "using so far is unstable for large values of $dt$.\n", + "\n", + "To understand this instability, consider a spoked wagon wheel in an old\n", + "western movie (or a car wheel with a pattern in a modern TV movie) such\n", + "as that shown in Figure [Wheel](#lab7:fig:wheel-static)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wheel_static.png',width='35%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wheel: A spoked wagon wheel.\n", + "
\n", + "\n", + "Sometimes the wheels appear to be going backwards. Both TV and\n", + "movies come in frames and are shown at something like 30 frames a second.\n", + "So a movie discretizes time. If the wheel moves just a little in the\n", + "time step between frames, your eye connects the old position with the\n", + "new position and the wheel moves forward $-$ a single frame is shown in\n", + "Figure [Wheel Left](#lab7:fig:wheel-left)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wheel_left.png',width='35%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wheel Left: The wheel appears to rotate counter-clockwise if its speed is slow\n", + "enough.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "scrolled": true + }, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"hgQ66frbBEs\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "However, if the wheel is moving faster, your eye connects each spoke\n", + "with the next spoke and the wheel seems to move backwards $-$ a single\n", + "frame is depicted in Figure [Wheel Right](#lab7:fig:wheel-right)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wheel_right.png',width='35%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wheel Right: When the wheel spins quickly enough, it appears to rotate\n", + "clockwise!\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"w8iQIwX-ek8\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In a similar manner, the time discretization of any wave must be small\n", + "enough that a given crest moves less than half a grid point in a time\n", + "step. Consider the wave pictured in Figure [Wave](#lab7:fig:wave-static)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wave_static.png',width='65%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wave: A single frame of the wave.\n", + "
\n", + "\n", + "If the wave moves slowly, it seems to move in the correct direction\n", + "(i.e. to the left), as shown in Figure [Wave Left](#lab7:fig:wave-left)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wave_left.png',width='65%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Wave Left: The wave moving to the left also appears to be moving to the left if\n", + "its speed is slow enough.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"CVybMbfYRXM\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "However if the wave moves quickly, it seems to move backward, as in\n", + "Figure [Wave Right](#lab7:fig:wave-right)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/wave_right.png',width='65%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "Figure Wave Right: If the wave moves too rapidly, then it appears to be moving in the\n", + "opposite direction." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "vid = display.YouTubeVideo(\"al2VrnkYyD0\", modestbranding=1, rel=0, width=500)\n", + "display.display(vid)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In summary, an explicit numerical scheme is unstable if the time step is\n", + "too large. Such a large time step does not resolve the process being\n", + "modelled. Now we need to calculate, for our problem, the maximum value\n", + "of the time step is for which the scheme remains stable. To do this\n", + "calculation we will derive the dispersion relation of the waves\n", + "*in* the numerical scheme. The maximum time step for\n", + "stability will be the maximum time step for which all waves either\n", + "maintain their magnitude or decay.\n", + "\n", + "Mathematically, consider the equations ([Staggered, Eqn 1](#lab7:eq:staggerGrid1)) and\n", + "([Staggered, Eqn 2](#lab7:eq:staggerGrid2)). Let $x=md$ and $t=p\\, dt$ and consider a\n", + "solution \n", + "
\n", + "(u-solution)\n", + "$$\\begin{aligned}\n", + "u_{mp} &=& {\\cal R}e \\{ {\\cal U} \\exp [i(kx - \\omega t)] \\}\\\\\n", + "&=& {\\cal R}e \\{ {\\cal U} \\exp [i(kmd - \\omega p\\, dt)] \\} \\nonumber\\end{aligned}$$\n", + "$$\\begin{aligned}\n", + "h_{mp} &=& {\\cal R}e \\{ {\\cal H} \\exp [i(k[x - dx/2] - \\omega t)] \\}\\\\\n", + "&=& {\\cal R}e \\{ {\\cal H} \\exp [i(k[m - 1/2]d - \\omega p\\, dt)] \\} \\nonumber\\end{aligned}$$\n", + "where ${\\cal R}e$ means take the real part and ${\\cal U}$ and ${\\cal H}$\n", + "are constants. Substitution into ([Staggered, Eqn 1](#lab7:eq:staggerGrid1)) and\n", + "([Staggered, Eqn 2](#lab7:eq:staggerGrid2)) gives two algebraic equations in ${\\cal U}$\n", + "and ${\\cal H}$ which can be written: \n", + "$$\\left[\n", + "\\begin{array}{cc} - \\sin(\\omega dt)/ dt & 2 g \\sin(kd/2)/d \\\\\n", + "2 H \\sin(kd/2)/d & -\\sin(\\omega \\, dt)/ dt \\\\\n", + "\\end{array}\n", + "\\right]\n", + "\\left[\n", + "\\begin{array}{c} {\\cal U}\\\\ {\\cal H}\\\\ \n", + "\\end{array} \\right]\n", + "= 0.$$ \n", + "
\n", + "where, for example, $\exp(ikd/2)-\exp(-ikd/2)$ has been written $2 i \sin(kd/2)$, etc.\n", + "In order for there to be a non-trivial solution, the determinant of the\n", + "matrix must be zero. This determinant gives the dispersion relation\n", + "
\n", + "(Numerical Dispersion Relation)\n", + "$$ \\frac{\\sin^2(\\omega \\, dt)}{dt^2} = 4 gH \\frac {\\sin^2(kd/2)}{d^2}$$\n", + "
\n", + "This can be compared to ([Analytic Dispersion Relation](#lab7:eq:disp)), the “real” dispersion\n", + "relation. In particular, if we decrease both the time step and the space\n", + "step, $dt \rightarrow 0$ and $d \rightarrow 0$, \n", + "([Numerical Dispersion Relation](#lab7:eq:numerDisp))\n", + "approaches ([Analytic Dispersion Relation](#lab7:eq:disp)). The effect of the discretization in\n", + "space alone can be found by letting just $dt \rightarrow 0$, which gives\n", + "
\n", + "(Continuous Time, Discretized Space Dispersion Relation)\n", + "$$\\omega^2 = 4 gH \\frac {\\sin^2(kd/2)}{d^2}$$\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The “real” dispersion relation, ([Analytic Dispersion Relation](#lab7:eq:disp)), and the numerical\n", + "dispersion relation with continuous time,\n", + "([Continuous Time, Discretized Space Dispersion Relation](#lab7:eq:numerDispSpace)), both give $\\omega^2$ positive and\n", + "therefore $\\omega$ real. However, this is not necessarily true for the\n", + "numerical dispersion relation ([Numerical Dispersion Relation](#lab7:eq:numerDisp)). What does a\n", + "complex $\\omega$ mean? Well, go back to ([u-solution](#eq:udis)). A complex\n", + "$\\omega = a + ib$ gives $u \\propto \\exp(-iat)\\exp(bt)$. The first\n", + "exponential is oscillatory (like a wave) but the second gives\n", + "exponential growth if $b$ is positive or exponential decay if $b$ is\n", + "negative. Obviously, for a stable solution we must have $b \\le 0$. So,\n", + "using ([Numerical Dispersion Relation](#lab7:eq:numerDisp)) we must find $\\omega$ and determine if it\n", + "is real.\n", + "\n", + "Now, because ([Numerical Dispersion Relation](#lab7:eq:numerDisp)) is a transcendental equation, how\n", + "to determine $\\omega$ is not obvious. The following works:\n", + "\n", + "- Re-expand $\\sin(\\omega\\,dt)$ as\n", + " $(\\exp(i \\omega\\,dt)-\\exp(-i\\omega\\,dt))/2i$.\n", + "\n", + "- Write $\\exp(-i\\omega\\,dt)$ as $\\lambda$ and note that this implies\n", + " $\\exp(i\\omega\\, dt) = 1/\\lambda$. If $\\omega dt = a + ib$ then\n", + " $b = ln |\\lambda|$. For stability the magnitude of $\\lambda$ must\n", + " be less than one.\n", + "\n", + "- Write $4 gH \\sin^2(kd/2)/d^2$ as $q^2$, for brevity.\n", + "\n", + "- Substitute in ([Numerical Dispersion Relation](#lab7:eq:numerDisp)) which gives:\n", + " $$-(\\lambda-1/\\lambda)^2 = 4 q^2 dt^2$$ \n", + " or\n", + " $$\\lambda^4 - 2 (1 - 2 q^2 dt^2) \\lambda^2 + 1 = 0$$ \n", + " or\n", + "
\n", + " (Lambda Eqn)\n", + " $$\\lambda = \\pm \\left(1-2q^2 dt^2 \\pm 2 q dt \\left( q^2\n", + " dt^2 - 1 \\right)^{1/2} \\right)^{1/2}$$ \n", + "
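\n", + "A quick numerical check of this result (a sketch, not part of the original lab code; it assumes only `numpy`) evaluates the magnitude of all four roots over a range of $q\, dt$:" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", + "qdt = np.linspace(0, 2, 5)                 # the product q*dt\n", + "inner = (1 - 2 * qdt**2).astype(complex)\n", + "outer = 2 * qdt * np.sqrt(qdt.astype(complex)**2 - 1)\n", + "for s1 in (+1, -1):\n", + "    for s2 in (+1, -1):\n", + "        lam = s1 * np.sqrt(inner + s2 * outer)\n", + "        # magnitudes are 1 for q*dt <= 1; one pair of roots grows once q*dt > 1\n", + "        print(np.round(np.abs(lam), 3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "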
\n", + "\n", + "A plot of the four roots for $\\lambda$ is shown below in\n", + "Figure [Roots](#lab7:fig:allmag)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/allmag.png',width='45%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Roots: Magnitude of the four roots of $\\lambda$ as a function of $q dt$ (not\n", + "$\\omega dt$).\n", + "
\n", + "\n", + "The four roots correspond to the “real” waves travelling to the right\n", + "and left, as well as two *computational modes* (see\n", + "Section [Computational Mode](#Computational-Mode) for more information). The plots for\n", + "the four roots overlap, so it is most helpful to view [separate plots for each of the roots](#lab7:fig:sepmag). " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Image(filename='images/multimag.png',width='60%')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "
\n", + "Figure Separate Roots: Magnitude of the four roots of $\\lambda$ as a function of $q dt$ (not $\\omega dt$).\n", + "
\n", + "\n", + "Now for stability\n", + "$\\lambda$ must have a magnitude less than or equal to one. From\n", + "Figure [Separate Roots](#lab7:fig:sepmag), it is easy to see that this is the same as\n", + "requiring that $|q dt|$ be less than 1.0.\n", + "\n", + "Substituting for $q$\n", + "$$1 > q^2 dt^2 = \\frac {4gH}{d^2} \\sin^2(kd/2) dt^2$$ \n", + "for all $k$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The maximum wavenumber that can be resolved by a grid of size $d$ is\n", + "$k = \\pi/d$. At this wavenumber, the sine takes its maximum value of 1.\n", + "So the time step \n", + "
\n", + "$$dt^2 < \\frac { d^2}{4 g H}$$\n", + "
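\n", + "Translated into code this limit is a one-liner (a sketch with made-up values of $g$, $H$ and $d$; Problem Two below asks for the values appropriate to *rain.py* and the staggered-grid code):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", + "g, H, d = 9.8, 1.0, 0.5              # illustrative values only\n", + "dt_max = d / (2 * np.sqrt(g * H))    # from dt^2 < d^2 / (4 g H)\n", + "print(dt_max)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "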
\n", + "\n", + "For this case ($f = 0$) the ratio of the space step to the time step\n", + "must be greater than the wave speed $\\sqrt\n", + "{gH}$, or $$d / dt > 2 \\sqrt{gH}.$$ This stability condition is known\n", + "as **the CFL condition** (named after Courant, Friedrich and Levy).\n", + "\n", + "On a historical note, the first attempts at weather prediction were\n", + "organized by Richardson using a room full of human calculators. Each\n", + "person was responsible for one grid point and passed their values to\n", + "neighbouring grid points. The exercise failed dismally, and until the\n", + "theory of CFL, the exact reason was unknown. The equations Richardson\n", + "used included fast sound waves, so the CFL condition was\n", + "$$d/dt > 2 \\times 300 {\\rm m/s}.$$ \n", + "Richardson’s spatial step, $d$, was\n", + "too small compared to $dt$ and the problem was unstable.\n", + "\n", + "### Problem Two \n", + "[lab7:prob:stability]:(#Problem-Two)\n", + "> a) Find the CFL condition (in seconds) for $dt$\n", + "for the Python example in Problem One.\n", + "\n", + "\n", + "\n", + "Test your value. \n", + "\n", + "b) Find the CFL condition (in seconds) for $dt$ for the Python\n", + "example in *rain.py*, ie., for the non-staggered grid. Test your value.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Accuracy\n", + "[lab7:accuracy]:(#Accuracy)\n", + "\n", + "A strong method to determine the accuracy of a scheme is to compare the\n", + "numerical solution to an analytic solution. The equations we are\n", + "considering are wave equations and so we will compare the properties of\n", + "the waves. Wave properties are determined by the dispersion relation and\n", + "we will compare the *numerical dispersion relation* and the exact, continuous\n", + "*analytic dispersion relation*. Both the time step and the space step\n", + "(and as we’ll see below the grid) affect the accuracy. Here we will only\n", + "consider the effect of the space step. So, consider the numerical\n", + "dispersion relation assuming $dt \\rightarrow 0$ (reproduced here from\n", + "([Continuous Time, Discretized Space Dispersion Relation](#lab7:eq:numerDispSpace)))\n", + "$$\\omega^2 = 4 gH \\frac {\\sin^2(kd/2)}{d^2}$$ \n", + "with the exact, continuous\n", + "dispersion relation ([Analytic Dispersion Relation](#lab7:eq:disp)) $$\\omega^2 = gHk^2$$\n", + "\n", + "We can plot the two dispersion relations as functions of $kd$, The graph\n", + "is shown in Figure [Simple Accuracy](#lab7:fig:simpleaccuracy)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/simple_accuracy.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Simple Accuracy\n", + "
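\n", + "The two curves in this figure are straightforward to recompute (a sketch, assuming only `numpy`; both relations are scaled by $d^2/(gH)$ so that they depend on $kd$ alone):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", + "kd = np.linspace(0.01, np.pi, 6)\n", + "exact = kd**2                       # omega^2 d^2/(g H), continuous relation\n", + "discrete = 4 * np.sin(kd / 2)**2    # omega^2 d^2/(g H), discrete relation\n", + "print(np.round(exact, 3))\n", + "print(np.round(discrete, 3))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "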
\n", + "\n", + "We can see that the accuracy is good for long waves ($k$ small) but for\n", + "short waves, near the limit of the grid resolutions, the discrete\n", + "frequency is too small. As the phase speed is ${\\omega}/{k}$, the phase\n", + "speed is also too small and most worrying, the group speed\n", + "${\\partial \\omega}/\n", + "{\\partial k}$ goes to zero!" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Choosing a Grid\n", + "\n", + "#### No variation in y\n", + "[no-variation-in-y-2.unnumbered]:(#No-variation-in-y)\n", + "\n", + "For the simple case above, there is little choice in grid. Let’s\n", + "consider the more complicated case, $f \\not = 0$. Then $v \\not = 0$ and\n", + "we have to choose where on the grid we wish to put $v$. There are two\n", + "choices:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/simple_grid1.png',width='50%')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/simple_grid2.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Two Simple Grids\n", + "
\n", + "\n", + "For each of these, we can calculate the discrete dispersion relation\n", + "discussed above.\n", + "\n", + "For grid 1\n", + "$$\\omega ^2 = f^2 \\cos^2(\\frac{kd}{2}) + \\frac{4gH \\sin^2(\\frac{kd}{2})}{d^2}$$\n", + "\n", + "### Problem Three\n", + "[lab7:prob:grid2]:(#Problem-Three)\n", + "\n", + "Show that for grid 2\n", + "$$\\omega ^2 = f^2 + \\frac{4gH \\sin^2(\\frac{kd}{2})}{d^2}$$\n", + "\n", + "We can plot the two dispersion relations as a function of $k$ given the\n", + "ratio of $d/R$, where $d$ is the grid size and $R$ is the Rossby radius\n", + "which is given by $$R = \\frac {\\sqrt{gH}}{f}.$$" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "accuracy2d.main(0.5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Four\n", + "\n", + "[lab7:prob:accurate]:(#Problem-Four)\n", + "\n", + "Which grid gives the best accuracy for $d=R/2$?\n", + "Explain in what ways it is more accurate. Consider the accuracy of the frequency, but also the accuracy of the group speed (the gradient of the frequency with respect to wavenumber). Describe what ranges of wavenumber the accuracy is good and what ranges it is less good.\n", + "\n", + "### Problem Five\n", + "\n", + "[lab7:prob:noy]:(#Problem-Five)\n", + "\n", + "Modify *rain.py* to solve equations\n", + "([No variation in y, first eqn](#lab7:sec:firsteq)), ([No variation in y, second eqn](#lab7:sec:secondeq)) and ([No variation in y, third eqn](#lab7:sec:thirdeq)) on the most accurate\n", + "grid." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# End of Lab 7a and Beginning of Lab 7b #" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Full Equations\n", + "[Full-Equations]:(#Full-Equations)\n", + "\n", + "In order to solve the full equations\n", + "([Full Equations, Eqn 1](#lab7:eq:swea)), ([Full Equations, Eqn 2](#lab7:eq:sweb)) and ([Full Equations, Eqn 3](#lab7:eq:swec)) numerically, we need to\n", + "discretize in 3 dimensions, two in space and one in time.\n", + " [Mesinger and Arakawa, 1976](#Ref:MesingerArakawa) introduced [five different spatial discretizations](http://clouds.eos.ubc.ca/~phil/numeric/labs/lab7/lab7_files/images/allgrid.gif).\n", + " \n", + " \n", + "Consider first the most obvious choice an\n", + "unstaggered grid or Arakawa A grid, shown in Figure [Arakawa A Grid](#lab7:fig:gridA). We\n", + "might expect, from the studies above, that an unstaggered grid may not\n", + "be the best choice. The grid A is not two de-coupled grids because of\n", + "weak coupling through the Coriolis force. However, we will see that this\n", + "grid is not as accurate as some of the staggered grids (B, C, D and E)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/grid1.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Arakawa A Grid.\n", + "
\n", + "\n", + "As the problem becomes more complicated, we need to simplify the\n", + "notation; hence define a discretization operator:\n", + "$$(\\delta_x \\alpha)_{(m,n)} \\equiv \\frac 1 {2d} ( \\alpha_{m+1,n} - \\alpha_{m-1,n} )$$ \n", + "\n", + "where $d$ is the grid spacing in both the $x$ and\n", + "$y$ directions. Note that this discretization is the same centered\n", + "difference we have used throughout this lab.\n", + "\n", + "The finite difference approximation to the full shallow water equations\n", + "on the A grid becomes: \n", + "$$\\frac{\\partial u}{\\partial t} = -g \\delta_x h + fv$$\n", + "$$\\frac{\\partial v}{\\partial t} = -g \\delta_y h - fu$$ \n", + "$$\\frac{\\partial h}{\\partial t} = -H (\\delta_x u + \\delta_y v)$$\n", + "As before, consider a centre difference scheme (leap-frog method) for\n", + "the time step as well, so that\n", + "$$\\frac{\\partial u}{\\partial t}(t) = \\frac {u(t+1)-u(t-1)}{2 dt}$$ \n", + "Putting this together with the spatial scheme we have:\n", + "\n", + "(Numerical Scheme: Grid A)\n", + "$$\\frac {u(t+1)-u(t-1)}{2 dt} = -g \\delta_x h(t) + fv(t)$$\n", + "$$\\frac{v(t+1)-v(t-1)}{2 dt} = -g \\delta_y h(t) - fu(t)$$\n", + "$$\\frac{h(t+1)-h(t-1)}{2 dt} = -H (\\delta_x u(t) + \\delta_y v(t))$$ \n", + "Each of these equations can be rearranged to give $u(t+1)$, $v(t+1)$ and\n", + "$h(t+1)$, respectively. Then given the values of the three variables at\n", + "every grid point at two times, ($t$ and $t-1$), these three equations\n", + "allow you to calculate the updated values, the values of the variables\n", + "at $t+1$. Once again, the following questions arise regarding the\n", + "scheme:\n", + "\n", + "- **Is it stable?**\n", + "\n", + "- **Is it accurate?**" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Stability\n", + "\n", + "To determine the stability of the scheme, we need to repeat the analysis\n", + "of section [Stability](#Stability:-the-CFL-condition) for the 2 spatial dimensions used here. The\n", + "first step is to assume a form of the solutions: \n", + "
\n", + "$$\\begin{aligned}\n", + "z_{mnp} &=& {\\cal R}e \\{ {\\cal Z} \\exp [i(kx + \\ell y - \\omega t)] \\}\\\\\n", + "&=& {\\cal R}e \\{ {\\cal Z} \\exp [i(kmd + \\ell n d- \\omega p\\, dt)] \\} \\nonumber\\end{aligned}$$\n", + "
\n", + "where $z$ represents any of $u$, $v$ and $h$ and we have let $x=md$,\n", + "$y=nd$ and $t=p\,dt$. Substitution into ([Numerical Scheme: Grid A](#eq:numericalGridA)) gives three algebraic equations in\n", + "${\cal U}$, ${\cal V}$ and ${\cal H}$: $$\left[\n", + "\begin{array}{ccc} - i \sin(\omega dt)/ dt & - f & i g \sin(kd)/d \\\n", + "f & - i \sin(\omega dt)/ dt & i g \sin(\ell d)/d \\\n", + "i H \sin(kd)/d & i H \sin(\ell d)/d & -i \sin(\omega \, dt)/ dt \\\n", + "\end{array}\n", + "\right]\n", + "\left[\n", + "\begin{array}{c} {\cal U}\\ {\cal V} \\ {\cal H}\\ \n", + "\end{array} \right]\n", + "= 0.$$\n", + "\n", + "Setting the determinant to zero gives the dispersion relation:\n", + "
\n", + "(Full Numerical Dispersion Relation)\n", + "$$\n", + "\\frac {\\sin^2(\\omega\\,dt)}{dt^2} = f^2 + \\frac{gH}{d^2} \\left( \\sin^2(kd) + \\sin^2(\\ell d) \\right)$$\n", + "
\n", + "Still following section [Stability](#Stability:-the-CFL-condition), let\n", + "$\lambda = \exp (i \omega\, dt)$ and let\n", + "$q^2 = f^2 + {gH}/{d^2} \left( \sin^2(kd) + \sin^2(\ell d) \right)$,\n", + "substitution into ([Full Numerical Dispersion Relation](#lab7:eq:full:numDisp)) gives\n", + "$$-(\lambda-1/\lambda)^2 = 4 q^2 dt^2$$ or equation ([Lambda Eqn](#lab7:eq:lambda))\n", + "again. For stability the magnitude of $\lambda$ must be less than or equal to 1, so\n", + "$$1 > q^2 dt^2 = {dt^2} \left(f^2 + {gH}/{d^2} \left( \sin^2(kd) + \sin^2(\ell d) \right)\n", + "\right)$$ The sines take their maximum values at $k=\pi/(2d)$ and\n", + "$\ell=\pi/(2d)$ giving $$dt^2 < \frac{1}{f^2 + 2 gH/d^2}$$ This is the\n", + "CFL condition for the full equations on the Arakawa A grid. Note that\n", + "the most unstable mode moves at $45^\circ$ to the grid." ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "Image(filename='images/fourgrids.png', width='80%')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", + "Figure Four More Grids. Note that E is simply a rotation of grid B by $45^\circ$.\n", + "\n", + "\n", + "### Problem Six\n", + "\n", + "[lab7:prob:stability~2~d]:(#Problem-Six)\n", + "\n", + "Use the interactive example below to investigate stability of the various grids. Calculate the stability for each of grids A, B, and C. Find the dt for stability (to one significant figure) given $f = 1 \times 10^{-4}$s$^{-1}$, $g = 10$ m s$^{-2}$, $H = 4000$ m and $dx = 20$ km. Is it the same for all four grids? Why not? " ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# grid A is grid 1, grid B is grid 2 and grid C is grid 3\n", + "# ngrid is the number of grid points in x and y\n", + "# dt is the time step in seconds\n", + "# T is the time plotted in seconds; 4*3600 is 4 hours\n", + "interactive1.interactive1(grid=3, ngrid=11, dt=150, T=4*3600)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Accuracy\n", + "[accuracy]:(#Accuracy)\n", + "\n", + "To determine the accuracy of the spatial discretization, we will compare\n", + "the *numerical dispersion relation* ([Full Numerical Dispersion Relation](#lab7:eq:full:numDisp)) for\n", + "$dt \rightarrow 0$ $$\omega^2 = f^2 + gH \frac {\sin^2(kd)}{d^2} +\n", + "gH \frac {\sin^2 (\ell d)}{d^2}$$ with the exact, *continuous dispersion\n", + "relation* \n", + "
\n", + "(Full Analytic Dispersion Relation)\n", + "$$\n", + "\\omega^2 = f^2 + gH(k^2+\\ell^2)$$\n", + "
\n", + "\n", + "We can plot the two dispersion relations as functions of $k$ and $\\ell$,\n", + "given the ratio of $d/R$, where $d$ is the grid size and $R$ is the\n", + "Rossby radius defined in the previous section. For example, the exact\n", + "$\\omega$ and its discrete approximation, using Grid A and $d/R = 1/2$,\n", + "can be compared in Figure [Accuracy Comparison](#lab7:fig:accuracy-demo)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/accuracy_demo.png',width='60%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "Figure Accuracy Comparison: A comparison of the exact $\\omega$ and the discrete approximation\n", + "using Grid A and with $d/R=1/2$.\n", + "
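\n", + "As an example of turning such a relation into code, the Grid A relation above can be written in the normalized form $(\omega/f)^2$ used by the `disp_analytic` function that appears later in this lab (a sketch; `kd` and `ld` stand for $kd$ and $\ell d$, and `Rod` for $R/d$):" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", + "def disp_A(kd, ld, Rod=0.5):\n", + "    # Grid A: (omega/f)^2 = 1 + (R/d)^2 (sin^2(kd) + sin^2(ld))\n", + "    Omegaof = 1 + Rod**2 * (np.sin(kd)**2 + np.sin(ld)**2)\n", + "    return Omegaof" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "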
\n", + "\n", + "It is easy to see that the Grid A approximation is not accurate enough.\n", + "There are a number of other possibilities for grids, all of which\n", + "*stagger* the unknowns; that is, different variables are placed at\n", + "different spatial positions as discussed in\n", + "section [Staggered Grids](#Staggered-Grids).\n", + "\n", + "Four other grids, which are known as [Mesinger and Arakawa](#Ref:MesingerArakawa) B, C, D and E\n", + "grids as shown above [Figure Four More Grids](#FigureFourGrids). " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "To work with these grids, we must introduce an averaging operator,\n", + "defined in terms of *half-points* on the grid:\n", + "$$\\overline{\\alpha}_{mn}^{x} = \\frac{\\alpha_{m+\\frac{1}{2},n} + \\alpha_{m-\\frac{1}{2},n}}{2}$$\n", + "and modify the difference operator\n", + "$$(\\delta_{x}\\alpha)_{mn} = \\frac{\\alpha_{m+\\frac{1}{2},n} -\n", + " \\alpha_{m-\\frac{1}{2},n}}{d}$$\n", + "\n", + "### Problem Seven\n", + "\n", + "[lab7:prob:finite~d~ifference~f~orm]:(#Problem-Seven)\n", + "\n", + "A. For grid B, write down the finite difference form of the shallow\n", + " water equations.\n", + "\n", + "B. For grid C, write down the finite difference form of the shallow\n", + " water equations.\n", + "\n", + "C. For grid D, write down the finite difference form of the shallow\n", + " water equations.\n", + "\n", + "The dispersion relation for each grid can be found in a manner analgous\n", + "to that for the A grid. For the B grid the dispersion relation is\n", + "$$\\left( \\frac {\\omega}{f} \\right)^2\n", + "= 1 + 4 \\left( \\frac {R}{d} \\right)^2 \\left( \\sin^2 \\frac {k d}{2}\\cos^2 \\frac{\\ell d}{2} + \\cos^2 \\frac {k d}{2}\\sin^2 \\frac{\\ell d}{2} \\right)$$\n", + "and for the C grid it is $$\\left( \\frac {\\omega}{f} \\right)^2\n", + "= \\cos^2 \\frac {k d}{2} \\cos^2 \\frac{\\ell d}{2} + 4 \\left( \\frac {R}{d} \\right)^2 \\left( \\sin^2 \\frac {k d}{2} + \\sin^2 \\frac {\\ell d}{2} \\right)$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Eight\n", + "[lab7:prob:disp~D~]:(#Problem-Eight)\n", + "\n", + "Find the dispersion relation for the D grid.\n", + "\n", + "In the interactive exercise below, you will enter the dispersion for\n", + "each of the grids. Study each plot carefully for accuracy of phase and\n", + "group speed." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "def disp_analytic(kd, ld, Rod=0.5):\n", + " Omegaof = 1 + Rod**2 * (kd**2 + ld**2)\n", + " return Omegaof\n", + "# define disp_A, disp_B, disp_C here and run the cell" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "# replace the second disp_analytic with one of your numerical dispersion functions, e.g. disp_A\n", + "dispersion_2d.dispersion_2d(disp_analytic, disp_analytic, Rod=0.5)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Nine\n", + "[lab7:prob:accuracy]:(#Problem-Nine)\n", + "\n", + "A. For $R/d = 2$ which grid gives the most accurate solution? As well as the closeness of fit of $\\omega/f$, also consider the group speed (gradient of the curve). The group speed is the speed at which wave energy propagates. \n", + "\n", + "B. For $R/d = 0.2$ which grid gives the most accurate solution? As well as the closeness of fit of $\\omega/f$, also consider the group speed (gradient of the curve). 
The group speed is the speed at which wave energy propagates. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Details\n", + "\n", + "### Starting the Simulation Full Equations \n", + "\n", + "[lab7start]:(#Starting-the-Simulation:-Full-Equations)\n", + "\n", + "The *leap-frog scheme* requires values of $u$, $v$ and $h$ at step 1 as\n", + "well as at step 0. However, the initial conditions only provide starting\n", + "values at step 0, so we must find some other way to obtain values at\n", + "step 1. There are various methods of obtaining the second set of\n", + "starting values, and the code used in this laboratory uses a\n", + "*predictor/corrector method* to obtain values at step 1 from the values\n", + "at step 0. For the simple equations this process was discussed in\n", + "section [Predictor-Corrector to Start](#Predictor-Corrector-to-Start). For the full equations the procedure goes\n", + "as follows:\n", + "\n", + "- the solution at step $1$ is predicted using a *forward Euler step*:\n", + " $$\\begin{array}{l}\n", + " u(1) = u(0) + dt (f v(0)-g \\delta_x h(0))\\\\\n", + " v(1) = v(0) + dt (-f u(0)-g \\delta_y h(0))\\\\\n", + " h(1) = h(0) + dt (-H(\\delta_x u(0) + \\delta_y v(0))) \n", + " \\end{array}$$\n", + "\n", + "- then, step $\\frac{1}{2}$ is estimated by averaging step $0$ and the\n", + " predicted step $1$: $$\\begin{array}{l} \n", + " u(1/2) = 1/2 (u(0)+u(1)) \\\\\n", + " v(1/2) = 1/2 (v(0)+v(1)) \\\\\n", + " h(1/2) = 1/2 (h(0)+h(1)) \n", + " \\end{array}$$\n", + "\n", + "- finally, the step $1$ approximation is corrected using leap frog\n", + " from $0$ to $1$ (here, we use only a half time-step\n", + " $\\frac{1}{2} dt$): $$\\begin{array}{l}\n", + " u(1) = u(0) + dt (f v(1/2)-g\\delta_x h(1/2)) \\\\\n", + " v(1) = v(0) + dt(-f u(1/2)-g\\delta_y h(1/2)) \\\\\n", + " h(1) = h(0) + dt(-H (\\delta_x u(1/2)+\\delta_y v(1/2))) \n", + " \\end{array}$$\n", + "\n", + "### Initialization\n", + "\n", + "\n", + "The initial conditions used for the stability demo for the full\n", + "equations are Poincare waves as described in the physical example in\n", + "Section [Physical Example, Poincaré Waves](#Physical-Example,-Poincar%C3%A9-Waves).\n", + "\n", + "Assuming a surface height elevation $$h = \\cos (kx+\\ell y)$$ \n", + "equations\n", + "([Full Eqns, Eqn 1](#lab7:eq:swea),[Full Eqns, Eqn 2](#lab7:eq:sweb)) give \n", + "$$\\begin{aligned}\n", + " u &=& \\frac {-1} {H(k^2+\\ell^2)} \\, \\left( k \\omega \\cos(kx+\\ell\n", + " y)+f \\ell \\sin(kx+\\ell y) \\right) \\nonumber \\\\\n", + " v &=& \\frac 1 {H(k^2+\\ell^2)} \\, \\left( -\\ell \\omega \\cos(kx+\\ell\n", + " y)+f k \\sin(kx+\\ell y)\\right) \\nonumber \\end{aligned}$$ \n", + "where\n", + "$\\ell$ and $k$ are selected by the user. It is assumed $g = 9.8$m/s$^2$,\n", + "$H = 400$m and $f = 10^{-4}$/s. The value of the frequency $\\omega$ is\n", + "given by the dispersion relation, ([Full Analytic Dispersion Relation](#lab7:eq:full_disp))." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Boundary Conditions\n", + "\n", + "The boundary conditions used for the stability demo for the full\n", + "equations are *“periodic”* in both x and y. Anything which propagates\n", + "out the left hand side comes in the right hand side, *etc*. 
This\n", + "condition forces a periodicity on the flow, the wavelengths of the\n", + "simulated waves must be sized so that an integral number of waves fits\n", + "in the domain.\n", + "\n", + "Specifically, for a $m \\times n$ grid, the boundary conditions for the\n", + "variable $h$ along the right and left boundaries are \n", + "$$\\begin{aligned}\n", + " h(i=1,j) &=& h(i=m-1,j)\\,\\,\\, {\\rm for}\\,\\,\\, j=2 \\,\\, {\\rm to} \\,\\,n-1\\\\ \\nonumber\n", + " h(i=m,j) &=& h(i=2,j) \\,\\,\\, {\\rm for}\\,\\,\\, j=2 \\,\\, {\\rm to} \\,\\,n-1\n", + " \\end{aligned}$$ and along the top and bottom boundaries\n", + "$$\\begin{aligned}\n", + " h(i,j=1) &=& h(i,j=n-1)\\,\\,\\, {\\rm for} \\,\\,\\, i=2 \\,\\, {\\rm to} \\,\\,m-1\\\\ \\nonumber\n", + " h(i,j=n) &=& h(i,j=2) \\,\\,\\, {\\rm for} \\,\\,\\, i=2 \\,\\, {\\rm to} \\,\\,m-1 .\n", + " \\end{aligned}$$ The conditions for $u$ and $v$ are identical." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Computational Mode \n", + "[lab7computational_mode]:(#Computational-Mode)\n", + "\n", + "In section [Stability](#Stability) it was determined that there are four\n", + "oscillatory modes. Consider letting $dt \\rightarrow 0$ in\n", + "([Lambda Eqn](#lab7:eq:lambda)). Two of the roots give $+1$ and two roots give $-1$.\n", + "\n", + "Consider the variable $u$ at time $p \\, dt$ and one time step later at\n", + "time $(p+1) \\, dt$.\n", + "$$\\frac{u_{m(p+1)}}{u_{mp}} = \\frac {{\\cal R}e \\{ {\\cal U} \\exp [i(kmd - \\omega (p+1)\\, dt)] \\} } {{\\cal R}e \\{ {\\cal U} \\exp [i(kmd - \\omega p\\, dt)] \\} } = \\exp(i \\omega \\, dt) = \\lambda$$\n", + "Thus, $\\lambda$ gives the ratio of the velocity at the next time step to\n", + "its value at the current time step. Therefore, as the time step gets\n", + "very small, physically we expect the system not to change much between\n", + "steps. In the limit of zero time step, the value of $u$ or $h$ should\n", + "not change between steps; that is $\\lambda$ should be $1$.\n", + "\n", + "A value of $\\lambda = -1$ implies $u$ changes sign between each step, no\n", + "matter how small the step. This mode is not a physical mode of\n", + "oscillation, but rather a *computational mode*, which is entirely\n", + "non-physical. It arises from using a centre difference scheme in time\n", + "and not staggering the grid in time. There are schemes that avoid\n", + "introducing such spurious modes (by staggering in time), but we won’t\n", + "discuss them here (for more information, see [Mesinger and Arakawa, 1976](#Ref:MesingerArakawa)\n", + " [Ch. II]). However, the usual practice in geophysical fluid dynamics is\n", + "to use the leap-frog scheme anyway (since it is second order in time)\n", + "and find a way to keep the computational modes “small”, in some sense.\n", + "\n", + "For a reasonably small value of $dt$, the computational modes have\n", + "$\\lambda \\approx -1$. Therefore, these modes can be eliminated almost\n", + "completely by averaging two adjacent time steps. To understand why this\n", + "is so, think of a computational mode $\\hat{u}_{mp}$ at time level $p$,\n", + "which is added to its counterpart,\n", + "$\\hat{u}_{m(p+1)} \\approx -\\hat{u}_{mp}$ at the next time step: *their\n", + "sum is approximately zero!* For the code in this lab, it is adequate to\n", + "average the solution in this fashion only every 101 time steps (though\n", + "larger models may need to be averaged more often). 
After the averaging\n", + "is performed, the code must be restarted; see [Section on Starting](#Starting-the-Simulation-Full-Equations)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Glossary \n", + "\n", + "**Poincare waves:** These are waves that obey the dispersion relation $\omega ^2=f^2+k^2 c^2$, where $c$ is the wave speed, $k$ is the magnitude of the wavenumber vector, $f$ is the Coriolis frequency, and $\omega$ is the wave frequency.\n", + "\n", + "**CFL condition:** named after Courant, Friedrichs and Lewy, who first derived the relationship. This is a stability condition for finite difference schemes (for propagation problems) that corresponds physically to the idea that the continuous domain of dependence must contain the corresponding domain of dependence for the discrete problem.\n", + "\n", + "**dispersive wave:** Any wave whose speed varies with the wavenumber. As a consequence, waves of different wavelengths that start at the same location will move away at different speeds, and hence will spread out, or *disperse*.\n", + "\n", + "**dispersion relation:** For dispersive waves, this relation links the frequency (and hence also the phase speed) to the wavenumber, for a given wave. See also, *dispersive wave*.\n", + "\n", + "**staggered grid:** This refers to a problem with several unknown functions, where the discrete unknowns are not located at the same grid points; rather, they are *staggered* from each other. For example, it is often the case that in addition to the grid points themselves, some unknowns will be placed at the center of grid cells, or at the center of the sides of the grid cells." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "**leap-frog scheme:** This term is used to refer to time discretization schemes that use the centered difference formula to discretize the first time derivative in PDE problems. The resulting difference scheme relates the solution at the next time step to the solution *two* time steps previous. Hence, the even- and odd-numbered time steps are linked together, with the resulting computation performed in a *leap-frog* fashion.\n", + "\n", + "**periodic boundary conditions:** Spatial boundary conditions where the value of the solution at one end of the domain is required to be equal to the value on the other end (compare to Dirichlet boundary values, where the solution at both ends is fixed at a specific value or values). This enforces periodicity on the solution, and in terms of a fluid flow problem, these conditions can be thought of more intuitively as requiring that any flow out of the one boundary must return through the corresponding boundary on the other side.\n", + "\n", + "**computational mode:** When performing a modal analysis of a numerical scheme, this is a mode in the solution that does not correspond to any of the \"real\" (or physical) modes in the continuous problem. 
It is an artifact of the discretization process only, and can sometimes lead to spurious computational results (for example, with the leap-frog time stepping scheme)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "\n", + "\n", + "## References\n", + "\n", + "\n", + "Cushman-Roisin, B., 1994: Introduction to Geophysical Fluid Dynamics, Prentice Hall.\n", + "\n", + "\n", + "Gill, A.E., 1982: Atmosphere-Ocean Dynamics, Academic Press, Vol. 30 International Geophysics Series, New York. \n", + "\n", + "\n", + "Mesinger, F. and A. Arakawa, 1976: Numerical Methods Used in Atmospheric Models,GARP Publications Series No.~17, Global Atmospheric Research Programme.\n", + "\n", + "\n", + "Pond, G.S. and G.L. Pickard, 1983: Introductory Dynamic Oceanography, Pergamon, Great Britain, 2nd Edition.\n", + "\n", + "\n", + "Press, W.H., S.A. Teukolsky, W.T. Vetterling and B.P. Flannery, 1992: Numerical Recipes in FORTRAN: The Art of Scientific Computing, Cambridge University Press, Cambridge, 2n Edition.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.13" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": { + "height": "calc(100% - 180px)", + "left": "10px", + "top": "150px", + "width": "255.625px" + }, + "toc_section_display": "block", + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab8/01-lab8.html b/notebooks/lab8/01-lab8.html new file mode 100644 index 0000000..df8d96d --- /dev/null +++ b/notebooks/lab8/01-lab8.html @@ -0,0 +1,1204 @@ + + + + + + + + Laboratory 8: Solution of the Quasi-geostrophic Equations — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +

Laboratory 8: Solution of the Quasi-geostrophic Equations

+

Lin Yang & John M. Stockie

+
+

List of Problems

+
    +
  • Problem 1: Discretization of the Jacobian term
  • Problem 2: Numerical instability in the “straightforward” Jacobian
  • Problem 3: Implement the SOR relaxation
  • Problem 4: No-slip boundary conditions
  • Problem 5: Starting values for the time integration
  • Problem 6: Duplication of classical results
+
+
+

Goals

+

This lab is an introduction to the use of implicit schemes for the solution of PDE’s, using as an example the quasi-geostrophic equations that govern the large-scale circulation of the oceans.

+

You will see that the discretization of the governing equations leads to a large, sparse system of linear equations. The resulting matrix problem is solved with relaxation methods, one of which you will write the code for, by modifying the simpler Jacobi relaxation. There are two types of boundary conditions typically used for this problem, one of which you will program yourself – your computations are easily compared to previously-obtained “classical” results.

+
+
+

Learning Objectives

+

After reading and working through this lab you will be able to:

  • Explain one reason why one may need to solve a large system of linear equations even though the underlying method is explicit
  • Describe the relaxation method
  • Rescale a partial-differential equation
  • Write down the center difference approximation for the Laplacian operator
  • Describe what a ghost point is

+
+
+

Readings

+

There are no required readings for this lab. If you would like some additional background beyond the material in the lab itself, then you may refer to the references listed below:

+
    +
  • Equations of motion:
      • Pedlosky Sections 4.6 & 4.11 (derivation of QG equations)
  • Nonlinear instability:
      • Mesinger & Arakawa (classic paper with description of instability and aliasing)
      • Arakawa & Lamb (non-linear instability in the QG equations, with the Arakawa-Jacobian)
  • Numerical methods:
      • Strang (analysis of implicit schemes)
      • McCalpin (QGbox model)
  • Classical numerical results:
      • Veronis (numerical results)
      • Bryan (numerical results)

    • +
    +
  • +
+
+
[ ]:
+
+
+
import context
+from IPython.display import Image
+# import the quiz script
+from numlabs.lab8 import quiz8 as quiz
+
+
+
+
+
+

Introduction

+

An important aspect in the study of large-scale circulation in the ocean is the response of the ocean to wind stress. Solution of this problem using the full Navier-Stokes equations is quite complicated, and it is natural to look for some way to simplify the governing equations. A common simplification in many models of large-scale, wind-driven ocean circulation, is to assume a system that is homogeneous and barotropic.

+

It is now natural to ask:

+
+

Does the simplified model capture the important dynamics in the real ocean?

+
+

This question can be investigated by solving the equations numerically, and comparing the results to observations in the real ocean. Many numerical results are already available, and so the purpose of this lab is to introduce you to the numerical methods used to solve this problem, and to compare the computed results to those from some classical papers on numerical ocean simulations.

+

Some of the numerical details (in Sections Right Hand Side, Boundary Conditions, Matrix Form of Discrete Equations, Solution of the Poisson Equation by Relaxation and the appendices) are quite technical, and may be passed over the first time you read through the lab. You can get a general idea of the basic solution procedure without them. However, you should return to them later and understand the material contained in them, since these sections contain techniques that are commonly encountered when solving PDE’s, and an understanding of these sections is required for you to answer the problems in the Lab.

+
+
+

The Quasi-Geostrophic Model

+

Consider a rectangular ocean with a flat bottom, as pictured in Figure Model Ocean, and ignore curvature effects by confining the region of interest to a mid-latitude \(\beta\)-plane.

+
+
[ ]:
+
+
+
Image(filename='images/rect.png',width='45%')
+
+
+
+

Figure Model Ocean The rectangular ocean with flat bottom, ignoring curvature effects.

+

More information on what is a \(\beta\)-plane and on the neglect of curvature terms in the \(\beta\)-plane approximation is given in the appendix.

+

If we assume that the ocean is homogeneous (it has constant density throughout), then the equations governing the fluid motion on the \(\beta\)-plane are:

+

(X-Momentum Eqn)

+

\begin{equation} \frac{\partial u}{\partial t} + u \frac {\partial u}{\partial x} + v \frac {\partial u}{\partial y} + w \frac{\partial u}{\partial z} - fv = - \, \frac{1}{\rho} \, \frac {\partial p}{\partial x} + A_v \, \frac{\partial^2 u}{\partial z^2} + A_h \, \nabla^2 u \end{equation}

+

(Y-Momentum Eqn)

+

\begin{equation} \frac{\partial v}{\partial t} + u \frac{\partial v}{\partial x} + v \frac{\partial v}{\partial y} + w \frac{\partial v}{\partial z} + fu = - \, \frac{1}{\rho} \, \frac{\partial p}{\partial y} + A_v \, \frac{\partial^2 v}{\partial z^2} + A_h \, \nabla^2 v \end{equation}

+

(Hydrostatic Eqn)

+

\begin{equation} \frac{\partial p}{\partial z} = - \rho g \end{equation}

+

(Continuity Eqn)

+

\begin{equation} \frac {\partial u}{\partial x} + \frac{\partial v}{\partial y} = - \, \frac{\partial w}{\partial z} \end{equation}

+

where

+
    +
  • (X-Momentum Eqn) and (Y-Momentum Eqn) are the lateral momentum equations,

  • +
  • (Hydrostatic Eqn) is the hydrostatic balance (and replaces the vertical momentum equation), and

  • +
  • (Continuity Eqn) is the continuity (or incompressibility or conservation of volume) condition.

  • +
+

The variables and parameters appearing above are:

+
    +
  • \((u,v,w)\), the fluid velocity components;

  • +
  • \(f(y)\), the Coriolis parameter (assumed to be a linear function of \(y\));

  • +
  • \(\rho\), the density (assumed constant for a homogeneous fluid);

  • +
  • \(A_v\) and \(A_h\), the vertical and horizontal coefficients of viscosity, respectively (constants);

  • +
  • \(g\), the gravitational acceleration (constant).

  • +
+

Equations (X-Momentum Eqn), (Y-Momentum Eqn), (Hydrostatic Eqn) and (Continuity Eqn) form a non-linear system of PDE’s, for which there are many numerical methods available. However, due to the complexity of the equations, the methods themselves are very complex, and consume a large amount of CPU time. It is therefore advantageous for us to reduce the equations to a simpler form, for which common and more efficient numerical solution techniques can be used.

+

By applying a sequence of physically-motivated approximations (see Appendix Simplification of the QG Model Equations) and by using the boundary conditions, the system (X-Momentum Eqn), (Y-Momentum Eqn), (Hydrostatic Eqn) and (Continuity Eqn) can be reduced to a single PDE:

+

(Quasi-Geostrophic Eqn)

+
+\[\frac{\partial}{\partial t} \, \nabla_h^2 \psi + {\cal J} \left( \psi, \nabla_h^2 \psi \right) + \beta \, \frac {\partial \psi}{\partial x} = \frac{1}{\rho H} \, \nabla_h \times \tau - \kappa \, \nabla_h^2 \psi + A_h \, \nabla_h^4 \psi\]
+

where

+
    +
  • \(\psi\) is the stream function, defined by

    +
    +\[u = - \frac{\partial \psi}{\partial y},\]
    +
    +\[v = \frac{\partial \psi}{\partial x}\]
    +
  • +
  • +\[\nabla_h = \left(\frac{\partial}{\partial x},\frac{\partial}{\partial y}\right)\]
    +

    is the “horizontal” gradient operator, so-called because it involves only derivatives in \(x\) and \(y\);

    +
  • +
  • +

    +
    +
    +
+\[{\cal J} (a,b) = \frac{\partial a}{\partial x} \frac{\partial b}{\partial y} - \frac{\partial a}{\partial y} \frac{\partial b}{\partial x}\]
    +
    +
    +
    is the Jacobian operator;
    +
    +
  • +
  • \(\vec{\tau}(x,y) = \left(\,\tau_1(x,y),\tau_2(x,y)\,\right)\) is the wind stress boundary condition at the surface \(z=0\). A simple form of the wind stress might assume an ocean “box” that extends from near the equator to a latitude of about \(60^\circ\), for which typical winds are easterly near the equator and turn westerly at middle latitudes. A simple function describing this is

    +
    +\[\vec{\tau} = \tau_{max} (-\cos y, 0),\]
    +

    which is what we will use in this lab.

    +

    More complicated wind stress functions are possible. See McCalpin’s QGBOX documentation [p. 24] for another example.

    +
  • +
+
    +
  • \(\beta = df/dy\) is a constant, where \(f(y) = f_0+\beta y\) (see Appendix Definition of the Beta-plane);

  • +
  • \(\kappa = {1}/{H} \left[ (A_v f_0)/{2} \right]^{1/2}\) is the bottom friction scaling (constant); and

  • +
  • \(H\) is the vertical depth of the water column (constant).

  • +
+

Notice that the original (second order) system of four equations in four unknowns (\(u\), \(v\), \(w\), \(p\)) has now been reduced to a single (fourth order) PDE in one unknown function, \(\psi\). It will become clear in the next section just how much simpler the system of equations has become …

+

Before going on, though, we need to close the system with the boundary conditions for the stream function \(\psi\). We must actually consider two cases, based on whether or not the lateral eddy viscosity parameter, \(A_h\), is zero:

+
    +
  • if \(A_h=0\): the boundary conditions are free-slip; that is, \(\psi=0\) on the boundary.

  • +
  • if \(A_h \neq 0\): the boundary conditions are no-slip; that is, both \(\psi\) and its normal derivative \(\nabla\psi\cdot\hat{n}\) are zero on the boundary (where \(\hat{n}\) is the normal vector to the boundary).

  • +
+
+

Scaling the Equations of Motion

+

In physical problems, it is not uncommon for some of the quantities of interest to differ in size by many orders of magnitude. This is of particular concern when computing numerical solutions to such problems, since then round-off errors can begin to pollute the computations (see Lab 2).

+

This is also the case for the QG equations, where the parameters have a large variation in size. The QG model parameters, and typical numerical values, are given in Table of Parameters. For such problems it is customary to rescale the variables in such a way that the size differences are minimized.

+

Problem Parameters

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

| Symbol | Name | Range of Magnitude | Units |
|---|---|---|---|
| \(R\) | Earth’s radius | \(6.4 \times 10^6\) | \(m\) |
| \(\Omega\) | Angular frequency for Earth | \(7.27 \times 10^{-5}\) | \(s^{-1}\) |
| \(H\) | Depth of active layer | \(100 \rightarrow 4000\) | \(m\) |
| \(B\) | Length and width of ocean | \(1.0 \rightarrow 5.0 \times 10^6\) | \(m\) |
| \(\rho\) | Density of water | \(10^3\) | \(kg/m^3\) |
| \(A_h\) | Lateral eddy viscosity | \(0\) or \(10^1 \rightarrow 10^4\) | \(m^2/s\) |
| \(A_v\) | Vertical eddy viscosity | \(10^{-4} \rightarrow 10^{-1}\) | \(m^2/s\) |
| \(\tau_{max}\) | Maximum wind stress | \(10^{-2} \rightarrow 1\) | \(kg\, m^{-1} s^{-2}\) |
| \(\theta_0\) | Latitude | \(0 \rightarrow \frac{\pi}{3}\) | – |

    +
  • +
+
+

Derived Quantities

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +

| Symbol | Name | Range of Magnitude | Units |
|---|---|---|---|
| \(\beta\) | \(\beta =2\Omega \cos \theta_0 / R\) | \(1.1 \rightarrow 2.3 \times 10^{-11}\) | \(m^{-1} s^{-1}\) |
| \(f_0\) | \(f_0 = 2 \Omega \sin \theta_0\) | \(0.0 \rightarrow 1.3 \times 10^{-4}\) | \(s^{-1}\) |
| \(U_0\) | Velocity scale = \(\tau_{max}/(\beta\rho H B)\) | \(10^{-5} \rightarrow 10^{-1}\) | \(m s^{-1}\) |
| \(\kappa\) | bottom friction parameter | \(0.0 \rightarrow 10^{-5}\) | \(s^{-1}\) |

+

Non-dimensional Quantities

+ + + + + + + + + + + + + + + + + + + + +

| Symbol / Name | Range of Magnitude for Quantity |
|---|---|
| \(\epsilon\) / Vorticity ratio = \(U_0/(\beta B^2)\) | (computed) |
| \(\frac{\tau_{max}}{\epsilon\beta^2 \rho H B^3}\) | \(10^{-12} \rightarrow 10^{-14}\) |
| \(\frac{\kappa}{\beta B}\) | \(4 \times 10^{-4} \rightarrow 6 \times 10^1\) |
| \(\frac{A_h}{\beta B^3}\) | \(10^{-7} \rightarrow 10^{-4}\) |

+

Table of Parameters

+

Let us go through this scaling process for the evolution equation (Quasi-Geostrophic Eqn) for the stream function, which is reproduced here for easy comparison:

+
+\[\frac{\partial}{\partial t} \nabla^2_h \psi = - \beta \frac{\partial \psi}{\partial x} - {\cal J}(\psi, \nabla_h^2\psi)+ \frac{1}{\rho H} \nabla_h \times \vec{\tau} - \kappa \nabla_h^2 \psi + A_h \nabla_h^4 \psi\]
+

The basic idea is to find typical scales of motion, and then redefine the dependent and independent variables in terms of these scales to obtain dimensionless variables.

+

For example, the basin width and length, \(B\), can be used as a scale for the independent variables \(x\) and \(y\). Then, we define dimensionless variables

+

(x-scale eqn)

+
+\[x^\prime = \frac{x}{B}\]
+

(y-scale eqn)

+
+\[y^\prime = \frac{y}{B}\]
+

Notice that where \(x\) and \(y\) varied between 0 and \(B\) (where \(B\) could be on the order of hundreds of kilometres), the new variables \(x^\prime\) and \(y^\prime\) now vary between 0 and 1 (and so the ocean is now a unit square).

+

Similarly, we can redefine the remaining variables in the problem as

+

(t-scale eqn)

+
+\[t^\prime = \frac{t}{\left(\frac{1}{\beta B}\right)}\]
+

(\(\psi\)-scale eqn)

+
+\[\psi^\prime = \frac{\psi}{\epsilon \beta B^3}\]
+

(\(\tau\)-scale eqn)

+
+\[\vec{\tau}^\prime = \frac{\vec{\tau}}{\tau_{max}}\]
+

where the scales have been specially chosen to represent typical sizes of the variables. Here, the parameter \(\epsilon\) is a measure of the ratio between the “relative vorticity” (\(\max|\nabla_h^2 \psi|\)) and the planetary vorticity (given by \(\beta B\)).

+

Now, we need only substitute for the original variables in the equations, and replace derivatives with their dimensionless counterparts; for example, using the chain rule,

+
+\[\frac{\partial}{\partial x} = \frac{\partial x^\prime}{\partial x} \, \frac{\partial}{\partial x^\prime}.\]
+

Then the equation of motion becomes

+

(Rescaled Quasi-Geostrophic Eqn)

+
+\[\frac{\partial}{\partial t^\prime} \nabla^{\prime 2}_h \psi^\prime = - \, \frac{\partial \psi^\prime}{\partial x^\prime} - \epsilon {\cal J^\prime}(\psi^\prime, \nabla_h^{\prime 2}\psi^\prime) + \frac{\tau_{max}}{\epsilon \beta^2 \rho H B^3} \nabla^\prime_h \times \vec{\tau}^\prime - \, \frac{\kappa}{\beta B} \nabla_h^{\prime 2} \psi^\prime + \frac{A_h}{\beta B^3} \nabla_h^{\prime 4} \psi^\prime\]
+

The superscript “\(\,^\prime\)” on \(\nabla_h\) and \({\cal J}\) signify that the derivatives are taken with respect to the dimensionless variables. Notice that each term in (Rescaled Quasi-Geostrophic Eqn) is now dimensionless, and that there are now 4 dimensionless combinations of parameters

+
+\[\epsilon, \;\; \frac{\tau_{max}}{\epsilon \beta^2 \rho H B^3}, \;\; \frac{\kappa}{\beta B}, \;\; \mbox{ and} \;\; \frac{A_h}{\beta B^3}.\]
+

These four expressions define four new dimensionless parameters that replace the original (unscaled) parameters in the problem.
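To get a feel for the sizes of some of these combinations, they can be evaluated directly (a sketch using illustrative mid-range values from the Table of Parameters above; the variable names are ours, not from the lab code):

[ ]:

import numpy as np
# illustrative mid-range values (see the Table of Parameters)
Omega, R_earth, theta0 = 7.27e-5, 6.4e6, np.pi / 4
H, B, rho = 1000., 2.0e6, 1.0e3
A_h, A_v, tau_max = 1.0e2, 1.0e-2, 0.1
beta = 2 * Omega * np.cos(theta0) / R_earth
f0 = 2 * Omega * np.sin(theta0)
kappa = np.sqrt(A_v * f0 / 2) / H           # bottom friction parameter
U0 = tau_max / (beta * rho * H * B)         # velocity scale
epsilon = U0 / (beta * B**2)                # vorticity ratio
print(epsilon, kappa / (beta * B), A_h / (beta * B**3))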

+

The terms in the equation now involve the dimensionless stream function, \(\psi^\prime\), and its derivatives, which have been scaled so that they are now of order 1 in magnitude. The differences in sizes between terms in the equation are now embodied solely in the four dimensionless parameters. A term which is multiplied by a small parameter is thus truly small in comparison to the other terms, and hence additive round-off errors will not contribute substantially to a numerical solution based on this form of the equations.

+

For the remainder of this lab, we will use the scaled version of the equations. Consequently, the notation will be simplified by dropping the “primes” on the dimensionless variables. But, do not forget, that any solution (numerical or analytical) from the scaled equations must be converted back into dimensional variables using the scale equations.

+
+
+
+

Discretization of the QG equations

+

At first glance, it is probably not clear how one might discretize the QG equation (Rescaled Quasi-Geostrophic Eqn) from the previous section. This equation is an evolution equation for \(\nabla_h^2 \psi\) (the Laplacian of the stream function) but has a right hand side that depends not only on \(\nabla_h^2 \psi\), but also on \(\psi\) and \(\nabla_h^4 \psi\). The problem may be written in a more suggestive form, by letting \(\chi = \partial\psi/\partial t\). Then, the (Rescaled Quasi-Geostrophic Eqn) becomes

+

(Poisson Eqn)

+
+\[\nabla_h^2 \chi = F(x,y,t),\]
+

where \(F(x,y,t)\) contains all of the terms except the time derivative. We will see that the discrete version of this equation is easily solved for the new unknown variable \(\chi\), after which

+
+\[\frac{\partial\psi}{\partial t} = \chi\]
+

may be used to evolve the stream function in time.

+

The next two sections discuss the spatial and temporal discretization, including some details related to the right hand side, the boundary conditions, and the iterative scheme for solving the large sparse system of equations that arises from the Poisson equation for \(\chi\). Following that is a summary of the steps in the solution procedure.

+
+

Spatial Discretization

+

Assume that we are dealing with a square ocean, with dimensions \(1\times 1\) (in non-dimensional coordinates) and begin by dividing the domain into a grid of discrete points

+
+\[x_i = i \Delta x, \;\;i = 0, 1, 2, \dots, M\]
+
+\[y_j = j \Delta y, \;\;j = 0, 1, 2, \dots, N\]
+

where \(\Delta x = 1/M\) and \(\Delta y = 1/N\). In order to simplify the discrete equations, it will be helpful to assume that \(M=N\), so that \(\Delta x = \Delta y \equiv d\). We can then look for approximate values of the stream function at the discrete points; that is, we look for

+
+\[\Psi_{i,j} \approx \psi(x_i,y_j)\]
+

(and similarly for \(\chi_{i,j}\)). The computational grid and placement of unknowns is pictured in Figure Spatial Grid.

+
+
[ ]:
+
+
+
Image(filename='images/spatial.png',width='45%')
+
+
+
+

Figure Spatial Grid

+

Derivatives are replaced by their centered, second-order finite difference approximations

+
+\[\left. \frac{\partial \Psi}{\partial x} \right|_{i,j} \approx \frac{\Psi_{i+1,j}-\Psi_{i-1,j}}{2d}\]
+
+\[\left. \frac{\partial^2 \Psi}{\partial x^2} \right|_{i,j} \approx \frac{\Psi_{i+1,j} - 2 \Psi_{i,j} + \Psi_{i-1,j}}{d^2}\]
+

and similarly for the \(y\)-derivatives. The discrete analogue of the (Poisson equation), centered at the point \((x_i,y_j)\), may be written as

+
+\[\frac{\chi_{i+1,j} - 2\chi_{i,j} +\chi_{i-1,j}}{d^2} + \frac{\chi_{i,j+1} - 2\chi_{i,j} +\chi_{i,j-1}}{d^2} = F_{i,j}\]
+

or, after rearranging,

+

(Discrete \(\chi\) Eqn)

+
+\[\chi_{i+1,j}+\chi_{i-1,j}+\chi_{i,j+1}+\chi_{i,j-1}-4\chi_{i,j} = d^2F_{i,j}.\]
+

Here, we’ve used \(F_{i,j} = F(x_i,y_j,t)\) as the values of the right hand side function at the discrete points, and said nothing of how to discretize \(F\) (this will be left until Section Right Hand Side). The (Discrete \(\chi\) Eqn) is an equation centered at the grid point \((i,j)\), relating the value of the approximate solution, \(\chi_{i,j}\), at the \((i,j)\) point to the four neighbouring values, as described by the 5-point difference stencil pictured in Figure Stencil.

+
+
[ ]:
+
+
+
Image(filename='images/2diff.png',width='40%')
+
+
+
+

Figure Stencil: The standard 5-point centered difference stencil for the Laplacian (multiply by \(\frac{1}{d^2}\) to get the actual coefficients).

+

These stencil diagrams are a compact way of representing the information contained in finite difference formulas. To see how useful they are, do the following:

+
    +
  • Choose the point on the grid in Figure Spatial Grid at which you want to apply the difference formula (Discrete \(\chi\) Eqn).

  • +
  • Overlay the difference stencil diagram on the grid, placing the center point (with value \(-4\)) on this point.

  • +
  • The numbers in the stencil are the multiples for each of the unknowns \(\chi_{i,j}\) in the difference formula.

  • +
+

An illustration of this is given in Figure Overlay.

+
+
[ ]:
+
+
+
Image(filename='images/2diffgrid.png',width='40%')
+
+
+
+

Figure Overlay: The 5-point difference stencil overlaid on the grid.
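To make the stencil overlay concrete in code, here is a minimal numpy sketch (not part of the lab code; chi is just a placeholder array) that evaluates the left hand side of the (Discrete \(\chi\) Eqn) at every interior grid point at once:

    import numpy as np

    N = 8                                  # number of grid intervals; grid is (N+1) x (N+1)
    d = 1.0 / N                            # grid spacing
    chi = np.random.rand(N + 1, N + 1)     # placeholder values of chi at the grid points

    # chi[i+1,j] + chi[i-1,j] + chi[i,j+1] + chi[i,j-1] - 4*chi[i,j] at all interior points
    lhs = (chi[2:, 1:-1] + chi[:-2, 1:-1] +
           chi[1:-1, 2:] + chi[1:-1, :-2] - 4.0 * chi[1:-1, 1:-1])

    # dividing by d**2 gives the 5-point approximation to the Laplacian of chi
    laplacian = lhs / d**2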

+

Before going any further with the discretization, we need to provide a few more details for the discretization of the right hand side function, \(F(x,y,t)\), and the boundary conditions. If you’d rather skip these for now, and move on to the time discretization (Section Temporal Discretization) or the outline of the solution procedure (Section Outline of Solution Procedure), then you may do so now.

+
+

Right Hand Side

+

The right hand side function for the (Poisson equation) is reproduced here in scaled form (with the “primes” dropped):

+
+\[F(x,y,t) = - \, \frac{\partial \psi}{\partial x} - \epsilon {\cal J}(\psi,\nabla_h^{2}\psi) + \frac{\tau_{max}}{\epsilon \beta^2 \rho H B^3} \nabla_h \times \vec{\tau} - \frac{\kappa}{\beta B} \nabla_h^{2} \psi + \frac{A_h}{\beta B^3} \nabla_h^{4} \psi\]
+

Alternatively, the Coriolis and Jacobian terms can be grouped together as a single term:

+
+\[- \, \frac{\partial\psi}{\partial x} - \epsilon {\cal J}(\psi, \nabla^2_h\psi) = - {\cal J}(\psi, y + \epsilon \nabla^2_h\psi)\]
+

Except for the Jacobian term, straightforward second-order centered differences will suffice for the Coriolis force

+
+\[\frac{\partial\psi}{\partial x} \approx \frac{1}{2d} \left(\Psi_{i+1,j} - \Psi_{i-1,j}\right),\]
+

the wind stress

+
+\[\nabla_h \times \vec{\tau} \approx \frac{1}{2d} \left( \tau_{2_{i+1,j}}-\tau_{2_{i-1,j}} - \tau_{1_{i,j+1}}+\tau_{1_{i,j-1}} \right),\]
+

and the second order viscosity term

+
+\[\nabla_h^2 \psi \approx \frac{1}{d^2} \left( \Psi_{i+1,j}+\Psi_{i-1,j}+\Psi_{i,j+1} + \Psi_{i,j-1} - 4 \Psi_{i,j} \right).\]
+

The higher order (biharmonic) viscosity term, \(\nabla_h^4 \psi\), is slightly more complicated. The difference stencil can be derived in a straightforward way by splitting into \(\nabla_h^2 (\nabla_h^2 \psi)\) and applying the discrete version of the Laplacian operator twice. The resulting difference formula is

+

(Bi-Laplacian)

+
+\[\nabla_h^4 \psi = \nabla_h^2 ( \nabla_h^2 \psi )\]
+

+
+\[\approx \frac{1}{d^4} \left( \Psi_{i+2,j} + \Psi_{i,j+2} + \Psi_{i-2,j} + \Psi_{i,j-2} + 2 \Psi_{i+1,j+1} + 2 \Psi_{i+1,j-1} + 2 \Psi_{i-1,j+1} + 2 \Psi_{i-1,j-1} - 8 \Psi_{i,j+1} - 8 \Psi_{i-1,j} - 8 \Psi_{i,j-1} - 8 \Psi_{i+1,j} + 20 \Psi_{i,j} \right)\]
+

which is pictured in the difference stencil in Figure Bi-Laplacian Stencil.

+
+
[ ]:
+
+
+
Image(filename='images/4diff.png',width='40%')
+
+
+
+

Figure Bi-Laplacian Stencil: 13-point difference stencil for the centered difference formula for \(\nabla_h^4\) (multiply by \(\frac{1}{d^4}\) to get the actual coefficients).
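One way to convince yourself of the 13-point formula is to apply the 5-point Laplacian twice and compare with the stencil coefficients; a small numerical check (a sketch only, ignoring boundary points) is:

    import numpy as np

    def laplacian5(a, d):
        """5-point Laplacian of a at its interior points (one-point border is lost)."""
        return (a[2:, 1:-1] + a[:-2, 1:-1] +
                a[1:-1, 2:] + a[1:-1, :-2] - 4.0 * a[1:-1, 1:-1]) / d**2

    d = 1.0 / 16
    psi = np.random.rand(17, 17)

    # Laplacian applied twice (valid two points away from the edges)
    twice = laplacian5(laplacian5(psi, d), d)

    # the 13-point stencil, evaluated at the same points
    c = psi
    stencil = (c[4:, 2:-2] + c[2:-2, 4:] + c[:-4, 2:-2] + c[2:-2, :-4]
               + 2*c[3:-1, 3:-1] + 2*c[3:-1, 1:-3] + 2*c[1:-3, 3:-1] + 2*c[1:-3, 1:-3]
               - 8*c[2:-2, 3:-1] - 8*c[1:-3, 2:-2] - 8*c[2:-2, 1:-3] - 8*c[3:-1, 2:-2]
               + 20*c[2:-2, 2:-2]) / d**4

    print(np.allclose(twice, stencil))   # expected: True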

+

The final term is the Jacobian term, \({\cal J}(\psi, \nabla^2_h\psi)\), which, as you might have already guessed, is the one that is going to give us the most headaches. To get a feeling for why this might be true, go back to the (Rescaled Quasi-Geostrophic Eqn) and notice that the only nonlinearity arises from this term. Typically, it is the nonlinearity in a problem that leads to difficulties in a numerical scheme. Remember the formula given for \({\cal J}\) in the previous section:

+

(Jacobian: Expansion 1)

+
+\[{\cal J}(a,b) = \frac{\partial a}{\partial x} \, \frac{\partial b}{\partial y} - \frac{\partial a}{\partial y} \, \frac{\partial b}{\partial x}\]
+
+
+

Problem One

+
+

Apply the standard centered difference formula (see Lab 1 if you need to refresh your memory) to get a difference approximation to the Jacobian based on (Jacobian: Expansion 1). You will use this later in Problem Two.

+
+

We’ve seen before that there is usually more than one way to discretize a given expression. This case is no different, and there are many possible ways to derive a discrete version of the Jacobian. Two other approaches are to apply centered differences to the following equivalent forms of the Jacobian:

+

(Jacobian: Expansion 2)

+
+\[{\cal J}(a,b) = \frac{\partial}{\partial x} \, \left( a \frac{\partial b}{\partial y} \right) - + \frac{\partial}{\partial y} \left( a \frac{\partial b}{\partial x} \right)\]
+

(Jacobian: Expansion 3)

+
+\[{\cal J}(a,b) = \frac{\partial}{\partial y} \, \left( b \frac{\partial a}{\partial x} \right) - + \frac{\partial}{\partial x} \left( b \frac{\partial a}{\partial y} \right)\]
+

Each expansion leads to a different discrete formula, and we will see in Section Aliasing Error and Nonlinear Instability what effect the non-linear term has on the discrete approximations and how the use of the different formulas affects the behaviour of the numerical scheme. Before moving on, try to do the following two quizzes.

+
+
+

Quiz on Jacobian Expansion #2

+

Using second order centered differences, what is the discretization of the second form of the Jacobian given by

+
+\[{\cal J}(a,b) = \frac{\partial}{\partial x} \, \left( a \frac{\partial b}{\partial y} \right) - + \frac{\partial}{\partial y} \left( a \frac{\partial b}{\partial x} \right)\]
+
    +
  • A:

    +
    +\[\frac 1 {d^2} \left[ \left( a_{i+1,j} - a_{i-1,j} \right) \left( b_{i,j+1} - b_{i,j-1} \right) - \left( a_{i,j+1} - a_{i,j-1} \right) \left( b_{i+1,j} - b_{i-1,j} \right) \right]\]
    +
  • +
  • B:

    +
    +\[\frac 1 {4d^2} \left[ a_{i+1,j} \left( b_{i+1,j+1} - b_{i+1,j-1} \right) - a_{i-1,j} \left( b_{i-1,j+1} - b_{i-1,j-1} \right) - a_{i,j+1} \left( b_{i+1,j+1} - b_{i-1,j+1} \right) + a_{i,j-1} \left( b_{i+1,j-1} - b_{i-1,j-1} \right) \right]\]
    +
  • +
  • C:

    +
    +\[\frac 1 {d^2} \left[ \left( a_{i+1/2,j} - a_{i-1/2,j} \right) \left( b_{i,j+1/2} - b_{i,j-1/2} \right) - \left( a_{i,j+1/2} - a_{i,j-1/2} \right) \left( b_{i+1/2,j} - b_{i-1/2,j} \right) \right]\]
    +
  • +
  • D:

    +
    +\[\frac 1 {4d^2} \left[ b_{i+1,j} \left( a_{i+1,j+1} - a_{i+1,j-1} \right) - b_{i-1,j} \left( a_{i-1,j+1} - a_{i-1,j-1} \right) - b_{i,j+1} \left( a_{i+1,j+1} - a_{i-1,j+1} \right) + b_{i,j-1} \left( a_{i+1,j-1} - a_{i-1,j-1} \right) \right]\]
    +
  • +
  • E:

    +
    +\[\frac 1 {4d^2} \left[ a_{i+1,j+1} \left( b_{i+1,j+1} - b_{i+1,j-1} \right) - a_{i-1,j-1} \left( b_{i-1,j+1} - b_{i-1,j-1} \right) - a_{i+1,j+1} \left( b_{i+1,j+1} - b_{i-1,j+1} \right) + a_{i-1,j-1} \left( b_{i+1,j-1} - b_{i-1,j-1} \right) \right]\]
    +
  • +
  • F:

    +
    +\[\frac 1 {4d^2} \left[ \left( a_{i+1,j} - a_{i-1,j} \right) \left( b_{i,j+1} - b_{i,j-1} \right) - \left( a_{i,j+1} - a_{i,j-1} \right) \left( b_{i+1,j} - b_{i-1,j} \right) \right]\]
    +
  • +
  • G:

    +
    +\[\frac 1 {4d^2} \left[ b_{i,j+1} \left( a_{i+1,j+1} - a_{i-1,j+1} \right) - b_{i,j-1} \left( a_{i+1,j-1} - a_{i-1,j-1} \right) - b_{i+1,j} \left( a_{i+1,j+1} - a_{i+1,j-1} \right) + b_{i-1,j} \left( a_{i-1,j+1} - a_{i-1,j-1} \right) \right]\]
    +
  • +
+

In the following, replace ‘x’ by ‘A’, ‘B’, ‘C’, ‘D’, ‘E’, ‘F’, ‘G’, or ‘Hint’

+
+
[ ]:
+
+
+
print (quiz.jacobian_2(answer = 'x'))
+
+
+
+
+
+

Quiz on Jacobian Expansion #3

+

Using second order centered differences, what is the discretization of the third form of the Jacobian given by

+
+\[{\cal J}(a,b) = \frac{\partial}{\partial y} \, \left( b \frac{\partial a}{\partial x} \right) - + \frac{\partial}{\partial x} \left( b \frac{\partial a}{\partial y} \right)\]
+
    +
  • A:

    +
    +\[\frac 1 {d^2} \left[ \left( a_{i+1,j} - a_{i-1,j} \right) \left( b_{i,j+1} - b_{i,j-1} \right) - \left( a_{i,j+1} - a_{i,j-1} \right) \left( b_{i+1,j} - b_{i-1,j} \right) \right]\]
    +
  • +
  • B:

    +
    +\[\frac 1 {4d^2} \left[ a_{i+1,j} \left( b_{i+1,j+1} - b_{i+1,j-1} \right) - a_{i-1,j} \left( b_{i-1,j+1} - b_{i-1,j-1} \right) - a_{i,j+1} \left( b_{i+1,j+1} - b_{i-1,j+1} \right) + a_{i,j-1} \left( b_{i+1,j-1} - b_{i-1,j-1} \right) \right]\]
    +
  • +
  • C:

    +
    +\[\frac 1 {d^2} \left[ \left( a_{i+1/2,j} - a_{i-1/2,j} \right) \left( b_{i,j+1/2} - b_{i,j-1/2} \right) - \left( a_{i,j+1/2} - a_{i,j-1/2} \right) \left( b_{i+1/2,j} - b_{i-1/2,j} \right) \right]\]
    +
  • +
  • D:

    +
    +\[\frac 1 {4d^2} \left[ b_{i+1,j} \left( a_{i+1,j+1} - a_{i+1,j-1} \right) - b_{i-1,j} \left( a_{i-1,j+1} - a_{i-1,j-1} \right) - b_{i,j+1} \left( a_{i+1,j+1} - a_{i-1,j+1} \right) + b_{i,j-1} \left( a_{i+1,j-1} - a_{i-1,j-1} \right) \right]\]
    +
  • +
  • E:

    +
    +\[\frac 1 {4d^2} \left[ a_{i+1,j+1} \left( b_{i+1,j+1} - b_{i+1,j-1} \right) - a_{i-1,j-1} \left( b_{i-1,j+1} - b_{i-1,j-1} \right) - a_{i+1,j+1} \left( b_{i+1,j+1} - b_{i-1,j+1} \right) + a_{i-1,j-1} \left( b_{i+1,j-1} - b_{i-1,j-1} \right) \right]\]
    +
  • +
  • F:

    +
    +\[\frac 1 {4d^2} \left[ \left( a_{i+1,j} - a_{i-1,j} \right) \left( b_{i,j+1} - b_{i,j-1} \right) - \left( a_{i,j+1} - a_{i,j-1} \right) \left( b_{i+1,j} - b_{i-1,j} \right) \right]\]
    +
  • +
  • G:

    +
    +\[\frac 1 {4d^2} \left[ b_{i,j+1} \left( a_{i+1,j+1} - a_{i-1,j+1} \right) - b_{i,j-1} \left( a_{i+1,j-1} - a_{i-1,j-1} \right) - b_{i+1,j} \left( a_{i+1,j+1} - a_{i+1,j-1} \right) + b_{i-1,j} \left( a_{i-1,j+1} - a_{i-1,j-1} \right) \right]\]
    +
  • +
+

In the following, replace ‘x’ by ‘A’, ‘B’, ‘C’, ‘D’, ‘E’, ‘F’, ‘G’, or ‘Hint’

+
+
[ ]:
+
+
+
print (quiz.jacobian_3(answer = 'x'))
+
+
+
+
+
+

Boundary Conditions

+

One question that arises immediately when applying the difference stencils in Figure Stencil and Figure Bi-Laplacian Stencil is

+
+

What do we do at the boundary, where at least one of the nodes of the difference stencil lies outside of the domain?

+
+

The answer to this question lies with the boundary conditions for \(\chi\) and \(\psi\). We already know the boundary conditions for \(\psi\) from Section The Quasi-Geostrophic Model:

+

Free slip:

+
+

The free slip boundary condition, \(\psi=0\), is applied when \(A_h=0\), which we can differentiate with respect to time to get the identical condition \(\chi=0\). In terms of the discrete unknowns, this translates to the requirement that

+
+\[\Psi_{0,j} = \Psi_{N,j} = \Psi_{i,0} = \Psi_{i,N} = 0 \;\; \mbox{ for} \; i,j = 0,1,\ldots,N,\]
+
+
+

and similarly for \(\chi\). All boundary values for \(\chi\) and \(\Psi\) are known, and so we need only apply the difference stencils at interior points (see Figure Ghost Points). When \(A_h=0\), the high-order viscosity term is not present, and so the only stencil appearing in the discretization is the 5-point formula (the significance of this will become clear when we look at no-slip boundary conditions).

+
+
+
[ ]:
+
+
+
Image(filename='images/ghost3.png',width='50%')
+
+
+
+

Figure Ghost Points: The points on the computational grid, which are classified into interior, real boundary, and ghost boundary points. The 5- and 13-point difference stencils, when overlaid on the grid, demonstrate that only the real boundary values are needed for the free-slip boundary values (when \(A_h=0\)), while ghost points must be introduced for the no-slip conditions (when \(A_h\neq 0\), and the higher order viscosity term is present).

+

No-slip:

+
+

The no-slip conditions appear when \(A_h\neq 0\), and include the free slip conditions \(\psi=\chi=0\) (which we already discussed above), and the normal derivative condition \(\nabla\psi\cdot\hat{n}=0\), which must be satisfied at all boundary points. It is clear that if we apply the standard, second-order centered difference approximation to the first derivative, then the difference stencil extends beyond the boundary of the domain and contains at least one non-existent point! How can we get around this problem?

+
+
+

The most straightforward approach (and the one we will use in this Lab) is to introduce a set of fictitious points or ghost points,

+
+\[\Psi_{-1,j}, \;\; \Psi_{N+1,j}, \;\; \Psi_{i,-1}, \;\; \Psi_{i,N+1}\]
+
+
+

for \(i,j=0,1,2,\ldots,N+1\), which extend one grid space outside of the domain, as shown in Figure Ghost Points. We can then discretize the Neumann condition in a straightforward manner. For example, consider the point \((0,1)\), pictured in Figure No Slip Boundary Condition, at which the discrete version of \(\nabla\psi\cdot\hat{n}=0\) is

+
+\[\frac{1}{2d} ( \Psi_{1,1} - \Psi_{-1,1}, \Psi_{0,2} - \Psi_{0,0} ) \cdot (-1,0) = 0,\]
+
+
+

(where \((-1,0)\) is the unit outward normal vector), which simplifies to

+
+\[\Psi_{-1,1} = \Psi_{1,1}.\]
+
+
+

The same can be done for all the remaining ghost points: the value of \(\Psi\) at a point outside the boundary is given quite simply as the value at the corresponding interior point reflected across the boundary.

+
+
+
[ ]:
+
+
+
Image(filename='images/noslip.png',width='40%')
+
+
+
+

Figure No Slip Boundary Condition: The discrete Neumann boundary conditions are discretized using ghost points. Here, at point \((0,1)\), the unit outward normal vector is \(\hat{n}=(-1,0)\), and the discrete points involved are the four points circled in red. The no-slip condition simply states that \(\Psi_{-1,1}\) is equal to the interior value \(\Psi_{1,1}\).
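A small sketch (illustrative only, not the lab code) of filling the ghost ring by reflection, with psi holding the values at the real grid points \(0,\dots,N\) and psi_g the padded array including ghost points:

    import numpy as np

    N = 8
    psi = np.zeros((N + 1, N + 1))                    # real grid points, boundary values zero
    psi[1:-1, 1:-1] = np.random.rand(N - 1, N - 1)    # placeholder interior values

    # padded array: index 0 is the ghost line i = -1, index N+2 is the ghost line i = N+1
    psi_g = np.zeros((N + 3, N + 3))
    psi_g[1:-1, 1:-1] = psi

    # no-slip condition: ghost value = interior value reflected across the boundary,
    # e.g. Psi_{-1,j} = Psi_{1,j}
    psi_g[0, 1:-1] = psi[1, :]       # below i = 0
    psi_g[-1, 1:-1] = psi[-2, :]     # beyond i = N
    psi_g[1:-1, 0] = psi[:, 1]       # left of j = 0
    psi_g[1:-1, -1] = psi[:, -2]     # beyond j = N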

+

Now, remember that when \(A_h\neq 0\), the \(\nabla_h^4\psi\) term appears in the equations, which is discretized as a 13-point stencil. Looking at Figure Ghost Points, it is easy to see that when the 13-point stencil is applied at points adjacent to the boundary (such as \((N-1,N-1)\) in the Figure) it involves not only real boundary points, but also ghost boundary points (compare this to the 5-point stencil). But, as we just discovered above, the presence of the ghost points in the stencil poses no difficulty, since these values are known in terms of interior values of \(\Psi\) using the boundary conditions.

+

Just as there are many Runge-Kutta schemes, there are many finite difference stencils for the different derivatives. For example, one could use a 5-point, \(\times\)-shaped stencil for \(\nabla^2\psi\). The flexibility of having several second-order stencils is what makes it possible to determine an energy- and enstrophy-conserving scheme for the Jacobian which we do later.

+

A good discussion of boundary conditions is given by McCalpin in his QGBOX code documentation, on page 44.

+
+
+

Matrix Form of the Discrete Equations

+

In order to write the discrete equations  in matrix form, we must first write the unknown values \(\chi_{i,j}\) in vector form. The most obvious way to do this is to traverse the grid (see Figure Spatial Grid), one row at a time, from left to right, leaving out the known, zero boundary values, to obtain the ordering:

+
+\[\vec{\chi} = \left(\chi_{1,1},\chi_{2,1},\dots,\chi_{N-1,1}, \chi_{1,2},\chi_{2,2},\dots,\chi_{N-1,2}, \dots, \chi_{N-1,N-2}, \chi_{1,N-1},\chi_{2,N-1},\dots,\chi_{N-1,N-1}\right)^T\]
+

and similarly for \(\vec{F}\). This ordering of the unknowns results in a matrix of the form given in Figure Matrix.

+
+
[ ]:
+
+
+
Image(filename='images/matrix.png',width='40%')
+
+
+
+

Figure Matrix: The matrix form for the discrete Laplacian. The 5 diagonals (displayed in blue and red) represent the non-zero values in the matrix \(-\) all other values are zero.

+

The diagonals with the 1’s (pictured in red) contain some zero entries due to the boundary condition \(\chi=0\). Notice how similar this matrix appears to the tridiagonal matrix in the Problems from Lab 3, which arose in the discretization of the second derivative in a boundary value problem. The only difference here is that the Laplacian has an additional second derivative with respect to \(y\), which is what adds the additional diagonal entries in the matrix.

+

Before you think about running off and using Gaussian elimination (which was reviewed in Lab 3), think about the size of the matrix you would have to solve. If \(N\) is the number of grid points, then the matrix is size \(N^2\)-by-\(N^2\). Consequently, Gaussian elimination will require on the order of \(N^6\) operations to solve the matrix only once. Even for moderate values of \(N\), this cost can be prohibitively expensive. For example, taking \(N=101\) results in a \(10000\times 10000\) system of linear equations, for which Gaussian elimination will require on the order of \(10000^3=10^{12}\) operations! As mentioned in Lab 3, direct methods are not appropriate for large sparse systems such as this one. A more appropriate choice is an iterative or relaxation scheme, which is the subject of the next section.
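To get a feeling for the size and sparsity of this system, the matrix can be assembled explicitly with scipy (purely for illustration; the relaxation schemes discussed in the next section never form the matrix at all):

    import scipy.sparse as sp

    N = 101                      # number of grid intervals
    m = N - 1                    # interior points in each direction
    I = sp.identity(m, format='csr')

    # 1D second-difference matrix (tridiagonal: -2 on the diagonal, 1 on the off-diagonals)
    T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(m, m), format='csr')

    # 2D discrete Laplacian (times d**2) with the row-by-row ordering of unknowns
    A = sp.kron(I, T) + sp.kron(T, I)

    print(A.shape)   # (10000, 10000)
    print(A.nnz)     # roughly 5 non-zeros per row, out of 10^8 entries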

+
+
+
+

Solution of the Poisson Equation by Relaxation

+

One thing to notice about the matrix in Figure Matrix is that it contains many zeros. Direct methods, such as Gaussian elimination (GE), are so inefficient for this problem because they operate on all of these zero entries (in fact, there are other direct methods that exploit the sparse nature of the matrix to reduce the operation count, but none are as efficient as the methods we will talk about here).

+

However, there is another class of solution methods, called iterative methods (refer to Lab 3), which are natural candidates for solving such sparse problems. They can be motivated by the observation that, since the discrete equations are only approximations of the PDE to begin with, why should we bother computing an exact solution to an approximate problem? Iterative methods are based on the notion that one sets up an iterative procedure to compute successive approximations to the solution \(-\) approximations that get closer and closer to the exact solution as the iteration proceeds, but never actually reach the exact solution. As long as the iteration converges and the approximate solution gets to within some tolerance of the exact solution, then we are happy! The cost of a single iterative step is designed to depend only on the number of non-zero elements in the matrix, and so is considerably cheaper than a GE step. Hence, as long as the iteration converges in a “reasonable number” of steps, the iterative scheme will outperform GE.

+

Iterative methods are also known as relaxation methods, of which the Jacobi method is the simplest. Here are the basic steps in the Jacobi iteration (where we’ve dropped the time superscript \(p\) to simplify the notation):

+
    +
  1. Take an initial guess, \(\chi_{i,j}^{(0)}\). Let \(n=0\).

  2. For each grid point \((i,j)\), compute the residual vector

    +
    +\[R_{i,j}^{(n)} = F_{i,j} - \nabla^2\chi^{(n)}_{i,j}\]
    +
    +\[= F_{i,j} - \frac{1}{d^2} ( \chi_{i+1,j}^{(n)} + \chi_{i,j+1}^{(n)} + \chi_{i-1,j}^{(n)} +\chi_{i,j-1}^{(n)} - 4 \chi_{i,j}^{(n)} )\]
    +

    (which is non-zero unless \(\chi_{i,j}\) is the exact solution).

    +

    You should not confuse the relaxation iteration index (superscript \(\,^{(n)}\)) with the time index (superscript \(\,^p\)). Since the relaxation iteration is being performed at a single time step, we’ve dropped the time superscript for now to simplify notation. Just remember that all of the discrete values in the relaxation are evaluated at the current time level \(p\).

    +
  3. “Adjust” \(\chi_{i,j}^{(n)}\) (leaving the other neighbours unchanged) so that \(R_{i,j}^{(n)}=0\). That is, replace \(\chi_{i,j}^{(n)}\) by whatever you need to get a zero residual. This replacement turns out to be:

    +
    +\[\chi_{i,j}^{(n+1)} = \chi_{i,j}^{(n)} - \frac{d^2}{4} R_{i,j}^{(n)},\]
    +

    which defines the iteration.

    +
  4. Set \(n\leftarrow n+1\), and repeat steps 2 and 3 until the residual is less than some tolerance value. In order to measure the size of the residual, we use a relative maximum norm measure, which says

    +
    +\[d^2 \frac{\|R_{i,j}^{(n)}\|_\infty}{\|\chi_{i,j}^{(n)}\|_\infty} < TOL\]
    +

    where

    +
    +\[\|R_{i,j}^{(n)}\|_\infty = \max_{i,j} |R_{i,j}^{(n)}|\]
    +

    is the max-norm of \(R_{i,j}\), or the maximum value of the residual on the grid (there are other error tolerances we could use but this is one of the simplest and most effective). Using this measure for the error ensures that the residual remains small relative to the solution, \(\chi_{i,j}\). A typical value of the tolerance that might be used is \(TOL=10^{-4}\).

    +
+
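A minimal sketch of the procedure above (illustrative only; F and chi are assumed to be arrays over the full grid, with chi held at zero on the boundary) could look like:

    import numpy as np

    def jacobi(F, d, tol=1.0e-4, max_iter=10000):
        """Jacobi relaxation for the discrete Poisson equation with chi = 0 on the boundary."""
        chi = np.zeros_like(F)
        for n in range(max_iter):
            # step 2: residual at the interior points
            R = F[1:-1, 1:-1] - (chi[2:, 1:-1] + chi[:-2, 1:-1] +
                                 chi[1:-1, 2:] + chi[1:-1, :-2] -
                                 4.0 * chi[1:-1, 1:-1]) / d**2
            # step 3: Jacobi update (for SOR, the correction would be scaled by mu)
            chi_new = chi.copy()
            chi_new[1:-1, 1:-1] = chi[1:-1, 1:-1] - (d**2 / 4.0) * R
            chi = chi_new
            # step 4: relative max-norm stopping test (guarded against a zero solution)
            if d**2 * np.max(np.abs(R)) < tol * max(np.max(np.abs(chi)), 1.0e-15):
                break
        return chi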

There are a few important things to note about the basic relaxation procedure outlined above

+
    +
  • This Jacobi method is the simplest form of relaxation. It requires that you have two storage vectors, one for \(\chi_{i,j}^{(n)}\) and one for \(\chi_{i,j}^{(n+1)}\).

  • +
  • The relaxation can be modified by using a single vector to store the \(\chi\) values. In this case, as you compute the residual vector and update \(\chi\) at each point \((i,j)\), the residual involves some \(\chi\) values from the previous iteration and some that have already been updated. For example, if we traverse the grid by rows (that is, loop over \(j\) first and then \(i\)), then the residual is now given by

    +
    +\[R_{i,j}^{(n)} = F_{i,j} - \frac{1}{d^2} ( \chi_{i+1,j}^{(n)} + \chi_{i,j+1}^{(n)} + \underbrace{\chi_{i-1,j}^{(n+1)} + \chi_{i,j-1}^{(n+1)}}_{\mbox{updated already}} - 4 \chi_{i,j}^{(n)} ),\]
    +

    (where the \((i,j-1)\) and \((i-1,j)\) points have already been updated), and then the solution is updated

    +
    +\[\chi_{i,j}^{(n+1)} = \chi_{i,j}^{(n)} - \frac{d^2}{4} R_{i,j}^{(n)}.\]
    +

    Not only does this relaxation scheme save on storage (since only one solution vector is now required), but it also converges more rapidly (typically it takes half the number of iterations of Jacobi), although this still leaves the cost at the same order as Jacobi, as we can see from the Cost of Schemes Table. This is known as the Gauss-Seidel relaxation scheme.

    +
  • +
  • In practice, we actually use a modification of Gauss-Seidel

    +
    +\[\chi_{i,j}^{(n+1)} = \chi_{i,j}^{(n)} - \frac{\mu d^2}{4} R_{i,j}^{(n)}\]
    +

    where \(1<\mu<2\) is the relaxation parameter. The resulting scheme is called successive over-relaxation, or SOR, and it improves convergence considerably (see the Cost of Schemes Table).

    +

    What happens when \(0<\mu<1\)? Or \(\mu>2\)? The first case is called under-relaxation, and is useful for smoothing the solution in multigrid methods. The second leads to an iteration that never converges.

    +
  • +
  • Does the iteration converge? For the Poisson problem, yes, but not in general.

  • +
+
    +
  • How fast does the iteration converge, and how much does each iteration cost? The answers to these two questions give us an idea of the cost of the relaxation procedure …

    +

    Assume we have a grid of size \(N\times N\). If we used Gaussian elimination to solve this matrix system (with \(N^2\) unknowns), we would need to perform on the order of \(N^6\) operations (you saw this in Lab #3). One can read in any numerical linear algebra textbook (Strang 1988, for example) that the number of iterations required for Gauss-Seidel and Jacobi is on the order of \(N^3\), while for SOR it reduces to \(N^2\). There is another class of iterative methods, called multigrid methods, which converge in a constant number of iterations (the optimal result).

    +

    If you look at the arithmetic operations performed in the relaxation schemes described above, it is clear that a single iteration involves on the order of \(N^2\) operations (a constant number of multiplications for each point).

    +

    Putting this information together, the cost of each iterative scheme can be compared as in Cost of Schemes Table.

    +
  • +
Method                  Order of Cost
--------------------    -------------
Gaussian Elimination    \(N^6\)
Jacobi                  \(N^5\)
Gauss-Seidel            \(N^5\)
SOR                     \(N^4\)
Multigrid               \(N^2\)

+

Cost of Schemes Table: Cost of iterative schemes compared to direct methods.

+
    +
  • Multigrid methods are obviously the best, but are also extremely complicated … we will stick to the much more manageable Jacobi, Gauss-Seidel and SOR schemes.

  • +
  • There are other methods (called conjugate gradient and capacitance matrix methods) which improve on the relaxation methods we’ve seen. These won’t be described here.

  • +
+
+
+

Temporal Discretization

+

Let us now turn to the time evolution equation for the stream function. Supposing that the initial time is \(t=0\), then we can approximate the solution at the discrete time points \(t_p = p\Delta t\), and write the discrete solution as

+
+\[\Psi_{i,j}^p \approx \psi(x_i,y_j,t_p).\]
+

Notice that the spatial indices appear as subscripts and the time index as a superscript on the discrete approximation \(\Psi_{i,j}^p\).

+

We can choose any discretization for time that we like, but for the QG equation, it is customary (see Mesinger and Arakawa, for example) to use a centered time difference to approximate the time derivative in \(\partial\psi/\partial t = \chi\):

+
+\[\frac{\Psi_{i,j}^{p+1} - \Psi_{i,j}^{p-1}}{2\Delta t} = \chi_{i,j}^p\]
+

or, after rearranging,

+

(Leapfrog Eqn)

+
+\[\Psi_{i,j}^{p+1} = \Psi_{i,j}^{p-1} + 2\Delta t \chi_{i,j}^p\]
+

This time differencing method is called the leap frog scheme, and was introduced in Lab 7. A pictorial representation of this scheme is given in Figure Leap-Frog Scheme.

+
+
[ ]:
+
+
+
Image(filename='images/leapfrog.png',width='40%')
+
+
+
+

Figure Leap-Frog Scheme: A pictorial representation of the “leap-frog” character of the time-stepping scheme. The values of \(\chi\) at even time steps are linked together with the odd \(\Psi\) values; likewise, values of \(\chi\) at odd time steps are linked to the even \(\Psi\) values.
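In code, the leap-frog step itself is a single line; a sketch (with placeholder arrays for the two previous time levels and an assumed time step) is:

    import numpy as np

    dt = 1.0e-3                         # non-dimensional time step (placeholder value)
    psi_prev = np.zeros((9, 9))         # Psi at time level p-1 (placeholder)
    psi_curr = np.zeros((9, 9))         # Psi at time level p (placeholder)
    chi_curr = np.random.rand(9, 9)     # chi at time level p (placeholder)

    # leap-frog: Psi^{p+1} = Psi^{p-1} + 2*dt*chi^p
    psi_next = psi_prev + 2.0 * dt * chi_curr

    # shuffle the time levels before the next step
    psi_prev, psi_curr = psi_curr, psi_next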

+

There are two additional considerations related to the time discretization:

+
    +
  • The viscous terms (\(\nabla_h^2\psi\) and \(\nabla_h^4\psi\)) are evaluated at the \(p-1\) time points, while all other terms in the right hand side are evaluated at \(p\) points. The reasoning for this is described in McCalpin’s QGBOX documentation [p. 8]:

    +
    +

    Note that the frictional terms are all calculated at the old \((n-1)\) time level, and are therefore first-order accurate in time. This ’time lagging’ is necessary for linear computational stability.

    +
    +
  • +
  • This second item will not be implemented in this lab or the problems, but should still be kept in mind …

    +

    The leap-frog time-stepping scheme has a disadvantage in that it introduces a “computational mode” …  McCalpin [p. 23] describes this as follows:

    +
    +

    Leap-frog models are plagued by a phenomenon called the “computational mode”, in which the odd and even time levels become independent. Although a variety of sophisticated techniques have been developed for dealing with this problem, McCalpin’s model takes a very simplistic approach. Every narg time steps, adjacent time levels are simply averaged together (where narg \(\approx 100\) and odd).

    +
    +

    Why don’t we just abandon the leap-frog scheme? Well, Mesinger and Arakawa [p. 18] make the following observations regarding the leap-frog scheme:

    +
      +
    • its advantages: simple and second order accurate; neutral within the stability range.

    • +
    • its disadvantages: for non-linear equations, there is a tendency for slow amplification of the computational mode.

    • +
    • the usual method for suppressing the spurious mode is to insert a step from a two-level scheme occasionally (or, as McCalpin suggests, to occasionally average the solutions at successive time steps).

    • +
    • In Chapter 4, they mention that it is possible to construct grids and/or schemes with the same properties as leap-frog and yet the computational mode is absent.

    • +
    +

    The important thing to get from this is that when integrating for long times, the computational mode will grow to the point where it will pollute the solution, unless one of the above methods is implemented. For simplicity, we will not be worrying about this in Lab #8.

    +
  • +
+
+
+

Outline of Solution Procedure

+

Now that we have discretized in both space and time, it is possible to outline the basic solution procedure.

+
    +
  1. Assume that at \(t=t_p\), we know \(\Psi^0, \Psi^1, \dots, \Psi^{p-1}\).

    +
  2. Calculate \(F_{i,j}^p\) for each grid point \((i,j)\) (see Section Right Hand Side). Keep in mind that the viscosity terms (\(\nabla_h^2\psi\) and \(\nabla_h^4\psi\)) are evaluated at time level \(p-1\), while all other terms are evaluated at time level \(p\) (this was discussed in Section Temporal Discretization).

    +
  3. Solve the (Discrete \(\chi\) Eqn) for \(\chi_{i,j}^p\) (the solution method was described in Section Solution of the Poisson Equation by Relaxation).

    +
  4. Given \(\chi_{i,j}^p\), we can find \(\Psi_{i,j}^{p+1}\) by using the (Leapfrog Eqn).

    +
  5. Let \(p \leftarrow p+1\) and return to step 2.
+

Notice that step 1 requires a knowledge of two starting values, \(\Psi^0\) and \(\Psi^1\), at the initial time. An important addition to the procedure above is some way to get two starting values for \(\Psi\). Here are several alternatives:

+
    +
  • Set \(\Psi^0\) and \(\Psi^1\) both to zero.

  • +
  • Set \(\Psi^0=0\), and then use a forward Euler step to find \(\Psi^1\).

  • +
  • Use a predictor-corrector step, like that employed in Lab 7.

  • +
+
+
+

Problem Two

+
+

Now that you’ve seen how the basic numerical scheme works, it’s time to jump into the computation. The code has already been written for the discretization described above, with free-slip boundary conditions and the SOR relaxation scheme. The code is in qg.py and the various functions are:

+
+
+
+
main

the main routine, contains the time-stepping and the output.

+
+
param()

sets the physical parameters of the system.

+
+
+
+
+
+
numer_init()

sets the numerical parameters.

+
+
vis(psi, nx, ny)

calculates the second order (\(\nabla^2\)) viscosity term (not leap-frogged).

+
+
+
+
+
+
wind(psi, nx, ny)

calculates the wind term.

+
+
mybeta(psi, nx, ny)

calculates the beta term

+
+
+
+
+
+
jac(psi, vis, nx, ny)

calculates the Jacobian term (the Arakawa Jacobian is given here).

+
+
chi(psi, vis_curr, vis_prev, chi_prev, nx, ny, dx, r_coeff, tol, max_count, epsilon, wind_par, vis_par)

calculates \(\chi\) using a call to relax

+
+
+
+
+
+
relax(rhs, chi_prev, dx, nx, ny, r_coeff, tol, max_count)

does the relaxation.

+
+
+
+
+

Your task in this problem is to program the “straightforward” discretization of the Jacobian term, using (Jacobian: Expansion 1), that you derived in Problem One. The only change this involves is inserting the code into the function jac. Once finished, run the code. The parameter functions param and numer_init provide some sample parameter values for you to execute the code with. Try these input values and observe what happens in the solution. Choose one of the physical parameters to vary. Does changing the parameter have any effect on the solution? In what way?

+
+
+

Hand in the code for the Jacobian, and a couple of plots demonstrating the solution as a function of parameter variation. Describe your results and make sure to provide parameter values to accompany your explanations and plots.

+

If the solution is unstable, check your CFL condition. The relevant waves are Rossby waves with wave speed:

+
+\[c=\beta k^{-2}\]
+

where \(k\) is the wave-number. The maximum wave speed is for the longest wave, \(k=\pi/b\), where \(b\) is the size of your domain.
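As a rough, illustrative check (assuming a simple one-dimensional CFL-type restriction \(c\,\Delta t/\Delta x \leq 1\) using the fastest Rossby wave; the parameter values below are placeholders, and the exact stability limit depends on the full scheme):

    import numpy as np

    beta = 2.0e-11           # beta parameter [1/(m s)] (placeholder value)
    b = 1.0e6                # domain size [m] (placeholder value)
    dx = b / 100.0           # grid spacing for a 100-interval grid

    k_min = np.pi / b        # wavenumber of the longest wave
    c_max = beta / k_min**2  # fastest Rossby wave speed, c = beta * k**(-2)

    dt_max = dx / c_max      # rough CFL-type bound on the time step
    print(c_max, dt_max)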

+
+
+

If the code is still unstable, even though the CFL condition is satisfied, see Section Aliasing Error and Nonlinear Instability. The solution is nonlinearly unstable. Switch to the Arakawa Jacobian for stability.

+
+
+
+

Problem Three

+
+

The code provided for Problem Two implements the SOR relaxation scheme. Your job in this problem is to modify the relaxation code to perform a Jacobi iteration.

+
+
+

Hand in a comparison of the two methods, in tabular form. Use two different relaxation parameters for the SOR scheme. (Include a list of the physical and numerical parameter you are using). Also submit your code for the Jacobi relaxation scheme.

+
+
+
+

Problem Four

+
+

Modify the code to implement the no-slip boundary conditions.

+
+
+
+

Problem Five

+
+

The code you’ve been working with so far uses the simplest possible type of starting values for the unknown stream function: both are set to zero. If you’re keen to play around with the code, you might want to try modifying the SOR code for the two other methods of computing starting values: using a forward Euler step, or a predictor-corrector step (see Section Outline of Solution Procedure).

+
+
+
+
+

Aliasing Error and Nonlinear Instability

+

In Problem Two, you encountered an example of the instability that can occur when computing numerical solutions to some nonlinear problems, of which the QG equations are just one example. This effect has in fact been known for quite some time. Early numerical experiments by N. Phillips in 1956 exploded after approximately 30 days of integration due to nonlinear instability. He used the straightforward centered difference formula for the Jacobian, as you did.

+

It is important to realize that this instability does not occur in the physical system modeled by the equations of motion. Rather, it is an artifact of the discretization process, something known as aliasing error. Aliasing error can best be understood by thinking in terms of decomposing the solution into modes. In brief, aliasing error arises in non-linear problems when a numerical scheme amplifies the high-wavenumber modes in the solution, which corresponds physically to a spurious addition of energy into the system. Regardless of how much the grid spacing is reduced, the resulting computation will be corrupted, since a significant amount of energy is present in the smallest resolvable scales of motion. This doesn’t happen for every non-linear problem or every difference scheme, but it is an issue that anyone using numerical codes must be aware of.

+
+

Example One

+
+

Before moving on to how we can handle the instability in our discretization of the QG equations, you should try out the following demo on aliasing error. It is taken from an example in Mesinger and Arakawa [p. 35ff.], based on the simplest of non-linear PDE’s, the advection equation:

+
+\[\frac{\partial u}{\partial t}+u\frac{\partial u}{\partial x} = 0.\]
+

If we decompose the solution into Fourier modes, and consider a single mode with wavenumber \(k\),

+
+\[u(x) = \sin{kx}\]
+

then the solution will contain additional modes, due to the non-linear term, and given by

+
+\[u \frac{\partial u}{\partial x} = k \sin{kx} \cos{kx} =\frac{1}{2}k \sin{2kx}.\]
+
+
+

With this as an introduction, keep the following in mind while going through the demo:

+
    +
  • on a computational grid with spacing \(\Delta x\), the discrete versions of the modes can only be resolved up to a maximum wavenumber, \(k_{max}=\frac{\pi}{\Delta x}\).

  • +
+
+
+
    +
  • even if we start with modes that are resolvable on the grid, the non-linear term introduces modes with a higher wavenumber, which it may not be possible to resolve. These modes, when evaluated at discrete points, appear as modes with lower wavenumber; that is, they are aliased to the lower modes (this becomes evident in the demo as the wavenumber is increased, and in the short numerical check after this list).

  • +
  • not only does aliasing occur, but for this problem, these additional modes are amplified by a factor of \(\frac{1}{2}k\). This is the source of the aliasing error – such amplified modes will grow in time, no matter how small the time step taken, and will pollute the computations.

  • +
+
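The aliasing itself is easy to see numerically. In this short check (a sketch, with an arbitrary grid spacing), a mode with wavenumber \(2k > k_{max}\) is evaluated at the grid points and is indistinguishable from a lower-wavenumber mode:

    import numpy as np

    dx = 0.25                     # grid spacing (arbitrary placeholder)
    k_max = np.pi / dx            # largest wavenumber resolvable on this grid
    k = 0.75 * k_max              # a resolvable mode; the nonlinear term produces 2k > k_max
    x = np.arange(0.0, 2.0 * np.pi, dx)

    high = np.sin(2.0 * k * x)                     # the unresolvable mode sin(2kx)
    alias = np.sin((2.0 * k - 2.0 * k_max) * x)    # the lower wavenumber it aliases to

    print(np.allclose(high, alias))                # True: identical at the grid points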
+

The previous example is obviously a much simpler non-linearity than that of the QG equations, but the net effect is the same. The obvious question we might ask now is:

+
+

Can this problem be averted for our discretization of the QG equations with the leap-frog time-stepping scheme, or do we have to abandon it?

+
+

There are several possible solutions, presented by Mesinger and Arakawa [p. 37], summarized here, including

+
    +
  • filter out the top half or third of the wavenumbers, so that the aliasing is eliminated.

  • +
  • use a differencing scheme that has a built-in damping of the shortest wavelengths.

  • +
  • the most elegant approach, and one that allows us to continue using the leap-frog time-stepping scheme, is the one suggested by Arakawa: it aims to eliminate the spurious inflow of energy into the system by developing a discretization of the Jacobian term that satisfies discrete analogues of the conservation properties for average vorticity, enstrophy and kinetic energy.

  • +
+

This third approach will be the one we take here. The details can be found in the Mesinger-Arakawa paper, and are not essential here; the important point is that there is a discretization of the Jacobian that avoids the instability problem arising from aliasing error. This discrete Jacobian is called the Arakawa Jacobian and is obtained by averaging the discrete Jacobians obtained by using standard centered differences on the formulae (Jacobian: Expansion 1), (Jacobian: Expansion 2) and (Jacobian: Expansion 3) (see Problem One and the two quizzes following it in Section Right Hand Side).

+

You will not be required to derive or code the Arakawa Jacobian (the details are messy!), and the code will be provided for you for all the problems following Problem Two.

+
+
+
+

Classical Solutions

+

Bryan (1963) and Veronis (1966)

+
+

Problem Six

+
+

Using the SOR code from Problems Three (free-slip BC’s) and Four (no-slip BC’s), try to reproduce the classical results of Bryan and Veronis.

+
+
+
+
+

Mathematical Notes

+
+

Definition of the Beta-plane

+

A \(\beta\)-plane is a plane approximation of a curved section of the Earth’s surface, where the Coriolis parameter, \(f(y)\), can be written roughly as a linear function of \(y\)

+
+\[f(y) = f_0 + \beta y\]
+

for \(f_0\) and \(\beta\) some constants. The motivation behind this approximation follows.

+
+
[ ]:
+
+
+
Image(filename='images/coriolis.png',width='30%')
+
+
+
+

Figure Rotating Globe: A depiction of the earth and its angular frequency of rotation, \(\Omega\), the local planetary vorticity vector in blue, and the Coriolis parameter, \(f_0\), at a latitude of \(\theta_0\).

+

Consider a globe (the earth) which is rotating with angular frequency \(\Omega\) (see Figure Rotating Globe), and assume that the patch of ocean under consideration is at latitude \(\theta\). The most important component of the Coriolis force is the local vertical one (see Figure Rotating Globe), which is defined in terms of the Coriolis parameter, \(f\), to be

+
+\[f/2 = \Omega \sin\theta.\]
+

This expression may be simplified somewhat by ignoring curvature effects and approximating the earth’s surface at this point by a plane \(-\) if the plane is located near the middle latitudes, then this is a good approximation. If \(\theta_0\) is the latitude at the center point of the plane, and \(R\) is the radius of the earth (see Figure Rotating Globe), then we can apply trigonometric ratios to obtain the following expression on the plane:

+
+\[f = \underbrace{2\Omega\sin\theta_0}_{f_0} + \underbrace{\frac{2\Omega\cos\theta_0}{R}}_\beta \, y\]
+

Not surprisingly, this plane is called a mid-latitude \(\beta\)-plane.
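For concreteness, a small calculation of \(f_0\) and \(\beta\) at \(45^\circ\)N, using commonly quoted values for the Earth's rotation rate and radius (illustrative only):

    import numpy as np

    Omega = 7.292e-5              # Earth's rotation rate [rad/s]
    R = 6.371e6                   # Earth's radius [m]
    theta0 = np.deg2rad(45.0)     # central latitude of the plane

    f0 = 2.0 * Omega * np.sin(theta0)          # about 1.0e-4 1/s
    beta = 2.0 * Omega * np.cos(theta0) / R    # about 1.6e-11 1/(m s)
    print(f0, beta)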

+
+
[ ]:
+
+
+
Image(filename='images/beta-plane.png',width='30%')
+
+
+
+

Figure Beta-plane: The \(\beta\)-plane approximation, with points on the plane located along the local \(y\)-axis. The Coriolis parameter, \(f\), at any latitude \(\theta\), can be written in terms of \(y\) using trigonometric ratios.

+
+
+

Simplification of the QG Model Equations

+

The first approximation we will make eliminates several of the non-linear terms in the set of equations: (X-Momentum Eqn), (Y-Momentum Eqn), (Hydrostatic Eqn) and (Continuity Eqn). A common simplification that is made in this type of flow is the quasi-geostrophic (QG) approximation, where the horizontal pressure gradient and horizontal components of the Coriolis force are matched:

+
+\[fv \approx \frac{1}{\rho} \, \frac{\partial p}{\partial x}, \, \, fu \approx - \, \frac{1}{\rho} \, \frac{\partial p}{\partial y}.\]
+

Remembering that the fluid is homogeneous (the density is constant), (Continuity Eqn) implies

+
+\[\frac{\partial^2 p}{\partial x\partial z} = 0, \, \, \frac{\partial^2 p}{\partial y\partial z} = 0.\]
+

We can then differentiate the QG balance equations to obtain

+
+\[\frac{\partial v}{\partial z} \approx 0, \, \, \frac{\partial u}{\partial z} \approx 0.\]
+

Therefore, the terms \(w \, \partial u/\partial z\) and \(w \, \partial v/\partial z\) can be neglected in (X-Momentum Eqn) and (Y-Momentum Eqn).

+

The next simplification is to eliminate the pressure terms in (X-Momentum Eqn) and (Y-Momentum Eqn) by cross-differentiating. If we define the vorticity

+
+\[\zeta = \partial v/\partial x - \partial u/\partial y\]
+

then we can cross-differentiate the two momentum equations and replace them with a single equation in \(\zeta\):

+
+\[\frac{\partial \zeta}{\partial t} + u \frac{\partial \zeta}{\partial x} + v \frac{\partial \zeta}{\partial y} + v\beta + (\zeta+f)\left(\frac{\partial u}{\partial x}+\frac{\partial v}{\partial y}\right) = A_v \, \frac{\partial^2 \zeta}{\partial z^2} + A_h \, \nabla_h^2 \zeta,\]
+

where \(\beta \equiv df/dy\). Notice the change in notation for derivatives, from \(\nabla\) to \(\nabla_h\): this indicates that derivatives now appear only with respect to the “horizontal” coordinates, \(x\) and \(y\), since the \(z\)-dependence in the solution has been eliminated.

+

The third approximation we will make is to assume that vorticity effects are dominated by the Coriolis force, or that \(|\zeta| \ll f\). Using this, along with the (Continuity Eqn) implies that

+

(Vorticity Eqn)

+
+\[\frac{\partial \zeta}{\partial t} + u \frac{\partial \zeta}{\partial x} + v \frac{\partial \zeta}{\partial y} + v \beta - f \, \frac{\partial w}{\partial z} = A_v \, \frac{\partial^2 \zeta}{\partial z^2} + A_h \, \nabla_h^2 \zeta .\]
+

The reason for making this simplification may not be obvious now, but it is a good approximation for flows in the ocean and, as we will see next, it allows us to eliminate the Coriolis term.

+

The final sequence of simplifications eliminates the \(z\)-dependence in the problem by integrating (Vorticity Eqn) in the vertical direction and using boundary conditions.

+

The top 500 metres of the ocean tend to act as a single slab-like layer. The effects of stratification and rotation cause mainly horizontal motion. To first order, the upper layers are approximately averaged flow (while to the next order, surface and deep water move in opposition, with the deep flow much weaker). Consequently, our averaging over depth takes into account this “first order” approximation embodying the horizontal (planar) motion, and ignores the weaker (higher order) effects.

+

First, recognize that the vertical component of velocity on both the top and bottom surfaces should be zero:

+
+\[w = 0 \;\;\; \mbox{at $z=0$}\]
+
+\[w = 0 \;\;\; \mbox{at $z=-H$}\]
+

Notice that in the first condition we’ve also assumed that the water surface is approximately flat \(-\) this is commonly known as the rigid lid approximation. Integrate the differential equation (Vorticity Eqn) with respect to \(z\), applying the above boundary conditions, and using the fact that \(u\) and \(v\) (and therefore also \(\zeta\)) are independent of \(z\),

+

(Depth-Integrated Vorticity)

+
+\[\frac{1}{H} \int_{-H}^0 \mbox{(Vorticity Eqn)} \, dz \Longrightarrow \frac{\partial \zeta}{\partial t} + u \frac{\partial \zeta}{\partial x} + v \frac{\partial \zeta}{\partial y} + v\beta = \frac{1}{H} \, \left( \left. A_v \, \frac{\partial \zeta}{\partial z} \right|_{z=0} - \left. A_v \, \frac{\partial \zeta}{\partial z} \right|_{z=-H} \right) + A_h \, \nabla_h^2 \zeta\]
+

The two boundary terms on the right hand side can be rewritten in terms of known information if we apply two additional boundary conditions: the first, that the given wind stress on the ocean surface,

+
+\[\vec{\tau}(x,y) = (\tau_1,\tau_2) \;\;\mbox{at }\;\; z=0,\]
+

can be written as

+
+\[\rho A_v \left( \frac{\partial u}{\partial z} , \frac{\partial v}{\partial z} \right) = \left( \tau_1 , \tau_2 \right)\]
+

which, after differentiating, leads to

+

(Stress Boundary Condition)

+
+\[\frac{1}{H} \, A_v \, \left. \frac{\partial \zeta}{\partial z} \right|_{z=0} = \frac{1}{\rho H} \, \nabla_h \times \tau \,;\]
+

and, the second, that the Ekman layer along the bottom of the ocean, \(z=-H\), generates Ekman pumping which obeys the following relationship:

+

(Ekman Boundary Condition)

+
+\[\frac{1}{H} \, A_v \, \left. \frac{\partial \zeta}{\partial z} \right|_{z=-H} = \kappa \zeta,\]
+

where the Ekman number, \(\kappa\), is defined by

+
+\[\kappa \equiv \frac{1}{H} \left( \frac{A_v f}{2} \right)^{1/2}.\]
+

Using (Stress Boundary Condition) and (Ekman Boundary Condition) to replace the boundary terms in (Depth-Integrated Vorticity), we get the following equation for the vorticity:

+
+\[\frac{\partial \zeta}{\partial t} + u \frac{\partial \zeta}{\partial x} + v \frac{\partial \zeta}{\partial y} + v \beta = \frac{1}{\rho H} \, \nabla_h \times \tau - \kappa \zeta + A_h \, \nabla_h^2 \zeta.\]
+

The next and final step may not seem at first to be much of a simplification, but it is essential in order to derive a differential equation that can be easily solved numerically. Integrate (Continuity Eqn) with respect to \(z\) in order to obtain

+
+\[\frac {\partial u}{\partial x} + \frac{\partial v}{\partial y} = 0,\]
+

after which we can introduce a stream function, \(\psi\), defined in terms of the velocity as

+
+\[u = - \, \frac{\partial \psi}{\partial y} \, , \, v = \frac{\partial \psi}{\partial x}.\]
+

The stream function satisfies this equation exactly, and we can write the vorticity as

+
+\[\zeta = \nabla_h^2 \psi,\]
+

which then allows us to write both the velocity and vorticity in terms of a single variable, \(\psi\).
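Both statements (that this choice of \(u\) and \(v\) satisfies the depth-integrated continuity equation identically, and that the vorticity reduces to \(\nabla_h^2\psi\)) can be checked symbolically; a sketch using sympy with an arbitrary \(\psi(x,y)\):

    import sympy as sym

    x, y = sym.symbols('x y')
    psi = sym.Function('psi')(x, y)      # an arbitrary stream function

    u = -sym.diff(psi, y)
    v = sym.diff(psi, x)

    # continuity: du/dx + dv/dy vanishes identically
    print(sym.simplify(sym.diff(u, x) + sym.diff(v, y)))                    # 0

    # vorticity: dv/dx - du/dy equals the Laplacian of psi
    zeta = sym.diff(v, x) - sym.diff(u, y)
    print(sym.simplify(zeta - sym.diff(psi, x, 2) - sym.diff(psi, y, 2)))   # 0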

+

After substituting into the vorticity equation, we are left with a single equation in the unknown stream function.

+
+\[\frac{\partial}{\partial t} \, \nabla_h^2 \psi + {\cal J} \left( \psi, \nabla_h^2 \psi \right) + \beta \, \frac {\partial \psi}{\partial x} = \frac{-1}{\rho H} \, \nabla_h \times \tau - \, \kappa \, \nabla_h^2 \psi + A_h \, \nabla_h^4 \psi\]
+

where

+
+\[{\cal J} (a,b) = \frac{\partial a}{\partial x} \, \frac{\partial b}{\partial y} - \frac{\partial a}{\partial y} \, \frac{\partial b}{\partial x}\]
+

is called the Jacobian operator.

+

The original system (X-Momentum Eqn), (Y-Momentum Eqn), (Hydrostatic Eqn) and (Continuity Eqn) was a system of four non-linear PDE’s in four unknown variables (the three velocities and the pressure), each of which depends on the three spatial coordinates. Now let us review the approximations made above, and their effects on this system:

+
    +
  1. After applying the QG approximation and homogeneity, two of the non-linear terms were eliminated from the momentum equations, so that the vertical velocity, \(w\), no longer appears.

  2. By introducing the vorticity, \(\zeta\), the pressure was eliminated, and the two momentum equations were rewritten as a single equation in \(\zeta\) and the velocities.

  3. Some additional terms were eliminated by assuming that Coriolis effects dominate vorticity, and applying the continuity condition.

  4. Integrating over the vertical extent of the ocean, and applying boundary conditions, eliminated the \(z\)-dependence in the problem.

  5. The final step consists of writing the unknown vorticity and velocities in terms of the single unknown stream function, \(\psi\).

+

It is evident that the final equation is considerably simpler: it is a single, non-linear PDE for the unknown stream function, \(\psi\), which is a function of only two independent variables. As we will see in the next section, this equation is of a very common type, for which simple and efficient numerical techniques are available.

+
+
+
+

Glossary

+
    +
  • advection: A property or quantity transferred by the flow of a fluid is said to be “advected” by the flow.

  • +
  • aliasing error: In a numerical scheme, this is the phenomenon that occurs when a grid is not fine enough to resolve the high modes in a solution. These high wavenumbers consequently appear as lower modes in the solution due to aliasing. If the scheme is such that these high wavenumber modes are amplified, then the aliased modes can lead to a significant error in the computed solution.

  • +
  • \(\beta\)-plane: A \(\beta\)-plane is a plane approximation of a curved section of the Earth’s surface, where the Coriolis parameter can be written roughly as a linear function.

  • +
  • continuity equation: The equation that describes mass conservation in a fluid, \({\partial \rho}/{\partial t} + \nabla \cdot (\rho \vec u) = 0\)

  • +
  • Coriolis force: An additional force experienced by an observer positioned in a rotating frame of reference. If \(\Omega\) is the angular velocity of the rotating frame, and \(\vec u\) is the velocity of an object observed within the rotating frame, then the Coriolis force, \(\Omega \times \vec u\), appears as a force tending to deflect the moving object to the right.

  • +
  • Coriolis parameter: The component of the planetary vorticity which is normal to the earth’s surface, usually denoted by f.

  • +
  • difference stencil: A convenient notation for representing difference approximation formula for derivatives.

  • +
  • Ekman layer: The frictional layer in a geophysical fluid flow field which appears on a rigid surface perpendicular to the rotation vector.

  • +
  • Gauss-Seidel relaxation: One of a class of iterative schemes for solving systems of linear equations. See Lab 8 for a complete discussion.

  • +
  • homogeneous fluid: A fluid with constant density. Even though the density of ocean water varies with depth, it is often assumed homogeneous in order to simplify the equations of motion.

  • +
  • hydrostatic balance: A balance, in the vertical direction, between the vertical pressure gradient and the buoyancy force. The pressure difference between any two points on a vertical line is assumed to depend only on the weight of the fluid between the points, as if the fluid were at rest, even though it is actually in motion. This approximation leads to a simplification in the equations of fluid flow, by replacing the vertical momentum equation.

  • +
  • incompressible fluid: A fluid for which changes in the density with pressure are negligible. For a fluid with velocity field, \(\vec u\), this is expressed by the equation \(\nabla \cdot \vec u = 0\). This equation states that the local increase of density with time must be balanced by the divergence of the mass flux.

  • +
  • Jacobi relaxation: The simplest of the iterative methods for solving linear systems of equations. See Lab 8 for a complete discussion.

  • +
  • momentum equation(s): The equations representing Newton’s second law of motion for a fluid. There is one momentum equation for each component of the velocity.

  • +
  • over-relaxation: Within a relaxation scheme, this refers to the use of a relaxation parameter \(\mu > 1\). It accelerates the standard Gauss-Seidel relaxation by forcing the iterates to move closer to the actual solution.

  • +
+
    +
  • Poisson equation: The partial differential equation \(\nabla^2 u = f\) or, written in two dimensions, \({\partial^2 u}/{\partial x^2} + {\partial^2 u}/{\partial y^2} =f(x,y)\).

  • +
  • QG: abbreviation for quasi-geostrophic.

  • +
  • quasi-geostrophic balance: Approximate balance between the pressure gradient and the Coriolis Force.

  • +
  • relaxation: A term that applies to a class of iterative schemes for solving large systems of equations. The advantage to these schemes for sparse matrices (compared to direct schemes such as Gaussian elimination) is that they operate only on the non-zero entries of the matrix. For a description of relaxation methods, see Lab 8.

  • +
+
    +
  • rigid lid approximation: Assumption that the water surface deflection is negligible in the continuity equation (or conservation of volume equation)

  • +
  • SOR: see successive over-relaxation.

  • +
  • sparse system: A system of linear equations whose matrix representation has a large percentage of its entries equal to zero.

  • +
  • stream function: Incompressible, two-dimensional flows with velocity field \((u,v)\), may be described by a stream function, \(\psi(x, y)\), which satisfies \(u = −{\partial \psi}/{\partial y}, v = {\partial \psi}/{\partial x}\). These equations are a consequence of the incompressibility condition.

  • +
  • successive over-relaxation: An iterative method for solving large systems of linear equations. See Lab 8 for a complete discussion.

  • +
  • under-relaxation: Within a relaxation scheme, this refers to the use of a relaxation parameter \(\mu < 1\). It is not appropriate for solving systems of equations directly, but does have some application to multigrid methods.

  • +
  • vorticity: Defined to be the curl of the velocity field, \(\zeta = \nabla \times \vec u\). In geophysical flows, where the Earth is a rotating frame of reference, the vorticity can be considered as the sum of a relative vorticity (the curl of the velocity in the nonrotating frame) and the planetary vorticity, \(2 \Omega\). For these large-scale flows, vorticity is almost always present, and the planetary vorticity dominates.


References


Arakawa, A. and V. R. Lamb, 1981: A potential enstrophy and energy conserving scheme for the shallow water equations. Monthly Weather Review, 109, 18–36.


Bryan, K., 1963: A numerical investigation of a non-linear model of a wind-driven ocean. Journal of the Atmospheric Sciences, 20, 594–606.


McCalpin, J. D., 1987: On the adjustment of azimuthally perturbed vortices. Journal of Geophysical Research, 92, 8213–8225.


Mesinger, F. and A. Arakawa, 1976: Numerical Methods Used in Atmospheric Models, GARP Publications Series No. 17, Global Atmospheric Research Programme.


Pedlosky, J., 1987: Geophysical Fluid Dynamics. Springer-Verlag, New York, 2nd edition.


Phillips, N. A., 1956: The general circulation of the atmosphere: A numerical experiment. Quarterly Journal of the Royal Meteorological Society, 82, 123–164.


Strang, G., 1988: Linear Algebra and its Applications. Harcourt Brace Jovanovich, San Diego, CA, 2nd edition.


Veronis, G., 1966: Wind-driven ocean circulation – Part 2. Numerical solutions of the non-linear problem. Deep Sea Research, 13, 31–55.

+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab8/01-lab8.ipynb b/notebooks/lab8/01-lab8.ipynb new file mode 100644 index 0000000..997b311 --- /dev/null +++ b/notebooks/lab8/01-lab8.ipynb @@ -0,0 +1,1994 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 8: Solution of the Quasi-geostrophic Equations \n", + "\n", + " Lin Yang & John M. Stockie \n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## List of Problems ##\n", + "\n", + "- [Problem 1:](#Problem-One) Discretization of the Jacobian term\n", + "- [Problem 2:](#Problem-Two) Numerical instability in the “straightforward” Jacobian \n", + "- [Problem 3:](#Problem-Three) Implement the SOR relaxation\n", + "- [Problem 4:](#Problem-Four) No-slip boundary conditions\n", + "- [Problem 5:](#Problem-Five) Starting values for the time integration\n", + "- [Problem 6:](#Problem-Six) Duplication of classical results" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Goals ##\n", + "\n", + "This lab is an introduction to the use of implicit schemes for the\n", + "solution of PDE’s, using as an example the quasi-geostrophic equations\n", + "that govern the large-scale circulation of the oceans.\n", + "\n", + "You will see that the discretization of the governing equations leads to\n", + "a large, sparse system of linear equations. The resulting matrix problem\n", + "is solved with relaxation methods, one of which you will write the code\n", + "for, by modifying the simpler Jacobi relaxation. There are two types of\n", + "boundary conditions typically used for this problem, one of which you\n", + "will program yourself – your computations are easily compared to\n", + "previously-obtained “classical” results." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Learning Objectives ##\n", + "\n", + "After reading and working through this lab you will be able to:\n", + "* Explain one reason why one may need to solve a large system of linear equations even though the underlying method is explicit\n", + "* Describe the relaxation method\n", + "* Rescale a partial-differential equation\n", + "* Write down the center difference approximation for the Laplacian operator\n", + "* Describe what a ghost point is" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Readings\n", + "There are no required readings for this lab. 
If you would like some\n", + "additional background beyond the material in the lab itself, then you\n", + "may refer to the references listed below:\n", + "\n", + "- **Equations of motion:**\n", + "\n", + " - [Pedlosky](#Ref:Pedlosky) Sections 4.6 & 4.11 (derivation of QG\n", + " equations)\n", + "\n", + "- **Nonlinear instability:**\n", + "\n", + " - [Mesinger & Arakawa](#Ref:MesingerArakawa) (classic paper with\n", + " description of instability and aliasing)\n", + "\n", + " - [Arakawa & Lamb](#Ref:ArakawaLamb) (non-linear instability in the QG\n", + " equations, with the Arakawa-Jacobian)\n", + "\n", + "- **Numerical methods:**\n", + "\n", + " - [Strang](#Ref:Strang) (analysis of implicit schemes)\n", + "\n", + " - [McCalpin](#Ref:McCalpin) (QGbox model)\n", + "\n", + "- **Classical numerical results:**\n", + "\n", + " - [Veronis](#Ref:Veronis) (numerical results)\n", + "\n", + " - [Bryan](#Ref:Bryan) (numerical results)\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import context\n", + "from IPython.display import Image\n", + "# import the quiz script\n", + "from numlabs.lab8 import quiz8 as quiz" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction ##\n", + "\n", + "An important aspect in the study of large-scale circulation in the ocean\n", + "is the response of the ocean to wind stress. Solution of this problem\n", + "using the full Navier-Stokes equations is quite complicated, and it is\n", + "natural to look for some way to simplify the governing equations. A\n", + "common simplification in many models of large-scale, wind-driven ocean\n", + "circulation, is to assume a system that is homogeneous and barotropic.\n", + "\n", + "It is now natural to ask:\n", + "\n", + "> *Does the simplified model capture the important dynamics in the\n", + "> real ocean?*\n", + "\n", + "This question can be investigated by solving the equations numerically,\n", + "and comparing the results to observations in the real ocean. Many\n", + "numerical results are already available, and so the purpose of this lab\n", + "is to introduce you to the numerical methods used to solve this problem,\n", + "and to compare the computed results to those from some classical papers\n", + "on numerical ocean simulations.\n", + "\n", + "Some of the numerical details (in Sections [Right Hand Side](#Right-Hand-Side), [Boundary Conditions](#Boundary-Conditions), [Matrix Form of Discrete Equations](#Matrix-Form-of-the-Discrete-Equations), [Solution of the Poisson Equation by Relaxation](#Solution-of-the-Poisson-Equation-by-Relaxation)\n", + "and the\n", + "appendices) are quite technical, and may be passed over the first time\n", + "you read through the lab. You can get a general idea of the basic\n", + "solution procedure without them. However, you should return to them\n", + "later and understand the material contained in them, since these\n", + "sections contain techniques that are commonly encountered when solving\n", + "PDE’s, and an understanding of these sections is required for you to\n", + "answer the problems in the Lab.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## The Quasi-Geostrophic Model ##\n", + "\n", + "Consider a rectangular ocean with a flat bottom, as pictured in\n", + "[Figure Model Ocean](#Figure-Model-Ocean), and ignore curvature effects, by confining the region of interest to a *mid-latitude $\\beta$-plane*." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/rect.png',width='45%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Model Ocean The rectangular ocean with flat bottom, ignoring curvature\n", + "effects.\n", + "
\n", + "\n", + "More information on what is a $\\beta$-plane and on the neglect of\n", + "curvature terms in the $\\beta$-plane approximation is given in the\n", + "appendix.\n", + "\n", + "If we assume that the ocean is homogeneous (it has constant density\n", + "throughout), then the equations governing the fluid motion on the\n", + "$\\beta$-plane are: \n", + "\n", + "
(X-Momentum Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac{\\partial u}{\\partial t} + u \\frac {\\partial u}{\\partial x} + v \\frac {\\partial u}{\\partial y} + w \\frac{\\partial u}{\\partial z} - fv = - \\, \\frac{1}{\\rho} \\, \\frac {\\partial p}{\\partial x}\n", + "+ A_v \\, \\frac{\\partial^2 u}{\\partial z^2} + A_h \\, \\nabla^2 u\n", + "\\end{equation}\n", + "\n", + "
(Y-Momentum Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac{\\partial v}{\\partial t} + u \\frac{\\partial v}{\\partial x} + v \\frac{\\partial v}{\\partial y} + w \\frac{\\partial v}{\\partial z} + fu = - \\, \\frac{1}{\\rho} \\, \\frac{\\partial p}{\\partial y}\n", + "+ A_v \\, \\frac{\\partial^2 v}{\\partial z^2} + A_h \\, \\nabla^2 v\n", + "\\end{equation}\n", + "\n", + "
(Hydrostatic Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac{\\partial p}{\\partial z} = - \\rho g\n", + "\\end{equation}\n", + "\n", + "
(Continuity Eqn)
\n", + "\n", + "\\begin{equation}\n", + "\\frac {\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} = - \\, \\frac{\\partial w}{\\partial z}\n", + "\\end{equation}\n", + "\n", + "where\n", + "\n", + "- ([X-Momentum Eqn](#eq:xmom)) and ([Y-Momentum Eqn](#eq:ymom)) are the lateral momentum equations,\n", + "\n", + "- ([Hydrostatic Eqn](#eq:hydrostatic)) is the hydrostatic balance (and replaces the vertical momentum\n", + " equation), and\n", + "\n", + "- ([Continuity Eqn](#eq:continuity)) is the continuity (or incompressibility or conservation of volume) condition.\n", + "\n", + "The variables and parameters appearing above are:\n", + "\n", + "- $(u,v,w)$, the fluid velocity components;\n", + "\n", + "- $f(y)$, the Coriolis parameter (assumed to be a linear function of\n", + " $y$);\n", + "\n", + "- $\\rho$, the density (assumed constant for a homogeneous fluid);\n", + "\n", + "- $A_v$ and $A_h$, the vertical and horizontal coefficients of\n", + " viscosity, respectively (constants);\n", + "\n", + "- $g$, the gravitational acceleration (constant).\n", + "\n", + "Equations ([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)) form a non-linear system of PDE’s, for which there are many\n", + "numerical methods available. However, due to the complexity of the\n", + "equations, the methods themselves are *very complex*, and\n", + "consume a large amount of CPU time. It is therefore advantageous for us\n", + "to reduce the equations to a simpler form, for which common, and more\n", + "efficient numerical solution techniques can be used.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "By applying a sequence of physically-motivated approximations (see\n", + "[Appendix Simplification of the QG Model Equations](#Simplification-of-the-QG-Model-Equations])) and by using the boundary conditions, the system([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)) can be\n", + "reduced to a single PDE: \n", + "
\n", + "(Quasi-Geostrophic Eqn)\n", + "$$\n", + " \\frac{\\partial}{\\partial t} \\, \\nabla_h^2 \\psi + {\\cal J} \\left( \\psi, \\nabla_h^2 \\psi \\right)\n", + " + \\beta \\, \\frac {\\partial \\psi}{\\partial x} = \\frac{1}{\\rho H} \\, \\nabla_h \\times \\tau - \\kappa\n", + " \\, \\nabla_h^2 \\psi + A_h \\, \\nabla_h^4 \\psi $$\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + " where\n", + "\n", + "- $\\psi$ is the stream function, defined by\n", + " $$u = - \\frac{\\partial \\psi}{\\partial y},$$\n", + " $$v = \\frac{\\partial \\psi}{\\partial x}$$\n", + "\n", + "- $$\\nabla_h = \\left(\\frac{\\partial}{\\partial x},\\frac{\\partial}{\\partial y}\\right)$$ \n", + " is the “horizontal”\n", + " gradient operator, so-called because it involves only derivatives in\n", + " $x$ and $y$;\n", + "\n", + "- $${\\cal J} (a,b) = \\frac{\\partial a}{\\partial x} \\frac{\\partial\n", + " b}{\\partial y} - \\frac{\\partial a}{\\partial y} \\frac{\\partial b}{\\partial x}$$ \n", + " is the *Jacobian* operator;\n", + "\n", + "- $\\vec{\\tau}(x,y) = \\left(\\,\\tau_1(x,y),\\tau_2(x,y)\\,\\right)$ is the\n", + " wind stress boundary condition at the surface $z=0$. A simple form\n", + " of the wind stress might assume an ocean “box” that extends from\n", + " near the equator to a latitude of about $60^\\circ$, for which\n", + " typical winds are easterly near the equator and turn westerly at\n", + " middle latitudes. A simple function describing this is\n", + " $$\\vec{\\tau} = \\tau_{max} (-\\cos y, 0),$$ \n", + " which is what we will use in this lab. \n", + " \n", + " More complicated wind stress functions are possible. See [McCalpin’s](#Ref:McCalpin)\n", + " QGBOX documentation [p. 24] for another\n", + " example." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- $\\beta = df/dy$ is a constant, where $f(y) = f_0+\\beta y$ (see\n", + " [Appendix Definition of the Beta-plane](#Definition-of-the-Beta-plane));\n", + "\n", + "- $\\kappa = {1}/{H} \\left[ (A_v f_0)/{2} \\right]^{1/2}$ is the bottom friction scaling (constant); and\n", + "\n", + "- $H$ is the vertical depth of the water column (constant).\n", + "\n", + "Notice that the original (second order) system of four equations in four\n", + "unknowns ($u$, $v$, $w$, $p$) has now been reduced to a single (fourth\n", + "order) PDE in one unknown function, $\\psi$. It will become clear in the\n", + "next section just how much simpler the system of equations has become …\n", + "\n", + "Before going on, though, we need to close the system with the\n", + "*boundary conditions* for the stream function $\\psi$. We\n", + "must actually consider two cases, based on whether or not the lateral\n", + "eddy viscosity parameter, $A_h$, is zero:\n", + "\n", + "- **if $A_h=0$:** the boundary conditions are\n", + " *free-slip*; that is, $\\psi=0$ on the boundary.\n", + "\n", + "- **if $A_h\\neq 0$:** the boundary conditions are\n", + " *no-slip*; that is both $\\psi$ and its normal\n", + " derivative $\\nabla\\psi\\cdot\\hat{n}$ are zero on the boundary (where\n", + " $\\hat{n}$ is the normal vector to the boundary)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Scaling the Equations of Motion ###\n", + "\n", + "In physical problems, it is not uncommon for some of the quantities of\n", + "interest to differ in size by many orders of magnitude. This is of\n", + "particular concern when computing numerical solutions to such problems,\n", + "since then round-off errors can begin to pollute the computations (see\n", + "Lab 2).\n", + "\n", + "This is also the case for the QG equations, where the parameters have a\n", + "large variation in size. The QG model parameters, and typical numerical\n", + "values, are given in [Table of Parameters](#tab:parameters). 
For such problems it is customary to *rescale* the\n", + "variables in such a way that the size differences are minimized.\n", + "\n", + "\n", + "**Problem Parameters**\n", + "\n", + "| Symbol | Name | Range of Magnitude | Units |\n", + "| :----------: | :-------------------------: | :---------------------------------------: | :----------------: |\n", + "| $R$ | Earth’s radius | $6.4 \\times 10^6$ | $m$ |\n", + "|$\\Omega$ | Angular frequency for Earth | $7.27 \\times 10^{-5}$ | $s^{-1}$ |\n", + "| $H$ | Depth of active layer | $100 \\rightarrow 4000$ | $m$ |\n", + "| $B$ | Length and width of ocean | $1.0 \\rightarrow 5.0 \\times 10^6$ | $m$ |\n", + "| $\\rho$ | Density of water | $10^3$ | $kg/m^3$ |\n", + "| $A_h$ | Lateral eddy viscosity | $0$ or $10^1 \\rightarrow 10^4$ | $m^2/s$ |\n", + "| $A_v$ | Vertical eddy viscosity | $10^{-4} \\rightarrow 10^{-1}$ | $m^2/s$ |\n", + "| $\\tau_{max}$ | Maximum wind stress | $10^{-2} \\rightarrow 1$ | $kg m^{-1} s^{-2}$ |\n", + "| $\\theta_0$ | Latitude | $0 \\rightarrow \\frac{\\pi}{3}$ | - |" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "**Derived Quantities**\n", + "\n", + "| Symbol | Name | Range of Magnitude | Units |\n", + "| :----------: | :-------------------------------------------: | :----------------------------------------: | :----------------: |\n", + "| $\\beta$ | $\\beta =2\\Omega \\cos \\theta_0 / R$ | $1.1 \\rightarrow 2.3 \\times 10^{-11}$ | $m^{-1} s^{-1}$ |\n", + "| $f_0$ | $f_0 = 2 \\Omega \\sin \\theta_0$ | $0.0 \\rightarrow 1.3 \\times 10^{-4}$ | $s^{-1}$ |\n", + "| $U_0$ | Velocity scale = $\\tau_{max}/(\\beta\\rho H B)$ | $10^{-5} \\rightarrow 10^{-1}$ | $m s^{-1}$ | \n", + "| $\\kappa$ | bottom friction parameter | $0.0 \\rightarrow 10^{-5}$ | $m^2 s^{-2}$ |\n", + "\n", + "**Non-dimensional Quantities**\n", + "\n", + "| Symbol / Name | Range of Magnitude for Quantity |\n", + "| :----------------------------------------------------: | :------------------------------------------: |\n", + "| $\\epsilon$ / Vorticity ratio = $U_0/(\\beta B^2)$ | (computed) | \n", + "| $\\frac{\\tau_{max}}{\\epsilon\\beta^2 \\rho H B^3}$ | $10^{-12} \\rightarrow 10^{-14}$ |\n", + "| $\\frac{\\kappa}{\\beta B}$ | $4 \\times 10^{-4} \\rightarrow 6 \\times 10^1$ |\n", + "| $\\frac{A_h}{\\beta B^3}$ | $10^{-7} \\rightarrow 10^{-4}$ |\n", + "\n", + "
\n", + "**Table of Parameters**\n", + "
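As a concrete illustration of the derived quantities, the short sketch below evaluates them for one mid-range choice of the problem parameters. The numerical values are assumptions picked from the ranges in the table, not values prescribed by the lab.

```python
import numpy as np

# Representative parameter values chosen from the ranges in the table (assumptions).
R = 6.4e6             # Earth's radius (m)
Omega = 7.27e-5       # Earth's angular frequency (1/s)
H = 1000.0            # depth of active layer (m)
B = 2.0e6             # length and width of the ocean (m)
rho = 1.0e3           # density of water (kg/m^3)
A_v = 1.0e-2          # vertical eddy viscosity (m^2/s)
tau_max = 0.1         # maximum wind stress (kg m^-1 s^-2)
theta0 = np.pi / 4    # latitude (radians)

# Derived quantities, using the formulas in the table.
beta = 2 * Omega * np.cos(theta0) / R
f0 = 2 * Omega * np.sin(theta0)
U0 = tau_max / (beta * rho * H * B)
kappa = (1.0 / H) * np.sqrt(A_v * f0 / 2.0)   # bottom friction scaling
epsilon = U0 / (beta * B**2)                  # vorticity ratio

print(f"beta    = {beta:.3e}")
print(f"f0      = {f0:.3e}")
print(f"U0      = {U0:.3e}")
print(f"kappa   = {kappa:.3e}")
print(f"epsilon = {epsilon:.3e}")
```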
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let us go through this scaling process for the evolution\n", + "equation [(Quasi-Geostrophic Eqn)](#eq:quasi) for the stream function, which is reproduced\n", + "here for easy comparison:\n", + "\n", + "$$\\frac{\\partial}{\\partial t} \\nabla^2_h \\psi = - \\beta \\frac{\\partial \\psi}{\\partial x} - {\\cal J}(\\psi, \\nabla_h^2\\psi)+ \\frac{1}{\\rho H} \\nabla_h \\times \\vec{\\tau} - \\kappa \\nabla_h^2 \\psi + A_h \\nabla_h^4 \\psi$$ \n", + " \n", + "The basic idea is to find typical\n", + "*scales of motion*, and then redefine the dependent and\n", + "independent variables in terms of these scales to obtain\n", + "*dimensionless variables*.\n", + "\n", + "For example, the basin width and length, $B$, can be used as a scale for\n", + "the dependent variables $x$ and $y$. Then, we define dimensionless\n", + "variables \n", + "
\n", + "(x-scale eqn)\n", + "$$x^\\prime = \\frac{x}{B}$$\n", + "
\n", + "(y-scale eqn)\n", + "$$y^\\prime = \\frac{y}{B}$$\n", + "
\n", + "\n", + "Notice that where $x$ and $y$ varied between\n", + "0 and $B$ (where $B$ could be on the order of hundreds of kilometres),\n", + "the new variables $x^\\prime$ and $y^\\prime$ now vary between 0 and 1\n", + "(and so the ocean is now a unit square).\n", + "\n", + "Similarly, we can redefine the remaining variables in the problem as\n", + "
\n", + "(t-scale eqn)\n", + "$$\n", + " t^\\prime = \\frac{t}{\\left(\\frac{1}{\\beta B}\\right)} $$\n", + "
\n", + "($\\psi$-scale eqn)\n", + "$$ \\psi^\\prime = \\frac{\\psi}{\\epsilon \\beta B^3} $$\n", + "
\n", + "($\\tau$-scale eqn)\n", + "$$ \\vec{\\tau}^\\prime = \\frac{\\vec{\\tau}}{\\tau_{max}}\n", + " $$
\n", + "\n", + "where the scales have been\n", + "specially chosen to represent typical sizes of the variables. Here, the\n", + "parameter $\\epsilon$ is a measure of the the ratio between the “relative\n", + "vorticity” ($\\max|\\nabla_h^2 \\psi|$) and the planetary vorticity (given\n", + "by $\\beta B$)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Now, we need only substitute for the original variables in the\n", + "equations, and replace derivatives with their dimensionless\n", + "counterparts; for example, using the chain rule,\n", + "$$\\frac{\\partial}{\\partial x} = \\frac{\\partial x^\\prime}{\\partial x}\n", + "\\frac{\\partial}{\\partial x^\\prime}.$$ \n", + "Then the equation of motion becomes \n", + "\n", + "\n", + "(Rescaled Quasi-Geostrophic Eqn)\n", + "$$ \\frac{\\partial}{\\partial t^\\prime} \\nabla^{\\prime 2}_h \\psi^\\prime = - \\, \\frac{\\partial \\psi^\\prime}{\\partial x^\\prime} - \\epsilon {\\cal J^\\prime}(\\psi^\\prime, \\nabla_h^{\\prime 2}\\psi^\\prime) + \\frac{\\tau_{max}}{\\epsilon \\beta^2 \\rho H B^3} \\nabla^\\prime_h \\times \\vec{\\tau}^\\prime - \\, \\frac{\\kappa}{\\beta B} \\nabla_h^{\\prime 2} \\psi^\\prime + \\frac{A_h}{\\beta B^3} \\nabla_h^{\\prime 4} \\psi^\\prime $$ \n", + "The superscript\n", + "“$\\,^\\prime$” on $\\nabla_h$ and ${\\cal J}$ signify that the derivatives\n", + "are taken with respect to the dimensionless variables. Notice that each\n", + "term in ([Rescaled Quasi-Geostrophic Eqn](#eq:qg-rescaled)) is now dimensionless, and that there are\n", + "now 4 dimensionless combinations of parameters \n", + "$$\\epsilon, \\;\\; \\frac{\\tau_{max}}{\\epsilon \\beta^2 \\rho H B^3}, \\;\\; \\frac{\\kappa}{\\beta B}, \\;\\; \\mbox{ and} \\;\\; \\frac{A_h}{\\beta B^3}.$$ \n", + "These four expressions define four new\n", + "dimensionless parameters that replace the original (unscaled) parameters\n", + "in the problem.\n", + "\n", + "The terms in the equation now involve the dimensionless stream function,\n", + "$\\psi^\\prime$, and its derivatives, which have been scaled so that they\n", + "are now of order 1 in magnitude. The differences in sizes between terms\n", + "in the equation are now embodied solely in the four dimensionless\n", + "parameters. A term which is multiplied by a small parameter is thus\n", + "truly small in comparison to the other terms, and hence additive\n", + "round-off errors will not contribute substantially to a numerical\n", + "solution based on this form of the equations.\n", + "\n", + "For the remainder of this lab, we will use the scaled version of the\n", + "equations. Consequently, the notation will be simplified by dropping the\n", + "“primes” on the dimensionless variables. But, **do not\n", + "forget**, that any solution (numerical or analytical) from the\n", + "scaled equations must be converted back into dimensional variables\n", + "using [the scale equations](#eq:xscale)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Discretization of the QG equations ##\n", + "\n", + "At first glance, it is probably not clear how one might discretize the\n", + "QG equation ([Rescaled Quasi-Geostrophic Eqn](#eq:qg-rescaled)) from the previous section. This equation is an evolution\n", + "equation for $\\nabla_h^2 \\psi$ (the Laplacian of the stream function)\n", + "but has a right hand side that depends not only on $\\nabla_h^2 \\psi$,\n", + "but also on $\\psi$ and $\\nabla_h^4 \\psi$. 
The problem may be written in\n", + "a more suggestive form, by letting $\\chi = \\partial\\psi/\\partial t$.\n", + "Then, the ([Rescaled Quasi-Geostrophic Eqn](#eq:qg-rescaled)) becomes \n", + "\n", + "\n", + "(Poisson Eqn)\n", + "$$\\nabla_h^2 \\chi = F(x,y,t), \n", + "$$\n", + "\n", + "where $F(x,y,t)$ contains all of the terms\n", + "except the time derivative. We will see that the discrete version of\n", + "this equation is easily solved for the new unknown variable $\\chi$,\n", + "after which \n", + "
\n", + "$$\\frac{\\partial\\psi}{\\partial t} = \\chi\n", + "$$
\n", + "\n", + "may be used to evolve the stream function in\n", + "time.\n", + "\n", + "The next two sections discuss the spatial and temporal discretization,\n", + "including some details related to the right hand side, the boundary\n", + "conditions, and the iterative scheme for solving the large sparse system\n", + "of equations that arises from the Poisson equation for $\\chi$. Following\n", + "that is an summary of the steps in the solution procedure.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Spatial Discretization\n", + "\n", + "Assume that we are dealing with a square ocean, with dimensions\n", + "$1\\times 1$ (in non-dimensional coordinates) and begin by dividing the\n", + "domain into a grid of discrete points\n", + "$$x_i = i \\Delta x, \\;\\;i = 0, 1, 2, \\dots, M$$\n", + "$$y_j = j \\Delta y, \\;\\;j = 0, 1, 2, \\dots, N$$\n", + "where $\\Delta x = 1/M$\n", + "and $\\Delta y = 1/N$. In order to simplify the discrete equations, it\n", + "will be helpful to assume that $M=N$, so that\n", + "$\\Delta x = \\Delta y \\equiv d$. We can then look for approximate values\n", + "of the stream function at the discrete points; that is, we look for\n", + "$$\\Psi_{i,j} \\approx \\psi(x_i,y_j)$$ \n", + "(and similarly for $\\chi_{i,j}$).\n", + "The computational grid and placement of unknowns is pictured in\n", + "Figure [Spatial Grid](#Spatial-Grid)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/spatial.png',width='45%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Spatial Grid\n", + "
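A minimal sketch of how the grid and the arrays of discrete unknowns might be set up (the names and sizes here are illustrative, not those used in the lab code):

```python
import numpy as np

N = 16                      # number of grid intervals in each direction (illustrative)
d = 1.0 / N                 # grid spacing, Delta x = Delta y = d, on the unit square
x = np.arange(N + 1) * d    # x_i = i*d, i = 0, ..., N
y = np.arange(N + 1) * d    # y_j = j*d, j = 0, ..., N

# Discrete unknowns Psi[i, j] ~ psi(x_i, y_j) and chi[i, j] ~ chi(x_i, y_j),
# stored as (N+1) x (N+1) arrays that include the boundary points.
Psi = np.zeros((N + 1, N + 1))
chi = np.zeros((N + 1, N + 1))
```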
\n", + "\n", + "Derivatives are replaced by their centered, second-order finite\n", + "difference approximations \n", + "$$\n", + " \\left. \\frac{\\partial \\Psi}{\\partial x} \\right|_{i,j}\n", + " \\approx \n", + " \\frac{\\Psi_{i+1,j}-\\Psi_{i-1,j}}{2d}\n", + " \\left. \\frac{\\partial^2 \\Psi}{\\partial x^2} \\right|_{i,j} \n", + " \\approx\n", + " \\frac{\\Psi_{i+1,j} - 2 \\Psi_{i,j} + \\Psi_{i-1,j}}{d^2}\n", + "$$ \n", + "and similarly for the\n", + "$y$-derivatives. The discrete analogue of the ([Poisson equation](#eq:poisson)),\n", + "centered at the point $(x_i,y_j)$, may be written as\n", + "$$\\frac{\\chi_{i+1,j} - 2\\chi_{i,j} +\\chi_{i-1,j}}{d^2} + \n", + " \\frac{\\chi_{i,j+1} - 2\\chi_{i,j} +\\chi_{i,j-1}}{d^2} = F_{i,j}$$ \n", + "or,\n", + "after rearranging,\n", + "\n", + "\n", + "(Discrete $\\chi$ Eqn)\n", + "$$\\chi_{i+1,j}+\\chi_{i-1,j}+\\chi_{i,j+1}+\\chi_{i,j-1}-4\\chi_{i,j} =\n", + " d^2F_{i,j}.\n", + "$$\n", + "\n", + "Here, we’ve used\n", + "$F_{i,j} = F(x_i,y_j,t)$ as the values of the right hand side function\n", + "at the discrete points, and said nothing of how to discretize $F$ (this\n", + "will be left until [Right Hand Side](#Right-Hand-Side). The ([Discrete $\\chi$ equation](#eq:discrete-chi)) is an equation centered at the grid point $(i,j)$, and relating\n", + "the values of the approximate solution, $\\chi_{i,j}$, at the $(i,j)$\n", + "point, to the four neighbouring values, as described by the *5-point difference stencil* pictured in\n", + "[Figure Stencil](#fig:stencil)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/2diff.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Stencil:\n", + "The standard 5-point centered difference stencil for the Laplacian\n", + "(multiply by $\\frac{1}{d^2}$ to get the actual coefficients).
\n", + "\n", + "These stencil diagrams are a compact way of representing the information\n", + "contained in finite difference formulas. To see how useful they are, do\n", + "the following:\n", + "\n", + "- Choose the point on the grid in [Figure Spatial Grid](#Spatial-Grid) that\n", + " you want to apply the difference formula ([Discrete $\\chi$ Eqn](#eq:discrete-chi)).\n", + "\n", + "- Overlay the difference stencil diagram on the grid, placing the\n", + " center point (with value $-4$) on this point.\n", + "\n", + "- The numbers in the stencil are the multiples for each of the\n", + " unknowns $\\chi_{i,j}$ in the difference formula.\n", + "\n", + "An illustration of this is given in [Figure Overlay](#fig:overlay)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/2diffgrid.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "**Figure Overlay:**\n", + " The 5-point difference stencil overlaid on the grid.\n", + "
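In code, overlaying the 5-point stencil at every interior point amounts to a single vectorized array operation. Here is one possible sketch of the discrete Laplacian (illustrative, not the lab's own routine):

```python
import numpy as np

def laplacian_5pt(U, d):
    """Apply the 5-point Laplacian stencil at all interior points of U.

    Returns an array of the same shape as U; boundary entries are left at zero.
    """
    L = np.zeros_like(U, dtype=float)
    L[1:-1, 1:-1] = (U[2:, 1:-1] + U[:-2, 1:-1] +
                     U[1:-1, 2:] + U[1:-1, :-2] -
                     4.0 * U[1:-1, 1:-1]) / d**2
    return L

# Quick check on a field with a known Laplacian: for U = x^2 + y^2, lap(U) = 4.
N = 32
d = 1.0 / N
x = np.arange(N + 1) * d
X, Y = np.meshgrid(x, x, indexing="ij")
print(laplacian_5pt(X**2 + Y**2, d)[1:-1, 1:-1].max())   # ~4, up to round-off
```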
\n", + " \n", + "Before going any further with the discretization, we need to provide a\n", + "few more details for the discretization of the right hand side function,\n", + "$F(x,y,t)$, and the boundary conditions. If you’d rather skip these for\n", + "now, and move on to the time discretization\n", + "([Section Temporal Discretization](#Temporal-Discretization))\n", + "or the outline of the solution procedure\n", + "([Section Outline of Solution Procedure](#Outline-of-Solution-Procedure)), then you may do so now." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Right Hand Side\n", + "\n", + "The right hand side function for the ([Poisson equation](#eq:poisson)) is reproduced here\n", + "in scaled form (with the “primes” dropped): \n", + "$$F(x,y,t) = - \\, \\frac{\\partial \\psi}{\\partial x} - \\epsilon {\\cal J}(\\psi,\\nabla_h^{2}\\psi) + \\frac{\\tau_{max}}{\\epsilon \\beta^2 \\rho H B^3} \\nabla_h \\times \\vec{\\tau} - \\frac{\\kappa}{\\beta B} \\nabla_h^{2} \\psi + \\frac{A_h}{\\beta B^3} \\nabla_h^{4} \\psi$$\n", + "\n", + "Alternatively, the Coriolis and Jacobian terms can be grouped together\n", + "as a single term: \n", + "$$- \\, \\frac{\\partial\\psi}{\\partial x} - \\epsilon {\\cal J}(\\psi, \\nabla^2_h\\psi) = - {\\cal J}(\\psi, y + \\epsilon \\nabla^2_h\\psi)$$\n", + "\n", + "Except for the Jacobian term, straightforward second-order centered\n", + "differences will suffice for the Coriolis force\n", + "$$\\frac{\\partial\\psi}{\\partial x} \\approx \\frac{1}{2d} \\left(\\Psi_{i+1,j} - \\Psi_{i-1,j}\\right),$$ \n", + "the wind stress \n", + "$$\\nabla_h \\times \\vec{\\tau} \\approx\n", + " \\frac{1}{2d} \\, \n", + " \\left( \\tau_{2_{i+1,j}}-\\tau_{2_{i-1,j}} - \n", + " \\tau_{1_{i,j+1}}+\\tau_{1_{i,j-1}} \\right),$$\n", + "and the second order viscosity term \n", + "$$\\nabla_h^2 \\psi \\approx\n", + " \\frac{1}{d^2} \\left( \\Psi_{i+1,j}+\\Psi_{i-1,j}+\\Psi_{i,j+1} +\n", + " \\Psi_{i,j-1} - 4 \\Psi_{i,j} \\right).$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The higher order (biharmonic) viscosity term, $\\nabla_h^4 \\psi$, is\n", + "slightly more complicated. The difference stencil can be derived in a\n", + "straightforward way by splitting into $\\nabla_h^2 (\\nabla_h^2 \\psi)$ and\n", + "applying the discrete version of the Laplacian operator twice. The\n", + "resulting difference formula is \n", + "
\n", + "(Bi-Laplacian)\n", + "$$ \\nabla_h^4 \\psi \n", + " = \\nabla_h^2 ( \\nabla_h^2 \\psi ) $$ $$\n", + " \\approx \\frac{1}{d^4} \\left( \\Psi_{i+2,j} + \\Psi_{i,j+2} +\n", + " \\Psi_{i-2,j} + \\Psi_{i,j-2} \\right. + \\, 2 \\Psi_{i+1,j+1} + 2 \\Psi_{i+1,j-1} + 2 \\Psi_{i-1,j+1} +\n", + " 2 \\Psi_{i-1,j-1}\n", + " \\left. - 8 \\Psi_{i,j+1} - 8 \\Psi_{i-1,j} - 8 \\Psi_{i,j-1} - 8 \\, \\Psi_{i+1,j} + 20 \\Psi_{i,j} \\right)\n", + " $$
\n", + "which is pictured in the difference stencil in [Figure Bi-Laplacian Stencil](#fig:d4stencil)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/4diff.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "**Figure Bi-Laplacian Stencil:**\n", + "13-point difference stencil for the centered difference formula for\n", + "$\\nabla_h^4$ (multiply by $\\frac{1}{d^4}$ to get the actual\n", + "coefficients).
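Rather than hand-coding the 13-point formula, one can obtain it by applying the 5-point Laplacian twice, which is exactly how the stencil was derived. A sketch, assuming the laplacian_5pt helper from the earlier sketch (points within two grid spaces of the boundary would still need boundary or ghost values, as discussed below):

```python
def biharmonic(U, d):
    """Approximate nabla^4 U by applying the 5-point Laplacian twice.

    Only points at least two grid spaces in from the boundary receive the full
    13-point stencil; values closer to the boundary are not meaningful here.
    """
    return laplacian_5pt(laplacian_5pt(U, d), d)
```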
\n", + "\n", + "The final term is the Jacobian term, ${\\cal J}(\\psi,\n", + "\\nabla^2_h\\psi)$ which, as you might have already guessed, is the one\n", + "that is going to give us the most headaches. To get a feeling for why\n", + "this might be true, go back to ([Rescaled Quasi-Geostropic Equation](#eq:qg-rescaled))  and notice that the only\n", + "nonlinearity arises from this term. Typically, it is the nonlinearity in\n", + "a problem that leads to difficulties in a numerical scheme. Remember the\n", + "formula given for ${\\cal J}$ in the previous section:\n", + "\n", + "\n", + "(Jacobian: Expansion 1)\n", + "$${\\cal J}(a,b) = \\frac{\\partial a}{\\partial x} \\, \\frac{\\partial b}{\\partial y} - \n", + " \\frac{\\partial a}{\\partial y} \\, \\frac{\\partial b}{\\partial x}\n", + " $$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Problem One\n", + "> Apply the standard centered difference formula\n", + " (see Lab 1 if you need to refresh you memory) to get a difference\n", + " approximation to the Jacobian based on ([Jacobian: Expansion 1](#eq:jacob1). You will use this later in\n", + " [Problem Two](#Problem-Two)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We’ve seen before that there is usually more than one way to discretize\n", + "a given expression. This case is no different, and there are many\n", + "possible ways to derive a discrete version of the Jacobian. Two other\n", + "approaches are to apply centered differences to the following equivalent\n", + "forms of the Jacobian: \n", + "
\n", + "(Jacobian: Expansion 2)\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial x} \\, \\left( a \\frac{\\partial b}{\\partial y} \\right) -\n", + " \\frac{\\partial}{\\partial y} \\left( a \\frac{\\partial b}{\\partial x} \\right)\n", + " $$
\n", + "
\n", + "(Jacobian: Expansion 3)\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial y} \\, \\left( b \\frac{\\partial a}{\\partial x} \\right) -\n", + " \\frac{\\partial}{\\partial x} \\left( b \\frac{\\partial a}{\\partial y} \\right)\n", + " $$\n", + "
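However the Jacobian is eventually discretized (Problems One and Two), it helps to have a way of testing a candidate implementation against the exact Jacobian for smooth fields. The sketch below is one possible check; the argument discrete_jacobian is a placeholder for your own routine and is not something provided by the lab.

```python
import numpy as np

def jacobian_check(discrete_jacobian, N=64):
    """Compare discrete_jacobian(a, b, d) with the exact J(a, b) for smooth fields.

    discrete_jacobian should return an array shaped like a, with the
    approximation filled in at the interior points.
    """
    d = 1.0 / N
    x = np.arange(N + 1) * d
    X, Y = np.meshgrid(x, x, indexing="ij")
    a = np.sin(np.pi * X) * np.sin(np.pi * Y)
    b = np.cos(np.pi * X) + Y**2
    # Exact J(a, b) = a_x * b_y - a_y * b_x for these particular fields.
    a_x = np.pi * np.cos(np.pi * X) * np.sin(np.pi * Y)
    a_y = np.pi * np.sin(np.pi * X) * np.cos(np.pi * Y)
    b_x = -np.pi * np.sin(np.pi * X)
    b_y = 2.0 * Y
    exact = a_x * b_y - a_y * b_x
    err = np.abs(discrete_jacobian(a, b, d)[1:-1, 1:-1] - exact[1:-1, 1:-1]).max()
    print(f"N = {N}: max interior error = {err:.2e}")
    return err
```

Doubling N should reduce the reported error by roughly a factor of four for any second-order formula.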
\n", + " \n", + "Each formula leads to a different discrete formula, and we will see in\n", + "[Section Aliasing Error and Nonlinear Instability](#Aliasing-Error-and-Nonlinear-Instability)\n", + " what effect the non-linear term has on\n", + "the discrete approximations and how the use of the different formulas\n", + "affect the behaviour of the numerical scheme. Before moving on, try to\n", + "do the following two quizzes." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Quiz on Jacobian Expansion \\#2\n", + "\n", + "Using second order centered differences, what is the discretization of the second form of the Jacobian given by\n", + "\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial x} \\, \\left( a \\frac{\\partial b}{\\partial y} \\right) -\n", + " \\frac{\\partial}{\\partial y} \\left( a \\frac{\\partial b}{\\partial x} \\right)\n", + " $$\n", + "\n", + "- A: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} \\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- B: $$\\frac 1 {4d^2} \\left[ a_{i+1,j} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- C: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1/2,j} - a_{i-1/2,j} \\right) \\left( b_{i,j+1/2} - b_{i,j-1/2} \\right) - \\left( a_{i,j+1/2} - a_{i,j-1/2} \\right) \\left( b_{i+1/2,j} - b_{i-1/2,j} \\right) \\right]$$\n", + "\n", + "- D: $$\\frac 1 {4d^2} \\left[ b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) - b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) - b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) + b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- E: $$\\frac 1 {4d^2} \\left[ a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j-1} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i-1,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- F: $$\\frac 1 {4d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} \\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- G: $$\\frac 1 {4d^2} \\left[ b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) - b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) - b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) + b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) \\right]$$\n", + " \n", + "In the following, replace 'x' by 'A', 'B', 'C', 'D', 'E', 'F', 'G', or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.jacobian_2(answer = 'x'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Quiz on Jacobian Expansion #3 \n", + "\n", + "Using second order centered differences, what is the discretization of the third form of the Jacobian given by\n", + "\n", + "$$\n", + " {\\cal J}(a,b) = \\frac{\\partial}{\\partial y} \\, \\left( b \\frac{\\partial a}{\\partial x} \\right) -\n", + " \\frac{\\partial}{\\partial x} \\left( b \\frac{\\partial a}{\\partial y} \\right)\n", + " $$\n", + "\n", + "- A: - A: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} 
\\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- B: $$\\frac 1 {4d^2} \\left[ a_{i+1,j} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- C: $$\\frac 1 {d^2} \\left[ \\left( a_{i+1/2,j} - a_{i-1/2,j} \\right) \\left( b_{i,j+1/2} - b_{i,j-1/2} \\right) - \\left( a_{i,j+1/2} - a_{i,j-1/2} \\right) \\left( b_{i+1/2,j} - b_{i-1/2,j} \\right) \\right]$$\n", + "\n", + "- D: $$\\frac 1 {4d^2} \\left[ b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) - b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) - b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) + b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- E: $$\\frac 1 {4d^2} \\left[ a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i+1,j-1} \\right) - a_{i-1,j-1} \\left( b_{i-1,j+1} - b_{i-1,j-1} \\right) - a_{i+1,j+1} \\left( b_{i+1,j+1} - b_{i-1,j+1} \\right) + a_{i-1,j-1} \\left( b_{i+1,j-1} - b_{i-1,j-1} \\right) \\right]$$\n", + "\n", + "- F: $$\\frac 1 {4d^2} \\left[ \\left( a_{i+1,j} - a_{i-1,j} \\right) \\left( b_{i,j+1} - b_{i,j-1} \\right) - \\left( a_{i,j+1} - a_{i,j-1} \\right) \\left( b_{i+1,j} - b_{i-1,j} \\right) \\right]$$\n", + "\n", + "- G: $$\\frac 1 {4d^2} \\left[ b_{i,j+1} \\left( a_{i+1,j+1} - a_{i-1,j+1} \\right) - b_{i,j-1} \\left( a_{i+1,j-1} - a_{i-1,j-1} \\right) - b_{i+1,j} \\left( a_{i+1,j+1} - a_{i+1,j-1} \\right) + b_{i-1,j} \\left( a_{i-1,j+1} - a_{i-1,j-1} \\right) \\right]$$\n", + " \n", + "In the following, replace 'x' by 'A', 'B', 'C', 'D', 'E', 'F', 'G', or 'Hint'" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "print (quiz.jacobian_3(answer = 'x'))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Boundary Conditions\n", + "\n", + "One question that arises immediately when applying the difference\n", + "stencils in [Figure Stencil](#fig:stencil) and [Figure Bi-Laplacian Stencil](#fig:d4stencil)\n", + "is\n", + "\n", + "> *What do we do at the boundary, where at least one of the nodes\n", + "> of the difference stencil lies outside of the domain?*\n", + "\n", + "The answer to this question lies with the *boundary\n", + "conditions* for $\\chi$ and $\\psi$. We already know the boundary\n", + "conditions for $\\psi$ from [Section The Quasi-Geostrophic Model](#The-Quasi-Geostrophic-Model):\n", + "\n", + "Free slip:\n", + "\n", + "> The free slip boundary condition, $\\psi=0$, is applied when $A_h=0$,\n", + " which we can differentiate with respect to time to get the identical\n", + " condition $\\chi=0$. In terms of the discrete unknowns, this\n", + " translates to the requirement that\n", + "> $$\\Psi_{0,j} = \\Psi_{N,j} = \\Psi_{i,0} = \\Psi_{i,N} = 0 \\;\\; \\mbox{ for} \\; i,j = 0,1,\\ldots,N,$$ \n", + " \n", + "> and similarly for $\\chi$. All\n", + " boundary values for $\\chi$ and $\\Psi$ are known, and so we need only\n", + " apply the difference stencils at *interior points* (see\n", + " [Figure Ghost Points](#fig:ghost)). When $A_h=0$, the high-order viscosity\n", + " term is not present, and so the only stencil appearing in the\n", + " discretization is the 5-point formula (the significance of this will\n", + " become clear when we look at no-slip boundary conditions)." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/ghost3.png',width='50%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + "**Figure Ghost Points:**\n", + "The points on the computational grid, which are classified into\n", + " interior, real boundary, and ghost boundary points. The 5- and\n", + " 13-point difference stencils, when overlaid on the grid, demonstrate\n", + " that only the real boundary values are needed for the free-slip\n", + " boundary values (when $A_h=0$), while ghost points must be\n", + " introduced for the no-slip conditions (when $A_h\\neq 0$, and the\n", + " higher order viscosity term is present).\n", + "
\n", + "\n", + "\n", + "No-slip:\n", + "\n", + "> The no-slip conditions appear when $A_h\\neq 0$, and include the free\n", + " slip conditions $\\psi=\\chi=0$ (which we already discussed above),\n", + " and the normal derivative condition $\\nabla\\psi\\cdot\\hat{n}=0$,\n", + " which must be satisfied at all boundary points. It is clear that if\n", + " we apply the standard, second-order centered difference\n", + " approximation to the first derivative, then the difference stencil\n", + " extends *beyond the boundary of the domain and contains at\n", + " least one non-existent point! How can we get around this\n", + " problem?*\n", + "\n", + "> The most straightforward approach (and the one we will use in this\n", + " Lab) is to introduce a set of *fictitious points* or\n", + " *ghost points*,\n", + " \n", + "> $$\\Psi_{-1,j}, \\;\\; \\Psi_{N+1,j}, \\;\\; \\Psi_{i,-1}, \\;\\; \\Psi_{i,N+1}$$\n", + "\n", + "> for $i,j=0,1,2,\\ldots,N+1$, which extend one grid space outside of\n", + " the domain, as shown in [Figure Ghost Points](#fig:ghost). We can then\n", + " discretize the Neumann condition in a straightforward manner. For\n", + " example, consider the point $(0,1)$, pictured in\n", + " [Figure No Slip Boundary Condition](#fig:noslip), at which the discrete version of\n", + " $\\nabla\\psi\\cdot\\hat{n}=0$ is\n", + "\n", + "> $$\\frac{1}{2d} ( \\Psi_{1,1} - \\Psi_{-1,1}, \\Psi_{0,2} - \\Psi_{0,0} ) \\cdot (-1,0) = 0,$$ \n", + " \n", + "> (where $(-1,0)$ is the unit\n", + " outward normal vector), which simplifies to\n", + " \n", + "> $$\\Psi_{-1,1} = \\Psi_{1,1}.$$\n", + "\n", + "> The same can be done for all the\n", + " remaining ghost points: the value of $\\Psi$ at at point outside the\n", + " boundary is given quite simply as the value at the corresponding\n", + " interior point reflected across the boundary.\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/noslip.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure No Slip Boundary Condition:\n", + "The discrete Neumann boundary conditions are discretized using\n", + " ghost points. Here, at point $(0,1)$, the unit outward normal vector\n", + " is $\\hat{n}=(-1,0)$, and the discrete points involved are the four\n", + " points circled in red. The no-slip condition simply states that\n", + " $\\Psi_{-1,1}$ is equal to the interior value $\\Psi_{1,1}$.\n", + "
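A sketch of how the ghost-point values might be filled by reflection before a wide stencil is applied (purely illustrative; the array names are not those of the lab code):

```python
import numpy as np

def fill_ghost_points(Psi):
    """Return Psi on a grid extended by one ring of ghost points, filled by reflection.

    Psi is the (N+1) x (N+1) array of stream function values with Psi = 0 on the
    boundary; the reflection Psi_{-1,j} = Psi_{1,j} (and similarly on the other
    sides) enforces a zero normal derivative to second order.
    """
    n = Psi.shape[0]
    Psi_g = np.zeros((n + 2, n + 2))
    Psi_g[1:-1, 1:-1] = Psi            # interior plus real boundary points
    Psi_g[0, 1:-1] = Psi[1, :]         # ghost column outside x = 0
    Psi_g[-1, 1:-1] = Psi[-2, :]       # ghost column outside x = 1
    Psi_g[1:-1, 0] = Psi[:, 1]         # ghost row outside y = 0
    Psi_g[1:-1, -1] = Psi[:, -2]       # ghost row outside y = 1
    # Corner ghost values are never used by the 13-point stencil, so they stay zero.
    return Psi_g
```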
\n", + "\n", + "Now, remember that when $A_h\\neq 0$, the $\\nabla_h^4\\psi$ term\n", + " appears in the equations, which is discretized as a 13-point\n", + " stencil . Looking at [Figure Ghost Points](#fig:ghost), it is easy to see that\n", + " when the 13-point stencil is applied at points adjacent to the\n", + " boundary (such as $(N-1,N-1)$ in the Figure) it involves not only\n", + " real boundary points, but also ghost boundary points (compare this\n", + " to the 5-point stencil). But, as we just discovered above, the\n", + " presence of the ghost points in the stencil poses no difficulty,\n", + " since these values are known in terms of interior values of $\\Psi$\n", + " using the boundary conditions.\n", + " \n", + "Just as there are many Runge-Kutta schemes, there are many finite difference stencils for the different derivatives.\n", + "For example, one could use a 5-point, $\\times$-shaped stencil for $\\nabla^2\\psi$. The flexibility of\n", + "having several second-order stencils is what makes it possible to determine an energy- and enstrophy-conserving scheme for the Jacobian which we do later.\n", + "\n", + "A good discussion of boundary conditions is given by [McCalpin](#Ref:McCalpin) in his\n", + "QGBOX code documentation, on page 44." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "#### Matrix Form of the Discrete Equations\n", + "\n", + "In order to write the discrete equations  in matrix form, we must first\n", + "write the unknown values $\\chi_{i,j}$ in vector form. The most obvious\n", + "way to do this is to traverse the grid (see\n", + "[Figure Spatial Grid](#Spatial-Grid)), one row at a time, from left to right,\n", + "leaving out the known, zero boundary values, to obtain the ordering:\n", + "$$\n", + " \\vec{\\chi} =\n", + " \\left(\\chi_{1,1},\\chi_{2,1},\\dots,\\chi_{N-1,1},\n", + " \\chi_{1,2},\\chi_{2,2},\\dots,\\chi_{N-1,2}, \\dots, \\right. \n", + " \\left.\\chi_{N-1,N-2},\n", + " \\chi_{1,N-1},\\chi_{2,N-1},\\dots,\\chi_{N-1,N-1}\\right)^T$$\n", + " \n", + "and similarly for $\\vec{F}$. The resulting matrix (with this ordering of\n", + "unknowns) results in a matrix of the form given in\n", + "[Figure Matrix](#fig:matrix)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/matrix.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Matrix: The matrix form for the discrete Laplacian. The 5 diagonals (displayed\n", + "in blue and red) represent the non-zero values in the matrix $-$ all\n", + "other values are zero.\n", + "
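To see this structure concretely, the discrete Laplacian can be assembled as a sparse matrix. The sketch below uses scipy.sparse, which is an assumption of this example rather than something the lab requires:

```python
import scipy.sparse as sp

N = 8                                    # interior unknowns per direction (illustrative)
d = 1.0 / (N + 1)
# 1-D second-difference matrix with chi = 0 (Dirichlet) boundaries.
T = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(N, N))
I = sp.identity(N)
# 2-D Laplacian as a Kronecker sum of 1-D operators.
A = (sp.kron(I, T) + sp.kron(T, I)) / d**2
print(A.shape, A.nnz)                    # (N^2, N^2) with roughly 5*N^2 non-zeros
```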
\n", + "\n", + "The diagonals with the 1’s (pictured in red) contain some zero entries\n", + "due to the boundary condition $u=0$. Notice how similar this matrix\n", + "appears to the *tridiagonal matrix* in the Problems from\n", + "Lab 3, which arose in the discretization of the second derivative in a\n", + "boundary value problem. The only difference here is that the Laplacian\n", + "has an additional second derivative with respect to $y$, which is what\n", + "adds the additional diagonal entries in the matrix.\n", + "\n", + "Before you think about running off and using Gaussian elimination (which\n", + "was reviewed in Lab 3), think about the size of the matrix you would\n", + "have to solve. If $N$ is the number of grid points, then the matrix is\n", + "size $N^2$-by-$N^2$. Consequently, Gaussian elimination will require on\n", + "the order of $N^6$ operations to solve the matrix only once. Even for\n", + "moderate values of $N$, this cost can be prohibitively expensive. For\n", + "example, taking $N=101$ results in a $10000\\times 10000$ system of\n", + "linear equations, for which Gaussian elimination will require on the\n", + "order of $10000^3=10^{12}$ operations! As mentioned in Lab 3, direct\n", + "methods are not appropriate for large sparse systems such as this one. A\n", + "more appropriate choice is an iterative or *relaxation\n", + "scheme*, which is the subject of the next section.." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Solution of the Poisson Equation by Relaxation\n", + "\n", + "One thing to notice about the matrix in [Figure Matrix](#fig:matrix) is that\n", + "it contains many zeros. Direct methods, such as Gaussian elimination\n", + "(GE), are so inefficient for this problem because they operate on all of\n", + "these zero entries (in fact, there are other direct methods that exploit\n", + "the sparse nature of the matrix to reduce the operation count, but none\n", + "are as efficient as the methods we will talk about here).\n", + "\n", + "However, there is another class of solution methods, called\n", + "*iterative methods* (refer to Lab 3) which are natural\n", + "candidates for solving such sparse problems. They can be motivated by\n", + "the fact that since the discrete equations are only approximations of\n", + "the PDE to begin with, *why should we bother computing an exact\n", + "solution to an approximate problem?* Iterative methods are based\n", + "on the notion that one sets up an iterative procedure to compute\n", + "successive approximations to the solution $-$ approximations that get\n", + "closer and closer to the exact solution as the iteration proceeds, but\n", + "never actually reach the exact solution. As long as the iteration\n", + "converges and the approximate solution gets to within some tolerance of\n", + "the exact solution, then we are happy! The cost of a single iterative\n", + "step is designed to depend on only the number of non-zero elements in\n", + "the matrix, and so is considerably cheaper than a GE step. Hence, as\n", + "long as the iteration converges in a “reasonable number” of steps, then\n", + "the iterative scheme will outperform GE." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Iterative methods are also know as *relaxation methods*, of\n", + "which the *Jacobi method* is the simplest. Here are the\n", + "basic steps in the Jacobi iteration (where we’ve dropped the time\n", + "superscript $p$ to simplify the notation):\n", + "\n", + "1. 
Take an initial guess, $\\chi_{i,j}^{(0)}$. Let $n=0$.\n", + "\n", + "2. For each grid point $(i,j)$, compute the *residual\n", + " vector* $$ R_{i,j}^{(n)} = F_{i,j} - \\nabla^2\\chi^{(n)}_{i,j} $$ \n", + " $$ = F_{i,j} - \\frac{1}{d^2} ( \\chi_{i+1,j}^{(n)} + \\chi_{i,j+1}^{(n)} + \\chi_{i-1,j}^{(n)} +\\chi_{i,j-1}^{(n)} - 4 \\chi_{i,j}^{(n)} )$$ \n", + " (which is non-zero unless $\\chi_{i,j}$ is the exact solution).\n", + "\n", + " You should not confuse the relaxation iteration index (superscript\n", + " $\\,^{(n)}$) with the time index (superscript $\\,^p$). Since the\n", + " relaxation iteration is being performed at a single time step, we’ve\n", + " dropped the time superscript for now to simplify notation. Just\n", + " remember that all of the discrete values in the relaxation are\n", + " evaluated at the current time level $p$.\n", + "\n", + "3. “Adjust” $\\chi_{i,j}^{(n)}$, (leaving the other neighbours\n", + " unchanged) so that $R_{i,j}^{(n)}=0$. That is, replace\n", + " $\\chi_{i,j}^{(n)}$ by whatever you need to get a zero residual. This\n", + " replacement turns out to be:\n", + " $$\\chi_{i,j}^{(n+1)} = \\chi_{i,j}^{(n)} - \\frac{d^2}{4} R_{i,j}^{(n)},$$ \n", + " which defines the iteration.\n", + "\n", + "4. Set $n\\leftarrow n+1$, and repeat steps 2 & 3 until the residual is\n", + " less than some tolerance value. In order to measure the size of the\n", + " residual, we use a *relative maximum norm* measure,\n", + " which says\n", + " $$d^2 \\frac{\\|R_{i,j}^{(n)}\\|_\\infty}{\\|\\chi_{i,j}^{(n)}\\|_\\infty} < TOL$$\n", + " where $$\\|R_{i,j}^{(n)}\\|_\\infty = \\max_{i,j} |R_{i,j}^{(n)}|$$ is\n", + " the *max-norm* of $R_{i,j}$, or the maximum value of\n", + " the residual on the grid (there are other error tolerances we could\n", + " use but this is one of the simplest and most effective). Using this\n", + " measure for the error ensures that the residual remains small\n", + " *relative* to the solution, $\\chi_{i,j}$. A typical\n", + " value of the tolerance that might be used is $TOL=10^{-4}$." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are a few **important things** to note about the\n", + "basic relaxation procedure outlined above\n", + "\n", + "- This Jacobi method is the simplest form of relaxation. It requires\n", + " that you have two storage vectors, one for $\\chi_{i,j}^{(n)}$ and\n", + " one for $\\chi_{i,j}^{(n+1)}$.\n", + "\n", + "- The relaxation can be modified by using a single vector to store the\n", + " $\\chi$ values. In this case, as you compute the residual vector and\n", + " update $\\chi$ at each point $(i,j)$, the residual involves some\n", + " $\\chi$ values from the previous iteration and some that have already\n", + " been updated. 
For example, if we traverse the grid by rows (that is,\n", + " loop over $j$ first and then $i$), then the residual is now given by\n", + " $$R_{i,j}^{(n)} = F_{i,j} - \\frac{1}{d^2} ( \\chi_{i+1,j}^{(n)} +\n", + " \\chi_{i,j+1}^{(n)} + \n", + " \\underbrace{\\chi_{i-1,j}^{(n+1)} +\n", + " \\chi_{i,j-1}^{(n+1)}}_{\\mbox{{updated already}}} - 4\n", + " \\chi_{i,j}^{(n)} ),$$ \n", + " (where the $(i,j-1)$ and $(i-1,j)$ points\n", + " have already been updated), and then the solution is updated\n", + " $$\\chi_{i,j}^{(n+1)} = \\chi_{i,j}^{(n)} - \\frac{d^2}{4} R_{i,j}^{(n)}.$$ \n", + " Not only does this relaxation scheme save on\n", + " storage (since only one solution vector is now required), but it\n", + " also converges more rapidly (typically, it takes half the number of\n", + " iterations as Jacobi), which speeds up convergence somewhat, but\n", + " still leaves the cost at the same order as Jacobi, as we can see\n", + " from [Cost of Schemes Table](#tab:cost). This is known as the\n", + " *Gauss-Seidel* relaxation scheme.\n", + " \n", + "- In practice, we actually use a modification of Gauss-Seidel\n", + " $$\\chi_{i,j}^{(n+1)} = \\chi_{i,j}^{(n)} - \\frac{\\mu d^2}{4} R_{i,j}^{(n)}$$ \n", + " where $1<\\mu<2$ is the *relaxation\n", + " parameter*. The resulting scheme is called *successive\n", + " over-relaxation*, or *SOR*, and it improves\n", + " convergence considerably (see [Cost of Schemes Table](#tab:cost).\n", + "\n", + " What happens when $0<\\mu<1$? Or $\\mu>2$? The first case is called\n", + " *under-relaxation*, and is useful for smoothing the\n", + " solution in multigrid methods. The second leads to an iteration that\n", + " never converges.\n", + "\n", + "- *Does the iteration converge?* For the Poisson problem,\n", + " yes, but not in general." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- *How fast does the iteration converge?* and *How\n", + " much does each iteration cost?* The answer to both of these\n", + " questions gives us an idea of the cost of the relaxation procedure …\n", + " \n", + " Assume we have a grid of size $N\\times N$. If we used Gaussian\n", + " elimination to solve this matrix system (with $N^2$ unknowns), we\n", + " would need to perform on the order of $N^6$ operations (you saw this\n", + " in Lab \\#3). One can read in any numerical linear algebra textbook\n", + " ([Strang 1988,](#Ref:Strang) for example), that the number of iterations\n", + " required for Gauss-Seidel and Jacobi is on the order of $N^3$, while\n", + " for SOR it reduces to $N^2$. There is another class of iterative\n", + " methods, called *multigrid methods*, which converge in\n", + " a constant number of iterations (the optimal result)\n", + " \n", + " If you look at the arithmetic operations performed in the the\n", + " relaxation schemes described above, it is clear that a single\n", + " iteration involves on the order of $N^2$ operations (a constant\n", + " number of multiplications for each point).\n", + "\n", + " Putting this information together, the cost of each iterative scheme\n", + " can be compared as in [Cost of Schemes Table](#tab:cost).\n", + " \n", + " \n", + " \n", + "| Method | Order of Cost |\n", + "| :----------------------: | :---------------: |\n", + "| Gaussian Elimination | $N^6$ |\n", + "| Jacobi | $N^5$ |\n", + "| Gauss-Seidel | $N^5$ |\n", + "| SOR | $N^4$ |\n", + "| Multigrid | $N^2$ |\n", + "\n", + "
\n", + " Cost of Schemes Table: Cost of iterative schemes compared to direct methods.
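A minimal numpy sketch of the Jacobi steps listed above (this is not the numlabs implementation; SOR follows by scaling the correction with the relaxation parameter mu, exactly as described in the notes):

```python
import numpy as np

def jacobi(F, d, tol=1.0e-4, max_iter=10000):
    """Solve the discrete Poisson problem lap(chi) = F by Jacobi relaxation.

    F is an (N+1) x (N+1) array of right-hand-side values; chi = 0 is imposed on
    the boundary (free-slip).  A sketch of the algorithm in the notes, not the
    course's own routine.
    """
    chi = np.zeros_like(F, dtype=float)
    for n in range(max_iter):
        # Residual R = F - lap(chi) at the interior points.
        R = np.zeros_like(chi)
        R[1:-1, 1:-1] = F[1:-1, 1:-1] - (
            chi[2:, 1:-1] + chi[:-2, 1:-1] + chi[1:-1, 2:] + chi[1:-1, :-2]
            - 4.0 * chi[1:-1, 1:-1]) / d**2
        # Adjust chi so that the local residual would vanish.
        chi = chi - (d**2 / 4.0) * R
        # Relative max-norm stopping test, d^2 * ||R|| / ||chi|| < tol.
        if d**2 * np.abs(R).max() <= tol * max(np.abs(chi).max(), 1.0e-30):
            break
    return chi
```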
\n", + " \n", + "- Multigrid methods are obviously the best, but are also\n", + " *extremely complicated* … we will stick to the much\n", + " more manageable Jacobi, Gauss-Seidel and SOR schemes.\n", + "\n", + "- There are other methods (called conjugate gradient and capacitance\n", + " matrix methods) which improve on the relaxation methods we’ve seen.\n", + " These won’t be described here." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Temporal Discretization ###\n", + "\n", + "Let us now turn to the time evolution equation for the stream function.\n", + "Supposing that the initial time is $t=0$, then we can approximate the\n", + "solution at the discrete time points $t_p = p\\Delta t$, and write the\n", + "discrete solution as $$\\Psi_{i,j}^p \\approx \\psi(x_i,y_j,t_p).$$ Notice\n", + "that the spatial indices appear as subscripts and the time index as a\n", + "superscript on the discrete approximation $\\chi$.\n", + "\n", + "We can choose any discretization for time that we like, but for the QG\n", + "equation, it is customary (see [Mesinger and Arakawa](#Ref:MesingerArakawa), for\n", + "example) to use a centered time difference to approximate the derivative\n", + "in :\n", + "$$\\frac{\\Psi_{i,j}^{p+1} - \\Psi_{i,j}^{p-1}}{2\\Delta t} = \\chi_{i,j}^p$$\n", + "or, after rearranging,\n", + "\n", + "\n", + "(Leapfrog Eqn)\n", + "$$\n", + "\\Psi_{i,j}^{p+1} = \\Psi_{i,j}^{p-1} + 2\\Delta t \\chi_{i,j}^p\n", + "$$\n", + "This time differencing method is called the\n", + "*leap frog scheme*, and was introduced in Lab 7. A\n", + "pictorial representation of this scheme is given in\n", + "[Figure Leap-Frog Scheme](#fig:leap-frog)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/leapfrog.png',width='40%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Leap-Frog Scheme: A pictorial representation of the “leap-frog” character of the\n", + "time-stepping scheme. The values of $\\chi$ at even time steps are linked\n", + "together with the odd $\\Psi$ values; likewise, values of $\\chi$ at odd\n", + "time steps are linked to the even $\\Psi$ values.\n", + "
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "There are two additional considerations related to the time\n", + "discretization:\n", + "\n", + "- The viscous terms ($\\nabla_h^2\\psi$ and $\\nabla_h^4\\psi$) are\n", + " evaluated at the $p-1$ time points, while all other terms in the\n", + " right hand side are evaluated at $p$ points. The reasoning for this\n", + " is described in [McCalpin’s](#Ref:McCalpin) QGBOX\n", + " documentation [p. 8]:\n", + "\n", + " > Note that the frictional terms are all calculated at the old\n", + " $(n-1)$ time level, and are therefore first-order accurate in\n", + " time. This ’time lagging’ is necessary for linear computational\n", + " stability.\n", + "\n", + "- This second item *will not be implemented in this lab or the\n", + " problems*, but should still be kept in mind …\n", + "\n", + " The leap-frog time-stepping scheme has a disadvantage in that it\n", + " introduces a “computational mode” …  [McCalpin](#Ref:McCalpin) [p. 23]\n", + " describes this as follows:\n", + "\n", + " > *Leap-frog models are plagued by a phenomenon called the\n", + " “computational mode”, in which the odd and even time levels become\n", + " independent. Although a variety of sophisticated techniques have\n", + " been developed for dealing with this problem, McCalpin’s model\n", + " takes a very simplistic approach. Every *narg* time\n", + " steps, adjacent time levels are simply averaged together (where\n", + " *narg*$\\approx 100$ and odd)*\n", + "\n", + " Why don’t we just abandon the leap-frog scheme? Well, [Mesinger and Arakawa](#Ref:MesingerArakawa) [p. 18] make the following observations\n", + " regarding the leap-frog scheme:\n", + "\n", + " - its advantages: simple and second order accurate; neutral within\n", + " the stability range.\n", + "\n", + " - its disadvantages: for non-linear equations, there is a tendency\n", + " for slow amplification of the computational mode.\n", + "\n", + " - the usual method for suppressing the spurious mode is to insert\n", + " a step from a two-level scheme occasionally (or, as McCalpin\n", + " suggests, to occasionally average the solutions at successive\n", + " time steps).\n", + "\n", + " - In Chapter 4, they mention that it is possible to construct\n", + " grids and/or schemes with the same properties as leap-frog and\n", + " yet the computational mode is absent.\n", + "\n", + " The important thing to get from this is that when integrating for\n", + " long times, the computational mode will grow to the point where it\n", + " will pollute the solution, unless one of the above methods is\n", + " implemented. For simplicity, we will not be worrying about this in\n", + " Lab \\#8." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Outline of Solution Procedure ###\n", + "\n", + "Now that we have discretized in both space and time, it is possible to\n", + "outline the basic solution procedure.\n", + "\n", + "1. Assume that at $t=t_p$, we know $\\Psi^0, \\Psi^1, \\dots,\n", + " \\Psi^{p-1}$\n", + "\n", + "2. Calculate $F_{i,j}^p$ for each grid point $(i,j)$ (see\n", + " [Section Right Hand Side](#Right-Hand-Side)). Keep in mind that the viscosity terms\n", + " ($\\nabla_h^2\\psi$ and $\\nabla_h^4\\psi$) are evaluated at time level\n", + " $p-1$, while all other terms are evaluated at time level $p$ (this\n", + " was discussed in [Section Temporal Discretization](#Temporal-Discretization)).\n", + "\n", + "3. 
Solve the ([Discrete $\\chi$ equation](#eq:discrete-chi)) for $\\chi_{i,j}^p$ (the actual\n", + " solution method will be described in [Section Solution of the Poisson Equation by Relaxation](#Solution-of-the-Poisson-Equation-by-Relaxation).\n", + "\n", + "4. Given $\\chi_{i,j}^p$, we can find $\\Psi_{i,j}^{p+1}$ by using the\n", + " ([Leap-frog time stepping scheme](#leapfrog))\n", + "\n", + "5. Let $p \\leftarrow p+1$ and return to step 2." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Notice that step 1 requires a knowledge of two starting values, $\\Psi^0$\n", + "and $\\Psi^1$, at the initial time. An important addition to the\n", + "procedure below is some way to get two starting values for $\\Psi$. Here\n", + "are several alternatives:\n", + "\n", + "- Set $\\Psi^0$ and $\\Psi^1$ both to zero.\n", + "\n", + "- Set $\\Psi^0=0$, and then use a forward Euler step to find $\\Psi^1$.\n", + "\n", + "- Use a predictor-corrector step, like that employed in Lab 7.\n", + "\n", + "### Problem Two\n", + "> Now that you’ve seen how the basic\n", + "numerical scheme works, it’s time to jump into the numerical scheme. The\n", + "code has already been written for the discretization described above,\n", + "with free-slip boundary conditions and the SOR relaxation scheme. The\n", + "code is in **qg.py** and the various functions are:\n", + "\n", + ">**main**\n", + ": the main routine, contains the time-stepping and the output.\n", + "\n", + ">**param()**\n", + ": sets the physical parameters of the system.\n", + "\n", + ">**numer\\_init()**\n", + ": sets the numerical parameters.\n", + "\n", + ">**vis(psi, nx, ny)**\n", + ": calculates the second order ($\\nabla^2$) viscosity term (not\n", + " leap-frogged).\n", + "\n", + ">**wind(psi, nx, ny)**\n", + ": calculates the the wind term.\n", + "\n", + ">**mybeta(psi, nx, ny)**\n", + ": calculates the beta term\n", + "\n", + ">**jac(psi, vis, nx, ny)**\n", + ": calculate the Jacobian term. (Arakawa Jacobian given here).\n", + "\n", + ">**chi(psi, vis_curr, vis_prev, chi_prev, nx, ny, dx, r_coeff, tol, max_count, epsilon, wind_par, vis_par)**\n", + ": calculates $\\chi$ using a call to relax\n", + "\n", + ">**relax(rhs, chi_prev, dx, nx, ny, r_coeff, tol, max_count)**\n", + ": does the relaxation.\n", + "\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "> Your task in this problem is to program the “straightforward”\n", + "discretization of the Jacobian term, using [(Jacobian: Expansion 1)](#eq:jacob1), that you derived in\n", + "[Problem One](#Problem-One). The only change this involves is\n", + "inserting the code into the function **jac**. Once\n", + "finished, run the code. The parameter functions **param**\n", + "**init\\_numer** provide some sample parameter values for you to\n", + "execute the code with. Try these input values and observe what happens\n", + "in the solution. Choose one of the physical parameters to vary. Does\n", + "changing the parameter have any effect on the solution? in what way?\n", + "\n", + "> Hand in the code for the Jacobian, and a couple of plots demonstrating\n", + "the solution as a function of parameter variation. Describe your results\n", + "and make sure to provide parameter values to accompany your explanations\n", + "and plots.\n", + "\n", + ">If the solution is unstable, check your CFL condition. The relevant\n", + "waves are Rossby waves with wave speed: $$c=\\beta k^{-2}$$ where $k$ is\n", + "the wave-number. 
The maximum wave speed is for the longest wave,\n", + "$k=\\pi/b$ where $b$ is the size fo your domain.\n", + "\n", + "> If the code is still unstable, even though the CFL is satisfied, see\n", + "[Section Aliasing Error and Nonlinear Instability](#Aliasing-Error-and-Nonlinear-Instability). The solution is nonlinear unstable.\n", + "Switch to the Arakawa Jacobian for stability." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Three\n", + ">The code provided for [Problem Two](#Problem-Two) implements the SOR relaxation\n", + "scheme. Your job in this problem is to modify the relaxation code to\n", + "perform a Jacobi iteration.\n", + "\n", + ">Hand in a comparison of the two methods, in tabular form. Use two\n", + "different relaxation parameters for the SOR scheme. (Include a list of\n", + "the physical and numerical parameter you are using). Also submit your\n", + "code for the Jacobi relaxation scheme." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Four\n", + "> Modify the code to implement the no-slip boundary\n", + "conditions." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Problem Five\n", + ">The code you’ve been working with so far uses the\n", + "simplest possible type of starting values for the unknown stream\n", + "function: both are set to zero. If you’re keen to play around with the\n", + "code, you might want to try modifying the SOR code for the two other\n", + "methods of computing starting values: using a forward Euler step, or a\n", + "predictor-corrector step (see [Section Outline of Solution Procedure](#Outline-of-Solution-Procedure))." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Aliasing Error and Nonlinear Instability\n", + "\n", + "In [Problem Two](#Problem-Two), you encountered an example\n", + "of the instability that can occur when computing numerical solutions to\n", + "some *nonlinear problems*, of which the QG equations is\n", + "just one example. This effect has in fact been known for quite some\n", + "time. Early numerical experiments by [N. Phillips in 1956](#Ref:Phillips)\n", + "exploded after approximately 30 days of integration due to nonlinear\n", + "instability. He used the straightforward centered difference formula for\n", + "as you did.\n", + "\n", + "It is important to realize that this instability does not occur in the\n", + "physical system modeled by the equations of motion. Rather is an\n", + "artifact of the discretization process, something known as\n", + "*aliasing error*. Aliasing error can be best understood by\n", + "thinking in terms of decomposing the solution into modes. In brief,\n", + "aliasing error arises in non-linear problems when a numerical scheme\n", + "amplifies the high-wavenumber modes in the solution, which corresponds\n", + "physically to a spurious addition of energy into the system. Regardless\n", + "of how much the grid spacing is reduced, the resulting computation will\n", + "be corrupted, since a significant amount of energy is present in the\n", + "smallest resolvable scales of motion. This doesn’t happen for every\n", + "non-linear problem or every difference scheme, but is an issue that one\n", + "who is using numerical codes must be aware of." 
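+    "\n",
+    "A tiny illustration of aliasing (an extra example, not part of the lab code): on a grid with spacing $\\Delta x$, any mode with wavenumber above $k_{max}=\\pi/\\Delta x$ takes exactly the same values at the grid points as some lower-wavenumber mode.\n",
+    "\n",
+    "```python\n",
+    "import numpy as np\n",
+    "\n",
+    "dx = 0.1\n",
+    "x = np.arange(0, 1, dx)        # 10 grid points\n",
+    "k_max = np.pi / dx             # highest resolvable wavenumber\n",
+    "k_high = 1.5 * k_max           # a mode the grid cannot resolve ...\n",
+    "k_alias = k_high - 2 * k_max   # ... is aliased to this wavenumber\n",
+    "print(np.allclose(np.sin(k_high * x), np.sin(k_alias * x)))   # True\n",
+    "```"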
+ ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Example One ###\n", + "\n", + ">Before moving on to how we can handle the\n", + "instability in our discretization of the QG equations, you should try\n", + "out the following demo on aliasing error. It is taken from an example in\n", + "[Mesinger and Arakawa](#Ref:MesingerArakawa) [p. 35ff.], based on the simplest\n", + "of non-linear PDE’s, the advection equation:\n", + "$$\\frac{du}{dt}+u\\frac{du}{dx} = 0.$$ \n", + "If we decompose the solution into\n", + "Fourier mode, and consider a single mode with wavenumber $k$,\n", + "$$u(x) = \\sin{kx}$$ \n", + "then the solution will contain additional modes, due\n", + "to the non-linear term, and given by\n", + "$$u \\frac{du}{dx} = k \\sin{kx} \\cos{kx} =\\frac{1}{2}k \\sin{2kx}.$$\n", + "\n", + ">With this as an introduction, keep the following in mind while going\n", + "through the demo:\n", + "\n", + ">- on a computational grid with spacing $\\Delta x$, the discrete\n", + " versions of the modes can only be resolved up to a maximum\n", + " wavenumber, $k_{max}=\\frac{\\pi}{\\Delta x}$.\n", + "\n", + ">- even if we start with modes that are resolvable on the grid, the\n", + " non-linear term introduces modes with a higher wavenumber, which may\n", + " it not be possible to resolve. These modes, when evaluated at\n", + " discrete points, appear as modes with lower wavenumber; that is,\n", + " they are *aliased* to the lower modes (this becomes\n", + " evident in the demo as the wavenumber is increased …).\n", + "\n", + ">- not only does aliasing occur, but for this problem, these additional\n", + " modes are *amplified* by a factor of $\\frac{1}{2}k$.\n", + " This is the source of the *aliasing error* – such\n", + " amplified modes will grow in time, no matter how small the time step\n", + " taken, and will pollute the computations." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The previous example is obviously a much simpler non-linearity than that\n", + "of the QG equations, but the net effect is the same. The obvious\n", + "question we might ask now is:\n", + "\n", + "> *Can this problem be averted for our discretization of the QG\n", + "> equations with the leap-frog time-stepping scheme, or do we have to\n", + "> abandon it?*\n", + "\n", + "There are several possible solutions, presented by [Mesinger and Arakawa](#Ref:MesingerArakawa) [p. 37], summarized here, including\n", + "\n", + "- filter out the top half or third of the wavenumbers, so that the\n", + " aliasing is eliminated.\n", + "\n", + "- use a differencing scheme that has a built-in damping of the\n", + " shortest wavelengths.\n", + "\n", + "- the most elegant, and one that allows us to continue using the\n", + " leap-frog time stepping scheme, is one suggested by Arakawa, is one\n", + " that aims to eliminate the spurious inflow of energy into the system\n", + " by developing a discretization of the Jacobian term that satisfies\n", + " discrete analogues of the conservation properties for average\n", + " vorticity, enstrophy and kinetic energy.\n", + " \n", + "This third approach will be the one we take here. The details can be\n", + "found in the Mesinger-Arakawa paper, and are not essential here; the\n", + "important point is that there is a discretization of the Jacobian that\n", + "avoids the instability problem arising from aliasing error. 
This\n", + "discrete Jacobian is called the *Arakawa Jacobian* and is\n", + "obtained by averaging the discrete Jacobians obtained by using standard\n", + "centered differences on the formulae [(Jacobian: Expansion 1)](#eq:jacob1), [(Jacobian: Expansion 2)](#eq:jacob2) and [(Jacobian: Expansion 3)](#eq:jacob3) (see\n", + "[Problem One](#Problem-One) and the two quizzes following it in\n", + "[Section Right Hand Side](#Right-Hand-Side).\n", + "\n", + "You will not be required to derive or code the Arakawa Jacobian (the\n", + "details are messy!), and the code will be provided for you for all the\n", + "problems following [Problem Two](#Problem-Two)." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Classical Solutions ##\n", + "\n", + "[Bryan (1963)](#Ref:Bryan) and [Veronis (1966)](#Ref:Veronis)\n", + "\n", + "### Problem Six ###\n", + "> Using the SOR code from\n", + "Problems [Three](#Problem-Three) (free-slip BC’s) and [Four](#Problem-Four)\n", + "(no-slip BC’s), try to reproduce the classical results of Bryan and\n", + "Veronis." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Mathematical Notes ##\n", + "\n", + "### Definition of the Beta-plane ###\n", + "\n", + "A $\\beta$-plane is a plane approximation of a curved section of the\n", + "Earth’s surface, where the Coriolis parameter, $f(y)$, can be written\n", + "roughly as a linear function of $y$ $$f(y) = f_0 + \\beta y$$ for $f_0$\n", + "and $\\beta$ some constants. The motivation behind this approximation\n", + "follows." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/coriolis.png',width='30%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Rotating Globe:\n", + "A depiction of the earth and its angular frequency of rotation,\n", + "$\\Omega$, the local planetary vorticity vector in blue, and the Coriolis\n", + "parameter, $f_0$, at a latitude of $\\theta_0$.\n", + "
\n", + "\n", + "Consider a globe (the earth) which is rotating with angular frequency\n", + "$\\Omega$ (see [Figure Rotating Globe](#fig:globe)),\n", + "and assume that the patch of ocean under consideration is at latitude\n", + "$\\theta$. The most important component of the Coriolis force is the\n", + "local vertical (see the in [Figure Rotating Globe](#fig:globe)), which is defined in\n", + "terms of the Coriolis parameter, $f$, to be $$f/2 = \\Omega \\sin\\theta.$$\n", + "This expression may be simplified somewhat by ignoring curvature effects\n", + "and approximating the earth’s surface at this point by a plane $-$ if\n", + "the plane is located near the middle latitudes, then this is a good\n", + "approximation. If $\\theta_0$ is the latitude at the center point of the\n", + "plane, and $R$ is the radius of the earth (see\n", + "[Figure Rotating Globe](#fig:globe)), then we can apply trigonometric ratios to\n", + "obtain the following expression on the plane:\n", + "$$f = \\underbrace{2\\Omega\\sin\\theta_0}_{f_0} +\n", + " \\underbrace{\\frac{2\\Omega\\cos\\theta_0}{R}}_\\beta \\, y$$ \n", + "Not surprisingly, this plane is called a *mid-latitude\n", + "$\\beta$-plane*." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "Image(filename='images/beta-plane.png',width='30%')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "
\n", + " Figure Beta-plane:\n", + "The $\\beta$-plane approximation, with points on the plane located\n", + "along the local $y$-axis. The Coriolis parameter, $f$, at any latitude\n", + "$\\theta$, can be written in terms of $y$ using trigonometric\n", + "ratios.
" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Simplification of the QG Model Equations ##\n", + "\n", + "The first approximation we will make eliminates several of the\n", + "non-linear terms in the set of equations: ([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)). A common simplification that is made in\n", + "this type of flow is the *quasi-geostrophic* (QG)\n", + "approximation, where the horizontal pressure gradient and horizontal\n", + "components of the Coriolis force are matched:\n", + "$$fv \\approx \\frac{1}{\\rho} \\, \\frac {\\partial p}{\\partial x}, \\, \\, fu \\approx - \\,\\frac{1}{\\rho}\n", + "\\, \\frac{\\partial p}{\\partial y} .$$ \n", + "Remembering that the fluid is homogeneous (the density is\n", + "constant), ([Continuity Eqn](#eq:continuity)) implies\n", + "$$\\frac{\\partial^2 p}{\\partial x\\partial z} = 0, \\, \\, \\frac{\\partial^2 p}{\\partial y\\partial z} = 0.$$\n", + "We can then differentiate the QG balance equations to obtain\n", + "$$\\frac{\\partial v}{\\partial z} \\approx 0, \\, \\, \\frac{\\partial u}{\\partial z} \\approx 0.$$ \n", + "Therefore, the terms\n", + "$w \\, \\partial u/\\partial z$ and $w \\, \\partial v/\\partial z$ can be neglected in ([(X-Momentum Eqn)](#eq:xmom)) and\n", + "([(Y-Momentum Eqn)](#eq:ymom)).\n", + "\n", + "The next simplification is to eliminate the pressure terms in\n", + "([(X-Momentum Eqn)](#eq:xmom)) and\n", + "([(Y-Momentum Eqn)](#eq:ymom)) by cross-differentiating. If we define\n", + "the vorticity $$\\zeta = \\partial v/\\partial x - \\partial u/\\partial y$$ then we can\n", + "cross-differentiate the two momentum equations and replace them with a\n", + "single equation in $\\zeta$:\n", + "$$\\frac{\\partial \\zeta}{\\partial t} + u \\frac{\\partial \\zeta}{\\partial x} + v \\frac{\\partial \\zeta}{\\partial y} + v\\beta + (\\zeta+f)(\\frac {\\partial u}{\\partial x}+\\frac{\\partial v}{\\partial y}) =\n", + "A_v \\, \\frac{\\partial^2 \\zeta}{\\partial z^2} + A_h \\, \\nabla_h^2 \\zeta,$$ \n", + "where\n", + "$\\beta \\equiv df/dy$. Notice the change in notation for derivatives,\n", + "from $\\nabla$ to $\\nabla_h$: this indicates that derivative now appear\n", + "only with respect to the “horizontal” coordinates, $x$ and $y$, since\n", + "the $z$-dependence in the solution has been eliminated.\n", + "\n", + "The third approximation we will make is to assume that vorticity effects\n", + "are dominated by the Coriolis force, or that $|\\zeta| \\ll f$. 
Using\n", + "this, along with the ([Continuity Eqn](#eq:continuity)) implies that\n", + "\n", + "\n", + "(Vorticity Eqn)\n", + "$$\\frac{\\partial \\zeta}{\\partial t} + u \\frac {\\partial \\zeta}{\\partial x} + v \\frac{\\partial \\zeta}{\\partial y} + v \\beta -f \\, \\frac{\\partial w}{\\partial z} = A_v \\,\n", + "\\frac{\\partial^2 \\zeta}{\\partial z^2} + A_h \\, \\nabla_h^2 \\zeta .$$ \n", + "The reason for\n", + "making this simplification may not be obvious now but it is a good\n", + "approximation in flows in the ocean and, as we will see next, it allows\n", + "us to eliminate the Coriolis term.\n", + "\n", + "The final sequence of simplifications eliminate the $z$-dependence in\n", + "the problem by integrating ([Vorticity Eqn](#eq:diff)) in the vertical direction and using boundary\n", + "conditions.\n", + "\n", + "The top 500 metres of the ocean tend to act as a single slab-like layer.\n", + "The effect of stratification and rotation cause mainly horizontal\n", + "motion. To first order, the upper layers are approximately averaged flow\n", + "(while to the next order, surface and deep water move in opposition with\n", + "deep flow much weaker). Consequently, our averaging over depth takes\n", + "into account this “first order” approximation embodying the horizontal\n", + "(planar) motion, and ignoring the weaker (higher order) effects." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "First, recognize that the vertical component of velocity on both the top\n", + "and bottom surfaces should be zero: \n", + "$$w = 0 \\;\\;\\; \\mbox{at $z=0$}$$\n", + "$$w = 0 \\;\\;\\; \\mbox{at $z=-H$}$$ \n", + "Notice that the in second\n", + "condition we’ve also assumed that the water surface is approximately\n", + "flat $-$ this is commonly known as the *rigid lid\n", + "approximation*. Integrate the differential equation ([Vorticity Eqn](#eq:diff)) with respect\n", + "to $z$, applying the above boundary conditions, and using the fact that\n", + "$u$ and $v$ (and therefore also $\\zeta$) are independent of $z$,\n", + "\n", + "\n", + "(Depth-Integrated Vorticity)\n", + "$$\n", + " \\frac{1}{H} \\int_{-H}^0 \\mbox{(Vorticity Eqn)} dz \\Longrightarrow \n", + " \\frac {\\partial \\zeta}{\\partial t} + u \\frac {\\partial \\zeta}{\\partial x} + v \\frac {\\partial \\zeta}{\\partial y} + v\\beta \n", + " = \\frac{1}{H} \\, \\left( \\left. A_v \\, \n", + " \\frac{\\partial \\zeta}{\\partial z} \\right|_{z=0} - \\left. A_v \\, \n", + " \\frac{\\partial \\zeta}{\\partial z} \\right|_{z=-H} \\right) \\, + A_h \\, \\nabla_h^2 \\zeta\n", + "$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The two boundary terms on the right hand side can be rewritten in terms\n", + "of known information if we apply two additional boundary conditions: the\n", + "first, that the given wind stress on the ocean surface,\n", + "$$\\vec{\\tau}(x,y) = (\\tau_1,\\tau_2) \\;\\;\\mbox{at }\\;\\; z=0,$$ \n", + "can be written as\n", + "$$\\rho A_v \\left( \\frac{\\partial u}{\\partial z} , \\frac{\\partial v}{\\partial z} \\right) = \\left( \\tau_1 , \\tau_2\n", + " \\right)$$ \n", + "which, after differentiating, leads to\n", + " \n", + "\n", + "(Stress Boundary Condition)\n", + "$$\\frac{1}{H} \\, A_v \\, \\left. 
\\frac{\\partial \\zeta}{\\partial z} \\right|_{z=0} = \\frac{1}{\\rho H} \\,\n", + " \\nabla_h \\times \\tau \\,;\n", + "$$ \n", + "and, the second, that the *Ekman\n", + "layer* along the bottom of the ocean, $z=-H$, generates Ekman\n", + "pumping which obeys the following relationship:\n", + "\n", + "\n", + "(Ekman Boundary Condition)\n", + "$$\\frac{1}{H} \\, A_v \\, \\left. \\frac{\\partial \\zeta}{\\partial z} \\right|_{z=-H} =\n", + " \\; \\kappa \\zeta, \n", + "$$ \n", + "where the *Ekman number*,\n", + "$\\kappa$, is defined by\n", + "$$\\kappa \\equiv \\frac{1}{H} \\left( \\frac{A_v f}{2} \\right)^{1/2}.$$\n", + "Using ([Stress Boundary Condition](#eq:stressbc)) and ([Ekman Boundary Condition](#ekmanbc)) to replace the boundary terms in ([Depth-Integrated Vorticity](#vort-depth-integ)), we get the following\n", + "equation for the vorticity:\n", + "$$\\frac{\\partial \\zeta}{\\partial t} + u \\frac{\\partial \\zeta}{\\partial x} + v \\frac{\\partial \\zeta}{\\partial y} + v \\beta = \\frac{1}{\\rho H} \\, \\nabla_h \n", + "\\times \\tau - \\kappa \\zeta + A_h \\, \\nabla_h^2 \\zeta.$$\n", + "\n", + "The next and final step may not seem at first to be much of a\n", + "simplification, but it is essential in order to derive a differential\n", + "equation that can be easily solved numerically. Integrate ([Continuity Eqn](#eq:continuity)) with respect\n", + "to $z$ in order to obtain $$\\frac {\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} = 0,$$ after which we can\n", + "introduce a *stream function*, $\\psi$, defined in terms of\n", + "the velocity as\n", + "$$u = - \\, \\frac{\\partial \\psi}{\\partial y} \\, , \\, v = \\frac{\\partial \\psi}{\\partial x}.$$ \n", + "The stream function satisfies this equation exactly, and we can write the\n", + "vorticity as $$\\zeta = \\nabla_h^2 \\psi,$$ which then allows us to write\n", + "both the velocity and vorticity in terms of a single variable, $\\psi$.\n", + "\n", + "After substituting into the vorticity equation, we are left with a\n", + "single equation in the unknown stream function. \n", + "$$ \\frac{\\partial}{\\partial t} \\, \\nabla_h^2 \\psi + {\\cal J} \\left( \\psi, \\nabla_h^2 \\psi \\right) + \\beta \\, \\frac {\\partial \\psi}{\\partial x} = \\frac{-1}{\\rho H} \\, \\nabla_h \\times \\tau - \\, \\kappa \\, \\nabla_h^2 \\psi + A_h \\, \\nabla_h^4 \\psi $$\n", + "where\n", + "$${\\cal J} (a,b) = \\frac{\\partial a}{\\partial x} \\, \\frac{\\partial b}{\\partial y} - \\frac{\\partial a}{\\partial y} \\, \\frac{\\partial b}{\\partial x}$$ \n", + "is called the *Jacobian* operator." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The original system ([X-Momentum Eqn](#eq:xmom)), ([Y-Momentum Eqn](#eq:ymom)), ([Hydrostatic Eqn](#eq:hydrostatic)) and ([Continuity Eqn](#eq:continuity)) was a system of four non-linear PDE’s, in four\n", + "independent variables (the three velocities and the pressure), each of\n", + "which depend on the three spatial coordinates. Now let us review the\n", + "approximations made above, and their effects on this system:\n", + "\n", + "1. After applying the QG approximation and homogeneity, two of the\n", + " non-linear terms were eliminated from the momentum equations, so\n", + " that the vertical velocity, $w$, no longer appears.\n", + "\n", + "2. By introducing the vorticity, $\\zeta$, the pressure was eliminated,\n", + " and the two momentum equations to be rewritten as a single equation\n", + " in $\\zeta$ and the velocities.\n", + "\n", + "3. 
Some additional terms were eliminated by assuming that Coriolis\n", + " effects dominate vorticity, and applying the continuity condition.\n", + "\n", + "4. Integrating over the vertical extent of the ocean, and applying\n", + " boundary conditions eliminated the $z$-dependence in the problem.\n", + "\n", + "5. The final step consists of writing the unknown vorticity and\n", + " velocities in terms of the single unknown stream function, $\\psi$.\n", + "\n", + "It is evident that the final equation is considerably simpler: it is a\n", + "single, non-linear PDE for the unknown stream function, $\\psi$, which is\n", + "a function of only two independent variables. As we will see in the next\n", + "section, this equation is of a very common type, for which simple and\n", + "efficient numerical techniques are available." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Glossary ##\n", + "\n", + "- **advection:** A property or quantity transferred by the flow of a fluid is said to be “advected” by the flow.\n", + "aliasing error: In a numerical scheme, this is the phenomenon that occurs when a grid is not fine enough to resolve the high modes in a solution. These high waveumbers consequently appear as lower modes in the solution due to aliasing. If the scheme is such that these high wavenumber modes are amplified, then the aliased modes can lead to a significant error in the computed solution.\n", + "- **$\\beta$-plane:** A $\\beta$-plane is a plane approximation of a curved section of the Earth’s surface, where the Coriolis parameter can be written roughly as a linear function.\n", + "- **continuity equation:** The equation that describes mass conservation in a fluid, ${\\partial \\rho}/{\\partial t} + \\nabla \\cdot (\\rho \\vec u) = 0$\n", + "- **Coriolis force:** An additional force experienced by an observer positioned in a rotating frame of reference. If $\\Omega$ is the angular velocity of the rotating frame, and $\\vec u$ is the velocity of an object observed within the rotating frame, then the Coriolis force, $\\Omega \\times \\vec u$, appears as a force tending to deflect the moving object to the right.\n", + "- **Coriolis parameter:** The component of the planetary vorticity which is normal to the earth’s surface, usually denoted by f.\n", + "- **difference stencil:** A convenient notation for representing difference approximation formula for derivatives. \n", + "- **Ekman layer:** The frictional layer in a geophysical fluid flow field which appears on a rigid surface perpendicular to the rotation vector.\n", + "- **Gauss-Seidel relaxation:** One of a class of iterative schemes for solving systems of linear equations. See Lab 8 for a complete discussion.\n", + "- **homogeneous fluid:** A fluid with constant density. Even though the density of ocean water varies with depth, it is often assumed homogeneous in order to simplify the equations of motion.\n", + "- **hydrostatic balance:** A balance, in the vertical direction, between the vertical pressure gradient and the buoyancy force. The pressure difference between any two points on a vertical line is assumed to depend only on the weight of the fluid between the points, as if the fluid were at rest, even though it is actually in motion. This approximation leads to a simplification in the equations of fluid flow, by replacing the vertical momentum equation.\n", + "- **incompressible fluid:** A fluid for which changes in the density with pressure are negligible. 
For a fluid with velocity field, $\\vec u$, this is expressed by the equation $\\nabla \\cdot \\vec u = 0$. This equation states that the local increase of density with time must be balanced by the divergence of the mass flux.\n", + "- **Jacobi relaxation:** The simplest of the iterative methods for solving linear systems of equations. See Lab 8 for a complete discussion.\n", + "- **momentum equation(s):** The equations representing Newton’s second law of motion for a fluid. There is one momentum equation for each component of the velocity.\n", + "- **over-relaxation:** Within a relaxation scheme, this refers to the use of a relaxation parameter $\\mu > 1$. It accelerates the standard Gauss-Seidel relaxation by forcing the iterates to move closer to the actual solution." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- **Poisson equation:** The partial differential equation $\\nabla^2 u = f$ or, written two dimensions, ${\\partial^2 u}/{\\partial x^2} + {\\partial^2 u}/{\\partial y^2} =f(x,y)$.\n", + "- **QG:** abbreviation for quasi-geostrophic.\n", + "- **quasi-geostrophic balance:** Approximate balance between the pressure gradient and the Coriolis Force.\n", + "- **relaxation:** A term that applies to a class of iterative schemes for solving large systems of equations. The advantage to these schemes for sparse matrices (compared to direct schemes such as Gaussian elimination) is that they operate only on the non-zero entries of the matrix. For a description of relaxation methods, see Lab 8." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "- **rigid lid approximation:** Assumption that the water surface deflection is negligible in the continuity equation (or conservation of volume equation)\n", + "- **SOR:** see successive over-relaxation.\n", + "- **sparse system:** A system of linear equations whose matrix representation has a large percentage of its entries equal to zero.\n", + "- **stream function:** Incompressible, two-dimensional flows with velocity field $(u,v)$, may be described by a stream function, $\\psi(x, y)$, which satisfies $u = −{\\partial \\psi}/{\\partial y}, v = {\\partial \\psi}/{\\partial x}$. These equations are a consequence of the incompressibility condition.\n", + "- **successive over-relaxation:** An iterative method for solving large systems of linear equations. See Lab 8 for a complete discussion.\n", + "- **under-relaxation:** Within a relaxation scheme, this refers to the use of a relaxation parameter $\\mu < 1$. It is not appropriate for solving systems of equations directly, but does have some application to multigrid methods.\n", + "- **vorticity:** Defined to be the curl of the velocity field, $\\zeta = \\nabla \\times \\vec u$. In geophysical flows, where the Earth is a rotating frame of reference, the vorticity can be considered as the sum of a relative vorticity (the curl of the velocity in the nonrotating frame) and the planetary vorticity, $2 \\Omega$. For these large-scale flows, vorticity is almost always present, and the planetary vorticity dominates." + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## References ##\n", + "\n", + "
\n", + "Arakawa, A. and V. R. Lamb, 1981: A potential enstrophy and energy conserving scheme for the shallow water equations. Monthly Weather Review, 109, 18–36.\n", + "
\n", + "
\n", + "Bryan, K., 1963: A numerical investigation of a non-linear model of a wind-driven ocean. Journal of Atmospheric Science, 20, 594–606
\n", + "
\n", + "On the adjustment of azimuthally perturbed vortices. Journal of Geophysical Research, 92, 8213–8225.\n", + "
\n", + "
\n", + "Mesinger, F. and A. Arakawa, 1976: Numerical Methods Used in Atmospheric Models,GARP Publications Series No.~17, Global Atmospheric Research Programme.\n", + "
\n", + "
\n", + "Pedlosky, J., 1987: Geophysical Fluid Dynamics. Springer-Verlag, New York, 2nd edition.Pond, \n", + "
\n", + "
\n", + "Phillips, N. A., 1956: The general circulation of the atmosphere: A numerical experiment.\n", + "Quarterly Journal of the Royal Meteorological Society, 82, 123–164.\n", + "
\n", + "
\n", + "Strang, G., 1988: Linear Algebra and its Applications. Harcourt Brace Jovanovich, San Diego,\n", + "CA, 2nd edition.\n", + "
\n", + "
\n", + "Veronis, G., 1966: Wind-driven ocean circulation – Part 2. Numerical solutions of the non- linear problem. Deep Sea Research, 13, 31–55.\n", + "
" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "cell_metadata_filter": "all", + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "latex_envs": { + "LaTeX_envs_menu_present": true, + "autoclose": false, + "autocomplete": true, + "bibliofile": "biblio.bib", + "cite_by": "apalike", + "current_citInitial": 1, + "eqLabelWithNumbers": true, + "eqNumInitial": 1, + "hotkeys": { + "equation": "meta-9" + }, + "labels_anchors": false, + "latex_user_defs": false, + "report_style_numbering": false, + "user_envs_cfg": false + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/notebooks/lab9/01-lab9.html b/notebooks/lab9/01-lab9.html new file mode 100644 index 0000000..e9df441 --- /dev/null +++ b/notebooks/lab9/01-lab9.html @@ -0,0 +1,899 @@ + + + + + + + + Laboratory 9: Fast Fourier Transforms — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Laboratory 9: Fast Fourier Transforms

+
+

Objectives

+

In this lab, you will explore Fast Fourier Transforms as a method of analysing and filtering data. The goal is to gain a better understanding of how Fourier transforms can be used to analyse the power spectral density of data (how much power there is in different frequencies) and for filtering data (removing certain frequencies from the data, whether that’s trying to remove noise, or isolating some frequencies of interest).

+

Specifically you will be able to:

+
    +
  • understand how sampling frequency can impact the frequencies you can detect in data

  • +
  • use Fourier transforms to analyse atmospheric wind measurements and calculate and plot power spectral densities

  • +
  • use Fourier transforms to filter data by removing particular frequencies

  • +
+
+
+

Introduction

+

This lab introduces the use of the fast Fourier transform for estimation of the power spectral density and for simple filtering. If you need a refresher or are learning about Fourier transforms for the first time, we recommend reading Newman Chapter 7. For a description of the Fast Fourier transform, see Stull Section 8.4 and Jake VanderPlas’s blog entry. Another good resource is Numerical Recipes Chapter 12.

+
+
Before running this lab you will need to:
+
1. Install netCDF4 by running:
+
+
conda install netCDF4
+
+
+

after you have activated your numeric_2024 conda environment

+
    +
  1. download some data by running the cell below

  2. +
+
+
[ ]:
+
+
+
# Download some data that we will need:
+from matplotlib import pyplot as plt
+import urllib
+import os
+filelist=['miami_tower.npz','a17.nc','aircraft.npz']
+data_download=True
+if data_download:
+    for the_file in filelist:
+        url='http://clouds.eos.ubc.ca/~phil/docs/atsc500/data/{}'.format(the_file)
+        urllib.request.urlretrieve(url,the_file)
+        print("download {}: size is {:6.2g} Mbytes".format(the_file,os.path.getsize(the_file)*1.e-6))
+
+
+
+
+
[ ]:
+
+
+
# import required packages
+import numpy as np
+import matplotlib.pyplot as plt
+plt.style.use('ggplot')
+
+
+
+
+
+

A simple transform

+

To get started assume that there is a pure tone – a cosine wave oscillating at a frequency of 1 Hz. Next assume that we sample that 1 Hz wave at a sampling rate of 5 Hz i.e. 5 times a second

+
+
[ ]:
+
+
+
%matplotlib inline
+#
+# create a cosine wave that oscillates 20 times in 20 seconds
+# sampled at 5 Hz, so there are 20*5 = 100 measurements in 20 seconds
+#
+deltaT=0.2
+ticks = np.arange(0,20,deltaT)
+#
+#20 cycles in 20 seconds, each cycle goes through 2*pi radians
+#
+onehz=np.cos(2.*np.pi*ticks)
+fig,ax = plt.subplots(1,1,figsize=(8,6))
+ax.plot(ticks,onehz)
+ax.set_title('one hz wave sampled at 5 Hz')
+out=ax.set_xlabel('time (seconds)')
+
+
+
+

Repeat, but for a 2 Hz wave

+
+
[ ]:
+
+
+
deltaT=0.2
+ticks = np.arange(0,20,deltaT)
+#
+#40 cycles in 20 seconds, each cycle goes through 2*pi radians
+#
+twohz=np.cos(2.*2.*np.pi*ticks)
+fig,ax = plt.subplots(1,1,figsize=(8,6))
+ax.plot(ticks,twohz)
+ax.set_title('two hz wave sampled at 5 Hz')
+out=ax.set_xlabel('time (seconds)')
+
+
+
+

Note the problem at 2 Hz: the 5 Hz sampling frequency is too coarse to hit the top of every other peak in the wave. The ‘Nyquist frequency’, equal to half the sampling rate, is the highest frequency that equipment with a given sampling rate can reliably measure. In this example, where we are measuring 5 times a second (i.e. at 5 Hz), the Nyquist frequency is 2.5 Hz. Note that, whilst the 2 Hz signal is below the Nyquist frequency, and so it is measurable and we do detect a 2 Hz signal, because 2 Hz is close to the Nyquist frequency of 2.5 Hz the signal is distorted.

+
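To see why frequencies above the Nyquist frequency cannot be recovered, here is a small extra check (not one of the original lab cells): a 3 Hz cosine sampled at 5 Hz takes exactly the same values at the sample times as a 2 Hz cosine, so once sampled the two signals are indistinguishable.
+import numpy as np
+deltaT=0.2                            # 5 Hz sampling
+ticks = np.arange(0,20,deltaT)
+threehz=np.cos(2.*np.pi*3.*ticks)
+twohz=np.cos(2.*np.pi*2.*ticks)
+print(np.allclose(threehz,twohz))     # True: 3 Hz aliases to 5 - 3 = 2 Hz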
+
[ ]:
+
+
+
#now take the fft, we have 100 bins, so we alias at 50 bins, which is the nyquist frequency of 5 Hz/2. = 2.5 Hz
+# so the fft frequency resolution is 20 bins/Hz, or 1 bin = 0.05 Hz
+thefft=np.fft.fft(onehz)
+real_coeffs=np.real(thefft)
+
+fig,theAx=plt.subplots(1,1,figsize=(8,6))
+theAx.plot(real_coeffs)
+out=theAx.set_title('real fft of 1 hz')
+
+
+
+
+
The layout of the fft return value is described in the scipy user manual.
+
For reference, here is the Fourier transform calculated by numpy.fft:
+
+
+\[y[k] = \sum_{n=0}^{N-1} x[n]\exp \left (- i 2 \pi k n /N \right )\]
+

which is the discrete version of the continuous transform (Numerical Recipes 12.0.4):

+
+\[y(k) = \int_{-\infty}^{\infty} x(t) \exp \left ( -i k t \right ) dt\]
+

(Note the different minus sign convention in the exponent compared to Numerical Recipes p. 490. It doesn’t matter what you choose, as long as you’re consistent).

+

From the Scipy manual:

+
+

Inserting k=0 we see that np.sum(x) corresponds to y[0]. This term will be non-zero if we haven’t removed any large scale trend in the data. For N even, the elements y[1]…y[N/2−1] contain the positive-frequency terms, and the elements y[N/2]…y[N−1] contain the negative-frequency terms, in order of decreasingly negative frequency. For N odd, the elements y[1]…y[(N−1)/2] contain the positive- frequency terms, and the elements y[(N+1)/2]…y[N−1] contain the negative- frequency terms, in +order of decreasingly negative frequency. In case the sequence x is real-valued, the values of y[n] for positive frequencies is the conjugate of the values y[n] for negative frequencies (because the spectrum is symmetric). Typically, only the FFT corresponding to positive frequencies is plotted.

+
+

So the first peak at index 20 is (20 bins) x (0.05 Hz/bin) = 1 Hz, as expected. The nyquist frequency of 2.5 Hz is at an index of N/2 = 50 and the negative frequency peak is 20 bins to the left of the end bin.

+
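Rather than counting bins by hand, np.fft.fftfreq returns the frequency of each fft bin directly; as a small aside (not one of the original lab cells), applied to the onehz series and deltaT defined above:
+freqs = np.fft.fftfreq(len(onehz), d=deltaT)   # frequency (Hz) of each fft bin
+print(freqs[20], freqs[50], freqs[-20])        # 1.0, -2.5 (the Nyquist bin), -1.0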

The inverse transform is:

+
+\[x[n] = \frac{1}{N} \sum_{k=0}^{N-1} y[k]\exp \left ( i 2 \pi k n /N \right )\]
+

What about the imaginary part? All imaginary coefficients are zero (neglecting roundoff errors)

+
+
[ ]:
+
+
+
imag_coeffs=np.imag(thefft)
+fig,theAx=plt.subplots(1,1,figsize=(8,6))
+theAx.plot(imag_coeffs)
+out=theAx.set_title('imaginary fft of 1 hz')
+
+
+
+
+
[ ]:
+
+
+
#now evaluate the power spectrum using Stull's 8.6.1a on p. 312
+
+Power=np.real(thefft*np.conj(thefft))
+totsize=len(thefft)
+halfpoint=int(np.floor(totsize/2.))
+firsthalf=Power[0:halfpoint]
+
+
+fig,ax=plt.subplots(1,1)
+freq=np.arange(0,5.,0.05)
+ax.plot(freq[0:halfpoint],firsthalf)
+ax.set_title('power spectrum')
+out=ax.set_xlabel('frequency (Hz)')
+len(freq)
+plt.show()
+
+
+
+

Check Stull 8.6.1b (or Numerical Recipes 12.0.13), which says that the sum of the power spectrum (suitably normalized) equals the variance of the time series

+
+
[ ]:
+
+
+
print('\nsimple cosine: velocity variance %10.3f' % (np.sum(onehz*onehz)/totsize))
+print('simple cosine: Power spectrum sum %10.3f\n' % (np.sum(Power)/totsize**2.))
+
+
+
+
+
+

Power spectrum of turbulent vertical velocity

+

Let’s apply fft to some real atmospheric measurements. We’ll read in one of the files we downloaded at the beginning of the lab (you should be able to see these files on your local computer in this Lab9 directory).

+
+
[ ]:
+
+
+
#load data sampled at 20.8333 Hz
+
+td=np.load('miami_tower.npz') #load temp, uvel, vvel, wvel, minutes
+print('keys: ',td.keys())
+# Print the description saved in the file so we know what data are in the file
+print(td['description'])
+
+
+
+
+
[ ]:
+
+
+
# calculate the fft and plot the frequency-power spectrum
+sampleRate=20.833
+nyquistfreq=sampleRate/2.
+
+
+totsize=36000
+wvel=td['wvel'][0:totsize].flatten()
+temp=td['temp'][0:totsize].flatten()
+wvel = wvel - np.mean(wvel)
+temp= temp - np.mean(temp)
+flux=wvel*temp
+
+
+halfpoint=int(np.floor(totsize/2.))
+frequencies=np.arange(0,halfpoint)
+frequencies=frequencies/halfpoint
+frequencies=frequencies*nyquistfreq
+
+# raw spectrum -- no windowing or averaging
+#First confirm Parseval's theorem
+# (Numerical Recipes 12.1.10, p. 498)
+
+thefft=np.fft.fft(wvel)
+Power=np.real(thefft*np.conj(thefft))
+print('check Wiener-Khinchin theorem for wvel')
+print('\nraw fft sum, full time series: %10.4f\n' % (np.sum(Power)/totsize**2.))
+print('velocity variance: %10.4f\n' % (np.sum(wvel*wvel)/totsize))
+
+
+fig,theAx=plt.subplots(1,1,figsize=(8,8))
+frequencies[0]=np.NaN
+Power[0]=np.NaN
+Power_half=Power[:halfpoint:]
+theAx.loglog(frequencies,Power_half)
+theAx.set_title('raw wvel spectrum with $f^{-5/3}$')
+theAx.set(xlabel='frequency (Hz)',ylabel='Power (m^2/s^2)')
+#
+# pick one point the line should pass through (by eye)
+# note that y intercept will be at log10(freq)=0
+# or freq=1 Hz
+#
+leftspec=np.log10(Power[1]*1.e-3)
+logy=leftspec - 5./3.*np.log10(frequencies)
+yvals=10.**logy
+theAx.loglog(frequencies,yvals,'r-')
+thePoint=theAx.plot(1.,Power[1]*1.e-3,'g+')
+thePoint[0].set_markersize(15)
+thePoint[0].set_marker('h')
+thePoint[0].set_markerfacecolor('g')
+
+
+
+
+
+

power spectrum layout

+

Here is what the entire power spectrum looks like, showing positive and negative frequencies

+
+
[ ]:
+
+
+
fig,theAx=plt.subplots(1,1,figsize=(8,8))
+out=theAx.semilogy(Power)
+
+
+
+

and here is what fftshift does:

+
+
[ ]:
+
+
+
shift_power=np.fft.fftshift(Power)
+fig,theAx=plt.subplots(1,1,figsize=(8,8))
+out=theAx.semilogy(shift_power)
+
+
+
+
+

Confirm that the fft at negative f is the complex conjugate of the fft at positive f

+
+
[ ]:
+
+
+
test_fft=np.fft.fft(wvel)
+fig,theAx=plt.subplots(2,1,figsize=(8,10))
+theAx[0].semilogy(np.real(test_fft))
+theAx[1].semilogy(np.imag(test_fft))
+print(test_fft[100])
+print(test_fft[-100])
+
+
+
+
+
+
+

Windowing

+

The FFT above is noisy, and there are several ways to smooth it. Numerical Recipes, p. 550 has a good discussion of “windowing”, which helps remove the spurious power caused by the fact that the timeseries has a sudden stop and start. Below we split the timeseries into 25 segments of 1440 points each, fft each segment and then average the 25 spectra. Each segment is multiplied by a Bartlett window before transforming.

+
+
[ ]:
+
+
+
print('\n\n\nTry a windowed spectrum (Bartlett window)\n')
+## windowing -- see Numerical Recipes p. 550 for notation
+
+def calc_window(numvals=1440):
+    """
+      Calculate a Bartlett window following
+      Numerical Recipes 13.4.13
+    """
+
+    halfpoint=int(np.floor(numvals/2.))
+    facm=halfpoint
+    facp=1/facm
+
+    window=np.empty([numvals],float)
+    for j in np.arange(numvals):
+        window[j]=(1.-((j - facm)*facp)**2.)
+    return window
+
+#
+#  we need to normalize by the squared weights
+#  (see the fortran code on Numerical recipes p. 550)
+#
+numvals=1440
+window=calc_window(numvals=numvals)
+sumw=np.sum(window**2.)/numvals
+fig,theAx=plt.subplots(1,1,figsize=(8,8))
+theAx.plot(window)
+theAx.set_title('Bartlett window')
+print('sumw: %10.3f' % sumw)
+
+
+
+
+
[ ]:
+
+
+
def do_fft(the_series,window,ensemble=25,title='title'):
+    numvals=len(window)
+    sumw=np.sum(window**2.)/numvals
+    subset=the_series.copy()
+    subset=subset[:len(window)*ensemble]
+    subset=np.reshape(subset,(ensemble,numvals))
+    winspec=np.zeros([numvals],float)
+
+    for therow in np.arange(ensemble):
+        thedat=subset[therow,:]
+        thefft =np.fft.fft(thedat*window)
+        Power=thefft*np.conj(thefft)
+        #print('\nensemble member: %d' % therow)
+        #print('\nwindowed fft sum (m^2/s^2): %10.4f\n' % (np.sum(Power)/(sumw*numvals**2.),))
+        #print('velocity variance (m^2/s^2): %10.4f\n\n' % (np.sum(thedat*thedat)/numvals,))
+        winspec=winspec + Power
+
+    winspec=np.real(winspec/(numvals**2.*ensemble*sumw))
+    return winspec
+
+
+
+
+
+

Compare power spectra for wvel, theta, sensible heat flux

+
+

start with wvel

+
+
[ ]:
+
+
+
winspec=do_fft(wvel,window)
+sampleRate=20.833
+nyquistfreq=sampleRate/2.
+halfpoint=int(len(winspec)/2.)
+averaged_freq=np.linspace(0,1.,halfpoint)*nyquistfreq
+winspec=winspec[0:halfpoint]
+
+
+
+
+
[ ]:
+
+
+
def do_plot(the_freq,the_spec,title=None,ylabel=None):
+    the_freq[0]=np.NaN
+    the_spec[0]=np.NaN
+    fig,theAx=plt.subplots(1,1,figsize=(8,6))
+    theAx.loglog(the_freq,the_spec,label='fft power')
+    if title:
+        theAx.set_title(title)
+    leftspec=np.log10(the_spec[int(np.floor(halfpoint/10.))])
+    logy=leftspec - 5./3.*np.log10(the_freq)
+    yvals=10.**logy
+    theAx.loglog(the_freq,yvals,'g-',label='$f^{-5/3}$')
+    theAx.set_xlabel('frequency (Hz)')
+    if ylabel:
+        out=theAx.set_ylabel(ylabel)
+    out=theAx.legend(loc='best')
+    return theAx
+
+labels=dict(title='wvel power spectrum',ylabel='$(m^2\,s^{-2}\,Hz^{-1})$')
+ax=do_plot(averaged_freq,winspec,**labels)
+
+
+
+
+
[ ]:
+
+
+
winspec=do_fft(temp,window)
+winspec=winspec[0:halfpoint]
+labels=dict(title='Temperature power spectrum',ylabel='$K^{2}\,Hz^{-1})$')
+ax=do_plot(averaged_freq,winspec,**labels)
+
+
+
+
+
[ ]:
+
+
+
winspec=do_fft(flux,window)
+winspec=winspec[0:halfpoint]
+labels=dict(title='Heat flux power spectrum',ylabel='$K m s^{-1}\,Hz^{-1})$')
+ax=do_plot(averaged_freq,winspec,**labels)
+
+
+
+
+
+
+

Filtering

+

We can also filter our timeseries by removing frequencies we aren’t interested in. Numerical Recipes discusses the approach on page 551. For example, suppose we want to filter all frequencies higher than 0.5 Hz from the vertical velocity data.

+
+
[ ]:
+
+
+
wvel= wvel - np.mean(wvel)
+thefft=np.fft.fft(wvel)
+totsize=len(thefft)
+samprate=20.8333 #Hz
+the_time=np.arange(0,totsize,1/20.8333)
+freq_bin_width=samprate/(totsize*2)
+half_hz_index=int(np.floor(0.5/freq_bin_width))
+filter_func=np.zeros_like(thefft,dtype=np.float64)
+filter_func[0:half_hz_index]=1.
+filter_func[-half_hz_index:]=1.
+filtered_wvel=np.real(np.fft.ifft(filter_func*thefft))
+fig,ax=plt.subplots(1,1,figsize=(10,6))
+numpoints=500
+ax.plot(the_time[:numpoints],filtered_wvel[:numpoints],label='filtered')
+ax.plot(the_time[:numpoints],wvel[:numpoints],'g+',label='data')
+ax.set(xlabel='time (seconds)',ylabel='wvel (m/s)')
+out=ax.legend()
+
+
+
+
+
+

2D histogram of the optical depth \(\tau\)

+

Below I calculate the 2-d and averaged 1-d spectra for the optical depth, which gives the penetration depth of photons through a cloud, and is closely related to cloud thickness

+
+
[ ]:
+
+
+
# this allows us to ignore (not print out) some warnings
+import warnings
+warnings.filterwarnings("ignore",category=FutureWarning)
+
+
+
+
+
[ ]:
+
+
+
import netCDF4
+from netCDF4 import Dataset
+filelist=['a17.nc']
+with Dataset(filelist[0]) as nc:
+    tau=nc.variables['tau'][...]
+
+
+
+
+
+

Character of the optical depth field

+

The image below shows one of the marine boundary layer landsat scenes analyzed in Lewis et al., 2004

+

It is a 2048 x 2048 pixel image taken by Landsat 7, with the visible reflectivity converted to cloud optical depth. The pixels are 25 m x 25 m, so the scene extends for about 50 km x 50 km

+
+
[ ]:
+
+
+
%matplotlib inline
+from mpl_toolkits.axes_grid1 import make_axes_locatable
+plt.close('all')
+fig,ax=plt.subplots(1,2,figsize=(13,7))
+ax[0].set_title('landsat a17')
+im0=ax[0].imshow(tau)
+im1=ax[1].hist(tau.ravel())
+ax[1].set_title('histogram of tau values')
+divider = make_axes_locatable(ax[0])
+cax = divider.append_axes("bottom", size="5%", pad=0.35)
+out=fig.colorbar(im0,orientation='horizontal',cax=cax)
+
+
+
+
+
+

ubc_fft class

+

In the next cell I define a class that calculates the 2-d fft for a square image

+

in the method power_spectrum we calculate both the 2d fft and the power spectrum and save them as class attributes. The power spectrum is the two-dimensional field \(E(k_x, k_y)\) (in cartesian coordinates) or \(E(k,\theta)\) (in polar coordinates). In the method annular_avg I take the average

+
+\[\overline{E}(k) = \int_0^{2\pi} E(k, \theta) d\theta\]
+

and plot that average with the method graph_spectrum

+
+
[ ]:
+
+
+
from netCDF4 import Dataset
+import numpy as np
+import math
+from numpy import fft
+from matplotlib import pyplot as plt
+
+
+class ubc_fft:
+
+    def __init__(self, filename, var, scale):
+        """
+           Input filename, var=variable name,
+           scale= the size of the pixel in km
+
+           Constructer opens the netcdf file, reads the data and
+           saves the twodimensional fft
+        """
+        with Dataset(filename,'r') as fin:
+            data = fin.variables[var][...]
+        data = data - data.mean()
+        if data.shape[0] != data.shape[1]:
+            raise ValueError('expecting square matrix')
+        self.xdim = data.shape[0]     # size of each row of the array
+        self.midpoint = int(math.floor(self.xdim/2))
+        root,suffix = filename.split('.')
+        self.filename = root
+        self.var = var
+        self.scale = float(scale)
+        self.data = data
+        self.fft_data = fft.fft2(self.data)
+
+    def power_spectrum(self):
+        """
+           calculate the power spectrum for the 2-dimensional field
+        """
+        #
+        # fft_shift moves the zero frequency point to the  middle
+        # of the array
+        #
+        fft_shift = fft.fftshift(self.fft_data)
+        spectral_dens = fft_shift*np.conjugate(fft_shift)/(self.xdim*self.xdim)
+        spectral_dens = spectral_dens.real
+        #
+        # dimensional wavenumbers for the 2-dim spectrum (we need only the kx
+        # dimension since the image is square)
+        #
+        k_vals = np.arange(0,(self.midpoint))+1
+        k_vals = (k_vals-self.midpoint)/(self.xdim*self.scale)
+        self.spectral_dens=spectral_dens
+        self.k_vals=k_vals
+
+    def annular_avg(self,avg_binwidth):
+        """
+         integrate the 2-d power spectrum around a series of rings
+         of radius kradial and average into a set of 1-dimensional
+         radial bins
+        """
+        #
+        #  define the k axis which is the radius in the 2-d polar version of E
+        #
+        numbins = int(round((math.sqrt(2)*self.xdim/avg_binwidth),0)+1)
+
+        avg_spec = np.zeros(numbins,np.float64)
+        bin_count = np.zeros(numbins,np.float64)
+
+        print("\t- INTEGRATING... ")
+        for i in range(self.xdim):
+            if (i%100) == 0:
+                print("\t\trow: {} completed".format(i))
+            for j in range(self.xdim):
+                kradial = math.sqrt(((i+1)-self.xdim/2)**2+((j+1)-self.xdim/2)**2)
+                bin_num = int(math.floor(kradial/avg_binwidth))
+                avg_spec[bin_num]=avg_spec[bin_num]+ kradial*self.spectral_dens[i,j]
+                bin_count[bin_num]+=1
+
+        for i in range(numbins):
+            if bin_count[i]>0:
+                avg_spec[i]=avg_spec[i]*avg_binwidth/bin_count[i]/(4*(math.pi**2))
+        self.avg_spec=avg_spec
+        #
+        # dimensional wavenumbers for 1-d average spectrum
+        #
+        self.k_bins=np.arange(numbins)+1
+        self.k_bins = self.k_bins[0:self.midpoint]
+        self.avg_spec = self.avg_spec[0:self.midpoint]
+
+
+
+    def graph_spectrum(self, kol_slope=-5./3., kol_offset=1., \
+                      title=None):
+        """
+           graph the annular average and compare it to Kolmogorov -5/3
+        """
+        avg_spec=self.avg_spec
+        delta_k = 1./self.scale                # 1./km (1/0.025 for landsat 25 meter pixels)
+        nyquist = delta_k * 0.5
+        knum = self.k_bins * (nyquist/float(len(self.k_bins)))# k = w/(25m)
+        #
+        # draw the -5/3 line through a give spot
+        #
+        kol = kol_offset*(knum**kol_slope)
+        fig,ax=plt.subplots(1,1,figsize=(8,8))
+        ax.loglog(knum,avg_spec,'r-',label='power')
+        ax.loglog(knum,kol,'k-',label="$k^{-5/3}$")
+        ax.set(title=title,xlabel='k (1/km)',ylabel='$E_k$')
+        ax.legend()
+        self.plotax=ax
+
+
+
+
+
[ ]:
+
+
+
plt.close('all')
+plt.style.use('ggplot')
+output = ubc_fft('a17.nc','tau',0.025)
+output.power_spectrum()
+
+
+
+
+
[ ]:
+
+
+
fig,ax=plt.subplots(1,1,figsize=(7,7))
+ax.set_title('landsat a17')
+im0=ax.imshow(np.log10(output.spectral_dens))
+ax.set_title('log10 of the 2-d power spectrum')
+divider = make_axes_locatable(ax)
+cax = divider.append_axes("bottom", size="5%", pad=0.35)
+out=fig.colorbar(im0,orientation='horizontal',cax=cax)
+
+
+
+
+
[ ]:
+
+
+
avg_binwidth=5  #make the kradial bins 5 pixels wide
+output.annular_avg(avg_binwidth)
+
+
+
+
+
[ ]:
+
+
+
output.graph_spectrum(kol_offset=2000.,title='Landsat {} power spectrum'.format(output.filename))
+
+
+
+
+
+

Problem – lowpass filtering of a 2-d image

+

For the image above, we know that the 25 meter pixels correspond to k=1/0.025 = 40 \(km^{-1}\). That means that the Nyquist wavenumber is k=20 \(km^{-1}\). Using that information, design a filter that removes all wavenumbers higher than 1 \(km^{-1}\).

+
  1. Use that filter to zero those values in the fft, then inverse transform and plot the low-pass filtered image.

  2. Take the 1-d fft of the image and repeat the plot of the power spectrum to show that there is no power in wavenumbers higher than 1 \(km^{-1}\).

+

(Hint – I used the fftshift function to put the low wavenumber cells in the center of the fft, which made it simpler to zero the outer cells. I then used ifftshift to reverse shift before inverse transforming to get the filtered image.)
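
A minimal sketch of that approach for a generic square field (the helper name lowpass_2d, the 0.025 km pixel size and the 1 \(km^{-1}\) cutoff are illustrative assumptions, not the assigned solution):

+# sketch only -- illustrative helper, not part of the lab code
+import numpy as np
+import numpy.fft as fft
+
+def lowpass_2d(field, pixel_km=0.025, k_cut=1.0):
+    """zero every wavenumber with magnitude above k_cut (1/km), then inverse transform"""
+    npts = field.shape[0]
+    shifted = fft.fftshift(fft.fft2(field))           # low wavenumbers now in the centre
+    k = fft.fftshift(fft.fftfreq(npts, d=pixel_km))   # 1-d wavenumbers in 1/km
+    kx, ky = np.meshgrid(k, k)
+    shifted[np.sqrt(kx**2 + ky**2) > k_cut] = 0.      # zero the outer (high-wavenumber) cells
+    return np.real(fft.ifft2(fft.ifftshift(shifted)))

Re-plotting the power spectrum of the filtered field should then show no power above the cutoff.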

+
+
+

An aside about ffts: Using the fft to compute correlation

+

Below I use aircraft measurements of \(\theta\) and wvel taken at 25 Hz. I compute the autocorrelation using numpy.correlate and numpy.fft and show they are identical, as we’d expect.
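
(For reference, the fft route works because of the correlation theorem: for a real series \(w\) with transform \(\hat{w}\), the inverse FFT of the power spectrum \(\hat{w}\,\hat{w}^{*}\) is the (circular) autocorrelation of \(w\), which is what the second cell below evaluates.)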

+
+
[ ]:
+
+
+
#http://stackoverflow.com/questions/643699/how-can-i-use-numpy-correlate-to-do-autocorrelation
+import numpy as np
+%matplotlib inline
+data = np.load('aircraft.npz')
+wvel=data['wvel'] - data['wvel'].mean()
+theta=data['theta'] - data['theta'].mean()
+autocorr = np.correlate(wvel,wvel,mode='full')
+auto_data = autocorr[wvel.size:]
+ticks=np.arange(0,wvel.size)
+ticks=ticks/25.
+fig,ax = plt.subplots(1,1,figsize=(10,8))
+ax.set(xlabel='lag (seconds)',title='autocorrelation of wvel using numpy.correlate')
+out=ax.plot(ticks[:300],auto_data[:300])
+
+
+
+
+
[ ]:
+
+
+
import numpy.fft as fft
+the_fft = fft.fft(wvel)
+auto_fft = the_fft*np.conj(the_fft)
+auto_fft = np.real(fft.ifft(auto_fft))
+
+fig,ax = plt.subplots(1,1,figsize=(10,8))
+ax.plot(ticks[:300],auto_fft[:300])
+out=ax.set(xlabel='lag (seconds)',title='autocorrelation using fft')
+
+
+
+
+
+

An aside about ffts: Using ffts to find a wave envelope

+

Say you have a wave in your data, but it does not extend across all of your domain, e.g.:

+
+
[ ]:
+
+
+
# Create a cosine wave modulated by a larger wavelength envelope wave
+
+# create a cosine wave that oscillates 10 times in 10 seconds (i.e. a 1 Hz wave)
+# sampled at 10 Hz, so there are 10*10 = 100 measurements in 10 seconds
+#
+%matplotlib inline
+
+fig,axs = plt.subplots(2,2,figsize=(12,8))
+
+deltaT=0.1
+ticks = np.arange(0,10,deltaT)
+
+onehz=np.cos(2.0*np.pi*ticks)
+axs[0,0].plot(ticks,onehz)
+axs[0,0].set_title('wave one hz')
+
+# Define an envelope function that is zero between 0 and 2 seconds,
+# modulated by a sine wave between 2 and 8 seconds and zero afterwards
+
+envelope = np.empty_like(onehz)
+envelope[0:20] = 0.0
+envelope[20:80] = np.sin((1.0/6.0)*np.pi*ticks[0:60])
+envelope[80:100] = 0.0
+
+axs[0,1].plot(ticks,envelope)
+axs[0,1].set_title('envelope')
+
+envelopewave = onehz * envelope
+
+axs[1,0].plot(ticks,envelopewave)
+axs[1,0].set_title('one hz with envelope')
+plt.show()
+
+
+
+

We can do a standard FFT on this to see the power spectrum and then recover the original wave using the inverse FFT. However, we can also use FFTs in other ways to do wavelet analysis, e.g. to find the envelope function, using a method known as the Hilbert transform (see Zimen et al. 2003). This method uses the following steps:

  1. Perform the Fourier transform of the function.

  2. Apply the inverse Fourier transform to only the positive-wavenumber half of the Fourier spectrum.

  3. Calculate the magnitude of the result from step 2 (which will have both real and imaginary parts) and multiply by 2.0 to get the correct magnitude of the envelope function.
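
In symbols (a sketch of the same recipe): writing \(\hat{s}(f)\) for the FFT of the signal and \(H(f)\) for a step function equal to 1 at positive frequencies and 0 otherwise, the recovered envelope is \(A(t) = 2\,\bigl|\mathcal{F}^{-1}\{H(f)\,\hat{s}(f)\}\bigr|\), i.e. twice the magnitude of the inverse transform of the positive-frequency half of the spectrum.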

+
+
[ ]:
+
+
+
# Calculate the envelope function
+# Step 1. FFT
+thefft=np.fft.fft(envelopewave)
+
+# Find the corresponding frequencies for each index of thefft
+# note, these may not be the exact frequencies, as that will depend on the sampling resolution
+# of your input data, but for this purpose we just want to know which ones are positive and
+# which are negative so don't need to get the constant correct
+freqs = np.fft.fftfreq(len(envelopewave))
+
+# Step 2. Hilbert transform: set all negative frequencies to 0:
+filt_fft = thefft.copy() # make a copy of the array that we will change the negative frequencies in
+filt_fft[freqs<0] = 0 # set all values for negative frequencies to 0
+
+# Inverse FFT on full field:
+recover_sig = np.real(np.fft.ifft(thefft))  # keep the real part (imaginary residue is just round-off)
+# inverse FFT on only the positive wavenumbers
+positive_k_ifft = np.fft.ifft(filt_fft)
+
+# Step 3. Calculate magnitude and multiply by 2:
+envelope_sig = 2.0 *np.abs(positive_k_ifft)
+
+# Plot the result
+fig,axs = plt.subplots(1,1,figsize=(12,8))
+
+deltaT=0.1
+ticks = np.arange(0,10,deltaT)
+
+axs.plot(ticks,envelopewave,linewidth=3,label='original signal')
+
+axs.plot(ticks,recover_sig,linestyle=':',color='k',linewidth=3,label ='signal via FFT')
+
+
+axs.plot(ticks,envelope_sig,linewidth=3,color='b',label='envelope from FFT')
+axs.set_title('Envelope from Hilbert transform')
+axs.legend(loc='best')
+plt.show()
+
+
+
+
+
[ ]:
+
+
+

+
+
+
+
+
[ ]:
+
+
+

+
+
+
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/notebooks/lab9/01-lab9.ipynb b/notebooks/lab9/01-lab9.ipynb new file mode 100644 index 0000000..fcf23c0 --- /dev/null +++ b/notebooks/lab9/01-lab9.ipynb @@ -0,0 +1,1254 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "# Laboratory 9: Fast Fourier Transforms" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Objectives\n", + "\n", + "In this lab, you will explore Fast Fourier Transforms as a method of analysing and filtering data. The goal is to gain a better understanding of how Fourier transforms can be used to analyse the power spectral density of data (how much power there is in different frequencies) and for filtering data (removing certain frequencies from the data, whether that's trying to remove noise, or isolating some frequencies of interest).\n", + "\n", + "Specifically you will be able to:\n", + "\n", + "- understand how sampling frequency can impact the frequencies you can detect in data\n", + "\n", + "- use Fourier transforms to analyse atmospheric wind measurements and calculate and plot power spectral densities\n", + "\n", + "- use Fourier transforms to filter data by removing particular frequencies" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Introduction \n", + "\n", + "This lab introduces the use of the fast Fourier transform for estimation\n", + "of the power spectral density and for simple filtering. If you need a refresher or\n", + "are learning about fourier transforms for the first time, we recommend reading\n", + "[Newman Chapter 7](https://owncloud.eoas.ubc.ca/s/STrxS2pXewjqdYt). For a description of the Fast Fourier transform,\n", + "see [Stull Section 8.4](https://owncloud.eoas.ubc.ca/s/KMfPeGPLs2Fe7Qq) and [Jake VanderPlas's blog entry](https://jakevdp.github.io/blog/2013/08/28/understanding-the-fft/). Another good resources is\n", + "[Numerical Recipes Chapter 12](https://nextcloud.eoas.ubc.ca/s/cnbBeQ47qBMgq3K)\n", + "\n", + "Before running this lab you will need to: \n", + " 1. Install netCDF4 by running:\n", + " \n", + " conda install netCDF4 \n", + " \n", + "after you have activated your numeric_2024 conda environment \n", + " \n", + "2. 
download some data by running the cell below\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "start_time": "2022-03-04T20:51:44.330Z" + } + }, + "outputs": [], + "source": [ + "# Download some data that we will need:\n", + "from matplotlib import pyplot as plt\n", + "import urllib\n", + "import os\n", + "filelist=['miami_tower.npz','a17.nc','aircraft.npz']\n", + "data_download=True\n", + "if data_download:\n", + " for the_file in filelist:\n", + " url='http://clouds.eos.ubc.ca/~phil/docs/atsc500/data/{}'.format(the_file)\n", + " urllib.request.urlretrieve(url,the_file)\n", + " print(\"download {}: size is {:6.2g} Mbytes\".format(the_file,os.path.getsize(the_file)*1.e-6))" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-08T03:04:50.917356Z", + "start_time": "2022-03-08T03:04:50.914138Z" + } + }, + "outputs": [], + "source": [ + "# import required packages\n", + "import numpy as np\n", + "import matplotlib.pyplot as plt\n", + "plt.style.use('ggplot')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## A simple transform\n", + "\n", + "To get started assume that there is a pure tone -- a cosine wave oscillating at a frequency of 1 Hz. Next assume that we sample that 1 Hz wave at a sampling rate of 5 Hz i.e. 5 times a second\n", + "\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:27.525727Z", + "start_time": "2022-03-04T20:48:27.320858Z" + } + }, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "#\n", + "# create a cosine wave that oscilates 20 times in 20 seconds\n", + "# sampled at Hz, so there are 20*5 = 100 measurements in 20 seconds\n", + "#\n", + "deltaT=0.2\n", + "ticks = np.arange(0,20,deltaT)\n", + "#\n", + "#20 cycles in 20 seconds, each cycle goes through 2*pi radians\n", + "#\n", + "onehz=np.cos(2.*np.pi*ticks)\n", + "fig,ax = plt.subplots(1,1,figsize=(8,6))\n", + "ax.plot(ticks,onehz)\n", + "ax.set_title('one hz wave sampled at 5 Hz')\n", + "out=ax.set_xlabel('time (seconds)')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Repeat, but for a 2 Hz wave" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:27.862162Z", + "start_time": "2022-03-04T20:48:27.719034Z" + } + }, + "outputs": [], + "source": [ + "deltaT=0.2\n", + "ticks = np.arange(0,20,deltaT)\n", + "#\n", + "#40 cycles in 20 seconds, each cycle goes through 2*pi radians\n", + "#\n", + "twohz=np.cos(2.*2.*np.pi*ticks)\n", + "fig,ax = plt.subplots(1,1,figsize=(8,6))\n", + "ax.plot(ticks,twohz)\n", + "ax.set_title('two hz wave sampled at 5 Hz')\n", + "out=ax.set_xlabel('time (seconds)')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Note the problem at 2 Hz, the 5 Hz sampling frequency is too coarse to hit the top of every other\n", + "peak in the wave. The 'Nyquist frequency' = 2 $\\times$ the sampling rate, is the highest frequency that equipment of a given sampling rate can reliably measure. In this example where we are measuring 5 times a second (i.e. at 5Hz), the Nyquist frequency is 2.5Hz. Note that, whilst the 2Hz signal is below the Nyquist frequency, and so it is measurable and we do detect a 2Hz signal, because 2Hz is close to the Nyquist frequency of 2.5Hz, the signal is being distorted." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:28.672553Z", + "start_time": "2022-03-04T20:48:28.567073Z" + } + }, + "outputs": [], + "source": [ + "#now take the fft, we have 100 bins, so we alias at 50 bins, which is the nyquist frequency of 5 Hz/2. = 2.5 Hz\n", + "# so the fft frequency resolution is 20 bins/Hz, or 1 bin = 0.05 Hz\n", + "thefft=np.fft.fft(onehz)\n", + "real_coeffs=np.real(thefft)\n", + "\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,6))\n", + "theAx.plot(real_coeffs)\n", + "out=theAx.set_title('real fft of 1 hz')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The layout of the fft return value is describe in \n", + "[the scipy user manual](http://docs.scipy.org/doc/scipy/reference/tutorial/fftpack.html#id9). \n", + "For reference, here is the Fourier transform calculated by numpy.fft:\n", + "\n", + "$$y[k] = \\sum_{n=0}^{N-1} x[n]\\exp \\left (- i 2 \\pi k n /N \\right )$$\n", + "\n", + "which is the discrete version of the continuous transform (Numerical Recipes 12.0.4):\n", + "\n", + "$$y(k) = \\int_{-\\infty}^{\\infty} x(t) \\exp \\left ( -i k t \\right ) dt$$\n", + "\n", + "(Note the different minus sign convention in the exponent compared to Numerical Recipes p. 490. It doesn't matter what you choose, as long as you're consistent).\n", + "\n", + "From the Scipy manual:\n", + "\n", + "> Inserting k=0 we see that np.sum(x) corresponds to y[0]. This term will be non-zero if we haven't removed any large scale trend in the data. For N even, the elements y[1]...y[N/2−1] contain the positive-frequency terms, and the elements y[N/2]...y[N−1] contain the negative-frequency terms, in order of decreasingly negative frequency. For N odd, the elements y[1]...y[(N−1)/2] contain the positive- frequency terms, and the elements y[(N+1)/2]...y[N−1] contain the negative- frequency terms, in order of decreasingly negative frequency.\n", + "> In case the sequence x is real-valued, the values of y[n] for positive frequencies is the conjugate of the values y[n] for negative frequencies (because the spectrum is symmetric). Typically, only the FFT corresponding to positive frequencies is plotted.\n", + "\n", + "So the first peak at index 20 is (20 bins) x (0.05 Hz/bin) = 1 Hz, as expected. The nyquist frequency of 2.5 Hz is at an index of N/2 = 50 and the negative frequency peak is 20 bins to the left of the end bin.\n", + "\n", + "\n", + "The inverse transform is:\n", + "\n", + "$$x[n] = \\frac{1}{N} \\sum_{k=0}^{N-1} y]k]\\exp \\left ( i 2 \\pi k n /N \\right )$$" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "What about the imaginary part? All imaginary coefficients are zero (neglecting roundoff errors)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-04T20:48:30.797803Z", + "start_time": "2022-03-04T20:48:30.680290Z" + } + }, + "outputs": [], + "source": [ + "imag_coeffs=np.imag(thefft)\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,6))\n", + "theAx.plot(imag_coeffs)\n", + "out=theAx.set_title('imaginary fft of 1 hz')" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:17:08.773334Z", + "start_time": "2022-03-03T21:17:08.670253Z" + } + }, + "outputs": [], + "source": [ + "#now evaluate the power spectrum using Stull's 8.6.1a on p. 
312\n", + "\n", + "Power=np.real(thefft*np.conj(thefft))\n", + "totsize=len(thefft)\n", + "halfpoint=int(np.floor(totsize/2.))\n", + "firsthalf=Power[0:halfpoint]\n", + "\n", + "\n", + "fig,ax=plt.subplots(1,1)\n", + "freq=np.arange(0,5.,0.05)\n", + "ax.plot(freq[0:halfpoint],firsthalf)\n", + "ax.set_title('power spectrum')\n", + "out=ax.set_xlabel('frequency (Hz)')\n", + "len(freq)\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Check Stull 8.6.1b (or Numerical Recipes 12.0.13) which says that squared power spectrum = variance\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:17:11.622908Z", + "start_time": "2022-03-03T21:17:11.618116Z" + } + }, + "outputs": [], + "source": [ + "print('\\nsimple cosine: velocity variance %10.3f' % (np.sum(onehz*onehz)/totsize))\n", + "print('simple cosine: Power spectrum sum %10.3f\\n' % (np.sum(Power)/totsize**2.))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Power spectrum of turbulent vertical velocity" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's apply fft to some real atmospheric measurements. We'll read in one of the files we downloaded at the beginning of the lab (you should be able to see these files on your local computer in this Lab9 directory)." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:19:06.371394Z", + "start_time": "2022-03-03T21:19:06.363885Z" + }, + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "#load data sampled at 20.8333 Hz\n", + "\n", + "td=np.load('miami_tower.npz') #load temp, uvel, vvel, wvel, minutes\n", + "print('keys: ',td.keys())\n", + "# Print the description saved in the file so we know what data are in the file\n", + "print(td['description'])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:20:16.106598Z", + "start_time": "2022-03-03T21:20:15.516697Z" + }, + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "# calculate the fft and plot the frequency-power spectrum\n", + "sampleRate=20.833\n", + "nyquistfreq=sampleRate/2.\n", + "\n", + "\n", + "totsize=36000\n", + "wvel=td['wvel'][0:totsize].flatten()\n", + "temp=td['temp'][0:totsize].flatten()\n", + "wvel = wvel - np.mean(wvel)\n", + "temp= temp - np.mean(temp)\n", + "flux=wvel*temp\n", + "\n", + "\n", + "halfpoint=int(np.floor(totsize/2.))\n", + "frequencies=np.arange(0,halfpoint)\n", + "frequencies=frequencies/halfpoint\n", + "frequencies=frequencies*nyquistfreq\n", + "\n", + "# raw spectrum -- no windowing or averaging\n", + "#First confirm Parseval's theorem\n", + "# (Numerical Recipes 12.1.10, p. 
498)\n", + "\n", + "thefft=np.fft.fft(wvel)\n", + "Power=np.real(thefft*np.conj(thefft))\n", + "print('check Wiener-Khichine theorem for wvel')\n", + "print('\\nraw fft sum, full time series: %10.4f\\n' % (np.sum(Power)/totsize**2.))\n", + "print('velocity variance: %10.4f\\n' % (np.sum(wvel*wvel)/totsize))\n", + "\n", + "\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "frequencies[0]=np.NaN\n", + "Power[0]=np.NaN\n", + "Power_half=Power[:halfpoint:]\n", + "theAx.loglog(frequencies,Power_half)\n", + "theAx.set_title('raw wvel spectrum with $f^{-5/3}$')\n", + "theAx.set(xlabel='frequency (HZ)',ylabel='Power (m^2/s^2)')\n", + "#\n", + "# pick one point the line should pass through (by eye)\n", + "# note that y intercept will be at log10(freq)=0\n", + "# or freq=1 Hz\n", + "#\n", + "leftspec=np.log10(Power[1]*1.e-3)\n", + "logy=leftspec - 5./3.*np.log10(frequencies)\n", + "yvals=10.**logy\n", + "theAx.loglog(frequencies,yvals,'r-')\n", + "thePoint=theAx.plot(1.,Power[1]*1.e-3,'g+')\n", + "thePoint[0].set_markersize(15)\n", + "thePoint[0].set_marker('h')\n", + "thePoint[0].set_markerfacecolor('g')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## power spectrum layout" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Here is what the entire power spectrum looks like, showing positive and negative frequencies" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:20:20.946232Z", + "start_time": "2022-03-03T21:20:20.597672Z" + }, + "scrolled": true + }, + "outputs": [], + "source": [ + "fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "out=theAx.semilogy(Power)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "and here is what fftshift does:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:20:41.449621Z", + "start_time": "2022-03-03T21:20:41.098947Z" + } + }, + "outputs": [], + "source": [ + "shift_power=np.fft.fftshift(Power)\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "out=theAx.semilogy(shift_power)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### Confirm that the fft at negative f is the complex conjugate of the fft at positive f" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:01.817604Z", + "start_time": "2022-03-03T21:20:59.751476Z" + } + }, + "outputs": [], + "source": [ + "test_fft=np.fft.fft(wvel)\n", + "fig,theAx=plt.subplots(2,1,figsize=(8,10))\n", + "theAx[0].semilogy(np.real(test_fft))\n", + "theAx[1].semilogy(np.imag(test_fft))\n", + "print(test_fft[100])\n", + "print(test_fft[-100])" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Windowing\n", + "\n", + "The FFT above is noisy, and there are several ways to smooth it. Numerical Recipes, p. 550 has a good discussion of \"windowing\" which helps remove the spurious power caused by the fact that the timeseries has a sudden stop and start.\n", + "Below we split the timeseries into 25 segements of 1440 points each, fft each segment then average the 25. We convolve each segment with a Bartlett window." 
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:29.850813Z", + "start_time": "2022-03-03T21:21:29.532796Z" + } + }, + "outputs": [], + "source": [ + "print('\\n\\n\\nTry a windowed spectrum (Bartlett window)\\n')\n", + "## windowing -- see p. Numerical recipes 550 for notation\n", + "\n", + "def calc_window(numvals=1440):\n", + " \"\"\"\n", + " Calculate a Bartlett window following\n", + " Numerical Recipes 13.4.13\n", + " \"\"\"\n", + "\n", + " halfpoint=int(np.floor(numvals/2.))\n", + " facm=halfpoint\n", + " facp=1/facm\n", + "\n", + " window=np.empty([numvals],float)\n", + " for j in np.arange(numvals):\n", + " window[j]=(1.-((j - facm)*facp)**2.)\n", + " return window\n", + "\n", + "#\n", + "# we need to normalize by the squared weights\n", + "# (see the fortran code on Numerical recipes p. 550)\n", + "#\n", + "numvals=1440\n", + "window=calc_window(numvals=numvals)\n", + "sumw=np.sum(window**2.)/numvals\n", + "fig,theAx=plt.subplots(1,1,figsize=(8,8))\n", + "theAx.plot(window)\n", + "theAx.set_title('Bartlett window')\n", + "print('sumw: %10.3f' % sumw)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:31.330525Z", + "start_time": "2022-03-03T21:21:31.325843Z" + } + }, + "outputs": [], + "source": [ + "def do_fft(the_series,window,ensemble=25,title='title'):\n", + " numvals=len(window)\n", + " sumw=np.sum(window**2.)/numvals\n", + " subset=the_series.copy()\n", + " subset=subset[:len(window)*ensemble]\n", + " subset=np.reshape(subset,(ensemble,numvals))\n", + " winspec=np.zeros([numvals],float)\n", + "\n", + " for therow in np.arange(ensemble):\n", + " thedat=subset[therow,:]\n", + " thefft =np.fft.fft(thedat*window)\n", + " Power=thefft*np.conj(thefft)\n", + " #print('\\nensemble member: %d' % therow)\n", + " #print('\\nwindowed fft sum (m^2/s^2): %10.4f\\n' % (np.sum(Power)/(sumw*numvals**2.),))\n", + " #print('velocity variance (m^2/s^2): %10.4f\\n\\n' % (np.sum(thedat*thedat)/numvals,))\n", + " winspec=winspec + Power\n", + "\n", + " winspec=np.real(winspec/(numvals**2.*ensemble*sumw))\n", + " return winspec" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Compare power spectra for wvel, theta, sensible heat flux" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "### start with wvel" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:33.181739Z", + "start_time": "2022-03-03T21:21:33.174399Z" + } + }, + "outputs": [], + "source": [ + "winspec=do_fft(wvel,window)\n", + "sampleRate=20.833\n", + "nyquistfreq=sampleRate/2.\n", + "halfpoint=int(len(winspec)/2.)\n", + "averaged_freq=np.linspace(0,1.,halfpoint)*nyquistfreq\n", + "winspec=winspec[0:halfpoint]" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:35.469984Z", + "start_time": "2022-03-03T21:21:34.907400Z" + }, + "lines_to_next_cell": 2 + }, + "outputs": [], + "source": [ + "def do_plot(the_freq,the_spec,title=None,ylabel=None):\n", + " the_freq[0]=np.NaN\n", + " the_spec[0]=np.NaN\n", + " fig,theAx=plt.subplots(1,1,figsize=(8,6))\n", + " theAx.loglog(the_freq,the_spec,label='fft power')\n", + " if title:\n", + " theAx.set_title(title)\n", + " leftspec=np.log10(the_spec[int(np.floor(halfpoint/10.))])\n", + " logy=leftspec - 5./3.*np.log10(the_freq)\n", + " 
yvals=10.**logy\n", + " theAx.loglog(the_freq,yvals,'g-',label='$f^{-5/3}$')\n", + " theAx.set_xlabel('frequency (Hz)')\n", + " if ylabel:\n", + " out=theAx.set_ylabel(ylabel)\n", + " out=theAx.legend(loc='best')\n", + " return theAx\n", + "\n", + "labels=dict(title='wvel power spectrum',ylabel='$(m^2\\,s^{-2}\\,Hz^{-1})$')\n", + "ax=do_plot(averaged_freq,winspec,**labels)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:36.209773Z", + "start_time": "2022-03-03T21:21:35.633088Z" + } + }, + "outputs": [], + "source": [ + "winspec=do_fft(temp,window)\n", + "winspec=winspec[0:halfpoint]\n", + "labels=dict(title='Temperature power spectrum',ylabel='$K^{2}\\,Hz^{-1})$')\n", + "ax=do_plot(averaged_freq,winspec,**labels)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:37.174492Z", + "start_time": "2022-03-03T21:21:36.451203Z" + } + }, + "outputs": [], + "source": [ + "winspec=do_fft(flux,window)\n", + "winspec=winspec[0:halfpoint]\n", + "labels=dict(title='Heat flux power spectrum',ylabel='$K m s^{-1}\\,Hz^{-1})$')\n", + "ax=do_plot(averaged_freq,winspec,**labels)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Filtering" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can also filter our timeseries by removing frequencies we aren't interested in. Numerical Recipes discusses the approach on page 551. For example, suppose we want to filter all frequencies higher than 0.5 Hz from the vertical velocity data." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:21:41.173979Z", + "start_time": "2022-03-03T21:21:41.030629Z" + } + }, + "outputs": [], + "source": [ + "wvel= wvel - np.mean(wvel)\n", + "thefft=np.fft.fft(wvel)\n", + "totsize=len(thefft)\n", + "samprate=20.8333 #Hz\n", + "the_time=np.arange(0,totsize,1/20.8333)\n", + "freq_bin_width=samprate/(totsize*2)\n", + "half_hz_index=int(np.floor(0.5/freq_bin_width))\n", + "filter_func=np.zeros_like(thefft,dtype=np.float64)\n", + "filter_func[0:half_hz_index]=1.\n", + "filter_func[-half_hz_index:]=1.\n", + "filtered_wvel=np.real(np.fft.ifft(filter_func*thefft))\n", + "fig,ax=plt.subplots(1,1,figsize=(10,6))\n", + "numpoints=500\n", + "ax.plot(the_time[:numpoints],filtered_wvel[:numpoints],label='filtered')\n", + "ax.plot(the_time[:numpoints],wvel[:numpoints],'g+',label='data')\n", + "ax.set(xlabel='time (seconds)',ylabel='wvel (m/s)')\n", + "out=ax.legend()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "collapsed": true, + "jupyter": { + "outputs_hidden": true + } + }, + "source": [ + "## 2D histogram of the optical depth $\\tau$\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Below I calculate the 2-d and averaged 1-d spectra for the optical depth, which gives the penetration depth of photons through a cloud, and is closely related to cloud thickness" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:32:33.540660Z", + "start_time": "2022-03-03T21:32:33.536922Z" + } + }, + "outputs": [], + "source": [ + "# this allows us to ignore (not print out) some warnings\n", + "import warnings\n", + "warnings.filterwarnings(\"ignore\",category=FutureWarning)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + 
"ExecuteTime": { + "end_time": "2022-03-03T21:33:03.591148Z", + "start_time": "2022-03-03T21:33:03.540959Z" + } + }, + "outputs": [], + "source": [ + "import netCDF4\n", + "from netCDF4 import Dataset\n", + "filelist=['a17.nc']\n", + "with Dataset(filelist[0]) as nc:\n", + " tau=nc.variables['tau'][...]" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Character of the optical depth field\n", + "\n", + "The image below shows one of the marine boundary layer landsat scenes analyzed in \n", + "[Lewis et al., 2004](http://onlinelibrary.wiley.com/doi/10.1029/2003JD003742/full)\n", + "\n", + "It is a 2048 x 2048 pixel image taken by Landsat 7, with the visible reflectivity converted to\n", + "cloud optical depth. The pixels are 25 m x 25 m, so the scene extends for about 50 km x 50 km" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:33:26.163275Z", + "start_time": "2022-03-03T21:33:25.475800Z" + } + }, + "outputs": [], + "source": [ + "%matplotlib inline\n", + "from mpl_toolkits.axes_grid1 import make_axes_locatable\n", + "plt.close('all')\n", + "fig,ax=plt.subplots(1,2,figsize=(13,7))\n", + "ax[0].set_title('landsat a17')\n", + "im0=ax[0].imshow(tau)\n", + "im1=ax[1].hist(tau.ravel())\n", + "ax[1].set_title('histogram of tau values')\n", + "divider = make_axes_locatable(ax[0])\n", + "cax = divider.append_axes(\"bottom\", size=\"5%\", pad=0.35)\n", + "out=fig.colorbar(im0,orientation='horizontal',cax=cax)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## ubc_fft class" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "In the next cell I define a class that calculates the 2-d fft for a square image\n", + "\n", + "in the method ```power_spectrum``` we calculate both the 2d fft and the power spectrum\n", + "and save them as class attributes. 
In the method ```annular_average``` I take the power spectrum,\n", + "which is the two-dimensional field $E(k_x, k_y)$ (in cartesian coordinates) or $E(k,\\theta)$ (in polar coordinates).\n", + "In the method ```annular_avg``` I take the average\n", + "\n", + "$$\n", + "\\overline{E}(k) = \\int_0^{2\\pi} E(k, \\theta) d\\theta\n", + "$$\n", + "\n", + "and plot that average with the method ```graph_spectrum```" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:07.660224Z", + "start_time": "2022-03-03T21:34:07.645124Z" + } + }, + "outputs": [], + "source": [ + "from netCDF4 import Dataset\n", + "import numpy as np\n", + "import math\n", + "from numpy import fft\n", + "from matplotlib import pyplot as plt\n", + "\n", + "\n", + "class ubc_fft:\n", + "\n", + " def __init__(self, filename, var, scale):\n", + " \"\"\"\n", + " Input filename, var=variable name,\n", + " scale= the size of the pixel in km\n", + "\n", + " Constructer opens the netcdf file, reads the data and\n", + " saves the twodimensional fft\n", + " \"\"\"\n", + " with Dataset(filename,'r') as fin:\n", + " data = fin.variables[var][...]\n", + " data = data - data.mean()\n", + " if data.shape[0] != data.shape[1]:\n", + " raise ValueError('expecting square matrix')\n", + " self.xdim = data.shape[0] # size of each row of the array\n", + " self.midpoint = int(math.floor(self.xdim/2))\n", + " root,suffix = filename.split('.')\n", + " self.filename = root\n", + " self.var = var\n", + " self.scale = float(scale)\n", + " self.data = data\n", + " self.fft_data = fft.fft2(self.data)\n", + "\n", + " def power_spectrum(self):\n", + " \"\"\"\n", + " calculate the power spectrum for the 2-dimensional field\n", + " \"\"\"\n", + " #\n", + " # fft_shift moves the zero frequency point to the middle\n", + " # of the array\n", + " #\n", + " fft_shift = fft.fftshift(self.fft_data)\n", + " spectral_dens = fft_shift*np.conjugate(fft_shift)/(self.xdim*self.xdim)\n", + " spectral_dens = spectral_dens.real\n", + " #\n", + " # dimensional wavenumbers for 2dim spectrum (need only the kx\n", + " # dimensional since image is square\n", + " #\n", + " k_vals = np.arange(0,(self.midpoint))+1\n", + " k_vals = (k_vals-self.midpoint)/(self.xdim*self.scale)\n", + " self.spectral_dens=spectral_dens\n", + " self.k_vals=k_vals\n", + "\n", + " def annular_avg(self,avg_binwidth):\n", + " \"\"\"\n", + " integrate the 2-d power spectrum around a series of rings\n", + " of radius kradial and average into a set of 1-dimensional\n", + " radial bins\n", + " \"\"\"\n", + " #\n", + " # define the k axis which is the radius in the 2-d polar version of E\n", + " #\n", + " numbins = int(round((math.sqrt(2)*self.xdim/avg_binwidth),0)+1)\n", + "\n", + " avg_spec = np.zeros(numbins,np.float64)\n", + " bin_count = np.zeros(numbins,np.float64)\n", + "\n", + " print(\"\\t- INTEGRATING... 
\")\n", + " for i in range(self.xdim):\n", + " if (i%100) == 0:\n", + " print(\"\\t\\trow: {} completed\".format(i))\n", + " for j in range(self.xdim):\n", + " kradial = math.sqrt(((i+1)-self.xdim/2)**2+((j+1)-self.xdim/2)**2)\n", + " bin_num = int(math.floor(kradial/avg_binwidth))\n", + " avg_spec[bin_num]=avg_spec[bin_num]+ kradial*self.spectral_dens[i,j]\n", + " bin_count[bin_num]+=1\n", + "\n", + " for i in range(numbins):\n", + " if bin_count[i]>0:\n", + " avg_spec[i]=avg_spec[i]*avg_binwidth/bin_count[i]/(4*(math.pi**2))\n", + " self.avg_spec=avg_spec\n", + " #\n", + " # dimensional wavenumbers for 1-d average spectrum\n", + " #\n", + " self.k_bins=np.arange(numbins)+1\n", + " self.k_bins = self.k_bins[0:self.midpoint]\n", + " self.avg_spec = self.avg_spec[0:self.midpoint]\n", + "\n", + "\n", + "\n", + " def graph_spectrum(self, kol_slope=-5./3., kol_offset=1., \\\n", + " title=None):\n", + " \"\"\"\n", + " graph the annular average and compare it to Kolmogorov -5/3\n", + " \"\"\"\n", + " avg_spec=self.avg_spec\n", + " delta_k = 1./self.scale # 1./km (1/0.025 for landsat 25 meter pixels)\n", + " nyquist = delta_k * 0.5\n", + " knum = self.k_bins * (nyquist/float(len(self.k_bins)))# k = w/(25m)\n", + " #\n", + " # draw the -5/3 line through a give spot\n", + " #\n", + " kol = kol_offset*(knum**kol_slope)\n", + " fig,ax=plt.subplots(1,1,figsize=(8,8))\n", + " ax.loglog(knum,avg_spec,'r-',label='power')\n", + " ax.loglog(knum,kol,'k-',label=\"$k^{-5/3}$\")\n", + " ax.set(title=title,xlabel='k (1/km)',ylabel='$E_k$')\n", + " ax.legend()\n", + " self.plotax=ax" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:15.853713Z", + "start_time": "2022-03-03T21:34:15.510810Z" + } + }, + "outputs": [], + "source": [ + "plt.close('all')\n", + "plt.style.use('ggplot')\n", + "output = ubc_fft('a17.nc','tau',0.025)\n", + "output.power_spectrum()" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:29.176416Z", + "start_time": "2022-03-03T21:34:28.420369Z" + } + }, + "outputs": [], + "source": [ + "fig,ax=plt.subplots(1,1,figsize=(7,7))\n", + "ax.set_title('landsat a17')\n", + "im0=ax.imshow(np.log10(output.spectral_dens))\n", + "ax.set_title('log10 of the 2-d power spectrum')\n", + "divider = make_axes_locatable(ax)\n", + "cax = divider.append_axes(\"bottom\", size=\"5%\", pad=0.35)\n", + "out=fig.colorbar(im0,orientation='horizontal',cax=cax)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:47.397968Z", + "start_time": "2022-03-03T21:34:40.370991Z" + } + }, + "outputs": [], + "source": [ + "avg_binwidth=5 #make the kradial bins 5 pixels wide\n", + "output.annular_avg(avg_binwidth)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:34:50.839648Z", + "start_time": "2022-03-03T21:34:50.110205Z" + } + }, + "outputs": [], + "source": [ + "output.graph_spectrum(kol_offset=2000.,title='Landsat {} power spectrum'.format(output.filename))" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Problem -- lowpass filtering of a 2-d image\n", + "\n", + "For the image above, \n", + "we know that the 25 meter pixels correspond to k=1/0.025 = 40 $km^{-1}$. That means that the Nyquist\n", + "wavenumber is k=20 $km^{-1}$. 
Using that information, design a filter that removes all wavenumbers\n", + "higher than 1 $km^{-1}$. \n", + "\n", + "1) Use that filter to zero those values in the fft, then inverse transform and\n", + "plot the low-pass filtered image.\n", + "\n", + "2) Take the 1-d fft of the image and repeat the plot of the power spectrum to show that there is no power in wavenumbers higher than 1 $km^{-1}$.\n", + "\n", + "(Hint -- I used the fftshift function to put the low wavenumber cells in the center of the fft, which made it simpler to zero the outer cells. I then used ifftshift to reverse shift before inverse transforming to get the filtered\n", + "image.)\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## An aside about ffts: Using the fft to compute correlation\n", + "\n", + "Below I use aircraft measurments of $\\theta$ and wvel taken at 25 Hz. I compute the \n", + "autocorrelation using numpy.correlate and numpy.fft and show they are identical, as we'd expect" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:36:15.510615Z", + "start_time": "2022-03-03T21:36:14.995216Z" + } + }, + "outputs": [], + "source": [ + "#http://stackoverflow.com/questions/643699/how-can-i-use-numpy-correlate-to-do-autocorrelation\n", + "import numpy as np\n", + "%matplotlib inline\n", + "data = np.load('aircraft.npz')\n", + "wvel=data['wvel'] - data['wvel'].mean()\n", + "theta=data['theta'] - data['theta'].mean()\n", + "autocorr = np.correlate(wvel,wvel,mode='full')\n", + "auto_data = autocorr[wvel.size:]\n", + "ticks=np.arange(0,wvel.size)\n", + "ticks=ticks/25.\n", + "fig,ax = plt.subplots(1,1,figsize=(10,8))\n", + "ax.set(xlabel='lag (seconds)',title='autocorrelation of wvel using numpy.correlate')\n", + "out=ax.plot(ticks[:300],auto_data[:300])" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-03T21:36:24.626642Z", + "start_time": "2022-03-03T21:36:24.494830Z" + } + }, + "outputs": [], + "source": [ + "import numpy.fft as fft\n", + "the_fft = fft.fft(wvel)\n", + "auto_fft = the_fft*np.conj(the_fft)\n", + "auto_fft = np.real(fft.ifft(auto_fft))\n", + "\n", + "fig,ax = plt.subplots(1,1,figsize=(10,8))\n", + "ax.plot(ticks[:300],auto_fft[:300])\n", + "out=ax.set(xlabel='lag (seconds)',title='autocorrelation using fft')" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## An aside about ffts: Using ffts to find a wave envelope" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Say you have a wave in your data, but it is not across all of your domain, e.g:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-08T03:05:21.654984Z", + "start_time": "2022-03-08T03:05:21.323238Z" + }, + "code_folding": [ + 0 + ] + }, + "outputs": [], + "source": [ + "# Create a cosine wave modulated by a larger wavelength envelope wave\n", + "\n", + "# create a cosine wave that oscilates x times in 10 seconds\n", + "# sampled at 10 Hz, so there are 10*10 = 100 measurements in 10 seconds\n", + "#\n", + "%matplotlib inline\n", + "\n", + "fig,axs = plt.subplots(2,2,figsize=(12,8))\n", + "\n", + "deltaT=0.1\n", + "ticks = np.arange(0,10,deltaT)\n", + "\n", + "onehz=np.cos(2.0*np.pi*ticks)\n", + "axs[0,0].plot(ticks,onehz)\n", + "axs[0,0].set_title('wave one hz')\n", + "\n", + "# Define an evelope function that is zero between 0 and 2 
second,\n", + "# modulated by a sine wave between 2 and 8 and zero afterwards\n", + "\n", + "envelope = np.empty_like(onehz)\n", + "envelope[0:20] = 0.0\n", + "envelope[20:80] = np.sin((1.0/6.0)*np.pi*ticks[0:60])\n", + "envelope[80:100] = 0.0\n", + "\n", + "axs[0,1].plot(ticks,envelope)\n", + "axs[0,1].set_title('envelope')\n", + "\n", + "envelopewave = onehz * envelope\n", + "\n", + "axs[1,0].plot(ticks,envelopewave)\n", + "axs[1,0].set_title('one hz with envelope')\n", + "plt.show()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "We can do a standard FFT on this to see the power spectrum and then recover the original wave using the inverse FFT. However, we can also use FFTs in other ways to do wavelet analysis, e.g. to find the envelope function, using a method known as the Hilbert transform (see [Zimen et al. 2003](https://nextcloud.eoas.ubc.ca/s/E2ebfKNp2mF2kY5)). This method uses the following steps:\n", + "1. Perform the fourier transform of the function.\n", + "2. Apply the inverse fourier transform to only the positive wavenumber half of the Fourier spectrum.\n", + "3. Calculate the magnitude of the result from step 2 (which will have both real and imaginary parts) and multiply by 2.0 to get the correct magnitude of the envelope function." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "ExecuteTime": { + "end_time": "2022-03-08T03:07:51.709585Z", + "start_time": "2022-03-08T03:07:51.558910Z" + }, + "code_folding": [] + }, + "outputs": [], + "source": [ + "# Calculate the envelope function\n", + "# Step 1. FFT\n", + "thefft=np.fft.fft(envelopewave)\n", + "\n", + "# Find the corresponding frequencies for each index of thefft\n", + "# note, these may not be the exact frequencies, as that will depend on the sampling resolution\n", + "# of your input data, but for this purpose we just want to know which ones are positive and\n", + "# which are negative so don't need to get the constant correct\n", + "freqs = np.fft.fftfreq(len(envelopewave))\n", + "\n", + "# Step 2. Hilbert transform: set all negative frequencies to 0:\n", + "filt_fft = thefft.copy() # make a copy of the array that we will change the negative frequencies in\n", + "filt_fft[freqs<0] = 0 # set all values for negative frequenices to 0\n", + "\n", + "# Inverse FFT on full field:\n", + "recover_sig = np.fft.ifft(thefft)\n", + "# inverse FFT on only the positive wavenumbers\n", + "positive_k_ifft = np.fft.ifft(filt_fft)\n", + "\n", + "# Step 3. 
Calculate magnitude and multiply by 2:\n", + "envelope_sig = 2.0 *np.abs(positive_k_ifft)\n", + "\n", + "# Plot the result\n", + "fig,axs = plt.subplots(1,1,figsize=(12,8))\n", + "\n", + "deltaT=0.1\n", + "ticks = np.arange(0,10,deltaT)\n", + "\n", + "axs.plot(ticks,envelopewave,linewidth=3,label='original signal')\n", + "\n", + "axs.plot(ticks,recover_sig,linestyle=':',color='k',linewidth=3,label ='signal via FFT')\n", + "\n", + "\n", + "axs.plot(ticks,envelope_sig,linewidth=3,color='b',label='envelope from FFT')\n", + "axs.set_title('Envelope from Hilbert transform')\n", + "axs.legend(loc='best')\n", + "plt.show()\n" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [] + } + ], + "metadata": { + "jupytext": { + "encoding": "# -*- coding: utf-8 -*-", + "formats": "ipynb,py:percent", + "notebook_metadata_filter": "all,-language_info,-toc,-latex_envs" + }, + "kernelspec": { + "display_name": "Python 3 (ipykernel)", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.12.1" + }, + "nbsphinx": { + "execute": "never" + }, + "toc": { + "base_numbering": 1, + "nav_menu": {}, + "number_sections": true, + "sideBar": true, + "skip_h1_title": false, + "title_cell": "Table of Contents", + "title_sidebar": "Contents", + "toc_cell": false, + "toc_position": {}, + "toc_section_display": true, + "toc_window_display": true + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/objects.inv b/objects.inv new file mode 100644 index 0000000..ac26c99 Binary files /dev/null and b/objects.inv differ diff --git a/rubrics.html b/rubrics.html new file mode 100644 index 0000000..3a8ed3f --- /dev/null +++ b/rubrics.html @@ -0,0 +1,81 @@ + + + + + + + + Links to rubrics — Numeric course 22.1 documentation + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ + + + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/search.html b/search.html new file mode 100644 index 0000000..084d9e7 --- /dev/null +++ b/search.html @@ -0,0 +1,94 @@ + + + + + + + Search — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + + + + +
+
+ + +
+
+ +

Search

+ + + + +

+ Searching for multiple words only shows matches that contain + all words. +

+ + +
+ + + +
+ + +
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/searchindex.js b/searchindex.js new file mode 100644 index 0000000..ec554ac --- /dev/null +++ b/searchindex.js @@ -0,0 +1 @@ +Search.setIndex({"alltitles": {"2D histogram of the optical depth \\tau": [[17, "2D-histogram-of-the-optical-depth-\\tau"]], "A simple transform": [[17, "A-simple-transform"]], "A. Mathematical Notes": [[14, "A.-Mathematical-Notes"]], "A.1 The Lorenzian Water Wheel Model": [[14, "A.1-The-Lorenzian-Water-Wheel-Model"]], "Academic Integrity": [[5, "academic-integrity"], [21, "academic-integrity"]], "Accuracy": [[15, "Accuracy"], [15, "id3"]], "Accuracy Example": [[10, "Accuracy-Example"]], "Accuracy Summary": [[10, "Accuracy-Summary"]], "Accuracy of Difference Approximations": [[10, "Accuracy-of-Difference-Approximations"]], "Adaptive Stepsize in Runge-Kutta": [[13, "Adaptive-Stepsize-in-Runge-Kutta"]], "Aliasing Error and Nonlinear Instability": [[16, "Aliasing-Error-and-Nonlinear-Instability"]], "An aside about ffts: Using ffts to find a wave envelope": [[17, "An-aside-about-ffts:-Using-ffts-to-find-a-wave-envelope"]], "An aside about ffts: Using the fft to compute correlation": [[17, "An-aside-about-ffts:-Using-the-fft-to-compute-correlation"]], "Appendix: 2 minute intro to object oriented programming": [[13, "Appendix:-2-minute-intro-to-object-oriented-programming"]], "Appendix: Note on Global Energy Balance": [[13, "Appendix:-Note-on-Global-Energy-Balance"]], "Appendix: Organization of the adaptive Runge Kutta routines": [[13, "Appendix:-Organization-of-the-adaptive-Runge-Kutta-routines"]], "April": [[4, "april"], [20, "april"]], "Assignment": [[13, "Assignment"]], "B. References": [[14, "B.-References"]], "Bash and powershell command reference": [[2, "Bash-and-powershell-command-reference"]], "Black Daisies": [[13, "Black-Daisies"]], "Black and White Daisies": [[13, "Black-and-White-Daisies"]], "Books and tutorials": [[2, "Books-and-tutorials"]], "Boundary Conditions": [[15, "Boundary-Conditions"], [15, "id4"], [16, "Boundary-Conditions"]], "Boundary Value Problem": [[8, "Boundary-Value-Problem"]], "Boundedness of the Solution": [[14, "Boundedness-of-the-Solution"]], "Calendar Entry": [[21, "calendar-entry"]], "Character of the optical depth field": [[17, "Character-of-the-optical-depth-field"]], "Characteristic Equation": [[11, "Characteristic-Equation"]], "Check your understanding": [[13, "Check-your-understanding"]], "Choosing a Grid": [[15, "Choosing-a-Grid"]], "Classes and constructors": [[13, "Classes-and-constructors"]], "Classical Solutions": [[16, "Classical-Solutions"]], "Coding Runge-Kutta Adaptive Stepsize Control": [[13, "Coding-Runge-Kutta-Adaptive-Stepsize-Control"]], "Compare power spectra for wvel, theta, sensible heat flux": [[17, "Compare-power-spectra-for-wvel,-theta,-sensible-heat-flux"]], "Computational Mode": [[15, "Computational-Mode"]], "Computational cost of Gaussian elimination": [[11, "Computational-cost-of-Gaussian-elimination"]], "Conclusion": [[13, "Conclusion"]], "Condition Number": [[11, "Condition-Number"]], "Confirm that the fft at negative f is the complex conjugate of the fft at positive f": [[17, "Confirm-that-the-fft-at-negative-f-is-the-complex-conjugate-of-the-fft-at-positive-f"]], "Contents": [[21, "contents"]], "Course Purpose": [[5, "course-purpose"], [21, "course-purpose"]], "Course Structure": [[5, "course-structure"], [21, "course-structure"]], "Creating the course environment": [[1, "Creating-the-course-environment"]], "Daisyworld Steady States": [[13, 
"Daisyworld-Steady-States"]], "Dates for Graduate Class (EOSC 511)": [[4, "dates-for-graduate-class-eosc-511"]], "Dates for Undergraduate Class (ATSC 409)": [[20, "dates-for-undergraduate-class-atsc-409"]], "Decomposition": [[11, "Decomposition"]], "Definition of the Beta-plane": [[16, "Definition-of-the-Beta-plane"]], "Demo: Conduction": [[8, "Demo:-Conduction"]], "Demo: Interpolation": [[8, "Demo:-Interpolation"]], "Designing Adaptive Stepsize Control": [[13, "Designing-Adaptive-Stepsize-Control"]], "Details": [[15, "Details"]], "Determinant": [[11, "Determinant"]], "Determining Stability Properties": [[10, "Determining-Stability-Properties"]], "Difference Approximations of Higher Derivatives": [[10, "Difference-Approximations-of-Higher-Derivatives"]], "Difference Approximations to the First Derivative": [[8, "Difference-Approximations-to-the-First-Derivative"]], "Discretization": [[8, "Discretization"]], "Discretization Quiz": [[8, "Discretization-Quiz"]], "Discretization of the QG equations": [[16, "Discretization-of-the-QG-equations"]], "Eigenvalue Problems": [[11, "Eigenvalue-Problems"]], "Eigenvectors": [[11, "Eigenvectors"]], "Elective Laboratories": [[5, "elective-laboratories"]], "Embedded Runge-Kutta Methods: Estimate of the Truncation Error": [[12, "Embedded-Runge-Kutta-Methods:-Estimate-of-the-Truncation-Error"]], "End of Lab 7a and Beginning of Lab 7b": [[15, "End-of-Lab-7a-and-Beginning-of-Lab-7b"]], "Error Estimate by Step Doubling": [[13, "Error-Estimate-by-Step-Doubling"]], "Error Estimate using Embedded Runge-Kutta": [[13, "Error-Estimate-using-Embedded-Runge-Kutta"]], "Example 10": [[8, "Example-10"]], "Example 11": [[8, "Example-11"]], "Example 9": [[8, "Example-9"]], "Example Eight": [[8, "Example-Eight"], [11, "Example-Eight"]], "Example Five": [[8, "Example-Five"], [11, "Example-Five"]], "Example Four": [[8, "Example-Four"], [11, "Example-Four"]], "Example Nine": [[11, "Example-Nine"]], "Example One": [[8, "Example-One"], [11, "Example-One"], [16, "Example-One"]], "Example Seven": [[8, "Example-Seven"], [11, "Example-Seven"]], "Example Six": [[8, "Example-Six"], [11, "Example-Six"]], "Example Three": [[8, "Example-Three"], [11, "Example-Three"]], "Example Two": [[8, "Example-Two"], [11, "Example-Two"]], "Example: leap-frog": [[10, "Example:-leap-frog"]], "Explicit Fourth-Order Runge-Kutta Method": [[12, "Explicit-Fourth-Order-Runge-Kutta-Method"]], "FFT\u2019s": [[5, "fft-s"]], "February": [[4, "february"], [20, "february"]], "Filtering": [[17, "Filtering"]], "Floating Point Representation of Numbers": [[10, "Floating-Point-Representation-of-Numbers"]], "For MacOS new installs": [[1, "For-MacOS-new-installs"]], "For Windows new installs": [[1, "For-Windows-new-installs"]], "Fork the course repository into your github account": [[1, "Fork-the-course-repository-into-your-github-account"]], "Forward Euler Method": [[8, "Forward-Euler-Method"]], "Full Equations": [[15, "Full-Equations"]], "Full Pivoting": [[11, "Full-Pivoting"]], "Gaussian Elimination": [[11, "Gaussian-Elimination"]], "Generalizations": [[8, "Generalizations"]], "Getting started": [[0, "getting-started"]], "Git": [[2, "Git"]], "Git install": [[1, "Git-install"]], "Github account": [[1, "Github-account"]], "Glossary": [[8, "Glossary"], [11, "Glossary"], [12, "Glossary"], [15, "Glossary"], [16, "Glossary"]], "Goals": [[16, "Goals"]], "Grades": [[5, "grades"], [21, "grades"]], "Graduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: EOSC 511 / ATSC 506": [[5, 
"graduate-numerical-techniques-for-atmosphere-ocean-and-earth-scientists-eosc-511-atsc-506"]], "Higher Derivatives": [[8, "Higher-Derivatives"]], "Higher Order Taylor Methods": [[10, "Higher-Order-Taylor-Methods"]], "How can we control the error?": [[10, "How-can-we-control-the-error?"]], "Inheritance": [[13, "Inheritance"]], "Initialization": [[15, "Initialization"]], "Initializing using yaml": [[13, "Initializing-using-yaml"]], "Install vscode from https://code.visualstudio.com/download": [[3, "Install-vscode-from-https://code.visualstudio.com/download"]], "Instructors": [[5, "instructors"], [21, "instructors"]], "Interpolation Quiz": [[8, "Interpolation-Quiz"]], "Introduce Full Equations": [[15, "Introduce-Full-Equations"]], "Introduce Simple Equations": [[15, "Introduce-Simple-Equations"]], "Introduction": [[10, "Introduction"], [12, "Introduction"], [13, "Introduction"], [14, "Introduction"], [16, "Introduction"], [17, "Introduction"]], "Introduction: Why bother with numerical methods?": [[8, "Introduction:-Why-bother-with-numerical-methods?"]], "Investigation": [[8, "Investigation"]], "Iterative Methods": [[11, "Iterative-Methods"]], "January": [[4, "january"], [20, "january"]], "Lab 2: Stability and accuracy": [[10, "Lab-2:-Stability-and-accuracy"]], "Lab 5: Daisyworld": [[13, "Lab-5:-Daisyworld"]], "Lab 6: The Lorenz equations": [[14, "Lab-6:-The-Lorenz-equations"]], "Laboratory 1: An Introduction to the Numerical Solution of Differential Equations: Discretization (Jan 2024)": [[8, "Laboratory-1:-An-Introduction-to-the-Numerical-Solution-of-Differential-Equations:-Discretization-(Jan-2024)"]], "Laboratory 3: Linear Algebra (Sept. 12, 2017)": [[11, "Laboratory-3:-Linear-Algebra-(Sept.-12,-2017)"]], "Laboratory 7: Solving partial differential equations using an explicit, finite difference method.": [[15, "Laboratory-7:-Solving-partial-differential-equations-using-an-explicit,-finite-difference-method."]], "Laboratory 8: Solution of the Quasi-geostrophic Equations": [[16, "Laboratory-8:-Solution-of-the-Quasi-geostrophic-Equations"]], "Laboratory 9: Fast Fourier Transforms": [[17, "Laboratory-9:-Fast-Fourier-Transforms"]], "Learning Objectives": [[15, "Learning-Objectives"], [16, "Learning-Objectives"]], "Linear Systems": [[11, "Linear-Systems"]], "Linearization about the Steady States": [[14, "Linearization-about-the-Steady-States"]], "Links to rubrics": [[18, "links-to-rubrics"]], "List of Problems": [[8, "List-of-Problems"], [10, "List-of-Problems"], [11, "List-of-Problems"], [12, "List-of-Problems"], [13, "List-of-Problems"], [14, "List-of-Problems"], [15, "List-of-Problems"], [16, "List-of-Problems"]], "Managing problem configurations": [[12, "Managing-problem-configurations"]], "March": [[4, "march"], [20, "march"]], "Mathematical Notes": [[8, "Mathematical-Notes"], [10, "Mathematical-Notes"], [12, "Mathematical-Notes"], [16, "Mathematical-Notes"]], "Matrix Form of the Discrete Equations": [[16, "Matrix-Form-of-the-Discrete-Equations"]], "Matrix Inversion": [[11, "Matrix-Inversion"]], "Meeting Times": [[5, "meeting-times"], [21, "meeting-times"]], "Mid-point and leap-frog": [[10, "Mid-point-and-leap-frog"]], "Movie: Diffusion": [[8, "Movie:-Diffusion"]], "Mutable vs. 
immutable data types": [[12, "Mutable-vs.-immutable-data-types"]], "Named tuples": [[12, "Named-tuples"]], "Neutral Daisies": [[13, "Neutral-Daisies"]], "No variation in y": [[15, "No-variation-in-y"], [15, "id1"], [15, "id2"]], "Not feeling well before class?": [[5, "not-feeling-well-before-class"]], "Note on the Derivation of the Second-Order Runge-Kutta Methods": [[12, "Note-on-the-Derivation-of-the-Second-Order-Runge-Kutta-Methods"]], "Notebooks": [[3, "Notebooks"]], "Numeric notebooks": [[7, "numeric-notebooks"]], "Numerical Integration": [[14, "Numerical-Integration"]], "Numerical Solution": [[15, "Numerical-Solution"]], "Numerical Techniques for Atmosphere, Ocean and Earth Scientists": [[6, "numerical-techniques-for-atmosphere-ocean-and-earth-scientists"]], "Numpy and Python with Matrices": [[11, "Numpy-and-Python-with-Matrices"]], "ODE\u2019s": [[5, "odes"]], "Objectives": [[8, "Objectives"], [10, "Objectives"], [11, "Objectives"], [12, "Objectives"], [13, "Objectives"], [14, "Objectives"], [17, "Objectives"]], "Opening the notebook folder and working with lab 1": [[1, "Opening-the-notebook-folder-and-working-with-lab-1"]], "Optional textbooks": [[19, "optional-textbooks"]], "Ordinary Differential Equations": [[8, "Ordinary-Differential-Equations"]], "Other Approximations": [[8, "Other-Approximations"]], "Other Approximations to the First Derivative": [[10, "Other-Approximations-to-the-First-Derivative"]], "Other Chaotic Systems": [[14, "Other-Chaotic-Systems"]], "Other Methods": [[10, "Other-Methods"]], "Other recommended books": [[10, "Other-recommended-books"]], "Outline of Solution Procedure": [[16, "Outline-of-Solution-Procedure"]], "Overriding initial values in a derived class": [[13, "Overriding-initial-values-in-a-derived-class"]], "PDE\u2019s": [[5, "pdes"]], "Partial Differential Equations": [[8, "Partial-Differential-Equations"], [8, "id3"]], "Partial Pivoting": [[11, "Partial-Pivoting"]], "Passing a derivative function to an integrator": [[12, "Passing-a-derivative-function-to-an-integrator"]], "Physical Example, Poincar\u00e9 Waves": [[15, "Physical-Example,-Poincar\u00e9-Waves"]], "Power spectrum of turbulent vertical velocity": [[17, "Power-spectrum-of-turbulent-vertical-velocity"]], "Powershell and Bash common commands": [[2, "Powershell-and-Bash-common-commands"]], "Predictor-Corrector Methods": [[10, "Predictor-Corrector-Methods"]], "Predictor-Corrector to Start": [[15, "Predictor-Corrector-to-Start"]], "Prerequisites": [[5, "prerequisites"], [11, "Prerequisites"], [21, "prerequisites"]], "Problem Accuracy": [[10, "Problem-Accuracy"]], "Problem Backward Euler": [[10, "Problem-Backward-Euler"]], "Problem Conduction": [[13, "Problem-Conduction"]], "Problem Coupling": [[13, "Problem-Coupling"]], "Problem Eight": [[15, "Problem-Eight"]], "Problem Estimate": [[13, "Problem-Estimate"]], "Problem Five": [[15, "Problem-Five"], [16, "Problem-Five"]], "Problem Four": [[11, "Problem-Four"], [15, "Problem-Four"], [16, "Problem-Four"]], "Problem Initial": [[13, "Problem-Initial"]], "Problem Nine": [[15, "Problem-Nine"]], "Problem One": [[8, "Problem-One"], [11, "Problem-One"], [15, "Problem-One"], [16, "Problem-One"]], "Problem Predator": [[13, "Problem-Predator"]], "Problem Seven": [[15, "Problem-Seven"]], "Problem Six": [[15, "Problem-Six"], [16, "Problem-Six"]], "Problem Stability": [[10, "Problem-Stability"]], "Problem Taylor Series": [[10, "Problem-Taylor-Series"]], "Problem Temperature": [[13, "Problem-Temperature"]], "Problem Three": [[11, "Problem-Three"], [15, 
"Problem-Three"], [16, "Problem-Three"]], "Problem Two": [[8, "Problem-Two"], [11, "Problem-Two"], [15, "Problem-Two"], [16, "Problem-Two"]], "Problem adaptive": [[13, "Problem-adaptive"]], "Problem constant growth": [[13, "Problem-constant-growth"]], "Problem \u2013 lowpass filtering of a 2-d image": [[17, "Problem----lowpass-filtering-of-a-2-d-image"]], "ProblemCodingA": [[12, "ProblemCodingA"]], "ProblemCodingB": [[12, "ProblemCodingB"]], "ProblemCodingC": [[12, "ProblemCodingC"]], "ProblemEmbedded": [[12, "ProblemEmbedded"]], "ProblemMidpoint": [[12, "ProblemMidpoint"]], "ProblemRK4": [[12, "ProblemRK4"]], "ProblemTableau": [[12, "ProblemTableau"]], "Project": [[5, "project"]], "Pulling changes from the github repository": [[2, "Pulling-changes-from-the-github-repository"]], "Python Modules": [[3, "Python-Modules"]], "Python: moving from a notebook to a library": [[12, "Python:-moving-from-a-notebook-to-a-library"]], "Quick Review": [[11, "Quick-Review"]], "Quiz on Jacobian Expansion #2": [[16, "Quiz-on-Jacobian-Expansion-#2"]], "Quiz on Jacobian Expansion #3": [[16, "Quiz-on-Jacobian-Expansion-#3"]], "Quiz on Matrices": [[11, "Quiz-on-Matrices"]], "Quiz on Newton\u2019s Law of Cooling": [[8, "Quiz-on-Newton's-Law-of-Cooling"]], "Quiz: Find the Dispersion Relation": [[15, "Quiz:-Find-the-Dispersion-Relation"]], "Reading a json file back into python": [[12, "Reading-a-json-file-back-into-python"]], "Readings": [[8, "Readings"], [10, "Readings"], [12, "Readings"], [13, "Readings"], [14, "Readings"], [15, "Readings"], [16, "Readings"]], "References": [[8, "References"], [11, "References"], [15, "References"], [16, "References"]], "Right Hand Side": [[16, "Right-Hand-Side"]], "Round-off Error": [[11, "Round-off-Error"]], "Round-off Error and Discretization Error": [[10, "Round-off-Error-and-Discretization-Error"]], "Round-off error:": [[10, "Round-off-error:"]], "Runge-Kutta methods": [[12, "Runge-Kutta-methods"]], "Running Code Cells": [[8, "Running-Code-Cells"]], "Running the constant growth rate demo": [[13, "Running-the-constant-growth-rate-demo"]], "Saving named tuples to a file": [[12, "Saving-named-tuples-to-a-file"]], "Scaling the Equations of Motion": [[16, "Scaling-the-Equations-of-Motion"]], "Second order accuracy": [[10, "Second-order-accuracy"]], "Second-Order Runge-Kutta Methods": [[12, "Second-Order-Runge-Kutta-Methods"]], "Set Laboratories": [[5, "set-laboratories"]], "Setting up the course repository": [[1, "Setting-up-the-course-repository"]], "Simple Equations": [[15, "Simple-Equations"]], "Simple Equations on a Non-staggered Grid": [[15, "Simple-Equations-on-a-Non-staggered-Grid"]], "Simplification of the QG Model Equations": [[16, "Simplification-of-the-QG-Model-Equations"]], "Solution of an ODE Using Linear Algebra": [[11, "Solution-of-an-ODE-Using-Linear-Algebra"]], "Solution of the Poisson Equation by Relaxation": [[16, "Solution-of-the-Poisson-Equation-by-Relaxation"]], "Solution to the Heat Conduction Equation": [[8, "Solution-to-the-Heat-Conduction-Equation"]], "Solving Ordinary Differential Equations with the Runge-Kutta Methods": [[12, "Solving-Ordinary-Differential-Equations-with-the-Runge-Kutta-Methods"]], "Spatial Discretization": [[16, "Spatial-Discretization"]], "Specific example": [[13, "Specific-example"]], "Stability": [[15, "Stability"]], "Stability of Difference Approximations": [[10, "Stability-of-Difference-Approximations"]], "Stability of the Linearized Problem": [[14, "Stability-of-the-Linearized-Problem"]], "Stability: the CFL condition": [[15, 
"Stability:-the-CFL-condition"]], "Staggered Grids": [[15, "Staggered-Grids"]], "Starting the Simulation Full Equations": [[15, "Starting-the-Simulation-Full-Equations"]], "Steady States": [[14, "Steady-States"]], "Stiff Equations": [[10, "Stiff-Equations"]], "Student installs": [[1, "Student-installs"]], "Suggested Extensions": [[3, "Suggested-Extensions"]], "Summary": [[8, "Summary"], [8, "id1"], [8, "id2"], [10, "Summary"], [11, "Summary"], [11, "id3"], [14, "Summary"]], "Summary: Daisy World Equations": [[13, "Summary:-Daisy-World-Equations"]], "Supporting Diversity and Inclusion": [[5, "supporting-diversity-and-inclusion"]], "Supporting Diversity and Inclusions": [[21, "supporting-diversity-and-inclusions"]], "Systems of First-order ODE\u2019s": [[8, "Systems-of-First-order-ODE's"]], "Taylor Polynomials and Taylor Series": [[10, "Taylor-Polynomials-and-Taylor-Series"]], "Temporal Discretization": [[16, "Temporal-Discretization"]], "The Daisy Growth Rate - Coupling to the Environment": [[13, "The-Daisy-Growth-Rate---Coupling-to-the-Environment"]], "The Daisy Population": [[13, "The-Daisy-Population"]], "The Daisyworld Model": [[13, "The-Daisyworld-Model"]], "The Feedback Loop - Feedback Through the Planetary Albedo": [[13, "The-Feedback-Loop---Feedback-Through-the-Planetary-Albedo"]], "The Local Temperature - Dependence on Surface Heat Conductivity": [[13, "The-Local-Temperature---Dependence-on-Surface-Heat-Conductivity"]], "The Lorenz Equations": [[14, "The-Lorenz-Equations"]], "The Midpoint Method: A Two-Stage Runge-Kutta Method": [[12, "The-Midpoint-Method:-A-Two-Stage-Runge-Kutta-Method"]], "The Quasi-Geostrophic Model": [[16, "The-Quasi-Geostrophic-Model"]], "The Runge-Kutta Tableau": [[12, "The-Runge-Kutta-Tableau"]], "The command line shell and git": [[2, "The-command-line-shell-and-git"]], "To configure bash or zsh on MacOS": [[2, "To-configure-bash-or-zsh-on-MacOS"]], "To configure powershell on windows": [[2, "To-configure-powershell-on-windows"]], "Truncation error:": [[10, "Truncation-error:"]], "Undergraduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: ATSC 409": [[21, "undergraduate-numerical-techniques-for-atmosphere-ocean-and-earth-scientists-atsc-409"]], "University Statement on Values and Policies": [[5, "university-statement-on-values-and-policies"], [21, "university-statement-on-values-and-policies"]], "Using Error to Adjust the Stepsize": [[13, "Using-Error-to-Adjust-the-Stepsize"]], "Using the Integrator class": [[14, "Using-the-Integrator-class"]], "Using the command line": [[2, "Using-the-command-line"]], "VScode notes": [[3, "VScode-notes"]], "What is a Matrix?": [[11, "What-is-a-Matrix?"]], "Where does the error come from?": [[10, "Where-does-the-error-come-from?"]], "White Daisies": [[13, "White-Daisies"]], "Why Adaptive Stepsize?": [[13, "Why-Adaptive-Stepsize?"]], "Why bother?": [[13, "Why-bother?"]], "Windowing": [[17, "Windowing"]], "finding the attributes and methods of a class instance": [[13, "finding-the-attributes-and-methods-of-a-class-instance"]], "power spectrum layout": [[17, "power-spectrum-layout"]], "start with wvel": [[17, "start-with-wvel"]], "ubc_fft class": [[17, "ubc_fft-class"]]}, "docnames": ["getting_started", "getting_started/installing_jupyter", "getting_started/python", "getting_started/vscode", "grad_schedule", "gradsyllabus", "index", "notebook_toc", "notebooks/lab1/01-lab1", "notebooks/lab10/01-lab10", "notebooks/lab2/01-lab2", "notebooks/lab3/01-lab3", "notebooks/lab4/01-lab4", "notebooks/lab5/01-lab5", 
"notebooks/lab6/01-lab6", "notebooks/lab7/01-lab7", "notebooks/lab8/01-lab8", "notebooks/lab9/01-lab9", "rubrics", "texts", "ugrad_schedule", "ugradsyllabus"], "envversion": {"nbsphinx": 4, "sphinx": 61, "sphinx.domains.c": 3, "sphinx.domains.changeset": 1, "sphinx.domains.citation": 1, "sphinx.domains.cpp": 9, "sphinx.domains.index": 1, "sphinx.domains.javascript": 3, "sphinx.domains.math": 2, "sphinx.domains.python": 4, "sphinx.domains.rst": 2, "sphinx.domains.std": 2}, "filenames": ["getting_started.rst", "getting_started/installing_jupyter.ipynb", "getting_started/python.ipynb", "getting_started/vscode.ipynb", "grad_schedule.rst", "gradsyllabus.rst", "index.rst", "notebook_toc.rst", "notebooks/lab1/01-lab1.ipynb", "notebooks/lab10/01-lab10.ipynb", "notebooks/lab2/01-lab2.ipynb", "notebooks/lab3/01-lab3.ipynb", "notebooks/lab4/01-lab4.ipynb", "notebooks/lab5/01-lab5.ipynb", "notebooks/lab6/01-lab6.ipynb", "notebooks/lab7/01-lab7.ipynb", "notebooks/lab8/01-lab8.ipynb", "notebooks/lab9/01-lab9.ipynb", "rubrics.rst", "texts.rst", "ugrad_schedule.rst", "ugradsyllabus.rst"], "indexentries": {}, "objects": {}, "objnames": {}, "objtypes": {}, "terms": {"": [2, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "0": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "00": 8, "000": [10, 11], "00000001": 10, "00001": 8, "00005": 11, "0001": 11, "0001e_": 11, "0002": 11, "0004": 11, "000e_": 11, "001": [14, 15], "0010": 11, "0028": 11, "003265": 13, "0038": 8, "01": [1, 14, 15], "011": 8, "025": 17, "04": 15, "05": [11, 12, 13, 17], "0e": 13, "0f": 13, "0time": 10, "1": [0, 2, 4, 5, 7, 9, 10, 11, 12, 13, 15, 16, 17, 20, 21], "10": [1, 5, 9, 10, 11, 12, 13, 14, 15, 16, 17], "100": [5, 13, 14, 15, 16, 17, 21], "1000": 11, "10000": 16, "100000000": 10, "1006": 9, "101": [15, 16], "1015": 9, "109": 16, "10z": 11, "11": [4, 11, 12, 13, 15, 16, 20], "110592": 12, "111": 12, "117": 9, "12": [4, 9, 10, 12, 13, 16, 17, 20], "123": 16, "125": 12, "13": [11, 16, 17], "130": 14, "1332": 8, "1338": 8, "13391": 10, "13391482": 10, "13391483": 10, "13525": 12, "1367": 13, "13824": 12, "14": 16, "141": 14, "1415926535897932385": 10, "1415927": 10, "14336": 12, "1440": 17, "15": [4, 5, 8, 13, 14, 17, 20], "150": 15, "15th": 10, "16": [4, 10, 12, 13, 20, 21], "1631": 12, "164": 16, "16c_": 9, "17": [15, 16], "175": 12, "1771": 12, "18": [4, 11, 16, 20], "18575": 12, "188": 10, "19": [4, 11, 14, 20], "1956": 16, "1962": 14, "1963": [14, 16], "1966": 16, "1976": [15, 16], "198": 10, "1981": [8, 10, 16], "1982": [14, 15], "1983": [13, 15], "1986": [8, 10, 11], "1987": [14, 16], "1988": [11, 16], "1989": 9, "1992": 15, "1993": 14, "1994": [8, 15], "1_": 16, "1a": 17, "1b": 17, "1d": 8, "1dx": 9, "1j": 11, "1n": 11, "2": [2, 4, 5, 7, 8, 9, 11, 12, 14, 15, 19, 20, 21], "20": [8, 9, 10, 14, 15, 16, 17, 21], "200": 11, "2000": [11, 13, 17], "2003": 17, "200301": 10, "2004": 17, "2008": 13, "2013": 10, "2020": 12, "2022": 12, "2048": 17, "20c": 13, "21": [10, 11, 12], "215": 21, "22": [4, 12, 13, 20], "23": [4, 11, 16, 20], "24": [9, 10, 14, 16], "24995": 11, "25": [4, 10, 11, 12, 13, 17, 20], "250": 12, "253": 12, "25m": 17, "26": [4, 12, 13, 20], "27": [12, 14, 16], "27648": 12, "277": [12, 13], "28": [12, 14], "2825": 12, "29": [4, 20], "295": 13, "2_": 16, "2_h": 16, "2_p": 13, "2c_": 9, "2c_j": 9, "2d": [11, 15, 16], "2diff": 16, "2diffgrid": 16, "2dim": 17, "2dt": 15, "2dx": [9, 15], "2e_": 11, "2f_": [11, 16], "2f_0": 11, "2f_2": 11, "2f_n": 11, "2g": 17, "2hz": 17, "2i": 15, "2j": 11, "2k": 10, "2kx": 16, "2n": [9, 11, 15], 
"2nd": [10, 11, 15, 16], "2q": 15, "2t": [8, 10], "2u_": 11, "2u_0": 11, "2u_1": 11, "2u_2": 11, "2u_i": [8, 11], "2u_n": 11, "2x": 11, "2x_": 11, "2y": [8, 10], "3": [4, 5, 7, 8, 9, 10, 12, 13, 14, 15, 17, 19, 20, 21], "30": [8, 10, 14, 15, 16, 21], "300": [15, 17], "308": 10, "30c_j": 9, "31": [14, 16], "312": [13, 17, 21], "316": 21, "327": 10, "329": 14, "335": 10, "341": 14, "346": 14, "35": [12, 15, 17], "35ff": 16, "36": [11, 16], "360": 11, "3600": 15, "36000": 17, "3668": 13, "37": [12, 16], "375": [13, 14], "378": 12, "38": 10, "3c_": 9, "3c_j": 9, "3d": 14, "3dt": 15, "3dx": 15, "3f": [13, 17], "3j": 11, "3x": 11, "3x_": 11, "3y": 11, "4": [4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 20, 21], "40": [5, 8, 12, 13, 15, 16, 17], "400": [10, 15], "4000": [15, 16], "40002": 11, "403ff": 11, "4096": 12, "41": 14, "44": 16, "44275": 12, "44597662": 1, "45": [15, 16], "470": 14, "48": [10, 11], "482": 10, "483": 10, "48384": 12, "490": 17, "498": 17, "4999": 11, "4_e": 13, "4_i": 13, "4c_": 9, "4d": 16, "4diff": 16, "4dx": 15, "4e_": 11, "4f": 17, "4gh": 15, "4th": [8, 10, 12], "4x_": 11, "5": [4, 5, 7, 8, 10, 11, 12, 14, 15, 16, 17, 20, 21], "50": [5, 9, 11, 13, 15, 16, 17, 21], "500": [11, 15, 16, 17], "506": 21, "51": 8, "511": 21, "512": 12, "52": 8, "53": 11, "54": 12, "54501167": 1, "549": 8, "55": [14, 16], "550": 17, "551": 17, "55296": 12, "55c": 13, "5600": 11, "575": 12, "587": 8, "59": 8, "594": [12, 16], "5a": [4, 20], "5dx": 15, "5hz": 17, "5th": 12, "5x": 8, "5x_": 11, "5y": 11, "6": [4, 5, 7, 8, 9, 10, 11, 12, 13, 15, 16, 17, 20, 21], "60": [5, 8, 9, 13, 15, 16, 17, 21], "606": 16, "61": 12, "621": 12, "64": [1, 8, 10], "643699": 17, "65": 15, "66": [8, 12], "6666": 14, "67": 13, "67e": 13, "6c_": 9, "6c_j": 9, "6dx": 15, "6x_": 11, "6y": 11, "6z": 11, "7": [5, 7, 8, 10, 11, 12, 13, 16, 17, 21], "70": [5, 12, 21], "70711": 11, "74": 14, "75": 14, "79": 9, "7a": [4, 7, 20], "7b": 7, "8": [4, 5, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 20, 21], "80": [5, 8, 9, 10, 15, 17, 21], "800": 8, "82": 16, "8213": 16, "8225": 16, "833": 17, "8333": 17, "85": [5, 21], "8c_": 9, "8t": 10, "8x_": 11, "9": [4, 5, 7, 10, 11, 12, 13, 14, 15, 20, 21], "90": [5, 10, 21], "92": [8, 16], "98": 8, "998": 11, "999": 11, "9998": 11, "9999": 11, "9z": 11, "A": [1, 2, 5, 8, 9, 10, 11, 13, 15, 16, 21], "AND": [12, 21], "And": 10, "As": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16], "At": [10, 11, 13, 15, 16], "Be": 10, "But": [9, 10, 14, 15, 16], "By": [2, 8, 10, 11, 14, 15, 16], "For": [0, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "If": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "In": [1, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "It": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "Its": [1, 11], "NO": 15, "NOT": 12, "No": [4, 11, 16, 20], "Not": [15, 16], "Of": 11, "On": [3, 10, 15, 16], "One": [5, 9, 10, 12, 14, 21], "Or": 16, "Such": [12, 15], "That": [8, 9, 10, 11, 13, 14, 16, 17], "The": [0, 1, 3, 5, 6, 8, 9, 10, 11, 15, 17, 19, 21], "Then": [8, 9, 10, 11, 13, 14, 15, 16], "There": [5, 8, 10, 11, 12, 13, 14, 15, 16, 21], "These": [2, 8, 10, 11, 13, 15, 16, 21], "To": [0, 1, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "Will": 14, "With": [10, 11, 14, 16], "_": [8, 9, 10, 11, 12, 15, 16], "_1k_1": 12, "_2k_2": 12, "_3k_3": 12, "_4k_4": 12, "_5k_5": 12, "_6k_6": 12, "__": 16, "__init__": [13, 14, 17], "_asdict": 12, "_h": 16, "_i": 12, "a17": 17, "a_": [9, 11, 12, 16], "a_1": [11, 12], "a_1h": 12, "a_2": [11, 12], "a_2h": 12, "a_3": 11, "a_4": 11, "a_6h": 12, "a_b": 13, "a_g": 
13, "a_h": 16, "a_hneq": 16, "a_i": 12, "a_ih": 12, "a_v": 16, "a_w": 13, "aa": [11, 14], "ab": [11, 17], "abandon": 16, "abbrevi": [8, 16], "abil": [8, 12], "abl": [3, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 21], "about": [2, 5, 8, 9, 10, 11, 12, 13, 15, 16, 21], "abov": [1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "abrupt": 13, "absenc": 15, "absent": 16, "absolut": [11, 13], "absorb": 13, "academ": 15, "acceler": [8, 14, 15, 16], "accept": [1, 12, 13], "access": [1, 5, 10, 12, 13, 21], "accommod": [5, 21], "accompani": 16, "accomplish": [5, 13, 21], "accord": [8, 9, 10, 13], "accordingli": 13, "account": [0, 10, 14, 16], "accur": [8, 10, 11, 12, 13, 14, 15, 16], "accuraci": [8, 9, 12, 13, 21], "accuracy2d": 15, "accuracy_demo": 15, "achiev": [9, 12, 13], "acknowledg": [5, 21], "across": [15, 16, 17], "act": 16, "action": [5, 14, 21], "activ": [1, 5, 16, 17], "actual": [1, 8, 10, 11, 13, 14, 15, 16], "actualerrorlin": 13, "ad": [2, 10, 11, 12, 13, 14, 15], "adapt": [12, 14], "adaptvar": 13, "add": [2, 3, 8, 10, 11, 12, 13, 16], "add_ax": 14, "add_subplot": 12, "addit": [5, 8, 10, 11, 12, 13, 14, 15, 16, 19, 21], "additionaln": 9, "address": [1, 11, 13], "adequ": 15, "adhoc": 13, "adjac": [15, 16], "adjust": [5, 16, 21], "admin": 2, "advanc": 12, "advantag": [10, 11, 12, 13, 14, 16], "advect": [9, 16, 21], "advection2": 9, "advection3": 9, "advection_fun": 9, "advent": 14, "advers": 10, "advic": 5, "af": 9, "affect": [8, 9, 10, 11, 13, 14, 15, 16], "after": [2, 5, 8, 9, 10, 11, 12, 13, 15, 16, 17, 21], "afterward": 17, "again": [3, 8, 9, 10, 11, 12, 13, 15], "against": [10, 12], "agre": 1, "ah": 12, "ahead": [2, 10, 12, 15], "ai": 14, "aid": 9, "aim": 16, "air": [8, 9, 10, 14], "aircraft": 17, "ak_1": 12, "al": [12, 13, 15, 17], "al2vrnkyyd0": 15, "albedo_black": 13, "albedo_grei": 13, "albedo_ground": 13, "albedo_p": 13, "albedo_whit": 13, "algebra": [5, 8, 15, 16, 19], "algorithm": [10, 11, 12, 13], "alia": [2, 17], "align": [3, 8, 10, 12, 14, 15], "all": [1, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "all_info_the_integ": 13, "allen": [5, 15, 21], "allmag": 15, "alln": 9, "allow": [5, 8, 10, 12, 13, 14, 15, 16, 17, 21], "almost": [11, 12, 14, 15, 16], "alon": [5, 10, 13, 14, 21], "along": [8, 9, 10, 11, 12, 13, 15, 16], "alpha": [8, 9, 14, 15], "alpha_": 15, "alpha_b": 13, "alpha_g": 13, "alpha_i": 13, "alpha_p": 13, "alpha_w": 13, "alreadi": [1, 8, 9, 10, 11, 13, 14, 16], "also": [2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "alter": [11, 13], "altern": [8, 10, 12, 13, 16], "although": [8, 9, 10, 11, 15, 16], "altogeth": 10, "alwai": [1, 3, 8, 9, 10, 11, 12, 13, 14, 15, 16], "alwaysn": 9, "amazon": 19, "ambient": 8, "ambit": 13, "amount": [9, 10, 11, 13, 14, 16], "amplif": [11, 16], "amplifi": [11, 16], "amplitud": [14, 15], "an": [1, 2, 3, 5, 7, 9, 10, 13, 14, 16, 21], "anaconda": 1, "analg": 15, "analog": 11, "analogu": [11, 16], "analys": [10, 12, 17], "analysi": [2, 8, 10, 11, 14, 15, 16, 17, 19], "analyt": [8, 12, 13, 14, 15, 16], "analyz": [8, 12, 17], "andn": 9, "andrea": 9, "angl": 14, "angular": [14, 16], "ani": [2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "anim": 14, "annoint": 12, "annular": 17, "annular_averag": 17, "annular_avg": 17, "anoth": [5, 8, 9, 10, 11, 12, 13, 14, 16, 17], "answer": [5, 8, 10, 11, 13, 14, 15, 16, 21], "anyon": [5, 21], "anyth": [14, 15], "anywai": 15, "ap": 10, "apart": [11, 15], "appar": [13, 14], "appear": [8, 10, 11, 12, 13, 14, 15, 16], "append": [11, 12], "append_ax": 17, "appendic": 16, "appendix": [8, 10, 12, 14, 15, 16], 
"appl": 13, "appli": [5, 8, 10, 11, 12, 13, 14, 16, 17, 19, 21], "applic": [8, 10, 11, 14, 16], "appreci": [5, 21], "approach": [5, 8, 10, 11, 13, 14, 15, 16, 17, 21], "appropri": [5, 6, 10, 12, 13, 16, 21], "approx": [8, 10, 11, 14, 15, 16], "approxim": [9, 11, 12, 13, 14, 15, 16], "apr": [4, 20], "apsnew": 10, "ar": [1, 2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "arakawa": [15, 16], "arang": [12, 17], "arc": 14, "area": [5, 8, 9, 13], "arean": 9, "aren": [2, 17], "argument": [2, 9, 11, 13, 14, 15], "aris": [5, 8, 10, 11, 12, 13, 14, 15, 16, 21], "arithmet": [10, 14, 16], "aros": [10, 16], "around": [8, 11, 12, 14, 15, 16, 17], "arrai": [8, 9, 10, 11, 12, 13, 14, 15, 17], "arriv": 13, "arrow": [2, 8, 13], "art": [10, 15], "articl": [10, 13, 14], "artifact": [10, 15, 16], "ascend": 14, "ask": [3, 5, 8, 11, 13, 14, 16, 21], "aspect": [10, 14, 16], "assign": [4, 5, 8, 10, 12, 15, 20, 21], "associ": [3, 11], "assum": [5, 8, 9, 10, 11, 13, 14, 15, 16, 17], "assumpt": [12, 13, 14, 15, 16], "ast": 14, "asymptot": 14, "athlet": [5, 21], "atmopsher": 5, "atmospher": [8, 9, 11, 13, 14, 15, 16, 17], "atn": 9, "atol": 13, "atsc500": 17, "attach": [13, 14], "attain": 14, "attempt": [5, 10, 13, 15, 21], "attent": 2, "attract": 14, "attractor": 14, "attribut": [12, 17], "augment": [11, 13], "austin": 2, "authent": 1, "author": 19, "auto": 2, "auto_data": 17, "auto_fft": 17, "autocorr": 17, "autocorrel": 17, "avail": [1, 2, 5, 8, 10, 11, 13, 16, 21], "averag": [11, 13, 14, 15, 16, 17], "averaged_freq": 17, "avert": 16, "avg_binwidth": 17, "avg_spec": 17, "avoid": [8, 9, 11, 15, 16], "awai": [5, 9, 10, 13, 14, 15, 21], "awar": [2, 16], "ax": [10, 11, 14, 17], "axes3d": 14, "axes_grid1": 17, "axi": [9, 10, 11, 14, 16, 17], "azimuth": [14, 16], "b": [8, 9, 10, 11, 12, 13, 15, 16, 17], "b2": 11, "b4d2ktttw7e": 8, "b_": [11, 12, 16], "b_i": 11, "b_n": 8, "ba": 11, "back": [2, 8, 10, 11, 13, 14, 15, 16], "background": [5, 8, 10, 12, 13, 14, 16, 21], "backward": [8, 9, 12, 15], "backwardn": 9, "balanc": 16, "balloon": [8, 10], "band": 14, "bar": [8, 10], "bare": 13, "barotrop": 16, "bartlett": 17, "base": [1, 5, 8, 10, 11, 12, 13, 14, 15, 16, 21], "basevar": 13, "bash": [0, 1], "bash_profil": 2, "basi": [8, 10], "basic": [8, 10, 11, 13, 14, 16], "basin": 16, "bay": 2, "bc": [11, 16], "bear": 13, "becam": 13, "becaus": [5, 8, 9, 10, 11, 13, 15, 16, 17], "becausen": 9, "becom": [8, 9, 10, 11, 13, 14, 15, 16], "been": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "befor": [10, 11, 13, 14, 15, 16, 17, 21], "began": 14, "begin": [7, 8, 10, 11, 12, 14, 16, 17], "beginn": 2, "behav": [11, 13, 14], "behavior": 13, "behaviour": [8, 10, 12, 14, 16], "behind": [2, 3, 10, 14, 15, 16], "being": [10, 11, 12, 13, 14, 15, 16, 17], "believ": 14, "below": [1, 5, 6, 8, 10, 11, 12, 13, 14, 15, 16, 17, 21], "ben": 9, "benefit": 12, "besid": 2, "best": [5, 8, 10, 12, 13, 14, 15, 16, 17, 21], "beta": [8, 10, 14], "beta_b": 13, "beta_i": 13, "beta_w": 13, "better": [9, 10, 11, 12, 17], "between": [5, 8, 10, 11, 12, 13, 14, 15, 16, 17], "beuler": 10, "beyond": [8, 10, 12, 13, 14, 16], "bf": 12, "bi": 16, "bifurc": 14, "big": [10, 12], "bigger": 10, "biharmon": 16, "bin": [1, 17], "bin_count": 17, "bin_num": 17, "binari": 10, "biologi": 13, "biospher": 13, "birth": 13, "birthrat": 13, "bit": [1, 10, 14], "black": [14, 15], "blackbodi": 13, "blackconc": 13, "blank": 13, "blend": 13, "blindli": [5, 21], "blob": 2, "block": 9, "blog": 17, "blow": [10, 14], "blowingn": 9, "blue": [8, 14, 16], "boltzman": 13, "boltzmann": 13, 
"book": [0, 11, 14, 15], "boston": 8, "both": [2, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "bother": 16, "bott": 9, "bottom": [14, 15, 16, 17], "bound": [8, 10, 13, 14], "boundari": [9, 11, 12, 14, 17], "boundaryn": 9, "box": [3, 9, 11, 14, 16], "boxn": 9, "boyc": [8, 19], "brace": [11, 16], "branch": 2, "brazil": 13, "break": [8, 15], "breviti": 15, "brick": 8, "bridg": 12, "brief": [5, 11, 13, 16, 21], "briefli": 13, "brighter": 13, "bring": 10, "britain": 15, "broken": 12, "browser": [5, 21], "bryan": 16, "bt": [14, 15], "bucket": 14, "bug": 12, "bui": 13, "built": [3, 11, 16], "builtin": [12, 13], "bulirsch": 12, "bullet": 10, "bunni": 13, "buoyanc": [8, 16], "buoyant": 8, "burden": [8, 10, 12, 13, 19], "butn": 9, "butterfli": 14, "button": [1, 8], "bvp": [8, 12], "byte": 10, "c": [1, 3, 8, 9, 10, 11, 12, 13, 14, 15, 16], "c1": [12, 13], "c2": [12, 13], "c3": [12, 13], "c_": [9, 12], "c_0": 8, "c_1": [12, 14], "c_1hy": 12, "c_1k_1": 12, "c_2": [12, 14], "c_2a": 12, "c_2h": 12, "c_2k_2": 12, "c_3": 14, "c_3k_3": 12, "c_4k_4": 12, "c_5k_5": 12, "c_6k_6": 12, "c_i": [8, 12, 14], "c_j": 9, "c_jk_j": 12, "ca": [5, 11, 16, 17, 19, 21], "cal": [10, 15, 16], "calc_trig": 13, "calc_window": 17, "calcul": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "calculu": [5, 8], "call": [1, 8, 10, 11, 12, 13, 14, 15, 16], "cambridg": [8, 11, 15], "came": [3, 14], "can": [1, 2, 3, 5, 8, 9, 11, 12, 13, 14, 15, 16, 17, 19, 21], "cancel": [10, 11, 12, 15], "candid": 16, "cannot": [8, 10, 13], "canva": [5, 10, 12, 13, 21], "capac": 8, "capacit": 16, "captur": [10, 14, 16], "car": 15, "care": [5, 11, 13, 21], "carefulli": [10, 15], "carmen": [9, 15], "carp": 13, "carrot": 13, "cartesian": 17, "case": [1, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "cash": [12, 13], "cat": 2, "catalina": 2, "catastroph": 10, "categori": [13, 14, 17], "caus": [13, 16, 17], "caution": 10, "cax": 17, "cc": [11, 12, 15], "ccc": [11, 15], "cccc": [11, 12], "ccccc": 11, "cccccc": [11, 12], "ccccccc": 11, "cccccccc": 11, "ccccccccc": 11, "ccccccccccc": 11, "ccl": 11, "cd": [1, 2, 12], "cdot": [8, 10, 11, 13, 14, 16], "ceas": 13, "cell": [9, 11, 12, 13, 15, 17], "cell_typ": 9, "center": [8, 9, 10, 13, 15, 16, 17], "centr": [8, 9, 15], "central": 13, "centredn": 9, "centren": 9, "certain": [8, 10, 11, 14, 17], "cfl": 16, "cfm": 10, "ch": 15, "chain": 16, "chanc": 14, "chang": [0, 1, 3, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "chao": [10, 14], "chapter": [2, 8, 10, 12, 14, 15, 16, 17, 19], "charact": [2, 16], "character": 13, "characterist": [10, 13, 14], "charg": 12, "cheaper": [10, 16], "cheat": [5, 21], "chebyshev": 8, "check": [3, 9, 10, 11, 12, 16, 17], "checker": 3, "chen": 8, "chi": [13, 16], "chi_": 16, "chi_prev": 16, "child": 13, "childitem": 2, "children": 13, "chip": 1, "choic": [5, 8, 10, 11, 12, 13, 14, 15, 16], "choleski": 11, "choos": [1, 3, 5, 8, 9, 10, 12, 13, 16, 17], "chose": 8, "chosen": [5, 8, 10, 11, 13, 15, 16], "circ": [15, 16], "circl": [14, 16], "circul": [14, 16], "circumfer": 14, "cl": 11, "class": [3, 6, 8, 10, 11, 12, 16, 21], "classic": [1, 8, 12], "classifi": 16, "classmat": [5, 21], "clean": 10, "clear": [8, 10, 11, 13, 14, 16], "clearli": [10, 13], "clf": 12, "click": [2, 3, 10, 15], "cliff": 8, "climat": [13, 14], "climb": 8, "clipboard": 3, "clobber": 2, "clockwis": [14, 15], "clone": [1, 2, 10], "close": [8, 10, 11, 13, 14, 15, 16, 17], "closer": [10, 16], "cloud": [8, 17], "cm": [8, 15], "cmd": 1, "cmdlet": 2, "co": [8, 14, 15, 16, 17], "coars": 17, "code": [0, 2, 9, 10, 11, 12, 14, 15, 16, 17], 
"codemirror_mod": 9, "coder": 13, "coeff": 12, "coeff_file_nam": [13, 14], "coefffilenam": 13, "coeffici": [8, 9, 11, 12, 13, 16, 17], "coefficientsn": 9, "cold": [5, 14], "collabor": [1, 5, 21], "collaps": [8, 9], "collect": [12, 13, 14], "colon": 13, "color": [13, 15, 17], "colorbar": 17, "colormap": 15, "colour": [3, 15], "column": [11, 13, 15, 16], "com": [0, 1, 2, 17], "combin": [8, 9, 10, 11, 12, 16], "come": [1, 5, 8, 11, 12, 15, 21], "command": [0, 1, 3, 11], "comment": [8, 10, 12, 13], "commit": [2, 3, 10], "commmonli": 13, "common": [0, 10, 11, 16], "commonli": [8, 10, 11, 16], "commun": [5, 21], "compact": [16, 19], "compani": 13, "compar": [8, 9, 10, 11, 12, 13, 14, 15, 16], "comparison": [8, 10, 13, 15, 16], "competit": 13, "compil": 14, "complet": [1, 2, 5, 8, 10, 12, 13, 14, 15, 16, 17, 21], "complex": [10, 13, 14, 15, 16], "complic": [8, 14, 15, 16], "compon": [14, 16], "componentwis": 13, "compos": [8, 11, 14], "comprehens": [13, 19], "comput": [2, 5, 8, 9, 10, 13, 14, 16, 19, 21], "computation": 13, "concentr": [9, 11, 13, 14], "concept": [8, 10, 11, 13, 15], "concern": [5, 9, 10, 11, 12, 13, 14, 16], "conclus": [9, 10, 14], "cond": 11, "conda": [1, 17], "condens": 8, "condit": [8, 9, 10, 12, 13, 14], "conditionn": 9, "conduct": [10, 14], "conduction_quiz": 8, "confid": 10, "config": [1, 13, 14], "configur": [0, 1, 13, 14], "confin": [8, 14, 16], "confirm": [10, 11, 15], "confus": 16, "conj": 17, "conjectur": 11, "conjug": [14, 16], "connect": [1, 5, 13, 15], "consecut": 10, "consequ": [8, 10, 11, 14, 15, 16], "conserv": [9, 10, 13, 14, 16], "consid": [5, 8, 9, 10, 11, 13, 14, 15, 16, 21], "consider": [10, 14, 16, 21], "consist": [5, 11, 12, 15, 16, 17, 21], "constant": [8, 9, 10, 11, 14, 15, 16, 17], "constantgrowth": 13, "construct": [8, 12, 13, 15, 16, 17], "constructor": [12, 14], "consult": [5, 21], "consum": 16, "contact": [5, 21], "contain": [8, 9, 10, 11, 13, 14, 15, 16, 17, 19], "content": [2, 8, 10, 12, 13, 14], "context": [8, 10, 11, 12, 13, 14, 15, 16], "contextn": 9, "continu": [8, 9, 10, 11, 14, 15, 16, 17], "contrast": [8, 9, 12], "contribut": [5, 13, 16], "control": [8, 12, 14], "convect": 14, "conveni": [8, 10, 14, 16], "convent": [13, 17], "converg": [8, 11, 16], "convers": 13, "convert": [12, 16, 17], "convolv": 17, "cookbook": 2, "cool": 12, "cooler": 13, "coord": 14, "coordin": [16, 17], "copi": [1, 2, 5, 10, 12, 13, 17, 21], "corioli": [15, 16], "corn": 13, "corner": 11, "correct": [1, 3, 5, 10, 11, 15, 17, 21], "corrector": [12, 16], "correspond": [8, 10, 11, 12, 13, 14, 15, 16, 17], "correspondingli": 13, "corrupt": 16, "cosin": 17, "cost": [10, 12, 13, 14, 16], "costli": [10, 11], "could": [10, 11, 13, 14, 16], "count": [2, 10, 11, 14, 16], "counter": [14, 15], "counteract": 13, "counterpart": [10, 15, 16], "coupl": [8, 12, 15, 16], "courant": [9, 15], "cours": [0, 4, 6, 8, 10, 11, 13, 19, 20], "cover": [8, 10, 13, 21], "coverag": 13, "cp": [1, 2], "cpu": 16, "crc": 11, "creat": [0, 2, 5, 10, 12, 13, 17, 21], "credit": [5, 21], "crest": 15, "crise": [5, 21], "criteria": [10, 15], "critic": [8, 14], "cropland": 13, "cross": 16, "csp": 2, "ctrl": 8, "cubic": 8, "cultur": [5, 21], "cure": 10, "curl": 16, "current": [1, 2, 8, 10, 12, 13, 14, 15, 16], "curv": [8, 9, 10, 12, 14, 15, 16], "curvatur": 16, "cushman": 15, "customari": [10, 16], "cut": [1, 12], "cvybmbfyrxm": 15, "cycl": [11, 17], "cylind": 14, "d": [8, 10, 11, 12, 13, 14, 15, 16, 19], "d_1": 10, "d_2": 10, "d_3": 10, "d_i": 10, "d_k": 10, "da": 8, "da_b": 13, "da_w": 13, "dai": [5, 16, 21], 
"daisi": 14, "daisyworld": 14, "damp": [10, 12, 14, 16], "darker": 13, "data": [1, 2, 8, 13, 17], "data_download": 17, "datafram": 13, "dataset": 17, "datatyp": 12, "date": [5, 21], "david": 2, "ddccixlak2u": 14, "ddot": 11, "de": [8, 13, 15, 19], "deadlin": [5, 21], "deal": [8, 10, 11, 16], "dealt": [10, 12], "death": 13, "decai": [10, 11, 14, 15], "decid": [8, 10, 13], "decim": 10, "decis": 9, "declar": 10, "decompos": 16, "decoupl": 15, "decreas": [8, 9, 10, 12, 13, 15], "decreasingli": 17, "deep": [15, 16], "def": [12, 13, 14, 15, 17], "default": [1, 2, 8, 10, 11, 13, 14], "defici": 14, "defin": [8, 9, 10, 11, 13, 14, 15, 16, 17], "definen": 9, "definit": [8, 9, 10, 11, 12, 13], "definitenessn": 9, "deflect": [15, 16], "deform": 9, "deg": 10, "degc": 10, "degener": 11, "degre": 14, "delai": 10, "delet": 2, "delic": 13, "delta": [8, 10, 12, 16], "delta_": [12, 13, 15], "delta_i": 15, "delta_k": 17, "delta_x": 15, "deltat": 17, "delv": [5, 10, 21], "demo": [12, 15, 16], "demonstr": [3, 8, 10, 13, 16], "denomin": 10, "denot": [13, 15, 16], "dens": 10, "densiti": [8, 10, 13, 16, 17], "depend": [8, 9, 10, 11, 12, 14, 15, 16, 17], "depict": [14, 15, 16], "deplet": 14, "depth": [2, 5, 10, 14, 15, 16, 19, 21], "deriv": [9, 11, 14, 15, 16], "derivative_approx": 8, "derivativesn": 9, "derivit": 13, "derivs4": 12, "derivs5": [13, 14], "descend": 14, "describ": [8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "describedn": 9, "descript": [2, 11, 16, 17], "deserv": [5, 21], "design": [8, 10, 16, 17], "desir": [11, 12, 13], "det": [11, 14], "detail": [2, 5, 6, 8, 9, 10, 11, 13, 14, 16, 21], "detect": 17, "determin": [5, 8, 12, 13, 14, 15, 16], "determinist": 14, "devast": 14, "develop": [8, 9, 10, 13, 16], "deviat": 13, "devis": 13, "df": [12, 16], "diagon": [11, 16], "diagram": [14, 16], "dial": 13, "dict": [12, 13, 14, 17], "dictionari": [10, 12, 13, 14], "did": [4, 10, 11, 14, 16], "didn": 10, "die": 10, "diego": [11, 16], "differ": [1, 7, 9, 11, 12, 13, 14, 16, 17], "differenc": [9, 16], "differencingn": 9, "differenti": [7, 10, 11, 13, 14, 16, 19, 21], "differentn": 9, "difficult": [8, 10, 12, 13, 14], "difficulti": [10, 13, 14, 16], "difficultn": 9, "diffus": [9, 11], "digit": [10, 11], "dimens": [11, 13, 15, 16], "dimension": [8, 11, 14, 15, 16, 17], "dimensionless": [14, 16], "diprima": 8, "dir": [2, 13], "direct": [5, 8, 9, 11, 13, 14, 15, 16], "directli": [5, 10, 11, 14, 16, 21], "directori": [1, 2, 12, 13, 17], "directoryn": 9, "dirichlet": [11, 15], "disabl": [5, 21], "disadvantag": [10, 16], "disciplin": 19, "discov": [12, 13, 14, 15, 16], "discret": [11, 15, 17], "discretization_quiz": 8, "discrimin": [5, 21], "discuss": [5, 8, 10, 11, 12, 13, 14, 15, 16, 17], "dish": 15, "dismal": 15, "disord": 14, "disp_a": 15, "disp_analyt": 15, "disp_b": 15, "disp_c": 15, "dispersion_2d": 15, "dispersion_quiz": 15, "dispersionn": 9, "displac": [8, 14], "displai": [8, 9, 14, 15, 16], "display_nam": 9, "displaystyl": [8, 12], "distanc": [8, 9, 13, 14, 15], "distinct": [10, 14], "distort": [14, 17], "distribut": [8, 9, 11], "div": 11, "diverg": [13, 14, 16], "divid": [8, 9, 11, 13, 14, 16, 17], "divis": 11, "do": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "do_exampl": 12, "do_fft": 17, "do_plot": 17, "doc": [1, 12, 13, 17], "document": [12, 13, 16], "doe": [5, 8, 9, 11, 12, 13, 14, 15, 16, 17, 21], "doesn": [9, 13, 15, 16, 17], "domain": [8, 11, 15, 16, 17], "domin": [5, 8, 10, 16, 21], "don": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 16, 17, 21], "done": [3, 5, 10, 11, 13, 14, 16, 21], "dot": [8, 10, 11, 
14, 16], "doubl": [10, 14], "doublesplat": 12, "down": [1, 2, 8, 10, 11, 14, 15, 16], "download": [0, 1, 17, 19], "dramat": 13, "draw": 17, "drawback": [11, 12], "drawn": 14, "dream": 13, "drive": [10, 13], "driven": [14, 16], "driver": 12, "drop": [10, 12, 13, 14, 15, 16], "droplet": 9, "dt": [8, 9, 10, 12, 13, 14, 15, 16, 17], "dtfailmax": 13, "dtfailmin": 13, "dtmin": 13, "dtpassmax": 13, "dtpassmin": 13, "dtype": 17, "du": [8, 10, 16], "dub": 13, "due": [4, 5, 9, 10, 11, 13, 14, 15, 16, 20, 21], "dummi": 8, "dump": [12, 13], "dumpit": 13, "duplic": [10, 16], "dure": [5, 9, 10, 13, 21], "dx": [8, 9, 11, 14, 15, 16], "dy": [8, 10, 12, 13, 14, 16], "dynam": [14, 15, 16], "dz": [10, 14, 16], "e": [2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "e_": [11, 13], "e_k": 17, "each": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "earli": [5, 16], "earlier": [10, 13, 14], "earliest": 15, "earth": [10, 13, 15, 16], "easi": [9, 10, 11, 12, 13, 14, 15, 16], "easier": [8, 11, 12], "easiest": [10, 14], "easili": [8, 10, 11, 13, 16], "east": 15, "easterli": 16, "ebook": 2, "echo": 2, "ecologi": 13, "eddi": 16, "edit": [3, 8, 11, 12, 13, 15, 16, 19], "editor": [3, 14, 15], "edward": [10, 14], "effect": [8, 10, 11, 13, 14, 15, 16], "effici": [11, 12, 13, 14, 16], "effort": [5, 21], "eg": [8, 15], "eggplant": 13, "eig": 11, "eigen0": 14, "eigen01": 14, "eigenvalu": [10, 14], "eigenvector": 14, "eight": 10, "eighth": [4, 20], "eignevalu": 14, "eigval": 11, "either": [2, 3, 5, 9, 10, 11, 13, 15], "ek8": 15, "ekman": 16, "elabor": 13, "elaps": 14, "eleg": 16, "elemenatari": 19, "element": [11, 12, 13, 16, 17], "elementari": [8, 11], "elev": [14, 15], "eleventh": [4, 20], "elimin": [10, 15, 16], "ell": [9, 11, 15], "ell0": 9, "ell1": 9, "ell2": 9, "ell3": 9, "ell4": 9, "els": [13, 14], "elsewher": [8, 15], "email": [1, 5], "embodi": [14, 16], "emiss": 13, "emit": 13, "empir": 8, "emploi": [8, 11, 16], "empti": [11, 12, 13, 17], "empty_lik": [12, 13, 14, 17], "en": 1, "enabl": 10, "encod": 9, "encount": 16, "encourag": [5, 10, 21], "end": [5, 7, 8, 9, 10, 11, 12, 13, 14, 16, 17, 21], "endpoint": 10, "energi": [10, 15, 16], "enforc": 15, "engin": 2, "englewood": 8, "enhanc": 10, "enough": [5, 8, 9, 10, 11, 13, 14, 15, 16, 21], "ensembl": 17, "enstrophi": 16, "ensur": [5, 13, 16, 21], "enter": [1, 8, 11, 15], "entir": [10, 15, 17], "entri": [11, 13, 16, 17], "env": 1, "envelope_sig": 17, "envelopewav": 17, "environ": [0, 5, 17, 21], "environment": 13, "eo": 17, "eoa": [5, 21], "eosc": 21, "epsilon": [9, 13, 16], "eq": [9, 10, 12, 13, 14, 16], "eq_eigen0": 14, "eq_eigen01": 14, "eqn": [9, 15, 16], "equal": [8, 9, 10, 13, 14, 15, 16], "equat": [7, 9, 19, 21], "equilibrium": [8, 10, 11, 13], "equip": 17, "equiv": [12, 13, 15, 16], "equival": [2, 8, 10, 16, 21], "error": [8, 14, 17], "errorlist": [13, 14], "escap": 11, "especi": [10, 11], "essenti": [8, 12, 13, 16], "est": [12, 13], "estim": [8, 10, 15, 17, 21], "estimatederrorlin": 13, "et": [12, 13, 15, 17], "eta": 13, "etc": [3, 5, 8, 11, 12, 13, 14, 15, 21], "euler": [9, 12, 14, 15, 16], "euler4": 12, "eulerinter41": 12, "evalu": [4, 8, 10, 12, 13, 16, 17, 20], "evapor": 8, "evelop": 17, "even": [8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "eventu": [10, 11, 13, 14], "ever": [11, 14], "everi": [8, 11, 13, 14, 15, 16, 17], "everyth": [11, 13], "everywher": 3, "evid": [14, 16], "evlout": 14, "evolut": [8, 14, 16], "evolv": [10, 16], "exact": [8, 10, 11, 12, 13, 14, 15, 16, 17], "exactli": [5, 8, 10, 11, 16, 21], "exacttemp": 10, "exacttim": 10, "examin": [5, 
11, 21], "exampl": [1, 9, 12, 14, 17, 19], "exc_info": 12, "excel": [5, 21], "except": [8, 10, 11, 12, 15, 16], "exchang": 11, "execut": [1, 9, 13, 16], "execution_count": 9, "executionpolici": 2, "exercis": [5, 8, 11, 14, 15, 21], "exert": 13, "exhibit": [8, 10, 14], "exist": [2, 10, 11, 13, 14, 16], "exit": 2, "exp": [8, 9, 10, 12, 13, 15, 17], "expand": [9, 10, 11, 12, 14, 15], "expans": [10, 11, 12, 14], "expect": [5, 10, 11, 12, 14, 15, 17, 21], "expens": [10, 11, 12, 13, 14, 16], "experi": [5, 9, 10, 13, 14, 16, 21], "experienc": 16, "experiment": 8, "expertis": 5, "explain": [8, 9, 10, 11, 12, 13, 15, 16], "explan": [5, 13, 14, 16, 21], "explicit": [7, 8, 10, 16], "explicitli": [8, 11], "explicitrk1": 12, "explicitrk2": 12, "explod": 16, "exploit": [11, 13, 16], "explor": [2, 10, 12, 13, 14, 17], "expon": [10, 13, 17], "exponenti": [8, 13, 15], "express": [8, 10, 12, 13, 16], "extend": [8, 13, 14, 16, 17], "extens": [0, 10], "extent": 16, "extim": 13, "extra": [9, 10, 11, 12, 13, 14], "extract": 13, "extran": 9, "extrem": [13, 16], "ey": [14, 15, 17], "f": [1, 8, 9, 10, 11, 12, 13, 14, 15, 16], "f_": [9, 11, 16], "f_0": [11, 16], "f_1": 11, "f_2": 11, "f_3": 11, "f_b": 13, "f_i": 11, "f_n": 11, "f_t": 12, "f_x": 14, "f_y": 14, "f_yi": 12, "f_z": 14, "facm": 17, "facp": 17, "fact": [8, 10, 11, 12, 13, 14, 16, 17], "factor": [8, 9, 10, 11, 13, 14, 16], "factori": [10, 13], "fail": [5, 8, 12, 13, 15, 21], "fair": [8, 10, 12, 13, 19], "fairli": [8, 13], "fals": [8, 9], "famili": [10, 12], "familiar": [11, 14], "famou": 12, "fangohr": 2, "far": [8, 9, 10, 11, 12, 13, 15, 16], "fascin": 13, "fashion": 15, "fast": [15, 16], "faster": [11, 13, 15], "fastest": 14, "favorit": 2, "feasibl": 14, "featur": [3, 13], "feb": [4, 20], "feed": 13, "feel": [16, 21], "fehlberg": [12, 13], "fertil": 13, "fetch": [2, 10], "feuler": [8, 10], "few": [8, 10, 11, 13, 14, 16], "fewer": [5, 13, 14, 21], "fft2": 17, "fft_data": 17, "fft_shift": 17, "fftfreq": 17, "fftshift": 17, "fictiti": [11, 16], "field": [9, 16], "fifth": [4, 10, 11, 12, 13, 20], "fig": [9, 10, 14, 17], "figsiz": [10, 12, 14, 17], "figur": [5, 8, 9, 10, 11, 12, 14, 15, 16], "file": [1, 2, 10, 11, 13, 15, 17], "file_extens": 9, "filelist": 17, "filenam": [2, 8, 9, 14, 15, 16, 17], "fill": [8, 14], "filt_fft": 17, "filter": [13, 16], "filter_func": 17, "filtered_wvel": 17, "filterwarn": 17, "fin": 17, "final": [2, 5, 11, 12, 13, 14, 15, 16, 21], "find": [8, 9, 10, 11, 12, 14, 16], "find_temp": 13, "finder": 2, "fine": [10, 14, 16], "finish": [11, 16], "finit": [7, 8, 10, 11, 12, 14, 16], "first": [2, 4, 5, 6, 9, 11, 12, 13, 14, 15, 16, 17, 20, 21], "firsthalf": 17, "fit": [14, 15], "five": [5, 10, 21], "fix": [10, 11, 12, 13, 14, 15], "fixed_growth": 13, "flanneri": 15, "flap": 14, "flat": [15, 16], "flatten": 17, "flavor": 12, "flexibl": [5, 12, 14, 16, 21], "flip": 14, "float": [11, 13, 14, 17], "float64": [12, 13, 17], "floor": 17, "flow": [8, 9, 11, 14, 15, 16], "flu": 5, "fluid": [8, 14, 15, 16], "flux": [9, 13, 16], "flux2": 9, "fluxform": 9, "fo": 16, "focu": [10, 12], "focus": 3, "folder": [0, 2, 3], "follow": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "forc": [2, 8, 12, 15, 16], "forecast": 14, "forest": 13, "forev": 14, "forget": [10, 16], "fork": [0, 2], "form": [8, 9, 10, 11, 12, 14, 15], "format": [5, 9, 13, 14, 17, 21], "formul": 11, "formula": [8, 9, 10, 11, 12, 14, 15, 16], "forn": 9, "forth": 14, "fortran": [12, 15, 17], "forward": [9, 10, 12, 14, 15, 16], "found": [1, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 19], 
"four": [3, 5, 10, 12, 13, 14, 21], "fourgrid": 15, "fourier": [8, 10, 16], "fourth": [4, 10, 13, 16, 20], "foward": 10, "frac": [8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "fractal": 14, "fractiion": 13, "fraction": 13, "frame": [15, 16], "free": [1, 2, 13, 14, 16, 19], "freedom": [5, 21], "freq": 17, "freq_bin_width": 17, "frequenc": [15, 16, 17], "frequenic": 17, "fresh": [1, 11], "friction": [14, 16], "friedrich": 15, "frog": [14, 15, 16], "from": [0, 1, 4, 5, 8, 9, 11, 13, 14, 15, 16, 17, 20, 21], "from_record": 13, "front": [9, 10], "fruit": 13, "fu": [15, 16], "full": [5, 16, 17, 21], "fullest": 13, "fulli": [5, 21], "fun_nam": 10, "function": [3, 8, 9, 10, 11, 13, 14, 15, 16, 17], "functionn": 9, "fundament": [2, 10, 12], "further": [8, 10, 11, 12, 13, 16], "furthermor": [8, 10, 11], "futur": [10, 12, 14], "futurewarn": [14, 17], "fv": [15, 16], "fv_": 15, "g": [5, 8, 11, 12, 13, 14, 15, 16, 17], "gaia": 13, "gain": [8, 12, 13, 14, 17], "game": [5, 21], "gamma": [8, 10], "gap": 12, "garcia": 8, "garp": [15, 16], "gase": 14, "gauss": 16, "gaussian": [10, 16], "gaussiann": 9, "gc": 2, "gci": 2, "ge": 16, "gener": [2, 5, 9, 10, 11, 12, 13, 14, 16, 21], "geophys": [10, 13, 15, 16], "geostrop": 16, "geotroph": 16, "get": [1, 2, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "get_init": 12, "getmemb": 13, "getn": 9, "getsiz": 17, "getting_start": 10, "ggplot": [10, 14, 17], "gh": [14, 15], "gh_": 15, "ghdt": 15, "ghk": 15, "ghost": [11, 16], "ghost3": 16, "gilbert": 19, "gill": 15, "git": [0, 3, 12], "github": [0, 5, 10, 21], "gitlen": 3, "give": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "given": [2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "glanc": 16, "gleick": 14, "global": [1, 10, 14, 15, 16], "globe": [14, 16], "go": [1, 2, 6, 8, 9, 10, 11, 14, 15, 16], "goal": [9, 11, 12, 13, 17, 21], "goe": [10, 13, 15, 17], "gone": 10, "good": [2, 10, 15, 16, 17, 19, 21], "goodn": 9, "got": [5, 21], "govern": [8, 11, 13, 14, 15, 16], "grace": [5, 11, 21], "grad": [6, 18], "grade": 10, "gradient": [14, 15, 16], "gram": 8, "granit": 8, "grant": 21, "graph": [8, 10, 13, 15, 17], "graph_spectrum": 17, "graphic": 11, "gravit": 16, "graviti": [14, 15], "great": [10, 11, 12, 13, 15], "greater": [9, 10, 11, 13, 14, 15, 19], "greatest": 11, "greatli": [5, 10], "green": 14, "greenhous": 14, "grei": 13, "grep": 2, "greyconc": 13, "grid": [8, 9, 10, 11, 14, 16], "grid1": 15, "gridn": 9, "groceri": 13, "gross": 13, "ground": 13, "group": [5, 15, 16, 21], "grow": [8, 13, 14, 16], "growth": [10, 15, 21], "growti": 13, "guarante": [9, 12], "guess": [10, 16], "gui": 2, "guid": [2, 12, 14], "guidanc": [5, 21], "guo": [9, 15], "h": [10, 11, 12, 13, 14, 15, 16, 17], "h_": [13, 15], "ha": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "habit": 13, "had": [8, 10, 11, 14], "haf_t": 12, "haf_yi": 12, "hai": 12, "half": [5, 10, 12, 13, 15, 16, 17], "half_hz_index": 17, "halfpoint": 17, "hall": [8, 14, 15], "halv": [12, 15], "han": 10, "hand": [5, 8, 9, 10, 11, 12, 13, 14, 15, 21], "handl": [8, 9, 10, 12, 15, 16], "haphazard": 14, "happen": [5, 8, 9, 13, 14, 15, 16], "happi": [14, 16], "harass": [5, 21], "harcourt": [11, 16], "hard": [11, 13], "hardcod": 12, "harmon": [10, 12], "hashabl": 12, "hasn": 14, "hat": [15, 16], "have": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "haven": [10, 17], "hbox": 9, "he": [14, 16], "head": 8, "headach": 16, "health": 5, "healthi": [5, 21], "heat": [9, 10, 11, 14], "heavili": 14, "height": [8, 14, 15], "held": [8, 11], "helium": 8, "help": [2, 5, 6, 10, 11, 13, 14, 15, 
16, 17, 21], "henc": [8, 10, 11, 14, 15, 16], "here": [1, 2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "hesit": [5, 21], "heun": [12, 14], "hf": 12, "hg": 15, "hgq66frbbe": 15, "hi": [2, 14, 16], "high": [10, 11, 16], "higher": [9, 12, 13, 16, 17], "highest": [5, 17, 21], "highli": 13, "hilbert": 17, "hint": [8, 9, 10, 11, 13, 14, 16, 17], "hist": 17, "histor": [5, 14, 15, 21], "histori": [10, 12, 13, 14], "hit": [1, 2, 8, 17], "hline": 12, "hnew": 13, "hnewnorm": 13, "hold": [1, 9, 11, 12, 13, 14], "hole": 10, "home": [1, 2, 5], "homogen": [11, 15, 16], "honesti": [5, 21], "honour": [5, 21], "hope": [8, 11, 13, 14], "hopefulli": 14, "hopeless": 13, "horizont": [9, 14, 15, 16, 17], "hot": 11, "hotter": 13, "hour": [5, 9, 15, 21], "how": [2, 5, 8, 9, 11, 12, 13, 14, 15, 16, 17, 21], "howev": [8, 10, 11, 12, 13, 14, 15, 16, 17], "html": [1, 5, 10, 13, 21], "http": [0, 1, 2, 5, 10, 13, 17, 21], "hub": 1, "human": 15, "hundr": 16, "hy": 12, "hydrodynam": 14, "hydrostat": 16, "hypothesi": 13, "hysteresi": 13, "hz": 17, "i": [1, 2, 3, 5, 8, 9, 10, 12, 13, 14, 15, 16, 21], "i1": 11, "i2": 11, "i3": 11, "i_": 9, "i_j": 9, "iat": 15, "ib": [14, 15], "ibvp": 8, "ic": 8, "icon": 10, "id": [1, 11], "idea": [5, 8, 10, 11, 12, 14, 15, 16, 21], "ideal": [9, 14], "ident": [5, 11, 13, 15, 16, 17, 21], "identifi": [5, 8, 10], "ie": [9, 15], "ifft": 17, "ifftshift": 17, "ignor": [2, 8, 10, 14, 15, 16, 17], "ii": [10, 11, 15], "iii": 10, "ij": [11, 12], "ikd": 15, "ikgh_": 15, "ill": [5, 11, 21], "illustr": [8, 10, 11, 12, 13, 14, 16], "im0": 17, "im1": 17, "imag": [8, 9, 14, 15, 16], "imag_coeff": 17, "imagen": 9, "imagin": [10, 14], "imaginari": [10, 17], "immedi": [8, 10, 14, 16], "immers": 10, "impact": [5, 17, 21], "implement": [12, 13, 16], "impli": [12, 13, 14, 15, 16], "implicit": [10, 12, 16], "implicitli": [10, 12], "import": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "importantli": 11, "importlib": 15, "imposs": [8, 9, 13, 14], "improv": [5, 8, 10, 11, 16, 21], "imshow": 17, "inaccur": 11, "inaccuraci": [11, 13], "inadequ": 8, "incid": 13, "includ": [2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "incompress": [14, 16], "incorrect": [5, 21], "increas": [8, 9, 10, 12, 13, 14, 15, 16], "increment": 13, "inde": 13, "indent": 12, "independ": [8, 10, 14, 15, 16], "index": [2, 9, 11, 12, 16, 17], "indic": [8, 12, 13, 14, 16], "individu": [5, 14, 21], "ineffici": [10, 11, 16], "inequ": 10, "inevit": 10, "inexpens": 13, "infinit": [8, 10, 11], "inflow": 16, "influenc": [11, 14], "influenti": [5, 21], "inform": [2, 3, 8, 10, 12, 13, 14, 15, 16, 17, 19], "infti": [8, 10, 14, 16, 17], "inhomogen": 11, "init": 13, "init_dict": 13, "init_num": 16, "initail": 13, "initi": [5, 8, 9, 10, 11, 12, 14, 16, 21], "initial_cond": 12, "initialcond": 12, "initialdict": 12, "initialv": 12, "initialvar": 12, "initinter41": 12, "inittup": 12, "initvar": [13, 14], "inlin": [10, 14, 15, 17], "inlinen": 9, "inn": 9, "input": [2, 9, 10, 12, 13, 16, 17], "inputdict": 12, "insert": [16, 17], "insid": [1, 13], "insight": [8, 13], "insignific": 14, "inspect": 13, "instabl": [8, 9, 10, 14, 15], "instal": [0, 10, 17], "instanc": 12, "instead": [1, 8, 9, 10, 11, 12, 13, 15], "instruct": [1, 10], "insul": [8, 11, 13], "int": [8, 9, 17], "int_": [8, 9, 16, 17], "int_0": [8, 11, 17], "integ51": 13, "integ53": 13, "integ54": 13, "integ55": 13, "integ61": 14, "integcoupl": 13, "integr": [3, 8, 9, 10, 11, 13, 15, 16, 17], "integrator61": 14, "intel": 1, "intend": 5, "intens": 14, "inter": 8, "interact": [2, 5, 8, 12, 13, 14, 
15, 21], "interactive1": 15, "intercept": 17, "interest": [8, 13, 14, 16, 17], "interior": [8, 11, 16], "interlock": 13, "intermedi": 8, "intern": [8, 13, 15], "interp": 8, "interpol": [9, 13], "interpol_f": 8, "interpol_g": 8, "interpolation_quiz": 8, "interpret": [3, 14], "interv": [8, 10, 12, 13, 14], "intial": 13, "intim": 14, "intrins": [10, 14], "intro": [14, 19], "introduc": [8, 9, 10, 11, 13, 14, 16, 17], "introducint": 13, "introduct": [0, 2, 11, 15, 19, 21], "introductori": [8, 11, 15, 21], "intuit": 15, "inv": 11, "invers": 17, "invert": 11, "investig": [10, 13, 14, 15, 16], "involv": [5, 8, 9, 10, 12, 13, 14, 16, 21], "io": [1, 2, 5, 10, 21], "ip": 8, "ipeer": [4, 20], "ipykernel": 9, "ipynb": [1, 2, 9, 10, 13], "ipython": [2, 8, 9, 14, 15, 16], "ipython3": 9, "irregular": 14, "ismethod": 13, "isn": 9, "isol": [13, 17], "issu": [1, 8, 16], "item": [2, 10, 13, 16], "itemtyp": 2, "iter": [10, 16], "its": [2, 3, 8, 9, 10, 11, 13, 14, 15, 16], "itself": [8, 10, 11, 12, 13, 14, 16], "itsn": 9, "iv": 10, "ivp": [8, 12], "iwu_": 15, "j": [8, 9, 11, 12, 14, 15, 16, 17], "jac": 16, "jacobi": [11, 16], "jacobian_2": 16, "jacobian_3": 16, "jake": [2, 14, 17], "jakevdp": 2, "jan": [4, 12, 20], "ji": 11, "jigsaw": 10, "job": [9, 16], "john": [8, 16], "join": [5, 8, 11], "journal": [8, 14, 16], "jovanovich": [11, 16], "jrjohansson": 2, "jsonin": 12, "jsonout": 12, "juli": 13, "jump": [10, 13, 14, 16], "jupyt": [1, 3, 10, 13], "jupytext": 9, "just": [1, 3, 8, 9, 10, 11, 13, 14, 15, 16, 17], "k": [9, 10, 11, 13, 14, 15, 16, 17], "k_": 16, "k_1": [10, 12], "k_2": [10, 12], "k_3": 12, "k_4": 12, "k_6": 12, "k_b": 13, "k_bin": 17, "k_i": [12, 13], "k_j": 12, "k_val": 17, "k_w": 13, "k_x": 17, "k_y": 17, "kappa": [8, 16], "karp": [12, 13], "kd": 15, "ke_": 11, "keen": [14, 16], "keep": [1, 2, 10, 11, 12, 13, 14, 15, 16, 21], "kei": [1, 2, 5, 10, 12, 13, 14, 17], "kent": 8, "kept": 16, "kernel": 3, "kernelspec": 9, "keyboard": 2, "keyword": [10, 12], "kg": 16, "kh": 15, "khichin": 17, "kilometr": 16, "kind": [11, 13], "kinet": 16, "kj": 11, "km": [9, 15, 17], "kmd": 15, "know": [2, 5, 8, 9, 10, 11, 12, 13, 14, 16, 17, 21], "knowledg": [5, 16], "known": [8, 10, 11, 12, 13, 14, 15, 16, 17], "knum": 17, "ko": 13, "kol": 17, "kol_offset": 17, "kol_slop": 17, "kolmogorov": 17, "kradial": 17, "kutt": 13, "kutta": [10, 14, 16], "kutta4": 12, "kuttan": 14, "kx": [15, 16, 17], "l": [2, 8, 9, 11, 12, 13, 15], "l1": [11, 12], "l2": [11, 12], "l3": [11, 12], "l4": 12, "l_": 11, "l_0": 13, "la": 8, "lab": [0, 4, 5, 6, 7, 8, 11, 12, 16, 17, 19, 20, 21], "lab1": [1, 8], "lab10": 9, "lab2": [1, 10], "lab2_funct": 10, "lab3": 11, "lab4": 12, "lab4_funct": 12, "lab5": [13, 14], "lab5_fun": [13, 14], "lab6": 14, "lab7": 15, "lab8": 16, "lab9": 17, "lab_exampl": 9, "label": [2, 10, 12, 13, 14, 17], "laboratori": [7, 9, 21], "lack": [9, 12, 15], "lag": [16, 17], "lamb": [11, 16], "lambda": [8, 10, 11, 14, 15], "lambda_": 11, "lambda_1": 14, "lambda_2": 14, "lambda_3": 14, "lambda_i": 8, "lambdadelta": 10, "land": [13, 15], "landsat": 17, "langtangen": 10, "languag": [9, 10, 13], "language_info": 9, "laplac": 8, "laplacian": 16, "larg": [5, 8, 10, 11, 13, 14, 15, 16, 17, 21], "larger": [8, 9, 10, 11, 13, 14, 15, 17], "largest": [8, 10, 11], "last": [4, 9, 10, 11, 14, 20], "lastli": 11, "late": [5, 21], "later": [8, 10, 12, 13, 14, 15, 16], "latest": 1, "latex_env": 9, "latitud": 16, "latter": 10, "law": [12, 13, 16], "layer": [11, 14, 15, 16, 17], "lcr": 11, "ld": 15, "ldl": 11, "ldot": [8, 9, 10, 11, 16], "le": [9, 
15], "lead": [3, 5, 8, 10, 11, 12, 13, 14, 15, 16, 21], "leak": 14, "leaki": 14, "leap": [14, 15, 16], "leapfrog": [10, 16], "learn": [2, 5, 9, 17, 21], "learnt": 13, "least": [10, 12, 13, 14, 16], "leav": [8, 9, 11, 13, 14, 16], "lectur": [5, 13, 21], "led": [8, 10], "left": [2, 3, 8, 9, 10, 11, 13, 14, 15, 16, 17], "leftarrow": 16, "leftrightarrow": 11, "leftspec": 17, "legend": [10, 12, 13, 14, 17], "len": [12, 13, 14, 17], "length": [8, 9, 10, 11, 13, 16], "leq": [10, 12, 13, 14], "less": [8, 9, 12, 13, 14, 15, 16], "lesson": 2, "let": [5, 8, 9, 10, 11, 13, 14, 15, 16, 17, 21], "level": [8, 15, 16], "levi": 15, "lewi": 17, "li": [10, 13, 16], "librari": [2, 8, 13, 15], "libraryn": 9, "licenc": 1, "lid": 16, "lie": 13, "life": [9, 13], "lifestyl": [5, 21], "light": [10, 14], "lighter": 13, "like": [1, 2, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "likewis": 16, "lim_": 8, "limit": [8, 9, 10, 13, 14, 15], "lin": [15, 16], "linalg": 11, "line": [0, 1, 3, 8, 12, 13, 14, 15, 16, 17], "line1": 13, "line2": 13, "linear": [5, 8, 10, 12, 13, 15, 16, 19], "linearli": [10, 11], "linestyl": [13, 17], "linewidth": 17, "linexp": 12, "ling": 10, "link": [10, 11, 13, 14, 15, 16, 19], "linspac": [10, 13, 17], "list": [2, 4, 9, 20, 21], "littl": [8, 11, 13, 15], "ll": [2, 10, 14, 15, 16, 17], "ln": [8, 15], "load": [8, 12, 17], "loc": [10, 12, 13, 17], "local": [10, 12, 14, 16, 17], "locat": [2, 5, 8, 9, 15, 16, 21], "log10": 17, "logi": 17, "loglog": 17, "long": [10, 12, 14, 15, 16, 17], "longer": [1, 8, 10, 14, 15, 16, 21], "longest": 16, "longrightarrow": [11, 16], "look": [1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "loop": [14, 16], "lore": 13, "lorentz": 14, "lorenz": 10, "lorenz_deriv": 14, "lorenz_linear_matrix": 14, "lorenz_od": 14, "lose": [8, 10], "loss": 13, "lost": 8, "lot": [1, 2, 10, 13, 14], "lovelock": 13, "low": [11, 17], "lower": [9, 10, 11, 12, 13, 15, 16], "lrcrcrcr": 11, "ls_0": 13, "lu": 11, "luminoc": 13, "luminos": 13, "lval": 13, "lw": 10, "ly": [11, 14], "m": [8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "m1": [1, 11], "m2": 1, "ma": [8, 11, 12], "mac": 1, "machin": [10, 13], "macmillen": 2, "maco": [0, 3], "macosx": 1, "made": [3, 5, 8, 9, 10, 11, 12, 13, 14, 16, 17, 21], "magnif": 11, "magnifi": [10, 11], "magnitud": [10, 11, 14, 15, 16, 17], "mai": [5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 21], "main": [2, 10, 11, 14, 15, 16], "mainli": [11, 16], "maintain": [5, 15, 21], "major": [11, 13], "make": [1, 2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "make_axes_locat": 17, "makeup": 13, "male": [5, 21], "maltab": 2, "man": 2, "manag": [1, 5, 8, 10, 16, 21], "mani": [5, 8, 10, 11, 12, 13, 14, 16, 21], "manifest": 10, "manner": [12, 14, 15, 16], "manual": [11, 17], "mar": [4, 20], "march": 8, "margin": 14, "marin": 17, "mark": [3, 5, 21], "markdown": [9, 10, 13], "marker": 14, "mass": [8, 11, 13, 14, 16], "master": 2, "match": [8, 11, 12, 16], "materi": [1, 5, 8, 10, 16, 19, 21], "math": [8, 10, 11, 14, 16, 17, 21], "mathbf": 10, "mathemat": [5, 11, 13, 15, 19], "mathematician": [8, 10], "mathrm": 14, "matlab": [2, 12], "matplotlib": [8, 9, 10, 12, 13, 14, 15, 17], "matric": 16, "matrix": [14, 15, 17], "matrix_quiz": 11, "matter": [5, 10, 12, 13, 14, 15, 16, 17, 21], "max": [9, 11, 16], "max_": 16, "max_count": 16, "maxattempt": 13, "maxfail": 13, "maximum": [11, 13, 15, 16], "maxstep": 13, "mbox": [8, 10, 11, 16], "mbyte": 17, "mccalpin": 16, "md": 15, "me": [1, 3, 13], "mean": [5, 8, 9, 10, 12, 13, 14, 15, 17, 21], "meaning": [8, 14], "meaningless": 9, "meant": [8, 19], 
"measur": [2, 8, 10, 11, 14, 16, 17], "mechan": [8, 12, 13, 14], "medium": 8, "melt": 8, "member": [5, 10, 12, 13, 17, 21], "memori": [10, 16], "mention": [10, 11, 13, 14, 16], "menu": 1, "merg": 2, "merit": 14, "mesh": 8, "mesing": [15, 16], "messi": [8, 16], "metadata": 9, "meteorolog": 16, "meter": 17, "method": [7, 9, 14, 16, 17], "metr": [10, 16], "mf": 11, "miami_tow": 17, "microphys": 8, "mid": [11, 12, 15, 16], "middl": [14, 16, 17], "midpoint": [10, 15, 17], "midpointinter41": 12, "might": [1, 8, 10, 11, 14, 15, 16], "million": 14, "mimetyp": 9, "mimic": 10, "min": 11, "mind": [10, 11, 14, 16], "mini": [5, 11, 21], "miniconda": 1, "miniconda3": [1, 2], "minim": [10, 11, 16], "minimum": [11, 13], "miniproject": [4, 20], "minor": 8, "minu": [10, 13, 17], "minut": [10, 17, 21], "misconcept": [5, 21], "misl": 11, "mislead": 11, "miss": [5, 10, 11, 21], "mistak": [5, 21], "mix": [8, 14], "mixtur": [5, 21], "mkdir": [1, 2], "mn": [11, 15], "mnp": 15, "mo": 11, "modal": 15, "mode": [10, 16, 17], "model": [8, 9, 11, 15], "moder": [13, 16], "modern": [14, 15], "modestbrand": [8, 15], "modif": [8, 10, 13, 14, 16], "modifi": [2, 9, 10, 11, 12, 13, 14, 15, 16], "modul": [0, 8, 10, 13, 14, 17], "momentum": [14, 16], "mondai": [4, 20], "monthli": [9, 16], "more": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "most": [5, 8, 10, 11, 12, 13, 14, 15, 16, 21], "mostn": 9, "motion": [8, 10, 14], "motionless": 14, "motionn": 9, "motiv": [8, 16], "move": [2, 3, 8, 9, 10, 14, 15, 16, 17], "movement": 11, "movi": [14, 15], "mp": 15, "mpl_toolkit": [14, 17], "mplot3d": 14, "mu": [8, 16], "much": [5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 21], "mult": 11, "multi": [10, 12], "multigrid": [11, 16], "multimag": 15, "multipl": [3, 11, 13, 14, 16], "multipli": [5, 8, 9, 11, 16, 17, 21], "muscl": 9, "must": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "mv": 2, "my": [2, 5, 21], "mybeta": 16, "m\u00f6biu": 14, "n": [8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "nabla": 16, "nabla_h": 16, "name": [1, 2, 4, 5, 8, 9, 10, 11, 13, 14, 15, 16, 17, 20, 21], "namedtupl": [12, 13, 14], "nan": 17, "narg": 16, "nasti": 13, "natur": [8, 16], "nav_menu": 9, "navier": [8, 15, 16], "navig": 10, "nbconvert_export": 9, "nbformat": 9, "nbformat_minor": 9, "nbsphinx": [8, 9], "nc": 17, "nd": 15, "nearbi": 14, "necessari": [12, 13, 15, 16], "necessarili": [5, 11, 15, 21], "need": [1, 2, 3, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "neg": [9, 10, 14, 15], "neglect": [16, 17], "neglig": [8, 11, 15, 16], "neighbour": [8, 9, 15, 16], "neither": [8, 11, 12, 14], "nensembl": 17, "neq": [10, 11, 16], "nest": 14, "net": [13, 16], "netcdf": 17, "netcdf4": 17, "neumann": [11, 16], "neutral": [8, 16], "never": [9, 11, 12, 16], "nevern": 9, "nevertheless": [10, 11, 14], "new": [0, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "newc1": 12, "newitem": 13, "newl": 13, "newman": [10, 12, 13, 17], "newton": [10, 12, 16], "newtonian": 12, "next": [5, 8, 9, 10, 12, 13, 14, 15, 16, 17, 21], "ngrid": 15, "ni": 2, "nice": [13, 15], "ninth": [4, 20], "nj": 8, "node": 16, "nois": 17, "noisi": 17, "non": [3, 8, 9, 10, 11, 12, 13, 14, 16, 17], "none": [2, 5, 11, 13, 14, 16, 17, 21], "nonlinear": [8, 9], "nonperiod": 14, "nonrot": 16, "nonsingular": 11, "nonstagg": 15, "nonumb": 15, "nonzero": 13, "nor": [5, 12, 14, 21], "norm": [11, 13, 16], "normal": [9, 13, 16, 17], "normalis": 9, "north": 15, "norton": 14, "noslip": 16, "notat": [8, 9, 10, 11, 14, 15, 16, 17], "note": [0, 1, 5, 9, 11, 15, 17, 21], "notebook": [0, 2, 8, 10, 11, 13, 15], "notebook_metadata_filt": 9, 
"notebook_toc": [5, 21], "noth": 16, "notic": [8, 10, 14, 16], "notion": 16, "novic": 2, "now": [1, 2, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "nowher": 10, "np": [8, 10, 11, 12, 13, 14, 15, 17], "np_exact": 13, "np_yval": 13, "npn": 9, "npt": 10, "npz": 17, "nraw": 17, "nsimpl": 17, "nstep": 12, "nt": 8, "ntry": 17, "null": 9, "number": [2, 5, 8, 9, 12, 13, 14, 15, 16, 21], "number_sect": 9, "numbin": 17, "numer": [2, 9, 10, 11, 12, 13, 16, 17, 19], "numer_init": 16, "numeric_2022": 3, "numeric_2024": [1, 5, 10, 17, 21], "numeric_stud": 2, "numericaln": 9, "numlab": [8, 9, 10, 11, 12, 13, 14, 15, 16], "numpi": [2, 8, 9, 10, 12, 13, 14, 15, 17], "numpoint": 17, "numval": 17, "nvar": [13, 14], "nwindow": 17, "nx": [8, 16], "ny": [8, 16], "nyquist": 17, "nyquistfreq": 17, "o": [2, 10, 11, 12, 13, 14, 15, 17], "obei": [8, 13, 14, 15, 16], "object": [2, 4, 5, 20, 21], "observ": [5, 8, 10, 13, 14, 16, 21], "obtain": [8, 9, 10, 11, 13, 14, 15, 16], "obviou": [8, 10, 13, 15, 16], "obvious": [10, 11, 13, 14, 15, 16, 19], "occasion": 16, "occur": [8, 11, 14, 16], "ocean": [11, 13, 15, 16], "oceanograph": [5, 15], "oceanographi": [5, 15, 21], "od": [10, 12, 13, 14], "odd": [9, 10, 11, 15, 16, 17], "odeint": 14, "off": [9, 13, 14, 16], "offic": [5, 21], "offici": 2, "offset": 9, "ofn": 9, "often": [10, 11, 13, 15, 16], "oh": 10, "old": [1, 13, 15, 16], "older": 21, "omega": [14, 15, 16], "omegaof": 15, "onc": [1, 3, 8, 10, 13, 14, 15, 16], "one": [1, 2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "onehz": 17, "ones": [8, 17, 21], "onli": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "onlin": [2, 5, 19, 21], "only_method": 13, "onn": 9, "onset": 10, "onto": 11, "open": [0, 2, 3, 8, 10, 12, 13, 17, 21], "oper": [1, 11, 12, 14, 15, 16], "opportun": [5, 21], "oppos": [8, 13], "opposit": [15, 16], "opt": 1, "optim": [11, 13, 16], "optimum": 13, "option": [4, 6, 12, 14], "oral": [5, 21], "orbit": 14, "order": [9, 11, 13, 14, 15, 16, 17], "ordern": 9, "ordinari": [10, 11, 14, 19, 21], "oreilli": 2, "org": [10, 13], "organ": [12, 15], "organis": 13, "orient": 17, "origin": [2, 5, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "oscil": [8, 9, 10, 12, 14, 15, 17], "oscillatori": [10, 14, 15], "osx": 2, "other": [2, 5, 9, 11, 12, 13, 15, 16, 17, 21], "othern": 9, "otherwis": [10, 13, 14], "our": [2, 5, 8, 9, 10, 12, 14, 15, 16, 17, 19, 21], "ourselv": 14, "out": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "out_dict": 13, "outer": 17, "outflow": 9, "outlin": 19, "outn": 9, "outperform": 16, "output": [1, 2, 9, 10, 11, 12, 13, 15, 16, 17], "output_eul": 10, "output_leap": 10, "output_mid": 10, "outputdict": 12, "outsid": [5, 11, 16, 21], "outward": 16, "over": [1, 2, 9, 10, 11, 12, 13, 14, 15, 16], "overal": 13, "overcom": 11, "overid": [13, 14], "overlai": 16, "overlaid": 16, "overlap": 15, "overlin": [15, 17], "overrid": 14, "overrod": 13, "overview": [10, 14], "overwrit": [2, 13], "overwritten": 2, "own": [1, 2, 5, 9, 11, 13, 21], "ozon": 14, "p": [1, 3, 8, 10, 11, 15, 16, 17], "p1": 11, "p2": 11, "p_": 10, "p_3": 10, "p_n": 10, "packag": [8, 11, 14, 15, 17], "pad": 17, "page": [1, 2, 5, 6, 8, 10, 16, 17, 21], "pai": 2, "pair": [8, 12, 14], "pallett": 3, "palmer": 14, "panda": 13, "paper": [5, 14, 15, 16, 21], "parabol": 13, "parallel": 14, "parallelogram": 11, "param": [14, 16], "paramet": [2, 8, 10, 12, 13, 14, 16], "parcel": 14, "parent": 13, "parenthes": 12, "parsev": 17, "part": [3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "partial": [7, 9, 14, 16, 21], "particip": [5, 6, 21], "particl": 14, 
"particular": [3, 5, 8, 9, 10, 11, 12, 13, 15, 16, 17], "particularli": 9, "partner": [5, 21], "pass": [9, 10, 13, 14, 15, 16, 17], "password": 1, "past": [9, 13], "patch": 16, "path": [1, 2, 3, 17], "patter": 14, "pattern": [13, 14, 15], "paw": 11, "payload": 8, "pd": 13, "pde": [8, 9, 10, 11, 15, 16, 21], "pdf": [11, 13], "peak": [9, 13, 17], "pedloski": 16, "pellet": 8, "penalti": [5, 21], "penetr": 17, "penguin": 14, "peopl": [5, 14, 21], "per": [5, 9, 10, 12, 13, 14, 21], "percent": 9, "percentag": [11, 16], "perfect": 13, "perform": [1, 5, 8, 10, 11, 13, 14, 15, 16, 17, 21], "pergamon": 15, "perhap": [12, 13], "period": [10, 11, 14, 15], "perman": 2, "permut": 11, "perpendicular": [13, 16], "person": [1, 5, 15, 21], "perspect": [5, 21], "pertain": [8, 21], "perturb": [14, 16], "petter": 10, "phase": [8, 14, 15], "phaustin": 2, "phenomena": 15, "phenomenon": 16, "phil": [1, 2, 17], "phillip": 16, "photon": 17, "phrase": [8, 14], "phy": 21, "physic": [8, 10, 11, 12, 13, 14, 16, 19], "pi": [8, 10, 13, 14, 15, 16, 17], "piazza": [5, 21], "pick": 17, "pickard": 15, "pictori": 16, "pictur": [8, 14, 15, 16], "piec": 3, "pine": 2, "pipe": 2, "pixel": 17, "pkg": 1, "place": [2, 10, 11, 13, 14, 15, 16], "placement": 16, "plagiar": [5, 21], "plagu": 16, "plai": [10, 13, 16], "plain": [11, 13], "planar": 16, "plane": [8, 15], "planet": 13, "planetari": 16, "plate": 14, "pleas": [5, 9, 21], "plot": [5, 8, 9, 10, 12, 13, 14, 15, 16, 17, 21], "plot_3d": 14, "plot_sec": 8, "plot_titl": 12, "plotax": 17, "plotter": 15, "plt": [8, 10, 12, 13, 14, 15, 17], "pltn": 9, "plu": [13, 15], "pm": [4, 10, 14, 15, 20], "pn": 8, "png": [8, 9, 10, 14, 15, 16], "poincar": 15, "point": [5, 8, 9, 11, 12, 13, 14, 15, 16, 17, 21], "pointsn": 9, "polar": 17, "pollut": [10, 11, 16], "polynomi": [8, 9], "polynomialsn": 9, "pond": [15, 16], "poor": [9, 11], "pop": 2, "popd": 2, "popul": 21, "popular": 12, "portion": [8, 9, 14], "posdef": 9, "pose": 16, "posit": [8, 9, 11, 13, 14, 15, 16], "positive_k_ifft": 17, "possibl": [1, 8, 10, 11, 12, 13, 14, 15, 16], "possibli": [8, 10, 14], "post": 5, "postul": 13, "potenti": 16, "power": [3, 8, 10, 11, 12, 13], "power_half": 17, "power_spectrum": 17, "powershel": [0, 1], "pp": [10, 14], "practic": [8, 10, 12, 13, 15, 16, 21], "prandtl": 14, "pre": 5, "preced": 8, "precis": [10, 14], "predict": [8, 10, 14, 15], "predictor": [12, 16], "predominantli": [5, 21], "prefer": [5, 21], "prentic": [8, 15], "presenc": [10, 11, 16], "present": [2, 4, 5, 9, 10, 11, 13, 14, 15, 16, 18, 20, 21], "presentn": 9, "press": [8, 11, 12, 13, 15], "pressur": [8, 16], "pretti": [10, 13], "previou": [2, 8, 9, 10, 11, 12, 13, 14, 15, 16], "previous": 16, "previousn": 9, "prima": 19, "primari": 13, "prime": [8, 9, 10, 11, 12, 16], "prime_h": 16, "principl": [10, 11, 13, 15], "print": [2, 8, 11, 12, 13, 15, 16, 17], "print_except": 12, "print_trig": 13, "printlist": 13, "priorit": 5, "prioriti": [5, 21], "privileg": [2, 5, 21], "prob": 14, "probabl": 16, "probem": 10, "problem": [5, 9, 21], "problem_steadi": 14, "proce": [8, 11, 13, 16], "procedur": [8, 11, 15], "process": [1, 8, 9, 10, 11, 13, 14, 15, 16], "produc": [9, 13, 14], "producesn": 9, "product": [8, 11], "profil": [2, 14], "program": [2, 5, 9, 10, 14, 15, 16, 21], "programm": [15, 16], "prohibit": [11, 16], "project": [1, 4, 14, 18, 20, 21], "prompt": 1, "pronoun": [5, 21], "propag": 15, "properti": [11, 12, 13, 14, 15, 16], "proport": [8, 11, 12, 13, 14], "propos": [4, 5, 20, 21], "propto": 15, "prove": 12, "provid": [5, 8, 10, 11, 12, 13, 15, 
16, 21], "prudent": 13, "psi": 16, "psi_": 16, "public": [10, 15, 16], "pull": [0, 1, 10], "pump": 16, "pure": [10, 13, 17], "purpos": [10, 11, 16, 17], "push": 2, "pushd": 2, "put": [2, 11, 13, 15, 16, 17], "puzzl": 10, "pw": 8, "py": [9, 10, 12, 14, 15, 16], "pygments_lex": 9, "pylanc": 3, "pyman": 2, "pyplot": [8, 9, 10, 12, 13, 14, 15, 17], "python": [0, 2, 8, 9, 10, 13, 14, 15, 19], "python3": 9, "pythondatasciencehandbook": 2, "pythonn": 9, "q": [11, 15], "qgbox": 16, "qr": 11, "quad": [11, 14], "quadrat": [8, 10, 13], "quadrupl": 14, "qualit": 14, "quantifi": 10, "quantit": 12, "quantiti": [8, 9, 16], "quarterli": 16, "question": [1, 5, 8, 10, 11, 13, 14, 15, 16, 17, 21], "quicker": 1, "quickli": [11, 13, 15], "quicklyn": 9, "quit": [8, 10, 11, 14, 16], "quiz": [4, 5, 10, 20, 21], "quiz1": 8, "quiz3": 11, "quiz7": 15, "quiz8": 16, "quizz": [5, 16, 21], "quotient": 8, "r": [8, 11, 12, 13, 14, 15, 16, 17], "r_": 16, "r_coeff": 16, "r_n": 10, "r_p": 13, "ra": 11, "rabbit": 13, "rachel": [5, 21], "radial": 17, "radian": 17, "radiat": [8, 13], "radioact": 11, "radiu": [10, 13, 14, 15, 16, 17], "rain": [13, 15], "rais": [13, 17], "ralston": 12, "rang": [5, 10, 12, 13, 14, 15, 16, 17, 21], "rapid": 10, "rapidli": [11, 15, 16], "rare": 12, "rate": [8, 11, 14, 17], "rather": [1, 2, 5, 8, 10, 11, 13, 14, 15, 16, 21], "ratio": [8, 14, 15, 16], "ravel": 17, "raw": 17, "rc": 11, "rcc": 11, "rcl": 11, "re": [5, 10, 13, 14, 15, 16, 17], "reach": [8, 10, 11, 13, 14, 16], "read": [4, 5, 6, 11, 17, 20, 21], "readi": [3, 14], "readili": 13, "readlin": 13, "readonli": 12, "real": [3, 9, 10, 11, 13, 14, 15, 16, 17], "real_coeff": 17, "realesterror": 13, "realist": [8, 11, 13], "realiz": [10, 14, 16], "realli": [13, 14], "realm": 8, "rear": 8, "rearrang": [8, 14, 15, 16], "reason": [8, 10, 11, 12, 13, 14, 15, 16], "reassign": 13, "recal": [11, 13], "receiv": [5, 13, 21], "recent": [2, 10, 13], "recip": [15, 17, 19], "recogn": [1, 5, 10, 16, 21], "recombin": 13, "recommend": [1, 2, 5, 17, 21], "record": 13, "recours": 8, "recov": [12, 17], "recover_sig": 17, "rect": 16, "rectangl": 9, "rectangular": 16, "recurr": 9, "recurs": 15, "red": [8, 15, 16], "redefin": 16, "redirect": 2, "redistribut": 13, "reduc": [8, 10, 11, 13, 16], "ref": 9, "refer": [0, 9, 10, 12, 13, 17], "referenc": 2, "refin": [8, 10], "reflect": [5, 11, 13, 15, 16, 17, 21], "refresh": [16, 17], "regard": [10, 14, 15, 16], "regardless": [10, 16], "regim": 14, "region": [9, 10, 13, 14, 16], "regular": 14, "regularli": 14, "reilli": 2, "reiniti": 13, "rel": [8, 11, 12, 13, 14, 15, 16], "relat": [5, 8, 9, 10, 11, 14, 16, 17], "relationship": [8, 15, 16], "relev": [9, 13, 15, 16], "reliabl": 17, "religi": [5, 21], "reload": 15, "remain": [8, 9, 11, 13, 14, 15, 16], "remaind": [8, 10, 14, 15, 16], "remedi": 11, "rememb": [2, 8, 10, 11, 14, 16], "remind": 14, "remotesign": 2, "remov": [2, 13, 17], "renam": [2, 10], "render": 3, "renorm": 9, "repeat": [10, 12, 14, 15, 16, 17], "replac": [8, 11, 13, 14, 15, 16], "repo": 1, "report": 21, "repositori": [0, 10], "repres": [8, 10, 11, 12, 14, 15, 16], "represent": [8, 9, 11, 12, 16], "representationn": 9, "reproduc": [8, 15, 16], "request": 17, "requir": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "rescal": 16, "research": [5, 15, 16], "resembl": 1, "reserv": [10, 13], "reset": 2, "reset_l": 13, "reshap": 17, "residu": 16, "resist": 8, "resolut": [15, 17], "resolv": [15, 16], "resort": 8, "resourc": [2, 5, 14, 17, 21], "respect": [5, 8, 9, 10, 11, 13, 15, 16, 21], "respons": [5, 15, 16, 21], "rest": 
[5, 10, 11, 14, 16], "restart": 15, "restrict": [8, 10], "resubmiss": [5, 21], "resubmit": [5, 21], "result": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "retain": [9, 13], "retak": 13, "retri": 13, "retriev": 13, "return": [5, 12, 13, 14, 15, 16, 17, 21], "reveal": 14, "reveres": 14, "revers": [10, 13, 17], "review": [9, 13, 14, 15, 16], "revolut": 14, "rewrit": [10, 12, 13], "rewritten": [8, 9, 11, 12, 16], "rh": [11, 16], "rho": [8, 14, 16], "rhwhite": [1, 2, 5, 10, 21], "rich": 14, "richardson": 15, "right": [1, 2, 8, 9, 10, 11, 13, 14, 15, 17], "rightarrow": [8, 10, 11, 14, 15, 16], "rightn": 9, "rigid": 16, "ring": [11, 17], "rippl": 9, "rise": [10, 13, 14], "rk": 12, "rk4": 12, "rk4odeinter41": 12, "rkck": 12, "rkckode5": 13, "rkckodeinter41": 12, "rm": [2, 5, 8, 10, 11, 12, 13, 15, 21], "rock": [8, 10], "rod": [8, 11, 15], "roisin": 15, "roision": 15, "role": [10, 13], "roll": 14, "room": 15, "root": [10, 13, 14, 15, 17], "rossbi": [15, 16], "rossler": 14, "rotat": [14, 15, 16], "roughli": 16, "round": [14, 16, 17], "roundoff": [13, 17], "routin": [8, 9, 12, 16], "row": [11, 16, 17], "royal": 16, "rr": 11, "rrr": 11, "rrrr": 11, "rtol": 13, "rubric": 6, "rule": [5, 13, 16, 21], "run": [1, 3, 5, 9, 10, 11, 12, 15, 16, 17], "run1": 12, "rung": [10, 14, 16], "rusti": 15, "rwhite": [2, 5, 21], "r\u00f6ssler": 14, "s0": 13, "s1": 12, "s2": 12, "s_": 8, "s_0": 13, "s_i": 8, "safe_dump": 13, "safe_load": 13, "safeguard": 13, "safeti": [12, 13], "sai": [8, 10, 11, 13, 14, 15, 16, 17], "said": [5, 8, 11, 12, 16, 21], "sallen": [5, 21], "saltzman": 14, "same": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "sampl": [8, 13, 14, 16, 17], "sampler": 17, "samprat": 17, "san": [11, 16], "sat2": 8, "satisfactori": [5, 9, 21], "satisfi": [9, 10, 11, 13, 14, 16], "satur": [8, 10], "save": [2, 3, 10, 13, 14, 16, 17], "savedata": 12, "saw": [8, 10, 11, 14, 16], "scalar": [11, 12, 14], "scale": [5, 10, 13, 17], "scan": 13, "scatter": 14, "scene": 17, "scenerio": 13, "schaum": 19, "schedul": [5, 6, 21], "schemat": 11, "scheme": [5, 8, 9, 10, 11, 12, 14, 15, 16], "schemen": 9, "scienc": [2, 5, 8, 10, 14, 16, 21], "scientif": 15, "scientist": 2, "scipi": [13, 14, 17], "scope": [5, 21], "score": [5, 21], "screen": 2, "script": [8, 11, 13, 14, 15, 16], "scroll": 2, "sea": 16, "search": 2, "season": 13, "sec": [8, 10, 11, 14], "secant": 8, "second": [4, 8, 9, 11, 13, 14, 15, 16, 17, 20], "section": [1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "secur": 1, "see": [1, 2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "seem": [8, 10, 14, 15, 16], "seemingli": [8, 14], "seen": [8, 10, 11, 13, 14, 16], "segement": 17, "segment": [8, 17], "seidel": 16, "seldom": [10, 11], "select": [1, 2, 8, 10, 11, 15, 21], "self": [10, 12, 13, 14, 17, 19], "semilogi": 17, "senat": [5, 21], "sens": [8, 10, 14, 15], "sensit": [11, 13, 14], "separ": [8, 10, 12, 15], "sequenc": [8, 10, 11, 16, 17], "seri": [5, 8, 13, 14, 15, 16, 17, 21], "serial": 12, "session": 5, "set": [0, 2, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "set_color": 13, "set_linestyl": 13, "set_mark": [13, 17], "set_markerfacecolor": 17, "set_markers": [13, 17], "set_titl": [12, 13, 17], "set_xlabel": [12, 13, 17], "set_xlim": [10, 14], "set_yinit": [13, 14], "set_yint": 13, "set_ylabel": [12, 13, 17], "set_ylim": [10, 14], "set_zlim": 14, "setup": 6, "seven": [5, 21], "seventh": [4, 20], "sever": [8, 10, 11, 14, 15, 16, 17], "sex": 5, "sexual": 21, "shade": 9, "shadedn": 9, "shadow": 13, "shallow": [11, 15, 16], "shape": [9, 14, 15, 16, 17], "share": [5, 12], 
"sharp": 13, "shell": [0, 3], "shift": [3, 13, 17], "shift_pow": 17, "ship": 13, "short": [1, 2, 8, 10, 11, 14, 15], "shortcut": [1, 2], "shortest": 16, "should": [1, 2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "show": [3, 8, 9, 10, 11, 12, 13, 14, 15, 17], "shown": [8, 9, 10, 12, 15, 16], "shrink": 14, "sick": 5, "side": [2, 8, 9, 10, 11, 14, 15], "sidebar": 9, "sigma": [13, 14], "sign": [1, 8, 10, 11, 15, 17], "signal": 17, "signifi": [14, 16], "signific": [10, 11, 15, 16], "significantli": [9, 10, 14], "silicon": 1, "similar": [2, 10, 11, 12, 14, 15, 16], "similarli": [8, 10, 14, 16], "simpl": [8, 9, 10, 11, 12, 13, 14, 16], "simple_accuraci": 15, "simple_grid1": 15, "simple_grid2": 15, "simplefilt": 14, "simpler": [10, 16, 17], "simplest": [8, 10, 11, 13, 14, 16], "simpli": [8, 10, 11, 13, 14, 15, 16], "simplic": [8, 9, 16], "simplier": 15, "simpliest": 8, "simplif": [13, 15], "simplifi": [13, 14, 15, 16], "simplist": 16, "simul": [10, 13, 16], "simultan": [8, 14], "sin": [8, 10, 13, 14, 15, 16, 17], "sinc": [8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "sine": [15, 17], "singl": [8, 10, 11, 13, 14, 15, 16], "singular": 11, "sinusoid": [8, 10, 15], "site": [10, 13], "situat": [8, 9, 10, 11, 12, 13], "six": [12, 13, 21], "sixth": [4, 20], "size": [8, 10, 11, 13, 15, 16, 17], "sketch": 14, "skip": [1, 9, 16], "skip_h1_titl": 9, "sl": 2, "slab": 16, "slicker": 13, "slight": 10, "slightli": [9, 10, 11, 14, 15, 16, 21], "slip": 16, "slope": [8, 12], "slow": [10, 15, 16], "slowli": [10, 15], "small": [5, 8, 10, 11, 13, 14, 15, 16, 21], "smaller": [5, 8, 10, 13, 15, 21], "smallest": [10, 11, 13, 16], "smalln": 9, "smooth": [10, 16, 17], "smoother": 10, "smoothli": 11, "so": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "societi": 16, "softwar": 11, "solar": 13, "sole": 16, "solut": [5, 9, 10, 12, 13, 21], "solv": [7, 8, 9, 10, 11, 13, 14, 16], "solvabl": 11, "solver": [10, 15], "some": [1, 2, 3, 8, 10, 11, 12, 13, 14, 15, 16, 17, 19], "somen": 9, "someth": [1, 2, 5, 10, 14, 15, 16, 21], "sometim": [5, 8, 10, 11, 15, 21], "somewhat": [2, 16], "somewher": 10, "son": [8, 9], "sophist": [12, 16], "sor": 16, "sound": 15, "sourc": [9, 10, 11, 13, 16], "south": 15, "space": [2, 5, 8, 9, 10, 11, 14, 15, 16, 21], "sparrow": 14, "spars": [11, 16], "sparsiti": 11, "spatial": [8, 9, 15], "spatialn": 9, "spec": 12, "speci": 13, "special": [8, 11, 13, 16], "specif": [2, 5, 8, 9, 10, 12, 14, 15, 17, 21], "specifi": [8, 10, 11, 12, 13, 14, 15], "spectral": [8, 17], "spectral_den": 17, "speed": [9, 14, 15, 16], "spell": 3, "spellcheck": 3, "spend": 11, "spin": 15, "spline": 8, "split": [14, 16, 17], "spoke": 15, "spot": 17, "spread": [13, 15], "springer": [2, 14, 16], "spuriou": [10, 15, 16, 17], "spyder": 15, "sqrt": [10, 11, 14, 15, 17], "squar": [11, 12, 13, 16, 17], "squeez": 13, "sra": 11, "ss": 12, "ssh": 1, "stabil": [8, 12, 13, 16, 21], "stability2": 10, "stabl": [8, 9, 10, 12, 13, 14, 15], "stackoverflow": [1, 17], "stage": 3, "stai": [5, 10], "stand": [4, 10, 20], "standard": [5, 12, 13, 16, 17, 21], "standpoint": [8, 14], "start": [1, 2, 3, 6, 8, 9, 10, 11, 12, 13, 14, 16], "stat": 2, "state": [8, 9, 11, 15, 16], "statement": [2, 13], "staten": 9, "stationari": 14, "statist": 2, "statu": [1, 2], "stdout": 13, "steadi": [8, 11], "steadili": 14, "stefan": 13, "stem": 10, "stencil": 16, "step": [8, 9, 10, 11, 12, 14, 15, 16, 17], "stepper": 13, "stepsiz": 12, "stick": 16, "stiff": 12, "still": [5, 9, 10, 11, 12, 13, 14, 15, 16, 21], "stocki": [8, 16], "stoer": 12, "stoke": [8, 15, 16], 
"stop": [14, 17], "storag": [8, 10, 16], "store": [10, 12, 13, 16], "straddl": 10, "straight": [8, 12], "straightforward": [8, 10, 16], "strang": [8, 10, 11, 14, 16, 19], "strategi": 12, "stratif": 16, "strawberri": 13, "stream": 16, "strength": [3, 13], "strengthen": 5, "stress": 16, "strictli": 12, "string": [2, 12], "strip": [13, 14], "strong": [9, 13, 15], "strongest": 14, "strongli": 21, "structur": [8, 12], "stuck": 13, "student": [5, 9, 21], "studi": [13, 14, 15, 16], "stull": 17, "style": [10, 14, 17], "sub": [11, 14], "subhead": 13, "subject": [5, 8, 16], "submatrix": 11, "submiss": [5, 21], "submit": [10, 15, 16], "subplot": [10, 12, 13, 14, 17], "subscript": [13, 16], "subsequ": [8, 9], "subset": [5, 17, 21], "substanti": 16, "substitut": [1, 5, 9, 10, 11, 12, 15, 16], "subtract": [10, 11], "succe": [5, 12, 21], "succeed": 8, "success": [1, 5, 8, 16, 21], "sucess": 8, "sudden": 17, "suffic": [13, 16], "suffici": [10, 11, 15], "sufficientn": 9, "suffix": 17, "suggest": [0, 1, 5, 13, 14, 15, 16, 21], "suit": 12, "suitabl": 14, "sum": [8, 9, 10, 12, 15, 16, 17], "sum_": [8, 9, 12, 13, 17], "sum_i": 8, "summar": [5, 10, 13, 14, 16, 21], "summari": [15, 16], "summaris": 9, "sumw": 17, "sun": 13, "super": [3, 13, 14], "supercharg": 13, "superior": 10, "superscript": 16, "supervisor": 5, "suppos": [8, 9, 11, 13, 14, 16, 17], "suppress": [5, 9, 16, 21], "sure": [2, 9, 10, 11, 12, 13, 14, 16], "surfac": [8, 14, 15, 16], "surpris": 14, "surprisingli": 16, "surround": [8, 10], "surviv": 13, "survivor": [5, 21], "susan": [5, 15, 21], "svein": 10, "swcarpentri": 2, "switch": [11, 14, 16], "sy": [12, 13], "syllabu": [6, 8], "symbol": [10, 16], "symmetr": [8, 17], "symplet": 10, "symptom": 11, "synchron": 5, "system": [10, 12, 13, 15, 16], "t": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "t_": [8, 10, 12], "t_0": [8, 10], "t_1": [8, 10], "t_2": 10, "t_a": [8, 10], "t_beg": 12, "t_e": 13, "t_end": 12, "t_i": [8, 10, 13], "t_n": [8, 10, 12], "t_p": 16, "ta": [5, 8, 10], "tab": [2, 9], "tabl": [2, 5, 8, 9, 14, 16, 21], "tablesn": 9, "tabular": [12, 16], "tack": 11, "tackl": 10, "tail": 2, "take": [2, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "taken": [1, 2, 8, 10, 13, 14, 16, 17], "takesn": 9, "talk": [5, 10, 14, 16, 21], "tangent": 8, "tank": [14, 15], "task": 16, "tau": 16, "tau_": 16, "tau_1": 16, "tau_2": 16, "taught": 2, "taylor": [8, 12, 14], "tb": 12, "tc": 8, "td": 17, "te_4": 13, "teach": [5, 21], "team": [4, 5, 20, 21], "technet": 2, "technic": [5, 16, 21], "techniqu": [8, 9, 11, 12, 13, 14, 16, 19], "technologi": 3, "tell": [14, 15], "temp": [10, 13, 17], "temp_": 13, "temp_b": 13, "temp_i": 13, "temp_w": 13, "temperatur": [8, 10, 11, 14, 17], "temperature_conduct": 8, "tend": [10, 12, 13, 14, 16], "tendenc": 16, "tenth": [4, 20], "term": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "termin": [1, 2, 12], "terminologi": 10, "terror": 10, "test": [1, 2, 10, 12, 13, 15], "test2": 12, "test_dict": 12, "test_fft": 17, "teukolski": 15, "text": [2, 6, 8, 9, 10, 11, 12, 13, 19], "textbf": 13, "textbook": [10, 12, 16], "textcolor": 8, "th": 10, "than": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "thann": 9, "thatn": 9, "the_dat": 13, "the_fft": 17, "the_fil": 17, "the_freq": 17, "the_fun": 10, "the_integ": 13, "the_kei": 12, "the_nam": 13, "the_seri": 17, "the_spec": 17, "the_temp": 10, "the_tim": [10, 17], "theax": [12, 13, 17], "thedat": 17, "thefft": 17, "thefig": [12, 13], "thefunc": 10, "thei": [5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 21], "thel": 13, "thelambda": 
10, "thelin": 13, "theline1": 13, "them": [2, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 21], "themselv": [13, 15, 16], "theorem": [10, 17], "theoret": 14, "theori": [10, 13, 14, 15], "thepoint": 17, "therebi": [10, 11], "therefor": [8, 9, 10, 11, 13, 14, 15, 16], "thermal": [8, 14], "thermalis": 13, "thermodynam": 13, "therow": 17, "thesolv": [13, 14], "theta": [14, 16], "theta_0": 16, "thetim": 13, "thi": [1, 2, 3, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "thick": 17, "thin": 8, "thing": [8, 11, 13, 14, 16], "think": [1, 2, 5, 8, 9, 10, 11, 14, 15, 16], "third": [4, 13, 15, 16, 20], "thisn": 9, "thoroughli": [10, 13], "those": [2, 5, 8, 10, 11, 12, 13, 14, 15, 16, 17, 21], "though": [8, 10, 11, 12, 13, 14, 15, 16], "thought": [5, 8, 11, 15, 21], "three": [5, 10, 12, 13, 14, 21], "through": [2, 5, 6, 8, 9, 10, 11, 12, 14, 15, 16, 17, 21], "throughout": [8, 10, 13, 15, 16], "thu": [9, 10, 13, 14, 15, 16], "thumb": 13, "thusn": 9, "tic": 14, "tick": 17, "time": [1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "timeloop": [13, 14], "timeloop5err": 13, "timeloop5fix": [13, 14], "timeseri": [13, 17], "timestep": [9, 10, 12, 13], "timev": [13, 14], "timevar": 14, "timevec": 12, "tini": 14, "tiniest": 14, "titl": [10, 14, 17], "toc": 9, "toc_cel": 9, "toc_posit": 9, "toc_section_displai": 9, "toc_window_displai": 9, "todai": [13, 14], "togeth": [10, 11, 13, 15, 16], "toi": 13, "token": 1, "tol": 16, "told": 14, "toler": [5, 13, 16, 21], "ton": 9, "tone": 17, "tonn": 11, "too": [8, 10, 11, 13, 14, 15, 17], "tool": [8, 10, 13], "top": [14, 15, 16, 17], "topic": [5, 8, 10, 12, 13, 14, 19, 21], "toplevel": 13, "total": [9, 10, 11, 13, 14], "totalcount": 2, "totaln": 9, "totsiz": 17, "touch": 2, "tour": 2, "toward": [5, 13, 14, 21], "trace": 14, "traceback": 12, "track": [11, 12], "tractabl": 13, "trade": 10, "tradeoff": 13, "trajectori": 14, "tranpos": 11, "transcendent": 15, "transfer": [9, 16], "transform": 8, "transit": [8, 14], "transitori": 10, "translat": [2, 16], "transpos": 11, "travel": [9, 15], "travers": 16, "treat": 11, "treatment": [14, 19], "trend": 17, "triangular": [11, 12, 15], "tridiagon": [11, 16], "trig": 13, "trigonometr": 16, "trigval": 13, "trivial": [8, 13, 15], "troubleshoot": 1, "trow": 17, "true": [8, 9, 10, 11, 13, 14, 15, 16, 17], "truli": 16, "truncat": [8, 13], "try": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "tsra": 11, "tstart": 14, "tune": 13, "tupl": 13, "turbul": 14, "turn": [8, 12, 13, 14, 15, 16], "tutori": 0, "tv": 15, "tvd": 9, "twelv": 14, "twenti": 10, "twice": [13, 16], "two": [1, 2, 5, 9, 10, 13, 14, 17, 21], "twodimension": 17, "twohz": 17, "type": [1, 2, 3, 8, 11, 13, 14, 16], "typeerror": 12, "typic": [8, 10, 11, 12, 14, 15, 16, 17], "typo": 3, "u": [5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "u_": [8, 11, 15], "u_0": [8, 11, 16], "u_1": 11, "u_2": 11, "u_3": 11, "u_i": [8, 11], "u_m": 8, "u_n": [8, 11], "u_t": 8, "u_x": 11, "ual": 5, "ubc": [2, 5, 17, 21], "uc": 9, "udt": 9, "uknown": 15, "ultim": [10, 13], "unacquaint": 14, "unbalanc": 14, "unchang": 16, "unclear": [5, 21], "uncomfort": [5, 21], "uncommon": 16, "undamp": 10, "undefin": [3, 10], "under": [8, 9, 10, 13, 14, 15, 16], "underbrac": [8, 10, 11, 16], "undergo": 8, "undergrad": [6, 18], "underli": [10, 16], "underscor": 12, "understand": [5, 8, 10, 12, 15, 16, 17, 21], "understood": 16, "undetermin": 8, "undisturb": 15, "unduli": 10, "unfortun": 13, "uniformli": 14, "uniqu": [8, 11, 12], "unit": [9, 10, 11, 13, 14, 16], "univers": 15, "unknown": [8, 11, 14, 15, 16], "unless": [1, 5, 
10, 13, 16], "unlik": [10, 12], "unnecessarili": 13, "unpack": 13, "unphys": 10, "unpredict": 14, "unsatisfactori": [5, 21], "unscal": 16, "unspecifi": 15, "unstabl": [9, 10, 11, 12, 13, 14, 15, 16], "unstagg": 15, "until": [8, 10, 15, 16], "untrack": 2, "up": [0, 2, 3, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "upcom": 9, "updat": [14, 15, 16], "uphold": [5, 21], "upload": [5, 13, 21], "upon": 13, "upper": [1, 9, 10, 11, 15, 16], "upstream": [2, 9, 10], "url": 17, "urllib": 17, "urlretriev": 17, "us": [0, 1, 3, 5, 7, 8, 9, 10, 12, 16, 19, 21], "usen": 9, "user": [1, 2, 13, 15, 17], "usernam": 1, "usersvar": 13, "uservar": [13, 14], "uservars_fac": 13, "usingn": 9, "usual": [10, 11, 12, 13, 14, 15, 16], "utf": 9, "uvel": 17, "v": [0, 8, 13, 14, 15, 16], "v_": 15, "v_0": 8, "valu": [9, 10, 11, 12, 14, 15, 16, 17], "valuabl": 14, "valueerror": 17, "valuesn": 9, "vanderpla": [2, 14, 17], "vapour": [8, 9], "var": [13, 17], "vari": [8, 10, 11, 13, 14, 15, 16], "variabl": [3, 8, 10, 11, 12, 13, 14, 15, 16, 17], "varianc": 17, "variant": 11, "variat": [8, 14, 16], "varieti": [14, 16], "variou": [8, 10, 12, 13, 14, 15, 16], "vdot": [11, 12], "ve": [3, 8, 10, 11, 12, 13, 14, 15, 16], "vec": [15, 16], "vector": [5, 11, 12, 13, 15, 16], "veget": 13, "veloc": [8, 14, 15, 16], "veri": [8, 10, 11, 12, 13, 14, 15, 16], "verifi": 13, "verlag": [14, 16], "veroni": 16, "version": [1, 2, 3, 8, 9, 10, 11, 13, 16, 17], "versu": [13, 14], "vertic": [8, 14, 16], "vetterl": 15, "vi": 16, "via": [12, 17], "vid": [8, 15], "video": [8, 14], "view": [14, 15], "view_init": 14, "vigor": 14, "violat": [9, 10], "violenc": [5, 21], "viridi": 15, "vis_curr": 16, "vis_par": 16, "vis_prev": 16, "viscos": [14, 16], "viscou": 16, "visibl": 17, "visual": 14, "visualstudio": 0, "vol": 15, "volum": [9, 11, 14, 16], "vortic": 16, "vscode": 0, "vvel": 17, "w": [4, 8, 10, 12, 13, 14, 15, 16, 17, 20], "w8iqiwx": 15, "wa": [1, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 21], "wagon": 15, "wai": [1, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17], "wait": 10, "wall": 15, "want": [1, 5, 8, 9, 10, 12, 13, 14, 15, 16, 17, 21], "warm": [13, 14], "warn": [14, 17], "wast": 13, "watch": 8, "water": [8, 9, 10, 11, 15, 16], "watern": 9, "watson": 13, "wave": [10, 14, 16, 21], "wave_left": 15, "wave_right": 15, "wave_stat": 15, "wavelength": [15, 16, 17], "wavelet": 17, "waven": 9, "wavenumb": [15, 16, 17], "waveumb": 16, "wc": 2, "we": [1, 2, 3, 5, 6, 8, 9, 11, 12, 13, 14, 15, 16, 17, 19, 21], "weak": [5, 15], "weaker": 16, "weakli": 9, "weather": [8, 9, 10, 13, 14, 15, 16], "web": [5, 21], "webpag": 5, "websit": [1, 5, 19], "week": [4, 5, 20, 21], "weekli": [5, 10, 21], "weight": [9, 13, 16, 17], "weightingn": 9, "welcom": 6, "well": [3, 8, 9, 10, 11, 13, 14, 15, 16, 21], "welleslei": [8, 11], "wen": 9, "were": [8, 9, 10, 11, 12, 14, 15, 16], "west": 15, "westerli": 16, "western": 15, "what": [5, 8, 9, 10, 12, 13, 14, 15, 16, 17, 21], "whatev": [2, 11, 16], "wheel": 15, "wheel_left": 15, "wheel_right": 15, "wheel_stat": 15, "when": [1, 2, 3, 8, 9, 10, 11, 12, 13, 14, 15, 16], "whenc": 1, "whenev": [9, 11], "where": [5, 8, 9, 11, 12, 13, 14, 15, 16, 17], "wherea": [8, 13, 15, 21], "wherebi": 8, "whether": [2, 10, 11, 12, 13, 14, 16, 17], "which": [1, 2, 5, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 21], "whichn": 9, "while": [8, 10, 11, 12, 13, 14, 16], "whilst": 17, "whirlwind": 2, "whirlwindtourofpython": 2, "white": [3, 5, 21], "whiteconc": 13, "whitedaisi": 13, "whitespac": 3, "who": [5, 10, 15, 16, 21], "whole": [10, 11, 15], "whose": [8, 14, 15, 16], "why": 
[5, 9, 10, 11, 12, 14, 15, 16, 21], "wide": [10, 17], "wider": 9, "widetild": 10, "width": [8, 9, 14, 15, 16], "wiener": 17, "wikipedia": 10, "wilei": 8, "willing": 5, "willn": 9, "wind": [9, 16, 17], "wind_par": 16, "windn": 9, "window": [0, 3], "wing": 14, "winspec": 17, "wire": 13, "wise": 13, "wish": [5, 15, 21], "withdraw": [4, 20], "within": [5, 8, 9, 13, 14, 16, 21], "withn": 9, "without": [4, 5, 8, 11, 12, 13, 15, 16, 20], "wlll": 12, "wm": 13, "won": [8, 14, 15, 16], "wood": 13, "word": [2, 5, 9, 10, 11, 12, 13, 14, 15, 21], "work": [0, 5, 9, 10, 11, 12, 13, 14, 15, 16, 21], "worksheet": [5, 21], "world": 14, "worri": [12, 15, 16], "wors": [8, 10, 11], "worthwhil": 14, "would": [2, 5, 8, 10, 11, 12, 13, 14, 16, 21], "wouldn": [9, 10], "write": [1, 2, 8, 10, 11, 12, 13, 14, 15, 16], "written": [8, 10, 11, 12, 14, 15, 16], "wrong": [9, 11], "wrote": 14, "www": [2, 10], "x": [2, 8, 9, 10, 11, 13, 14, 15, 16, 17], "x_": 11, "x_0": 10, "x_1": 11, "x_2": 11, "x_3": 11, "x_i": [8, 11, 16], "x_j": 9, "xb": 8, "xdim": 17, "xi": 10, "xlabel": [10, 14, 17], "xlim": 10, "xval": 14, "xx": [8, 11], "xxx": [8, 11, 15], "xxxx": 8, "xy": 14, "xyz": 14, "xz": 14, "y": [8, 10, 11, 12, 13, 14, 16, 17], "y0": 12, "y1": 12, "y2": 12, "y_": [8, 12], "y_0": 8, "y_i": 8, "y_j": 16, "y_n": 12, "yaml": [1, 14], "yang": [15, 16], "ye": [12, 16], "year": [11, 12], "yerror": 13, "yet": [5, 10, 14, 16], "yield": [9, 10], "yieldsn": 9, "yinit": [13, 14], "yiniti": 12, "ylabel": [10, 14, 17], "ylim": 10, "ym": 12, "ynew": 12, "york": [8, 15, 16], "you": [1, 2, 3, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 19, 21], "your": [0, 2, 3, 5, 8, 9, 10, 11, 12, 14, 15, 16, 17, 21], "yourgithubid": 1, "yourself": [10, 11, 14, 16], "youtubevideo": [8, 14, 15], "yrk": 12, "yrkck": 12, "yung": 11, "yval": [13, 14, 17], "yvec_init": 10, "z": [8, 10, 14, 15, 16], "z_": [10, 15], "z_0": 10, "z_i": 10, "zeitgeist": 13, "zero": [5, 8, 9, 10, 11, 13, 14, 15, 16, 17, 21], "zeros_lik": 17, "zeta": 16, "zimen": 17, "zipfil": 13, "zlabel": 14, "zsh": [0, 1], "zshenv": 2, "zval": 14, "\u03bb": 8}, "titles": ["Getting started", "Student installs", "The command line shell and git", "VScode notes", "Dates for Graduate Class (EOSC 511)", "Graduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: EOSC 511 / ATSC 506", "Numerical Techniques for Atmosphere, Ocean and Earth Scientists", "Numeric notebooks", "Laboratory 1: An Introduction to the Numerical Solution of Differential Equations: Discretization (Jan 2024)", "<no title>", "Lab 2: Stability and accuracy", "Laboratory 3: Linear Algebra (Sept. 
12, 2017)", "Solving Ordinary Differential Equations with the Runge-Kutta Methods", "Lab 5: Daisyworld", "Lab 6: The Lorenz equations", "Laboratory 7: Solving partial differential equations using an explicit, finite difference method.", "Laboratory 8: Solution of the Quasi-geostrophic Equations", "Laboratory 9: Fast Fourier Transforms", "Links to rubrics", "Optional textbooks", "Dates for Undergraduate Class (ATSC 409)", "Undergraduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: ATSC 409"], "titleterms": {"": [5, 8], "1": [1, 8, 14], "10": 8, "11": 8, "12": 11, "2": [10, 13, 16, 17], "2017": 11, "2024": 8, "2d": 17, "3": [11, 16], "409": [20, 21], "5": 13, "506": 5, "511": [4, 5], "6": 14, "7": 15, "7a": 15, "7b": 15, "8": 16, "9": [8, 17], "A": [12, 14, 17], "For": 1, "No": 15, "Not": 5, "One": [8, 11, 15, 16], "The": [2, 12, 13, 14, 16], "To": 2, "about": [14, 17], "academ": [5, 21], "account": 1, "accuraci": [10, 15], "adapt": 13, "adjust": 13, "albedo": 13, "algebra": 11, "alias": 16, "an": [8, 11, 12, 15, 17], "appendix": 13, "approxim": [8, 10], "april": [4, 20], "asid": 17, "assign": 13, "atmospher": [5, 6, 21], "atsc": [5, 20, 21], "attribut": 13, "b": 14, "back": 12, "backward": 10, "balanc": 13, "bash": 2, "befor": 5, "begin": 15, "beta": 16, "black": 13, "book": [2, 10], "bother": [8, 13], "boundari": [8, 15, 16], "bounded": 14, "calendar": 21, "can": 10, "cell": 8, "cfl": 15, "chang": 2, "chaotic": 14, "charact": 17, "characterist": 11, "check": 13, "choos": 15, "class": [4, 5, 13, 14, 17, 20], "classic": 16, "code": [3, 8, 13], "com": 3, "come": 10, "command": 2, "common": 2, "compar": 17, "complex": 17, "comput": [11, 15, 17], "conclus": 13, "condit": [11, 15, 16], "conduct": [8, 13], "configur": [2, 12], "confirm": 17, "conjug": 17, "constant": 13, "constructor": 13, "content": 21, "control": [10, 13], "cool": 8, "corrector": [10, 15], "correl": 17, "cost": 11, "coupl": 13, "cours": [1, 5, 21], "creat": 1, "d": 17, "daisi": 13, "daisyworld": 13, "data": 12, "date": [4, 20], "decomposit": 11, "definit": 16, "demo": [8, 13], "depend": 13, "depth": 17, "deriv": [8, 10, 12, 13], "design": 13, "detail": 15, "determin": [10, 11], "differ": [8, 10, 15], "differenti": [8, 12, 15], "diffus": 8, "discret": [8, 10, 16], "dispers": 15, "divers": [5, 21], "doe": 10, "doubl": 13, "download": 3, "earth": [5, 6, 21], "eigenvalu": 11, "eigenvector": 11, "eight": [8, 11, 15], "elect": 5, "elimin": 11, "embed": [12, 13], "end": 15, "energi": 13, "entri": 21, "envelop": 17, "environ": [1, 13], "eosc": [4, 5], "equat": [8, 10, 11, 12, 13, 14, 15, 16], "error": [10, 11, 12, 13, 16], "estim": [12, 13], "euler": [8, 10], "exampl": [8, 10, 11, 13, 15, 16], "expans": 16, "explicit": [12, 15], "extens": 3, "f": 17, "fast": 17, "februari": [4, 20], "feedback": 13, "feel": 5, "fft": [5, 17], "field": 17, "file": 12, "filter": 17, "find": [13, 15, 17], "finit": 15, "first": [8, 10], "five": [8, 11, 15, 16], "float": 10, "flux": 17, "folder": 1, "fork": 1, "form": 16, "forward": 8, "four": [8, 11, 15, 16], "fourier": 17, "fourth": 12, "frog": 10, "from": [2, 3, 10, 12], "full": [11, 15], "function": 12, "gaussian": 11, "gener": 8, "geostroph": 16, "get": 0, "git": [1, 2], "github": [1, 2], "global": 13, "glossari": [8, 11, 12, 15, 16], "goal": 16, "grade": [5, 21], "graduat": [4, 5], "grid": 15, "growth": 13, "hand": 16, "heat": [8, 13, 17], "higher": [8, 10], "histogram": 17, "how": 10, "http": 3, "i": [11, 17], "imag": 17, "immut": 12, "inclus": [5, 21], "inherit": 13, "initi": [13, 
15], "instabl": 16, "instal": [1, 3], "instanc": 13, "instructor": [5, 21], "integr": [5, 12, 14, 21], "interpol": 8, "intro": 13, "introduc": 15, "introduct": [8, 10, 12, 13, 14, 16, 17], "invers": 11, "investig": 8, "iter": 11, "jacobian": 16, "jan": 8, "januari": [4, 20], "json": 12, "kutta": [12, 13], "lab": [1, 10, 13, 14, 15], "laboratori": [5, 8, 11, 15, 16, 17], "law": 8, "layout": 17, "leap": 10, "learn": [15, 16], "librari": 12, "line": 2, "linear": [11, 14], "link": 18, "list": [8, 10, 11, 12, 13, 14, 15, 16], "local": 13, "loop": 13, "lorenz": 14, "lorenzian": 14, "lowpass": 17, "maco": [1, 2], "manag": 12, "march": [4, 20], "mathemat": [8, 10, 12, 14, 16], "matric": 11, "matrix": [11, 16], "meet": [5, 21], "method": [8, 10, 11, 12, 13, 15], "mid": 10, "midpoint": 12, "minut": 13, "mode": 15, "model": [13, 14, 16], "modul": 3, "motion": 16, "move": 12, "movi": 8, "mutabl": 12, "name": 12, "neg": 17, "neutral": 13, "new": 1, "newton": 8, "nine": [11, 15], "non": 15, "nonlinear": 16, "note": [3, 8, 10, 12, 13, 14, 16], "notebook": [1, 3, 7, 12], "number": [10, 11], "numer": [5, 6, 7, 8, 14, 15, 21], "numpi": 11, "object": [8, 10, 11, 12, 13, 14, 15, 16, 17], "ocean": [5, 6, 21], "od": [5, 8, 11], "off": [10, 11], "open": 1, "optic": 17, "option": 19, "order": [8, 10, 12], "ordinari": [8, 12], "organ": 13, "orient": 13, "other": [8, 10, 14], "outlin": 16, "overrid": 13, "partial": [8, 11, 15], "pass": 12, "pde": 5, "physic": 15, "pivot": 11, "plane": 16, "planetari": 13, "poincar\u00e9": 15, "point": 10, "poisson": 16, "polici": [5, 21], "polynomi": 10, "popul": 13, "posit": 17, "power": 17, "powershel": 2, "predat": 13, "predictor": [10, 15], "prerequisit": [5, 11, 21], "problem": [8, 10, 11, 12, 13, 14, 15, 16, 17], "problemcodinga": 12, "problemcodingb": 12, "problemcodingc": 12, "problemembed": 12, "problemmidpoint": 12, "problemrk4": 12, "problemtableau": 12, "procedur": 16, "program": 13, "project": 5, "properti": 10, "pull": 2, "purpos": [5, 21], "python": [3, 11, 12], "qg": 16, "quasi": 16, "quick": 11, "quiz": [8, 11, 15, 16], "rate": 13, "read": [8, 10, 12, 13, 14, 15, 16], "recommend": 10, "refer": [2, 8, 11, 14, 15, 16], "relat": 15, "relax": 16, "repositori": [1, 2], "represent": 10, "review": 11, "right": 16, "round": [10, 11], "routin": 13, "rubric": 18, "run": [8, 13], "rung": [12, 13], "save": 12, "scale": 16, "scientist": [5, 6, 21], "second": [10, 12], "sensibl": 17, "sept": 11, "seri": 10, "set": [1, 5], "seven": [8, 11, 15], "shell": 2, "side": 16, "simpl": [15, 17], "simplif": 16, "simul": 15, "six": [8, 11, 15, 16], "solut": [8, 11, 14, 15, 16], "solv": [12, 15], "spatial": 16, "specif": 13, "spectra": 17, "spectrum": 17, "stabil": [10, 14, 15], "stage": 12, "stagger": 15, "start": [0, 15, 17], "state": [13, 14], "statement": [5, 21], "steadi": [13, 14], "step": 13, "stepsiz": 13, "stiff": 10, "structur": [5, 21], "student": 1, "suggest": 3, "summari": [8, 10, 11, 13, 14], "support": [5, 21], "surfac": 13, "system": [8, 11, 14], "tableau": 12, "tau": 17, "taylor": 10, "techniqu": [5, 6, 21], "temperatur": 13, "tempor": 16, "textbook": 19, "theta": 17, "three": [8, 11, 15, 16], "through": 13, "time": [5, 21], "transform": 17, "truncat": [10, 12], "tupl": 12, "turbul": 17, "tutori": 2, "two": [8, 11, 12, 15, 16], "type": 12, "ubc_fft": 17, "undergradu": [20, 21], "understand": 13, "univers": [5, 21], "up": 1, "us": [2, 11, 13, 14, 15, 17], "v": 12, "valu": [5, 8, 13, 21], "variat": 15, "veloc": 17, "vertic": 17, "visualstudio": 3, "vscode": 3, "water": 14, 
"wave": [15, 17], "we": 10, "well": 5, "what": 11, "wheel": 14, "where": 10, "white": 13, "why": [8, 13], "window": [1, 2, 17], "work": 1, "world": 13, "wvel": 17, "y": 15, "yaml": 13, "your": [1, 13], "zsh": 2}}) \ No newline at end of file diff --git a/texts.html b/texts.html new file mode 100644 index 0000000..53d5e67 --- /dev/null +++ b/texts.html @@ -0,0 +1,95 @@ + + + + + + + + Optional textbooks — Numeric course 22.1 documentation + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Optional textbooks

+

 The labs in this course are meant to be self-contained. If you’d like additional information or greater depth, here are +some texts that we have found useful:

+ +
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/ugrad_schedule.html b/ugrad_schedule.html new file mode 100644 index 0000000..3216944 --- /dev/null +++ b/ugrad_schedule.html @@ -0,0 +1,132 @@ + + + + + + + + Dates for Undergraduate Class (ATSC 409) — Numeric course 22.1 documentation + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Dates for Undergraduate Class (ATSC 409)

+
+

January

+
    +
  • Monday Jan 8: First Class

  • +
  • Jan 15, 2 pm: Lab 1 Reading and Objectives Quiz

  • +
  • Monday Jan 15: Second Class

  • +
  • Jan 19, 6 pm: Lab 1 Assignment

  • +
  • January 19: Last date for withdrawal from class without a “W” standing

  • +
  • Jan 22, 2 pm: Lab 2 Reading and Objectives Quiz

  • +
  • Monday Jan 22: Third Class

  • +
  • Jan 26, 6 pm: Lab 2 Assignment

  • +
  • Jan 29, 2 pm: Lab 3 Reading and Objectives Quiz

  • +
  • Monday Jan 29: Fourth Class

  • +
+
+
+

February

+
    +
  • Feb 2, 6 pm: Lab 3 Assignment

  • +
  • Monday, Feb 5: Fifth Class

  • +
  • Feb 9, 6 pm: Miniproject 1

  • +
  • Feb 12, 2 pm: Lab 4 Reading and Objectives Quiz

  • +
  • Monday Feb 12: Sixth Class

  • +
  • Feb 16, 6 pm: Lab 4 Assignment

  • +
  • Feb 19-23: Reading Week, No class

  • +
  • Feb 26, 2 pm: Lab 5a Reading and Objectives Quiz

  • +
  • Monday Feb 26: Seventh Class

  • +
+
+
+

March

+
    +
  • Mar 1, 6 pm: Lab 5a Assignment

  • +
  • Mar 1, 6 pm: Teams for Projects (a list of names due)

  • +
  • March 1: Last date to withdraw from course with ‘W’

  • +
  • Monday Mar 4: Eighth Class

  • +
  • Mar 8, 6 pm: Miniproject 2

  • +
  • Monday Mar 11: Ninth Class

  • +
  • Mar 15, 6 pm: Project Proposal

  • +
  • Mar 15, 6 pm: First iPeer evaluation

  • +
  • Mar 18, 2 pm: Lab 6 or Lab 7a Reading and Objectives Quiz

  • +
  • Monday Mar 18: Tenth Class

  • +
  • Mar 22, 6 pm: Lab 6 or Lab 7a Assignment

  • +
  • Monday Mar 25: Eleventh Class

  • +
+
+
+

April

+
    +
  • Monday, Apr 8: Last Class

  • +
  • Apr 8, in class: Project Presentation

  • +
  • Apr 8, 6 pm: Second iPeer evaluation

  • +
  • Apr 12, 6 pm: Project

  • +
+
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file diff --git a/ugradsyllabus.html b/ugradsyllabus.html new file mode 100644 index 0000000..5bf4508 --- /dev/null +++ b/ugradsyllabus.html @@ -0,0 +1,346 @@ + + + + + + + + Undergraduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: ATSC 409 — Numeric course 22.1 documentation + + + + + + + + + + + + + + + + + +
+
+ +
+ + + +
+
+
+ +
+

Undergraduate Numerical Techniques for Atmosphere, Ocean and Earth Scientists: ATSC 409

+
+

Calendar Entry

+

Web-based introduction to the practical numerical solution of ordinary +and partial differential equations including considerations of stability +and accuracy. Credit will not be granted for both ATSC 409 and ATSC +506/EOSC 511.

+
+
+

Course Purpose

+

 The students completing this course will be able to apply standard +numerical solution techniques to the solution of problems such as waves, +advection, and population growth.

+
+
+

Meeting Times

+

See canvas course page for scheduled class times and location

+
+
+

Instructors

+
+
Rachel White, rwhite@eoas.ubc.ca
+
Susan Allen, sallen@eoas.ubc.ca
+
+

See canvas course page for office hour locations

+
+
+

Prerequisites

+

Solution of Ordinary Differential Equations (MATH 215 or equivalent) AND +a programming course. Partial Differential Equations (Math 316 or Phys +312) is recommended. [1]

+
+
+

Course Structure

+

 This course is not lecture-based. The course is an interactive, computer-based +laboratory course. The computer will lead you through the +laboratory (like a set of lab notes) and you will answer problems, most +of which use the computer. The course consists of three parts: a set of +interactive, computer-based laboratory exercises, two mini-projects and +a final project.

+

 During the meeting times, there will be group worksheets to delve +into the material, brief presentations to help with technical +matters, time to ask questions in a group format and also individually, +and time to read and work on the laboratories.

+

It will be important to read the labs before class to do the +worksheets. To encourage good practices there are quizzes on canvas +for each lab.

+

You can use a web-browser to examine the course exercises. Point your +browser to:

+

https://rhwhite.github.io/numeric_2024/notebook_toc.html

+
+
+

Grades

+
+
    +
  • Laboratory Exercises 20% (individual with collaboration, excellent/satisfactory/unsatisfactory grading)

  • +
  • Mini-projects 30% (individual with collaboration)

  • +
  • Quizzes 5% (individual)

  • +
  • Worksheets 5% (group)

  • +
  • Project Proposal 5% (group)

  • +
  • Project 30% (group)

  • +
  • Project Oral Presentation 5% (group)

  • +
+
+

 There will be 6 assigned exercise sets or ‘Laboratory Exercises’ based on the labs. +Note that these are not necessarily the same as the problems in the +labs and will generally be a much smaller set. Laboratory exercises +can be worked on with partners or alone. Each student must upload their +own solution in their own words.

+

 The laboratory exercise sets are to be uploaded to the course CANVAS page. +Sometimes, rather than a large series of plots, you may wish to +include a summarizing table. If you do not understand the scope of a +problem, please ask. Help with the labs is +available 1) through Piazza (see CANVAS) so you can contact your classmates +and ask them, 2) during the weekly scheduled lab, or 3) directly from the +instructors during the scheduled office hours (see canvas).

+

Laboratory exercises will be graded as ‘excellent’, ‘satisfactory’ or ‘unsatisfactory’. +Your grade on canvas will be given as:

+

1.0 = excellent

+

0.8 = satisfactory

+

0 = unsatisfactory

+

Grades will be returned within a week of the submission deadline. +If you receive a grade of ‘satisfactory’ or ‘unsatisfactory’ on your first submission, +you will be given an opportunity to resubmit the problems you got incorrect to try to +improve your grade. To get a score of ‘excellent’ on a resubmission, you must include +a full explanation of your understanding of why your initial answer was incorrect, and +what misconception, or mistake, you have corrected to get to your new answer. Resubmissions +will be due exactly 2 weeks after the original submission deadline. It is your responsibility +to manage the timing of the resubmission deadlines with the next laboratory exercise.

+

 Your final Laboratory Exercise grade will be calculated from the number of excellent, satisfactory +and unsatisfactory grades you have from the 6 exercises: +4 or more submissions at ‘Excellent’, none ‘Unsatisfactory’: 100% +2 or 3 submissions at ‘Excellent’, none ‘Unsatisfactory’: 90% +1 or fewer ‘Excellent’, none ‘Unsatisfactory’: 80% +1 ‘Unsatisfactory’ submission: 70% +2 ‘Unsatisfactory’ submissions: 60% +3 ‘Unsatisfactory’ submissions: 50% +4 or more ‘Unsatisfactory’ submissions: 0% +[2]
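 As an illustration of the tiers above, the mapping from the six exercise marks to a final Laboratory Exercise percentage can be written as a short Python function. This is a minimal sketch for clarity only; the function name and structure are ours, not part of the course software:

```
def lab_exercise_grade(marks):
    """Map six lab-exercise marks ('excellent', 'satisfactory',
    'unsatisfactory') to a final percentage using the tiers above."""
    n_excellent = sum(m == "excellent" for m in marks)
    n_unsat = sum(m == "unsatisfactory" for m in marks)
    if n_unsat == 0:
        if n_excellent >= 4:
            return 100
        if n_excellent >= 2:
            return 90
        return 80
    # one or more 'unsatisfactory' submissions
    return {1: 70, 2: 60, 3: 50}.get(n_unsat, 0)

# Example: 3 'excellent' and 3 'satisfactory' submissions -> 90
print(lab_exercise_grade(["excellent"] * 3 + ["satisfactory"] * 3))
```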

+

 The two mini-projects are longer assignments and slightly open-ended. +These mini-projects can be worked on with partners or alone. Each +student must upload their own solution in their own words.

+

Quizzes are done online, reflect the learning objectives of each lab +and are assigned to ensure you do the reading with enough depth to +participate fully in the class worksheets and have the background to +do the Laboratory Exercises. There will be a “grace space” policy +allowing you to miss one quiz.

+

 The in-class worksheets will be marked for a complete effort. There +will be a “grace space” policy allowing you to miss one class +worksheet. The grace space policy is to accommodate missed classes due +to illness, “away games” for athletes, etc. In-class paper worksheets +are done as a group and are to be handed in (one worksheet only per +group) at the end of the worksheet time.

+

The project will be done as a group. The topic of the project should +be selected from a list provided by the instructors or in consultation +with the instructors.

+

Assignments, quizzes, mini-projects and the project are expected on +time. Late ones will be marked and then the mark will be multiplied by +\((0.9)^{\rm (number\ of\ days\ or\ part\ days\ late)}\).
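 As a worked example, a mark of 80% handed in two days late becomes 80 × 0.9² = 64.8%, and a part day counts as a full day. A minimal Python sketch of this rule (the function name is ours and purely illustrative):

```
import math

def apply_late_penalty(mark, days_late):
    """Multiply a mark by 0.9 for each day, or part day, that it is late."""
    return mark * 0.9 ** math.ceil(days_late)

print(round(apply_late_penalty(80, 2), 2))    # two full days late -> 64.8
print(round(apply_late_penalty(80, 0.5), 2))  # part day counts as one day -> 72.0
```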

+
+
+

Contents

+

 For each laboratory we give an estimate of the number of hours. You will +need to complete six hours a week to keep up with the course material +covered in the quizzes.

+
    +
  • Introductory Meeting

  • +
  • Laboratory One

    +
      +
    • Estimate: 8 hours

    • +
    • Quiz #1 Objectives [3] pertaining to Lab 1

    • +
    • Assignment: See web.

    • +
    +
  • +
  • Laboratory Two

    +
      +
    • Estimate: 6 hours

    • +
    • Quiz #2 Objectives pertaining to Lab 2

    • +
    • Assignment: See web.

    • +
    +
  • +
  • Laboratory Three

    +
      +
    • Estimate: 8 hours

    • +
    • Quiz #3 Objectives pertaining to Lab 3

    • +
    • Assignment: See web.

    • +
    +
  • +
  • Mini-Project #1

    +
      +
    • Estimate: 4 hours

    • +
    • Details on web.

    • +
    +
  • +
  • Laboratory Four

    +
      +
    • Estimate: 8 hours

    • +
    • Quiz #5 Objectives pertaining to Lab 4

    • +
    • Assignment: See web.

    • +
    +
  • +
  • Laboratory Five

    +
      +
    • Estimate: 6 hours

    • +
    • Quiz #6 Objectives pertaining to Lab 5

    • +
    • Assignment: See web

    • +
    +
  • +
  • Mini-Project #2

    +
      +
    • Estimate: 4 hours

    • +
    • Details on web.

    • +
    +
  • +
  • Laboratory Seven (do 7 if you have PDEs)

    +
      +
    • Estimate: 8 hours

    • +
    • Quiz #7-7 Objectives pertaining to Lab 7

    • +
    • Assignment: See web.

    • +
    +
  • +
  • Laboratory Six (do 6 if you do not have PDEs)

    +
      +
    • Estimate: 8 hours

    • +
    • Quiz #7-6 Objectives pertaining to Lab 6

    • +
    +
  • +
  • Assignment: See web.

  • +
  • Project

    +
      +
    • Estimate: 16 hours

    • +
    • Proposal

    • +
    • 20 minute presentation to the class

    • +
    • Project report

    • +
    +
  • +
+
+
+

University Statement on Values and Policies

+

UBC provides resources to support student learning and to maintain +healthy lifestyles but recognizes that sometimes crises arise and so +there are additional resources to access including those for survivors +of sexual violence. UBC values respect for the person and ideas of +all members of the academic community. Harassment and discrimination +are not tolerated nor is suppression of academic freedom. UBC provides +appropriate accommodation for students with disabilities and for +religious and cultural observances. UBC values academic honesty and +students are expected to acknowledge the ideas generated by others and +to uphold the highest academic standards in all of their +actions. Details of the policies and how to access support are +available here

+

https://senate.ubc.ca/policies-resources-support-student-success.

+
+
+

 Supporting Diversity and Inclusion

+

 Atmospheric Science, Oceanography and the Earth Sciences have been +historically dominated by a small subset of +privileged people who are predominantly male and white, missing out on +many influential individuals’ thoughts and +experiences. In this course, we would like to create an environment +that supports a diversity of thoughts, perspectives +and experiences, and honours your identities. To help accomplish this:

+
+
    +
  • Please let us know your preferred name and/or set of pronouns.

  • +
  • If you feel like your performance in our class is impacted by your experiences outside of class, please don’t hesitate to come and talk with us. We want to be a resource for you and to help you succeed.

  • +
  • If an approach in class does not work well for you, please talk to any of the teaching team and we will do our best to make adjustments. Your suggestions are encouraged and appreciated.

  • +
  • We are all still learning about diverse perspectives and identities. If something was said in class (by anyone) that made you feel uncomfortable, please talk to us about it.

  • +
+
+
+
+

Academic Integrity

+

Students are expected to learn material with honesty, integrity, and responsibility.

+
+
    +
  • Honesty means you should not take credit for the work of others, +and if you work with others you are careful to give them the credit they deserve.

  • +
  • Integrity means you follow the rules you are given and are respectful towards others +and their attempts to do so as well.

  • +
  • Responsibility means that if you are unclear about the rules in a specific case, +you should contact the instructor for guidance.

  • +
+
+

 The course will involve a mixture of individual and group work. We try +to be flexible about this as our priority is for you to learn the +material rather than blindly follow rules, but there are +rules. Plagiarism (i.e. copying of others’ work) and cheating (not +following the rules) can result in penalties ranging from zero on an +assignment to failing the course.

+

 For due dates etc., please see the Detailed Schedule.

+ +
+
+ + +
+
+
+
+
+ + + + + + + + \ No newline at end of file