Commit: Update examples

Signed-off-by: Tom Freudenberg <[email protected]>
Tom Freudenberg committed Nov 3, 2023
1 parent 16675b3 commit d8bf999
Showing 12 changed files with 126 additions and 91 deletions.
@@ -38,6 +38,7 @@
"outputs": [],
"source": [
"!pip install torchaudio==0.13.0\n",
"!pip install torchvision==0.14.0\n",
"!pip install torchphysics"
]
},
@@ -38,6 +38,7 @@
"outputs": [],
"source": [
"!pip install torchaudio==0.13.0\n",
"!pip install torchvision==0.14.0\n",
"!pip install torchphysics"
]
},
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Backup 1: Applying hard constraints\n",
"### Backup 1: Applying hard constrains\n",
"\n",
"For some problems, it is advantageous to apply prior knowledge about the solution in the network architecture, instead of the training process via additional loss terms. For example, in simple domains a Dirichlet boundary condition could be added to the network output with a corresponding characteristic function. Since the boundary condition is then naturally fulfilled, one has to consider fewer terms in the final loss and the optimization may become easier.\n",
"\n",
@@ -28,6 +28,7 @@
"outputs": [],
"source": [
"!pip install torchaudio==0.13.0\n",
"!pip install torchvision==0.14.0\n",
"!pip install torchphysics"
]
},
@@ -4,7 +4,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Backup 1: Applying hard constraints\n",
"### Backup 1: Applying hard constrains\n",
"\n",
"For some problems, it is advantageous to apply prior knowledge about the solution in the network architecture, instead of the training process via additional loss terms. For example, in simple domains a Dirichlet boundary condition could be added to the network output with a corresponding characteristic function. Since the boundary condition is then naturally fulfilled, one has to consider fewer terms in the final loss and the optimization may become easier.\n",
"\n",
@@ -28,6 +28,7 @@
"outputs": [],
"source": [
"!pip install torchaudio==0.13.0\n",
"!pip install torchvision==0.14.0\n",
"!pip install torchphysics"
]
},
24 changes: 13 additions & 11 deletions examples/new_examples/Exercise_2.ipynb
@@ -70,7 +70,7 @@
"import torchphysics as tp\n",
"X = tp.spaces.R2('x')\n",
"U = tp.spaces.R1('u')\n",
"T = tp.spaces. # TODO: Add the time variable \"t\" of dimension 1 (e.g. R1)"
"T = tp.spaces. # TODO: Add the time variable \"t\" of dimension 1 "
]
},
{
@@ -80,7 +80,7 @@
"id": "USnzJA0KYsze"
},
"source": [
"Now we define our domain. The space domain is already completed, here you have to create the time interval and the Cartesian product of both."
"Now we define our domain. The domain $\\Omega$ is already completed, here you have to create the time interval and the Cartesian product of both."
]
},
{
@@ -93,7 +93,7 @@
"source": [
"omega = tp.domains.Parallelogram(X, [0,0], [1,0], [0,1])\n",
"time_interval = tp.domains.Interval(T, ) # TODO: Add the bounds of the Interval (0, 2)\n",
"product_domain = time_interval # TODO: Create the product domain. Products are define with: * omega"
"product_domain = time_interval # TODO: Create the product domain of time and space. Products are define with: *"
]
},
{
@@ -131,7 +131,7 @@
"id": "GTumzU23ZDip"
},
"source": [
"The neural network to learn the solution, gets as an input the time and space variable and outputs the solution u. Add the correct spaces.\n",
"The neural network that learns the solution gets the time and space variable as an input and outputs the solution u. Add the correct spaces.\n",
"\n",
"**Hint**: A product space is also defined with *"
]
@@ -144,7 +144,7 @@
},
"outputs": [],
"source": [
"# TODO: Add spaces\n",
"# TODO: Add the spaces\n",
"model = tp.models.FCN(input_space=, output_space=, hidden=(30,30,30))"
]
},
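Given the hint that product spaces are also built with *, a plausible completion of this cell is the sketch below. The filled-in spaces are the exercise's intended answer and therefore an assumption here:

    import torchphysics as tp  # spaces T, X, U as defined earlier in this notebook

    # Input: time-space product T*X, output: the solution space U (assumed answer):
    model = tp.models.FCN(input_space=T*X, output_space=U, hidden=(30,30,30))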
@@ -155,9 +155,9 @@
"id": "YkdAEBllZLXR"
},
"source": [
"Now, we have to transform out mathematical conditions given by our PDE into corresponding training conditions.\n",
"Now, we have to transform our mathematical conditions given by our PDE into corresponding training conditions.\n",
"\n",
"First we handle the PDE itself. Here, you have to finish the residual function."
"First we handle the PDE-condition itself. Here, you have to finish the residual function."
]
},
{
@@ -169,8 +169,8 @@
"outputs": [],
"source": [
"def pde_residual(u, x, t):\n",
" # TODO: Pass in the correct variables \n",
" # for the derivative computation as the second argument:\n",
" # TODO: Pass in the correct variables for the derivative computation \n",
" # as the second argument below:\n",
" return tp.utils.grad(u, ) - 0.1*tp.utils.laplacian(u, ) - 1.0\n",
"\n",
"pde_cond = tp.conditions.PINNCondition(model, inner_sampler, pde_residual)"
@@ -218,7 +218,7 @@
"def initial_residual(u):\n",
" return \n",
"\n",
"initial_cond = tp.conditions.PINNCondition() # TODO: Add the model, sampler and residual"
"initial_cond = tp.conditions.PINNCondition() # TODO: Add the model, correct sampler and residual function"
]
},
{
@@ -240,7 +240,9 @@
"outputs": [],
"source": [
"optim = tp.OptimizerSetting(torch.optim.Adam, lr=0.005)\n",
"solver = tp.solver.Solver([.,.], optimizer_setting=optim) # TODO: Collect all conditions"
"# TODO: Collect all conditions as a list and pass them to the solver as the\n",
"# first argument. \n",
"solver = tp.solver.Solver([.,.], optimizer_setting=optim) "
]
},
{
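A sketch of the completed call: the conditions defined above are collected in a list and passed as the first argument. The name boundary_cond is assumed from the notebook's omitted boundary-condition cell, not shown in this diff:

    solver = tp.solver.Solver([pde_cond, boundary_cond, initial_cond],
                              optimizer_setting=optim)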
97 changes: 60 additions & 37 deletions examples/new_examples/Exercise_5.ipynb

Large diffs are not rendered by default.

4 changes: 3 additions & 1 deletion examples/new_examples/Exercise_8.ipynb
@@ -260,6 +260,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"### Start of TorchPhysics Part\n",
"Next, we start with the TorchPhysics part. The first definitions are known and similiar to the other examples."
]
},
@@ -317,7 +318,8 @@
"metadata": {},
"outputs": [],
"source": [
"# To discretize the initial functions, we use the following sampler, that just returns our space_points from above:\n",
"# To discretize the initial functions, we use the following sampler, that just returns our \n",
"# space_points from above:\n",
"discretization_grid = tp.spaces.Points(space_points.reshape(-1, 1), X)\n",
"discretization_sampler = tp.samplers.DataSampler(...)\n",
"\n",
19 changes: 10 additions & 9 deletions examples/new_examples/Exercise_9.ipynb
@@ -16,9 +16,9 @@
" u(0) &= 0\n",
"\\end{align*}\n",
"\n",
"This ODE will guide us through the implementation and gives a simple introduction to PI-DeepONets. Our goal is to learn the integral operator for different $f$.\n",
"This ODE will guide us through the implementation and gives a simple introduction to PI-DeepONets. Our goal is to learn the operator that maps the parameter function $f$ to the solution $u$.\n",
"\n",
"First, we have to again install the library:"
"First, we have to install the library:"
]
},
{
@@ -97,7 +97,8 @@
"\n",
"param_sampler = tp.samplers.RandomUniformSampler(K_int, n_points=80)\n",
"\n",
"# Each function f gives a FunctionSet, where we can sample functions for the right hand side from:\n",
"# Each function above represents a FunctionSet. From these sets we can sample in TorchPhysics\n",
"# different representatives for the right hand side of our ODE.\n",
"Fn_set_1 = tp.domains.CustomFunctionSet(Fn_space, param_sampler, f1)\n",
"# TODO: Complete the FunctionSet for f2 and f3 using the same param_sampler\n",
"Fn_set_2 = tp.domains.CustomFunctionSet() \n",
@@ -111,7 +112,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next is the DeepONet itself, the definition is the same as in the data driven case. Since our problem is now alot simpler, we can use a smaller network"
"Next, consider the DeepONet. The definition is the same as in the data driven case. Since our problem is now a lot simpler, we can use a smaller network."
]
},
{
@@ -124,8 +125,8 @@
"# use an arbitrary sampler to discretize them.\n",
"dis_sampler = tp.samplers.GridSampler(T_int, 50).make_static()\n",
"\n",
"# TODO: Implement the a fully connected Trunk and Branchnet. The Trunk should have two hidden layers with\n",
"# 30 neurons each and the Branch two hidden layers with 50 neurons.\n",
"# TODO: Implement a fully connected Trunk and Branch net. The Trunk net should have two hidden layers with\n",
"# 30 neurons each and the Branch net two hidden layers with 50 neurons.\n",
"trunk_net = tp.models.FCTrunkNet(...)\n",
"branch_net = tp.models.FCBranchNet(...)\n",
"model = tp.models.DeepONet(trunk_net, branch_net, U, output_neurons=50)"
@@ -142,11 +143,11 @@
"ode_sampler = ...\n",
"\n",
"def ode_residual(u, t, f): \n",
" # The f is the function that was inputed in the Branch, but now evaluated\n",
" # The f is the function that was the input of the Branch, but now evaluated\n",
" # at the time points that are used in this condition.\n",
" return\n",
"\n",
"# Here we now use the \"PIDeepONetCondition\" instead of the \"PINNCondition\", since\n",
"# Here, we now use the \"PIDeepONetCondition\" instead of the \"PINNCondition\", since\n",
"# we also have to handle the Branch-inputs.\n",
"ode_cond = tp.conditions.PIDeepONetCondition(deeponet_model=model, \n",
" function_set=Fn_set, \n",
@@ -161,7 +162,7 @@
"metadata": {},
"outputs": [],
"source": [
"# The initial condition is also the same as in PINNs:\n",
"# The initial condition is also the same as for PINNs:\n",
"left_sampler = tp.samplers.RandomUniformSampler(T_int.boundary_left, 1000)\n",
"\n",
"def initial_residual(u):\n",
12 changes: 6 additions & 6 deletions examples/new_examples/Sol_2.ipynb
@@ -67,7 +67,7 @@
"import torchphysics as tp\n",
"X = tp.spaces.R2('x')\n",
"U = tp.spaces.R1('u')\n",
"T = tp.spaces.R1('t') # <- Add the time variable \"t\" of dimension 1 (e.g. R1)"
"T = tp.spaces.R1('t') # <- Add the time variable \"t\" of dimension 1"
]
},
{
@@ -90,7 +90,7 @@
"source": [
"omega = tp.domains.Parallelogram(X, [0,0], [1,0], [0,1])\n",
"time_interval = tp.domains.Interval(T, 0, 2.0) # <-add the bounds of the Interval (0, 2)\n",
"product_domain = time_interval * omega # <- products are define with: * omega"
"product_domain = time_interval * omega # TODO: Create the product domain of time and space. Products are define with: *"
]
},
{
@@ -128,7 +128,7 @@
"id": "GTumzU23ZDip"
},
"source": [
"The neural network to learn the solution, gets as an input the time and space variable and outputs the solution u. Add the correct spaces.\n",
"The neural network that learns the solution gets the time and space variable as an input and outputs the solution u. Add the correct spaces.\n",
"\n",
"**Hint**: A product space is also defined with *"
]
@@ -151,7 +151,7 @@
"id": "YkdAEBllZLXR"
},
"source": [
"Now, we have to transform out mathematical conditions given by our PDE into corresponding training conditions.\n",
"Now, we have to transform our mathematical conditions given by our PDE into corresponding training conditions.\n",
"\n",
"First we handle the PDE itself. Here, you have to finish the residual function."
]
@@ -165,8 +165,8 @@
"outputs": [],
"source": [
"def pde_residual(u, x, t):\n",
" # in the differential operators you have to pass in the correct variables \n",
" # for the derivative computation as the second argument:\n",
" # TODO: Pass in the correct variables for the derivative computation \n",
" # as the second argument below:\n",
" return tp.utils.grad(u, t) - 0.1*tp.utils.laplacian(u, x) - 1.0\n",
"\n",
"pde_cond = tp.conditions.PINNCondition(model, inner_sampler, pde_residual)"
46 changes: 24 additions & 22 deletions examples/new_examples/Sol_5.ipynb
@@ -7,28 +7,30 @@
"# Heat equation on a time-dependent domain: Example 5\n",
"\n",
"With the knowledge gained from the previous examples, we can now solve the drilling problem. Here, we consider\n",
"the heat equation on a time-dependent domain. For the PDE we consider, for $t\\in[0, 5]$,\n",
"the heat equation on a time-dependent domain $\\Omega(t)$. For the PDE we consider, for $t\\in[0, 5]$,\n",
"\\begin{align*}\n",
" \\partial_t u(x, t) -0.1\\Delta u(x, t) &= 0.0, && \\text{ in } \\Omega(t), \\\\\n",
" \\partial_t u(x, t) -0.1\\Delta u(x, t) &= 0, && \\text{ in } \\Omega(t), \\\\\n",
" u(x, 0) &= 0 , && \\text{ in } \\Omega(0), \\\\\n",
" u(x, t) &= 0 , &&\\text{ for } x_2=0, \\\\\n",
" -\\kappa \\nabla u(x, t)\\cdot n &= 0 , &&\\text{ on } \\Gamma_N(t), \\\\\n",
" -\\kappa \\nabla u(x, t)\\cdot n &= f , &&\\text{ on } \\Gamma_H(t).\n",
" 0.1 \\nabla u(x, t)\\cdot n &= 0 , &&\\text{ on } \\Gamma_N(t), \\\\\n",
" 0.1\\nabla u(x, t)\\cdot n &= f , &&\\text{ on } \\Gamma_H(t).\n",
"\\end{align*}\n",
"\n",
"Here, $\\Omega(t) = ([0, 1] \\times [0, 1]) \\setminus D(t)$ and $D(t) = [0.2, 0.4] \\times [h(t), 1.0]$, with\n",
"Now let $D(t)$ be the drill that we use for removing material from our original domain $\\Omega(0)=[0, 1] \\times [0, 1]$. Here, we consider a rectangle $D(t) = [0.2, 0.4] \\times [h(t), 1.0]$, where $h(t)$ describes the depth of the drill at point $t$ and is given by\n",
"\\begin{align*}\n",
" h(t) = 1.0 - v_\\text{drill} * t,\n",
" h(t) = 1.0 - v_\\text{drill} \\cdot t.\n",
"\\end{align*}\n",
"such that $h(t)$ describes the depth of the drill at point $t$ and we remove a rectangular part from our domain.\n",
"The speed of removing the material is given by $v_\\text{drill}$.\n",
"\n",
"Then our time dependent domain is given by $\\Omega(t) = ([0, 1] \\times [0, 1]) \\setminus D(t)$.\n",
"\n",
"The boundary parts are given by\n",
"\\begin{align*}\n",
" \\Gamma_H(t) &= [0.2, 0.4] \\times \\{h(t)\\}, \\\\\n",
" \\Gamma_N(t) &= \\partial ([0, 1] \\times [0, 1]) \\setminus (\\{x_2=0\\} \\cup \\Gamma_H(t)),\n",
"\\end{align*}\n",
"e.g. at the bottom boundary we have a fixed temperature $u=0$, at the bottom of the drill part a heat source and \n",
"everywhere else homogeneous Neumann conditions.\n",
"at the bottom boundary ($x_2=0$) we have a fixed temperature $u=0$, at the bottom of the drill part ($\\Gamma_H$) a heat source and \n",
"everywhere else ($\\Gamma_N$) homogeneous Neumann conditions.\n",
"\n",
"First we have to install the library:"
]
@@ -74,8 +76,8 @@
" heat_source *= (1.0 - torch.exp(-prod_speed*t))\n",
" return heat_source\n",
"\n",
"# Movement of drill (returns the y position of the bottom of the drill head)\n",
"def drill_height(t):\n",
"# Movement of drill (returns the x_2 position of the bottom of the drill head)\n",
"def drill_depth(t):\n",
" height = size_y - drill_speed * t\n",
" return height"
]
@@ -118,18 +120,18 @@
" # First construct a tensor, where we can input the correct corrdinates.\n",
" # The x-coordinate is \n",
" coordinate = x_start * torch.ones((len(t), 2), device=t.device)\n",
" coordinate[:, 1:] = drill_height(t)\n",
" coordinate[:, 1:] = drill_depth(t)\n",
" return coordinate\n",
"\n",
"# TODO: Implement the lower right corner:\n",
"def right_corner(t):\n",
" coordinate = x_end * torch.ones((len(t), 2), device=t.device)\n",
" coordinate[:, 1:] = drill_height(t)\n",
" coordinate[:, 1:] = drill_depth(t)\n",
" return coordinate\n",
"\n",
"# Now we can use the above functions as input arguments in the Parallelogram.\n",
"# The top left corner is fixed (outside our square for numerical reasons), such that we obtain \n",
"# a rectangle that grows in the negative y direction in time.\n",
"# The top left corner is fixed outside the unit square (for numerical reasons), such that we obtain \n",
"# a rectangle that grows in the negative x_2 direction over time.\n",
"drill_hole = tp.domains.Parallelogram(X, left_corner, right_corner, [x_start, size_y+0.1])\n",
"\n",
"# TODO: Construct the difference of the \"box\" and \"drill_hole\"\n",
@@ -144,10 +146,10 @@
"metadata": {},
"source": [
"The defined domain $\\Omega$ is now dependent on time, since the *drill_hole* depends on time.\n",
"This we have to keep in mind when we want to create points inside this domain. To sample points we\n",
"we have to say what time point $t$ we are considering, e.g. we can only sample in $\\Omega \\times [0, T]$.\n",
"This we have to keep in mind when we want to create points inside this domain. To sample points,\n",
"we have to say what time point $t$ we are considering.\n",
"\n",
"This order is also important for the *Samplers* in TorchPhysics. If we input the Cartesian Product *omega \\* I_t*, the sampler will create points from right to left, first points in the time interval and then pass them to the space domain to create valid points. \n",
"This order is also important for the *Samplers* in TorchPhysics. If we input the \"Cartesian\" Product *omega \\* I_t*, the sampler will create points from right to left, first points in the time interval and then pass them to the space domain to create valid points. \n",
"\n",
"The combination *I_t \\* omega* does **not** work. "
]
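In code, the working and the failing variants look as follows. The sampler call matches the one used further down in this notebook; the point count is arbitrary:

    inner_sampler = tp.samplers.RandomUniformSampler(omega * I_t, 75000)   # works
    # bad_sampler = tp.samplers.RandomUniformSampler(I_t * omega, 75000)  # not supported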
@@ -203,8 +205,8 @@
"metadata": {},
"outputs": [],
"source": [
"# When making a sampler static (e.g. fixing the points it creates) we can also pass an \n",
"# after how many training iterations it will create new points.\n",
"# When making a sampler static (e.g. fixing the points it creates) we can also pass specify\n",
"# after how many training iterations new points will be created.\n",
"# We do this here to speed up the training process, since sampling in time dependent domains\n",
"# is a little bit slower than in normal domains. \n",
"inner_sampler = tp.samplers.RandomUniformSampler(omega * I_t, 75000).make_static(resample_interval=250)\n",
@@ -378,9 +380,9 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we combine all loss terms in the training process. Here, we only train for a short time frame. Because we need a rather large network and the GPUs on Google Colab at not that fast, the training is slow. \n",
"Next, we combine all loss terms in the training process. Here, we only train for a short time frame. Because we need a rather large network and the GPUs on Google Colab are not that fast, so that the training is slow. \n",
"\n",
"But we can still obtain a good first approximation and in the later cells also load a pretrained network, that we trained for roughly 25 minutes on a RTX 2080 beforehand."
"But we can still obtain a good first approximation and in the later cells also load a pretrained network, that we trained for roughly 25 minutes on a GTX 2080 beforehand."
]
},
{

