tutorial validation (#185)
Co-authored-by: Ben Volokh <[email protected]>
ndem0 and benv123 committed Nov 17, 2023
1 parent 2e2fe93 commit 32ff5de
Showing 38 changed files with 1,066 additions and 1,000 deletions.
343 changes: 139 additions & 204 deletions docs/source/_rst/tutorial1/tutorial.rst

Large diffs are not rendered by default.

115 changes: 53 additions & 62 deletions docs/source/_rst/tutorial2/tutorial.rst
@@ -8,12 +8,18 @@ This tutorial presents how to solve with Physics-Informed Neural
Networks a 2D Poisson problem with Dirichlet boundary conditions, using
extra features.

The problem is written as: :raw-latex:`\begin{equation}
\begin{cases}
\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}` where :math:`D` is a square domain :math:`[0,1]^2`, and
The problem is written as:

.. raw:: latex

\begin{equation}
\begin{cases}
\Delta u = \sin{(\pi x)} \sin{(\pi y)} \text{ in } D, \\
u = 0 \text{ on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}

where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
square.
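
For reference, the problem admits the closed-form solution

.. math:: u(x, y) = -\frac{\sin(\pi x)\sin(\pi y)}{2\pi^2},

which satisfies :math:`\Delta u = -2\pi^2 u = \sin(\pi x)\sin(\pi y)` and
vanishes on all four boundaries; it is used below as the *truth solution*.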

@@ -37,8 +43,8 @@ First of all, some useful imports.
Now, the Poisson problem is written in PINA code as a class. The
equations are written as *conditions* that should be satisfied in the
corresponding domains. *truth_solution* is the exact solution which will
be compared with the predicted one.
corresponding domains. *truth\_solution* is the exact solution which
will be compared with the predicted one.
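
The problem class itself is collapsed in this diff. As a hedged
reconstruction (class and argument names are assumed from PINA's
``SpatialProblem``, ``CartesianDomain``, ``Condition``, ``Equation`` and
``FixedValue`` APIs, not copied from the file), it might read:

.. code:: ipython3

    import torch
    from pina import Condition
    from pina.problem import SpatialProblem
    from pina.geometry import CartesianDomain
    from pina.operators import laplacian
    from pina.equation import Equation, FixedValue

    class Poisson(SpatialProblem):
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})

        def laplace_equation(input_, output_):
            # residual of: Delta u - sin(pi x) sin(pi y) = 0
            force = (torch.sin(input_.extract(['x']) * torch.pi) *
                     torch.sin(input_.extract(['y']) * torch.pi))
            nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
            return nabla_u - force

        conditions = {
            'gamma1': Condition(location=CartesianDomain({'x': [0, 1], 'y': 1.0}),
                                equation=FixedValue(0.0)),
            'gamma2': Condition(location=CartesianDomain({'x': [0, 1], 'y': 0.0}),
                                equation=FixedValue(0.0)),
            'gamma3': Condition(location=CartesianDomain({'x': 1.0, 'y': [0, 1]}),
                                equation=FixedValue(0.0)),
            'gamma4': Condition(location=CartesianDomain({'x': 0.0, 'y': [0, 1]}),
                                equation=FixedValue(0.0)),
            'D': Condition(location=CartesianDomain({'x': [0, 1], 'y': [0, 1]}),
                           equation=Equation(laplace_equation)),
        }

        def poisson_sol(self, pts):
            # exact solution: -sin(pi x) sin(pi y) / (2 pi^2)
            return -(torch.sin(pts.extract(['x']) * torch.pi) *
                     torch.sin(pts.extract(['y']) * torch.pi)) / (2 * torch.pi ** 2)

        truth_solution = poisson_sol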

.. code:: ipython3
@@ -107,12 +113,20 @@ of 0.006. These parameters can be modified as desired.
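
The solver and trainer setup is also collapsed. A minimal sketch of it,
assuming the ``PINN``/``Trainer`` signatures of this release (the
sampling size and argument names are guesses), whose output appears
below, is:

.. code:: ipython3

    from pina import Trainer
    from pina.solvers import PINN
    from pina.model import FeedForward

    problem = Poisson()
    # sample collocation points in the domain and on the boundaries
    problem.discretise_domain(n=20, mode='grid', locations='all')

    model = FeedForward(
        input_dimensions=len(problem.input_variables),
        output_dimensions=len(problem.output_variables),
        layers=[10, 10],
    )

    pinn = PINN(problem, model, optimizer_kwargs={'lr': 0.006})
    trainer = Trainer(pinn, max_epochs=1000)
    trainer.train()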
.. parsed-literal::
GPU available: False, used: False
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
/Users/dariocoscia/anaconda3/envs/pina/lib/python3.9/site-packages/lightning/pytorch/trainer/connectors/logger_connector/logger_connector.py:67: UserWarning: Starting from v1.9.0, `tensorboardX` has been removed as a dependency of the `lightning.pytorch` package, due to potential conflicts with other packages in the ML ecosystem. For this reason, `logger=True` will use `CSVLogger` as the default logger, unless the `tensorboard` or `tensorboardX` packages are found. Please `pip install lightning[extra]` or one of them to enable TensorBoard support by default
warning_cache.warn(
Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial2/lightning_logs
2023-10-17 10:09:18.208459: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-17 10:09:18.235849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-17 10:09:20.462393: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
@@ -125,21 +139,18 @@
0.001 Total estimated model params size (MB)
.. parsed-literal::
Epoch 999: : 1it [00:00, 129.50it/s, v_num=45, mean_loss=0.00196, gamma1_loss=0.0093, gamma2_loss=0.000146, gamma3_loss=8.16e-5, gamma4_loss=0.000201, D_loss=8.44e-5]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1000` reached.
Training: 0it [00:00, ?it/s]
.. parsed-literal::
Epoch 999: : 1it [00:00, 101.25it/s, v_num=45, mean_loss=0.00196, gamma1_loss=0.0093, gamma2_loss=0.000146, gamma3_loss=8.16e-5, gamma4_loss=0.000201, D_loss=8.44e-5]
`Trainer.fit` stopped: `max_epochs=1000` reached.
Now the *Plotter* class is used to plot the results. The solution
Now the ``Plotter`` class is used to plot the results. The solution
predicted by the neural network is plotted on the left, the exact one is
shown in the center, and the error between the exact and predicted
solutions is shown on the right.
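
The plotting cell is collapsed here; a minimal sketch (the
``Plotter.plot`` signature is assumed) is:

.. code:: ipython3

    from pina import Plotter

    plotter = Plotter()
    # left: prediction, center: exact solution, right: pointwise error
    plotter.plot(pinn)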
@@ -151,7 +162,7 @@ and the predicted solutions is shown.
.. image:: tutorial_files/tutorial_11_0.png
.. image:: output_11_0.png


The problem solution with extra-features
@@ -162,9 +173,11 @@ is now defined, with an additional input variable, named extra-feature,
which coincides with the forcing term in the Laplace equation. The set
of input variables to the neural network is:

:raw-latex:`\begin{equation}
[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
\end{equation}`
.. raw:: latex

\begin{equation}
[x, y, k(x, y)], \text{ with } k(x, y)=\sin{(\pi x)}\sin{(\pi y)},
\end{equation}

where :math:`x` and :math:`y` are the spatial coordinates and
:math:`k(x, y)` is the added feature.
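
A sketch of this extra feature as a ``torch`` module follows; the
``LabelTensor`` wrapping and the label string are assumptions based on
PINA's conventions, not a verbatim copy of the collapsed cell:

.. code:: ipython3

    import torch
    from pina import LabelTensor

    class SinSin(torch.nn.Module):
        """Fixed extra feature k(x, y) = sin(pi x) sin(pi y)."""

        def forward(self, x):
            t = (torch.sin(x.extract(['x']) * torch.pi) *
                 torch.sin(x.extract(['y']) * torch.pi))
            return LabelTensor(t, ['sin(x)sin(y)'])

The feature is then handed to the network, e.g. through an
``extra_features`` argument of the model or solver (the exact entry
point is assumed), so that :math:`k(x, y)` becomes a third input
alongside :math:`x` and :math:`y`.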
@@ -210,10 +223,11 @@ new extra feature.
.. parsed-literal::
GPU available: False, used: False
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
@@ -226,18 +240,15 @@
0.001 Total estimated model params size (MB)
.. parsed-literal::
Epoch 999: : 1it [00:00, 112.55it/s, v_num=46, mean_loss=2.73e-7, gamma1_loss=1.13e-6, gamma2_loss=7.1e-8, gamma3_loss=4.69e-8, gamma4_loss=6.81e-8, D_loss=4.65e-8]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1000` reached.
Training: 0it [00:00, ?it/s]
.. parsed-literal::
Epoch 999: : 1it [00:00, 92.69it/s, v_num=46, mean_loss=2.73e-7, gamma1_loss=1.13e-6, gamma2_loss=7.1e-8, gamma3_loss=4.69e-8, gamma4_loss=6.81e-8, D_loss=4.65e-8]
`Trainer.fit` stopped: `max_epochs=1000` reached.
The predicted and exact solutions and the error between them are
@@ -251,7 +262,7 @@ of magnitudes in accuracy.
.. image:: tutorial_files/tutorial_16_0.png
.. image:: output_16_0.png


The problem solution with learnable extra-features
@@ -263,9 +274,11 @@ Another way to exploit the extra features is the addition of learnable
parameters inside them. In this way, the added parameters are learned
during the training phase of the neural network. In this case, we use:

:raw-latex:`\begin{equation}
k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
\end{equation}`
.. raw:: latex

\begin{equation}
k(x, \mathbf{y}) = \beta \sin{(\alpha x)} \sin{(\alpha y)},
\end{equation}

where :math:`\alpha` and :math:`\beta` are the aforementioned
parameters. Their implementation is quite trivial: by using the class
@@ -306,10 +319,11 @@ need, and they are managed by ``autograd`` module!
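
The implementation is collapsed here; a plausible reconstruction (not
the verbatim cell) is:

.. code:: ipython3

    import torch
    from pina import LabelTensor

    class SinSinAB(torch.nn.Module):
        """Learnable feature k(x, y) = beta * sin(alpha x) sin(alpha y)."""

        def __init__(self):
            super().__init__()
            # registered as parameters, so autograd updates alpha and
            # beta together with the network weights
            self.alpha = torch.nn.Parameter(torch.tensor([1.0]))
            self.beta = torch.nn.Parameter(torch.tensor([1.0]))

        def forward(self, x):
            t = self.beta * (torch.sin(self.alpha * x.extract(['x'])) *
                             torch.sin(self.alpha * x.extract(['y'])))
            return LabelTensor(t, ['b*sin(a*x)sin(a*y)'])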
.. parsed-literal::
GPU available: False, used: False
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
@@ -322,18 +336,15 @@
0.001 Total estimated model params size (MB)
.. parsed-literal::
Epoch 999: : 1it [00:00, 91.07it/s, v_num=47, mean_loss=2.11e-6, gamma1_loss=1.03e-5, gamma2_loss=4.17e-8, gamma3_loss=4.28e-8, gamma4_loss=5.65e-8, D_loss=6.21e-8]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1000` reached.
Training: 0it [00:00, ?it/s]
.. parsed-literal::
Epoch 999: : 1it [00:00, 76.19it/s, v_num=47, mean_loss=2.11e-6, gamma1_loss=1.03e-5, gamma2_loss=4.17e-8, gamma3_loss=4.28e-8, gamma4_loss=5.65e-8, D_loss=6.21e-8]
`Trainer.fit` stopped: `max_epochs=1000` reached.
Hmm, the final loss is not appreciably better than the previous model (with
@@ -365,10 +376,11 @@ removing all the hidden layers in the ``FeedForward``, keeping only the
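
As a sketch of this reduced architecture (whether ``layers=[]`` yields a
single linear map in ``FeedForward`` is an assumption, as is the
``extra_features`` argument), whose output appears below:

.. code:: ipython3

    from pina import Trainer
    from pina.solvers import PINN
    from pina.model import FeedForward

    # no hidden layers: the output is a linear combination of the
    # inputs, including the learnable feature
    model_learn = FeedForward(
        input_dimensions=len(problem.input_variables),
        output_dimensions=len(problem.output_variables),
        layers=[],
    )

    pinn_learn = PINN(problem, model_learn,
                      extra_features=[SinSinAB()],
                      optimizer_kwargs={'lr': 0.006})
    trainer_learn = Trainer(pinn_learn, max_epochs=1000)
    trainer_learn.train()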
.. parsed-literal::
GPU available: False, used: False
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
@@ -381,18 +393,15 @@
0.000 Total estimated model params size (MB)
.. parsed-literal::
Epoch 999: : 1it [00:00, 149.45it/s, v_num=48, mean_loss=1.34e-16, gamma1_loss=6.66e-16, gamma2_loss=2.6e-18, gamma3_loss=4.84e-19, gamma4_loss=2.59e-18, D_loss=4.84e-19]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=1000` reached.
Training: 0it [00:00, ?it/s]
.. parsed-literal::
Epoch 999: : 1it [00:00, 117.81it/s, v_num=48, mean_loss=1.34e-16, gamma1_loss=6.66e-16, gamma2_loss=2.6e-18, gamma3_loss=4.84e-19, gamma4_loss=2.59e-18, D_loss=4.84e-19]
`Trainer.fit` stopped: `max_epochs=1000` reached.
In such a way, the model is able to reach a very high accuracy! Of
@@ -413,23 +422,5 @@ features.
.. image:: tutorial_files/tutorial_23_0.png


.. code:: ipython3

    import matplotlib.pyplot as plt

    plt.figure(figsize=(16, 6))
    plotter.plot_loss(trainer, label='Standard')
    plotter.plot_loss(trainer_feat, label='Static Features')
    plotter.plot_loss(trainer_learn, label='Learnable Features')

    plt.grid()
    plt.legend()
    plt.show()
.. image:: tutorial_files/tutorial_24_0.png
.. image:: output_23_0.png

70 changes: 42 additions & 28 deletions docs/source/_rst/tutorial3/tutorial.rst
@@ -1,22 +1,24 @@
Tutorial 3: resolution of wave equation with hard constraint PINNs.
===================================================================

The problem solution
~~~~~~~~~~~~~~~~~~~~
The problem definition
----------------------

In this tutorial we present how to solve the wave equation using hard
constraint PINNs. To do so, we will build a custom torch model and
pass it to the ``PINN`` solver.

The problem is written in the following form:

:raw-latex:`\begin{equation}
\begin{cases}
\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}`
.. raw:: latex

\begin{equation}
\begin{cases}
\Delta u(x,y,t) = \frac{\partial^2}{\partial t^2} u(x,y,t) \quad \text{in } D, \\\\
u(x, y, t=0) = \sin(\pi x)\sin(\pi y), \\\\
u(x, y, t) = 0 \quad \text{on } \Gamma_1 \cup \Gamma_2 \cup \Gamma_3 \cup \Gamma_4,
\end{cases}
\end{equation}

where :math:`D` is a square domain :math:`[0,1]^2`, and
:math:`\Gamma_i`, with :math:`i=1,...,4`, are the boundaries of the
@@ -80,21 +82,24 @@ predicted one.
    problem = Wave()
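
The ``Wave`` class instantiated above is collapsed in this diff. A
hedged reconstruction, assuming the ``TimeDependentProblem`` mixin and
the ``grad``/``laplacian`` operators of this release, might read:

.. code:: ipython3

    import torch
    from pina import Condition
    from pina.problem import SpatialProblem, TimeDependentProblem
    from pina.geometry import CartesianDomain
    from pina.operators import grad, laplacian
    from pina.equation import Equation, FixedValue

    class Wave(TimeDependentProblem, SpatialProblem):
        output_variables = ['u']
        spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
        temporal_domain = CartesianDomain({'t': [0, 1]})

        def wave_equation(input_, output_):
            # residual of: Delta u - u_tt = 0
            u_t = grad(output_, input_, components=['u'], d=['t'])
            u_tt = grad(u_t, input_, d=['t'])
            nabla_u = laplacian(output_, input_, components=['u'], d=['x', 'y'])
            return nabla_u - u_tt

        def initial_condition(input_, output_):
            u_0 = (torch.sin(torch.pi * input_.extract(['x'])) *
                   torch.sin(torch.pi * input_.extract(['y'])))
            return output_.extract(['u']) - u_0

        conditions = {
            'gamma1': Condition(
                location=CartesianDomain({'x': [0, 1], 'y': 1.0, 't': [0, 1]}),
                equation=FixedValue(0.0)),
            # gamma2, gamma3, gamma4 are analogous on the other edges
            't0': Condition(
                location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': 0.0}),
                equation=Equation(initial_condition)),
            'D': Condition(
                location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}),
                equation=Equation(wave_equation)),
        }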
Hard Constraint Model
---------------------

After the problem, a **torch** model is needed to solve the PINN.
Usually many models are already implemented in ``PINA``, but the user
has the possibility to build his/her own model in ``pyTorch``. The hard
constraint we impose are on the boundary of the spatial domain.
Specificly our solution is written as:
Usually, many models are already implemented in ``PINA``, but the user
has the possibility to build his/her own model in ``PyTorch``. The hard
constraint we impose is on the boundary of the spatial domain.
Specifically, our solution is written as:

.. math:: u_{\rm{pinn}} = xy(1-x)(1-y)\cdot NN(x, y, t),

where :math:`NN` is the neural net output. This neural network takes as
input the coordinates (in this case :math:`x`, :math:`y` and :math:`t`)
and provides the unkwown field of the Wave problem. By construction it
is zero on the boundaries. The residual of the equations are evaluated
at several sampling points (which the user can manipulate using the
method ``discretise_domain``) and the loss minimized by the neural
network is the sum of the residuals.
and provides the unknown field :math:`u`. By construction, it is zero on
the boundaries. The residuals of the equations are evaluated at several
sampling points (which the user can manipulate using the method
``discretise_domain``) and the loss minimized by the neural network is
the sum of the residuals.

.. code:: ipython3
@@ -114,6 +119,9 @@ network is the sum of the residuals.
        hard = x.extract(['x'])*(1-x.extract(['x']))*x.extract(['y'])*(1-x.extract(['y']))
        return hard*self.layers(x)
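
For reference, a complete hedged sketch of the module around the two
lines above (layer sizes and activation are assumptions):

.. code:: ipython3

    import torch

    class HardMLP(torch.nn.Module):
        """MLP whose output is multiplied by x(1-x)y(1-y), so the
        homogeneous Dirichlet conditions hold by construction."""

        def __init__(self):
            super().__init__()
            self.layers = torch.nn.Sequential(
                torch.nn.Linear(3, 40), torch.nn.Tanh(),   # inputs: x, y, t
                torch.nn.Linear(40, 40), torch.nn.Tanh(),
                torch.nn.Linear(40, 1),
            )

        def forward(self, x):
            # multiplier vanishing on the four spatial boundaries of [0, 1]^2
            hard = (x.extract(['x']) * (1 - x.extract(['x'])) *
                    x.extract(['y']) * (1 - x.extract(['y'])))
            return hard * self.layers(x)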
Train and Inference
-------------------

In this tutorial, the neural network is trained for 3000 epochs with a
learning rate of 0.001 (default in ``PINN``). Training takes
approximately 1 minute.
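
A hedged sketch of the collapsed training cell (the sampling size, mode,
and argument names are assumptions) follows; its output appears below:

.. code:: ipython3

    from pina import Trainer
    from pina.solvers import PINN

    problem = Wave()
    problem.discretise_domain(n=1000, mode='random', locations='all')

    pinn = PINN(problem, HardMLP())   # default learning rate 0.001
    trainer = Trainer(pinn, max_epochs=3000)
    trainer.train()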
@@ -128,10 +136,20 @@ approximately 1 minute.
.. parsed-literal::
GPU available: False, used: False
/u/n/ndemo/.local/lib/python3.9/site-packages/torch/cuda/__init__.py:546: UserWarning: Can't initialize NVML
warnings.warn("Can't initialize NVML")
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
IPU available: False, using: 0 IPUs
HPU available: False, using: 0 HPUs
Missing logger folder: /u/n/ndemo/PINA/tutorials/tutorial3/lightning_logs
2023-10-17 10:24:02.163746: I tensorflow/core/util/port.cc:110] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2023-10-17 10:24:02.218849: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
2023-10-17 10:24:07.063047: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
/opt/sissa/apps/intelpython/2022.0.2/intelpython/latest/lib/python3.9/site-packages/scipy/__init__.py:138: UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.26.0)
warnings.warn(f"A NumPy version >={np_minversion} and <{np_maxversion} is required for this version of "
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
| Name | Type | Params
----------------------------------------
@@ -144,18 +162,15 @@
0.002 Total estimated model params size (MB)
.. parsed-literal::
Epoch 2999: : 1it [00:00, 79.33it/s, v_num=5, mean_loss=0.00119, D_loss=0.00542, t0_loss=0.0017, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
.. parsed-literal::
`Trainer.fit` stopped: `max_epochs=3000` reached.
Training: 0it [00:00, ?it/s]
.. parsed-literal::
Epoch 2999: : 1it [00:00, 68.62it/s, v_num=5, mean_loss=0.00119, D_loss=0.00542, t0_loss=0.0017, gamma1_loss=0.000, gamma2_loss=0.000, gamma3_loss=0.000, gamma4_loss=0.000]
`Trainer.fit` stopped: `max_epochs=3000` reached.
Notice that the loss on the boundaries of the spatial domain is exactly
@@ -177,14 +192,13 @@ results using the ``Plotter`` class of **PINA**.
.. image:: tutorial_files/tutorial_12_0.png
.. image:: output_14_0.png



.. image:: tutorial_files/tutorial_12_1.png
.. image:: output_14_1.png



.. image:: tutorial_files/tutorial_12_2.png
.. image:: output_14_2.png

