[Feature] Add quantum information tools #16

Merged — 73 commits, merged Jun 6, 2024 (changes shown from 67 commits)

Commits
bde341e
first commit
inafergra Mar 22, 2024
b40cbb4
create spsa.py module
inafergra Mar 23, 2024
4af4a07
adding qinfo_tools to docs
inafergra Mar 28, 2024
d4e3fd6
adding python env for docs
inafergra Mar 28, 2024
9fdfe3a
Adding tests for qinfo_tools
inafergra Mar 28, 2024
0c42c06
adding comments and cleaning
inafergra Mar 28, 2024
63747b4
Merge branch 'pasqal-io:main' into qinfo_tools
inafergra Mar 28, 2024
c6ac29e
delete qinfo_tools jupyter notebook
inafergra Mar 28, 2024
edfb384
Not using vparams_tensors and other improvements
inafergra Mar 31, 2024
9d8ed71
Adding footnotes extension
inafergra Mar 31, 2024
d35bf5d
Adding docsutils to print docs plots
inafergra Mar 31, 2024
e7a5538
extend qng docs and add footnotes
inafergra Mar 31, 2024
41eb55d
Set n_qubits to 4
inafergra Mar 31, 2024
0710a80
Last updates
inafergra Mar 31, 2024
49641ae
n_qubits=3 to speed up docs
inafergra Apr 1, 2024
0206a4f
fixing linting and typing
inafergra Apr 1, 2024
3f10ba9
add functions to general namespace
inafergra Apr 1, 2024
695b0e2
linting
inafergra Apr 1, 2024
94dcf32
increase lr in test
inafergra Apr 3, 2024
cb263d5
add qinfo_tools
inafergra Apr 3, 2024
fca2f9c
Update docs/qinfo_tools/qng.md
inafergra Apr 11, 2024
ff2333a
Update docs/qinfo_tools/qng.md
inafergra Apr 11, 2024
d3d5851
Update qadence_libs/qinfo_tools/qfi.py
inafergra Apr 11, 2024
1c2fb17
Update qadence_libs/qinfo_tools/qfi.py
inafergra Apr 11, 2024
f22b534
Update qadence_libs/qinfo_tools/qfi.py
inafergra Apr 11, 2024
c3b129e
delete intermediate variable
inafergra Apr 11, 2024
fddb9fd
dict type fix
inafergra Apr 11, 2024
22bad08
delete intermediate variable
inafergra Apr 11, 2024
38cc7ba
docs reference and acronym
inafergra Apr 11, 2024
f9a27c1
Fix docstring
inafergra Apr 11, 2024
c93773f
simplify python code
inafergra Apr 11, 2024
c549fde
fix more docstrings
inafergra Apr 11, 2024
e1db4a8
fix return type
inafergra Apr 11, 2024
78b7cd3
fix docstrings and linting
inafergra Apr 12, 2024
22db86c
specify list type
inafergra Apr 12, 2024
ca975bf
add jacobian_var
inafergra Apr 14, 2024
cd92e07
delete intermediate variable
inafergra Apr 17, 2024
4cc414b
not parametrizing for a single value in pytest
inafergra Apr 19, 2024
a3204a0
moved common functions to fixtures
inafergra Apr 19, 2024
fa93d70
Added checks and docstrings to the optims
inafergra Apr 19, 2024
46f04e7
lint and format
inafergra Apr 19, 2024
ef69e76
using torch.bernoulli instead of np.choice
inafergra Apr 29, 2024
ea56729
checking for existence of vparams_values only once
inafergra Apr 29, 2024
6b3e53c
delete unused import
inafergra Apr 29, 2024
0aa8195
change FM to Fourier
inafergra Apr 29, 2024
4e7753a
Update qadence_libs/qinfo_tools/qfi.py
inafergra Apr 29, 2024
cfc4a1c
typos
inafergra Apr 29, 2024
f6e6f38
Update qadence_libs/qinfo_tools/qng.py
inafergra Apr 29, 2024
3f3fdce
Update qadence_libs/qinfo_tools/qng.py
inafergra Apr 29, 2024
4ad9f4a
rename
inafergra Apr 29, 2024
fc8e967
return type
inafergra Apr 29, 2024
763ca2d
made checks more readable
inafergra Apr 29, 2024
e287411
add description for return type
inafergra Apr 29, 2024
da28b66
renamed optimizers
inafergra Apr 29, 2024
ed37bc1
delete type ignore
inafergra Apr 29, 2024
d95e6d2
add FisherApproximation type
inafergra May 8, 2024
c9f60ac
substituting inverser with least squares
inafergra May 8, 2024
fafb567
optimizers now takes model and not circuit
inafergra May 16, 2024
c711010
qfi functions now expect a dictionary
inafergra May 21, 2024
61c2df6
use state_dict in optimizer
inafergra May 26, 2024
2489a7b
make vparam_dict optional
inafergra May 26, 2024
1a944b5
update tests
inafergra May 26, 2024
d3f4efe
update docs
inafergra May 26, 2024
61ff518
increase iterations in test
inafergra May 26, 2024
a099f0f
add sqrt test
inafergra May 27, 2024
29cf30d
reset vparams in conftest
inafergra May 27, 2024
d4cc5e1
small changes
inafergra May 27, 2024
32c443d
Update docs/qinfo_tools/qng.md
inafergra Jun 4, 2024
9c46894
Addressed last comments
inafergra Jun 4, 2024
06e21be
Change citation style
inafergra Jun 6, 2024
61fad2a
bump version
inafergra Jun 6, 2024
cb08371
Add name to authors
inafergra Jun 6, 2024
394f749
fix linting
inafergra Jun 6, 2024
11 changes: 11 additions & 0 deletions docs/docsutils.py
@@ -0,0 +1,11 @@
from __future__ import annotations

from io import StringIO

from matplotlib.figure import Figure


def fig_to_html(fig: Figure) -> str:
    """Render a matplotlib Figure as an inline SVG string for embedding in the docs."""
    buffer = StringIO()
    fig.savefig(buffer, format="svg")
    return buffer.getvalue()
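
This helper is called at the end of executable docs blocks to embed the current figure as inline SVG, as in `qng.md` below: `print(docsutils.fig_to_html(plt.gcf()))`.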
19 changes: 19 additions & 0 deletions docs/environment.yml
@@ -0,0 +1,19 @@
name: readthedocs
channels:
  - defaults
dependencies:
  - python=3.10
  - python-graphviz
  - pip
  - pip:
      - markdown-exec
      - mkdocs-exclude
      - mkdocs-jupyter
      - mkdocs-material
      - mkdocs-section-index==0.3.6
      - mkdocs==1.5.2
      - mkdocstrings
      - mkdocstrings-python
      - -e ../
      - pulser>=0.12.0
      - amazon-braket-sdk
184 changes: 184 additions & 0 deletions docs/qinfo_tools/qng.md
@@ -0,0 +1,184 @@
# The Quantum Natural Gradient optimizer

Qadence-libs provides a set of optimizers based on quantum information tools, in particular on the Quantum Fisher Information[^1] (QFI). The Quantum Natural Gradient[^2] (QNG) is a gradient-based optimizer that uses the QFI matrix to better navigate the descent towards the minimum of the loss landscape. The parameter update rule for the QNG optimizer reads:

$$
\theta_{t+1} = \theta_t - \eta g^{-1}(\theta_t)\nabla \mathcal{L}(\theta_t)
$$

where $g(\theta)$ is the Fubini-Study metric tensor (also known as the Quantum Geometric Tensor), which is equivalent to the Quantum Fisher Information matrix $F(\theta)$ up to a constant factor, $F(\theta) = 4g(\theta)$. The QFI can be written as the Hessian of the fidelity of a quantum state:

$$
F_{i j}(\theta)=-\left.2 \frac{\partial}{\partial \theta_i} \frac{\partial}{\partial \theta_j}\left|\left\langle\psi\left(\theta^{\prime}\right) \mid \psi(\theta)\right\rangle\right|^2\right|_{{\theta}^{\prime}=\theta}
$$

However, computing the above expression is costly, as it scales quadratically with the number of parameters in the variational quantum circuit, so approximate methods are commonly used when dealing with the QFI matrix. Qadence provides an SPSA-based implementation of the Quantum Natural Gradient[^3]. [SPSA](https://www.jhuapl.edu/spsa/) (Simultaneous Perturbation Stochastic Approximation) is a well-known stochastic optimization algorithm based on finite differences. QNG-SPSA constructs an iterative approximation to the QFI matrix with a constant number of circuit evaluations per iteration, independent of the number of parameters. Although SPSA yields only a rough approximation of the QFI matrix, QNG-SPSA has been shown to work well while remaining very efficient thanks to its constant overhead in circuit evaluations (only 6 extra evaluations per iteration).
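
Schematically, following Gacon et al.[^3], each iteration draws two random perturbation directions $\Delta_1, \Delta_2$ and builds a rank-2 point estimate of the QFI from a few fidelity evaluations at shifted parameters (a sketch of the idea only; the exact prefactors and smoothing schedule are implementation details):

$$
\widehat{F}_k \propto -\frac{\delta F}{2\epsilon^2}\,\frac{\Delta_1\Delta_2^\top + \Delta_2\Delta_1^\top}{2},
\qquad
\delta F = F(\epsilon\Delta_1 + \epsilon\Delta_2) - F(\epsilon\Delta_1) - F(-\epsilon\Delta_1 + \epsilon\Delta_2) + F(-\epsilon\Delta_1)
$$

where $F(\delta) = |\langle\psi(\theta)\mid\psi(\theta+\delta)\rangle|^2$ is the fidelity between the current state and the state at shifted parameters. A running average over iterations, $\bar{F}_k = \frac{k}{k+1}\bar{F}_{k-1} + \frac{1}{k+1}\widehat{F}_k$, progressively refines the approximation.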

In this tutorial, we use the QNG and QNG-SPSA optimizers with the Quantum Circuit Learning algorithm, a variational quantum algorithm which uses Quantum Neural Networks as universal function approximators.

Keep in mind that only *circuit* parameters can be optimized with the QNG optimizer, since the QFI matrix is only defined for parameters contained in the circuit. If your model holds other trainable, non-circuit parameters, such as scalings or shifts of the input/output, another optimizer must be used to optimize those parameters.
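
As a minimal, hypothetical sketch of such a mixed setup (not an actual qadence-libs recipe: `circuit_params`, `other_params`, `loss_fn` and `n_epochs` are assumed to be defined, with the parameter groups split by hand), one optimizer can be run per group:

```python
# Hypothetical sketch: one optimizer per parameter group.
qng_opt = QuantumNaturalGradient(circuit_params, model=model, lr=0.1)
adam_opt = torch.optim.Adam(other_params, lr=0.01)

for epoch in range(n_epochs):
    qng_opt.zero_grad()
    adam_opt.zero_grad()
    loss = loss_fn(model(values=x_train).squeeze(), y_train)
    loss.backward()
    qng_opt.step()   # circuit parameters, preconditioned by the QFI
    adam_opt.step()  # remaining non-circuit parameters
```
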
```python exec="on" source="material-block" html="1" session="main"
import torch
from torch.utils.data import random_split
import random
import matplotlib.pyplot as plt

from qadence import QuantumCircuit, QNN, FeatureParameter
from qadence import kron, tag, hea, RX, Z, hamiltonian_factory

from qadence_libs.qinfo_tools import QuantumNaturalGradient
from qadence_libs.types import FisherApproximation
```

First, we prepare the Quantum Circuit Learning data. In this case, we will fit a simple one-dimensional $\sin(x)$ function:
```python exec="on" source="material-block" html="1" session="main"
# Ensure reproducibility
seed = 0
torch.manual_seed(seed)
random.seed(seed)

# Create dataset
def qcl_training_data(
    domain: tuple = (0, 2 * torch.pi), n_points: int = 200
) -> tuple[torch.Tensor, torch.Tensor]:
    start, end = domain

    x_rand, _ = torch.sort(torch.DoubleTensor(n_points).uniform_(start, end))
    y_rand = torch.sin(x_rand)

    return x_rand, y_rand


x, y = qcl_training_data()

# random train/test split of the dataset
train_subset, test_subset = random_split(x, [0.75, 0.25])
train_ind = sorted(train_subset.indices)
test_ind = sorted(test_subset.indices)

x_train, y_train = x[train_ind], y[train_ind]
x_test, y_test = x[test_ind], y[test_ind]
```

We now create the base Quantum Circuit that we will use with all the optimizers:
```python exec="on" source="material-block" html="1" session="main"
n_qubits = 3

# create a simple feature map to encode the input data
feature_param = FeatureParameter("phi")
feature_map = kron(RX(i, feature_param) for i in range(n_qubits))
feature_map = tag(feature_map, "feature_map")

# create a digital-analog variational ansatz using Qadence convenience constructors
ansatz = hea(n_qubits, depth=n_qubits)
ansatz = tag(ansatz, "ansatz")

# Observable
observable = hamiltonian_factory(n_qubits, detuning=Z)
```

## Optimizers

We will experiment with three different optimizers: Adam, QNG, and QNG-SPSA. To compare them, we will create a single `QNN` model (a subclass of `QuantumModel`) and reset the values of its variational parameters before each training loop so that all optimizers share the same starting point.

```python exec="on" source="material-block" html="1" session="main"
# Build circuit and model
circuit = QuantumCircuit(n_qubits, feature_map, ansatz)
model = QNN(circuit, [observable])

# Loss function
mse_loss = torch.nn.MSELoss()

# Initial parameter values
initial_params = torch.rand(model.num_vparams)
```

We can now train the model with each of the three optimizers:

### Adam
```python exec="on" source="material-block" html="1" session="main"
# Train with ADAM
n_epochs_adam = 20
lr_adam = 0.1

model.reset_vparams(initial_params)
optimizer = torch.optim.Adam(model.parameters(), lr=lr_adam)

loss_adam = []
for i in range(n_epochs_adam):
    optimizer.zero_grad()
    loss = mse_loss(model(values=x_train).squeeze(), y_train.squeeze())
    loss_adam.append(float(loss))
    loss.backward()
    optimizer.step()
```

### QNG
```python exec="on" source="material-block" html="1" session="main"
# Train with QNG
n_epochs_qng = 20
lr_qng = 0.1

model.reset_vparams(initial_params)
optimizer = QuantumNaturalGradient(
    model.parameters(),
    lr=lr_qng,
    approximation=FisherApproximation.EXACT,
    model=model,
    beta=0.1,
)

loss_qng = []
for i in range(n_epochs_qng):
    optimizer.zero_grad()
    loss = mse_loss(model(values=x_train).squeeze(), y_train.squeeze())
    loss_qng.append(float(loss))
    loss.backward()
    optimizer.step()
```
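
Here `beta` is presumably a regularization constant added to the QFI matrix before the natural-gradient update is solved, keeping the update numerically stable when the QFI is singular or ill-conditioned; check the `QuantumNaturalGradient` docstring for its exact semantics.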

### QNG-SPSA
```python exec="on" source="material-block" html="1" session="main"
# Train with QNG-SPSA
n_epochs_qng_spsa = 20
lr_qng_spsa = 0.01

model.reset_vparams(initial_params)
optimizer = QuantumNaturalGradient(
    model.parameters(),
    lr=lr_qng_spsa,
    approximation=FisherApproximation.SPSA,
    model=model,
    beta=0.1,
    epsilon=0.01,
)

loss_qng_spsa = []
for i in range(n_epochs_qng_spsa):
    optimizer.zero_grad()
    loss = mse_loss(model(values=x_train).squeeze(), y_train.squeeze())
    loss_qng_spsa.append(float(loss))
    loss.backward()
    optimizer.step()
```
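
In the SPSA variant, `epsilon` presumably controls the finite-difference shift used by the stochastic perturbations of the QFI estimator (see the sketch above); smaller shifts reduce bias but make the individual estimates noisier.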

## Plotting

We now plot the losses corresponding to each of the optimizers:
```python exec="on" source="material-block" html="1" session="main"
# Plot losses
fig, _ = plt.subplots()
plt.plot(range(n_epochs_adam), loss_adam, label="Adam optimizer")
plt.plot(range(n_epochs_qng), loss_qng, label="QNG optimizer")
plt.plot(range(n_epochs_qng_spsa), loss_qng_spsa, label="QNG-SPSA optimizer")
plt.legend()
plt.xlabel("Training epochs")
plt.ylabel("Loss")

from docs import docsutils # markdown-exec: hide
print(docsutils.fig_to_html(plt.gcf())) # markdown-exec: hide
```

## References
[^1]: [Meyer, J.](https://quantum-journal.org/papers/q-2021-09-09-539/) - Fisher Information in Noisy Intermediate-Scale Quantum Applications
[^2]: [Stokes et al.](https://quantum-journal.org/papers/q-2020-05-25-269/) - Quantum Natural Gradient
[^3]: [Gacon et al.](https://arxiv.org/abs/2103.09232) - Simultaneous Perturbation Stochastic Approximation of the Quantum Fisher Information
5 changes: 4 additions & 1 deletion mkdocs.yml
@@ -4,7 +4,9 @@ repo_name: "qadence_libs"

nav:
  - Overview: index.md
  - Sample page: sample_page.md
  # - Sample page: sample_page.md
  - Quantum information tools:
      - Quantum Natural Gradient: qinfo_tools/qng.md

theme:
  name: material
@@ -34,6 +36,7 @@ theme:
        name: Switch to light mode

markdown_extensions:
  - footnotes
  - admonition # for notes
  - pymdownx.arithmatex: # for mathjax
      generic: true
1 change: 1 addition & 0 deletions qadence_libs/__init__.py
@@ -16,6 +16,7 @@

list_of_submodules = [
    ".constructors",
    ".qinfo_tools",
]

__all__ = []
11 changes: 11 additions & 0 deletions qadence_libs/qinfo_tools/__init__.py
@@ -0,0 +1,11 @@
from __future__ import annotations

from .qfi import get_quantum_fisher, get_quantum_fisher_spsa
from .qng import QuantumNaturalGradient

# Modules to be automatically added to the qadence namespace
# Modules to be automatically added to the qadence namespace
__all__ = [
    "QuantumNaturalGradient",
    "get_quantum_fisher",
    "get_quantum_fisher_spsa",
]
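
Since `.qinfo_tools` is appended to `list_of_submodules` in `qadence_libs/__init__.py` (see above), these names should — assuming the auto-export mechanism treats the new submodule the same way as `.constructors` — become importable directly from the top-level package:

```python
from qadence_libs import QuantumNaturalGradient, get_quantum_fisher, get_quantum_fisher_spsa
```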