add pvqnn #1154
Conversation
Thank you for opening this pull request. You can find the built site at this link. Deployment Info:
Note: It may take several minutes for updates to this pull request to be reflected on the deployed site.
Cool technique and problem set up :)
I think the demo needs quite a bit of work before approval, however. Here are some general comments.
- The code generally needs a bit more explanation (especially the large code blocks) and some small comments in the code here and there would improve readability.
- Some of the statements about accuracies etc are incorrect at the moment I believe. Perhaps something went wrong in the build?
- Some parts of the text could be explained more clearly (see comments).
- It might be useful to have a read through some of the existing pennylane demos and try to adapt the style accordingly.
######################################################################
# Variational algorithms are proposed to solve optimization problems in chemistry, combinatorial
# optimization and machine learning, with potential quantum advantage. [#cerezo2021variational]_ Such algorithms operate by |
I think it is more common to put the reference before the full stop. i.e. like this [1].
######################################################################
# Variational algorithms are proposed to solve optimization problems in chemistry, combinatorial
# optimization and machine learning, with potential quantum advantage. [#cerezo2021variational]_ Such algorithms operate by
# first encoding data :math:`x` into a :math:`n`-qubit quantum state. The quantum state is then |
More general schemes involve layers of data encoding and trainable unitaries. Is this specific structure needed for this approach? Might be worth saying so, or phrasing things a bit more generally (or perhaps just inserting an 'often').
# optimization and machine learning, with potential quantum advantage. [#cerezo2021variational]_ Such algorithms operate by
# first encoding data :math:`x` into a :math:`n`-qubit quantum state. The quantum state is then
# transformed by an Ansatz :math:`U(\theta)`. The parameters :math:`\theta` are optimized by
# evaluating gradients of the quantum circuit via parameter-shift rules [#schuld2019evaluating]_ and calculating updates of the |
"typically via the parameter shift rule"? (sometimes SPSA, finite diff or other methods might be used)
# first encoding data :math:`x` into a :math:`n`-qubit quantum state. The quantum state is then
# transformed by an Ansatz :math:`U(\theta)`. The parameters :math:`\theta` are optimized by
# evaluating gradients of the quantum circuit via parameter-shift rules [#schuld2019evaluating]_ and calculating updates of the
# parameter via optimization on classical computers. |
I would just say "calculating updates of the parameter on a classical computer". The classical computer doesn't really solve an optimization problem, but rather the optimization is what is being done to the quantum circuit.
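As an illustrative aside on the parameter-shift rule under discussion: for a gate generated by a Pauli operator, the gradient of an expectation value is exact at two shifted evaluations. The sketch below uses a single-qubit toy model in plain NumPy, where :math:`E(\theta) = \cos\theta` stands in analytically for :math:`\langle Z \rangle` after an :math:`RX(\theta)` rotation (no quantum library is assumed):

```python
import numpy as np

# Toy model: |psi(theta)> = RX(theta)|0>, measured in Z, so E(theta) = cos(theta).
# The analytic expression stands in for running a real circuit.
def expectation(theta):
    return np.cos(theta)

def parameter_shift_grad(f, theta):
    # For gates generated by a Pauli operator the gradient is exact:
    # dE/dtheta = [E(theta + pi/2) - E(theta - pi/2)] / 2
    return (f(theta + np.pi / 2) - f(theta - np.pi / 2)) / 2

theta = 0.3
grad = parameter_shift_grad(expectation, theta)  # equals -sin(0.3) exactly
```

The two shifted circuit evaluations replace the ill-conditioned finite-difference step size, which is why the rule is the typical choice on hardware.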
# evaluating gradients of the quantum circuit via parameter-shift rules [#schuld2019evaluating]_ and calculating updates of the
# parameter via optimization on classical computers.
#
# However, many Ansatze face the barren plateau problem [#mcclean2018barren]_, which leads to difficulty in convergence |
Ansätze (German spelling since it is a German word)
# ---------------------
#
######################################################################
# When taking the strategy of observable construction, one additionally may want to use Ansatz quantum |
There is a bug in the formatting of the build here
ax.set_xticks(x + width / 2, locality)
ax.legend(loc="upper left", ncols=3)
plt.show()
I would remove the code/discussion around the number of parameters to keep things shorter and more concise.
######################################################################
# Upon obtaining our hybrid results, we may now combine these results with that of the observable
# construction and Ansatz expansion methods, and plot all the post-variational strategies together on
# a heatmap. |
a best accuracy of 62% for binary classification is quite low. Is this really the best that can be done? I was expecting something much closer to 100%.
We specifically chose a more difficult binary instance. If we chose the classification between 0 and 1, the accuracies for both methods would be very close to 100% and it would not serve as a good dataset for discerning performance between the two methods. We have switched to 2 and 6, which gives better figures.
######################################################################
# Our results show that all post-variational methods exceed the variational algorithm while using the
# same Ansatz for the Ansatz expansion and hybrid strategies. |
In the build the variational achieves 59% which is better than some of the post variational accuracies here though?
We expect post-variational strategies with lower degrees to be less expressive and therefore have worse performance. For example, the first order ansatz expansion is just one single variational gradient update.
######################################################################
# Conclusion
# ---------------------
#
I was expecting the conclusion to reinforce the main point of the intro, i.e. that variational q circuits are difficult to train and this approach gives a potential solution. but this isn't the message I am getting here.
demonstrations/tutorial_post-variational_quantum_neural_networks.metadata.json
Co-authored-by: Ivana Kurečić <[email protected]>
Title:
Post-Variational Quantum Neural Networks
Summary:
In this demo, we discuss “post-variational strategies”, where we take classical combinations of multiple fixed quantum circuits and find the optimal combination by feeding those combinations through a classical multilayer perceptron. We shift tunable parameters from the quantum computer to the classical computer, opting for ensemble strategies when optimizing quantum models.
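The ensemble idea in this summary can be sketched without any quantum library. Everything below is a hypothetical stand-in: `fixed_circuit_outputs` plays the role of expectation values from fixed, non-trainable circuits (a real implementation would evaluate distinct circuits on a device or simulator), and a single logistic layer stands in for the classical multilayer perceptron:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for K fixed (non-trainable) quantum circuits: each
# maps an input x to one expectation value in [-1, 1].
def fixed_circuit_outputs(x, K=4):
    phases = np.linspace(0.5, 2.0, K)
    return np.sin(phases * x)

# Toy binary task: classify the sign of x on (-pi, pi).
X = rng.uniform(-np.pi, np.pi, size=50)
y = (X > 0).astype(float)
feats = np.stack([fixed_circuit_outputs(x) for x in X])  # shape (50, 4)

# Classical combination: logistic regression (the simplest stand-in for an
# MLP) learns weights over the fixed circuit outputs by gradient descent,
# so all tunable parameters live on the classical side.
w, b = np.zeros(feats.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(feats @ w + b)))  # sigmoid
    grad = p - y                            # cross-entropy gradient
    w -= 0.1 * feats.T @ grad / len(X)
    b -= 0.1 * grad.mean()

accuracy = ((feats @ w + b > 0) == (y > 0.5)).mean()
```

The quantum circuits are queried only as fixed feature extractors; no circuit gradients are needed, which is the sense in which the approach sidesteps variational training.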
Relevant references:
P.-W. Huang, P. Rebentrost (2023). Post-variational quantum neural networks. arXiv:2307.10560 [quant-ph]
Possible Drawbacks:
Scalability.
Related GitHub Issues:
NA
If you are writing a demonstration, please answer these questions to facilitate the marketing process.
Introduce a new architecture for quantum machine learning.
Academic Researchers and Students, Quantum Technology enthusiasts
Quantum Machine Learning, Neural Networks, Post-Variational
(more details here)