Support for random walk kernel #29

Open · wants to merge 9 commits into master

Conversation

@JuliusSchwartz commented Sep 29, 2021

Converting the work done in https://github.com/leojklarner/GProTorch/blob/kernels/gprotorch/kernels/graph_kernels/random_walk.py from PyTorch to TensorFlow, using https://www.jmlr.org/papers/volume11/vishwanathan10a/vishwanathan10a.pdf as a reference.

I have only implemented the eigendecomposition approach so far, meaning that one test for which GraKel uses conjugate gradients only passes if method_type="baseline" is specified as an argument to the constructor of GraKel's random walk kernel.

Tests with non-null values of p don't pass, although I suspect this might be due to a bug in GraKel (see ysig/GraKeL#71).
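For context, the eigendecomposition trick from the Vishwanathan et al. paper can be sketched in a few lines. This is a hedged NumPy illustration with uniform start/stop probabilities and illustrative names (random_walk_kernel, weight), not the PR's actual TensorFlow code:

import numpy as np

def random_walk_kernel(adj_1, adj_2, weight=0.1):
    # Geometric random walk kernel q^T (I - weight * W_x)^{-1} p, where
    # W_x = adj_1 (x) adj_2 is the Kronecker product of the two (symmetric)
    # adjacency matrices; weight must be small enough for the geometric
    # series over walk lengths to converge.
    eigvals_1, eigvecs_1 = np.linalg.eigh(adj_1)
    eigvals_2, eigvecs_2 = np.linalg.eigh(adj_2)
    # Uniform start/stop probabilities, rotated into each eigenbasis.
    start_1 = eigvecs_1.T @ np.full(adj_1.shape[0], 1.0 / adj_1.shape[0])
    start_2 = eigvecs_2.T @ np.full(adj_2.shape[0], 1.0 / adj_2.shape[0])
    # The eigenvalues of W_x are the pairwise products eigvals_1[i] * eigvals_2[j],
    # so (I - weight * W_x)^{-1} is diagonal in the joint eigenbasis and the
    # Kronecker product matrix never has to be formed explicitly.
    flanking = np.outer(start_1, start_2)
    diagonal = 1.0 / (1.0 - weight * np.outer(eigvals_1, eigvals_2))
    return float(np.sum(flanking ** 2 * diagonal))

The payoff is that the cost is two small eigendecompositions rather than inverting an (n1*n2) x (n1*n2) matrix, which is what makes this approach competitive with conjugate gradients.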

@@ -0,0 +1,14 @@
# Author: Henry Moss & Ryan-Rhys Griffiths
@Ryan-Rhys (Owner)

You can change the author name to yours!

@JuliusSchwartz (Author)

Haha was waiting for permission

# Author: Henry Moss & Ryan-Rhys Griffiths
"""
Molecule kernels for Gaussian Process Regression implemented in GPflow.
"""
@Ryan-Rhys (Owner)

Shall we change this module-level docstring?

@JuliusSchwartz (Author)

Probably
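For example, something along these lines (the wording is only a suggestion, not text from the PR):

"""
Molecule and graph kernels for Gaussian Process Regression implemented in GPflow.
"""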

@@ -0,0 +1,102 @@
# Author: Henry Moss & Ryan-Rhys Griffiths
@Ryan-Rhys (Owner)

Having a kernel_modules directory is probably a good idea.

@JuliusSchwartz (Author)

I'm happy for it to have another (less clumsy) name, but yeah, I think it was the only way for me to implement the kernel without contaminating pre-existing code.


from GP.kernel_modules.random_walk import RandomWalk

sys.path.append('/Users/juliusschwartz/Mystuff/FlowMO')
@Ryan-Rhys (Owner)

Any way to avoid this?

@JuliusSchwartz (Author)

I hope so xD This was absolutely a hack, and I completely forgot that I'd left a hardcoded file path in here. I'd definitely get this PR rejected at work.
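A sketch of one way to drop the absolute path (assuming the test file sits one directory below the FlowMO repository root; REPO_ROOT is an illustrative name):

import os
import sys

# Resolve the repository root relative to this file rather than
# hardcoding a machine-specific absolute path.
REPO_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
sys.path.append(REPO_ROOT)

from GP.kernel_modules.random_walk import RandomWalk  # noqa: E402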

from .kernel_utils import normalize


class RandomWalk(gpflow.kernels.Kernel):
@Ryan-Rhys (Owner)

Should we have some documentation for the class?

@JuliusSchwartz (Author)

Definitely. I suppose I hoped for the main logic to be looked at first, but honestly I should have made a separate feature branch and done multiple PRs onto that (one for the logic, one for documentation, etc.).
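A sketch of what the class documentation might look like (the description is an illustrative guess based on this PR's discussion, not its final docstring):

import gpflow

class RandomWalk(gpflow.kernels.Kernel):
    """Random walk kernel between graphs (Vishwanathan et al., 2010).

    Counts weighted matching walks between pairs of graphs by operating on
    the Kronecker product of their adjacency matrices, computed here via
    the eigendecomposition approach rather than conjugate gradients.
    """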

@Ryan-Rhys (Owner)

I'll just add Leo to the review since he coded up the random walk kernel in GProTorch.

@JuliusSchwartz (Author)

> I'll just add Leo to the review since he coded up the random walk kernel in GProTorch.

Good idea (I don't seem to be able to add people myself)

:param X: array of N graph objects (represented as adjacency matrices of varying sizes).
:return: N x 1 array.
"""
return tf.linalg.tensor_diag_part(self.K(X))
@JuliusSchwartz (Author)

Couldn't think of anything smarter than implementing this function like this
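One possible refinement (a sketch only; _single_kernel is a hypothetical per-pair helper, not a function in this PR) would be to evaluate just the N self-kernel terms instead of building the full N x N Gram matrix and discarding its off-diagonal entries:

import tensorflow as tf

# (method on the RandomWalk class)
def K_diag(self, X):
    """
    :param X: array of N graph objects (represented as adjacency matrices of varying sizes).
    :return: N x 1 array.
    """
    # Assumes a hypothetical self._single_kernel(adj_a, adj_b) returning one
    # scalar kernel value; stacking the N self-kernels avoids the O(N^2)
    # pairwise evaluations that self.K(X) performs.
    return tf.stack([self._single_kernel(adj, adj) for adj in X])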
