fixes to hsic.py #6

Open · wants to merge 2 commits into master
Conversation

@xvdp commented Sep 6, 2020

OK, this is the correct branch to create the pull request from -- same comment as the one I closed; this one has only one file.

Hi Kurt, I used three functions from hsic.py and noticed a few things:

You can allocate much less memory by using in-place operations. One shouldn't do that inside the model, but in the loss function it is fine, because the loss is the source of the gradient and simply copies the value into the grad.
In kernelmat() the result of exp(-D/v) is cast to a new FloatTensor; I removed that, so if X is on a CUDA device the kernel stays there and the device doesn't die. Recasting a FloatTensor also introduces floating-point errors -- not significant in this context, but I don't know how they would propagate. (A sketch of the change follows the function list below.)
I added a linting mute so my VS Code doesn't flag torch.* members as errors.

If you want to look at the reasoning, tests, and timing, I have a Jupyter notebook in my dev branch where I tested the changes:
https://github.com/xvdp/HSIC-bottleneck/blob/xvdp_dev/jupyter/HSIC_Kernel.ipynb
I only changed the three functions I was using:
distmat()
kernelmat()
hsic_normalized_cca()
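For reference, here is a minimal sketch of what the distmat()/kernelmat() change amounts to. The Gaussian-kernel form follows hsic.py, but the code below is illustrative of the fix (build everything on X's device, drop the FloatTensor recast), not the committed diff:

```python
import torch

def distmat(X):
    """Pairwise squared Euclidean distances between the rows of X."""
    r = torch.sum(X * X, dim=1, keepdim=True)   # (m, 1) squared norms
    a = X.mm(X.t())                              # Gram matrix, (m, m)
    return (r - 2.0 * a + r.t()).abs()           # abs clamps tiny negatives

def kernelmat(X, sigma):
    """Centered Gaussian kernel matrix that stays on X's device and dtype."""
    m = X.size(0)
    # Centering matrix built directly on X.device instead of the CPU default.
    H = torch.eye(m, device=X.device, dtype=X.dtype) \
        - torch.ones(m, m, device=X.device, dtype=X.dtype) / m
    variance = 2.0 * sigma * sigma * X.size(1)
    # No .type(torch.FloatTensor) recast: a CUDA input yields a CUDA kernel.
    Kx = torch.exp(-distmat(X) / variance)
    return Kx.mm(H)                              # centered kernel K H
```

The math is the standard RBF kernel with centering; the only intended behavioral change in this sketch is where the tensors live.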

xvdp added 2 commits September 6, 2020 12:12
distmat(): minimized memory allocation, added an option to remove the grad
kernelmat(): minimized memory allocation by using in-place ops, fixed devices so the output tensor is on the input device
hsic_normalized_cca(): reuse temporary tensors
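To make the third commit concrete, here is a sketch of the shape hsic_normalized_cca() takes when the regularizer is built once and reused for both inversions. It assumes the kernelmat() sketch above is in scope; the eps value and parameter names are illustrative, not the committed code:

```python
import torch

def hsic_normalized_cca(x, y, sigma, eps=1e-5):
    """Normalized HSIC between x and y (sketch), reusing temporaries."""
    m = x.size(0)
    Kxc = kernelmat(x, sigma)        # centered kernel of x (see sketch above)
    Kyc = kernelmat(y, sigma)        # centered kernel of y
    # Build the ridge term once and reuse it for both inversions.
    reg = eps * m * torch.eye(m, device=x.device, dtype=x.dtype)
    Rx = Kxc.mm(torch.inverse(Kxc + reg))    # Kxc (Kxc + eps*m*I)^-1
    Ry = Kyc.mm(torch.inverse(Kyc + reg))    # Kyc (Kyc + eps*m*I)^-1
    return torch.sum(Rx * Ry.t())            # scalar, differentiable loss term
```

Since this runs on the loss side rather than inside the model, the saving comes purely from fewer allocations; the gradient path through Kxc and Kyc is unchanged.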
@choasma (Owner) commented Sep 9, 2020

Oh, T-H-A-N-K-S!
I'll spend a few days after work testing your code and will confirm with you shortly.

@xvdp (Author) commented Sep 14, 2020 via email
