
Implement non-binary Sparse Block Codes #153

Open
mikeheddes opened this issue Jul 20, 2023 · 5 comments

@mikeheddes
Member

A follow-up to the support for Binary Sparse Block Codes. See the discussion in #146.

@paddywardle

Hi, I'm new to torchhd but would be really interested in contributing. Is this problem still open? As I understand it, it involves implementing a data structure that holds vectors as sparse arrays and then implementing the MAP operations for that sparse vector data structure, is that right @mikeheddes?

@mikeheddes
Member Author

Hi, thank you for your interest in Torchhd. Yes, this feature has yet to be added to the library and as far as I'm aware no one is working on it yet.

One aspect that is somewhat unique about this model is that we have to decide whether to use torch.sparse tensors (which I suspect will be more efficient when working with small bundles of vectors) or dense tensors (which could be more efficient when working with large bundles of vectors). The main reason is that I'm not sure how to implement sparse circular convolution efficiently, i.e., in O(k log n) instead of O(k^2), for two sparse vectors of length n with k non-zero elements. With dense vectors we can use the fast Fourier transform to compute the circular convolution in O(n log n), so when n log n < k^2 the dense model should be more efficient.
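
To make the trade-off concrete, here is a minimal sketch of both strategies; the function names are illustrative and this is not an existing Torchhd API:

```python
import torch

def circular_convolution_dense(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """O(n log n) circular convolution of two dense vectors via the FFT."""
    n = a.shape[-1]
    return torch.fft.irfft(torch.fft.rfft(a) * torch.fft.rfft(b), n=n)

def circular_convolution_sparse(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Naive O(k^2) circular convolution of two 1-D sparse COO vectors of
    length n with k non-zero elements each: multiply every pair of non-zeros
    and accumulate the product at index (i + j) mod n."""
    n = a.shape[-1]
    a, b = a.coalesce(), b.coalesce()
    idx = (a.indices()[0].unsqueeze(1) + b.indices()[0].unsqueeze(0)).remainder(n)
    val = a.values().unsqueeze(1) * b.values().unsqueeze(0)
    out = torch.sparse_coo_tensor(idx.reshape(1, -1), val.reshape(-1), (n,))
    return out.coalesce()  # coalescing sums duplicate indices
```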

I think it is worth trying both ways and perhaps giving users an option to convert between them if it is indeed the case that one is more efficient than the other in different settings. A good first step, before starting the implementation in Torchhd as a new VSATensor, would be to benchmark the two approaches in isolation. For this, you can start with implementations of the random, bundle, and bind methods.
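
For example, a rough timing harness along these lines could be used, assuming the circular-convolution sketches above are in scope; all names here are placeholders, not Torchhd APIs:

```python
import time
import torch

def benchmark(fn, *args, repeats: int = 100) -> float:
    """Average wall-clock time of fn(*args) over `repeats` runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats

n = 10_000
for k in (16, 64, 256, 1024):
    indices = torch.randint(0, n, (1, k))
    values = torch.randint(0, 2, (k,)).float() * 2 - 1  # random ±1 values
    sparse = torch.sparse_coo_tensor(indices, values, (n,)).coalesce()
    dense = sparse.to_dense()
    t_dense = benchmark(circular_convolution_dense, dense, dense)
    t_sparse = benchmark(circular_convolution_sparse, sparse, sparse)
    print(f"k={k}: dense {t_dense:.2e}s, sparse {t_sparse:.2e}s")
```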

Here are some example PRs to help you get started:

If you have any questions don't hesitate to ask.

@paddywardle

Sounds like an interesting problem! I'll have a go.

Could it be useful to add conversions to and from the non-binary sparse array representation either way?

@paddywardle

Hi @mikeheddes,

So I've been having a think about this, and my thoughts for the implementation are that we could go down two routes for how to sparsify the arrays of hypervectors:

  1. Treat -1 as implicit (in the same way 0 is implicit in binary sparse arrays) and use one of the COO, CSR, or CSC formats; in this case we could use the torch.sparse implementations (see the sketch after this list).
  2. Another thought I had is that we could use run-length encoding (the same compression scheme that Parquet uses); then we wouldn't need to treat anything as implicit.
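
Here is a minimal sketch of what option 1 could look like: shift the bipolar vector by +1 so that -1 becomes the implicit zero that torch.sparse already handles (the shift trick is just one possible encoding, not an existing Torchhd convention):

```python
import torch

dense = torch.tensor([1., -1., -1., 1., -1., 1., -1., -1.])

# Shift so the implicit element becomes zero: -1 -> 0, +1 -> 2.
sparse = (dense + 1).to_sparse()
print(sparse.indices())  # positions of the +1 entries
print(sparse.values())   # all 2s

# Shift back to recover the original bipolar vector.
recovered = sparse.to_dense() - 1
assert torch.equal(recovered, dense)
```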

For the MAP operations this should be easy enough with torch.sparse. However, permutation is slightly annoying because torch.sparse doesn't let you modify indices in place, so we'd have to recreate the sparse tensor each time if we went this route.
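
For example, a cyclic shift could be done by rebuilding the COO tensor with shifted indices; a minimal sketch (the function name is illustrative):

```python
import torch

def permute_sparse(hv: torch.Tensor, shifts: int = 1) -> torch.Tensor:
    """Cyclically shift a 1-D sparse COO hypervector by rebuilding it with
    shifted indices, since the indices cannot be modified in place."""
    hv = hv.coalesce()
    n = hv.shape[-1]
    new_indices = (hv.indices() + shifts).remainder(n)
    return torch.sparse_coo_tensor(new_indices, hv.values(), hv.shape)
```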

I still need to do some benchmarks on circular convolution, but I guess the big-O would just depend on how much compression we get; we'd probably need to implement some thinning to compress further.

@mikeheddes
Member Author

Apologies for the delayed response. I think these are two good options for the implementation. I find the second one quite clever. One downside is that PyTorch doesn't support run-length encoding right now (as far as I know), which means that we would have to implement all the operations ourselves, whereas for the COO format many operations are already supported.

I would not worry about having to create a new sparse array for every operation; I believe PyTorch's default behavior is to create a new tensor for every operation anyway.

As for the thinning operation, I think we should provide it as a separate method on the inherited VSATensor class to give users more flexibility.
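
Something along these lines, as a standalone sketch (in Torchhd it would live as a method on the new VSATensor subclass; the name and the keep-top-k behavior are only assumptions about what thinning could look like):

```python
import torch

def thin(hv: torch.Tensor, k: int) -> torch.Tensor:
    """Keep only the k largest-magnitude elements of a dense hypervector
    and zero out the rest, increasing sparsity before conversion."""
    _, indices = torch.topk(hv.abs(), k, dim=-1)
    out = torch.zeros_like(hv)
    return out.scatter(-1, indices, hv.gather(-1, indices))
```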
