Converting torch mean and var tensors into multi-output posterior objectives for qNoisyExpectedHypervolumeImprovement #2020
-
I'm attempting to use `qNoisyExpectedHypervolumeImprovement` with a custom surrogate model. Before describing my issue, I noticed the following observation which I think provides a clue as to why my solution is not working: in the multi-objective Bayesian optimisation tutorial, the sampler generates samples of shape `sample_shape x batch_shape x q x m`, whereas the samples drawn from my own model's posterior are missing the batch dimension.
Here I use a toy model that takes the first two output dimensions as the mean and the next two as the variance, and I try to build a BoTorch posterior from these tensors; a sketch of this kind of model is below.
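A minimal sketch of that kind of toy model (the random network is a stand-in, and the softplus squashing is an assumption to keep the variances positive):

```python
import torch

def toy_network(X: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Stand-in network: maps `... x d` inputs to a `... x 4` output and splits it."""
    out = torch.randn(*X.shape[:-1], 4)
    mean = out[..., :2]                               # first two dims: objective means
    var = torch.nn.functional.softplus(out[..., 2:])  # next two dims: variances (positive)
    return mean, var

mean, var = toy_network(torch.rand(8, 3))
print(mean.shape, var.shape)  # torch.Size([8, 2]) torch.Size([8, 2])
```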
I then tried a second variant of the surrogate model.
When I use either of these surrogate models and pass them to `qNoisyExpectedHypervolumeImprovement`, I get an error at this line: https://github.com/pytorch/botorch/blob/main/botorch/acquisition/multi_objective/monte_carlo.py#L580. Based on the traceback, it seems the issue comes either from how my model constructs its posterior or from how samples are drawn from it.
-
Hi @hkenlay. Here's a toy example that shows the various tensor shapes we expect when working with qNEHVI; a sketch along those lines follows.
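A reconstruction in that spirit (not the original snippet; the `SingleTaskGP` surrogate and the specific sizes here are assumptions):

```python
import torch
from botorch.models import SingleTaskGP
from botorch.acquisition.multi_objective import qNoisyExpectedHypervolumeImprovement
from botorch.sampling.normal import SobolQMCNormalSampler

# Two-objective training data: n x d inputs, n x m observations.
train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = torch.rand(10, 2, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, 0.0],
    X_baseline=train_X,
    sampler=SobolQMCNormalSampler(sample_shape=torch.Size([16])),
)

# Candidates are evaluated in batch mode: batch_shape x q x d.
X = torch.rand(5, 2, 3, dtype=torch.double)
print(acqf(X).shape)  # torch.Size([5]) -- one acquisition value per batch element
```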
This is the case because the posterior does not have a batch shape in your example. The acquisition functions always use an explicit batch shape (via the `t_batch_mode_transform` decorator on `forward`), so the model is always queried with `batch_shape x q x d` inputs.
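To see the batch shape concretely (again a sketch with assumed sizes):

```python
import torch
from botorch.models import SingleTaskGP

train_X = torch.rand(10, 3, dtype=torch.double)
train_Y = torch.rand(10, 2, dtype=torch.double)
model = SingleTaskGP(train_X, train_Y)

# q x d input -> posterior without a batch shape: mean is q x m.
post = model.posterior(torch.rand(4, 3, dtype=torch.double))
print(post.mean.shape)  # torch.Size([4, 2])

# b x q x d input -> posterior with batch shape b: mean is b x q x m,
# and MC samples become sample_shape x b x q x m.
post = model.posterior(torch.rand(5, 4, 3, dtype=torch.double))
print(post.mean.shape)  # torch.Size([5, 4, 2])
print(post.rsample(torch.Size([16])).shape)  # torch.Size([16, 5, 4, 2])
```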
I am not fully sure which model class you should be using for your setup, so take the example above as a reference for the expected shapes rather than a recommendation.
-
Hello @saitcakmak, thank you for your detailed reply; I really appreciate your help. Although I think I understand your example, I'm still struggling to get my own toy example working.

The first question I have, from working through your example and putting debug flags throughout the BoTorch source code, is the following: the docstring of the `Model` base class states that the input of the `posterior` method is a `b x q x d`-dim tensor, and I'm not sure how those dimensions map onto my setting.

Thank you for your suggestion to implement the `posterior` method directly. It seems like the `GPyTorchPosterior` route requires a GPyTorch `MultivariateNormal`, which in turn expects a full covariance matrix that my model does not produce.

For my use case, I have a neural network which takes as input a batch of size `b` and outputs a mean tensor and a variance tensor for two objectives. The main goal here is to somehow go from these tensors to a posterior object that `qNoisyExpectedHypervolumeImprovement` can consume.
-
In BoTorch, we typically express the shapes of inputs to a model as `batch_shape x q x d`, where `batch_shape` can be empty, `q` is the number of points evaluated jointly, and `d` is the input dimension.
If your setup does not produce full-covariance posteriors, the effort to fit it into a GPyTorch MVN is probably unnecessary. One option for you would be to use the PyTorch MVN, which can represent independent batched posteriors quite easily. We have a `TorchPosterior` class that wraps a `torch.distributions.Distribution` for exactly this purpose.
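For instance, a minimal sketch of that route (sizes assumed):

```python
import torch
from torch.distributions import MultivariateNormal
from botorch.posteriors.torch import TorchPosterior

mean = torch.rand(5, 2)       # batch of 5 points, 2 objectives
var = torch.rand(5, 2) + 0.1  # per-objective variances
# Independent outputs: a diagonal covariance is enough.
mvn = MultivariateNormal(loc=mean, covariance_matrix=torch.diag_embed(var))
posterior = TorchPosterior(distribution=mvn)
print(posterior.rsample(torch.Size([16])).shape)  # torch.Size([16, 5, 2])
```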
-
Actually, you don't even need an MVN if it is all independent. You can just use the PyTorch Normal distribution: https://pytorch.org/docs/stable/distributions.html#normal
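The same sketch using `Normal` (again with assumed sizes):

```python
import torch
from torch.distributions import Normal
from botorch.posteriors.torch import TorchPosterior

mean = torch.rand(5, 2)
var = torch.rand(5, 2) + 0.1
# Normal broadcasts elementwise, so independence across outputs comes for free.
posterior = TorchPosterior(distribution=Normal(loc=mean, scale=var.sqrt()))
print(posterior.rsample(torch.Size([16])).shape)  # torch.Size([16, 5, 2])
```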
-
Thank you for your help and patience @saitcakmak. I took the PyTorch route as you suggested and managed to get a working solution. I needed to implement some functionality from the `Posterior` API which is not implemented by `TorchPosterior`. I also needed to pass a sampler to `qNoisyExpectedHypervolumeImprovement` because my new posterior class was not registered. For the benefit of anyone else who stumbles across this issue, the full code is below.
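A reconstruction in the spirit of that solution, not the original listing (`NormalPosterior`, `NetworkModel`, and the stand-in network are illustrative names; the `mean`/`variance` properties and the explicit `StochasticSampler` correspond to the points described above):

```python
import torch
from torch import Tensor
from torch.distributions import Normal
from botorch.acquisition.multi_objective import qNoisyExpectedHypervolumeImprovement
from botorch.models.model import Model
from botorch.posteriors.torch import TorchPosterior
from botorch.sampling.stochastic_samplers import StochasticSampler


class NormalPosterior(TorchPosterior):
    """TorchPosterior plus the pieces of the Posterior API qNEHVI touches."""

    @property
    def mean(self) -> Tensor:
        return self.distribution.mean

    @property
    def variance(self) -> Tensor:
        return self.distribution.variance


class NetworkModel(Model):
    """Wraps a network that outputs per-objective means and variances."""

    def __init__(self, network):
        super().__init__()
        self.network = network

    @property
    def num_outputs(self) -> int:
        return 2

    def posterior(self, X: Tensor, **kwargs) -> NormalPosterior:
        out = self.network(X)  # ... x q x 4
        mean, var = out[..., :2], out[..., 2:].exp()  # exp keeps variances positive
        return NormalPosterior(Normal(loc=mean, scale=var.sqrt()))


# Stand-in for a trained network; it just needs to map ... x d to ... x 4.
network = lambda X: torch.randn(*X.shape[:-1], 4)
model = NetworkModel(network)

acqf = qNoisyExpectedHypervolumeImprovement(
    model=model,
    ref_point=[0.0, 0.0],
    X_baseline=torch.rand(10, 3),
    # Pass the sampler explicitly: the custom posterior class is not
    # registered with BoTorch's sampler dispatcher.
    sampler=StochasticSampler(sample_shape=torch.Size([32])),
    cache_root=False,  # root caching assumes a GP posterior
)
print(acqf(torch.rand(5, 2, 3)).shape)  # torch.Size([5])
```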
-
Happy to help, and thanks for sharing your solution! I'll move this over to a discussion for future discoverability.