
Partial exact inference, inspired by "Delayed Sampling and Automatic Rao–Blackwellization of Probabilistic Programs" and ProbZelus #224

Open
turion opened this issue Oct 26, 2022 · 3 comments

Comments

@turion (Collaborator)

turion commented Oct 26, 2022

While reading the ProbZelus paper, I became aware of techniques for partial exact inference in Bayesian networks using conjugate priors. This is described e.g. in https://arxiv.org/pdf/1708.07787.pdf, and the algorithm is improved in the ProbZelus paper. I believe that after #177, something similar can be implemented here.

Basically, we would need a way to specify a Bayesian network formally, through a monad interface. Probability distribution functions like normal etc. would then output not a sample of the distribution, but a formal variable that can later be used to condition. This cannot quite be implemented as a MonadCond; a more specialised version (one that equates two expressions in the monad instead of conditioning on an arbitrary boolean) would be needed. There is some discussion, and links to some initial work, in #144, #144 (comment), #144 (comment).

Some pseudocode sketch:

```haskell
data BayesianNetworkT m a = ...

type VarIndex = Int

-- Formal expressions standing in for random variables
data FormalVar
  = Constant Double
  | UniformDist VarIndex
  | NormalDist VarIndex FormalVar FormalVar
  | FormalVar :+: FormalVar
  | ...

newVar :: BayesianNetworkT m FormalVar

instance MonadSample (BayesianNetworkT m) where
  type Real (BayesianNetworkT m) = FormalVar

  random = newVar

observe :: BayesianNetworkT m FormalVar -> FormalVar -> BayesianNetworkT m ()
```

The observe function would have to recognise conjugate prior–likelihood pairs, perform the exact posterior update, and output a simplified network.
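To illustrate the kind of exact update observe could perform, here is a minimal, self-contained sketch of Normal–Normal conjugacy (a Normal prior on the mean of a Normal likelihood with known observation variance). The `Normal` type and `posterior` function are hypothetical names for illustration only, not part of monad-bayes:

```haskell
-- Hypothetical sketch: exact Normal-Normal conjugate update, the kind of
-- simplification `observe` would apply to a node of the formal network.
data Normal = Normal { mean :: Double, variance :: Double }
  deriving (Eq, Show)

-- | Exact posterior for a Normal prior on the mean of a Normal likelihood
-- with known observation variance, after observing one data point @x@.
posterior :: Normal -> Double -> Double -> Normal
posterior (Normal mu0 var0) obsVar x = Normal mu' var'
  where
    -- posterior precision is the sum of prior and likelihood precisions
    prec = 1 / var0 + 1 / obsVar
    var' = 1 / prec
    -- precision-weighted average of prior mean and observation
    mu'  = var' * (mu0 / var0 + x / obsVar)

main :: IO ()
main = print (posterior (Normal 0 1) 1 2)  -- Normal {mean = 1.0, variance = 0.5}
```

With prior N(0, 1) and an observation 2 under observation variance 1, the exact posterior is N(1, 0.5); in a delayed-sampling scheme, the NormalDist node would simply be replaced by this updated distribution instead of being sampled.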

@reubenharry (Contributor)

reubenharry commented Oct 26, 2022

Ah, I share your interest in investigating this. I'll have a read of the paper too. I've also thought about doing what you suggest above, but have never thought through the details.

I have four very general thoughts that are not totally related, but here seems like a place to put them:

  1. Have you seen https://arxiv.org/pdf/2112.13251.pdf? It is in a similar space (i.e. partial exact inference, reactive) and might interest you. I'd be interested in implementing it in Haskell, especially given the recent progress on reactive stuff.

  2. I wonder if your proposal here is related to how Gen (another PPL) handles probabilistic programs. My general sense is that the Gen people know what they're doing, and I know that some Gen-related techniques are used in the Haskell ProbZelus implementation, so I've been meaning to inspect it carefully.

  3. There's a talk here https://www.youtube.com/watch?v=xLgqx4DK49k&t=4s&ab_channel=ACMSIGPLAN about a different way to do observations. Mostly unrelated probably, but just noting it for good measure.

  4. I know that Hakaru, another PPL implemented in Haskell, handles conjugacy and various other things by keeping a static representation of the probabilistic program.
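As a toy illustration of what a static representation enables (this is not Hakaru's actual representation, and all names below are made up), a program can be kept as a symbolic expression that is inspected, simplified, or evaluated later, rather than being run eagerly:

```haskell
-- Hypothetical sketch: distribution calls return symbolic expressions
-- instead of samples, so conditioning/simplification can inspect them later.
import qualified Data.Map as Map

type VarIndex = Int

data Expr
  = Const Double
  | Var VarIndex          -- a not-yet-sampled random variable
  | Add Expr Expr
  deriving (Eq, Show)

-- | Evaluate an expression once concrete values exist for its variables;
-- 'Nothing' signals that some variable is still symbolic.
eval :: Map.Map VarIndex Double -> Expr -> Maybe Double
eval _   (Const c) = Just c
eval env (Var i)   = Map.lookup i env
eval env (Add a b) = (+) <$> eval env a <*> eval env b

main :: IO ()
main = print (eval (Map.fromList [(0, 1.5)]) (Add (Var 0) (Const 2)))
```

A conjugacy pass would pattern-match on such expressions (e.g. a Normal node whose mean is another Normal node) and rewrite them exactly, which is impossible once everything has been collapsed to concrete samples.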

@turion (Collaborator, author)

turion commented Feb 7, 2023

CC @idontgetoutmuch

@turion (Collaborator, author)

turion commented Feb 9, 2023
