
Commit

Sync the changes from probcomp/gen-quickstart/pull/90
horizon-blue committed Jan 11, 2024
1 parent c93ca70 commit 14d8ebc
Showing 9 changed files with 21 additions and 21 deletions.
2 changes: 1 addition & 1 deletion _posts/2020-09-01-bounds.md
@@ -82,7 +82,7 @@ This result seems to make sense qualitatively.
## Applying variational inference

Let's now try to apply variational inference to this problem.
-We will use Gen's support for [black box variational inference](https://www.gen.dev/dev/ref/vi/#Black-box-variational-inference-1), which is a class of algorithms introduced by Rajesh Ranganath et al. in a [2013 paper](https://arxiv.org/abs/1401.0118) that requires only the ability to evaluate the unnormalized log probability density of the model.
+We will use Gen's support for [black box variational inference](https://www.gen.dev/docs/dev/ref/vi/#Black-box-variational-inference-1), which is a class of algorithms introduced by Rajesh Ranganath et al. in a [2013 paper](https://arxiv.org/abs/1401.0118) that requires only the ability to evaluate the unnormalized log probability density of the model.
Gen lets you apply black box variational inference using variational approximating families that are themselves defined as probabilistic programs.

The first step is to write the probabilistic program that defines the variational approximating family that we will optimize to match the posterior as closely as possible.
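The rest of the hunk is elided; for orientation, a variational family in Gen is itself a `@gen` function whose trainable parameters are declared with `@param`. A minimal sketch (hypothetical names, not the post's actual family):

```julia
using Gen

# A hypothetical mean-field Gaussian family over a single latent
# address :x. Parameterizing the standard deviation on the log scale
# keeps the trained parameter unconstrained.
@gen function approx()
    @param mu::Float64
    @param log_std::Float64
    x ~ normal(mu, exp(log_std))
end

# Trainable parameters must be initialized before optimization.
init_param!(approx, :mu, 0.0)
init_param!(approx, :log_std, 0.0)
```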
2 changes: 1 addition & 1 deletion ecosystem.md
@@ -54,4 +54,4 @@ Probability distributions and involutive MCMC kernels on orientations and rotati
Wrapper for employing the [Redner](https://github.com/BachiLi/redner) differentiable renderer in Gen generative models.

#### [GenTraceKernelDSL](https://github.com/probcomp/GenTraceKernelDSL.jl)
-An alternative interface to defining [trace translators](https://www.gen.dev/dev/ref/trace_translators/).
+An alternative interface to defining [trace translators](https://www.gen.dev/docs/dev/ref/trace_translators/).
8 changes: 4 additions & 4 deletions tutorials/data-driven-proposals/tutorial.md
@@ -513,7 +513,7 @@ Run inference using Gen's built-in importance resampling implementation. Use

To see how to use the built-in importance resampling function, run
```?Gen.importance_resampling``` or check out the
-[documentation](https://www.gen.dev/dev/ref/importance/#Gen.importance_resampling).
+[documentation](https://www.gen.dev/docs/dev/ref/importance/#Gen.importance_resampling).

We have provided some starter code.
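The starter code itself is elided from this hunk; as a sketch of the call shape only (the model, arguments, and observations below are placeholders, not the tutorial's starter code):

```julia
using Gen

# Placeholder inference routine: `model` is a generative function and
# `observations` a choice map constraining the observed addresses.
function do_inference(model, args::Tuple, observations::ChoiceMap,
                      amount_of_computation::Int)
    # Propose `amount_of_computation` weighted traces from the model's
    # internal proposal and keep one, selected in proportion to its
    # importance weight.
    (trace, lml_est) = importance_resampling(
        model, args, observations, amount_of_computation)
    return trace
end
```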

@@ -780,7 +780,7 @@ num_y_bins = 5;

We will propose the x-coordinate of the destination from a
-[piecewise_uniform](https://www.gen.dev/dev/ref/distributions/#Gen.piecewise_uniform)
+[piecewise_uniform](https://www.gen.dev/docs/dev/ref/distributions/#Gen.piecewise_uniform)
distribution, where we set higher probability for certain bins based on the
heuristic described above and use a uniform continuous distribution for the
coordinate within a bin. The `compute_bin_probs` function below computes the
@@ -1075,7 +1075,7 @@ end;

Our choice of the `score_high` value of 5. was somewhat arbitrary. To use a
more informed value, we can make `score_high` into a [*trainable
-parameter*](https://www.gen.dev/dev/ref/gfi/#Trainable-parameters-1)
+parameter*](https://www.gen.dev/docs/dev/ref/gfi/#Trainable-parameters-1)
of the generative function. Below, we write a new version of the proposal
function that makes `score_high` trainable. However, the optimization
algorithms we will use for training work best with *unconstrained* parameters
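The rewritten proposal is elided from the hunk; the transformation it describes can be sketched like this (the function name and arguments are assumptions, and the body is abbreviated):

```julia
# Sketch: train log(score_high) instead of score_high itself, so that
# gradient steps operate on an unconstrained real value and the
# exponentiated score stays positive by construction.
@gen function custom_dest_proposal_trainable(measurements, scene)
    @param log_score_high::Float64
    score_high = exp(log_score_high)
    # ... rest of the proposal as in the tutorial, using score_high ...
end
```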
@@ -1184,7 +1184,7 @@ end;
Next, we choose the type of optimization algorithm we will use for training. Gen
supports a set of gradient-based optimization algorithms (see [Optimizing
Trainable
-Parameters](https://www.gen.dev/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).
+Parameters](https://www.gen.dev/docs/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).
Here we will use gradient descent with a fixed step size of 0.001.
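A sketch of constructing such an update (assumes `using Gen`; the proposal name is carried over from the sketch above):

```julia
# Fixed-step gradient descent applied to all trainable parameters of
# the proposal; 0.001 is the step size mentioned in the text.
update = ParamUpdate(FixedStepGradientDescent(0.001),
                     custom_dest_proposal_trainable)
```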


2 changes: 1 addition & 1 deletion tutorials/intro-to-modeling/tutorial.md
@@ -771,7 +771,7 @@ Write an inference program that generates traces of `sine_model` that explain th
What if we want to predict `ys` given `xs`?

Using the API method
-[`generate`](https://www.gen.dev/dev/ref/gfi/#Gen.generate), we
+[`generate`](https://www.gen.dev/docs/dev/ref/gfi/#Gen.generate), we
can generate a trace of a generative function in which the values of certain
random choices are constrained to given values. The constraints are a choice
map that maps the addresses of the constrained random choices to their
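The hunk cuts off mid-sentence above; the call it documents has roughly this shape (the constraint addresses here are illustrative and must match the model's own addressing):

```julia
using Gen

# Constrain the observed ys while generating all other choices.
constraints = choicemap()
for (i, y) in enumerate(ys)
    constraints[(:y, i)] = y   # illustrative addressing
end

# `weight` reflects how well the constrained choices fit the model.
(trace, weight) = generate(sine_model, (xs,), constraints)
```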
2 changes: 1 addition & 1 deletion tutorials/iterative-inference/tutorial.md
@@ -1097,7 +1097,7 @@ For example, let's say we wanted to take a trace and assign each point's
`is_outlier` score to the most likely possibility. We can do this by
iterating over both possible traces, scoring them, and choosing the one with
the higher score. We can do this using Gen's
-[`update`](https://www.gen.dev/dev/ref/gfi/#Update-1) function,
+[`update`](https://www.gen.dev/docs/dev/ref/gfi/#Update-1) function,
which allows us to manually update a trace to satisfy some constraints:
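The code that follows this colon is elided from the hunk; a rough sketch of the pattern (the `:data => i => :is_outlier` addressing is an assumption about the tutorial's model):

```julia
using Gen

# Try both settings of one point's is_outlier flag and keep the trace
# with the higher update weight (both updates start from the same
# trace, so comparing weights compares the new joint densities).
function best_outlier_setting(trace, i::Int)
    args = get_args(trace)
    argdiffs = map(_ -> NoChange(), args)
    best_trace, best_weight = trace, -Inf
    for flag in (false, true)
        constraints = choicemap((:data => i => :is_outlier, flag))
        (new_trace, weight, _, _) = update(trace, args, argdiffs, constraints)
        if weight > best_weight
            best_trace, best_weight = new_trace, weight
        end
    end
    return best_trace
end
```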


12 changes: 6 additions & 6 deletions tutorials/particle-filtering/tutorial.md
@@ -45,12 +45,12 @@ We show how Gen's support for SMC integrates with its support for MCMC, enabling
"bearings only tracking" problem described in [4].

This notebook will also introduce you to the
-[`Unfold`](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1) combinator,
+[`Unfold`](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1) combinator,
which can be used to improve performance of SMC.
`Unfold` is just one example of the levers that Gen provides for
improving performance; once you understand it, you can check
Gen's documentation to see how similar principles apply to the
-[`Map`](https://www.gen.dev/dev/ref/combinators/#Map-combinator-1) combinator
+[`Map`](https://www.gen.dev/docs/dev/ref/combinators/#Map-combinator-1) combinator
and to the static DSL. (These features are also covered in the previous tutorial,
[Scaling with Combinators and the Static Modeling Language](../scaling-with-combinators-new/tutorial).)
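To give a flavor before the tracking model appears, here is a minimal, self-contained `Unfold` example (a Gaussian random walk, not the tutorial's model):

```julia
using Gen

# The kernel takes the step index and the previous state, makes its
# random choices, and returns the new state.
@gen function walk_kernel(t::Int, x_prev::Float64)
    x ~ normal(x_prev, 1.0)
    return x
end

# Unfold chains T applications of the kernel, threading the state.
chain = Unfold(walk_kernel)

@gen function walk_model(T::Int)
    states ~ chain(T, 0.0)
    return states
end
```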

@@ -238,7 +238,7 @@ sample of `num_samples` traces from the weighted collection that the particle
filter produces.

Gen provides methods for initializing and updating the state of a particle
-filter, documented in [Particle Filtering](https://www.gen.dev/dev/ref/pf/).
+filter, documented in [Particle Filtering](https://www.gen.dev/docs/dev/ref/pf/).

- `Gen.initialize_particle_filter`
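The method list is truncated by the hunk; the loop those methods form looks roughly like this (the model, observation addressing, and threshold are assumptions; see also the argdiff discussion in the next hunk):

```julia
using Gen

# Skeletal particle filter over a sequence of measurements. The
# observation address :z => t and the ess threshold are assumptions.
function run_particle_filter(model, measurements::Vector{Float64},
                             num_particles::Int)
    init_obs = choicemap((:z => 1, measurements[1]))
    state = initialize_particle_filter(model, (1,), init_obs, num_particles)
    for t in 2:length(measurements)
        # Resample only when the effective sample size drops too low.
        maybe_resample!(state, ess_threshold=num_particles / 2)
        obs = choicemap((:z => t, measurements[t]))
        # UnknownChange(): we make no claim about how the argument changed.
        particle_filter_step!(state, (t,), (UnknownChange(),), obs)
    end
    return sample_unweighted_traces(state, num_particles)
end
```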

@@ -300,7 +300,7 @@ and then we introduce one additional bearing measurement by calling
- The new arguments to the generative function for this step. In our case,
this is the number of measurements beyond the first measurement.

-- The [argdiff](https://www.gen.dev/dev/ref/gfi/#Argdiffs-1)
+- The [argdiff](https://www.gen.dev/docs/dev/ref/gfi/#Argdiffs-1)
value, which provides detailed information about the change to the
arguments between the previous step and this step. We will revisit this
value later. For now, we indicate that we do not know how the `T::Int`
@@ -645,7 +645,7 @@ body whenever performing a trace update. This allows the built-in modeling
DSL to be very flexible and to have a simple implementation, at the cost of
performance. There are several ways of improving performance after one has a
prototype written in the built-in modeling DSL. One of these is [Generative
-Function Combinators](https://www.gen.dev/dev/ref/combinators/), which make
+Function Combinators](https://www.gen.dev/docs/dev/ref/combinators/), which make
the flow of information through the generative process more explicit to Gen,
and enable asymptotically more efficient inference programs.

@@ -676,7 +676,7 @@ Julia `for` loop in our model.
This `for` loop has a very specific pattern of information flow—there is a
sequence of states (represented by `x`, `y`, `vx`, and `vy`), and each state is
generated from the previous state. This is exactly the pattern that the
-[Unfold](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1)
+[Unfold](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1)
generative function combinator is designed to handle.

Below, we re-express the Julia `for` loop over the state sequence using the
6 changes: 3 additions & 3 deletions tutorials/regenerate/tutorial.md
@@ -5,7 +5,7 @@ layout: splash

# Reasoning About Regenerate

-Gen provides a primitive called [`regenerate`](https://www.gen.dev/dev/ref/gfi/#Regenerate-1) that allows users to ask for certain random choices in a trace to be re-generated from scratch. `regenerate` is the basis of one variant of the [`metropolis_hastings`](https://www.gen.dev/dev/ref/mcmc/#Gen.metropolis_hastings) operator in Gen's inference library.
+Gen provides a primitive called [`regenerate`](https://www.gen.dev/docs/dev/ref/gfi/#Regenerate-1) that allows users to ask for certain random choices in a trace to be re-generated from scratch. `regenerate` is the basis of one variant of the [`metropolis_hastings`](https://www.gen.dev/docs/dev/ref/mcmc/#Gen.metropolis_hastings) operator in Gen's inference library.

This notebook aims to help you understand the computation that `regenerate` is performing.

@@ -61,7 +61,7 @@ using Gen: regenerate, select, NoChange
(trace, weight, retdiff) = regenerate(trace, (0.3,), (NoChange(),), select(:a));
```

-Note that unlike [`update`](https://www.gen.dev/dev/ref/gfi/#Gen.update), we do not provide the new values for the random choices that we want to change. Instead, we simply pass in a [selection](https://www.gen.dev/dev/ref/selections/#Selections-1) indicating the addresses that we want to propose new values for.
+Note that unlike [`update`](https://www.gen.dev/docs/dev/ref/gfi/#Gen.update), we do not provide the new values for the random choices that we want to change. Instead, we simply pass in a [selection](https://www.gen.dev/docs/dev/ref/selections/#Selections-1) indicating the addresses that we want to propose new values for.

Note that `select(:a)` is equivalent to:
@@ -91,7 +91,7 @@ get_choices(trace)

Re-run the regenerate command until you get a trace where `a` is `false`. Note that the address `b` doesn't appear in the resulting trace. Then, run the command again until you get a trace where `a` is `true`. Note that now there is a value for `b`. This value of `b` was sampled along with the new value for `a`---`regenerate` will regenerate new values for the selected addresses, but also any new addresses that may be introduced as a consequence of stochastic control flow.

-What distribution is `regenerate` sampling the selected values from? It turns out that `regenerate` is using the [*internal proposal distribution family*](https://www.gen.dev/dev/ref/gfi/#Internal-proposal-distribution-family-1) $q(t; x, u)$, just like `generate`. Recall that for `@gen` functions, the internal proposal distribution is based on *ancestral sampling*. But whereas `generate` was given the explicit choice map of constraints ($u$) as an argument, `regenerate` constructs $u$ by starting with the previous trace $t$ and then removing any selected addresses. In other words, `regenerate` is like `generate`, but where the constraints are the choices made in the previous trace less the selected choices.
+What distribution is `regenerate` sampling the selected values from? It turns out that `regenerate` is using the [*internal proposal distribution family*](https://www.gen.dev/docs/dev/ref/gfi/#Internal-proposal-distribution-family-1) $q(t; x, u)$, just like `generate`. Recall that for `@gen` functions, the internal proposal distribution is based on *ancestral sampling*. But whereas `generate` was given the explicit choice map of constraints ($u$) as an argument, `regenerate` constructs $u$ by starting with the previous trace $t$ and then removing any selected addresses. In other words, `regenerate` is like `generate`, but where the constraints are the choices made in the previous trace less the selected choices.
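In code, the equivalence described above looks roughly like this (illustrative only; it handles flat address spaces and ignores the weight bookkeeping):

```julia
using Gen

# Mimic regenerate(trace, (0.3,), (NoChange(),), select(:a)) by calling
# generate with the previous choices, minus the selected address, as
# constraints. Flat addresses only; weights are not reproduced here.
prev_choices = get_choices(trace)
constraints = choicemap()
for (addr, value) in get_values_shallow(prev_choices)
    addr == :a || (constraints[addr] = value)
end
(equiv_trace, _) = generate(get_gen_fn(trace), (0.3,), constraints)
```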

We can make this concrete. Let us start with a deterministic trace again:

2 changes: 1 addition & 1 deletion tutorials/rj/tutorial.md
@@ -54,7 +54,7 @@ Given a dataset of `xs`, our model will randomly divide the range `(xmin, xmax)`
It does this by sampling a number of segments (`:segment_count`), then sampling a vector of _proportions_ from a Dirichlet distribution (`:fractions`). The vector is guaranteed to sum to 1: if there are, say, three segments, this vector might be `[0.3, 0.5, 0.2]`. The length of each segment is the fraction of the interval assigned to it, times the length of the entire interval, e.g. `0.2 * (xmax - xmin)`. For each segment, we generate a `y` value from a normal distribution. Finally, we sample the `y` values near the piecewise constant function described by the segments.

### Using `@dist` to define new distributions for convenience
-To sample the number of segments, we need a distribution with support only on the positive integers. We create one using the [Distributions DSL](https://www.gen.dev/dev/ref/distributions/#dist_dsl-1):
+To sample the number of segments, we need a distribution with support only on the positive integers. We create one using the [Distributions DSL](https://www.gen.dev/docs/dev/ref/distributions/#dist_dsl-1):


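The definition itself is cut off above; purely as an illustration of the DSL (not necessarily the tutorial's own choice), a shifted Poisson has support on the positive integers:

```julia
using Gen

# A hypothetical @dist definition: shifting a Poisson by one moves its
# support from {0, 1, 2, ...} to {1, 2, 3, ...}.
@dist poisson_plus_one(rate) = poisson(rate) + 1
```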
6 changes: 3 additions & 3 deletions tutorials/scaling-with-combinators-new/tutorial.md
@@ -5,11 +5,11 @@ layout: splash

# Scaling with Combinators and the Static Modeling Language

-Up until this point, we have been using [Gen's generic built-in modeling language](https://www.gen.dev/dev/ref/modeling/), which is a very flexible modeling language that is shallowly embedded in Julia. However, better performance and scaling characteristics can be obtained using specialized modeling languages or modeling constructs. This notebook introduces two built-in features of Gen:
+Up until this point, we have been using [Gen's generic built-in modeling language](https://www.gen.dev/docs/dev/ref/modeling/), which is a very flexible modeling language that is shallowly embedded in Julia. However, better performance and scaling characteristics can be obtained using specialized modeling languages or modeling constructs. This notebook introduces two built-in features of Gen:

-- A more specialized [Static Modeling Language](https://www.gen.dev/dev/ref/modeling/#Static-Modeling-Language-1) which is built-in to Gen.
+- A more specialized [Static Modeling Language](https://www.gen.dev/docs/dev/ref/modeling/#Static-Modeling-Language-1) which is built-in to Gen.

-- A class of modeling constructs called [Generative function combinators](https://www.gen.dev/dev/ref/combinators/).
+- A class of modeling constructs called [Generative function combinators](https://www.gen.dev/docs/dev/ref/combinators/).

These features provide both constant-factor speedups, as well as improvements in asymptotic orders of growth, over the generic built-in modeling language.


