
Update links to documentations in the tutorials #518

Merged 2 commits on Jan 12, 2024
2 changes: 1 addition & 1 deletion _posts/2020-09-01-bounds.md
@@ -82,7 +82,7 @@ This result seems to make sense qualitatively.
## Applying variational inference

Let's now try to apply variational inference to this problem.
- We will use Gen's support for [black box variational inference](https://www.gen.dev/dev/ref/vi/#Black-box-variational-inference-1), which is a class of algorithms introduced by Rajesh Ranganath et al. in a [2013 paper](https://arxiv.org/abs/1401.0118) that requires only the ability to evaluate the unnormalized log probability density of the model.
+ We will use Gen's support for [black box variational inference](https://www.gen.dev/docs/dev/ref/vi/#Black-box-variational-inference-1), which is a class of algorithms introduced by Rajesh Ranganath et al. in a [2013 paper](https://arxiv.org/abs/1401.0118) that requires only the ability to evaluate the unnormalized log probability density of the model.
Gen lets you apply black box variational inference using variational approximating families that are themselves defined as probabilistic programs.

The first step is to write the probabilistic program that defines the variational approximating family that we will optimize to match the posterior as closely as possible.
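The hunk above links to Gen's black box variational inference support. As a hedged, pure-Julia sketch of the score-function (REINFORCE) gradient estimator that this class of algorithms is built on (this is not Gen's `vi` API; the target density, step size, and iteration counts are illustrative choices), here is a normal approximation fit to an unnormalized log density using only `logp` evaluations:

```julia
using Random
Random.seed!(1)

# Unnormalized target log density: a Normal(2.0, 0.5), known only up to a constant.
logp(x) = -0.5 * ((x - 2.0) / 0.5)^2

# One ascent step on the ELBO for the family Normal(mu, exp(log_sigma)), using
# the score-function estimator with a mean baseline to reduce variance.
function elbo_step!(params, n_samples, lr)
    mu, log_sigma = params
    sigma = exp(log_sigma)
    zs = randn(n_samples)
    xs = mu .+ sigma .* zs
    # f = logp(x) - logq(x), where logq = -0.5 z^2 - log_sigma (up to a constant)
    fs = logp.(xs) .+ 0.5 .* zs .^ 2 .+ log_sigma
    fb = fs .- sum(fs) / n_samples                 # baseline-subtracted
    g_mu = sum((zs ./ sigma) .* fb) / n_samples    # d(log q)/d(mu) * f
    g_ls = sum((zs .^ 2 .- 1) .* fb) / n_samples   # d(log q)/d(log_sigma) * f
    params[1] += lr * g_mu
    params[2] += lr * g_ls
    params
end

params = [0.0, 0.0]
for _ in 1:20000
    elbo_step!(params, 20, 0.002)
end
```

After these iterations the parameters should approach `mu ≈ 2` and `exp(log_sigma) ≈ 0.5`; black box VI applies the same estimator with traces of a probabilistic program in place of the scalar `x`.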
2 changes: 1 addition & 1 deletion ecosystem.md
@@ -54,4 +54,4 @@ Probability distributions and involutive MCMC kernels on orientations and rotati
Wrapper for employing the [Redner](https://github.com/BachiLi/redner) differentiable renderer in Gen generative models.

#### [GenTraceKernelDSL](https://github.com/probcomp/GenTraceKernelDSL.jl)
- An alternative interface to defining [trace translators](https://www.gen.dev/dev/ref/trace_translators/).
+ An alternative interface to defining [trace translators](https://www.gen.dev/docs/dev/ref/trace_translators/).
14 changes: 7 additions & 7 deletions tutorials/bottom-up-intro/tutorial.md
@@ -249,7 +249,7 @@ plot(map(p -> query(p, 14), [0.1, 0.5, 0.9])...)

## 2. Tracing the values of random choices in generative functions

- The ability to *trace* the values of random choices in a probabilistic program (i.e. record the value of each choice in a trace data structure) is one of the basic features of Gen's built-in modeling language. To write a function in this language we use the `@gen` macro provided by Gen. Note that the built-in modeling language is just one way of defining a [generative function](https://probcomp.github.io/Gen/dev/ref/distributions/).
+ The ability to *trace* the values of random choices in a probabilistic program (i.e. record the value of each choice in a trace data structure) is one of the basic features of Gen's built-in modeling language. To write a function in this language we use the `@gen` macro provided by Gen. Note that the built-in modeling language is just one way of defining a [generative function](https://www.gen.dev/docs/stable/ref/distributions/).

Below, we write a `@gen function` version of the function `f` defined above, this time using Gen's tracing instead of our own:

@@ -282,7 +282,7 @@ gen_f(0.3)



- To run a `@gen` function and get a trace of the execution, we use the [`simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method:
+ To run a `@gen` function and get a trace of the execution, we use the [`simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method:


```julia
@@ -441,7 +441,7 @@ end
expected: 0.5760000000000001, actual: 0.5754


- We can also get the log probability that an individual trace would be generated by the function ($\log p(t; x)$), using the [`get_score`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.get_score) method.
+ We can also get the log probability that an individual trace would be generated by the function ($\log p(t; x)$), using the [`get_score`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.get_score) method.

Let's generate a trace below and get its log probability with `get_score`.

@@ -476,13 +476,13 @@ So far, we have run generative functions in two ways:
gen_f(0.3)
```

- 2. Using the [`simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method:
+ 2. Using the [`simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method:

```julia
trace = simulate(gen_f, (0.3,))
```

- We can also generate a trace that satisfies a set of constraints on the values of random choices using the [`generate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.generate) method. Suppose that we want a trace where `:a` is always `true` and `:c` is always `false`. We first construct a choice map containing these constraints:
+ We can also generate a trace that satisfies a set of constraints on the values of random choices using the [`generate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.generate) method. Suppose that we want a trace where `:a` is always `true` and `:c` is always `false`. We first construct a choice map containing these constraints:


```julia
@@ -617,7 +617,7 @@ function my_importance_sampler(gen_fn, args, constraints, num_traces)
end;
```

- A more efficient and numerically robust implementation of importance resampling is provided in Gen's inference library (see [`importance_resampling`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.importance_resampling)).
+ A more efficient and numerically robust implementation of importance resampling is provided in Gen's inference library (see [`importance_resampling`](https://www.gen.dev/docs/stable/ref/inference/#Gen.importance_resampling)).

Suppose our goal is to sample `:a` and `:b` from the conditional distribution given that we have observed `:c` is `false`. That is, we want to sample choice map $t$ with probability $0$ if $t(c) = \mbox{false}$ and otherwise probability:

@@ -720,7 +720,7 @@ get_choices(trace)



- Now, we use the [`update`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.update) method, to change the value of `:c` from `true` to `false`:
+ Now, we use the [`update`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.update) method, to change the value of `:c` from `true` to `false`:


```julia
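The bottom-up tutorial above walks through importance resampling over traces. A hedged pure-Julia sketch of the idea (this is not Gen's `importance_resampling`, and the toy model below with choices `:a`, `:b` and an observed `:c` is hypothetical, not the tutorial's exact `gen_f`): sample complete traces from the prior, weight each by the likelihood of the observation, then resample one trace in proportion to the weights.

```julia
using Random
Random.seed!(0)

flip(p) = rand() < p
prob_c_true(a, b) = (a && b) ? 0.9 : 0.1   # p(c = true | a, b), illustrative

function my_importance_resampler(num_traces; c_obs = false)
    traces = Vector{Tuple{Bool,Bool}}(undef, num_traces)
    log_weights = Vector{Float64}(undef, num_traces)
    for i in 1:num_traces
        a, b = flip(0.3), flip(0.6)        # sample latent choices from the prior
        p = prob_c_true(a, b)
        traces[i] = (a, b)
        log_weights[i] = log(c_obs ? p : 1 - p)   # log p(c_obs | a, b)
    end
    # normalize the weights and draw one trace proportionally to them
    w = exp.(log_weights .- maximum(log_weights))
    w ./= sum(w)
    r, acc = rand(), 0.0
    for i in 1:num_traces
        acc += w[i]
        r <= acc && return traces[i]
    end
    traces[end]
end

# Estimate p(a = true | c = false); the exact posterior here is 0.126/0.756 ≈ 0.17.
samples = [my_importance_resampler(100) for _ in 1:1000]
frac_a = count(t -> t[1], samples) / length(samples)
```

The log-weight arithmetic (subtracting the maximum before exponentiating) is the numerical-robustness trick the tutorial alludes to.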
18 changes: 9 additions & 9 deletions tutorials/data-driven-proposals/tutorial.md
@@ -513,7 +513,7 @@ Run inference using Gen's built-in importance resampling implementation. Use

To see how to use the built-in importance resampling function, run
```?Gen.importance_resampling``` or check out the
- [documentation](https://www.gen.dev/dev/ref/importance/#Gen.importance_resampling).
+ [documentation](https://www.gen.dev/docs/dev/ref/importance/#Gen.importance_resampling).

We have provided some starter code.

@@ -709,7 +709,7 @@ visualize_inference(measurements, scene_2doors, start, computation_amt=50, sampl
## 2. Writing a data-driven proposal as a generative function <a name="custom-proposal"></a>

The inference algorithm above used a variant of
- [`Gen.importance_resampling`](https://probcomp.github.io/Gen/dev/ref/importance/#Gen.importance_resampling)
+ [`Gen.importance_resampling`](https://www.gen.dev/docs/stable/ref/importance/#Gen.importance_resampling)
that does not take a custom proposal distribution. It uses the default
proposal distribution associated with the generative model. For generative
functions defined using the built-in modeling DSL, the default proposal
@@ -780,7 +780,7 @@ num_y_bins = 5;
```

We will propose the x-coordinate of the destination from a
- [piecewise_uniform](https://www.gen.dev/dev/ref/distributions/#Gen.piecewise_uniform)
+ [piecewise_uniform](https://www.gen.dev/docs/dev/ref/distributions/#Gen.piecewise_uniform)
distribution, where we set higher probability for certain bins based on the
heuristic described above and use a uniform continuous distribution for the
coordinate within a bin. The `compute_bin_probs` function below computes the
@@ -861,7 +861,7 @@ end;
```

We can propose values of random choices from the proposal function using
- [`Gen.propose`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.propose).
+ [`Gen.propose`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.propose).
This method returns the choices, as well as some other information, which we
won't need for our purposes. For now, you can think of `Gen.propose` as
similar to `Gen.generate` except that it does not produce a full execution
@@ -937,7 +937,7 @@ Alone, this is just a heuristic. But we can use it as a proposal for importance

We now use our data-driven proposal within an inference algorithm. There is a
second variant of
- [`Gen.importance_resampling`](https://probcomp.github.io/Gen/dev/ref/importance/#Gen.importance_resampling)
+ [`Gen.importance_resampling`](https://www.gen.dev/docs/stable/ref/importance/#Gen.importance_resampling)
that accepts a generative function representing a custom proposal. This
proposal generative function makes traced random choices at the addresses of
a subset of the unobserved random choices made by the generative model. In
@@ -956,7 +956,7 @@ proposal accepts arguments `(measurements, scene)`.

This time, use only 5 importance samples (`amt_computation`). You can run
`?Gen.importance_resampling` or check out the
- [documentation](https://probcomp.github.io/Gen/dev/ref/inference/#Importance-Sampling-1)
+ [documentation](https://www.gen.dev/docs/stable/ref/inference/#Importance-Sampling-1)
to understand how to supply the arguments to invoke this second version of
importance resampling.

@@ -1075,7 +1075,7 @@ end;

Our choice of the `score_high` value of 5 was somewhat arbitrary. To use a
more informed value, we can make `score_high` into a [*trainable
- parameter*](https://www.gen.dev/dev/ref/gfi/#Trainable-parameters-1)
+ parameter*](https://www.gen.dev/docs/dev/ref/gfi/#Trainable-parameters-1)
of the generative function. Below, we write a new version of the proposal
function that makes `score_high` trainable. However, the optimization
algorithms we will use for training work best with *unconstrained* parameters
@@ -1184,7 +1184,7 @@ end;
Next, we choose the type of optimization algorithm we will use for training. Gen
supports a set of gradient-based optimization algorithms (see [Optimizing
Trainable
- Parameters](https://www.gen.dev/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).
+ Parameters](https://www.gen.dev/docs/dev/ref/parameter_optimization/#Optimizing-Trainable-Parameters-1)).
Here we will use gradient descent with a fixed step size of 0.001.


@@ -1193,7 +1193,7 @@ update = Gen.ParamUpdate(Gen.FixedStepGradientDescent(0.001), custom_dest_propos
```

Finally, we use the
- [`Gen.train!`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.train!)
+ [`Gen.train!`](https://www.gen.dev/docs/stable/ref/inference/#Gen.train!)
method to actually do the training.

For each epoch, `Gen.train!` makes `epoch_size` calls to the data-generator
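The data-driven-proposals tutorial above builds a piecewise-uniform proposal from heuristic bin scores. A hedged pure-Julia sketch of that construction (the `favored` predicate and `score_high` value are illustrative stand-ins for the tutorial's distance heuristic, and this is not Gen's `piecewise_uniform` distribution object):

```julia
# Bins whose centers satisfy the heuristic get score_high, others get 1.0;
# normalizing the scores yields the bin-probability vector for the proposal.
function compute_bin_probs(num_bins, score_high, favored::Function)
    scores = [favored(i) ? score_high : 1.0 for i in 1:num_bins]
    scores ./ sum(scores)
end

# Log density of a piecewise-uniform distribution on [lo, hi] with
# equal-width bins: the bin probability spread uniformly over the bin width.
function piecewise_uniform_logpdf(x, lo, hi, probs)
    n = length(probs)
    width = (hi - lo) / n
    (lo <= x <= hi) || return -Inf
    bin = min(n, 1 + floor(Int, (x - lo) / width))
    log(probs[bin]) - log(width)
end

probs = compute_bin_probs(5, 5.0, i -> i <= 2)   # favor the two leftmost bins
```

Here `probs` comes out as `[5, 5, 1, 1, 1] / 13`: the favored bins each carry five times the mass of the others, which is exactly the effect of the tutorial's `score_high` parameter.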
12 changes: 6 additions & 6 deletions tutorials/intro-to-modeling/tutorial.md
@@ -148,7 +148,7 @@ Probabilistic models are represented in Gen as *generative functions*.
Generative functions are used to represent a variety of different types of
probabilistic computations including generative models, inference models,
custom proposal distributions, and variational approximations (see the [Gen
- documentation](https://probcomp.github.io/Gen/dev/ref/gfi/) or the
+ documentation](https://www.gen.dev/docs/stable/ref/gfi/) or the
[paper](https://dl.acm.org/doi/10.1145/3314221.3314642)). In this
tutorial,
we focus on implementing _generative models_. A generative model represents
@@ -157,7 +157,7 @@ our data and our problem domain.


The simplest way to construct a generative function is by using the [built-in
- modeling DSL](https://probcomp.github.io/Gen/dev/ref/modeling/). Generative
+ modeling DSL](https://www.gen.dev/docs/stable/ref/modeling/). Generative
functions written in the built-in modeling DSL are based on Julia function
definition syntax, but are prefixed with the `@gen` macro:

@@ -312,7 +312,7 @@ times, but each time, the random choice it makes is given a distinct address.
Although the random choices are not included in the return value, they *are*
included in the *execution trace* of the generative function. We can run the
generative function and obtain its trace using the [`
- simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method
+ simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method
from the Gen API:


@@ -523,10 +523,10 @@ amplitude, and then generates y-coordinates from a given vector of
x-coordinates by adding noise to the value of the wave at each x-coordinate.
Use a `gamma(1, 1)` prior distribution for the period, and a `gamma(1, 1)`
prior distribution on the amplitude (see
- [`Gen.gamma`](https://probcomp.github.io/Gen/dev/ref/distributions/#Gen.gamma)).
+ [`Gen.gamma`](https://www.gen.dev/docs/stable/ref/distributions/#Gen.gamma)).
Sampling from a gamma distribution ensures that we get positive real values.
Use a uniform distribution between 0 and $2\pi$ for the phase (see
- [`Gen.uniform`](https://probcomp.github.io/Gen/dev/ref/distributions/#Gen.uniform)).
+ [`Gen.uniform`](https://www.gen.dev/docs/stable/ref/distributions/#Gen.uniform)).

The sine wave should implement:

@@ -771,7 +771,7 @@ Write an inference program that generates traces of `sine_model` that explain th
What if we'd want to predict `ys` given `xs`?

Using the API method
- [`generate`](https://www.gen.dev/dev/ref/gfi/#Gen.generate), we
+ [`generate`](https://www.gen.dev/docs/dev/ref/gfi/#Gen.generate), we
can generate a trace of a generative function in which the values of certain
random choices are constrained to given values. The constraints are a choice
map that maps the addresses of the constrained random choices to their
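The intro-to-modeling exercise above asks for a sine model with `gamma(1, 1)` priors and a uniform phase. A hedged pure-Julia sketch of the generative process (not Gen's traced `@gen` version; it relies on the fact that a gamma distribution with shape 1 and scale 1 is the Exponential(1) distribution, so `-log(rand())` samples it without extra packages, and the noise level is an illustrative choice):

```julia
using Random
Random.seed!(42)

sample_gamma11() = -log(rand())   # gamma(shape = 1, scale = 1) = Exponential(1)

function sine_model(xs; noise = 0.1)
    period = sample_gamma11()          # positive, as the exercise requires
    amplitude = sample_gamma11()       # positive
    phase = 2pi * rand()               # uniform on [0, 2pi]
    ys = [amplitude * sin(2pi * x / period + phase) + noise * randn() for x in xs]
    (period = period, amplitude = amplitude, phase = phase, ys = ys)
end

t = sine_model(collect(0:0.1:1))
```

In the Gen version, each of `period`, `amplitude`, `phase`, and the `ys` would be traced random choices with their own addresses, which is what lets `generate` constrain them later.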
2 changes: 1 addition & 1 deletion tutorials/iterative-inference/tutorial.md
@@ -1097,7 +1097,7 @@ For example, let's say we wanted to take a trace and assign each point's
`is_outlier` score to the most likely possibility. We can do this by
iterating over both possible traces, scoring them, and choosing the one with
the higher score. We do this using Gen's
- [`update`](https://www.gen.dev/dev/ref/gfi/#Update-1) function,
+ [`update`](https://www.gen.dev/docs/dev/ref/gfi/#Update-1) function,
which allows us to manually update a trace to satisfy some constraints:


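The iterative-inference hunk above describes scoring both `is_outlier` settings and keeping the higher-scoring one. A hedged pure-Julia sketch of that maximization for a single point (the prior outlier probability and the two noise levels are illustrative, not the tutorial's exact values, and in Gen the two scores would come from `update` followed by `get_score`):

```julia
# Log density of Normal(mu, sigma) at x.
normal_logpdf(x, mu, sigma) = -0.5 * ((x - mu) / sigma)^2 - log(sigma) - 0.5 * log(2pi)

# Score the point under both is_outlier settings and return the more likely one.
function map_outlier(y, mu; inlier_noise = 0.5, outlier_noise = 10.0, p_outlier = 0.1)
    score_in  = log(1 - p_outlier) + normal_logpdf(y, mu, inlier_noise)
    score_out = log(p_outlier)     + normal_logpdf(y, mu, outlier_noise)
    score_out > score_in
end

flags = [map_outlier(y, 0.0) for y in [0.1, -0.3, 25.0]]
```

Points near the predicted value keep `is_outlier = false`; the point at 25.0, far outside the inlier noise scale, flips to `true`.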
12 changes: 6 additions & 6 deletions tutorials/particle-filtering/tutorial.md
@@ -45,12 +45,12 @@ We show how Gen's support for SMC integrates with its support for MCMC, enabling
"bearings only tracking" problem described in [4].

This notebook will also introduce you to the
- [`Unfold`](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1) combinator,
+ [`Unfold`](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1) combinator,
which can be used to improve performance of SMC.
`Unfold` is just one example of the levers that Gen provides for
improving performance; once you understand it, you can check
Gen's documentation to see how similar principles apply to the
- [`Map`](https://www.gen.dev/dev/ref/combinators/#Map-combinator-1) combinator
+ [`Map`](https://www.gen.dev/docs/dev/ref/combinators/#Map-combinator-1) combinator
and to the static DSL. (These features are also covered in the previous tutorial,
[Scaling with Combinators and the Static Modeling Language](../scaling-with-combinators-new/tutorial).)

Expand Down Expand Up @@ -238,7 +238,7 @@ sample of `num_samples` traces from the weighted collection that the particle
filter produces.

Gen provides methods for initializing and updating the state of a particle
- filter, documented in [Particle Filtering](https://www.gen.dev/dev/ref/pf/).
+ filter, documented in [Particle Filtering](https://www.gen.dev/docs/dev/ref/pf/).

- `Gen.initialize_particle_filter`

@@ -300,7 +300,7 @@ and then we introduce one additional bearing measurement by calling
- The new arguments to the generative function for this step. In our case,
this is the number of measurements beyond the first measurement.

- - The [argdiff](https://www.gen.dev/dev/ref/gfi/#Argdiffs-1)
+ - The [argdiff](https://www.gen.dev/docs/dev/ref/gfi/#Argdiffs-1)
value, which provides detailed information about the change to the
arguments between the previous step and this step. We will revisit this
value later. For now, we indicate that we do not know how the `T::Int`
@@ -645,7 +645,7 @@ body whenever performing a trace update. This allows the built-in modeling
DSL to be very flexible and to have a simple implementation, at the cost of
performance. There are several ways of improving performance after one has a
prototype written in the built-in modeling DSL. One of these is [Generative
- Function Combinators](https://www.gen.dev/dev/ref/combinators/), which make
+ Function Combinators](https://www.gen.dev/docs/dev/ref/combinators/), which make
the flow of information through the generative process more explicit to Gen,
and enable asymptotically more efficient inference programs.

@@ -676,7 +676,7 @@ Julia `for` loop in our model.
This `for` loop has a very specific pattern of information flow&mdash;there is a
sequence of states (represented by `x`, `y`, `vx`, and `vy`), and each state is
generated from the previous state. This is exactly the pattern that the
- [Unfold](https://www.gen.dev/dev/ref/combinators/#Unfold-combinator-1)
+ [Unfold](https://www.gen.dev/docs/dev/ref/combinators/#Unfold-combinator-1)
generative function combinator is designed to handle.

Below, we re-express the Julia `for` loop over the state sequence using the
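The particle-filtering tutorial above covers bearings-only tracking with a state sequence `(x, y, vx, vy)`. A hedged pure-Julia sketch of one SMC step (not Gen's particle-filter API; the dynamics noise, bearing noise, observer at the origin, and resampling threshold are all illustrative choices): propagate each particle under constant-velocity dynamics, reweight by the likelihood of a new bearing measurement, and resample when the effective sample size degenerates.

```julia
using Random
Random.seed!(7)

struct Particle
    x::Float64; y::Float64; vx::Float64; vy::Float64
end

# Constant-velocity dynamics with small Gaussian noise.
propagate(p, dt) = Particle(p.x + dt * p.vx + 0.01 * randn(),
                            p.y + dt * p.vy + 0.01 * randn(),
                            p.vx + 0.01 * randn(), p.vy + 0.01 * randn())

# Log likelihood of a bearing z measured from the origin (up to a constant).
bearing_loglik(p, z; noise = 0.05) = -0.5 * ((atan(p.y, p.x) - z) / noise)^2

function resample_index(w)
    r, acc = rand(), 0.0
    for (i, wi) in enumerate(w)
        acc += wi
        r <= acc && return i
    end
    length(w)
end

function pf_step(particles, log_w, z, dt)
    particles = [propagate(p, dt) for p in particles]
    log_w = [lw + bearing_loglik(p, z) for (lw, p) in zip(log_w, particles)]
    w = exp.(log_w .- maximum(log_w)); w ./= sum(w)
    ess = 1 / sum(w .^ 2)
    if ess < length(particles) / 2            # resample only when degenerate
        particles = [particles[resample_index(w)] for _ in particles]
        log_w = zeros(length(particles))
    end
    particles, log_w
end

ps = [Particle(1.0 + 0.1 * randn(), 0.1 * randn(), 0.1, 0.05) for _ in 1:100]
ps, lw = pf_step(ps, zeros(100), atan(0.06, 1.01), 0.1)
```

This is the loop that `Gen.particle_filter_step!` runs for you, with the model's `update` supplying the incremental weight; `Unfold` makes that update asymptotically cheap by telling Gen that only the newest state depends on the new measurement.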