Update documentation
dustinvtran committed Feb 2, 2018
1 parent 2f63603 commit cd950bd
Showing 191 changed files with 27,839 additions and 7,961 deletions.
2 changes: 1 addition & 1 deletion api/criticism.html
@@ -108,7 +108,7 @@ <h1 id="api-and-documentation">API and Documentation</h1>
</div>
</div>
<h3 id="criticism">Criticism</h3>
-<p>We can never validate whether a model is true. In practice, “all models are wrong” <span class="citation">(Box, 1976)</span>. However, we can try to uncover where the model goes wrong. Model criticism helps justify the model as an approximation or point to good directions for revising the model. For background, see the criticism <a href="/tutorials/criticism">tutorial</a>.</p>
+<p>We can never validate whether a model is true. In practice, “all models are wrong” <span class="citation" data-cites="box1976science">(Box, 1976)</span>. However, we can try to uncover where the model goes wrong. Model criticism helps justify the model as an approximation or point to good directions for revising the model. For background, see the criticism <a href="/tutorials/criticism">tutorial</a>.</p>
<p>Edward explores model criticism using</p>
<ul>
<li>point evaluations, such as mean squared error or classification accuracy;</li>
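<p>For instance, point evaluations and posterior predictive checks are available as <code>ed.evaluate</code> and <code>ed.ppc</code>. A minimal sketch, assuming <code>x_post</code> is a posterior predictive copy of the observed variable and <code>x_train</code> is the observed data:</p>
<pre class="python" data-language="Python"><code>import edward as ed
import tensorflow as tf

# Point evaluation: compare posterior predictive draws to held-out data.
ed.evaluate('mean_squared_error', data={x_post: x_train})

# Posterior predictive check with the mean as the test statistic.
ed.ppc(lambda xs, zs: tf.reduce_mean(xs[x_post]), data={x_post: x_train})</code></pre>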
8 changes: 4 additions & 4 deletions api/data.html
@@ -118,18 +118,18 @@ <h3 id="data">Data</h3>
<p>Data defines a set of observations. There are three ways to read data in Edward. They follow the <a href="https://www.tensorflow.org/programmers_guide/reading_data">three ways to read data in TensorFlow</a>.</p>
<p><strong>Preloaded data.</strong> A constant or variable in the TensorFlow graph holds all the data. This setting is the fastest to work with and is recommended if the data fits in memory.</p>
<p>Represent the data as NumPy arrays or TensorFlow tensors.</p>
<pre class="python" language="Python"><code>x_data = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])
<pre class="python" data-language="Python"><code>x_data = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])
x_data = tf.constant([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])</code></pre>
<p>During inference, we store them in TensorFlow variables internally to prevent copying data more than once in memory. As an example, see the <a href="http://nbviewer.jupyter.org/github/blei-lab/edward/blob/master/notebooks/getting_started.ipynb">getting started</a> notebook.</p>
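<p>As a concrete sketch of preloaded data end to end, here is a toy Beta-Bernoulli model (the model and variational approximation are illustrative):</p>
<pre class="python" data-language="Python"><code>import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Bernoulli, Beta

# Preloaded data: a NumPy array held in memory.
x_data = np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1])

# Toy model: a Beta prior on the probability of ten Bernoulli draws.
p = Beta(1.0, 1.0)
x = Bernoulli(probs=tf.ones(10) * p)

# Variational approximation; softplus keeps the Beta parameters positive.
qp = Beta(tf.nn.softplus(tf.Variable(0.5)), tf.nn.softplus(tf.Variable(0.5)))

# The preloaded array is bound to the observed variable via the data argument.
inference = ed.KLqp({p: qp}, data={x: x_data})
inference.run(n_iter=1000)</code></pre>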
<p><strong>Feeding.</strong> Manual code provides the data when running each step of inference. This setting provides the most fine-grained control, which is useful for experimentation.</p>
<p>Represent the data as <a href="https://www.tensorflow.org/programmers_guide/reading_data#feeding">TensorFlow placeholders</a>, which are nodes in the graph that are fed at runtime.</p>
<pre class="python" language="Python"><code>x_data = tf.placeholder(tf.float32, [100, 25]) # placeholder of shape (100, 25)</code></pre>
<pre class="python" data-language="Python"><code>x_data = tf.placeholder(tf.float32, [100, 25]) # placeholder of shape (100, 25)</code></pre>
<p>During inference, the user must manually feed the placeholders. At each step, call <code>inference.update()</code>, passing in a <code>feed_dict</code> argument that binds placeholders to realized values. As an example, see the <a href="https://github.com/blei-lab/edward/blob/master/examples/vae.py">variational auto-encoder</a> script. If the values do not change over inference updates, one can also bind the placeholder to values within the <code>data</code> argument when first constructing inference.</p>
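<p>A sketch of the manual loop, assuming <code>inference</code> has already been constructed with <code>x_data</code> bound to the model and a hypothetical <code>next_batch()</code> helper supplying arrays of shape (100, 25):</p>
<pre class="python" data-language="Python"><code>import edward as ed
import tensorflow as tf

sess = ed.get_session()  # global session used by Edward
inference.initialize(n_iter=1000)
tf.global_variables_initializer().run()

for _ in range(inference.n_iter):
  x_batch = next_batch()  # hypothetical helper returning a (100, 25) array
  info_dict = inference.update(feed_dict={x_data: x_batch})
  inference.print_progress(info_dict)

inference.finalize()</code></pre>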
<p><strong>Reading from files.</strong> An input pipeline reads the data from files at the beginning of a TensorFlow graph. This setting is recommended if the data does not fit in memory.</p>
<pre class="python" language="Python"><code>filename_queue = tf.train.string_input_producer(...)
<pre class="python" data-language="Python"><code>filename_queue = tf.train.string_input_producer(...)
reader = tf.SomeReader()
...</code></pre>
-<p>Represent the data as TensorFlow tensors, where the tensors are the output of data readers. During inference, each update will be automatically evaluated over new batch tensors represented through the data readers. As an example, see the <a href="https://github.com/blei-lab/edward/blob/master/tests/test-inferences/test_inference_data.py">data unit test</a>.</p>
+<p>Represent the data as TensorFlow tensors, where the tensors are the output of data readers. During inference, each update will be automatically evaluated over new batch tensors represented through the data readers. As an example, see the <a href="https://github.com/blei-lab/edward/blob/master/tests/inferences/test_inference_data.py">data unit test</a>.</p>
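<p>A sketch of such a pipeline, assuming a hypothetical CSV file of 25 comma-separated floats per row (the file name and shapes are illustrative):</p>
<pre class="python" data-language="Python"><code>import tensorflow as tf

# Queue of input files; the reader emits one CSV line at a time.
filename_queue = tf.train.string_input_producer(["data/train.csv"])
reader = tf.TextLineReader()
_, line = reader.read(filename_queue)

# Parse 25 float columns and assemble mini-batches of 100 rows.
record = tf.decode_csv(line, record_defaults=[[0.0]] * 25)
x_batch = tf.train.batch([tf.stack(record)], batch_size=100)

# Pass the batch tensor as data during inference, e.g.
# inference = ed.KLqp(..., data={x: x_batch}); queue runners must be
# started (tf.train.start_queue_runners()) before updates run.</code></pre>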
</div>
</div>
<div class="row" style="padding-bottom: 25%"> </div>
23 changes: 13 additions & 10 deletions api/ed.html
@@ -106,8 +106,10 @@ <h1><a href="/">Edward</a></h1>
</div>
<div class="nine columns">

<div itemscope="" itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="ed" /><meta itemprop="property" content="VERSION"/><meta itemprop="property" content="__version__"/>
<div itemscope itemtype="http://developers.google.com/ReferenceObject">
<meta itemprop="name" content="ed" />
<meta itemprop="property" content="VERSION"/>
<meta itemprop="property" content="__version__"/>
</div>
<h1 id="module-ed">Module: ed</h1>
<p>Defined in <a href="https://github.com/blei-lab/edward/tree/master/edward/__init__.py"><code>edward/__init__.py</code></a>.</p>
@@ -117,32 +117,32 @@ <h2 id="modules">Modules</h2>
<p><a href="./ed/models"><code>models</code></a> module</p>
<p><a href="./ed/util"><code>util</code></a> module</p>
<h2 id="classes">Classes</h2>
<p><a href="./ed/BiGANInference"><code>class BiGANInference</code></a>: Adversarially Learned Inference <span class="citation">(Dumoulin et al., 2017)</span> or</p>
<p><a href="./ed/BiGANInference"><code>class BiGANInference</code></a>: Adversarially Learned Inference <span class="citation" data-cites="dumuolin2017adversarially">(Dumoulin et al., 2017)</span> or</p>
<p><a href="./ed/GANInference"><code>class GANInference</code></a>: Parameter estimation with GAN-style training</p>
<p><a href="./ed/Gibbs"><code>class Gibbs</code></a>: Gibbs sampling <span class="citation">(S. Geman &amp; Geman, 1984)</span>.</p>
<p><a href="./ed/Gibbs"><code>class Gibbs</code></a>: Gibbs sampling <span class="citation" data-cites="geman1984stochastic">(Geman &amp; Geman, 1984)</span>.</p>
<p><a href="./ed/HMC"><code>class HMC</code></a>: Hamiltonian Monte Carlo, also known as hybrid Monte Carlo</p>
<p><a href="./ed/ImplicitKLqp"><code>class ImplicitKLqp</code></a>: Variational inference with implicit probabilistic models</p>
<p><a href="./ed/Inference"><code>class Inference</code></a>: Abstract base class for inference. All inference algorithms in</p>
<p><a href="./ed/KLpq"><code>class KLpq</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/KLqp"><code>class KLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/Laplace"><code>class Laplace</code></a>: Laplace approximation <span class="citation">(Laplace, 1986)</span>.</p>
<p><a href="./ed/Laplace"><code>class Laplace</code></a>: Laplace approximation <span class="citation" data-cites="laplace1986memoir">(Laplace, 1986)</span>.</p>
<p><a href="./ed/MAP"><code>class MAP</code></a>: Maximum a posteriori.</p>
<p><a href="./ed/MetropolisHastings"><code>class MetropolisHastings</code></a>: Metropolis-Hastings <span class="citation">(Hastings, 1970; Metropolis, Rosenbluth, Rosenbluth, Teller, &amp; Teller, 1953)</span>.</p>
<p><a href="./ed/MetropolisHastings"><code>class MetropolisHastings</code></a>: Metropolis-Hastings <span class="citation" data-cites="metropolis1953equation hastings1970monte">(Hastings, 1970; Metropolis, Rosenbluth, Rosenbluth, Teller, &amp; Teller, 1953)</span>.</p>
<p><a href="./ed/MonteCarlo"><code>class MonteCarlo</code></a>: Abstract base class for Monte Carlo. Specific Monte Carlo methods</p>
<p><a href="./ed/Progbar"><code>class Progbar</code></a></p>
<p><a href="./ed/RandomVariable"><code>class RandomVariable</code></a>: Base class for random variables.</p>
<p><a href="./ed/ReparameterizationEntropyKLqp"><code>class ReparameterizationEntropyKLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/ReparameterizationKLKLqp"><code>class ReparameterizationKLKLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/ReparameterizationKLqp"><code>class ReparameterizationKLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/SGHMC"><code>class SGHMC</code></a>: Stochastic gradient Hamiltonian Monte Carlo <span class="citation">(Chen, Fox, &amp; Guestrin, 2014)</span>.</p>
<p><a href="./ed/SGLD"><code>class SGLD</code></a>: Stochastic gradient Langevin dynamics <span class="citation">(Welling &amp; Teh, 2011)</span>.</p>
<p><a href="./ed/SGHMC"><code>class SGHMC</code></a>: Stochastic gradient Hamiltonian Monte Carlo <span class="citation" data-cites="chen2014stochastic">(Chen, Fox, &amp; Guestrin, 2014)</span>.</p>
<p><a href="./ed/SGLD"><code>class SGLD</code></a>: Stochastic gradient Langevin dynamics <span class="citation" data-cites="welling2011bayesian">(Welling &amp; Teh, 2011)</span>.</p>
<p><a href="./ed/ScoreEntropyKLqp"><code>class ScoreEntropyKLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/ScoreKLKLqp"><code>class ScoreKLKLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/ScoreKLqp"><code>class ScoreKLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/ScoreRBKLqp"><code>class ScoreRBKLqp</code></a>: Variational inference with the KL divergence</p>
<p><a href="./ed/VariationalInference"><code>class VariationalInference</code></a>: Abstract base class for variational inference. Specific</p>
<p><a href="./ed/WGANInference"><code>class WGANInference</code></a>: Parameter estimation with GAN-style training</p>
<p><a href="./ed/WakeSleep"><code>class WakeSleep</code></a>: Wake-Sleep algorithm <span class="citation">(Hinton, Dayan, Frey, &amp; Neal, 1995)</span>.</p>
<p><a href="./ed/WakeSleep"><code>class WakeSleep</code></a>: Wake-Sleep algorithm <span class="citation" data-cites="hinton1995wake">(Hinton, Dayan, Frey, &amp; Neal, 1995)</span>.</p>
<h2 id="functions">Functions</h2>
<p><a href="./ed/check_data"><code>check_data(...)</code></a>: Check that the data dictionary passed during inference and</p>
<p><a href="./ed/check_latent_vars"><code>check_latent_vars(...)</code></a>: Check that the latent variable dictionary passed during inference and</p>
@@ -159,6 +161,7 @@ <h2 id="functions">Functions</h2>
<p><a href="./ed/get_session"><code>get_session(...)</code></a>: Get the globally defined TensorFlow session.</p>
<p><a href="./ed/get_siblings"><code>get_siblings(...)</code></a>: Get sibling random variables of input.</p>
<p><a href="./ed/get_variables"><code>get_variables(...)</code></a>: Get parent TensorFlow variables of input.</p>
<p><a href="./ed/is_independent"><code>is_independent(...)</code></a>: Assess whether a is independent of b given the random variables in</p>
<p><a href="./ed/ppc"><code>ppc(...)</code></a>: Posterior predictive check</p>
<p><a href="./ed/ppc_density_plot"><code>ppc_density_plot(...)</code></a>: Create 1D kernel density plot comparing data to samples from posterior.</p>
<p><a href="./ed/ppc_stat_hist_plot"><code>ppc_stat_hist_plot(...)</code></a>: Create histogram plot comparing data to samples from posterior.</p>
@@ -184,7 +187,7 @@ <h2 id="other-members">Other Members</h2>
<p>Hastings, W. K. (1970). Monte Carlo sampling methods using Markov chains and their applications. <em>Biometrika</em>, <em>57</em>(1), 97–109.</p>
</div>
<div id="ref-hinton1995wake">
-<p>Hinton, G. E., Dayan, P., Frey, B. J., &amp; Neal, R. M. (1995). The wake-sleep algorithm for unsupervised neural networks. <em>Science</em>.</p>
+<p>Hinton, G. E., Dayan, P., Frey, B. J., &amp; Neal, R. M. (1995). The &quot;wake-sleep&quot; algorithm for unsupervised neural networks. <em>Science</em>.</p>
</div>
<div id="ref-laplace1986memoir">
<p>Laplace, P. S. (1986). Memoir on the probability of the causes of events. <em>Statistical Science</em>, <em>1</em>(3), 364–378.</p>