Fork of https://github.com/cambridge-mlg/convcnp, trying it out on smart meter data as a direct comparison to the ANP, ANP-RNN, NP, and LSTM models in https://github.com/3springs/attentive-neural-processes.
This repository contains code for the 1-dimensional experiments from Convolutional Conditional Neural Processes.
Requirements:

- Python 3.6 or higher.
- `gcc` and `gfortran`: On OS X, these are both installed with `brew install gcc`. On Linux, `gcc` is most likely already available, and `gfortran` can be installed with `apt-get install gfortran`.
To begin with, clone and enter the repo.
    git clone https://github.com/cambridge-mlg/convcnp
    cd convcnp
Then make a virtual environment and install the requirements.
    virtualenv -p python3 venv
    source venv/bin/activate
    pip install --upgrade pip
    pip install -r requirements.txt
This will install the latest version of `torch`. If your version of CUDA is not the latest, you might need to install an earlier version of `torch`.
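If you do need an older release, a minimal example is below; the version number is only a placeholder, so check the PyTorch installation instructions for the release that matches your CUDA installation.

    pip install torch==1.2.0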
You should now be ready to go! If you encounter any problems, feel free to open an issue, and we will try to help you resolve it as soon as possible.
Common issues:
- `fatal error: Python.h: No such file or directory`: Python libraries seem to be missing. Try `sudo apt-get install python3.X-dev` with `X` replaced by your particular version.
For a tutorial-style exposition of ConvCNPs, see the following two expository notebooks:
- Implementing and Training Convolutional Conditional Neural Processes, and
- Sequential Inference with Convolutional Conditional Neural Processes.
To reproduce the numbers from the 1d experiments, use `python train.py <data> <model> --train`.

The first argument, `<data>`, specifies the data that the model will be trained on, and should be one of the following:

- `eq`: samples from a GP with an exponentiated quadratic (EQ) kernel;
- `matern`: samples from a GP with a Matern-5/2 kernel;
- `noisy-mixture`: samples from a GP with a mixture of two EQ kernels and some noise;
- `weakly-periodic`: samples from a GP with a weakly-periodic kernel; or
- `sawtooth`: random sawtooth functions.
The second argument, `<model>`, specifies the model that will be trained, and should be one of the following:

- `convcnp`: small architecture for the Convolutional Conditional Neural Process;
- `convcnpxl`: large architecture for the Convolutional Conditional Neural Process;
- `cnp`: Conditional Neural Process; or
- `anp`: Attentive Conditional Neural Process.
Upon calling `python train.py <data> <model> --train`, the specified model will first be trained on the specified data source. Afterwards, the script will print the average log-likelihood on unseen data.
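For example, to train the small ConvCNP architecture on samples from a GP with an EQ kernel:

    python train.py eq convcnp --train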
To reproduce the numbers from all the 1d experiments from the paper at once, you can use `./run_all.sh`.
For more options, please see `python train.py --help`:
    usage: train.py [-h] [--root ROOT] [--train] [--epochs EPOCHS]
                    [--learning_rate LEARNING_RATE] [--weight_decay WEIGHT_DECAY]
                    {eq,matern,noisy-mixture,weakly-periodic,sawtooth}
                    {convcnp,convcnpxl,cnp,anp}

    positional arguments:
      {eq,matern,noisy-mixture,weakly-periodic,sawtooth}
                            Data set to train the CNP on.
      {convcnp,convcnpxl,cnp,anp}
                            Choice of model.

    optional arguments:
      -h, --help            show this help message and exit
      --root ROOT           Experiment root, which is the directory from which the
                            experiment will run. If it is not given, a directory
                            will be automatically created.
      --train               Perform training. If this is not specified, the model
                            will be attempted to be loaded from the experiment
                            root.
      --epochs EPOCHS       Number of epochs to train for.
      --learning_rate LEARNING_RATE
                            Learning rate.
      --weight_decay WEIGHT_DECAY
                            Weight decay.
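As a further illustration, the optional flags can be combined with any data/model pair; the epoch count and learning rate below are arbitrary values for demonstration, not recommended settings:

    python train.py matern convcnpxl --train --epochs 50 --learning_rate 3e-4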
Gordon, J., Bruinsma, W. P., Foong, A. Y. K., Requeima, J., Dubois, Y., and Turner, R. E. (2020). "Convolutional Conditional Neural Processes." In 8th International Conference on Learning Representations (ICLR).
BibTeX:

    @inproceedings{Gordon:2020:Convolutional_Conditional_Neural_Processes,
        title = {Convolutional Conditional Neural Processes},
        author = {Jonathan Gordon and Wessel P. Bruinsma and Andrew Y. K. Foong and James Requeima and Yann Dubois and Richard E. Turner},
        year = {2020},
        booktitle = {International Conference on Learning Representations},
        url = {https://openreview.net/forum?id=Skey4eBYPS}
    }