Commit db4be69

Merge branch 'main' into patch-1

Authored by chrisyeh96 on Nov 18, 2024
2 parents e513d50 + d501c28

Showing 57 changed files with 1,911 additions and 465 deletions.
15 changes: 10 additions & 5 deletions .conda/meta.yaml
@@ -1,8 +1,12 @@
-{% set data = load_setup_py_data(setup_file="../setup.py", from_recipe_dir=True) %}
+{% set _version_match = load_file_regex(
+    load_file="gpytorch/version.py",
+    regex_pattern="__version__ = version = '(.+)'"
+) %}
+{% set version = _version_match[1] %}
 
 package:
-  name: {{ data.get("name")|lower }}
-  version: {{ data.get("version") }}
+  name: gpytorch
+  version: {{ version }}
 
 source:
   path: ../
@@ -17,9 +21,10 @@ requirements:

   run:
     - python>=3.8
-    - pytorch>=1.11
+    - pytorch>=2.0
     - scikit-learn
-    - linear_operator>=0.5.2
+    - jaxtyping==0.2.19
+    - linear_operator>=0.5.3
 
 test:
   imports:
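For context, the new `load_file_regex` block makes the conda recipe read the package version directly out of `gpytorch/version.py` (presumably written by setuptools_scm — the deploy workflow below runs `python -m setuptools_scm`) instead of importing setup.py metadata. A rough Python equivalent of what the recipe's regex does (our sketch, not part of the diff):

```python
# Sketch of the conda recipe's version extraction. Assumption:
# gpytorch/version.py contains a line: __version__ = version = '<x.y.z>'
import re

with open("gpytorch/version.py") as f:
    match = re.search(r"__version__ = version = '(.+)'", f.read())

version = match.group(1)  # mirrors {% set version = _version_match[1] %}
```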
2 changes: 1 addition & 1 deletion .github/ISSUE_TEMPLATE/---documentation-examples.md
@@ -21,4 +21,4 @@ assignees: ''
 ** Think you know how to fix the docs? ** (If so, we'd love a pull request from you!)
 
 - Link to [GPyTorch documentation](https://gpytorch.readthedocs.io)
-- Link to [GPyTorch examples](https://github.com/cornellius-gp/gpytorch/tree/master/examples)
+- Link to [GPyTorch examples](https://github.com/cornellius-gp/gpytorch/tree/main/examples)
5 changes: 2 additions & 3 deletions .github/workflows/deploy.yml
@@ -52,9 +52,8 @@ jobs:
         conda config --set anaconda_upload yes
         conda config --append channels pytorch
         conda config --append channels gpytorch
+        conda config --append channels conda-forge
         /usr/share/miniconda/bin/anaconda login --username ${{ secrets.CONDA_USERNAME }} --password ${{ secrets.CONDA_PASSWORD }}
         python -m setuptools_scm
-        cd .conda
-        conda build .
+        conda build .conda
         /usr/share/miniconda/bin/anaconda logout
-        cd ..
6 changes: 3 additions & 3 deletions .github/workflows/run_test_suite.yml
@@ -5,9 +5,9 @@ name: Run Test Suite

 on:
   push:
-    branches: [ master ]
+    branches: [ main, develop ]
   pull_request:
-    branches: [ master ]
+    branches: [ main, develop ]
   workflow_call:
 
 jobs:
@@ -50,7 +50,7 @@ jobs:
           if [[ ${{ matrix.pytorch-version }} = "master" ]]; then
             pip install --pre torch -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html;
           else
-            pip install torch==1.11+cpu -f https://download.pytorch.org/whl/torch_stable.html;
+            pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cpu
           fi
           pip install -e .
           if [[ ${{ matrix.extras }} == "with-extras" ]]; then
8 changes: 4 additions & 4 deletions .pre-commit-config.yaml
@@ -16,23 +16,23 @@ repos:
   hooks:
     - id: flake8
       args: [--config=setup.cfg]
-      exclude: ^(examples/*)|(docs/*)
+      exclude: ^(examples/.*)|(docs/.*)
 - repo: https://github.com/omnilib/ufmt
   rev: v2.0.0
   hooks:
     - id: ufmt
       additional_dependencies:
         - black == 22.3.0
         - usort == 1.0.3
-      exclude: ^(build/*)|(docs/*)|(examples/*)
+      exclude: ^(build/.*)|(docs/.*)|(examples/.*)
 - repo: https://github.com/jumanjihouse/pre-commit-hooks
   rev: 2.1.6
   hooks:
     - id: require-ascii
-      exclude: ^(examples/.*\.ipynb)|(.github/ISSUE_TEMPLATE/*)
+      exclude: ^(examples/.*\.ipynb)|(.github/ISSUE_TEMPLATE/.*)
     - id: script-must-have-extension
     - id: forbid-binary
-      exclude: ^(examples/*)|(test/examples/old_variational_strategy_model.pth)
+      exclude: ^(examples/.*)|(test/examples/old_variational_strategy_model.pth)
 - repo: https://github.com/Lucas-C/pre-commit-hooks
   rev: v1.1.13
   hooks:
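A note on the exclude-pattern fixes above: in a regular expression, `examples/*` means the literal string `examples` followed by zero or more `/` characters, not "everything under `examples/`". The old form still matched these paths in practice (a regex prefix match suffices for pre-commit's matching), but `examples/.*` states the intent correctly. A quick illustration (ours, not from the diff):

```python
# Why "examples/*" was changed to "examples/.*" in the pre-commit excludes.
import re

path = "examples/00_Basic_Usage/Hyperparameters.ipynb"
print(re.fullmatch(r"examples/*", path))   # None: pattern stops at "examples/"
print(re.fullmatch(r"examples/.*", path))  # matches the entire path
print(re.search(r"^(examples/*)", path))   # prefix match: why the old form "worked"
```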
4 changes: 2 additions & 2 deletions README.md
@@ -30,7 +30,7 @@ See our [**documentation, examples, tutorials**](https://gpytorch.readthedocs.io

 **Requirements**:
 - Python >= 3.8
-- PyTorch >= 1.11
+- PyTorch >= 2.0
 
 Install GPyTorch using pip or conda:
 
@@ -88,7 +88,7 @@ If you use GPyTorch, please cite the following papers:

 ## Contributing
 
-See the contributing guidelines [CONTRIBUTING.md](https://github.com/cornellius-gp/gpytorch/blob/master/CONTRIBUTING.md)
+See the contributing guidelines [CONTRIBUTING.md](https://github.com/cornellius-gp/gpytorch/blob/main/CONTRIBUTING.md)
 for information on submitting issues and pull requests.
 
 
116 changes: 70 additions & 46 deletions docs/source/conf.py
@@ -19,7 +19,9 @@
 import sys
 import sphinx_rtd_theme  # noqa
 import warnings
-from typing import ForwardRef
 
+import jaxtyping
+from uncompyle6.semantics.fragments import code_deparse
+
 
 def read(*names, **kwargs):
@@ -112,7 +114,8 @@ def find_version(*file_paths):
 intersphinx_mapping = {
     "python": ("https://docs.python.org/3/", None),
     "torch": ("https://pytorch.org/docs/stable/", None),
-    "linear_operator": ("https://linear-operator.readthedocs.io/en/stable/", None),
+    "linear_operator": ("https://linear-operator.readthedocs.io/en/stable/", "linear_operator_objects.inv"),
+    # The local mapping here is temporary until we get a new release of linear_operator
 }
 
 # Disable docstring inheritance
@@ -237,41 +240,81 @@ def find_version(*file_paths):
 ]
 
 
-# -- Function to format typehints ----------------------------------------------
+# -- Functions to format typehints ----------------------------------------------
+# Adapted from
+# https://github.com/cornellius-gp/linear_operator/blob/2b33b9f83b45f0cb8cb3490fc5f254cc59393c25/docs/source/conf.py
 
 
+# Helper function
+# Convert any class (i.e. torch.Tensor, LinearOperator, etc.) into appropriate strings
+# For external classes, the format will be e.g. "torch.Tensor"
+# For any internal class, the format will be e.g. "~linear_operator.operators.TriangularLinearOperator"
+def _convert_internal_and_external_class_to_strings(annotation):
+    module = annotation.__module__ + "."
+    if module.split(".")[0] == "gpytorch":
+        module = "~" + module
+    elif module == "torch.":
+        module = "~torch."
+    elif module == "linear_operator.operators._linear_operator.":
+        module = "~linear_operator."
+    elif module == "builtins.":
+        module = ""
+    res = f"{module}{annotation.__name__}"
+    return res
+
+
+# Convert jaxtyping dimensions into strings
+def _dim_to_str(dim):
+    if isinstance(dim, jaxtyping.array_types._NamedVariadicDim):
+        return "..."
+    elif isinstance(dim, jaxtyping.array_types._FixedDim):
+        res = str(dim.size)
+        if dim.broadcastable:
+            res = "#" + res
+        return res
+    elif isinstance(dim, jaxtyping.array_types._SymbolicDim):
+        expr = code_deparse(dim.expr).text.strip().split("return ")[1]
+        return f"({expr})"
+    elif "jaxtyping" not in str(dim.__class__):  # Probably the case that we have an ellipsis
+        return "..."
+    else:
+        res = str(dim.name)
+        if dim.broadcastable:
+            res = "#" + res
+        return res
+
+
+# Function to format type hints
 def _process(annotation, config):
     """
     A function to convert a type/rtype typehint annotation into a :type:/:rtype: string.
     This function is a bit hacky, and specific to the type annotations we use most frequently.
     This function is recursive.
     """
     # Simple/base case: any string annotation is ready to go
     if type(annotation) == str:
         return annotation
 
+    # Jaxtyping: shaped tensors or linear operator
+    elif hasattr(annotation, "__module__") and "jaxtyping" == annotation.__module__:
+        cls_annotation = _convert_internal_and_external_class_to_strings(annotation.array_type)
+        shape = " x ".join([_dim_to_str(dim) for dim in annotation.dims])
+        return f"{cls_annotation} ({shape})"
+
     # Convert Ellipsis into "..."
     elif annotation == Ellipsis:
         return "..."
 
-    # Convert any class (i.e. torch.Tensor, LinearOperator, gpytorch, etc.) into appropriate strings
-    # For external classes, the format will be e.g. "torch.Tensor"
-    # For any linear_operator class, the format will be e.g. "~linear_operator.operators.TriangularLinearOperator"
-    # For any internal class, the format will be e.g. "~gpytorch.kernels.RBFKernel"
+    # Convert any class (i.e. torch.Tensor, LinearOperator, etc.) into appropriate strings
     elif hasattr(annotation, "__name__"):
-        module = annotation.__module__ + "."
-        if module.split(".")[0] == "linear_operator":
-            if annotation.__name__.endswith("LinearOperator"):
-                module = "~linear_operator."
-            elif annotation.__name__.endswith("LinearOperator"):
-                module = "~linear_operator.operators."
-            else:
-                module = "~" + module
-        elif module.split(".")[0] == "gpytorch":
-            module = "~" + module
-        elif module == "builtins.":
-            module = ""
-        res = f"{module}{annotation.__name__}"
+        res = _convert_internal_and_external_class_to_strings(annotation)
 
+    elif str(annotation).startswith("typing.Callable"):
+        if len(annotation.__args__) == 2:
+            res = f"Callable[{_process(annotation.__args__[0], config)} -> {_process(annotation.__args__[1], config)}]"
+        else:
+            res = "Callable"
+
     # Convert any Union[*A*, *B*, *C*] into "*A* or *B* or *C*"
     # Also, convert any Optional[*A*] into "*A*, optional"
@@ -291,33 +334,14 @@ def _process(annotation, config):
         args = list(annotation.__args__)
         res = "(" + ", ".join(_process(arg, config) for arg in args) + ")"
 
-    # Convert any List[*A*] into "list(*A*)"
-    elif str(annotation).startswith("typing.List"):
-        arg = annotation.__args__[0]
-        res = "list(" + _process(arg, config) + ")"
-
-    # Convert any List[*A*] into "list(*A*)"
-    elif str(annotation).startswith("typing.Dict"):
-        res = str(annotation)
-
-    # Convert any Iterable[*A*] into "iterable(*A*)"
-    elif str(annotation).startswith("typing.Iterable"):
-        arg = annotation.__args__[0]
-        res = "iterable(" + _process(arg, config) + ")"
-
-    # Handle "Callable"
-    elif str(annotation).startswith("typing.Callable"):
-        res = "callable"
-
-    # Handle "Any"
-    elif str(annotation).startswith("typing.Any"):
-        res = ""
+    # Convert any List[*A*] or Iterable[*A*] into "[*A*, ...]"
+    elif str(annotation).startswith("typing.Iterable") or str(annotation).startswith("typing.List"):
+        arg = list(annotation.__args__)[0]
+        res = f"[{_process(arg, config)}, ...]"
 
-    # Special cases for forward references.
-    # This is brittle, as it only contains case for a select few forward refs
-    # All others that aren't caught by this are handled by the default case
-    elif isinstance(annotation, ForwardRef):
-        res = str(annotation.__forward_arg__)
+    # Callable typing annotation
+    elif str(annotation).startswith("typing."):
+        return str(annotation)[7:]
 
     # For everything we didn't catch: use the simplist string representation
     else:
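To see what the new conf.py helpers produce, consider a jaxtyping-shaped annotation (a sketch under the pinned `jaxtyping==0.2.19`; the expected output below is our reading of the code above, not part of the diff):

```python
# Sketch: what conf.py's _process renders for a shaped-tensor annotation.
import torch
from jaxtyping import Float

annotation = Float[torch.Tensor, "... M N"]
# annotation.array_type is torch.Tensor, which
# _convert_internal_and_external_class_to_strings turns into "~torch.Tensor";
# _dim_to_str renders the leading variadic dim as "..." and the named dims
# as "M" and "N", so _process would return:
#     "~torch.Tensor (... x M x N)"
```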
2 changes: 1 addition & 1 deletion docs/source/distributions.rst
@@ -5,7 +5,7 @@ gpytorch.distributions
 ===================================
 
 GPyTorch distribution objects are essentially the same as torch distribution objects.
-For the most part, GpyTorch relies on torch's distribution library.
+For the most part, GPyTorch relies on torch's distribution library.
 However, we offer two custom distributions.
 
 We implement a custom :obj:`~gpytorch.distributions.MultivariateNormal` that accepts
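For readers skimming the diff: gpytorch's custom `MultivariateNormal` is a drop-in extension of torch's that also works with lazily evaluated (LinearOperator) covariance matrices. A minimal usage sketch (ours, not from the diff):

```python
# Sketch: basic use of gpytorch.distributions.MultivariateNormal.
import torch
import gpytorch

mvn = gpytorch.distributions.MultivariateNormal(torch.zeros(3), torch.eye(3))
sample = mvn.rsample()  # reparameterized sample of shape (3,)
```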
9 changes: 8 additions & 1 deletion docs/source/kernels.rst
@@ -9,7 +9,7 @@ gpytorch.kernels


 If you don't know what kernel to use, we recommend that you start out with a
-:code:`gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel)`.
+:code:`gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel()) + gpytorch.kernels.ConstantKernel()`.
 
 
 Kernel
@@ -22,6 +22,13 @@ Kernel
 Standard Kernels
 -----------------------------
 
+:hidden:`ConstantKernel`
+~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. autoclass:: ConstantKernel
+   :members:
+
+
 :hidden:`CosineKernel`
 ~~~~~~~~~~~~~~~~~~~~~~
 
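A minimal sketch of the newly recommended starting kernel (assumes a gpytorch build that ships `ConstantKernel`, which this diff documents):

```python
# Sketch: the recommended default covariance module from kernels.rst.
import gpytorch

covar_module = gpytorch.kernels.ScaleKernel(
    gpytorch.kernels.RBFKernel()
) + gpytorch.kernels.ConstantKernel()
# The sum is itself a kernel (an AdditiveKernel), so it can be assigned as
# covar_module in any GP model.
```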
Binary file added docs/source/linear_operator_objects.inv
12 changes: 6 additions & 6 deletions docs/source/utils.rst
@@ -20,6 +20,12 @@ Interpolation Utilities
 .. automodule:: gpytorch.utils.interpolation
    :members:
 
+Nearest Neighbors Utilities
+---------------------------------
+
+.. automodule:: gpytorch.utils.nearest_neighbors
+   :members:
+
 Quadrature Utilities
 ----------------------------
 
@@ -31,9 +37,3 @@ Transform Utilities

 .. automodule:: gpytorch.utils.transforms
    :members:
-
-Nearest Neighbors Utilities
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-.. automodule:: gpytorch.utils.nearest_neighbors
-   :members:
2 changes: 1 addition & 1 deletion examples/00_Basic_Usage/Implementing_a_custom_Kernel.ipynb
@@ -209,7 +209,7 @@
"source": [
"### Adding hyperparameters\n",
"\n",
"Althogh the `FirstSincKernel` can be used for defining a model, it lacks a parameter that controls the correlation length. This lengthscale will be implemented as a hyperparameter. See also the [tutorial on hyperparamaters](https://docs.gpytorch.ai/en/latest/examples/00_Basic_Usage/Hyperparameters.html), for information on raw vs. actual parameters.\n",
"Although the `FirstSincKernel` can be used for defining a model, it lacks a parameter that controls the correlation length. This lengthscale will be implemented as a hyperparameter. See also the [tutorial on hyperparamaters](https://docs.gpytorch.ai/en/latest/examples/00_Basic_Usage/Hyperparameters.html), for information on raw vs. actual parameters.\n",
"\n",
"The parameter has to be registered, using the method `register_parameter()`, which `Kernel` inherits from `Module`. Similarly, we register constraints and priors."
]
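The notebook passage above describes hyperparameter registration; a condensed sketch of the pattern it refers to (illustrative names, not the notebook's exact code):

```python
# Sketch of registering a raw hyperparameter plus a constraint on a custom
# kernel, per the notebook's description. Names here are illustrative.
import torch
import gpytorch

class SincKernelWithLengthscale(gpytorch.kernels.Kernel):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # raw, unconstrained parameter (see the Hyperparameters tutorial
        # for raw vs. actual parameters)
        self.register_parameter(
            name="raw_length", parameter=torch.nn.Parameter(torch.zeros(1))
        )
        # constraint mapping the raw value to a positive actual lengthscale
        self.register_constraint("raw_length", gpytorch.constraints.Positive())

    def forward(self, x1, x2, **params):
        length = self.raw_length_constraint.transform(self.raw_length)
        return torch.sinc(self.covar_dist(x1, x2, **params) / length)
```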
12 changes: 10 additions & 2 deletions examples/00_Basic_Usage/index.rst
@@ -6,17 +6,22 @@ parameter constraints and priors, and saving and loading models.

 Before checking these out, you may want to check out our `simple GP regression tutorial`_ that details the anatomy of a GPyTorch model.
 
-- Check out our `Tutorial on Hyperparameters`_ for information on things like raw versus actual
+* Check out our `Tutorial on Hyperparameters`_ for information on things like raw versus actual
   parameters, constraints, priors and more.
-- The `Saving and Loading Models`_ notebook details how to save and load GPyTorch models
+* The `Saving and Loading Models`_ notebook details how to save and load GPyTorch models
   on disk.
+* The `Kernels with Additive or Product Structure`_ notebook describes how to compose kernels additively or multiplicatively,
+  whether for expressivity, sample efficiency, or scalability.
+* The `Implementing a Custom Kernel`_ notebook details how to write your own custom kernel in GPyTorch.
+* The `Tutorial on Metrics`_ describes various metrics provided by GPyTorch for assessing the generalization of GP models.
 
 .. toctree::
    :maxdepth: 1
    :hidden:
 
    Hyperparameters.ipynb
    Saving_and_Loading_Models.ipynb
+   kernels_with_additive_or_product_structure.ipynb
    Implementing_a_custom_Kernel.ipynb
    Metrics.ipynb
 
@@ -29,6 +34,9 @@ Before checking these out, you may want to check out our `simple GP regression t
 .. _Saving and Loading Models:
    Saving_and_Loading_Models.ipynb
 
+.. _Kernels with Additive or Product Structure:
+   kernels_with_additive_or_product_structure.ipynb
+
 .. _Implementing a custom Kernel:
    Implementing_a_custom_Kernel.ipynb
 