You can use Jax for your dataloading, your network, or the learning algorithm, all while still benefiting from the conveniences that come with PyTorch Lightning.

How does this work?

We use torch-jax-interop, another package developed here at Mila, which allows easy interop between Torch and Jax code. See the readme on that repo for more details.
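
To give a concrete sense of what that interop looks like, here is a minimal sketch of converting tensors back and forth. It assumes the `torch_to_jax` / `jax_to_torch` helpers described in the torch-jax-interop readme:

```python
# Minimal sketch of the interop layer. Assumes torch-jax-interop exposes
# `torch_to_jax` and `jax_to_torch` as described in its readme.
import jax
import jax.numpy as jnp
import torch
from torch_jax_interop import jax_to_torch, torch_to_jax

torch_tensor = torch.arange(5, dtype=torch.float32)
jax_array = torch_to_jax(torch_tensor)  # the conversion shares the underlying memory
assert isinstance(jax_array, jax.Array)

torch_again = jax_to_torch(jnp.ones(3))  # and the same in the other direction
assert isinstance(torch_again, torch.Tensor)
```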
You can use Jax for your training step, but not for the entire training loop, since that is handled by Lightning. There are good reasons to let Lightning keep the training loop: most notably, it handles the logging, checkpointing, and other machinery that you would lose if you swapped out the entire training framework for something based on Jax.

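Here is a hedged sketch of that division of labor: Lightning's Trainer still runs the loop, logging, and checkpointing, while the math inside `training_step` runs in Jax. The tiny linear model and the manual-optimization wiring are illustrative assumptions, not the template's actual example algorithm:

```python
# Sketch only: Lightning owns the loop; the step's math runs in Jax.
# `torch_to_jax` / `jax_to_torch` come from torch-jax-interop; everything else
# here (the linear model, the manual optimization) is an illustrative choice.
import jax
import jax.numpy as jnp
import torch
from lightning import LightningModule
from torch_jax_interop import jax_to_torch, torch_to_jax


def loss_fn(w: jax.Array, x: jax.Array, y: jax.Array) -> jax.Array:
    logits = x @ w
    log_probs = jax.nn.log_softmax(logits)
    return -jnp.take_along_axis(log_probs, y[:, None], axis=1).mean()


# Jax computes both the loss and the gradient, jitted once.
value_and_grad = jax.jit(jax.value_and_grad(loss_fn))


class JaxStepModule(LightningModule):
    def __init__(self, in_features: int = 784, num_classes: int = 10):
        super().__init__()
        # We step the optimizer ourselves, since the gradients come from Jax.
        self.automatic_optimization = False
        self.w = torch.nn.Parameter(torch.zeros(in_features, num_classes))

    def training_step(self, batch, batch_index: int):
        x, y = batch
        loss, grad = value_and_grad(
            torch_to_jax(self.w.detach()), torch_to_jax(x), torch_to_jax(y)
        )
        # Hand the Jax gradient back to torch and let the torch optimizer step.
        self.w.grad = jax_to_torch(grad)
        optimizer = self.optimizers()
        optimizer.step()
        optimizer.zero_grad()
        # Lightning still does the logging and checkpointing for us.
        self.log("train/loss", jax_to_torch(loss))

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)
```
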
In this example Jax algorithm, a neural network written in Jax (using flax) is wrapped with torch_jax_interop.JaxFunction so that its parameters are learnable. The parameters are stored on the LightningModule as nn.Parameters (which share the same underlying memory as the Jax arrays). In this example, the loss function is written in PyTorch, while the network's forward and backward passes are written in Jax.
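
The shape of that wrapping looks roughly like the following. The flax network is ordinary flax code, but the exact `JaxFunction` signature shown here is an assumption on our part; check the torch-jax-interop readme for the actual API:

```python
# The flax network below is ordinary flax code; how `JaxFunction` is invoked
# is an assumption (see the torch-jax-interop readme for the real signature).
import flax.linen
import jax
import jax.numpy as jnp
import torch
from torch_jax_interop import JaxFunction


class FcNet(flax.linen.Module):
    num_classes: int = 10

    @flax.linen.compact
    def __call__(self, x: jax.Array) -> jax.Array:
        x = x.reshape((x.shape[0], -1))  # flatten image inputs
        x = flax.linen.Dense(features=256)(x)
        x = flax.linen.relu(x)
        return flax.linen.Dense(features=self.num_classes)(x)


network = FcNet()
params = network.init(jax.random.PRNGKey(0), jnp.zeros((1, 28, 28)))

# Assumed usage: the wrapped function behaves like a torch.nn.Module whose
# nn.Parameters share memory with the flax parameter arrays, so a PyTorch
# loss on its output trains the Jax network.
wrapped_network = JaxFunction(jax.jit(network.apply), params)
logits = wrapped_network(torch.zeros(1, 28, 28))  # forward pass runs in Jax
```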
]) -> TypeGuard [ list [ V ]]: \"\"\"Used to check (and tell the type checker) that `object` is a list of items of this type.\"\"\" return isinstance ( object , list ) and is_sequence_of ( object , item_type )","title":"is_list_of"},{"location":"reference/project/utils/types/#project.utils.types.is_sequence_of","text":"is_sequence_of ( object : Any , item_type : type [ V ] | tuple [ type [ V ], ... ] ) -> TypeGuard [ Sequence [ V ]] Used to check (and tell the type checker) that object is a sequence of items of this type. Source code in project/utils/types/__init__.py 50 51 52 53 54 55 def is_sequence_of [ V ]( object : Any , item_type : type [ V ] | tuple [ type [ V ], ... ] ) -> TypeGuard [ Sequence [ V ]]: \"\"\"Used to check (and tell the type checker) that `object` is a sequence of items of this type.\"\"\" return isinstance ( object , Sequence ) and all ( isinstance ( value , item_type ) for value in object )","title":"is_sequence_of"},{"location":"reference/project/utils/types/#project.utils.types.is_mapping_of","text":"is_mapping_of ( object : Any , key_type : type [ K ], value_type : type [ V ] ) -> TypeGuard [ Mapping [ K , V ]] Used to check (and tell the type checker) that object is a mapping with keys and values of the given types. Source code in project/utils/types/__init__.py 58 59 60 61 62 63 64 65 66 def is_mapping_of [ K , V ]( object : Any , key_type : type [ K ], value_type : type [ V ] ) -> TypeGuard [ Mapping [ K , V ]]: \"\"\"Used to check (and tell the type checker) that `object` is a mapping with keys and values of the given types.\"\"\" return isinstance ( object , Mapping ) and all ( isinstance ( key , key_type ) and isinstance ( value , value_type ) for key , value in object . items () )","title":"is_mapping_of"}]}
\ No newline at end of file
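The `evaluation` entry indexed above reports a different split's classification error depending on two Trainer flags. As a minimal sketch of that branching only (not the template's actual code: `pick_evaluation_phase` is a hypothetical name, it takes the flags as plain floats rather than a real `Trainer`, and the real function also returns the error value and a metrics dict):

```python
# Sketch of the phase selection described in the `evaluation` docs above.
# Hypothetical helper, shown only to make the branching concrete.
from typing import Literal

PhaseStr = Literal["train", "val", "test"]


def pick_evaluation_phase(overfit_batches: float, limit_val_batches: float) -> PhaseStr:
    if overfit_batches != 0:
        # Overfitting on a handful of batches (debugging/testing): report the training error.
        return "train"
    if limit_val_batches == 0:
        # Validation is disabled: fall back to the test error.
        return "test"
    # Default: report the validation error.
    return "val"
```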
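Similarly, the `TypeGuard` helpers indexed above (`is_list_of`, `is_sequence_of`, `is_mapping_of`) exist so that `Any`-typed batches can be narrowed for the type checker. A small usage sketch, assuming the import path shown in their source listing (`project/utils/types/__init__.py`); the `sum_batch` function is made up for illustration:

```python
from typing import Any

from project.utils.types import is_sequence_of  # path taken from the docs above


def sum_batch(batch: Any) -> int:
    """Sum a batch, but only if it really is a sequence of ints."""
    if is_sequence_of(batch, int):
        # The TypeGuard narrows `batch` to Sequence[int] inside this branch.
        return sum(batch)
    raise TypeError(f"Expected a sequence of ints, got {type(batch).__name__}")
```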
diff --git a/sitemap.xml b/sitemap.xml
index 22bc5ac6..e3f3f96e 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -35,11 +35,6 @@
     <lastmod>2024-06-27</lastmod>
     <changefreq>daily</changefreq>
    </url>
-    <url>
-     <loc>https://mila-iqia.github.io/ResearchTemplate/examples/SUMMARY/</loc>
-     <lastmod>2024-06-27</lastmod>
-     <changefreq>daily</changefreq>
-    </url>
    <url>
     <loc>https://mila-iqia.github.io/ResearchTemplate/examples/examples/</loc>
     <lastmod>2024-06-27</lastmod>
diff --git a/sitemap.xml.gz b/sitemap.xml.gz
index 79984e845f67b6376f4cc5f864afb5c842897e03..15f1bde4b2d3df5ae6d09b91e0be03433b408e5a 100644
GIT binary patch
delta 330
[base85-encoded binary delta omitted; it carries the gzipped form of the sitemap.xml change above]
diff --git a/tests/index.html b/tests/index.html
index 32a212fd..d52eba65 100644
--- a/tests/index.html
+++ b/tests/index.html
@@ -73,11 +73,9 @@