Accelerated RL


A PyTorch framework for RL research, developed with a focus on reproducibility, fast experimentation, and code/idea reuse. It lets you research and develop something new rather than write yet another regular training loop.
Break the cycle – use Catalyst!

Project manifest. Catalyst-RL is part of the PyTorch Ecosystem and the Catalyst Ecosystem:

  • Alchemy - Experiments logging & visualization
  • Catalyst - Accelerated Deep Learning Research and Development
  • Reaction - Convenient Deep Learning models serving

Catalyst at AI Landscape.


Installation

Common installation:

pip install -U catalyst-rl

Catalyst.RL is compatible with Python 3.6+ and PyTorch 1.0.0+.

Getting started

For Catalyst.RL introduction, please follow OpenAI Gym example.
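Before diving into that example, it may help to see the bare environment interaction loop that any RL pipeline is built on. The sketch below uses only the standard Gym-style reset/step interface with a toy environment defined inline; the `ToyEnv` and `run_episode` names are illustrative, not part of Catalyst.RL's API.

```python
import random

class ToyEnv:
    """Minimal Gym-style environment: reach state +10 by stepping +1/-1."""
    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):  # action: 0 -> step -1, 1 -> step +1
        self.state += 1 if action == 1 else -1
        done = abs(self.state) >= 10
        reward = 1.0 if self.state >= 10 else 0.0
        return self.state, reward, done, {}

def run_episode(env, policy, max_steps=100):
    """Roll out one episode with the given policy; return the total reward."""
    obs, total = env.reset(), 0.0
    for _ in range(max_steps):
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
        if done:
            break
    return total

random.seed(0)
score = run_episode(ToyEnv(), lambda obs: random.randint(0, 1))
```

Catalyst.RL replaces the hand-rolled `run_episode` and random policy with configurable samplers, trainers, and learned policies, while keeping this same environment contract.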

Docs and examples

API documentation and an overview of the library can be found in the Docs.
In the examples folder of the repository, you can find advanced tutorials and Catalyst best practices.

Info

To learn more about Catalyst internals and keep up with the most important features, read Catalyst-info – our blog, where we regularly post about the framework.

We also maintain the Awesome Catalyst list – Catalyst-powered projects, tutorials, and talks.
Feel free to open a PR to add your project to the list. And don't forget to check out the current list – there are many interesting projects there.

Releases

We deploy a major release once a month, named in the YY.MM format.
Micro-releases with framework improvements are published during the month in the YY.MM.# format.
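This version format can be validated mechanically. The snippet below is an illustrative sketch only – the regex and the `parse_release` helper are ours, not part of the framework:

```python
import re

# YY.MM for major releases, optional .# suffix for micro-releases.
VERSION_RE = re.compile(r"^(\d{2})\.(0[1-9]|1[0-2])(?:\.(\d+))?$")

def parse_release(tag):
    """Return (year, month, micro) for a YY.MM[.#] tag, or None if invalid."""
    m = VERSION_RE.match(tag)
    if not m:
        return None
    year, month, micro = m.groups()
    return int(year), int(month), int(micro) if micro else 0

print(parse_release("20.04"))    # (20, 4, 0)
print(parse_release("20.04.2"))  # (20, 4, 2)
print(parse_release("20.13"))    # None – 13 is not a valid month
```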

You can view the changelog on the GitHub Releases page.

Overview

Catalyst.RL helps you write compact but full-featured RL pipelines in a few lines of code. You get a training loop with metrics, early-stopping, model checkpointing and other features without the boilerplate.

Features

  • Universal train/inference loop.
  • Configuration files for model/data hyperparameters.
  • Reproducibility – all source code and environment variables are saved.
  • Callbacks – reusable train/inference pipeline parts.
  • Training stages support.
  • Easy customization.
  • PyTorch best practices (SWA, AdamW, Ranger optimizer, OneCycle, FP16 and more).
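As one example, the AdamW + OneCycle combination mentioned above can be wired up in plain PyTorch. This is a minimal sketch, not Catalyst code; the model shape and hyperparameters are arbitrary:

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-2, total_steps=100
)

lrs = []
for _ in range(100):
    optimizer.zero_grad()
    loss = model(torch.randn(8, 4)).pow(2).mean()  # dummy objective
    loss.backward()
    optimizer.step()
    scheduler.step()
    lrs.append(optimizer.param_groups[0]["lr"])

# The learning rate warms up toward max_lr, then anneals back down.
```

Catalyst wraps this scheduling boilerplate into its configuration files and callbacks, so the loop above is what the framework generates for you rather than what you write by hand.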

Structure

  • RL – scalable Reinforcement Learning: implementations of all popular model-free algorithms and their improvements, with distributed training support.
  • contrib - additional modules contributed by Catalyst users.
  • utils - useful utilities for Deep Learning research.

Contribution guide

We appreciate all contributions. If you are planning to contribute bug-fixes, please feel free to do so without further discussion. If you plan to contribute new features, utility functions, or extensions, please open an issue first and discuss the feature with us.

License

This project is licensed under the Apache License, Version 2.0; see the LICENSE file for details.

Citation

Please use this BibTeX entry if you want to cite this repository in your publications:

@misc{catalyst,
    author = {Kolesnikov, Sergey},
    title = {Accelerated RL.},
    year = {2018},
    publisher = {GitHub},
    journal = {GitHub repository},
    howpublished = {\url{https://github.com/catalyst-team/catalyst-rl}},
}
