QPyTorch

QPyTorch is a low-precision arithmetic simulation package in PyTorch. It is designed to support research on low-precision machine learning, especially low-precision training.

Notably, QPyTorch supports quantizing the different numbers that arise in the training process with customized low-precision formats. This eases the investigation of different precision settings and the development of new deep learning architectures. More concretely, QPyTorch implements fused kernels for quantization and integrates smoothly with existing PyTorch kernels (e.g., matrix multiplication, convolution).
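
As a rough sketch of what this looks like in practice (assuming the `qtorch` package layout, with number formats such as `FloatingPoint` and the `Quantizer` module from `qtorch.quant`), a quantizer can be dropped between ordinary PyTorch layers:

```python
import torch
import torch.nn as nn
from qtorch import FloatingPoint
from qtorch.quant import Quantizer

# A simulated 8-bit floating point format: 5 exponent bits, 2 mantissa bits.
bit_8 = FloatingPoint(exp=5, man=2)

# Quantizer is an nn.Module: it quantizes activations on the forward pass
# and gradients on the backward pass, so it composes with ordinary layers.
model = nn.Sequential(
    nn.Linear(784, 256),
    Quantizer(forward_number=bit_8, backward_number=bit_8,
              forward_rounding="nearest", backward_rounding="stochastic"),
    nn.ReLU(),
    nn.Linear(256, 10),
)

out = model(torch.randn(32, 784))  # the Linear layers themselves run in fp32
```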

Recent research can easily be reimplemented with QPyTorch. We offer an example replication of WAGE in a downstream repo, WAGE. We also provide a list of working examples under Examples.

Note: QPyTorch relies on PyTorch functions for the underlying computation, such as matrix multiplication. This means that the actual computation is done in single precision. Therefore, QPyTorch is not intended to be used to study the numerical behavior of different accumulation strategies.
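
To make this concrete, here is a sketch (assuming `qtorch.quant.float_quantize`) of how a "low-precision" matrix multiplication is simulated: the inputs and outputs are quantized, but the multiply-accumulate inside `torch.mm` still runs in single precision:

```python
import torch
from qtorch.quant import float_quantize

# Quantize the inputs to a simulated 8-bit float (5 exponent, 2 mantissa bits)...
a = float_quantize(torch.randn(4, 4), exp=5, man=2, rounding="nearest")
b = float_quantize(torch.randn(4, 4), exp=5, man=2, rounding="nearest")

# ...but the multiply-accumulate inside torch.mm still happens in fp32;
# a true low-precision accumulator is not simulated.
c = torch.mm(a, b)

# Low precision in the result can only be imposed after the fact:
c_q = float_quantize(c, exp=5, man=2, rounding="nearest")
```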

Note: QPyTorch currently uses a different rounding mode from PyTorch. QPyTorch does round-away-from-zero, while PyTorch does round-to-nearest-even. This creates a discrepancy between PyTorch half-precision tensors and QPyTorch's simulation of half-precision numbers.
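
A minimal illustration of the discrepancy (again assuming `qtorch.quant.float_quantize`; the input is chosen to lie exactly halfway between two representable fp16 values, so the two tie-breaking rules diverge):

```python
import torch
from qtorch.quant import float_quantize

# 2 + 2**-10 lies exactly halfway between the fp16 neighbors 2.0 and 2.001953125.
x = torch.tensor([2.0 + 2.0 ** -10])

# PyTorch's native half conversion rounds the tie to nearest even -> 2.0
print(x.half().float())

# QPyTorch's simulated fp16 (5 exponent, 10 mantissa bits) rounds the tie
# away from zero -> 2.001953125
print(float_quantize(x, exp=5, man=10, rounding="nearest"))
```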

Installation

Requirements:

  • Python >= 3.6
  • PyTorch >= 1.0
  • GCC >= 4.9 on Linux

Install other requirements by:

pip install -r requirements.txt

Install QPyTorch through pip:

pip install qtorch

For more details about compiler requirements, please refer to the PyTorch extension tutorial.

Documentation

See our readthedocs page.

Tutorials

Examples

  • Low-precision VGGs and ResNets using fixed point and block floating point on CIFAR and ImageNet (lp_train)
  • Reproduction of WAGE in QPyTorch (WAGE)
  • Implementation (simulation) of 8-bit Floating Point Training in QPyTorch (IBM8)

Team
