
Intel® Low Precision Optimization Tool v1.6 Release


The Intel® Low Precision Optimization Tool v1.6 release features:

Pruning:

  • Support the pruning plus post-training quantization pipeline on PyTorch (see the sketch after this list)
  • Support pruning during quantization-aware training on PyTorch
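
The sketch below chains the two new pruning capabilities into the simplest pipeline: prune a PyTorch model first, then hand the result to post-training quantization. It assumes the lpot.experimental API as documented for the 1.x series; the YAML file names (prune_conf.yaml, ptq_conf.yaml) and the ResNet-18 model are placeholders, and the training/evaluation loops driving the pruning schedule are expected to be configured in the YAML.

```python
# Sketch: pruning followed by post-training quantization on PyTorch with LPOT 1.x.
# prune_conf.yaml / ptq_conf.yaml are placeholder configs; training and evaluation
# settings for the pruning schedule are assumed to live in those YAML files.
import torchvision.models as models
from lpot.experimental import Pruning, Quantization, common

fp32_model = models.resnet18(pretrained=True)

# Step 1: run the YAML-driven pruning schedule.
prune = Pruning('prune_conf.yaml')
prune.model = common.Model(fp32_model)
pruned_model = prune()

# Step 2: post-training quantization of the pruned weights.
quantizer = Quantization('ptq_conf.yaml')
quantizer.model = common.Model(pruned_model.model)  # unwrap the underlying nn.Module
q_model = quantizer()
q_model.save('./pruned_int8_model')
```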

Quantization:

  • Support post-training quantization on TensorFlow 2.6.0, PyTorch 1.9.0, IPEX 1.8.0, and MXNet 1.8.0 (see the sketch after this list)
  • Support quantization-aware training on TensorFlow 2.x (Keras API)
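
As a quick illustration of the post-training quantization flow, the sketch below quantizes a TensorFlow frozen graph against a synthetic calibration set. It assumes the lpot.experimental Quantization/common API; the config file name, model path, and input shape are placeholders.

```python
# Sketch: post-training quantization of a TensorFlow model with LPOT 1.x.
# 'ptq_conf.yaml' and './frozen_resnet50.pb' are placeholders; a real run should use a
# representative calibration dataset instead of random data.
import numpy as np
from lpot.experimental import Quantization, common

class CalibDataset:
    """Synthetic (image, label) pairs; common.DataLoader batches items from it."""
    def __init__(self, n=100, shape=(224, 224, 3)):
        self.samples = [(np.random.rand(*shape).astype(np.float32), 0) for _ in range(n)]
    def __getitem__(self, idx):
        return self.samples[idx]
    def __len__(self):
        return len(self.samples)

quantizer = Quantization('ptq_conf.yaml')              # accuracy criterion and tuning space
quantizer.model = common.Model('./frozen_resnet50.pb')
quantizer.calib_dataloader = common.DataLoader(CalibDataset())
q_model = quantizer()                                  # calibration + accuracy-driven tuning
q_model.save('./int8_model')
```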

User Experience:

  • Improve quantization productivity with a new UI
  • Support quantized model recovery from tuning history (see the sketch after this list)
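
For the tuning-history recovery feature, a rough sketch follows. The recover() helper path, its arguments, and the history snapshot location shown here are assumptions for illustration rather than confirmed v1.6 API; they mirror how later releases expose this capability.

```python
# Sketch: rebuild a quantized model from a saved tuning history. The recover() helper
# location, its arguments, and the snapshot path are assumptions, not confirmed v1.6 API.
import torchvision.models as models
from lpot.utils.utility import recover  # assumed helper path

fp32_model = models.resnet18(pretrained=True)
# history.snapshot is written into the workspace during tuning; this path is illustrative.
q_model = recover(fp32_model, './lpot_workspace/history.snapshot', num=0)
q_model.save('./recovered_int8_model')
```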

New Models:

  • Support ResNet50 on ONNX model zoo

Documentation:

  • Add pruned models
  • Add quantized MLPerf models

Validated Configurations:

  • Python 3.6 & 3.7 & 3.8 & 3.9
  • CentOS 8.3 & Ubuntu 18.04
  • TensorFlow 2.6.0
  • Intel TensorFlow 2.4.0, 2.5.0 and 1.15.0 UP3
  • PyTorch 1.8.0+cpu, 1.9.0+cpu, IPEX 1.8.0
  • MXNet 1.6.0, 1.7.0, 1.8.0
  • ONNX Runtime 1.6.0, 1.7.0, 1.8.0

Distribution:

  Channel           Link                                 Install Command
  Source (Github)   https://github.com/intel/lpot.git    $ git clone https://github.com/intel/lpot.git
  Binary (Pip)      https://pypi.org/project/lpot        $ pip install lpot
  Binary (Conda)    https://anaconda.org/intel/lpot      $ conda install lpot -c conda-forge -c intel

Contact:

Please feel free to contact [email protected] if you have any questions.