Fix LSTM int8 quantization model size issue (pytorch#23577)
Summary:
Pull Request resolved: pytorch#23577

This diff fixes a model size issue introduced in pytorch#23291. After that PR, the model size after int8 quantization was the same as that of the original unquantized model. The reason is that we saved the original weight for int8 quantization even when it was no longer needed. This diff fixes that by saving the original weight only on the fp16 quantization path.

Reviewed By: llyfacebook

Differential Revision: D16557619

fbshipit-source-id: f924ae8d155a0d525b86a7440b3c7147d5bead0a
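A minimal sketch of the idea behind the fix, with hypothetical names (`pack_weight`, `PackedParam`) and placeholder quantization parameters; this is not the actual PyTorch internal code, only an illustration of retaining the original weight on the fp16 path while dropping it on the int8 path:

```python
import torch

class PackedParam:
    """Illustrative container for a packed LSTM weight."""
    def __init__(self, packed, orig_weight=None):
        self.packed = packed            # quantized/packed representation
        self.orig_weight = orig_weight  # retained only when still needed

def pack_weight(weight: torch.Tensor, dtype: torch.dtype) -> PackedParam:
    if dtype == torch.qint8:
        # int8 path: the packed weight alone suffices, so the original
        # fp32 weight is no longer saved -- this is what shrinks the
        # serialized model back down. Scale/zero_point are placeholders.
        q = torch.quantize_per_tensor(weight, scale=0.1, zero_point=0,
                                      dtype=torch.qint8)
        return PackedParam(q)
    elif dtype == torch.float16:
        # fp16 path: the original weight is still kept alongside the
        # half-precision copy, matching the behavior described above.
        return PackedParam(weight.to(torch.float16), orig_weight=weight)
    raise ValueError(f"unsupported dtype: {dtype}")
```

Before this change, the int8 branch also stored `orig_weight`, so the saved quantized model carried a full fp32 copy of every weight and ended up the same size as the unquantized model.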