v1.3
Release date: 2021-04-16 22:58:24
The Intel® Low Precision Optimization Tool v1.3 release features:
- FP32 optimization & auto-mixed precision (BF16/FP32) for TensorFlow
- Dynamic quantization support for PyTorch
- ONNX Runtime v1.7 support
- Configurable benchmarking support (multi-instances, warmup, etc.)
- Multiple batch size calibration & mAP metrics for object detection models
- Experimental user-facing APIs for better usability
- Support for various HuggingFace models
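The configurable benchmarking mentioned above is driven through LPOT's YAML configuration. A hypothetical sketch follows; the field names (`warmup`, `iteration`, `cores_per_instance`, `num_of_instance`) follow the project's config style but should be verified against the examples shipped with this release:

```yaml
# conf.yaml -- hypothetical benchmark section; verify key names against
# the examples bundled with the lpot release.
benchmark:
  performance:
    warmup: 10             # iterations excluded from timing
    iteration: 100         # measured iterations
    configs:
      cores_per_instance: 4
      num_of_instance: 7   # multi-instance run: 7 instances x 4 cores each
```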
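The dynamic quantization support listed above is also exposed by PyTorch itself. As a minimal sketch of what dynamic quantization does (this uses PyTorch's own `torch.quantization.quantize_dynamic`, not the LPOT API, whose exact calls are not shown in these notes):

```python
import torch

# A small FP32 model. Dynamic quantization targets weight-heavy layers
# such as nn.Linear: weights are quantized to int8 ahead of time, while
# activations are quantized on the fly at inference.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 4),
)

# Return a copy of the model with eligible layers replaced by
# dynamically quantized equivalents.
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# The quantized model is a drop-in replacement for inference.
out = qmodel(torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 4])
```

The set `{torch.nn.Linear}` restricts conversion to the layer types listed; other modules (here, `ReLU`) are left in FP32.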
Validated Configurations:
- Python 3.6 & 3.7 & 3.8
- CentOS 7 & Ubuntu 18.04
- Intel TensorFlow 1.15.2, 2.1.0, 2.2.0, 2.3.0, 2.4.0 and 1.15.0 UP1 & UP2
- PyTorch 1.5.0+cpu, 1.6.0+cpu, IPEX
- MXNet 1.7.0
- ONNX Runtime 1.6.0, 1.7.0
Distribution:
Channel | Platform | Link | Install Command
---|---|---|---
Source | Github | https://github.com/intel/lpot.git | `$ git clone https://github.com/intel/lpot.git`
Binary | Pip | https://pypi.org/project/lpot | `$ pip install lpot`
Binary | Conda | https://anaconda.org/intel/lpot | `$ conda install lpot -c conda-forge -c intel`
Contact:
Please feel free to contact lpot.maintainers@intel.com if you have any questions.