2023.1.0.dev20230811
Release date: 2023-08-17 19:05:21
NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.
OpenVINO™ toolkit pre-release definition:
- It is introduced to get early feedback from the community.
- The scope and functionality of the pre-release version is subject to change in the future.
- Using the pre-release in production is strongly discouraged.
You can find the OpenVINO™ toolkit 2023.1.0.dev20230811 pre-release version here:
- Download archives* with OpenVINO™
- Install it via Conda:
  conda install -c "conda-forge/label/openvino_dev" openvino=2023.1.0.dev20230811
- OpenVINO™ Runtime for Python:
  pip install --pre openvino
  or pip install openvino==2023.1.0.dev20230811
- OpenVINO™ Development tools:
  pip install --pre openvino-dev
  or pip install openvino-dev==2023.1.0.dev20230811
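After installing, you can confirm that the pre-release build is the one in use by querying the runtime version and the devices it detects. A minimal sketch (not part of the release notes; plain Python against the public openvino.runtime API):

    import openvino.runtime as ov

    # The version string should contain "2023.1.0.dev20230811" for this pre-release.
    print(ov.get_version())

    # List the inference devices the runtime detects on this machine (e.g. CPU, GPU).
    core = ov.Core()
    print(core.available_devices)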
Release notes are available here: https://docs.openvino.ai/nightly/prerelease_information.html
Release documentation is available here: https://docs.openvino.ai/nightly/
What's Changed
- CPU runtime:
  - Enabled weights decompression support for Large Language Models (LLMs). The implementation supports avx2 and avx512 HW targets for Intel® Core™ processors, improving performance in latency mode (comparison: FP32 vs. FP32 + INT8 weights). For 4th Generation Intel® Xeon® Scalable Processors (formerly Sapphire Rapids), this INT8 decompression feature improves performance compared to pure BF16 inference. PRs: #18915, #19111
  - Reduced memory consumption of the model compilation stage by moving constant folding of Transpose nodes to the CPU runtime side. PR: #18877
  - FP16 inference precision is now set by default for non-convolutional networks on ARM; convolutional networks are still executed in FP32 (see the precision-hint sketch after this list). PRs: #19069, #19192, #19176
- GPU runtime: Added padding for dynamic convolutions to improve performance for models such as Stable Diffusion v2.1. PR: #19001
- Python API:
- TensorFlow FE:
  - Added support for the TensorFlow 1 Checkpoint format. All native TensorFlow formats are now enabled.
  - Added support for 8 new operations.
- PyTorch FE:
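The CPU precision defaults mentioned above can be overridden per compilation through the INFERENCE_PRECISION_HINT property. A minimal sketch, assuming an IR file named model.xml (the path is only a placeholder):

    from openvino.runtime import Core

    core = Core()
    model = core.read_model("model.xml")  # placeholder IR path

    # Force FP32 execution instead of the plugin's default precision; other
    # accepted values for the CPU plugin include "f16" and "bf16" where the
    # hardware supports them.
    compiled = core.compile_model(model, "CPU", {"INFERENCE_PRECISION_HINT": "f32"})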
New openvino_notebooks
- 245-typo-detector: English Typo Detection in sentences with OpenVINO™
- 247-code-language-id: Identify the programming language used in an arbitrary code snippet
- 121-convert-to-openvino: Learn the OpenVINO model conversion API
- 244-named-entity-recognition: Named entity recognition with OpenVINO™
- 246-depth-estimation-videpth: Monocular Visual-Inertial Depth Estimation with OpenVINO™
- 248-stable-diffusion-xl: Image generation with Stable Diffusion XL
- 249-oneformer-segmentation: Universal segmentation with OneFormer
Fixed GitHub issues
- Fixed #18978 "Webassembly build fails" with PR #19005
- Fixed #18847 "Debugging OpenVINO Python GIL Error" with PR #18848
- Fixed #18465 "OpenVINO can't be built in an environment that has an 'ambient' oneDNN installation" with PR #18805
Acknowledgements
Thanks for contributions from the OpenVINO developer community: @DmitriyValetov, @kai-waang
Full Changelog: 2023.1.0.dev20230728...2023.1.0.dev20230811