v1.17.0
Release date: 2024-02-03 08:25:21
Announcements
Support for Windows ARM32 will be dropped entirely in the next release.
General
- Added support for new ONNX 1.15 opsets: IsInf-20, IsNaN-20, DFT-20, ReduceMax-20, ReduceMin-20, AffineGrid-20, GridSample-20, ConstantOfShape-20, RegexFullMatch, StringConcat, StringSplit, and ai.onnx.ml.LabelEncoder-4.
- Updated C/C++ libraries: abseil, date, nsync, googletest, wil, mp11, cpuinfo, safeint, and onnx.
Build System and Packages
- Dropped CentOS 7 support. All Linux binaries now require glibc >= 2.28; users who need an older glibc can still build from source.
- Added CUDA 12 packages for Python and NuGet.
- Added Python 3.12 packages for ONNX Runtime Inference. ONNX Runtime Training Python 3.12 packages cannot be provided at this time since training packages depend on PyTorch, which does not support Python 3.12 yet.
- Linux binaries (except those in AMD GPU packages) are built in a more secure way that is compliant with BinSkim's default policy (e.g., the binaries no longer have an executable stack).
- Added support for Windows ARM64X for users who build ONNX Runtime from source. No prebuilt package provided yet.
- Removed Windows ARM32 binaries from official packages. Users who still need these binaries can build them from source.
- Added AMD GPU package with ROCm and MiGraphX (Python + Linux only).
- Split the ONNX Runtime GPU NuGet package into two packages.
- When building the source code for Linux ARM64 or Android, the C/C++ compiler must support BFloat16. Support for Android NDK 24.x has been removed. Please use NDK 25.x or 26.x instead.
- Link time code generation (LTCG or LTO) is now disabled by default when building from source. To re-enable it, users can add "--enable_lto" to the build command. All prebuilt binaries are still built with LTO.
Core
- Optimized graph inlining.
- Added support for supplying a custom logger at the session level.
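A minimal sketch of the related session-level logging controls in Python (the model path is a placeholder):

```python
import onnxruntime as ort

# Session-level logging settings override the default environment logger
# for this session only; "model.onnx" is a placeholder path.
so = ort.SessionOptions()
so.log_severity_level = 0  # 0 = VERBOSE, 1 = INFO, 2 = WARNING, 3 = ERROR, 4 = FATAL
so.logid = "my-session"    # tag log lines so concurrent sessions can be told apart

sess = ort.InferenceSession("model.onnx", sess_options=so)
```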
Performance
- Added 4-bit quantization support on NVIDIA GPU and ARM64.
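As an illustration, MatMul weights can be compressed to 4-bit blocks with the MatMul4BitsQuantizer tooling; a minimal sketch (paths are placeholders, and exact arguments may vary by version):

```python
import onnx
from onnxruntime.quantization.matmul_4bits_quantizer import MatMul4BitsQuantizer

# Quantize MatMul weights into 4-bit blocks of 32 elements; the resulting
# MatMulNBits nodes run on the new int4 kernels.
model = onnx.load("model.onnx")
quantizer = MatMul4BitsQuantizer(model, block_size=32, is_symmetric=True)
quantizer.process()
quantizer.model.save_model_to_file("model_int4.onnx", use_external_data_format=True)
```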
EPs
TensorRT EP
- Added support for directly loading precompiled TensorRT engines and for a customizable engine cache prefix (see the example after this list).
- Added Python support for TensorRT plugins via ORT custom ops.
- Fixed concurrent Session::Run bugs.
- Updated calls to deprecated TensorRT APIs (e.g., enqueue_v2 → enqueue_v3).
- Fixed various memory leak bugs.
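A minimal sketch of engine caching with the new prefix option from Python (the paths and prefix value are placeholders):

```python
import onnxruntime as ort

# Enable engine caching so precompiled TensorRT engines are loaded directly
# on later runs; trt_engine_cache_prefix customizes the cache file names.
trt_options = {
    "trt_engine_cache_enable": True,
    "trt_engine_cache_path": "./trt_engines",
    "trt_engine_cache_prefix": "resnet50_fp16",
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("TensorrtExecutionProvider", trt_options)],
)
```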
QNN EP
- Added support for QNN SDK 2.18.
- Added context binary caching and model initialization optimizations.
- Added mixed precision (8/16 bit) quantization support.
- Added device-level session options (soc_model, htp_arch, device_id), an extreme_power_saver option for htp_performance_mode, and a vtcm_mb setting (see the example after this list).
- Fixed multi-threaded inference bug.
- Fixed various other bugs and added performance improvements.
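A minimal sketch of passing the new device-level and HTP options as QNN EP provider options (all values are illustrative placeholders):

```python
import onnxruntime as ort

# Device-level options (soc_model, htp_arch, device_id) plus HTP power and
# VTCM settings are passed as provider options; values are placeholders.
qnn_options = {
    "backend_path": "QnnHtp.dll",                   # HTP backend on Windows
    "htp_performance_mode": "extreme_power_saver",  # new power mode in this release
    "soc_model": "0",
    "htp_arch": "73",
    "device_id": "0",
    "vtcm_mb": "8",
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("QNNExecutionProvider", qnn_options)],
)
```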
OpenVINO EP
- Added support for OpenVINO 2023.2.
- Added AppendExecutionProvider_OpenVINO_V2 API for supporting new OpenVINO EP options.
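AppendExecutionProvider_OpenVINO_V2 belongs to the native API; a sketch of the Python equivalent, assuming the usual provider-options mechanism (option values are placeholders):

```python
import onnxruntime as ort

# OpenVINO EP options as a string key/value map; the native
# AppendExecutionProvider_OpenVINO_V2 API accepts the same style of map.
ov_options = {
    "device_type": "CPU",
    "num_of_threads": "4",
    "cache_dir": "./ov_cache",
}
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("OpenVINOExecutionProvider", ov_options)],
)
```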
DirectML EP
- Updated to DirectML 1.13.1.
- Updated operators LpPool-18 and AveragePool-19 with dilations.
- Improved Python I/O binding support.
- Added RotaryEmbedding.
- Added support for fusing subgraphs into DirectML execution plans.
- Added new Python API to choose a specific GPU on multi-GPU devices with the DirectML EP.
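A minimal sketch of the GPU selection from Python, assuming the adapter is chosen with a device_id provider option as on other EPs ("model.onnx" is a placeholder):

```python
import onnxruntime as ort

# Pick the second GPU (adapter index 1) for the DirectML EP; device_id is
# assumed to follow the same provider-option convention as other EPs.
sess = ort.InferenceSession(
    "model.onnx",
    providers=[("DmlExecutionProvider", {"device_id": 1})],
)
```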
Mobile
- Added initial support for 4-bit quantization on ARM64.
- Extended CoreML/NNAPI operator coverage.
- Added support for YOLOv8 pose detection pre/post processing.
- Added macOS support to the CocoaPods package.
Web
- Added support for external data format.
- Added support for I/O bindings.
- Added support for training.
- Added WebGPU optimizations.
- Transitioned WebGPU out of experimental.
- Added FP16 support for WebGPU.
Training
Large Model Training
- Enabled support for QLoRA (including BFloat16 support).
- Added symbolic shape support for Triton codegen (see PR).
- Improved the recompute optimizer with an easy ON/OFF switch to allow layer-wise recompute (see PR).
- Enabled memory-efficient gradient management. For Mistral, we see a ~10 GB drop in memory consumption when this feature is ON (see PR).
- Enabled embedding sparsity optimizations.
- Added support for Aten efficient attention and Triton Flash Attention (see PR).
- Packages now available for CUDA 11.8 and 12.1.
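These features are consumed through ORTModule, which wraps an existing torch.nn.Module; a minimal sketch (the toy model stands in for a real network):

```python
import torch
from onnxruntime.training.ortmodule import ORTModule

# Wrap a PyTorch module so forward/backward execute through ONNX Runtime;
# the training optimizations listed above apply to the wrapped model.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.GELU(), torch.nn.Linear(256, 10)
)
model = ORTModule(model)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(4, 128)
loss = model(x).sum()  # forward pass runs via ONNX Runtime
loss.backward()        # gradient graph is built and run by ORT
optimizer.step()
```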
On Device Training
- On-device training now supports training in the browser. This release focuses on federated learning and developer exploration scenarios; more features are coming in future releases.
Extensions
- Modified the gen_processing_model tokenizer model to output int64, unifying the output datatype of all tokenizers (see the sketch after this list).
- Implemented support for post-processing of YOLO v8 within the Python extensions package.
- Introduced 'fairseq' flag to enhance compatibility with certain Hugging Face tokenizers.
- Incorporated 'added_token' attribute into the BPE tokenizer to improve CodeGen tokenizer functionality.
- Enhanced the SentencePiece tokenizer by integrating token indices into the output.
- Added support for the custom operator implemented with CUDA kernels, including two example operators.
- Added more tests on the Hugging Face tokenizer and fixed identified bugs.
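As a sketch of the tokenizer flow, a Hugging Face tokenizer can be converted to an ONNX pre-processing model, whose outputs are now int64, via the gen_processing_models helper (the model name is a placeholder, and exact keyword arguments may vary by extensions version):

```python
from transformers import AutoTokenizer
from onnxruntime_extensions import gen_processing_models

# Build an ONNX tokenizer model from a Hugging Face tokenizer; since this
# release its outputs are int64 across tokenizer types.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
pre_model, _ = gen_processing_models(tokenizer, pre_kwargs={})

with open("gpt2_tokenizer.onnx", "wb") as f:
    f.write(pre_model.SerializeToString())
```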
Known Issues
- The onnxruntime-training package is not yet available on PyPI but can be installed from ADO as follows:
```
python -m pip install cerberus flatbuffers h5py numpy>=1.16.6 onnx packaging protobuf sympy setuptools>=41.4.0
pip install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT/pypi/simple/ onnxruntime-training
pip install torch-ort
python -m torch_ort.configure
```
Installation instructions can also be accessed here.
- For models with the int4 kernel only:
  - A crash may occur when int4 is used on hybrid-core Intel CPUs if the E-cores are disabled in the BIOS. A fix is in progress and will be patched.
  - A performance regression in the int4 kernel on x64 makes the op following MatMulNBits much slower. A fix is in progress and will be patched.
- A bug in the BeamSearch implementation of T5, GPT, and Whisper may break these models under heavy inference load when using BeamSearch on CUDA. See #19345. A fix is in progress and will be patched.
- Full support for the ONNX 1.15 opsets is still in progress. The new ONNX 1.15 opset support included in this release is listed above in the 'General' section.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: snnn, fs-eire, tianleiwu, mszhanyi, edgchen1, skottmckay, jchen351, adrianlizarraga, qjia7, Honry, HectorSVC, chilo-ms, axinging, jeffbloo, pengwa, yuslepukhin, guschmue, satyajandhyala, xadupre, RandyShuai, PeixuanZuo, RandySheriffH, er3x3, wschin, yf711, PatriceVignola, askhade, smk2007, natke, kunal-vaishnavi, YUNQIUGUO, liqunfu, cloudhan, wangyems, yufenglee, ajindal1, baijumeswani, justinchuby, Craigacp, wejoncy, jywu-msft, hariharans29, nums11, jslhcl, jeffdaily, chenfucn, zhijxu-MS, mindest, BowenBao, sumitsays, prasanthpul, fdwr, pranavsharma, chentaMS, zhangxiang1993, souptc, zhanghuanrong, faxu, georgen117, sfatimar, thiagocrepaldi, adityagoel4512
Assets
1. Microsoft.AI.MachineLearning.1.17.0.symbols.zip (224.71 MB)
2. Microsoft.AI.MachineLearning.1.17.0.zip (29.55 MB)
3. Microsoft.ML.OnnxRuntime.DirectML.1.17.0.zip (5.14 MB)
4. onnxruntime-linux-aarch64-1.17.0.tgz (4.69 MB)
5. onnxruntime-linux-x64-1.17.0.tgz (5.52 MB)
6. onnxruntime-linux-x64-gpu-1.17.0.tgz (162.87 MB)
7. onnxruntime-osx-arm64-1.17.0.tgz (6.91 MB)
8. onnxruntime-osx-universal2-1.17.0.tgz (14.46 MB)
9. onnxruntime-osx-x86_64-1.17.0.tgz (7.68 MB)
10. onnxruntime-training-linux-aarch64-1.17.0.tgz (5.04 MB)
11. onnxruntime-training-linux-x64-1.17.0.tgz (5.94 MB)
12. onnxruntime-training-win-arm-1.17.0.zip (59.92 MB)
13. onnxruntime-training-win-arm64-1.17.0.zip (62.54 MB)
14. onnxruntime-training-win-x64-1.17.0.zip (63.49 MB)
15. onnxruntime-training-win-x86-1.17.0.zip (62.34 MB)
16. onnxruntime-win-arm64-1.17.0.zip (57.41 MB)
17. onnxruntime-win-x64-1.17.0.zip (59.18 MB)
18. onnxruntime-win-x64-gpu-1.17.0.zip (191.59 MB)
19. onnxruntime-win-x86-1.17.0.zip (57.39 MB)