v1.8.0
Release date: 2021-06-03 14:14:48
Announcements
- This release
- Building onnxruntime from source now requires a C++ compiler with full C++14 support.
- Builds with OpenMP are no longer published. They can still be built from source if needed. The default threadpool option should provide optimal performance for the majority of models.
- New dependency for Python package: flatbuffers
- Next release (v1.9)
- Builds will require a C++17 compiler
- GPU build will be updated to CUDA 11.1
General
- ONNX opset 14 support - new and updated operators from the ONNX 1.9 release
- Dynamically loadable CUDA execution provider
- Allows a single build to work for both CPU and GPU (excludes Python packages)
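Note that the Python wheels still ship separately for CPU and GPU, but the runtime execution-provider selection this enables can be sketched from the Python API; `model.onnx` below is a placeholder path:

```python
import onnxruntime as ort

# List the execution providers this build can actually load.
print(ort.get_available_providers())

# Prefer CUDA when it is available; otherwise fall back to CPU.
sess = ort.InferenceSession(
    "model.onnx",  # placeholder path
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
```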
- Profiler tool now includes information on threadpool usage
- multi-threading preparation time
- multi-threading run time
- multi-threading wait time
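A minimal sketch of enabling the profiler from the Python API and retrieving the trace (`model.onnx` is a placeholder); the resulting JSON can be inspected in chrome://tracing:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.enable_profiling = True  # the emitted trace now includes threadpool events

sess = ort.InferenceSession("model.onnx", sess_options=so)
# ... run inference as usual via sess.run(...) ...

trace_path = sess.end_profiling()  # writes the profiling JSON and returns its path
print(trace_path)
```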
- [Experimental] onnxruntime-extensions package
- Crowd-sourced library of common/shareable custom operator implementations that can be loaded and run with ONNX Runtime; community contributions are welcome (microsoft/onnxruntime-extensions)
- Currently includes mostly ops and tokenizers for string operations (full list here)
- Tutorials to export and load custom ops from onnxruntime-extensions: TensorFlow, PyTorch
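A minimal sketch of loading the extensions library into a session from Python, assuming the onnxruntime-extensions package is installed; the model path is a placeholder:

```python
import onnxruntime as ort
from onnxruntime_extensions import get_library_path

so = ort.SessionOptions()
# Register the shared library containing the custom op implementations.
so.register_custom_ops_library(get_library_path())

# Placeholder path for a model that uses ops from onnxruntime-extensions.
sess = ort.InferenceSession("model_with_custom_ops.onnx", sess_options=so)
```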
Training
- torch-ort package released as the ONNX Runtime training backend for PyTorch (see the sketch after this list)
- onnxruntime-training-gpu and onnxruntime-training-rocm packages now available for distributed training on NVIDIA and AMD GPUs
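A minimal sketch of the torch-ort usage pattern: wrap an existing torch.nn.Module in ORTModule and keep the training loop unchanged (the model here is a stand-in):

```python
import torch
from torch_ort import ORTModule

model = torch.nn.Linear(64, 10)  # stand-in for a real model
model = ORTModule(model)         # forward/backward now execute through ONNX Runtime

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)

loss = model(x).sum()
loss.backward()
optimizer.step()
```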
Mobile
- Official package now available
- Pre-built Android and iOS packages with support for selected operators and data types
- Objective-C API for iOS in preview
- Expanded operators supported by NNAPI (Android) and CoreML (iOS) execution providers
- All operators in the ai.onnx domain now support type reduction
- Create an ORT format model with the --enable_type_reduction flag, and perform a minimal build with the --enable_reduced_operator_type_support flag
ORT Web
- New ONNX Runtime JavaScript API
- ONNX Runtime Web package
- Supports WebAssembly and WebGL for CPU and GPU
- Supports a Web Worker based multi-threaded WebAssembly backend
- Supports the ORT model format
- Improved WebGL performance
Performance
- Memory footprint reduction through shared pre-packed weights for shared initializers
- Pre-packing refers to weights that are pre-processed at model load time
- Allows pre-packed weights of shared initializers to also be shared between sessions, preserving memory savings from using shared initializers
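The pre-packed weights container itself is exposed through the new C API functions listed under APIs below (CreatePrepackedWeightsContainer, CreateSessionWithPrepackedWeightsContainer, and friends). The shared-initializer mechanism it builds on can be sketched from Python; the initializer name, shape, and model paths here are placeholders:

```python
import numpy as np
import onnxruntime as ort

# Create one OrtValue and hand it to every session as a shared initializer.
shared_weight = ort.OrtValue.ortvalue_from_numpy(
    np.ones((64, 64), dtype=np.float32))

so = ort.SessionOptions()
so.add_initializer("fc1.weight", shared_weight)  # placeholder initializer name

# Both sessions reference the same buffer instead of each keeping a copy.
sess_a = ort.InferenceSession("model_a.onnx", sess_options=so)
sess_b = ort.InferenceSession("model_b.onnx", sess_options=so)
```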
- Memory footprint reduction through arena shrinkage
- By default, the memory arena never shrinks; it holds onto any memory it has allocated indefinitely. This feature exposes a RunOption that scans the arena after a Run completes and potentially returns unused memory to the system. It is particularly useful when running a dynamic-shape model that occasionally processes an outlier inference request requiring a large amount of memory: if the shrinkage option is invoked as part of such Runs, the memory required for that Run is not held by the arena forever.
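A minimal sketch of enabling shrinkage for a single Run from Python, using the run-config entry this feature introduces; the model path and input feed are placeholders:

```python
import onnxruntime as ort

sess = ort.InferenceSession("model.onnx")  # placeholder path

ro = ort.RunOptions()
# Scan the CPU arena after this Run and return unused memory to the system.
ro.add_run_config_entry("memory.enable_memory_arena_shrinkage", "cpu:0")

# outputs = sess.run(None, {"input": some_large_batch}, run_options=ro)
```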
- Quantization
- Native support for the Quantize-Dequantize (QDQ) format on CPU
- Support for Concat, Transpose, GlobalAveragePool, AveragePool, Resize, Squeeze
- Improved performance on high-end ARM devices by leveraging dot-product instructions
- Improved performance for batched quant GEMM with optimized multi-threading logic
- Per-column quantization for MatMul
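A minimal sketch of per-channel (per-column) weight quantization with the Python quantization tool, assuming the installed tool version exposes these options; paths are placeholders:

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# per_channel=True requests per-column scales/zero-points for MatMul weights.
quantize_dynamic(
    "model.onnx",        # placeholder: float32 input model
    "model.quant.onnx",  # placeholder: quantized output model
    per_channel=True,
    weight_type=QuantType.QInt8,
)
```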
- Transformers
- GPT-2 and beam search integration (example)
APIs
- WinML
- New native WinML API SetIntraOpThreadSpinning for toggling IntraOp thread spinning behavior. When enabled and there is no current workload, IntraOp threads continue spinning for some additional time while waiting for further work. This can improve performance for the current workload but may impact the performance of other, unrelated workloads. This toggle is enabled by default.
- ORT Inferencing
- The following APIs have been added to this release. Please check the API documentation for information.
- KernelInfoGetAttributeArray_float
- KernelInfoGetAttributeArray_int64
- CreateArenaCfgV2
- AddRunConfigEntry
- CreatePrepackedWeightsContainer
- PrepackedWeightsContainer
- CreateSessionWithPrepackedWeightsContainer
- CreateSessionFromArrayWithPrepackedWeightsContainer
Execution Providers
- TensorRT
- Added support for TensorRT EP configuration using session options instead of environment variables.
- Added support for DLA on Jetson Xavier (AGX, NX)
- General bug fixes and quality improvements.
- OpenVINO
- Added support for OpenVINO 2021.3
- Removed support for OpenVINO 2020.4
- Added support for loading/saving of blobs on MyriadX devices to avoid expensive model blob compilation at runtime
- DirectML
- Supports ARM/ARM64 architectures in the WinML and ONNX Runtime NuGet packages
- Support for 8-dimensional tensors in: BatchNormalization, Cast, Join, LpNormalization, MeanVarianceNormalization, Padding, Tile, TopK
- Substantial performance improvements for several operators
- Resize nearest_mode "floor" and "round_prefer_ceil"
- Activation fusion for: Conv, ConvTranspose, BatchNormalization, MeanVarianceNormalization, Gemm, MatMul
- Decomposes the unsupported QLinearSigmoid operation
- Removes strided 64-bit emulation in Cast
- Allows empty shapes on constant CPU inputs
Known issues
- This release has an issue that may result in segmentation faults when deployed on Intel 12th Gen processors with a hybrid architecture of Performance and Efficiency cores (P-cores and E-cores). This has been fixed in ORT 1.9.
- The CUDA build of this release has a regression: memory utilization increases significantly compared to previous releases. A fix will be released shortly as part of the 1.8.1 patch. An incomplete list of issues where this was reported: #8287, #8171, #8147.
- The GPU part of the source code is not compatible with:
- Visual Studio 2019 16.10.0 (released May 25, 2021); 16.9.x is fine
- clang 12
- The CPU part of the source code is not compatible with:
- VS 2017 (https://github.com/microsoft/onnxruntime/issues/7936); until this is fixed, please use VS 2019 instead
- GCC 11 (see #7918)
- The C# OpenVINO EP is broken (#7951)
- Python and Windows only: if your cuDNN DLLs are not in CUDA's installation directory, you need to set the CUDNN_HOME environment variable manually; just putting them in %PATH% is not enough (#7965)
- onnxruntime-win-gpu-x64-1.8.0.zip on this page is missing important DLLs; please do not use it
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, gwang-msft, baijumeswani, fs-eire, edgchen1, zhanghuanrong, yufenglee, thiagocrepaldi, hariharans29, skottmckay, weixingzhang, tianleiwu, SherlockNoMad, ashbhandare, tracysh, satyajandhyala, liqunfu, iK1D, RandySheriffH, suffiank, hanbitmyths, wangyems, askhade, stevenlix, chilo-ms, smk2007, kit1980, codemzs, raviskolli, pranav-prakash, chenfucn, xadupre, gramalingam, harshithapv, oliviajain, xzhu1900, ytaous, MaajidKhan, RyanUnderhill, mrry, orilevari, jingyanwangms, sfatimar, KeDengMS, jywu-msft, souptc, adtsai, tlh20, yuslepukhin, duli2012, pranavsharma, faxu, georgen117, jeffbloo, Tixxx, wschin, YUNQIUGUO, tiagoshibata, martinb35, alberto-magni, ryanlai2, Craigacp, suryasidd, fdwr, jcwchen, neginraoof, natke, BowenBao
1. Microsoft.AI.MachineLearning.1.8.0.symbols.zip 169.63MB
2. Microsoft.AI.MachineLearning.1.8.0.zip 38.53MB
3. Microsoft.ML.OnnxRuntime.DirectML.1.8.0.zip 136.23MB
4. onnxruntime-linux-x64-1.8.0.tgz 4.52MB
5. onnxruntime-linux-x64-gpu-1.8.0.tgz 29.41MB
6. onnxruntime-osx-x64-1.8.0.tgz 4.42MB
7. onnxruntime-win-arm-1.8.0.zip 27.99MB
8. onnxruntime-win-arm64-1.8.0.zip 29.61MB
9. onnxruntime-win-gpu-x64-1.8.0.zip 28.56MB
10. onnxruntime-win-x64-1.8.0.zip 29.19MB
11. onnxruntime-win-x86-1.8.0.zip 29.51MB