v1.6.0
Release date: 2020-12-11 13:06:51
Announcements
- OpenMP will be disabled in future official builds (the build option will still be available). A NoOpenMP version of ONNX Runtime is available with this release on NuGet and PyPI for C/C++/C#/Python users.
- In the next release, the MKL-ML, openblas, and jemalloc build options will be removed, and the Microsoft.ML.OnnxRuntime.MKLML NuGet package will no longer be published. Users of MKL-ML are recommended to use the Intel EPs. If you are using these options and identify issues switching to an alternative build, please file an issue with details.
Key Feature Updates
General
- ONNX 1.8 support / opset 13
- New contrib ops: BiasSoftmax, MatMulIntegerToFloat, QLinearSigmoid, Trilu
- ORT Mobile now compatible with NNAPI for accelerating model execution on Android devices
- Build support for Mac with Apple Silicon (CPU only)
- New dependency: flatbuffers
- Support for loading sparse tensor initializers in pruned models
- Support for setting the execution priority of a node
- Support for selection of cuDNN conv algorithms
- BERT Model profiling tool
Performance
- New session option to disable denormal floating-point numbers on CPUs with SSE3 support
- Eliminates unexpected performance degradation due to denormals without needing to retrain the model
- Option to share initializers between sessions to improve memory utilization
- Useful when several models that share the same set of initializers (differing only in the last few layers) are loaded in the same process
- Eliminates wasteful memory usage when every model (session) creates a separate instance of the same initializer
- Exposed by the AddInitializer API
- Transformer model optimizations
- Longformer: LongformerAttention CUDA operator added
- Support for BERT models exported from TensorFlow with 1 or 2 inputs
- Python optimizer supports additional models: openai-GPT, ALBERT and FlauBERT
- Quantization
- Support of per-channel QuantizeLinear and DeQuantizeLinear
- Support of LSTM quantization
- Quantization performance improvement on ARM
- CNN quantization perf optimizations, including u8s8 support and NHWC transformer in QLinearConv
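To illustrate why per-channel QuantizeLinear helps: each output channel gets its own scale (and zero point), so a channel with small values is not crushed by a channel with large ones. A pure-numpy illustration of the semantics, not the ORT quantization tooling API:

```python
# Sketch of per-channel symmetric int8 quantization (QuantizeLinear with
# axis=0), followed by DequantizeLinear. Pure numpy, for illustration only.
import numpy as np

w = np.array([[0.1, -0.2, 0.3],
              [10.0, -20.0, 30.0]], dtype=np.float32)  # 2 channels on axis 0

# One scale per channel instead of one scale for the whole tensor.
scales = np.abs(w).max(axis=1, keepdims=True) / 127.0
q = np.clip(np.round(w / scales), -127, 127).astype(np.int8)   # QuantizeLinear
deq = q.astype(np.float32) * scales                            # DequantizeLinear

# The small-valued channel stays accurate despite the large-valued one.
print(np.abs(deq - w).max(axis=1))
```

With a single per-tensor scale, the first channel's values would all round to within one or two quantization steps of zero.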
- ThreadPool
- Use _mm_pause() in the spin loop to improve performance and power consumption
APIs and Packages
- Python - I/O Binding enhancements
- Usage Documentation (OrtValue and IOBinding sections)
- Python binding for the OrtValue data structure
- An interface is exposed to allocate memory on a CUDA-supported device and define its contents. There is no longer a need to use allocators provided by other libraries to allocate and manage CUDA memory for use with ORT.
- Allows consuming ORT-allocated device memory as an OrtValue (see Scenario 4 in the IOBinding section of the documentation for an example)
- OrtValue instances can be used to bind inputs/outputs, in addition to the existing interfaces that bind a piece of memory directly or via numpy arrays. This is particularly useful when binding ORT-allocated device memory.
- C# - float16 and bfloat16 support
- Windows ML
- NuGet package now supports UWP applications targeting Windows Store deployment for both CPU and GPU
- Minor API Improvements:
- Able to bind IIterable<Buffers> as inputs and outputs
- Able to create Tensor* via multiple buffers
- WindowsAI Redist now includes a statically linked C-Runtime package for additional deployment options
Execution Providers
- DNNL EP Updates
- DNNL updated from 1.1.1 to 1.7
- NNAPI EP Updates
- Support for CNN models
- Additional operator support - Resize/Flatten/Clip
- TensorRT EP Updates
- Int8 quantization support (experimental)
- Engine cache refactoring and improvements
- General fixes and performance improvements
- OpenVINO EP Updates
- OpenVINO 2021.1 support
- OpenVINO EP builds as shared library
- Multi-threaded inferencing support
- fp16 input type support
- Multi-device plugin support
- Hetero plugin support
- Enable build on ARM64
- DirectML EP Updates (1.3.0 -> 1.4.0)
- Utilizing the first public standalone release of the DirectML API through the DirectML NuGet package release
- General fixes and improvements
- The nGraph EP has been removed; use the OpenVINO EP instead
Additional notes
- VCRuntime2019 with OpenMP: pinning a process to NUMA node 1 forces the execution to be single threaded. Fix is in progress in VC++.
- Workaround: place the VS2017 vcomp DLL side-by-side so that ORT uses the VS2017 version
- Pip version >=20.3 is required for use on macOS Big Sur (11.x)
- The destructor of OrtEnv is now non-trivial and may perform DLL unloading. Do not call ReleaseEnv from DllMain or place OrtEnv in global variables, since it is not safe to call FreeLibrary from DllMain.
- Some unit tests fail on Pascal GPUs. See: https://github.com/microsoft/onnxruntime/issues/5914
- If using the default CPU package (built with OpenMP), consider tuning the OpenMP settings to improve performance. By default, the number of threads used for OpenMP parallel regions is set to the number of logical CPUs. This may not be optimal for machines with hyper-threading; when CPUs are oversubscribed, 99th-percentile latency can be 10x greater. Setting the OMP_NUM_THREADS environment variable to the number of physical cores is a good starting point. As noted in Announcements, future official builds of ORT will be published without OpenMP.
Contributions
Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members: gwang-msft, snnn, skottmckay, edgchen1, hariharans29, wangyems, yufenglee, yuslepukhin, tianleiwu, SherlockNoMad, tracysh, ryanlai2, askhade, xadupre, liqunfu, RandySheriffH, jywu-msft, KeDengMS, pranavsharma, mrry, ashbhandare, iK1D, RyanUnderhill, MaajidKhan, wenbingl, kit1980, weixingzhang, tlh20, suffiank, Craigacp, smkarlap, stevenlix, zhanghuanrong, sfatimar, ytaous, tiagoshibata, fdwr, oliviajain, alberto-magni, jcwchen, mosdav, xzhu1900, wschin, codemzs, duli2012, smk2007, natke, zhijxu-MS, manashgoswami, zhangxiang1993, faxu, HectorSVC, take-cheeze, jingyanwangms, chilo-ms, YUNQIUGUO, jgbradley1, jessebenson, martinb35, Andrews548, souptc, pengwa, liuziyue, orilevari, BowenBao, thiagocrepaldi, jeffbloo
Downloads
1. Microsoft.AI.MachineLearning.1.6.0.symbols.zip 153.1MB
2. Microsoft.AI.MachineLearning.1.6.0.zip 34.53MB
3. Microsoft.ML.OnnxRuntime.DirectML.1.6.0.zip 63.18MB
4. onnxruntime-linux-x64-1.6.0.tgz 4.11MB
5. onnxruntime-linux-x64-gpu-1.6.0.tgz 29.09MB
6. onnxruntime-osx-x64-1.6.0.tgz 4.56MB
7. onnxruntime-win-x64-1.6.0.zip 26.75MB
8. onnxruntime-win-x64-gpu-1.6.0.zip 80.85MB
9. onnxruntime-win-x86-1.6.0.zip 26.99MB