v0.16.0
Release date: 2024-10-18 19:40:36
Latest release of xorbitsai/inference: v1.1.0 (2024-12-13 18:29:37)
What's new in 0.16.0 (2024-10-18)
These are the changes in inference v0.16.0.
New features
- FEAT: Add support for AWQ/GPTQ vLLM inference to vision models such as Qwen2-VL by @cyhasuka in https://github.com/xorbitsai/inference/pull/2445
- FEAT: Dynamic batching for the state-of-the-art FLUX.1 `text_to_image` interface by @ChengjieLi28 in https://github.com/xorbitsai/inference/pull/2380
- FEAT: Added MLX for qwen2.5-instruct by @qinxuye in https://github.com/xorbitsai/inference/pull/2444
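As a rough illustration of how the new model options above might be launched, here is a minimal sketch built around the Xinference Python client's `launch_model` call. The model names, formats, and quantization labels below are assumptions for illustration, not verified identifiers; check the Xinference documentation for the exact values your version supports.

```python
# Hedged sketch: parameter sets one might pass to Client.launch_model()
# for the features in this release. All specific values are assumptions.

# AWQ-quantized Qwen2-VL served through the vLLM engine (PR #2445):
qwen2_vl_awq = {
    "model_name": "qwen2-vl-instruct",  # assumed built-in model name
    "model_engine": "vLLM",
    "model_format": "awq",
    "quantization": "Int4",             # assumed quantization label
}

# qwen2.5-instruct served through the new MLX engine (PR #2444):
qwen25_mlx = {
    "model_name": "qwen2.5-instruct",
    "model_engine": "MLX",
    "model_format": "mlx",
}

for params in (qwen2_vl_awq, qwen25_mlx):
    # With a running Xinference server you would do something like:
    #   from xinference.client import Client
    #   client = Client("http://localhost:9997")
    #   client.launch_model(**params)
    print(params["model_name"], "->", params["model_engine"])
```

The point is that both quantized vLLM vision models and MLX models are selected through the same launch parameters; only the engine and format fields change.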
Enhancements
- ENH: Speed up cli interaction by @frostyplanet in https://github.com/xorbitsai/inference/pull/2443
- REF: Enable continuous batching for LLM with transformers engine by default by @ChengjieLi28 in https://github.com/xorbitsai/inference/pull/2437
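Continuous batching, now the default for the transformers engine, schedules requests at the token level: finished requests leave the batch immediately and waiting ones take their slots, so short requests never wait for the longest request in a static batch. The following is a toy scheduler illustrating the idea only, not Xinference's actual implementation.

```python
from collections import deque

class Request:
    """A toy generation request with a fixed number of tokens to produce."""
    def __init__(self, rid, tokens_needed):
        self.rid = rid
        self.remaining = tokens_needed
        self.output = []

def continuous_batching(requests, max_batch=2):
    """Each step decodes one token for every active request. Completed
    requests retire immediately and waiting ones are admitted as soon as
    a slot frees up -- the key difference from static batching, which
    drains the whole batch before admitting new work."""
    waiting = deque(requests)
    active, finished, step = [], [], 0
    while waiting or active:
        # Admit new requests into any free slots.
        while waiting and len(active) < max_batch:
            active.append(waiting.popleft())
        step += 1
        for req in active:
            req.output.append(f"tok{step}")
            req.remaining -= 1
        # Retire completed requests right away.
        still_active = []
        for req in active:
            (finished if req.remaining == 0 else still_active).append(req)
        active = still_active
    return finished, step

reqs = [Request("a", 1), Request("b", 3), Request("c", 1)]
done, steps = continuous_batching(reqs)
print(steps)  # -> 3 (static batching of [a, b] then [c] would take 4)
```

With a batch size of 2, request `c` slips into the slot `a` frees after one step, so all three finish in 3 decode steps instead of the 4 a static scheme would need.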
Documentation
- DOC: update readme & docs by @qinxuye in https://github.com/xorbitsai/inference/pull/2435
New Contributors
- @cyhasuka made their first contribution in https://github.com/xorbitsai/inference/pull/2445
Full Changelog: https://github.com/xorbitsai/inference/compare/v0.15.4...v0.16.0