v0.0.10
Release date: 2023-09-26 20:52:32
Latest release of InternLM/lmdeploy: v0.6.0a0 (2024-08-26 17:12:19)
What's Changed
💥 Improvements
- [feature] Graceful termination of background threads in LlamaV2 by @akhoroshev in https://github.com/InternLM/lmdeploy/pull/458
- expose stop words and filter eoa by @AllentDan in https://github.com/InternLM/lmdeploy/pull/352
🐞 Bug fixes
- Fix side effect brought by supporting codellama: sequence_start is always true when calling model.get_prompt by @lvhan028 in https://github.com/InternLM/lmdeploy/pull/466
- Miss meta instruction of internlm-chat model by @lvhan028 in https://github.com/InternLM/lmdeploy/pull/470
- [bug] Fix race condition by @akhoroshev in https://github.com/InternLM/lmdeploy/pull/460
- Fix compatibility issues with Pydantic 2 by @aisensiy in https://github.com/InternLM/lmdeploy/pull/465
- Fix benchmark serving failing to use the Qwen tokenizer by @AllentDan in https://github.com/InternLM/lmdeploy/pull/443
- Fix memory leak by @lvhan028 in https://github.com/InternLM/lmdeploy/pull/488
📚 Documentations
- Fix typo in README.md by @eltociear in https://github.com/InternLM/lmdeploy/pull/462
🌐 Other
- bump version to v0.0.10 by @lvhan028 in https://github.com/InternLM/lmdeploy/pull/474
New Contributors
- @eltociear made their first contribution in https://github.com/InternLM/lmdeploy/pull/462
- @akhoroshev made their first contribution in https://github.com/InternLM/lmdeploy/pull/458
- @aisensiy made their first contribution in https://github.com/InternLM/lmdeploy/pull/465
Full Changelog: https://github.com/InternLM/lmdeploy/compare/v0.0.9...v0.0.10