v0.1.20
Release date: 2024-06-13 15:46:54
Latest InternLM/xtuner release: v0.1.23 (2024-07-22 20:19:23)
What's Changed
- [Enhancement] Optimizing Memory Usage during ZeRO Checkpoint Convert by @pppppM in https://github.com/InternLM/xtuner/pull/582
- [Fix] ZeRO2 Checkpoint Convert Bug by @pppppM in https://github.com/InternLM/xtuner/pull/684
- [Feature] support auto saving tokenizer by @HIT-cwh in https://github.com/InternLM/xtuner/pull/696
- [Bug] fix internlm2 flash attn by @HIT-cwh in https://github.com/InternLM/xtuner/pull/693
- [Bug] The LoRA model will have `meta-tensor` during the `pth_to_hf` phase. by @pppppM in https://github.com/InternLM/xtuner/pull/697
- [Bug] fix cfg check by @HIT-cwh in https://github.com/InternLM/xtuner/pull/729
- [Bugs] Fix bugs caused by sequence parallel when deepspeed is not used. by @HIT-cwh in https://github.com/InternLM/xtuner/pull/752
- [Fix] Avoid incorrect `torchrun` invocation with `--launcher slurm` by @LZHgrla in https://github.com/InternLM/xtuner/pull/728
- [fix] fix save eval result failed with multi-node pretrain by @HoBeedzc in https://github.com/InternLM/xtuner/pull/678
- [Improve] Support the export of various LLaVA formats with `pth_to_hf` by @LZHgrla in https://github.com/InternLM/xtuner/pull/708
- [Refactor] refactor dispatch_modules by @HIT-cwh in https://github.com/InternLM/xtuner/pull/731
- [Docs] Readthedocs ZH by @pppppM in https://github.com/InternLM/xtuner/pull/553
- [Feature] Support finetune Deepseek v2 by @HIT-cwh in https://github.com/InternLM/xtuner/pull/663
- bump version to 0.1.20 by @HIT-cwh in https://github.com/InternLM/xtuner/pull/766
New Contributors
- @HoBeedzc made their first contribution in https://github.com/InternLM/xtuner/pull/678
Full Changelog: https://github.com/InternLM/xtuner/compare/v0.1.19...v0.1.20