v0.1.19
Release date: 2024-05-11 17:50:14
Latest InternLM/xtuner release: v0.1.23 (2024-07-22 20:19:23)
What's Changed
- [Fix] LLaVA-v1.5 official settings by @LZHgrla in https://github.com/InternLM/xtuner/pull/594
- [Feature] Release LLaVA-Llama-3-8B by @LZHgrla in https://github.com/InternLM/xtuner/pull/595
- [Improve] Add single-gpu configs for LLaVA-Llama-3-8B by @LZHgrla in https://github.com/InternLM/xtuner/pull/596
- [Docs] Add wisemodel badge by @LZHgrla in https://github.com/InternLM/xtuner/pull/597
- [Feature] Support load_json_file with json.load by @HIT-cwh in https://github.com/InternLM/xtuner/pull/610
- [Feature] Support Microsoft Phi-3 4K & 128K Instruct models by @pppppM in https://github.com/InternLM/xtuner/pull/603
- [Fix] Set `dataloader_num_workers=4` for LLaVA training by @LZHgrla in https://github.com/InternLM/xtuner/pull/611
- [Fix] Do not set `attn_implementation` to `flash_attention_2` or `sdpa` if users already set it in XTuner configs (see the config sketch after this list) by @HIT-cwh in https://github.com/InternLM/xtuner/pull/609
- [Release] LLaVA-Phi-3-mini by @LZHgrla in https://github.com/InternLM/xtuner/pull/615
- Update README.md by @eltociear in https://github.com/InternLM/xtuner/pull/608
- [Feature] Refine sp api by @HIT-cwh in https://github.com/InternLM/xtuner/pull/619
- [Feature] Add conversion scripts for LLaVA-Llama-3-8B by @LZHgrla in https://github.com/InternLM/xtuner/pull/618
- [Fix] Convert nan to 0 just for logging by @HIT-cwh in https://github.com/InternLM/xtuner/pull/625
- [Docs] Delete colab and add speed benchmark by @HIT-cwh in https://github.com/InternLM/xtuner/pull/617
- [Feature] Support DeepSpeed ZeRO-3 + QLoRA (dsz3+qlora; see the sketch after this list) by @HIT-cwh in https://github.com/InternLM/xtuner/pull/600
- [Feature] Add qwen1.5 110b cfgs by @HIT-cwh in https://github.com/InternLM/xtuner/pull/632
- Check the `transformers` version before dispatch by @HIT-cwh in https://github.com/InternLM/xtuner/pull/672
- [Fix] `convert_xtuner_weights_to_hf` with frozen ViT by @LZHgrla in https://github.com/InternLM/xtuner/pull/661
- [Fix] Fix batch-size setting of single-card LLaVA-Llama-3-8B configs by @LZHgrla in https://github.com/InternLM/xtuner/pull/598
- [Feature] Add HFCheckpointHook to auto-save the HF-format model after the whole training phase (see the hook sketch after this list) by @HIT-cwh in https://github.com/InternLM/xtuner/pull/621
- Remove test info in DatasetInfoHook by @hhaAndroid in https://github.com/InternLM/xtuner/pull/622
- [Improve] Support `safe_serialization` saving (see the sketch after this list) by @LZHgrla in https://github.com/InternLM/xtuner/pull/648
- Bump version to 0.1.19 by @HIT-cwh in https://github.com/InternLM/xtuner/pull/675
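For #609 above, a minimal sketch of an mmengine-style XTuner model config in which `attn_implementation` is set explicitly, so XTuner keeps the user's choice instead of switching it to `flash_attention_2` or `sdpa` during dispatch. The model name and kwargs are illustrative placeholders, and passing `attn_implementation` through `from_pretrained` assumes a transformers version that supports it (>= 4.36).

```python
# Hypothetical config fragment; model name and kwargs are illustrative.
import torch
from transformers import AutoModelForCausalLM
from xtuner.model import SupervisedFinetune

model = dict(
    type=SupervisedFinetune,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path='internlm/internlm2-chat-7b',
        torch_dtype=torch.float16,
        trust_remote_code=True,
        # Explicitly chosen by the user; with #609, XTuner no longer
        # overrides this with flash_attention_2 or sdpa.
        attn_implementation='sdpa',
    ),
)
```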
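For #600 above, a sketch of the QLoRA portion of an XTuner config, which with this release can also be trained under DeepSpeed ZeRO-3. The model name, quantization and LoRA values, and the launch command are assumptions for illustration, not taken from the release notes.

```python
# Hypothetical QLoRA config fragment; values are illustrative.
import torch
from peft import LoraConfig
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from xtuner.model import SupervisedFinetune

model = dict(
    type=SupervisedFinetune,
    llm=dict(
        type=AutoModelForCausalLM.from_pretrained,
        pretrained_model_name_or_path='internlm/internlm2-chat-7b',
        torch_dtype=torch.float16,
        quantization_config=dict(
            type=BitsAndBytesConfig,
            load_in_4bit=True,                      # 4-bit NF4 quantization (QLoRA)
            bnb_4bit_compute_dtype=torch.float16,
            bnb_4bit_quant_type='nf4',
            bnb_4bit_use_double_quant=True,
        ),
    ),
    lora=dict(
        type=LoraConfig,
        r=64,
        lora_alpha=16,
        lora_dropout=0.1,
        task_type='CAUSAL_LM',
    ),
)

# Assumed launch command for ZeRO-3 + QLoRA, e.g.:
#   NPROC_PER_NODE=8 xtuner train <this_config>.py --deepspeed deepspeed_zero3
```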
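For #621 above, a sketch of registering the new HFCheckpointHook as a custom hook so a Hugging Face-format model is written out automatically once training finishes. The import path and the `out_dir` argument name are assumptions based on XTuner's usual hook layout; check the shipped configs for the exact signature.

```python
# Hypothetical custom-hooks fragment; import path and argument name are assumed.
from xtuner.engine.hooks import HFCheckpointHook

custom_hooks = [
    # Saves a Hugging Face-format checkpoint after the whole training phase.
    dict(type=HFCheckpointHook, out_dir='./work_dirs/hf_model'),
]
```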
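For #648 above, `safe_serialization` is the standard Hugging Face `save_pretrained` switch that writes `.safetensors` shards instead of pickle-based `.bin` files. A minimal sketch of re-saving an already converted or merged model in that format; the paths are placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder paths for a previously converted/merged HF-format model.
src = './work_dirs/merged_hf_model'
dst = './work_dirs/merged_hf_model_safetensors'

model = AutoModelForCausalLM.from_pretrained(src, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(src, trust_remote_code=True)

# safe_serialization=True writes .safetensors shards instead of pickle-based .bin files.
model.save_pretrained(dst, safe_serialization=True)
tokenizer.save_pretrained(dst)
```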
New Contributors
- @eltociear made their first contribution in https://github.com/InternLM/xtuner/pull/608
Full Changelog: https://github.com/InternLM/xtuner/compare/v0.1.18...v0.1.19