v0.9.8.1
Release date: 2024-08-11 13:01:44
Dreambooth results from this release
What's Changed
- Quantised Flux LoRA training (resuming training still requires a non-quantised model)
- More VRAM and system memory usage reductions contributed by team and community members
- CUDA 12.4 requirement bump as well as blacklisting of Python 3.12
- Dockerfile updates, allowing deployment of the latest build without errors
- More LoRA training options for Flux Dev
- Basic (crappy) Schnell training support
- Support for preserving Flux Dev's distillation or introducing CFG back into the model for improved creativity
- CFG skip logic to ensure no blurry results on undertrained LoRAs without requiring a CFG-capable base model
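The CFG-related items above can be illustrated with a minimal sketch of classifier-free guidance with a skip threshold. This is only the general mechanism, with hypothetical names; SimpleTuner's actual CFG-skip implementation differs in its details.

```python
# Hedged sketch of classifier-free guidance (CFG) with a skip threshold.
# All names here are illustrative, not SimpleTuner's API.
import torch


def guided_prediction(noise_cond: torch.Tensor,
                      noise_uncond: torch.Tensor,
                      guidance_scale: float,
                      step: int,
                      skip_until: int = 0) -> torch.Tensor:
    """Combine conditional/unconditional predictions via CFG.

    For early steps (step < skip_until), guidance is skipped and the
    conditional prediction is used directly, avoiding the blurry output
    that strong guidance can produce on undertrained adapters.
    """
    if step < skip_until or guidance_scale == 1.0:
        return noise_cond
    return noise_uncond + guidance_scale * (noise_cond - noise_uncond)
```

With `guidance_scale == 1.0` the formula reduces to the conditional prediction, which is why a distilled model like Flux Dev can be run without a CFG-capable base.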
Detailed change list
- release: follow-ups, memory reduction, quanto LoRA training by @bghira in https://github.com/bghira/SimpleTuner/pull/638
- quanto: allow further vram reduction with bf16 base weights, reorder model loading operations to evacuate text encoders before DiT loads by @bghira in https://github.com/bghira/SimpleTuner/pull/647
- merge vram reductions & regression fixes by @bghira in https://github.com/bghira/SimpleTuner/pull/649
- CUDA 12.4; more efficient model loading, lower overall sysmem / vram usage; enforce hashed VAE object names by default; reduce default VAE batch size; change default LR scheduler to polynomial by @bghira in https://github.com/bghira/SimpleTuner/pull/660
- update something about kolor in the train.py by @chongxian in https://github.com/bghira/SimpleTuner/pull/658
- refactor saving utilities part ii by @sayakpaul in https://github.com/bghira/SimpleTuner/pull/661
- Residues of #661 by @sayakpaul in https://github.com/bghira/SimpleTuner/pull/662
- Update Dockerfile by @komninoschatzipapas in https://github.com/bghira/SimpleTuner/pull/675
- merge: reorder model casting, fix kolors on newer diffusers, update cuda, refactor save hooks by @bghira in https://github.com/bghira/SimpleTuner/pull/666
- Flux: Purge text encoders from system RAM. by @mhirki in https://github.com/bghira/SimpleTuner/pull/694
- [Tests] basic save hook tests by @sayakpaul in https://github.com/bghira/SimpleTuner/pull/679
- Add the ability to train all nn.Linear for flux by @AmericanPresidentJimmyCarter in https://github.com/bghira/SimpleTuner/pull/707
- Add real guidance scale to flux validation (optional) by @AmericanPresidentJimmyCarter in https://github.com/bghira/SimpleTuner/pull/706
- flux: import some of kohya suggestions (wip) by @bghira in https://github.com/bghira/SimpleTuner/pull/711
- Add CFG skip for Flux lora training, fix precomputed embeds bug for C… by @AmericanPresidentJimmyCarter in https://github.com/bghira/SimpleTuner/pull/712
- fixed flux training by @bghira in https://github.com/bghira/SimpleTuner/pull/714
- final flux updates for release by @bghira in https://github.com/bghira/SimpleTuner/pull/716
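PR #707 above adds the option to target every `nn.Linear` layer for LoRA. A minimal sketch of the kind of enumeration involved, using a stand-in model rather than Flux, might look like this:

```python
# Hedged sketch: collect the names of every nn.Linear module in a model,
# the kind of list passed as LoRA target modules when training "all linear"
# layers. The model below is a toy stand-in, not the Flux transformer.
import torch.nn as nn


def all_linear_module_names(model: nn.Module) -> list[str]:
    """Return the qualified names of all nn.Linear submodules."""
    return [name for name, module in model.named_modules()
            if isinstance(module, nn.Linear)]


model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 8))
print(all_linear_module_names(model))
```

Targeting all linear layers trains far more parameters than the default attention-only projections, trading VRAM for adapter capacity.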
New Contributors
- @chongxian made their first contribution in https://github.com/bghira/SimpleTuner/pull/658
Full Changelog: https://github.com/bghira/SimpleTuner/compare/v0.9.8...v0.9.8.1