v4.2.9rc2
Release date: 2024-09-04 23:30:01
FLUX
Please note these nodes are still in the prototype stage and are subject to change. This Node API is not stable!
We currently support both FLUX dev and FLUX schnell, in workflows only; they will be incorporated into the rest of the UI in future updates. This is an initial, developing implementation, and we are bringing it in with the intent of long-term stable support for FLUX.
Default workflows can be found in your Workflows tab: FLUX Text to Image and FLUX Image to Image. Please note that FLUX has not been added to the linear UI yet; LoRAs and Img2Img are not yet supported there, but will be added soon.
FLUX denoise nodes now provide preview images.
CLIP embed and T5 encoder models can now be installed outside of the starter models.
Required Dependencies
In order to run FLUX on Invoke, you will need to download and install several models. We have provided options in the Starter Models list (found in your Model Manager tab) for quantized and unquantized versions of both FLUX dev and FLUX schnell. Selecting one of these will automatically download the dependencies you need, listed below. These dependencies are also available for ad hoc download in the Starter Models list; see the sketch after the list below for fetching them manually.
- T5 encoder
- CLIP-L encoder
- FLUX transformer/unet
- FLUX VAE
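If you prefer to fetch the weights yourself rather than through the Model Manager, a minimal sketch using `huggingface_hub` follows. The repo IDs are assumptions based on the upstream Hugging Face repositories, not paths published in these release notes:

```python
# Hypothetical ad hoc download of the FLUX dependencies with huggingface_hub.
# The Starter Models list in the Model Manager handles all of this for you;
# the repo IDs below are assumptions about the upstream sources.
from huggingface_hub import snapshot_download

snapshot_download("black-forest-labs/FLUX.1-schnell")  # FLUX transformer + VAE (Apache-2.0)
snapshot_download("openai/clip-vit-large-patch14")     # CLIP-L encoder
snapshot_download("google/t5-v1_1-xxl")                # T5 encoder
# FLUX dev (black-forest-labs/FLUX.1-dev) is gated: accept its non-commercial
# license on huggingface.co and authenticate with `huggingface-cli login` first.
```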
Considerations
FLUX is a large model and has significant VRAM requirements. The full models require 24GB of VRAM on Linux; Windows PCs are less efficient and thus need slightly more, making it difficult to run the full models there.
To compensate for this, the community has begun to develop quantized versions of the dev model. These trade slightly lower quality for significant reductions in VRAM requirements.
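For a rough sense of where those numbers come from, here is a back-of-the-envelope estimate assuming the commonly cited ~12B parameter count for the FLUX transformer (the parameter count and bytes-per-parameter figures are assumptions, not Invoke-published numbers):

```python
# Back-of-the-envelope VRAM estimate for the FLUX transformer alone.
# Actual usage also includes the T5/CLIP encoders, the VAE, and activations,
# so treat these as lower bounds.
params = 12e9  # assumed ~12B parameters
print(f"bf16 (2 bytes/param): ~{params * 2 / 1024**3:.0f} GB")  # ~22 GB
print(f"8-bit (1 byte/param): ~{params * 1 / 1024**3:.0f} GB")  # ~11 GB
```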
Currently, Invoke only supports FLUX on NVIDIA GPUs. You may be able to work out a way to get an AMD GPU to generate, but we have not been able to test this and so cannot provide committed support for it. FLUX on MPS is not supported at this time.
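If you are unsure what your system has, a quick check with PyTorch (which Invoke already depends on) will report the visible GPU and its VRAM. This is a generic sketch, not an Invoke command:

```python
# Check that a CUDA-capable (NVIDIA) GPU is visible to PyTorch and report
# how much memory it has, to compare against the requirements below.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA device found; FLUX on Invoke currently requires an NVIDIA GPU.")
```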
Please note that the FLUX dev model is released under a non-commercial license. You will need a commercial license to use the model for any commercial work.
Below are additional details on which model to use based on your system:
| Starter model | License | System RAM | VRAM | Notes |
| --- | --- | --- | --- | --- |
| FLUX dev quantized | non-commercial | >16GB | ≥12GB | |
| FLUX schnell quantized | commercial | >16GB | ≥12GB | faster inference than dev |
| FLUX dev | non-commercial | >32GB | ≥24GB | Linux only |
| FLUX schnell | commercial | >32GB | ≥24GB | Linux only |
Running the Workflow
You can find a new default workflow in your Workflows tab called FLUX Text to Image. This can be run with both FLUX dev and FLUX schnell models, but note that the default step count of 30 is the recommendation for FLUX dev; if running FLUX schnell, we recommend lowering your step count to 4. You will not be able to run this workflow successfully without the required dependencies listed above installed.
The exposed fields will require you to select a FLUX model, T5 encoder, CLIP embed model, and VAE, and to set a prompt and step count.
We've also added a new default workflow named FLUX Image to Image. This runs very similarly to the workflow described above, with the additional ability to provide a base image.
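For readers new to image-to-image, the usual scheme is to encode the base image to latents, add noise partway up the schedule, and denoise from there, so only a fraction of the step count is actually run. The sketch below illustrates that convention; the `strength` parameter and step math are a common diffusion idiom, not necessarily InvokeAI's exact FLUX implementation:

```python
# Illustrative sketch of a typical latent image-to-image schedule.
# `strength` is a hypothetical parameter: 1.0 ignores the base image
# (full denoise), 0.0 returns the base image unchanged.

def img2img_steps(num_steps: int, strength: float) -> tuple[int, int]:
    """Return (start_step, effective_steps) for an img2img run."""
    effective = int(num_steps * strength)  # steps actually executed
    start = num_steps - effective          # noise level injected before denoising
    return start, effective

# e.g. 30 steps (the FLUX dev default) at strength 0.6:
print(img2img_steps(30, 0.6))  # (12, 18): noise to step 12, then denoise 18 steps
```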
Other Changes
- Enhancement: add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp
- Enhancement: FLUX memory management improvements by @RyanJDick
- Feature: Add FLUX image-to-image and inpainting by @RyanJDick
- Feature: flux preview images by @brandonrising
- Enhancement: Add install probes for T5_encoder and ClipTextModel by @lstein
- Fix: support checkpoint bundles containing more than the transformer by @brandonrising
Installation and Updating
To install or update to v4.2.9rc2, download the installer and follow the [installation instructions](https://invoke-ai.github.io/InvokeAI/installation/010_INSTALL_AUTOMATED/).
To update, select the same installation location. Your user data (images, models, etc) will be retained.
What's Changed
- Add selectedStylePreset to app parameters by @chainchompa in https://github.com/invoke-ai/InvokeAI/pull/6787
- feat(ui, nodes): add fields for CLIPEmbedModel and FluxVAEModel by @maryhipp in https://github.com/invoke-ai/InvokeAI/pull/6794
- FLUX memory management improvements by @RyanJDick in https://github.com/invoke-ai/InvokeAI/pull/6791
- Fix source string in hugging face installs with subfolders by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/6797
- Add a new FAQ for converting checkpoints to diffusers by @lstein in https://github.com/invoke-ai/InvokeAI/pull/6736
- scripts: add allocate_vram script by @psychedelicious in https://github.com/invoke-ai/InvokeAI/pull/6617
- Add FLUX image-to-image and inpainting by @RyanJDick in https://github.com/invoke-ai/InvokeAI/pull/6798
- [MM] add API routes for getting & setting MM cache sizes by @lstein in https://github.com/invoke-ai/InvokeAI/pull/6523
- feat: flux preview images by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/6804
- Add install probes for T5_encoder and ClipTextModel by @lstein in https://github.com/invoke-ai/InvokeAI/pull/6800
- Build container image on-demand by @ebr in https://github.com/invoke-ai/InvokeAI/pull/6806
- feat: support checkpoint bundles containing more than the transformer by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/6808
- chore: 4.2.9rc2 version bump by @brandonrising in https://github.com/invoke-ai/InvokeAI/pull/6810
Full Changelog: https://github.com/invoke-ai/InvokeAI/compare/v4.2.9rc1...v4.2.9rc2
Assets
- InvokeAI-4.2.9rc2-py3-none-any.whl (4.23MB)
- InvokeAI-4.2.9rc2.tar.gz (4.05MB)
- InvokeAI-installer-v4.2.9rc2.zip (16.46KB)