
Does this work on flux model? #3

Open
Njasa2k opened this issue Sep 4, 2024 · 1 comment

Comments

@Njasa2k

Njasa2k commented Sep 4, 2024

No description provided.

@Godofnothing
Contributor

Hi, @Njasa2k.

The proposed method is architecture-agnostic, so in principle it works with any modern diffusion model. Since the FLUX model is much larger, the benefits of compressing it are even more pronounced. However, the current implementation is focused on SDXL-like architectures, and adding FLUX support would require significant effort. We hope to release VQDM-quantized FLUX models in the future.
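To illustrate the "architecture-agnostic" point, here is a minimal, hypothetical sketch (not this repository's code): weight-only vector quantization operates layer by layer, so the same loop applies whether the module tree belongs to an SDXL UNet or a FLUX transformer. The simple single-codebook k-means below stands in for the actual multi-codebook, calibration-based procedure.

```python
import torch
import torch.nn as nn


def vq_linear_weights(model: nn.Module, codebook_size: int = 256, group: int = 8) -> None:
    """Toy per-layer vector quantization: split each linear weight matrix into
    short vectors, cluster them into a codebook with a few k-means steps, and
    write the de-quantized weights back in place. Purely illustrative -- it
    never inspects the surrounding architecture, only individual layers."""
    for name, module in model.named_modules():
        if not isinstance(module, nn.Linear):
            continue
        w = module.weight.data
        if w.numel() % group != 0:
            continue
        vectors = w.reshape(-1, group).float()           # matrix -> short weight vectors
        seed = torch.randperm(vectors.shape[0])[:codebook_size]
        codebook = vectors[seed].clone()                 # initialize codes from the weights
        for _ in range(10):                              # a handful of k-means iterations
            assign = torch.cdist(vectors, codebook).argmin(dim=1)
            for c in range(codebook_size):
                mask = assign == c
                if mask.any():
                    codebook[c] = vectors[mask].mean(dim=0)
        module.weight.data = codebook[assign].reshape(w.shape).to(w.dtype)
        print(f"{name}: {vectors.shape[0]} vectors -> {codebook_size} codes")
```

In this view, the quantizer itself does not care which model it is given; presumably the SDXL-specific work is in calibration and pipeline integration rather than in the per-layer compression step.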
