diffusers-internal-dev

Recent Activity

sayakpaul posted an update 1 day ago
Commits speak louder than words 🤪

* 4 new video models
* Multiple image models, including SANA & Flux Control
* New quantizers -> GGUF & TorchAO
* New training scripts

Enjoy this holiday-special Diffusers release 🤗
Notes: https://github.com/huggingface/diffusers/releases/tag/v0.32.0
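To give a flavor of the new quantizers, here is a minimal sketch of loading a GGUF-quantized Flux transformer. It assumes diffusers v0.32 with the gguf package installed; the community checkpoint URL and generation settings are illustrative, so check the release notes and docs for the exact API.

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, GGUFQuantizationConfig

# Load a community GGUF checkpoint of the Flux transformer via the new GGUF backend.
# The checkpoint URL below is just an example repo; any Flux GGUF file should work.
ckpt_path = "https://huggingface.co/city96/FLUX.1-dev-gguf/blob/main/flux1-dev-Q2_K.gguf"
transformer = FluxTransformer2DModel.from_single_file(
    ckpt_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Plug the quantized transformer into the regular Flux pipeline.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe(
    "a tiny astronaut hatching from an egg on the moon",
    num_inference_steps=50,
    guidance_scale=3.5,
    generator=torch.manual_seed(0),
).images[0]
image.save("flux_gguf.png")
```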
sayakpaul posted an update 7 days ago
In the past seven days, the Diffusers team has shipped:

1. Two new video models
2. One new image model
3. Two new quantization backends
4. Three new fine-tuning scripts
5. Multiple fixes and library QoL improvements

Coffee on me if someone can guess 1 - 4 correctly.
sayakpaul posted an update 15 days ago
Introducing a high-quality open preference dataset to further this line of research in image generation.

Despite being such an integral component of modern image generation, open preference datasets are a rarity!

So, we decided to work on one with the community!

Check it out here:
https://huggingface.co./blog/image-preferences
sayakpaul posted an update 16 days ago
The Control family of Flux from @black-forest-labs should be discussed more!

It enables structural controls like ControlNets while being significantly less expensive to run!

So, we're working on a Control LoRA training script 🤗

It's still WIP, so go easy:
https://github.com/huggingface/diffusers/pull/10130
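For context, a rough inference sketch with the Canny member of the Control family is below. It assumes the FluxControlPipeline API from recent diffusers versions plus access to the gated FLUX.1-Canny-dev weights, and uses OpenCV only to build an edge map; treat the model id and parameters as illustrative.

```python
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import FluxControlPipeline
from diffusers.utils import load_image

# Canny variant of the Flux Control family (gated repo; requires accepting the license).
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Canny-dev", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()

# Build the structural control signal: a Canny edge map of a reference image.
init_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
)
edges = cv2.Canny(np.array(init_image), 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

image = pipe(
    prompt="a robot version of the girl with a pearl earring, studio lighting",
    control_image=control_image,
    num_inference_steps=50,
    guidance_scale=30.0,
    generator=torch.manual_seed(0),
).images[0]
image.save("flux_control_canny.png")
```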
sayakpaul posted an update 26 days ago
sayakpaul posted an update about 1 month ago
It's been a while since we shipped native quantization support in diffusers 🧨

We currently support bitsandbytes as the official backend, but using others like torchao is already very simple.

This post is just a reminder of what's possible:

1. Loading a model with a quantization config
2. Saving a model with quantization config
3. Loading a pre-quantized model
4. enable_model_cpu_offload()
5. Training and loading LoRAs into quantized checkpoints

Docs:
https://huggingface.co./docs/diffusers/main/en/quantization/bitsandbytes
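To make the list above concrete, here is a minimal sketch assuming a recent diffusers release with bitsandbytes installed; the model id, save path, prompt, and generation settings are illustrative, so defer to the linked docs for the exact API.

```python
import torch
from diffusers import BitsAndBytesConfig, FluxPipeline, FluxTransformer2DModel

# 1. Load the transformer with a 4-bit (NF4) quantization config.
nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=nf4_config,
    torch_dtype=torch.bfloat16,
)

# 2. Save the quantized weights together with their quantization config
#    (4-bit serialization needs a sufficiently recent bitsandbytes).
transformer.save_pretrained("flux-transformer-nf4")

# 3. A pre-quantized checkpoint like the one saved above can later be reloaded
#    with from_pretrained() without passing a quantization config again.

# 4. Assemble the pipeline and offload idle components to the CPU to cut VRAM usage.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

# 5. LoRAs can be loaded on top of the quantized checkpoint, e.g.
#    pipe.load_lora_weights("<your-lora-repo>")  # placeholder repo id

image = pipe(
    "a photo of a corgi astronaut on the moon",
    num_inference_steps=50,
    guidance_scale=3.5,
    generator=torch.manual_seed(0),
).images[0]
image.save("flux_nf4.png")
```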