---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co./black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
pipeline_tag: text-to-image
tags:
- Stable Diffusion
- image-generation
- Flux
- diffusers
- controlnet
---

![Banner Picture 1](assets/banner-green.png?raw=true)
[<img src="https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/join-our-discord-rev1.png?raw=true">](https://discord.gg/FHY2guThfy)
![Greek Picture 1](assets/x_greek.png?raw=true)

This repository provides an IP-Adapter checkpoint for the
[FLUX.1-dev model](https://huggingface.co./black-forest-labs/FLUX.1-dev) by Black Forest Labs.

[See our GitHub](https://github.com/XLabs-AI/x-flux-comfyui) for ComfyUI workflows.
![Flow Example Picture 1](assets/ip_adapter_workflow_example.png?raw=true)

# Models
The IP-Adapter is trained at 512x512 resolution for 50k steps and at 1024x1024 for 25k steps, and works at both 512x512 and 1024x1024. Training is ongoing and we release new checkpoints regularly, so stay tuned.
We release the **v1 version**, which can be used directly in ComfyUI!   

Please see our [ComfyUI custom nodes installation guide](https://github.com/XLabs-AI/x-flux-comfyui).

# Examples

See examples of our model's results below.  
Some generation results with input images are also provided under "Files and versions".

# Inference

To try our models, you have two options:
1. Use main.py from our [official repo](https://github.com/XLabs-AI/x-flux)
2. Use our custom nodes for ComfyUI and test them with the provided workflows (check out the `/workflows` folder)

## Instructions for ComfyUI
1. Go to `ComfyUI/custom_nodes`
2. Clone [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui.git) so that all of that repo's files end up under `ComfyUI/custom_nodes/x-flux-comfyui/`
3. Go to `ComfyUI/custom_nodes/x-flux-comfyui/` and run `python setup.py`
4. Update x-flux-comfyui with `git pull`, or reinstall it.
5. Download the CLIP-L `model.safetensors` from [OpenAI ViT CLIP large](https://huggingface.co./openai/clip-vit-large-patch14) and place it in `ComfyUI/models/clip_vision/`.
6. Download our IP-Adapter from [Hugging Face](https://huggingface.co./XLabs-AI/flux-ip-adapter/tree/main) and place it in `ComfyUI/models/xlabs/ipadapters/`.
7. Use the `Flux Load IPAdapter` and `Apply Flux IPAdapter` nodes, choose the right CLIP model, and enjoy your generations.
8. You can find an example workflow in the `workflows` folder of this repo.
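
The installation steps above can be sketched as shell commands. This is a sketch assuming a standard ComfyUI checkout; `huggingface-cli` ships with the `huggingface_hub` package, and the IP-Adapter weight filename used here (`flux-ip-adapter.safetensors`) is an assumption — check this repo's "Files and versions" tab for the exact name:

```shell
# Run from the directory that contains your ComfyUI checkout.
cd ComfyUI/custom_nodes
git clone https://github.com/XLabs-AI/x-flux-comfyui.git
cd x-flux-comfyui
python setup.py
cd ../../..

# CLIP-L vision encoder (step 5)
huggingface-cli download openai/clip-vit-large-patch14 model.safetensors \
  --local-dir ComfyUI/models/clip_vision

# IP-Adapter weights (step 6); verify the filename in "Files and versions"
mkdir -p ComfyUI/models/xlabs/ipadapters
huggingface-cli download XLabs-AI/flux-ip-adapter flux-ip-adapter.safetensors \
  --local-dir ComfyUI/models/xlabs/ipadapters
```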

If you get bad results, try setting `true_gs=2`.

### Limitations
The IP-Adapter is currently in beta.
We do not guarantee good results on the first try; it may take several attempts to get the image you want.

![Example Picture 2](assets/ip_adapter_example2.png?raw=true)
![Example Picture 1](assets/ip_adapter_example1.png?raw=true)

## License

Our weights fall under the [FLUX.1 [dev]](https://huggingface.co./black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License.