---
license: mit
---
[![Discord](https://img.shields.io/discord/232596713892872193?logo=discord)](https://discord.gg/2JhHVh7CGu)

This is a severely undertrained research network, released as a proof of concept for the architecture. It was trained on ~700 example images for 2000 epochs, reaching a minimum MSE loss of ~0.06. Generation is unconditional (no text conditioning yet; the model simply generates something plausible from the flow objective). This repo is meant only as a demo that a <100M parameter model can achieve strong color balance and low loss on pixel diffusion. The next step is scaling up the data.

A semi-custom network based on the paper [Simpler Diffusion (SiD2)](https://arxiv.org/abs/2410.19324v1).

This network uses the optimal transport flow matching objective outlined in [Flow Matching for Generative Modeling](https://arxiv.org/abs/2210.02747).
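For intuition, the conditional OT objective interpolates linearly between noise and data and regresses the model's predicted velocity onto the constant displacement between them. Below is a minimal sketch of that loss in PyTorch; the `model(x, t)` signature is an assumption for illustration, not necessarily this repo's API:

```python
import torch

def ot_flow_matching_loss(model, x1):
    """Conditional OT flow matching: x_t = (1 - t) * x0 + t * x1,
    target velocity is x1 - x0, loss is MSE against the model's prediction."""
    x0 = torch.randn_like(x1)                      # noise endpoint
    t = torch.rand(x1.shape[0], device=x1.device)  # uniform time in [0, 1]
    t_ = t.view(-1, 1, 1, 1)                       # broadcast over C, H, W
    xt = (1 - t_) * x0 + t_ * x1                   # linear interpolant
    v_target = x1 - x0                             # constant OT velocity
    v_pred = model(xt, t)                          # model predicts velocity
    return torch.mean((v_pred - v_target) ** 2)    # MSE flow objective
```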

xATGLU layers are used instead of plain linear layers at the entry of the transformer MLP block ([Expanded Gating Ranges Improve Activation Functions](https://arxiv.org/pdf/2405.20768)).
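A sketch of an expanded arctan GLU (xATGLU) in the spirit of that paper: the gate is an arctan-based sigmoid whose range is widened by a learnable expansion parameter. The module name, shapes, and zero initialization of `alpha` are assumptions, not necessarily this repo's exact implementation:

```python
import math
import torch
import torch.nn as nn

class XATGLU(nn.Module):
    """Expanded arctan gated linear unit: a GLU whose gate uses an
    arctan sigmoid with a learnable expanded range (-alpha, 1 + alpha)."""
    def __init__(self, dim_in, dim_out):
        super().__init__()
        self.proj = nn.Linear(dim_in, dim_out * 2)       # gate and value halves
        self.alpha = nn.Parameter(torch.zeros(dim_out))  # 0 -> plain arctan gate

    def forward(self, x):
        gate, value = self.proj(x).chunk(2, dim=-1)
        g = torch.atan(gate) / math.pi + 0.5        # arctan squashed to (0, 1)
        g = g * (1 + 2 * self.alpha) - self.alpha   # expand the gating range
        return g * value
```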

```python train.py``` will train a new image network on the provided dataset. The dataset is currently loaded entirely into GPU memory; see the `preload_dataset` function.
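A rough sketch of what such a preload step might look like; the directory layout, image size, and normalization here are assumptions, and the repo's actual `preload_dataset` may differ:

```python
import glob
import torch
from PIL import Image
from torchvision import transforms

def preload_dataset(image_dir="images", size=64, device="cuda"):
    """Load every image into one GPU-resident tensor, normalized to [-1, 1]."""
    to_tensor = transforms.Compose([
        transforms.Resize((size, size)),
        transforms.ToTensor(),                       # [0, 1], CHW layout
        transforms.Normalize([0.5] * 3, [0.5] * 3),  # map to [-1, 1]
    ])
    images = [to_tensor(Image.open(p).convert("RGB"))
              for p in sorted(glob.glob(f"{image_dir}/*.png"))]
    return torch.stack(images).to(device)            # (N, 3, size, size)
```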

```python test_sample.py step_1799.safetensors``` samples from a trained checkpoint, where `step_1799.safetensors` is the desired model to run inference on. This always generates a sample grid of 16x16 images.
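Sampling from a flow matching model amounts to integrating the learned velocity field from noise at t = 0 to data at t = 1. A simple Euler integrator like the one below suffices; the step count, output shape, and `model(x, t)` signature are assumptions for illustration:

```python
import torch

@torch.no_grad()
def sample(model, shape=(16, 3, 64, 64), steps=50, device="cuda"):
    """Euler-integrate dx/dt = v(x, t) from noise (t=0) to images (t=1)."""
    x = torch.randn(shape, device=device)  # start from pure noise
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((shape[0],), i * dt, device=device)
        x = x + model(x, t) * dt           # one Euler step along the flow
    return x.clamp(-1, 1)                  # images normalized to [-1, 1]
```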

![samples](./1.png)
![samples](./2.png)
![samples](./3.png)
![samples](./4.png)