Text-to-image finetuning - suvadityamuk/stable-diffusion-japanese-kanji

This pipeline was finetuned from stabilityai/stable-diffusion-2-1 on the suvadityamuk/japanese-kanji dataset. Below are example images generated with the finetuned pipeline for the following prompts: 'deep learning', 'elon musk', 'india', 'sakana', 'fish', 'foundation', 'neural network', 'machine learning', 'man', 'woman', 'tokyo', 'mumbai', 'google', 'youtube', 'deepmind', 'attention', 'diffusion', 'stability'.

[Grid of validation images generated for the prompts above]

Pipeline usage

You can use the pipeline like so:

from diffusers import DiffusionPipeline
import torch

# Load the finetuned pipeline in half precision
pipeline = DiffusionPipeline.from_pretrained("suvadityamuk/stable-diffusion-japanese-kanji", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # assumes a CUDA GPU is available for float16 inference

# Generate a kanji-style image from an English prompt and save it
prompt = "deep learning"
image = pipeline(prompt).images[0]
image.save("my_image.png")
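The pipeline also accepts a list of prompts, so you can generate several kanji in one call, similar to the grid above. A minimal sketch, where the prompt subset and the num_inference_steps value are illustrative rather than the exact settings used for the card:

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("suvadityamuk/stable-diffusion-japanese-kanji", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # assumes a CUDA GPU is available

# A few of the example prompts from the grid above; one image is returned per prompt
prompts = ["deep learning", "fish", "tokyo", "attention"]
images = pipeline(prompts, num_inference_steps=30).images

for prompt, image in zip(prompts, images):
    image.save(f"kanji_{prompt.replace(' ', '_')}.png")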

Training info

These are the key hyperparameters used during training:

  • Epochs: 20
  • Learning rate: 0.00025
  • Batch size: 128
  • Gradient accumulation steps: 4
  • Image resolution: 128 (see the inference sketch after this list)
  • Mixed-precision: bf16
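
Since the model was finetuned at 128x128 resolution, well below Stable Diffusion 2.1's default output size, generations at the default size may drift from the training distribution. One option is to request a matching resolution at inference time. The sketch below only assumes the standard height and width arguments of the pipeline call, with values taken from the resolution listed above:

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("suvadityamuk/stable-diffusion-japanese-kanji", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # assumes a CUDA GPU is available

# Match the 128x128 training resolution listed above (dimensions must be multiples of 8)
image = pipeline("neural network", height=128, width=128).images[0]
image.save("kanji_128.png")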

More information on all the CLI arguments and the training environment is available on the wandb run page.
