|
--- |
|
library_name: transformers |
|
tags: |
|
- diffusion |
|
- style_similarity |
|
- CSD |
|
- image-feature-extraction |
|
language: |
|
- en |
|
pipeline_tag: image-feature-extraction |
|
license: cc-by-4.0 |
|
--- |
|
|
|
# Quick Links |
|
|
|
- **GitHub Repository**: https://github.com/learn2phoenix/CSD |
|
- **arXiv**: https://arxiv.org/abs/2404.01292 |
|
|
|
# Description |
|
|
|
We present a framework for understanding and extracting style descriptors from images. Our framework comprises a new dataset curated using the insight that style is a subjective property of an image that captures complex yet meaningful interactions of factors including, but not limited to, colors, textures, and shapes. We also propose a method to extract style descriptors that can be used to attribute the style of a generated image to the images used in the training dataset of a text-to-image model.
|
|
|
# Technical Specification |
|
This checkpoint is the ViT-Large (ViT-L) variant of the CSD model.
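
As a rough sketch of how the descriptors might be used for style attribution: extract an L2-normalized descriptor per image and compare descriptors with cosine similarity. The repository id, `trust_remote_code` flag, and output field names below are assumptions, not the confirmed interface of this checkpoint; see the GitHub repository above for the reference implementation.

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoProcessor

# Hypothetical repository id; replace with the actual id of this model card.
MODEL_ID = "your-namespace/CSD-ViT-L"

# trust_remote_code may or may not be required, depending on whether the
# checkpoint ships custom modeling code.
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID, trust_remote_code=True)
model.eval()


def style_descriptor(image: Image.Image) -> torch.Tensor:
    """Return an L2-normalized style descriptor for a single image."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    # Assumes a ViT-style output where the CLS token serves as the pooled
    # feature; adapt to the checkpoint's actual output structure if it differs.
    feats = outputs.last_hidden_state[:, 0]
    return torch.nn.functional.normalize(feats, dim=-1)


# Style similarity between a generated image and a candidate training image
# is the cosine similarity of their descriptors.
gen = style_descriptor(Image.open("generated.png").convert("RGB"))
ref = style_descriptor(Image.open("reference.png").convert("RGB"))
print(f"style similarity: {(gen @ ref.T).item():.3f}")
```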
|
|
|
# Cite our work |
|
|
|
If you find our model, codebase or dataset beneficial, please consider citing our work: |
|
|
|
```bibtex |
|
@article{somepalli2024measuring, |
|
title={Measuring Style Similarity in Diffusion Models}, |
|
author={Somepalli, Gowthami and Gupta, Anubhav and Gupta, Kamal and Palta, Shramay and Goldblum, Micah and Geiping, Jonas and Shrivastava, Abhinav and Goldstein, Tom}, |
|
journal={arXiv preprint arXiv:2404.01292}, |
|
year={2024} |
|
} |
|
``` |
|
|