import torch
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"

# Model checkpoint
model_checkpoint = "gokaygokay/Flux-Prompt-Enhance"

# Tokenizer
tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)

# Model
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint)

enhancer = pipeline('text2text-generation',
                    model=model,
                    tokenizer=tokenizer,
                    repetition_penalty=1.2,
                    device=device)

max_target_length = 256
prefix = "enhance prompt: "

short_prompt = "beautiful house with text 'hello'"
answer = enhancer(prefix + short_prompt, max_length=max_target_length)
final_answer = answer[0]['generated_text']
print(final_answer)

# a two-story house with white trim, large windows on the second floor,
# three chimneys on the roof, green trees and shrubs in front of the house,
# stone pathway leading to the front door, text on the house reads "hello" in all caps,
# blue sky above, shadows cast by the trees, sunlight creating contrast on the house's facade,
# some plants visible near the bottom right corner, overall warm and serene atmosphere. 
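
If you prefer not to use the pipeline wrapper, the same enhancement can be reproduced by calling the tokenizer and model.generate() directly. The sketch below is not part of the original card; it assumes the same checkpoint and generation settings (max length 256, repetition penalty 1.2) as the example above.

# Minimal sketch (not from the card): prompt enhancement without the pipeline helper.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

device = "cuda" if torch.cuda.is_available() else "cpu"
model_checkpoint = "gokaygokay/Flux-Prompt-Enhance"

tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint).to(device)

prefix = "enhance prompt: "
short_prompt = "beautiful house with text 'hello'"

# Tokenize the prefixed prompt and move the tensors to the model's device.
inputs = tokenizer(prefix + short_prompt, return_tensors="pt").to(device)

# Generate with the same length limit and repetition penalty as the pipeline example.
output_ids = model.generate(**inputs, max_length=256, repetition_penalty=1.2)

# Decode the generated ids, dropping special tokens such as </s>.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))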
Model size: 223M params · Tensor type: F32 · Format: Safetensors
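
The parameter count listed above can be checked from the loaded model itself; a minimal sketch, assuming the model object from the example:

# Count the parameters of the loaded model; should print roughly 223M.
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e6:.0f}M params")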

Model tree for gokaygokay/Flux-Prompt-Enhance: fine-tuned from the base model google-t5/t5-base; one quantized variant is available.
