This model is still in training. It is not the final version and may contain artifacts or perform poorly in some cases.
Setting Up
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from peft import PeftModel, PeftConfig
# Define the repository ID
repo_id = "Miguelpef/bart-base-lora-3DPrompt"
# Load the PEFT configuration from the Hub
peft_config = PeftConfig.from_pretrained(repo_id)
# Load the base model from the Hub
model = AutoModelForSeq2SeqLM.from_pretrained(peft_config.base_model_name_or_path)
# Load the tokenizer from the Hub
tokenizer = AutoTokenizer.from_pretrained(repo_id)
# Wrap the base model with PEFT
model = PeftModel.from_pretrained(model, repo_id)
# Now you can use the model for inference as before
def generar_prompt_desde_objeto(objeto):
    # Tokenize the object description, generate, and decode the prompt
    prompt = objeto
    inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
    outputs = model.generate(**inputs, max_length=100)
    prompt_generado = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return prompt_generado
mi_objeto = "Mesa grande marrón" #Change this object
prompt_generado = generar_prompt_desde_objeto(mi_objeto)
print(prompt_generado)
Model tree for Miguelpef/bart-base-lora-3DPrompt
Base model: facebook/bart-base