|
---
license: apache-2.0
datasets:
- argilla/ultrafeedback-binarized-preferences-cleaned
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
library_name: transformers
tags:
- transformers
---
|
|
|
<h1 style="font-size: 2em;">✨ Introducing ElEmperador! ✨</h1> |
|
|
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64e8ea3892d9db9a93580fe3/gkDcpIxRCjBlmknN_jzWN.png) |
|
|
|
# Introduction: |
|
|
|
ElEmperador is an ORPO-based fine-tune derived from the Mistral-7B-v0.1 base model.
|
|
|
The argilla/ultrafeedback-binarized-preferences-cleaned preference dataset was used to improve the model's performance.
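As a reference for how such a run is typically wired up, below is a minimal, hypothetical sketch of ORPO preference tuning with TRL's `ORPOTrainer`. The hyperparameters, `output_dir`, and batch settings are illustrative assumptions only, not the values used to train ElEmperador; the actual recipe is linked further down.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "mistralai/Mistral-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # Mistral-7B-v0.1 ships without a pad token

model = AutoModelForCausalLM.from_pretrained(base_model)

# Preference pairs with "prompt", "chosen", and "rejected" columns.
dataset = load_dataset("argilla/ultrafeedback-binarized-preferences-cleaned", split="train")

# Illustrative hyperparameters only; the actual values live in the recipe repo below.
config = ORPOConfig(
    output_dir="elemperador-orpo",   # hypothetical output directory
    beta=0.1,                        # weight of the odds-ratio penalty
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=dataset,
    tokenizer=tokenizer,  # renamed to processing_class= in newer TRL releases
)
trainer.train()
```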
|
|
|
## Model Evals

Evaluation results will be posted soon.
|
|
|
The full model recipe is available at: https://github.com/ParagEkbote/El-Emperador_ModelRecipe
|
|
|
## Inference Script: |
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def generate_response(model_name, input_text, max_new_tokens=50):
    # Load the tokenizer and model from the Hugging Face Hub
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # Tokenize the input text
    input_ids = tokenizer(input_text, return_tensors="pt").input_ids

    # Generate a response using the model
    with torch.no_grad():
        generated_ids = model.generate(input_ids, max_new_tokens=max_new_tokens)

    # Decode the generated tokens into text
    generated_text = tokenizer.decode(generated_ids[0], skip_special_tokens=True)

    return generated_text


if __name__ == "__main__":
    # Set the model name from the Hugging Face Hub
    model_name = "AINovice2005/ElEmperador"
    input_text = "Hello, how are you?"

    # Generate and print the model's response
    output = generate_response(model_name, input_text)

    print(f"Input: {input_text}")
    print(f"Output: {output}")
```
|
|
|
|