# SydneyBot - Llama 3.1 8B Model (v1)
## Model Description
This is a fine-tuned version of Llama 3.1 8B, trained to emulate the personality of a fictional character named Sydney. The model is built for conversational AI and supports text generation tasks.
- Architecture: Llama 3.1 8B
- Fine-tuned on: a custom dataset representing the personality of Sydney
- Size: 8B parameters
- Task: text generation (causal language modeling)
## Intended Use
- Primary Use: This model is intended for text generation, including role-playing chat, dialogue systems, and storytelling.
- How to Use: The model can be used through the Hugging Face Inference API or integrated into custom applications with the transformers library; see the examples below.
Example Usage:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Eschatol/SydneyBot")
tokenizer = AutoTokenizer.from_pretrained("Eschatol/SydneyBot")

# Tokenize the prompt and generate a response.
inputs = tokenizer("Hello, Sydney!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
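Because the base model is meta-llama/Llama-3.1-8B-Instruct, the tokenizer most likely ships the Llama 3.1 chat template, which is the more natural interface for role-playing chat. The sketch below assumes that template is present; if the fine-tune used a different prompt format, adjust accordingly.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Eschatol/SydneyBot")
tokenizer = AutoTokenizer.from_pretrained("Eschatol/SydneyBot")

# Build a chat-formatted prompt (assumes the Llama 3.1 chat template).
messages = [{"role": "user", "content": "Hello, Sydney! How are you today?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=100)

# Decode only the newly generated tokens, i.e. the model's reply.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```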
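For the hosted route, a minimal sketch using huggingface_hub is shown below; it assumes the repository is actually served through the Hugging Face Inference API or an Inference Endpoint, which depends on deployment and is not guaranteed by this card.

```python
from huggingface_hub import InferenceClient

# Query the hosted model; assumes "Eschatol/SydneyBot" is available
# through the Hugging Face Inference API or an Inference Endpoint.
client = InferenceClient(model="Eschatol/SydneyBot")
reply = client.text_generation("Hello, Sydney!", max_new_tokens=50)
print(reply)
```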
## Model Tree

- Base model: meta-llama/Llama-3.1-8B
- Fine-tuned from: meta-llama/Llama-3.1-8B-Instruct