# FineLlama-3.2-3B-Instruct-ead-openvino
This model was converted to OpenVINO format from Geraldine/FineLlama-3.2-3B-Instruct-ead using optimum-intel via the export space.
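
For reference, an equivalent conversion can be run locally with the `optimum-cli` exporter (a sketch; the files in this repository were produced through the export space rather than this exact command, and the output directory name is illustrative):

```bash
# Export the fine-tuned checkpoint to OpenVINO IR
optimum-cli export openvino --model Geraldine/FineLlama-3.2-3B-Instruct-ead FineLlama-3.2-3B-Instruct-ead-openvino
```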
## Model Description
- Original Model: Geraldine/FineLlama-3.2-3B-Instruct-ead
- Framework: OpenVINO
- Task: Text Generation, EAD tag generation
- Language: English
- License: llama3.2
## Features
- Optimized for Intel hardware using OpenVINO
- Supports text generation inference
- Maintains original model capabilities for EAD tag generation
- Integration with PyTorch
## Installation
First make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```
To load the model:

```python
from optimum.intel import OVModelForCausalLM

model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
```
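
By default inference runs on CPU. To target other Intel hardware through OpenVINO, the device can be switched after loading (a minimal sketch, assuming a supported Intel GPU and driver are available):

```python
# Move inference to an Intel GPU via the OpenVINO runtime
model.to("gpu")
```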
## Technical Specifications
### Supported Features

- Text Generation
- Transformers integration
- PyTorch compatibility
- OpenVINO export
- Inference Endpoints
- Conversational capabilities (see the chat sketch below)
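
Since the underlying Llama 3.2 Instruct checkpoint ships a chat template, conversational prompts can be built with the tokenizer's `apply_chat_template` (a minimal sketch; the system and user messages are illustrative, not from this model's training setup):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Build a chat-formatted prompt; both messages are illustrative
messages = [
    {"role": "system", "content": "You are an assistant that writes EAD/XML."},
    {"role": "user", "content": "Produce a minimal <ead> skeleton."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```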
### Model Architecture
- Base: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuned: Geraldine/FineLlama-3.2-3B-Instruct-ead
- Final conversion: OpenVINO optimization
## Usage Examples
```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Load model and tokenizer
model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate text
def generate_ead(prompt):
    inputs = tokenizer(prompt, return_tensors="pt")
    # max_new_tokens is illustrative; tune it for the length of your EAD output
    outputs = model.generate(**inputs, max_new_tokens=512)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```
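
A minimal invocation of the helper above (the prompt is illustrative; adapt it to your EAD use case):

```python
# Ask the model for an EAD fragment and print the decoded result
print(generate_ead("Generate an EAD <did> element for a 19th-century letter collection."))
```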