---
base_model: Geraldine/FineLlama-3.2-3B-Instruct-ead
library_name: transformers
pipeline_tag: text-generation
tags:
- openvino
- openvino-export
license: llama3.2
---
# FineLlama-3.2-3B-Instruct-ead-openvino

This model was converted to OpenVINO from [`Geraldine/FineLlama-3.2-3B-Instruct-ead`](https://huggingface.co./Geraldine/FineLlama-3.2-3B-Instruct-ead) using [optimum-intel](https://github.com/huggingface/optimum-intel)
via the [export](https://huggingface.co./spaces/echarlaix/openvino-export) space.

## Model Description

- **Original Model**: Geraldine/FineLlama-3.2-3B-Instruct-ead
- **Framework**: OpenVINO
- **Task**: Text Generation, EAD tag generation
- **Language**: English
- **License**: llama3.2

## Features

- Optimized for Intel hardware using OpenVINO
- Supports text generation inference
- Maintains original model capabilities for EAD tag generation
- Integration with PyTorch

## Installation

First make sure you have optimum-intel installed:

```bash
pip install optimum[openvino]
```

You can then load the model as follows:

```python
from optimum.intel import OVModelForCausalLM

model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
```
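
By default the model is compiled for CPU. If an Intel GPU is visible to OpenVINO you can target it instead, and you can save the converted model locally for faster reloads. A minimal sketch (the device name and local path are assumptions; adapt them to your machine):

```python
# Request compilation for an Intel GPU (assumes OpenVINO can see a "gpu"
# device; on CPU-only machines, keep the default).
model.to("gpu")

# Save the OpenVINO IR files locally so later loads skip the Hub download.
model.save_pretrained("FineLlama-3.2-3B-Instruct-ead-openvino-local")
```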

## Technical Specifications

### Supported Features
- Text Generation
- Transformers integration
- PyTorch compatibility
- OpenVINO export
- Inference Endpoints
- Conversational capabilities (see the sketch below)
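
Since the base model is a Llama 3.2 instruct model, you can also drive it conversationally through the tokenizer's chat template. A minimal sketch (the prompt content is illustrative only):

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Format the conversation with the model's built-in chat template.
messages = [
    {"role": "user", "content": "Generate an EAD <unittitle> element for a photograph collection."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```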

### Model Architecture
- Base: meta-llama/Llama-3.2-3B-Instruct
- Fine-tuned: Geraldine/FineLlama-3.2-3B-Instruct-ead
- Final conversion: OpenVINO optimization

## Usage Examples

```python
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

# Load model and tokenizer
model_id = "Geraldine/FineLlama-3.2-3B-Instruct-ead-openvino"
model = OVModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate EAD markup from a prompt; without max_new_tokens the
# default generation length would truncate the output.
def generate_ead(prompt, max_new_tokens=512):
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```
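
A hypothetical call (the prompt below is illustrative only; adapt it to your own EAD records):

```python
# Illustrative prompt; replace with a description of your own material.
prompt = "Generate an EAD <did> element for a box of 19th-century correspondence."
print(generate_ead(prompt))
```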