---
license: apache-2.0
datasets:
- mlabonne/guanaco-llama2-1k
pipeline_tag: text-generation
---
# Miniguanaco

<img src="https://i.imgur.com/E7IzZMc.png" width="400">

📝 [Article](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32) |
💻 [Colab](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)

This is a Llama 2 7B model fine-tuned with QLoRA in 4-bit precision on the [`mlabonne/guanaco-llama2-1k`](https://huggingface.co./datasets/mlabonne/guanaco-llama2-1k) dataset, a 1,000-sample subset of [`timdettmers/openassistant-guanaco`](https://huggingface.co./datasets/timdettmers/openassistant-guanaco).

It was trained in a Google Colab notebook on a T4 GPU with high RAM. It is mainly intended for educational purposes rather than production inference.
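
For reference, here is a minimal sketch of the 4-bit QLoRA setup described above. The base checkpoint name and the LoRA hyperparameters (`r`, `lora_alpha`, `lora_dropout`) are illustrative assumptions, not necessarily the exact values used to train this model:

```
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# Quantize the base model to 4-bit NF4 (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach trainable low-rank adapters on top of the frozen 4-bit weights
# (r and lora_alpha below are illustrative values)
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()
```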

You can load it with the `AutoModelForCausalLM` class from `transformers`:

```
from transformers import AutoModelForCausalLM

# Use from_pretrained to download and load the fine-tuned weights
model = AutoModelForCausalLM.from_pretrained("mlabonne/llama-2-7b-miniguanaco")
```
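
For a quick test, you can run generation through a `pipeline`; the `[INST]` prompt format below follows the Llama 2 chat convention used by the training dataset:

```
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model_name = "mlabonne/llama-2-7b-miniguanaco"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

# Wrap the question in the Llama 2 instruction format
prompt = "What is a large language model?"
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_length=200)
result = pipe(f"<s>[INST] {prompt} [/INST]")
print(result[0]["generated_text"])
```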