mlabonne committed
Commit 0571303 • 1 Parent(s): 78546ee

Update README.md

Files changed (1): README.md (+38 -8)
---
datasets:
- mlabonne/guanaco-llama2-1k
pipeline_tag: text-generation
---

# 🦙🧠 Miniguanaco

📝 [Article](https://towardsdatascience.com/fine-tune-your-own-llama-2-model-in-a-colab-notebook-df9823a04a32) |
💻 [Colab](https://colab.research.google.com/drive/1PEQyJO1-f6j0S_XJ8DV50NkpzasXkrzd?usp=sharing)

<center><img src="https://i.imgur.com/1IZmjU4.png" width="300"></center>

This is a Llama 2-7b model fine-tuned with QLoRA (4-bit precision) on the [`mlabonne/guanaco-llama2-1k`](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k) dataset, a subset of [`timdettmers/openassistant-guanaco`](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).

## 🔧 Training

It was trained in a Google Colab notebook with a T4 GPU and high RAM. It is mainly intended for educational purposes, not for inference.
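
For intuition on why 4-bit precision is what makes fine-tuning a 7B model feasible on a single 16 GB T4, here is a rough back-of-the-envelope estimate of weight-storage memory (illustrative arithmetic, not figures stated in this card; it counts weights only and ignores activations, gradients, and optimizer state):

```python
# Approximate memory needed just to store the weights of a model
# with n_params parameters at a given precision.
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    return n_params * bits_per_param / 8 / 1e9

params = 7e9
print(weight_memory_gb(params, 16))  # 14.0 GB in fp16: already tight on a 16 GB T4
print(weight_memory_gb(params, 4))   # 3.5 GB in 4-bit: leaves headroom for LoRA adapters
```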

## 💻 Usage

```python
# pip install transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/llama-2-7b-miniguanaco"
prompt = "What is a large language model?"

tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

sequences = pipeline(
    f'<s>[INST] {prompt} [/INST]',
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    max_length=200,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
 
Output:

> A large language model is trained on massive amounts of text data to understand and generate human language. The model learns by predicting the next word in a sequence based on the context of the previous words. This process allows the language model to learn patterns, rules, and relationships within the language that allow it to generate text that looks and sounds authentic and coherent. These large language models are used for many applications, such as language translation, sentiment analysis, and language generation. These models can also be used to generate text summaries of complex documents, such as legal or scientific papers, or to generate text summaries of social media posts. These models are often used in natural language processing (NLP) and machine learning applications.
> The large language models are trained using a large number of parameters, often in the billions or even in the tens of billions.
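
The `<s>[INST] ... [/INST]` string passed to the pipeline is Llama 2's chat template for a single user turn. As a convenience, that formatting can be factored into a small helper (a hypothetical function for illustration, not part of this model card):

```python
# Hypothetical helper: wraps one user instruction in the Llama 2
# single-turn chat template used in the usage example above.
def build_llama2_prompt(instruction: str) -> str:
    return f"<s>[INST] {instruction} [/INST]"

print(build_llama2_prompt("What is a large language model?"))
# <s>[INST] What is a large language model? [/INST]
```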