lxyuan committed
Commit fbe6327
1 Parent(s): 7ea2327

Update README.md

Files changed (1): README.md +91 -3

---
language:
- en
license: apache-2.0
datasets:
- tatsu-lab/alpaca
tags:
- text-generation-inference
- transformers
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---

## lxyuan/llama-3-8b-Instruct-lora-merged

**Model Description**: Finetuned the [Llama-3-8B-Instruct model](https://huggingface.co/unsloth/llama-3-8b-Instruct-bnb-4bit) with [unsloth](https://github.com/unslothai/unsloth) on the [Alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca) for 1000 steps (the dataset's prompt format is shown right after the list below).

- **Developed by:** lxyuan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3-8b-Instruct-bnb-4bit
- **Finetuned on dataset:** tatsu-lab/alpaca
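
For reference, each record in tatsu-lab/alpaca ships a pre-formatted `text` column combining the instruction, optional input, and response. A minimal snippet to inspect it (the column names come from the dataset's own schema, not from this card):

```python
from datasets import load_dataset

# Load the Alpaca training split and print one pre-formatted prompt
dataset = load_dataset("tatsu-lab/alpaca", split="train")
print(dataset[0]["text"])  # merges the "instruction", "input", and "output" fields
```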

## Installation

```python
import torch

# Check the GPU's CUDA compute capability to pick the right extras
major_version, minor_version = torch.cuda.get_device_capability()

# Must install separately since Colab has torch 2.2.1, which breaks packages
!pip install "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"

if major_version >= 8:
    # New GPUs such as Ampere and Hopper (RTX 30xx, RTX 40xx, A100, H100, L40)
    !pip install --no-deps packaging ninja einops flash-attn xformers trl peft accelerate bitsandbytes
else:
    # Older GPUs (V100, Tesla T4, RTX 20xx)
    !pip install --no-deps xformers trl peft accelerate bitsandbytes
```

## Inference example

```python
from transformers import pipeline
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lxyuan/llama-3-8b-Instruct-lora-merged",
    dtype=None,        # auto-detect
    load_in_4bit=True, # default is True
)

FastLanguageModel.for_inference(model)  # enable native 2x faster inference

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {"role": "system", "content": "You are a helpful AI bot that follows instructions to complete tasks."},
    {"role": "user", "content": "Write me 10 sentences that end with 'apple'"},
]

prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Stop on either the model's EOS token or Llama 3's end-of-turn token
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipe(
    prompt,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)

print(outputs[0]["generated_text"])
```

#### Inference output

```markdown
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI bot that follows instructions to complete tasks.<|eot_id|><|start_header_id|>user<|end_header_id|>

Write me 10 sentences that end with 'apple'<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Here are 10 sentences that end with the word "apple":

1. The farmer grew a juicy red apple.
2. She ate a crunchy green apple.
3. The tree bore a ripe yellow apple.
4. He bit into a sweet Granny Smith apple.
5. The basket was filled with fresh apples.
6. The juice was squeezed from a ripe red apple.
7. She picked a perfect autumn apple.
8. The pie was filled with tender Granny Smith apple.
9. The farmer's market sold a variety of apples.
10. The snack was a crisp, juicy apple.
```
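
Since the repository name indicates the LoRA weights were merged back into the base model, the checkpoint should also load through plain transformers without unsloth. A minimal sketch, assuming a standard merged checkpoint; the dtype and device settings here are illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Plain-transformers loading of the merged checkpoint
tokenizer = AutoTokenizer.from_pretrained("lxyuan/llama-3-8b-Instruct-lora-merged")
model = AutoModelForCausalLM.from_pretrained(
    "lxyuan/llama-3-8b-Instruct-lora-merged",
    torch_dtype=torch.bfloat16,  # assumption: use fp16 if bf16 is unsupported
    device_map="auto",
)
```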

## Training procedure

- [Finetuning notebook](https://github.com/LxYuan0420/nlp/blob/main/notebooks/Lora_finetuning_Llama_3_8b_Instruct_with_Alpaca.ipynb)
- [Original notebook from unsloth](https://colab.research.google.com/drive/135ced7oHytdxu3N2DNe1Z0kqjyYIkDXp?usp=sharing#scrollTo=MKX_XKs_BNZR)
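
The notebooks above contain the full recipe. As a rough outline, the core unsloth LoRA setup looks like the sketch below; every hyperparameter in it is an illustrative assumption taken from unsloth's example notebook, except the base model, the dataset, and the 1000-step budget stated in this card:

```python
import torch
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 2048  # assumption; not stated in this card

# Load the 4-bit base model named in this card
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-Instruct-bnb-4bit",
    max_seq_length=max_seq_length,
    dtype=None,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank/alpha/target modules follow unsloth's example notebook
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing=True,
)

dataset = load_dataset("tatsu-lab/alpaca", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # Alpaca's pre-formatted prompt column
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        warmup_steps=5,
        max_steps=1000,  # the step budget stated in this card
        learning_rate=2e-4,
        fp16=not torch.cuda.is_bf16_supported(),
        bf16=torch.cuda.is_bf16_supported(),
        logging_steps=10,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```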