lgaalves committed
Commit e53612c (1 parent: 0bb6ebe)

Update README.md

Files changed (1): README.md (+15, -14)
README.md CHANGED
@@ -9,27 +9,28 @@ pipeline_tag: text-generation
 
 
 
-# tinyllama-1.1b-chat-v0.3-platypus
+# tinyllama-1.1b-chat-v0.3_platypus
 
-**tinyllama-1.1b-chat-v0.3-platypus** is an instruction fine-tuned model based on the tinyllama transformer architecture.
+**tinyllama-1.1b-chat-v0.3_platypus** is an instruction fine-tuned model based on the tinyllama transformer architecture.
 
 
 ### Benchmark Metrics
 
-| Metric |lgaalves/tinyllama-1.1b-chat-v0.3-platypus | tinyllama-1.1b-chat-v0.3 |
+| Metric |lgaalves/tinyllama-1.1b-chat-v0.3_platypus | tinyllama-1.1b-chat-v0.3 |
 |-----------------------|-------|-------|
-| Avg. | - | 38.74 |
-| ARC (25-shot) | - | 35.07 |
-| HellaSwag (10-shot) | - | 57.7 |
-| MMLU (5-shot) | - | 25.53 |
-| TruthfulQA (0-shot) | - | 36.67 |
+| Avg. | 37.67 | **38.74** |
+| ARC (25-shot) | 30.29 | **35.07** |
+| HellaSwag (10-shot) | 55.12 | **57.7** |
+| MMLU (5-shot) | **26.13** | 25.53 |
+| TruthfulQA (0-shot) | **39.15** | 36.67 |
+
 
 We use state-of-the-art [Language Model Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness) to run the benchmark tests above, using the same version as the HuggingFace LLM Leaderboard. Please see below for detailed instructions on reproducing benchmark results.
 
 ### Model Details
 
 * **Trained by**: Luiz G A Alves
-* **Model type:** **tinyllama-1.1b-chat-v0.3-platypus** is an auto-regressive language model based on the tinyllama transformer architecture.
+* **Model type:** **tinyllama-1.1b-chat-v0.3_platypus** is an auto-regressive language model based on the tinyllama transformer architecture.
 * **Language(s)**: English
 
 ### How to use:
@@ -37,7 +38,7 @@ We use state-of-the-art [Language Model Evaluation Harness](https://github.com/E
 ```python
 # Use a pipeline as a high-level helper
 >>> from transformers import pipeline
->>> pipe = pipeline("text-generation", model="lgaalves/tinyllama-1.1b-chat-v0.3-platypus")
+>>> pipe = pipeline("text-generation", model="lgaalves/tinyllama-1.1b-chat-v0.3_platypus")
 >>> question = "What is a large language model?"
 >>> answer = pipe(question)
 >>> print(answer[0]['generated_text'])
@@ -49,17 +50,17 @@ or, you can load the model direclty using:
 # Load model directly
 from transformers import AutoTokenizer, AutoModelForCausalLM
 
-tokenizer = AutoTokenizer.from_pretrained("lgaalves/tinyllama-1.1b-chat-v0.3-platypus")
-model = AutoModelForCausalLM.from_pretrained("lgaalves/tinyllama-1.1b-chat-v0.3-platypus")
+tokenizer = AutoTokenizer.from_pretrained("lgaalves/tinyllama-1.1b-chat-v0.3_platypus")
+model = AutoModelForCausalLM.from_pretrained("lgaalves/tinyllama-1.1b-chat-v0.3_platypus")
 ```
 
 ### Training Dataset
 
-`lgaalves/tinyllama-1.1b-chat-v0.3-platypus` trained using STEM and logic based dataset [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
+`lgaalves/tinyllama-1.1b-chat-v0.3_platypus` trained using STEM and logic based dataset [garage-bAInd/Open-Platypus](https://huggingface.co/datasets/garage-bAInd/Open-Platypus).
 
 ### Training Procedure
 
-`lgaalves/tinyllama-1.1b-chat-v0.3-platypus` was instruction fine-tuned using LoRA on 1 V100 GPU on Google Colab. It took about 43 minutes to train it.
+`lgaalves/tinyllama-1.1b-chat-v0.3_platypus` was instruction fine-tuned using LoRA on 1 V100 GPU on Google Colab. It took about 43 minutes to train it.
 
 
 # Intended uses, limitations & biases
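
The Training Procedure entry in the updated README states only that the model was instruction fine-tuned with LoRA on a single V100 in Google Colab. Below is a minimal sketch of what such a run could look like, assuming the `peft`, `transformers`, and `datasets` libraries; the base checkpoint name, prompt format, and hyperparameters (rank, alpha, target modules, batch size, epochs) are illustrative assumptions, not the settings used for this checkpoint.

```python
# Illustrative sketch only: LoRA instruction fine-tuning of a TinyLlama chat model
# on Open-Platypus. All hyperparameters below are assumptions, not the values used
# for lgaalves/tinyllama-1.1b-chat-v0.3_platypus.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "PY007/TinyLlama-1.1B-Chat-v0.3"  # assumed base checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16)

# Wrap the base model with LoRA adapters (rank/alpha/target modules are assumptions).
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Open-Platypus rows carry `instruction` and `output` fields; format them as prompts.
def to_text(example):
    return {"text": f"### Instruction:\n{example['instruction']}\n\n"
                    f"### Response:\n{example['output']}"}

dataset = load_dataset("garage-bAInd/Open-Platypus", split="train").map(to_text)
tokenized = dataset.map(lambda e: tokenizer(e["text"], truncation=True, max_length=1024),
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="tinyllama-platypus-lora",
                           per_device_train_batch_size=4,
                           gradient_accumulation_steps=4,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           fp16=True,
                           logging_steps=50),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("tinyllama-platypus-lora")  # saves only the LoRA adapter weights
```

After training, the LoRA adapters can be merged into the base weights with `merge_and_unload()` if a standalone checkpoint is wanted for upload.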