aashish1904 committed
Commit
504bab5
1 Parent(s): 0ce771d

Upload README.md with huggingface_hub

Files changed (1): README.md (+201, -0)
README.md ADDED

---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
tags:
- generated_from_trainer
model-index:
- name: home/migel/tess-2.5-mistral-7B-phase-1
  results: []
---

![](https://cdn.discordapp.com/attachments/791342238541152306/1264099835221381251/image.png?ex=669ca436&is=669b52b6&hm=129f56187c31e1ed22cbd1bcdbc677a2baeea5090761d2f1a458c8b1ec7cca4b&)

# QuantFactory/Tess-3-7B-SFT-GGUF
This is a quantized version of [migtissera/Tess-3-7B-SFT](https://huggingface.co/migtissera/Tess-3-7B-SFT), created using llama.cpp.
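
To run the GGUF files directly, a lightweight option is llama-cpp-python. The snippet below is a minimal, unofficial sketch; the quantization filename and context size are assumptions and should be adjusted to whichever file you download from this repo.

```python
# Minimal sketch: run a GGUF quant of Tess-3-7B-SFT with llama-cpp-python.
# The filename below is hypothetical -- substitute the quant you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Tess-3-7B-SFT.Q4_K_M.gguf",  # assumed local filename
    n_ctx=4096,            # raise towards 32K if you have the memory for it
    chat_format="chatml",  # the model card specifies ChatML prompts
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Tesoro, a helpful AI assistant."},
        {"role": "user", "content": "Explain what a GGUF file is in one paragraph."},
    ],
    max_tokens=256,
    temperature=0.75,
)
print(response["choices"][0]["message"]["content"])
```

The same GGUF file also works with the llama.cpp CLI or any other GGUF-compatible runtime.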

# Original Model Card

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`
```yaml
base_model: mistralai/Mistral-7B-v0.3
model_type: AutoModelForCausalLM
tokenizer_type: AutoTokenizer
tokenizer_use_fast: false

load_in_8bit: false
load_in_4bit: false
strict: false
model_config:

datasets:
  - path: /home/migel/ai_datasets/tess-v1.5b-chatml.jsonl
    type: sharegpt
    conversation: chatml
  - path: /home/migel/ai_datasets/Tess-3.0/Tess-3.0-multi_turn_chatml.jsonl
    type: sharegpt
    conversation: chatml
  - path: /home/migel/ai_datasets/Tess-3.0/Tess-3.0-single_turn_chatml.jsonl
    type: sharegpt
    conversation: chatml

chat_template: chatml

dataset_prepared_path: last_run_prepared_mistral
val_set_size: 0.0
output_dir: /home/migel/tess-2.5-mistral-7B-phase-1

resume_from_checkpoint: /home/migel/tess-2.5-mistral-7B-phase-1/checkpoint-440
auto_resume_from_checkpoints: true

sequence_len: 16384
sample_packing: true
pad_to_sequence_len: true

gradient_accumulation_steps: 4
micro_batch_size: 4
num_epochs: 1
logging_steps: 1
optimizer: adamw_8bit
lr_scheduler: constant
learning_rate: 1e-6

wandb_project: mistral-7b
wandb_watch:
wandb_run_id:
wandb_log_model:

train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false

gradient_checkpointing: true
gradient_checkpointing_kwargs:
  use_reentrant: false
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
saves_per_epoch: 10
evals_per_epoch: 10
save_total_limit: 3
save_steps:
eval_sample_packing: false
debug:
deepspeed: /home/migel/axolotl/deepspeed_configs/zero3_bf16.json
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<|im_start|>"
  eos_token: "<|im_end|>"
  pad_token: "<|end_of_text|>"
```

</details><br>

# Tess-3-7B-SFT

![Tess-3](https://huggingface.co/migtissera/Tess-v2.5-Qwen2-72B/resolve/main/Tess-v2.5.png)

Tess-3-7B is a fine-tuned version of the Mistral-7B-v0.3 base model. This version is the first phase of the final Tess-3 model and has been trained with supervised fine-tuning (SFT) on a curated dataset of ~500K samples. The total SFT dataset contains about 1B tokens.

This model has a 32K context length.

# Sample code to run inference

Note that this model uses the ChatML prompt format.
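
For reference, a single-turn prompt in this format (with the same system message the script below uses) looks like:

```
<|im_start|>system
You are Tesoro, a helpful AI assistant. You always provide detailed answers without hesitation.<|im_end|>
<|im_start|>user
{your question here}<|im_end|>
<|im_start|>assistant
```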

```python
import torch, json
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "migtissera/Tess-3-7B-SFT"
output_file_path = "/home/migel/conversations.jsonl"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_4bit=False,
    trust_remote_code=True,
)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

# Stop generation at the ChatML end-of-turn token.
terminators = [
    tokenizer.convert_tokens_to_ids("<|im_end|>")
]


def generate_text(instruction):
    tokens = tokenizer.encode(instruction)
    tokens = torch.LongTensor(tokens).unsqueeze(0)
    tokens = tokens.to("cuda")

    instance = {
        "input_ids": tokens,
        "top_p": 1.0,
        "temperature": 0.75,
        "generate_len": 1024,
        "top_k": 50,
    }

    length = len(tokens[0])
    with torch.no_grad():
        rest = model.generate(
            input_ids=tokens,
            max_length=length + instance["generate_len"],
            use_cache=True,
            do_sample=True,
            top_p=instance["top_p"],
            temperature=instance["temperature"],
            top_k=instance["top_k"],
            num_return_sequences=1,
            pad_token_id=tokenizer.eos_token_id,
            eos_token_id=terminators,
        )
    # Decode only the newly generated tokens, dropping the prompt.
    output = rest[0][length:]
    string = tokenizer.decode(output, skip_special_tokens=True)
    return f"{string}"


conversation = f"""<|im_start|>system\nYou are Tesoro, a helpful AI assistant. You always provide detailed answers without hesitation.<|im_end|>\n<|im_start|>user\n"""

while True:
    user_input = input("You: ")
    llm_prompt = f"{conversation}{user_input}<|im_end|>\n<|im_start|>assistant\n"
    answer = generate_text(llm_prompt)
    print(answer)
    # Close the assistant turn and open the next user turn so the running
    # conversation keeps a consistent ChatML structure.
    conversation = f"{llm_prompt}{answer}<|im_end|>\n<|im_start|>user\n"
    json_data = {"prompt": user_input, "answer": answer}

    with open(output_file_path, "a") as output_file:
        output_file.write(json.dumps(json_data) + "\n")
```

# Join My General AI Discord (NeuroLattice):
https://discord.gg/Hz6GrwGFKD

# Limitations & Biases:

While this model aims for accuracy, it can occasionally produce inaccurate or misleading results.

Despite diligent efforts in refining the pretraining data, there remains a possibility for the generation of inappropriate, biased, or offensive content.

Exercise caution and cross-check information when necessary. This is an uncensored model.