This model was converted to GGUF format from [`ValiantLabs/Llama3.1-8B-Fireplace2`](https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
  Refer to the [original model card](https://huggingface.co/ValiantLabs/Llama3.1-8B-Fireplace2) for more details on the model.

---

## Model details
Fireplace 2 is a chat model that adds helpful structured outputs to Llama 3.1 8b Instruct.

It includes an expansion pack of supplementary outputs, which you can request at will within your chat:

- Inline function calls
- SQL queries
- JSON objects
- Data visualization with matplotlib

Normal chat and structured outputs can be mixed within the same conversation. Fireplace 2 supplements the existing strengths of Llama 3.1, providing inline capabilities within the Llama 3 Instruct format.

### Version

This is the 2024-07-23 release of Fireplace 2 for Llama 3.1 8b.

We're excited to bring further upgrades and releases to Fireplace 2 in the future. Help us out by recommending Fireplace 2 to your friends!

### Prompting Guide

Fireplace uses the Llama 3.1 Instruct prompt format. The example script below can be used as a starting point for general chat with Llama 3.1; it also includes the special tokens used for Fireplace 2's added features:

```python
import transformers
import torch

model_id = "ValiantLabs/Llama3.1-8B-Fireplace2"

pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
    {"role": "user", "content": "Hi, can you explain local area networking to me?"},  # general Llama 3.1 chat
    # {"role": "user", "content": "I have the following SQL table: employees (job_id VARCHAR, salary INTEGER)\n\nCan you find all employees with a salary above $75000?<|request_sql|>"},  # for SQL query
    # {"role": "user", "content": '{"name": "get_news_headlines", "description": "Get the latest news headlines", "parameters": {"type": "object", "properties": {"country": {"type": "string", "description": "The country for which news headlines are to be retrieved"}}, "required": ["country"]}}\n\nHi, can you get me the latest news headlines for the United States?<|request_function_call|>'},  # for function call
    # {"role": "user", "content": "Show me an example of a histogram with a fixed bin size. Use attractive colors.<|request_matplotlib|>"},  # for data visualization
    # {"role": "user", "content": "Can you define the word 'presence' for me, thanks!<|request_json|>"},  # for JSON output
]

outputs = pipeline(
    messages,
    max_new_tokens=512,
)
print(outputs[0]["generated_text"][-1])
```

While Fireplace 2 is trained to minimize incorrect structured outputs, they can still occur occasionally. Production uses of Fireplace 2 should verify the structure of all model outputs and remove any unneeded components of the output.
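
One way to implement that verification, sketched in Python (the `extract_json` helper below is our illustration, not something shipped with Fireplace 2):

```python
import json
import re

# Illustrative helper (an assumption, not part of Fireplace 2's tooling):
# pull the payload from between the <|start_json|> / <|end_json|> markers
# and validate it as JSON before passing it downstream.
def extract_json(generated_text):
    match = re.search(r"<\|start_json\|>(.*?)<\|end_json\|>", generated_text, re.DOTALL)
    if match is None:
        return None  # no structured block in the output
    try:
        return json.loads(match.group(1).strip())
    except json.JSONDecodeError:
        return None  # block was present but malformed; caller should fall back

# Example with a hypothetical model output:
output = 'Sure!<|start_json|>{"word": "presence", "definition": "the state of being present"}<|end_json|>'
parsed = extract_json(output)
```

The same pattern applies to the SQL, matplotlib, and function-call markers.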

For handling of function call responses, use the Llama 3.1 Instruct tool response style.
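
As a sketch of that style (the `ipython` role name follows the Llama 3.1 Instruct convention for tool results; the payloads here are illustrative, so verify against the official Llama 3.1 prompt format docs):

```python
# Illustrative function-call round trip. After executing the requested
# function yourself, append its result as its own message and generate again.
messages = [
    {"role": "system", "content": "You are Fireplace, an expert technical assistant."},
    {"role": "user", "content": "Hi, can you get me the latest news headlines for the United States?<|request_function_call|>"},
    {"role": "assistant", "content": '<|start_function_call|>{"name": "get_news_headlines", "arguments": {"country": "United States"}}<|end_function_call|>'},
    # Tool response turn, fed back before the next generation:
    {"role": "ipython", "content": '{"headlines": ["Example headline 1", "Example headline 2"]}'},
]
```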

### Special Tokens

Fireplace 2 utilizes special tokens added to the Llama 3.1 tokenizer:

- `<|request_json|>` / `<|start_json|>` / `<|end_json|>`
- `<|request_sql|>` / `<|start_sql|>` / `<|end_sql|>`
- `<|request_matplotlib|>` / `<|start_matplotlib|>` / `<|end_matplotlib|>`
- `<|request_function_call|>` / `<|start_function_call|>` / `<|end_function_call|>`

These are supplemental to the existing special tokens used by Llama 3.1, such as `<|python_tag|>` and `<|start_header_id|>`. Fireplace 2 has been trained using the Llama 3.1 Instruct chat structure, with the new special tokens added within the conversation.

The 'request' tokens are used by the user to request a specific type of structured output. They should be appended to the end of the user's message and can be alternated with normal chat responses throughout the conversation.
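
A minimal sketch of that pattern (the user text is illustrative):

```python
# Ask for JSON output on this turn by appending the request token
# to the end of the user's message.
user_text = "Can you define the word 'presence' for me, thanks!"
message = {"role": "user", "content": user_text + "<|request_json|>"}
```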

### The Model

Fireplace 2 is built on top of Llama 3.1 8b Instruct.

This version of Fireplace 2 uses data from the following datasets:

- glaiveai/glaive-function-calling-v2
- b-mc2/sql-create-context
- sequelbox/Cadmium
- sequelbox/Harlequin
- migtissera/Tess-v1.5
- LDJnr/Pure-Dove

Additional capabilities will be added to future releases.

---

  ## Use with llama.cpp
  Install llama.cpp through brew (works on Mac and Linux)