---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code
model-index:
- name: Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 46.33
      name: strict accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 26.4
      name: normalized accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 10.5
      name: exact match
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.28
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 5.01
      name: acc_norm
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 28.5
      name: accuracy
    source:
      url: https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard?query=EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT
      name: Open LLM Leaderboard
---

# Agent Llama with tasks

An experimental fine-tuning technique that lets Llama 3.1 8B act as an agentic coder with tasks and CoT (Chain of Thought). It was fine-tuned on a code dataset and Glaive's CoT Tasks dataset for a coder agent.
It has several built-in agent features:
- search
- calculator
- ReAct. [Synergizing Reasoning and Acting in Language Models](https://arxiv.org/abs/2210.03629)
  - fine-tuned ReAct for better responses

Other notable features:
- Self-learning using Unsloth (in progress)
- Can be used in RAG applications
- Memory. [**Please use Langchain memory, section Message persistence**](https://python.langchain.com/docs/tutorials/chatbot/)

It is well suited for LangChain or LlamaIndex.

Context Window: 128K

### Installation
```bash
pip install --upgrade "transformers>=4.43.2" torch==2.3.1 accelerate vllm==0.5.3.post1
```

Developers can easily integrate EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT into their projects using popular libraries like Transformers and vLLM. The following sections illustrate the usage with simple hands-on examples:

Optional: to use the built-in tools, add this to the system prompt: "Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n"


## Use Alpaca Prompt template: 
```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instructions:
{}

### Input:
{}

### Response:
{}"""
```
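As a quick illustration, the template's three slots can be filled with Python's `str.format`; the instruction and input strings below are hypothetical examples, with the response slot left empty for the model to complete:

```python
alpaca_prompt = """Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instructions:
{}

### Input:
{}

### Response:
{}"""

# Fill the three slots: instruction, input, and an empty response for generation.
prompt = alpaca_prompt.format(
    "Summarize the concept below in one sentence.",  # hypothetical instruction
    "Occam's Razor",                                 # hypothetical input
    "",                                              # left empty for the model
)
print(prompt)
```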
## Recommended system prompt for general use:
```python
"""
You should reason about the input and provide a logical explanation.
The explanation should follow these rules:
- The explanation should be written at graduate level engineering, science, math and literature
- The explanation should be split into subtasks
- The explanation should always end with 2-3 related concepts.
- subtasks have their own chain of thoughts
"""
```
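Either recommended system prompt is passed as the first message of the chat. A minimal sketch of the message structure (the user question here is a hypothetical example):

```python
system_prompt = (
    "You should reason about the input and provide a logical explanation.\n"
    "The explanation should follow these rules:\n"
    "- The explanation should be written at graduate level engineering, science, math and literature\n"
    "- The explanation should be split into subtasks\n"
    "- The explanation should always end with 2-3 related concepts.\n"
    "- subtasks have their own chain of thoughts\n"
)

# Chat-format messages as consumed by transformers pipelines / apply_chat_template.
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Explain how a hash table achieves O(1) average lookup."},  # hypothetical question
]
print(messages[0]["role"], "->", messages[1]["content"])
```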

## Recommended system prompt for coding:
```python
"""
    Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
    You are a coding assistant with expert with everything\n
    Ensure any code you provide can be executed \n
    with all required imports and variables defined. List the imports.  Structure your answer with a description of the code solution. \n
    write only the code. do not print anything else.\n
    debug code if error occurs. \n
    Here is the user question: {question}
    """
```

### Conversational Use-case
#### Use with [Transformers](https://github.com/huggingface/transformers)
##### Using the `transformers.pipeline()` API; 4-bit quantization is recommended for fast responses.
```python
import transformers
import torch

from transformers import BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)

model_id = "EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT"
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"quantization_config": quantization_config},  # 4-bit for fast responses; remove for full 16-bit inference
    device_map="auto",
)
messages = [
    {"role": "system", "content":  """
    Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
    You are a coding assistant with expert with everything\n
    Ensure any code you provide can be executed \n
    with all required imports and variables defined. List the imports.  Structure your answer with a description of the code solution. \n
    write only the code. do not print anything else.\n
    debug code if error occurs. \n
    Here is the user question: {question}
    """},
    {"role": "user", "content": "Create a bar plot showing the market capitalization of the top 7 publicly listed companies using matplotlib"}
]
outputs = pipeline(messages, max_new_tokens=128, do_sample=True, temperature=0.01, top_k=100, top_p=0.95)
print(outputs[0]["generated_text"][-1])  
```

# Example:
See sample code using LangChain in this [Colab notebook](https://colab.research.google.com/drive/129SEHVRxlr24r73yf34BKnIHOlD3as09?authuser=1)

# Unsloth Fast

```python
%%capture
# Installs Unsloth, Xformers (Flash Attention) and all other packages!
!pip install unsloth
# Get latest Unsloth
!pip install --upgrade --no-deps "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
!pip install langchain_experimental

from unsloth import FastLanguageModel
from transformers import TextStreamer
from google.colab import userdata


# 4bit pre quantized models we support for 4x faster downloading + no OOMs.
fourbit_models = [
    "unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
    "unsloth/gemma-7b-it-bnb-4bit",
] # More models at https://huggingface.co./unsloth

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "EpistemeAI2/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT",
    max_seq_length = 128000,
    load_in_4bit = True,
    token = userdata.get('HF_TOKEN')
)
def chatbot(query):
  messages = [
      {"from": "system", "value":
       """
      Environment: ipython. Tools: brave_search, wolfram_alpha. Cutting Knowledge Date: December 2023. Today Date: 4 October 2024\n
      You are an expert coding assistant.\n
      Ensure any code you provide can be executed with all required imports and variables defined. List the imports.\n
      Structure your answer with a description of the code solution.\n
      Write only the code. Do not print anything else.\n
      Use ipython for the search tool.\n
      Debug the code if an error occurs.\n
      Here is the user question: {question}
      """
       },
      {"from": "human", "value": query},
  ]
  inputs = tokenizer.apply_chat_template(messages, tokenize = True, add_generation_prompt = True, return_tensors = "pt").to("cuda")

  text_streamer = TextStreamer(tokenizer)
  _ = model.generate(input_ids = inputs, streamer = text_streamer, max_new_tokens = 2048, use_cache = True)
```


# Response

```text
<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instructions:
Concept: Abductive Reasoning

### Input:
Split in tasks for Concept: Mathematical Proof Structuring

### Response 1:
Task 1:
- Identify the key components of a mathematical proof, including the statement, assumptions, and conclusion.
- Explain the role of each component in the overall structure of the proof.

Task 2:
- Describe the process of abductive reasoning in the context of mathematical proof.
- Provide an example of how abductive reasoning can be used to infer a mathematical statement.

Task 3:
- Discuss the importance of structuring a mathematical proof.
- Explain how a well-structured proof can help to establish the validity of a mathematical statement.

Task 4:
- Compare and contrast deductive and abductive reasoning in the context of mathematical proof.
- Provide an example of how each type of reasoning can be used to establish the validity of a mathematical statement.

Task 5:
- Design a template for structuring a mathematical proof.
- Explain how the template can be used to ensure that all necessary components are included in the proof.<|im_end|>
```
### Response 2:

```text
<|begin_of_text|>Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instructions:
Concept: Occam's Razor

### Input:
Split in tasks for Concept: Occam's Razor and perform the task

### Response:
Split the task into the following subtasks:

1.  **Task 1: Define Occam's Razor**
    *   Definition: Occam's Razor is a principle used in problem-solving and evidence-based reasoning, stating that, all things being equal, the simplest explanation or solution is usually the best one.
2.  **Task 2: Explain the Key Components of Occam's Razor**
    *   Components: 
        *   **Simplicity**: The principle favors explanations that require fewer assumptions or hypothetical entities.
        *   **Evidence**: It emphasizes the importance of empirical evidence in supporting or rejecting explanations.
        *   ** Parsimony**: Occam's Razor encourages the selection of explanations that are more parsimonious, meaning they require fewer assumptions or entities.
3.  **Task 3: Provide Examples of Occam's Razor in Action**
    *   Examples: 
        *   **The Solar System**: The ancient Greeks proposed a complex system with multiple celestial spheres. Occam's Razor would suggest a simpler explanation, like the Copernican heliocentric model.
        *   **Medical Diagnosis**: A doctor might initially suspect a rare disease, but Occam's Razor would favor a more common and simpler explanation, such as a viral infection
```

# Execute code (Make sure to use virtual environments)
```bash
python3 -m venv env
source env/bin/activate
```

## Executing code responses from Llama
#### For local execution, use the execute-Python-code function below. For LangChain, use `PythonREPL()` to execute code.

Execute-code function, run locally in Python:
```python
import io
import contextlib

def execute_Python_code(code):
    # A string stream to capture the outputs of exec
    output = io.StringIO()
    try:
        # Redirect stdout to the StringIO object
        with contextlib.redirect_stdout(output):
            # Allow imports
            exec(code, globals())
    except Exception as e:
        # If an error occurs, capture it as part of the output
        print(f"Error: {e}", file=output)
    return output.getvalue()
```
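The same stdout-capture pattern can be exercised as a self-contained sketch; `run_and_capture` below is a hypothetical helper mirroring the function above, and the sample code strings are invented for illustration:

```python
import io
import contextlib

def run_and_capture(code: str) -> str:
    # Run `code` with exec() and return whatever it printed,
    # turning any exception into an "Error: ..." line.
    output = io.StringIO()
    try:
        with contextlib.redirect_stdout(output):
            exec(code, globals())
    except Exception as e:
        print(f"Error: {e}", file=output)
    return output.getvalue()

print(run_and_capture("print(2 + 2)"))  # captured stdout: "4"
print(run_and_capture("1 / 0"))         # captured error message
```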

LangChain Python REPL
- Install

```bash
!pip install langchain_experimental
```

Code: 
```python
from langchain_core.tools import Tool
from langchain_experimental.utilities import PythonREPL

python_repl = PythonREPL()

# You can create the tool to pass to an agent
repl_tool = Tool(
    name="python_repl",
    description="A Python shell. Use this to execute python commands. Input should be a valid python command. If you want to see the output of a value, you should print it out with `print(...)`.",
    func=python_repl.run,
)
repl_tool.run(outputs[0]["generated_text"][-1]["content"])
```

# Safety inputs/outputs procedures
For all inputs, please use Llama-Guard (meta-llama/Llama-Guard-3-8B) for safety classification.
See the model card: [Llama-Guard](https://huggingface.co./meta-llama/Llama-Guard-3-8B)

## Other uses

#### ToT - Tree of Thought
- Use system prompt: 
```python
"Imagine three different experts are answering this question.
All experts will write down 1 step of their thinking,
then share it with the group.
Then all experts will go on to the next step, etc.
If any expert realises they're wrong at any point then they leave.
The question is..."
```
#### ReAct
example from langchain agent - [langchain React agent](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/react/agent.py)
- Use system prompt:
```python
"""
Answer the following questions as best you can. You have access to the following tools:

            {tools}

            Use the following format:

            Question: the input question you must answer
            Thought: you should always think about what to do
            Action: the action to take, should be one of [{tool_names}]
            Action Input: the input to the action
            Observation: the result of the action
            ... (this Thought/Action/Action Input/Observation can repeat N times)
            Thought: I now know the final answer
            Final Answer: the final answer to the original input question

            Begin!

            Question: {input}
            Thought:{agent_scratchpad}
"""
```
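To see the resulting scaffold, the template can be filled with `str.format`; the tool name and question below are hypothetical placeholders (a real agent would render `{tools}` and `{tool_names}` from its registered Tool objects):

```python
react_template = """Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
"""

# Hypothetical tool description and question, for illustration only.
prompt = react_template.format(
    tools="python_repl: A Python shell for executing commands.",
    tool_names="python_repl",
    input="What is 17 squared?",
    agent_scratchpad="",
)
print(prompt)
```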


# Uploaded model

- **Developed by:** EpistemeAI
- **License:** apache-2.0
- **Finetuned from model:** EpistemeAI/Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.003-128K-code

This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co./spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co./datasets/open-llm-leaderboard/details_EpistemeAI2__Fireball-Meta-Llama-3.1-8B-Instruct-Agent-0.005-128K-code-COT)

|      Metric       |Value|
|-------------------|----:|
|Avg.               |20.84|
|IFEval (0-Shot)    |46.33|
|BBH (3-Shot)       |26.40|
|MATH Lvl 5 (4-Shot)|10.50|
|GPQA (0-shot)      | 8.28|
|MuSR (0-shot)      | 5.01|
|MMLU-PRO (5-shot)  |28.50|