---
license: llama2
base_model: codellama/CodeLlama-13b-Instruct-hf
model-index:
- name: NexusRaven-13B
  results: []
---
# NexusRaven-13B: Surpassing the state-of-the-art in open-source function calling LLMs. 

<p align="center">
<a href="https://huggingface.co./Nexusflow" target="_blank">Nexusflow HF</a> - <a href="http://nexusflow.ai/blog" target="_blank">NexusRaven blog post</a> - <a href="https://huggingface.co./Nexusflow/NexusRaven-13B" target="_blank">NexusRaven-13B</a> - <a href="https://x.com/NexusflowX/status/1707470614012035561?s=20" target="_blank">NexusRaven-13B Twitter Thread</a> - <a href="https://github.com/nexusflowai/NexusRaven/" target="_blank">NexusRaven-13B Github</a> - <a href="https://huggingface.co./datasets/Nexusflow/NexusRaven_API_evaluation" target="_blank">NexusRaven API evaluation dataset</a>
</p>

<p align="center" width="100%">
<a><img src="NexusRaven.png" alt="NexusRaven" style="width: 40%; min-width: 300px; display: block; margin: auto;"></a>
</p>

Table of contents
- [NexusRaven-13B: Surpassing the state-of-the-art in open-source function calling LLMs.](#nexusraven-13b-surpassing-the-state-of-the-art-in-open-source-function-calling-llms)
  - [Introducing NexusRaven-13B](#introducing-nexusraven-13b)
  - [NexusRaven model usage](#nexusraven-model-usage)
  - [Training procedure](#training-procedure)
    - [Training hyperparameters](#training-hyperparameters)
    - [Framework versions](#framework-versions)
- [Limitations](#limitations)
  - [License](#license)
  - [References](#references)
  - [Citation](#citation)
  - [Contact](#contact)


This model is a fine-tuned version of [codellama/CodeLlama-13b-Instruct-hf](https://huggingface.co./codellama/CodeLlama-13b-Instruct-hf).

## Introducing NexusRaven-13B
NexusRaven is an open-source, commercially viable LLM that surpasses the state of the art in function-calling capabilities.

📊 Performance Highlights: With our demonstration retrieval system, NexusRaven-13B achieves a 95% success rate in using cybersecurity tools such as CVE/CPE Search and VirusTotal, while prompting GPT-4 achieves 64%. It also offers significantly lower cost and faster inference than GPT-4.

🔧 Generalization to the Unseen: NexusRaven-13B generalizes to tools never seen during training, achieving a success rate comparable to GPT-3.5 in the zero-shot setting and significantly outperforming all other open-source LLMs of similar size.

🔥 Commercially Permissive: The training of NexusRaven-13B does not involve any data generated by proprietary LLMs such as GPT-4. You have full control of the model when deployed in commercial applications.

<p align="center" width="100%">
<a><img src="Retrieval-augmented_Evaluation.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
<a><img src="Zero-shot_Evaluation.png" alt="NexusRaven" style="width: 80%; min-width: 300px; display: block; margin: auto;"></a>
</p>


## NexusRaven model usage
NexusRaven accepts a list of Python functions. These functions can do anything, including sending GET/POST requests to external APIs. The only two requirements are a Python function signature and an appropriate docstring from which to generate the function call.
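For instance, a function exposed to the model might look like the following. This is a hypothetical illustration (the function name and endpoint are made up); only the signature and the docstring matter for generating the call:

```python
import requests


def get_weather(city: str) -> dict:
    """
    Gets the current weather for a city.

    Args:
    city (str) : Name of the city to look up.
    """
    # Placeholder endpoint; the body can contain any logic, including
    # GET/POST requests to external APIs.
    return requests.get("https://example.com/weather", params={"city": city}).json()
```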

NexusRaven is highly compatible with LangChain. See [langchain_example.py](https://huggingface.co./Nexusflow/NexusRaven-13B/blob/main/langchain_example.py). An example without LangChain can be found in [non_langchain_example.py](https://huggingface.co./Nexusflow/NexusRaven-13B/blob/main/non_langchain_example.py).

Please note that the model will sometimes reflect on its answer. We therefore highly recommend stopping generation with a stopping criterion of `["\nReflection:"]` to avoid spending unnecessary tokens during inference, although the reflection may help in rare cases. This is reflected in our LangChain example.
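For example, a minimal sketch of such a stopping criterion with the `transformers` API (the `StopOnStrings` class here is illustrative, not part of the released tooling):

```python
from transformers import AutoTokenizer, StoppingCriteria, StoppingCriteriaList


class StopOnStrings(StoppingCriteria):
    """Stops generation once any of the given strings appears in the output."""

    def __init__(self, stop_strings, tokenizer):
        self.stop_strings = stop_strings
        self.tokenizer = tokenizer

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return any(s in text for s in self.stop_strings)


tokenizer = AutoTokenizer.from_pretrained("Nexusflow/NexusRaven-13B")
stop = StoppingCriteriaList([StopOnStrings(["\nReflection:"], tokenizer)])
# Pass `stopping_criteria=stop` to `model.generate(...)` or to the pipeline call.
```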

More information about how to prompt the model can be found in [prompting_readme.md](prompting_readme.md). 

The "Initial Answer" can be executed to run the function.

### Quickstart
You can run the model on a GPU using the following code. 
```python
# Please `pip install transformers accelerate`
from transformers import pipeline


# Use a distinct name so the imported `pipeline` function is not shadowed
generator = pipeline(
    "text-generation",
    model="Nexusflow/NexusRaven-13B",
    torch_dtype="auto",
    device_map="auto",
)

prompt_template = """
<human>:
OPTION:
<func_start>def hello_world(n : int)<func_end>
<docstring_start>
\"\"\"
Prints hello world to the user.

Args:
n (int) : Number of times to print hello world.
\"\"\"
<docstring_end>

OPTION:
<func_start>def hello_universe(n : int)<func_end>
<docstring_start>
\"\"\"
Prints hello universe to the user.

Args:
n (int) : Number of times to print hello universe.
\"\"\"
<docstring_end>

User Query: Question: {question}

Please pick a function from the above options that best answers the user query and fill in the appropriate arguments.<human_end>
"""
prompt = prompt_template.format(question="Please print hello world 10 times.")

result = generator(prompt, max_new_tokens=100, return_full_text=False, do_sample=False)[0]["generated_text"]

# Get the "Initial Call" only
start_str = "Initial Answer: "
end_str = "\nReflection: "
start_idx = result.find(start_str) + len(start_str)
end_idx = result.find(end_str)
function_call = result[start_idx: end_idx]

print (f"Generated Call: {function_call}")
```
This will output: 
```text
Generated Call: hello_world(10) 
```
This call can then be executed directly.
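For instance, assuming `hello_world` is actually defined in the local namespace, a minimal sketch of running the generated call (only execute model output with proper guardrails in place; see Limitations below):

```python
def hello_world(n: int):
    """Prints hello world to the user."""
    for _ in range(n):
        print("hello world")


# Restrict the execution namespace to the known tool functions.
exec(function_call, {"hello_world": hello_world})
```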


## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 2.0


### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3


# Limitations
1. We highly recommend using a stop criterion of `["\nReflection:"]`. The model was trained to first generate an answer and then reflect on it, either improving the answer or keeping it the same. However, this "chain of thought" is often not helpful, and the final answer is seldom better than the initial call. We therefore strongly recommend executing the Initial Call as the main call.
2. When many functions are available, the model works best when paired with a retriever, as a large number of functions will saturate its context window (see the sketch after this list).
3. The model can be prone to generating incorrect calls. Please ensure that proper guardrails are in place to capture errant behavior.
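
For illustration, a minimal sketch of such a retrieval step, assuming TF-IDF similarity over the function docstrings (any retriever, e.g. a dense embedding model, works equally well):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def select_functions(query: str, docstrings: list, k: int = 5) -> list:
    """Return the indices of the k docstrings most similar to the query."""
    matrix = TfidfVectorizer().fit_transform(docstrings + [query])
    scores = cosine_similarity(matrix[-1], matrix[:-1])[0]
    return sorted(range(len(docstrings)), key=lambda i: scores[i], reverse=True)[:k]


# Only the selected functions are then placed into the prompt as OPTION blocks.
```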


## License
This model was trained on commercially viable data and is licensed under the [Llama 2 community license](https://huggingface.co./codellama/CodeLlama-13b-hf/blob/main/LICENSE) following the original [CodeLlama-13b-hf](https://huggingface.co./codellama/CodeLlama-13b-hf/) model. 


## References
We thank the CodeLlama team for their amazing models!

```
@misc{rozière2023code,
      title={Code Llama: Open Foundation Models for Code}, 
      author={Baptiste Rozière and Jonas Gehring and Fabian Gloeckle and Sten Sootla and Itai Gat and Xiaoqing Ellen Tan and Yossi Adi and Jingyu Liu and Tal Remez and Jérémy Rapin and Artyom Kozhevnikov and Ivan Evtimov and Joanna Bitton and Manish Bhatt and Cristian Canton Ferrer and Aaron Grattafiori and Wenhan Xiong and Alexandre Défossez and Jade Copet and Faisal Azhar and Hugo Touvron and Louis Martin and Nicolas Usunier and Thomas Scialom and Gabriel Synnaeve},
      year={2023},
      eprint={2308.12950},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```


## Citation
```
@misc{nexusraven,
      title={NexusRaven: Surpassing the state-of-the-art in open-source function calling LLMs}, 
      author={Nexusflow.ai team},
      year={2023},
      url={http://nexusflow.ai/blog}
}
```

## Contact
Please reach out to [email protected] for any questions!