Gorilla OpenFunctions v2
💡 SoTA for open-source models. On par with GPT-4.
🚀 Check out the Berkeley Function Calling Leaderboard
📣 Read more in our OpenFunctions v2 release blog and Berkeley Function Calling Leaderboard blog
🟢 Check out Quantized GGUF models in gorilla-llm/gorilla-openfunctions-v2-gguf
Introduction
Gorilla OpenFunctions extends Large Language Model (LLM) Chat Completion features to formulate executable API calls given natural language instructions and API context. With OpenFunctions v2, we now support:
- Multiple functions - choose between functions
- Parallel functions - call the same function N times with different parameter values
- Multiple & parallel - both of the above in a single chat-completion call (one generation)
- Relevance detection - when chatting, chat; when asked for a function, return a function
- Python - supports `string, number, boolean, list, tuple, dict` parameter datatypes and `Any` for those not natively supported
- Java - supports `byte, short, int, float, double, long, boolean, char, Array, ArrayList, Set, HashMap, Hashtable, Queue, Stack, and Any` datatypes
- JavaScript - supports `String, Number, Bigint, Boolean, dict (object), Array, Date, and Any` datatypes
- REST - native REST support
Performance
Model | Overall Accuracy* |
---|---|
GPT-4-0125-Preview | 85.12% |
Gorilla-openfunctions-v2 | 83.67% |
GPT-3.5-turbo | 82.23% |
Mistral-medium | 79.70% |
Nexusflow Raven-v2 | 55.72% |
GPT-4-0613 | 54.16% |

\*Overall Accuracy is defined in the Berkeley Function Calling Leaderboard blog; read more details there if you are interested!
Models Available
Model | Functionality |
---|---|
gorilla-openfunctions-v2 | Multiple, parallel, multiple & parallel, relevance detection, Python + Java + JavaScript + REST |
gorilla-openfunctions-v1 | Parallel functions, and can choose between functions |
gorilla-openfunctions-v0 | Given a function, and user intent, returns properly formatted json with the right arguments |
All of our models are hosted on our Hugging Face UC Berkeley gorilla-llm org: gorilla-openfunctions-v2, gorilla-openfunctions-v1, and gorilla-openfunctions-v0.
Training
Gorilla OpenFunctions v2 is a 7B-parameter model built on top of the DeepSeek Coder LLM. Check out the OpenFunctions v2 blog to learn more about the data composition and some insights into the training process.
Example Usage (Hosted)
Please reference the README.md in https://github.com/ShishirPatil/gorilla/tree/main/openfunctions for file dependencies and the utilities used.
- OpenFunctions is compatible with OpenAI Functions
```
!pip install openai==0.28.1
```
- Point to Gorilla hosted servers
```python
import openai

def get_gorilla_response(prompt="Call me an Uber ride type \"Plus\" in Berkeley at zipcode 94704 in 10 minutes", model="gorilla-openfunctions-v2", functions=[]):
    # Point the OpenAI SDK at the Berkeley-hosted endpoint
    openai.api_key = "EMPTY"
    openai.api_base = "http://luigi.millennium.berkeley.edu:8000/v1"
    try:
        completion = openai.ChatCompletion.create(
            model=model,  # pass the selected model instead of hardcoding it
            temperature=0.0,
            messages=[{"role": "user", "content": prompt}],
            functions=functions,
        )
        return completion.choices[0]
    except Exception as e:
        print(e, model, prompt)
```
- Pass the user prompt and the set of functions; Gorilla OpenFunctions returns a fully formatted JSON
```python
query = "What's the weather like in the two cities of Boston and San Francisco?"
functions = [
    {
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "The city and state, e.g. San Francisco, CA",
                },
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["location"],
        },
    }
]
get_gorilla_response(query, functions=functions)
```
- Expected output

Gorilla returns a readily accessible string AND OpenAI-compatible JSON.
```json
{
    "index": 0,
    "message": {
        "role": "assistant",
        "content": "get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')",
        "function_call": [
            {
                "name": "get_current_weather",
                "arguments": {
                    "location": "Boston, MA"
                }
            },
            {
                "name": "get_current_weather",
                "arguments": {
                    "location": "San Francisco, CA"
                }
            }
        ]
    },
    "finish_reason": "stop"
}
```
We have retained the string functionality that our community loved from OpenFunctions v1, `get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')` above. Also notice the `function_call` key in the JSON, which is OpenAI-compatible.
This is possible in OpenFunctions v2 because we ensure that the output includes the name of each argument, not just its value. This enables us to parse the output into JSON. In those scenarios where the output is not parsable into JSON, we will always return the function-call string.
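For illustration, here is a minimal sketch of why named arguments make the string parsable. This is a hypothetical helper, not the hosted backend's parser:

```python
import ast

def call_to_dict(call_str: str) -> dict:
    """Parse one "func(arg=value, ...)" string into an OpenAI-style dict.
    This only works because every argument in the output is named (keyword)."""
    node = ast.parse(call_str, mode="eval").body  # an ast.Call node
    return {
        "name": node.func.id,
        "arguments": {kw.arg: ast.literal_eval(kw.value) for kw in node.keywords},
    }

print(call_to_dict("get_current_weather(location='Boston, MA')"))
# {'name': 'get_current_weather', 'arguments': {'location': 'Boston, MA'}}
```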
End-to-End Example
Run the example code in [inference_hosted.py](https://github.com/ShishirPatil/gorilla/tree/main/openfunctions)
to see how the model works.
```bash
python inference_hosted.py
```
Expected Output:
```
(.py3) shishir@dhcp-132-64:~/Work/Gorilla/openfunctions/$ python inference_hosted.py
--------------------
Function call strings(s): get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')
--------------------
OpenAI compatible `function_call`: [<OpenAIObject at 0x1139ba890> JSON: {
  "name": "get_current_weather",
  "arguments": {
    "location": "Boston, MA"
  }
}, <OpenAIObject at 0x1139ba930> JSON: {
  "name": "get_current_weather",
  "arguments": {
    "location": "San Francisco, CA"
  }
}]
```
Running OpenFunctions Locally
If you want to run OpenFunctions locally, here is the prompt format that we used:
```python
import json

def get_prompt(user_query: str, functions: list = []) -> str:
    """
    Generates a conversation prompt based on the user's query and a list of functions.

    Parameters:
    - user_query (str): The user's query.
    - functions (list): A list of functions to include in the prompt.

    Returns:
    - str: The formatted conversation prompt.
    """
    system = "You are an AI programming assistant, utilizing the Gorilla LLM model, developed by Gorilla LLM, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer."
    if len(functions) == 0:
        return f"{system}\n### Instruction: <<question>> {user_query}\n### Response: "
    functions_string = json.dumps(functions)
    return f"{system}\n### Instruction: <<function>>{functions_string}\n<<question>>{user_query}\n### Response: "
```
Further, here is how we format the response:
Install the dependencies with:
```bash
pip3 install tree_sitter
git clone https://github.com/tree-sitter/tree-sitter-java.git
git clone https://github.com/tree-sitter/tree-sitter-javascript.git
```
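For context: the cloned grammars let the parsing utilities handle Java and JavaScript function calls. With older py-tree-sitter releases, grammars cloned like this are typically compiled once via `Language.build_library`; this is a version-dependent sketch, not necessarily what `openfunctions_utils` does internally:

```python
from tree_sitter import Language

# Hypothetical one-time grammar build (py-tree-sitter <= 0.21 API);
# newer releases ship prebuilt per-language wheels instead.
Language.build_library(
    "build/languages.so",
    ["tree-sitter-java", "tree-sitter-javascript"],
)
```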
And you can use the following code to format the response:
```python
from openfunctions_utils import strip_function_calls, parse_function_call

def format_response(response: str):
    """
    Formats the response from the OpenFunctions model.

    Parameters:
    - response (str): The response generated by the LLM.

    Returns:
    - str: The formatted response.
    - dict: The function call(s) extracted from the response.
    """
    function_call_dicts = None
    try:
        response = strip_function_calls(response)
        # Parallel function calls returned as a str, list[dict]
        if len(response) > 1:
            function_call_dicts = []
            for function_call in response:
                function_call_dicts.append(parse_function_call(function_call))
            response = ", ".join(response)
        # Single function call returned as a str, dict
        else:
            function_call_dicts = parse_function_call(response[0])
            response = response[0]
    except Exception as e:
        # Just faithfully return the generated response str to the user
        pass
    return response, function_call_dicts
```
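A minimal usage sketch, assuming `strip_function_calls` splits the raw generation into one string per function call (as the comments above suggest):

```python
raw_output = "get_current_weather(location='Boston, MA'), get_current_weather(location='San Francisco, CA')"
response, function_calls = format_response(raw_output)
print(response)        # the cleaned function-call string(s)
print(function_calls)  # [{"name": "get_current_weather", ...}, {"name": "get_current_weather", ...}]
```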
In the current directory, run the example code in inference_local.py
to see how the model works.
```bash
python inference_local.py
```
Note: Use `get_prompt` and `format_response` only if you are hosting the model locally. If you are using the Berkeley-hosted models through the chat-completion API, we handle this in the backend, so you don't have to. The model is supported in Hugging Face 🤗 Transformers and can be run locally:
License
Gorilla OpenFunctions v2 is distributed under the Apache 2.0 license. This software incorporates elements from the Deepseek model. Consequently, the licensing of Gorilla OpenFunctions v2 adheres to the Apache 2.0 license, with additional terms as outlined in Appendix A of the Deepseek license.
Contributing
Gorilla is an open source effort from UC Berkeley and we welcome contributors. Please email us your comments, criticism, and questions. More information about the project can be found at https://gorilla.cs.berkeley.edu/