---
license: other
language:
- en
library_name: transformers
tags:
- RLHF
- Nexusflow
- Athene
- Function Calling
- Agent
- Extraction
base_model:
- Qwen/Qwen2.5-72B-Instruct
---
# Athene-V2-Agent: Surpassing GPT-4o for Tool Use and Agentic Use Cases
[Nexusflow HF](https://huggingface.co/Nexusflow) - [Nexusflow Discord](https://discord.gg/HDSVmNAs3y) - [Athene-V2 Blogpost](https://nexusflow.ai/blogs/athene-v2)
## Introducing Athene-V2-Agent
Athene-V2-Agent is an open-source Agent LLM that surpasses the state-of-the-art in function calling and agentic capabilities.
💪 **Versatile Agent Capability**: Athene-V2-Agent is an agent model capable of operating in environments with deeply nested tool dependencies. It can reason about and plan trajectories in which many tool calls are needed to answer a single query.
📊 **Performance Highlights**: Athene-V2-Agent surpasses GPT-4o by 18% in function calling success rate on single-FC tasks, and by 17% in agentic success rate.
🔧 **Generalization to the Unseen**: Athene-V2-Agent has never been trained on the functions or agentic settings used in evaluation.
- **Developed by:** The Nexusflow Team
- **Model type:** Agent Model
- **Finetuned from model:** [Qwen2.5-72B-Instruct](https://huggingface.co./Qwen/Qwen2.5-72B-Instruct)
- **License**: [Nexusflow Research License](https://huggingface.co./Nexusflow/Athene-V2-Agent/blob/main/Nexusflow_Research_License_.pdf)
- **Blog**: https://nexusflow.ai/blogs/athene-v2
## Athene-V2-Agent Model Usage
### OpenAI-Compatible FC
Athene-V2-Agent is usable in any OpenAI API-compatible environment using our vLLM docker image, making it a simple "drop-in" replacement in any agentic or tool-use setting.
```shell
docker run --name athene-v2-agent \
--runtime nvidia --gpus '"device=0,1,2,3,4,5,6,7"' \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING_FACE_HUB_TOKEN=" \
-p :8000 \
--ipc=host \
ghcr.io/nexusflowai/athene-v2-vllm:latest \
--model Nexusflow/Athene-V2-Agent \
--dtype=auto \
--tensor-parallel-size=8 \
--enable-auto-tool-choice \
--tool-call-parser Athene-V2-Agent
```
You can now submit any OpenAI-compatible tool-use requests to the model by hitting the vLLM endpoint. Athene-V2-Agent will issue tool calls that you can execute and return results for.
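As a minimal sketch of such a request (assuming the server above is reachable at `localhost:8000`; the `get_weather` tool and the query are illustrative, not part of the model), the standard OpenAI chat-completions JSON shape can be assembled with only the standard library:

```python
import json
import urllib.request

def build_tool_request(model, messages, tools):
    """Assemble an OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": messages,
        "tools": tools,
        "tool_choice": "auto",
        "temperature": 0.0,  # zero temperature, per the prompting tips below
    }

# Hypothetical tool and query for illustration.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string", "description": "City name."}},
            "required": ["city"],
        },
    },
}]
messages = [{"role": "user", "content": "What's the weather in Paris?"}]
body = build_tool_request("Nexusflow/Athene-V2-Agent", messages, tools)

def post_chat_completion(base_url, body):
    """POST the request to the vLLM OpenAI-compatible endpoint."""
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(body).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# post_chat_completion("http://localhost:8000", body)  # requires a running server
```

The same request can of course be made with the official `openai` Python client pointed at the vLLM base URL.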
**WARNING**: Athene-V2-Agent uses a *CUSTOM* prompting style that is baked into the custom docker image, as the executable calls are extracted from the model's generated planning. For best performance, please use the docker image above for Athene-V2-Agent, including when benchmarking the model. Using the HuggingFace tokenizer's chat template will yield suboptimal results for agent use cases. Please reach out to us on Discord if you run into any issues!
### Examples
An example weather agent can be found here: [Link](example/vllm_v2_weather_agent.py#L186-L193). This example shows how Athene handles queries that are, and are not, answerable by the current tools.
An example extraction and RAG agent can be found here: [Link](example/vllm_v2_extraction_agent.py#L270-L284). This example shows how to handle RAG-based queries with a Wikipedia tool.
### Prompting Tricks
1. When giving docstrings to Athene-V2-Agent, please provide well-indented, detailed, and well-written docstrings, as this can improve accuracy.
2. We strongly recommend using the docker image to interact with Athene-V2-Agent.
3. We strongly recommend setting sampling to False when prompting Athene-V2-Agent.
4. We strongly recommend a temperature of zero.
5. Athene-V2-Agent is designed to work within systems, so it is tuned to be very controllable via the instructions specified in the tools, including for broad behaviors (such as rejecting queries or chatting).
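As an illustration of the first recommendation, a tool definition might carry a detailed, well-indented docstring in its `description` field (the `get_stock_price` tool below is hypothetical; only the level of detail is the point):

```python
# Hypothetical tool whose description is a detailed, well-indented docstring.
detailed_tool = {
    "type": "function",
    "function": {
        "name": "get_stock_price",
        "description": (
            "Get the latest closing price for a stock.\n"
            "\n"
            "Args:\n"
            "    ticker: The exchange ticker symbol, e.g. 'AAPL'.\n"
            "    currency: ISO 4217 code for the returned price, e.g. 'USD'.\n"
            "\n"
            "Returns:\n"
            "    The most recent closing price as a float."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Exchange ticker symbol, e.g. 'AAPL'.",
                },
                "currency": {
                    "type": "string",
                    "description": "ISO 4217 currency code, e.g. 'USD'.",
                },
            },
            "required": ["ticker"],
        },
    },
}
```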
#### Handling Irrelevant Queries
The Athene-V2-Agent model is strongly tuned so that its behavior can be controlled through tools, making it easy to integrate into systems.
As a result, the model won't reject out-of-domain queries by default; it will instead try its best to issue the most relevant call.
However, if you expect irrelevant user queries and want the model to reject them, you can provide a no-op function. For example, something like this would work:
```python
{
"type": "function",
"function" : {
"name": "no_relevant_function",
"description": "Call this when no other provided function can be called to answer the user query.",
"parameters": {
"type": "object",
"properties": {
"user_query_span": {
"type": "string",
"description": "The part of the user_query that cannot be answered by any other function calls."
}
},
"required": ["user_query_span"]
}
}
}
```
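On the caller's side, a sketch of handling the resulting tool call might look like the following (the tool-call shape follows the OpenAI tool-calling format; the rejection message is an assumption for illustration):

```python
import json

def handle_tool_call(tool_call):
    """Dispatch one tool call from the model's response message.

    `tool_call` follows the OpenAI format:
    {"function": {"name": ..., "arguments": <JSON string>}}.
    """
    name = tool_call["function"]["name"]
    args = json.loads(tool_call["function"]["arguments"])
    if name == "no_relevant_function":
        # The model flagged this span of the query as unanswerable with the given tools.
        return f"Sorry, I can't help with: {args['user_query_span']}"
    # ... dispatch to real tool implementations here ...
    raise KeyError(f"Unknown tool: {name}")

# Example tool call, as it might appear in response.choices[0].message.tool_calls:
call = {"function": {"name": "no_relevant_function",
                     "arguments": json.dumps({"user_query_span": "book me a flight"})}}
print(handle_tool_call(call))  # -> Sorry, I can't help with: book me a flight
```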
Please see the example [Link](example/vllm_v2_weather_agent.py) for a demo of this.
#### Handling Chat With FC
Since the Athene-V2-Agent model is strongly tuned to be controllable, we wanted to ensure that it does not chat unless explicitly instructed to do so.
You can enable chatting by adding a `chat` tool and allowing it in the system prompt:
```python
{
"type": "function",
"function": {
"name": "chat",
"description": "Call this tool when you want to chat with the user. The user won't see anything except for whatever you pass into this function.",
"parameters": {
"type": "object",
"properties": {
"chat_string": {
"type": "string",
"description": "The string to send to the user to chat back to them.",
}
},
"required": ["chat_string"],
},
},
}
```
Then add a system prompt such as the following (feel free to experiment to make Athene-V2-Agent behave the way you want it to!):
```python
{"role" : "system", "content" : "Make sure to use the chat function to provide the final answer to the user."},
```
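With this setup, the final user-facing reply arrives as a call to the `chat` tool rather than as message content. A sketch of extracting it (assuming the OpenAI tool-calling response shape; the example message is illustrative):

```python
import json

def extract_chat_reply(message):
    """Return the user-facing reply if the model called the `chat` tool, else None.

    `message` mirrors response.choices[0].message in the OpenAI format.
    """
    for tool_call in message.get("tool_calls") or []:
        if tool_call["function"]["name"] == "chat":
            return json.loads(tool_call["function"]["arguments"])["chat_string"]
    return None

# Example assistant message as the server might return it:
message = {"role": "assistant", "tool_calls": [
    {"function": {"name": "chat",
                  "arguments": json.dumps({"chat_string": "It is 22C and sunny in Paris."})}}
]}
print(extract_chat_reply(message))  # -> It is 22C and sunny in Paris.
```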
Please see the example [Link](example/weather_with_chat.py) for a demo of this.
## Contact
Please join our [Discord Channel](https://discord.gg/HDSVmNAs3y) to reach out for any issues and comments!