gba-planner-7B-completion-only-v0.2-GGUF

A planner LLM fine-tuned on synthetic trajectories from an agent simulation. It can be used in ReAct-style LLM agents where planning is separated from function calling. Trajectory generation and planner fine-tuning are described in the bot-with-plan project.

The planner has been fine-tuned on the krasserm/gba-trajectories dataset with a loss over the completion only (i.e. no loss over the prompt). The original QLoRA model is available at krasserm/gba-planner-7B-completion-only-v0.2.
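
As background, the sketch below illustrates what completion-only training means in practice: prompt tokens are masked out of the loss with the ignore index -100, so the loss is computed over the completion tokens only. This is a generic, hypothetical example and not the project's actual training code.

import torch

# Illustrative only: mask prompt tokens with -100, the label value that is
# ignored by the cross-entropy loss, so only completion tokens contribute.
def build_example(prompt_ids: list[int], completion_ids: list[int]) -> dict:
    input_ids = prompt_ids + completion_ids
    labels = [-100] * len(prompt_ids) + completion_ids  # no loss over the prompt
    return {
        "input_ids": torch.tensor(input_ids),
        "labels": torch.tensor(labels),
    }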

Server setup

Download model:

mkdir -p models

wget https://huggingface.co./krasserm/gba-planner-7B-completion-only-v0.2-GGUF/resolve/main/gba-planner-7B-completion-only-v0.2-Q8_0.gguf?download=true \
  -O models/gba-planner-7B-completion-only-v0.2-Q8_0.gguf
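
Alternatively, the same file can be fetched with the huggingface_hub Python library (assuming it is installed; the repository and file names are taken from the wget command above):

from huggingface_hub import hf_hub_download

# Downloads the Q8_0 GGUF file into the local models directory.
hf_hub_download(
    repo_id="krasserm/gba-planner-7B-completion-only-v0.2-GGUF",
    filename="gba-planner-7B-completion-only-v0.2-Q8_0.gguf",
    local_dir="models",
)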

Start llama.cpp server:

docker run --gpus all --rm -p 8082:8080 -v $(realpath models):/models ghcr.io/ggerganov/llama.cpp:server-cuda--b1-17b291a \
  -m /models/gba-planner-7B-completion-only-v0.2-Q8_0.gguf -c 1024 --n-gpu-layers 33 --host 0.0.0.0 --port 8080
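
To verify the server is running before wiring it into an agent, a plain completion request can be sent against the llama.cpp HTTP API from Python (host port 8082 is mapped to container port 8080 above):

import requests

# Minimal sanity check against the llama.cpp server's /completion endpoint.
response = requests.post(
    "http://localhost:8082/completion",
    json={"prompt": "Hello", "n_predict": 8},
)
print(response.json()["content"])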

Usage example

Create a planner instance on the client side.

import json
from gba.client import ChatClient, LlamaCppClient, MistralInstruct
from gba.planner import FineTunedPlanner
from gba.utils import Scratchpad

llm = LlamaCppClient(url="http://localhost:8082/completion")
model = MistralInstruct(llm=llm)
client = ChatClient(model=model)
planner = FineTunedPlanner(client=client)

Define a user request and the scratchpad holding the past task-observation pairs of the current trajectory.

request = "Get the average Rotten Tomatoes scores for DreamWorks' last 5 movies."
scratchpad = Scratchpad()
scratchpad.add(
    task="Find the last 5 movies released by DreamWorks.", 
    result="The last five movies released by DreamWorks are \"The Bad Guys\" (2022), \"Boss Baby: Family Business\" (2021), \"Trolls World Tour\" (2020), \"Abominable\" (2019), and \"How to Train Your Dragon: The Hidden World\" (2019).")
scratchpad.add(
    task="Search the internet for the Rotten Tomatoes score of \"The Bad Guys\" (2022)", 
    result="The Rotten Tomatoes score of \"The Bad Guys\" (2022) is 88%.",
)

Then generate a plan for the next step in the trajectory.

result = planner.plan(request=request, scratchpad=scratchpad)
print(json.dumps(result.to_dict(), indent=2))
{
  "context_information_summary": "The last five movies released by DreamWorks are \"The Bad Guys\" (2022), \"Boss Baby: Family Business\" (2021), \"Trolls World Tour\" (2020), \"Abominable\" (2019), and \"How to Train Your Dragon: The Hidden World\" (2019). The Rotten Tomatoes score of \"The Bad Guys\" (2022) is 88%.",
  "thoughts": "Since we already have the Rotten Tomatoes score for \"The Bad Guys\", the next logical step is to find the scores for the remaining movies in the list, starting with \"Boss Baby: Family Business\".",
  "task": "Search the internet for the Rotten Tomatoes score of \"Boss Baby: Family Business\" (2021).",
  "selected_tool": "search_internet"
}

The planner selects a tool and generates a task for the next step. The task is tool-specific and is executed by the selected tool, here the search_internet tool, which produces the next observation on the trajectory. If the final_answer tool is selected, a final answer is either already available or can be generated from the trajectory. The planner enforces the output JSON schema via constrained decoding on the llama.cpp server.
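
To illustrate how the planner output can drive such a loop, here is a minimal, hypothetical driver sketch. The tools mapping is a placeholder for the actual tool implementations of the bot-with-plan project, which are not shown here.

# Hypothetical tool registry: maps tool names to callables that execute a
# tool-specific task string and return an observation string.
tools = {
    "search_internet": lambda task: "observation from an internet search",
    "final_answer": lambda task: "final answer generated from the trajectory",
    # remaining tools omitted in this sketch
}

scratchpad = Scratchpad()

for _ in range(10):  # safety limit on the number of planning steps
    step = planner.plan(request=request, scratchpad=scratchpad).to_dict()
    observation = tools[step["selected_tool"]](step["task"])
    if step["selected_tool"] == "final_answer":
        print(observation)
        break
    scratchpad.add(task=step["task"], result=observation)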

Tools

The planner learned a (static) set of available tools during fine-tuning. These are:

Tool name          Tool description
ask_user           Useful for asking user about information missing in the request.
calculate_number   Useful for numerical tasks that result in a single number.
create_event       Useful for adding a single entry to my calendar at given date and time.
search_wikipedia   Useful for searching factual information in Wikipedia.
search_internet    Useful for up-to-date information on the internet.
send_email         Useful for sending an email to a single recipient.
use_bash           Useful for executing commands in a Linux bash.
final_answer       Useful for providing the final answer to a request. Must always be used in the last step.

The framework provided by the bot-with-plan project can easily be adjusted to a different set of tools for specialization to other application domains.
