## As a user
- `pip install auto-gpt-benchmarks`
- Add boilerplate code to run and kill your agent
- Run `agbenchmark` with optional flags (see the example invocation after this list):
  - `--category challenge_category` to run tests in a specific category
  - `--mock` to only run mock tests, if they exist for each test
  - `--noreg` to skip any tests that have passed in the past. When you run without this flag and a previously passing challenge fails, it is removed from the regression tests
- We call the boilerplate code for your agent
- Show the pass rate of tests, logs, and any other metrics
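For example, a combined invocation might look like this (`challenge_category` is the same placeholder used above; substitute a real category name):

```shell
# Run only the mock versions of one category's tests,
# skipping tests that already passed previously
agbenchmark --category challenge_category --mock --noreg
```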
## Contributing
Diagrams: https://whimsical.com/agbenchmark-5n4hXBq1ZGzBwRsK4TVY7x
### To run the existing mocks
- Clone the `auto-gpt-benchmarks` repo
- `pip install poetry`
- `poetry shell`
- `poetry install`
- `cp .env_example .env`
- `git submodule update --init --remote --recursive`
- `uvicorn server:app --reload`
- `agbenchmark --mock`

Keep the config the same and watch the logs :) A consolidated version of these steps is shown below.
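Put together, those steps look roughly like this as a single session (the clone URL is inferred from the repository name; use your fork's URL if it differs):

```shell
git clone https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks.git
cd Auto-GPT-Benchmarks
pip install poetry
poetry shell
poetry install
cp .env_example .env
git submodule update --init --remote --recursive
uvicorn server:app --reload   # leave this running in one terminal
agbenchmark --mock            # run in a second terminal
```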
### To run with mini-agi
- Navigate to `auto-gpt-benchmarks/agent/mini-agi`
- `pip install -r requirements.txt`
- `cp .env_example .env`, set `PROMPT_USER=false`, and add your `OPENAI_API_KEY=`. Set `MODEL="gpt-3.5-turbo"` if you don't have access to `gpt-4` yet. Also make sure you have Python 3.10 or higher installed
- Set `AGENT_NAME=mini-agi` in the `.env` file, along with where you want your `REPORTS_FOLDER` to be (see the example `.env` after this list)
- Make sure to follow the commands above, and run without the mock flag: `agbenchmark`
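For illustration, the `.env` settings described above could end up looking like this (the `REPORTS_FOLDER` value is only an example path, and your API key goes where the placeholder is; adjust to your setup):

```shell
PROMPT_USER=false
OPENAI_API_KEY=your-key-here
MODEL="gpt-3.5-turbo"
AGENT_NAME=mini-agi
# Example path; point this wherever you want reports written
REPORTS_FOLDER=reports
```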
- To add requirements: `poetry add <requirement>`
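For example, to add the `requests` package as a dependency:

```shell
poetry add requests
```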
Feel free to create PRs to merge with `main` at will (but also feel free to ask for review). If you can't, send a message in the R&D chat for access.
If you push at any point and break things (it'll happen to everyone), fix it ASAP. Step 1 is to revert `master` to the last working commit, for example as sketched below.
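One way to do that, assuming the breaking change is a single commit and you have push access to `master` (the commit SHA is a placeholder):

```shell
# Create a new commit that undoes the breaking one, keeping history intact
git revert <bad-commit-sha>
git push origin master
```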
Let people know what the beautiful code you write does; document everything well.
Share your progress :)
## Dataset
Manually created challenges, existing challenges within Auto-GPT, and https://osu-nlp-group.github.io/Mind2Web/
## How do I add new agents to agbenchmark?
Example with smol developer.
1. Create a GitHub branch with your agent, following the same pattern as this example:
   https://github.com/smol-ai/developer/pull/114/files
2. Create the submodule and the GitHub workflow by following the same pattern as this example:
   https://github.com/Significant-Gravitas/Auto-GPT-Benchmarks/pull/48/files
## How do I run an agent in different environments?
To just use it as the benchmark for your agent, `pip install` the package and run `agbenchmark`:
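```shell
pip install auto-gpt-benchmarks
agbenchmark
```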
For internal Auto-GPT CI runs, specify the `AGENT_NAME` you want to use and set the `HOME_ENV`.
Ex. `AGENT_NAME=mini-agi`
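A sketch of what that could look like (the `HOME_ENV` value here is a placeholder, since this README doesn't list the valid values):

```shell
export AGENT_NAME=mini-agi
# Placeholder value; set HOME_ENV per your CI configuration
export HOME_ENV=ci
agbenchmark
```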
To develop an agent alongside the benchmark, you can specify the `AGENT_NAME` you want to use and add your agent as a submodule to the repo.
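A minimal sketch of adding the submodule, assuming your agent lives in its own GitHub repo and follows the `agent/` layout used by mini-agi above (the URL and folder name are placeholders):

```shell
git submodule add https://github.com/<your-org>/<your-agent>.git agent/<your-agent>
git submodule update --init --remote --recursive
```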