kakumusic committed on
Commit
e7c3249
1 Parent(s): 85ccfb6

Upload folder using huggingface_hub

This view is limited to 50 files because the commit contains too many changes.
Files changed (50)
  1. .dockerignore +60 -0
  2. .env.template +5 -0
  3. .gitattributes +1 -0
  4. .github/CODEOWNERS +1 -0
  5. .github/CODE_OF_CONDUCT.md +131 -0
  6. .github/CONTRIBUTING.md +135 -0
  7. .github/FUNDING.yml +4 -0
  8. .github/ISSUE_TEMPLATE/bug-report.md +32 -0
  9. .github/ISSUE_TEMPLATE/documentation-clarification.md +19 -0
  10. .github/ISSUE_TEMPLATE/feature-request.md +19 -0
  11. .github/PULL_REQUEST_TEMPLATE/PULL_REQUEST_TEMPLATE.md +9 -0
  12. .github/workflows/automation.yml +29 -0
  13. .github/workflows/ci.yaml +49 -0
  14. .github/workflows/pre-commit.yaml +22 -0
  15. .github/workflows/release.yaml +73 -0
  16. .gitignore +98 -0
  17. .pre-commit-config.yaml +27 -0
  18. .readthedocs.yaml +39 -0
  19. Acknowledgements.md +5 -0
  20. DISCLAIMER.md +11 -0
  21. GOVERNANCE.md +76 -0
  22. LICENSE +21 -0
  23. MANIFEST.in +1 -0
  24. Makefile +52 -0
  25. README.md +112 -12
  26. ROADMAP.md +31 -0
  27. TERMS_OF_USE.md +13 -0
  28. WINDOWS_README.md +68 -0
  29. citation.cff +10 -0
  30. docker-compose.yml +16 -0
  31. docker/Dockerfile +29 -0
  32. docker/README.md +69 -0
  33. docker/entrypoint.sh +10 -0
  34. docs/Makefile +20 -0
  35. docs/api_reference.rst +144 -0
  36. docs/code_conduct_link.rst +2 -0
  37. docs/conf.py +204 -0
  38. docs/contributing_link.rst +2 -0
  39. docs/create_api_rst.py +98 -0
  40. docs/disclaimer_link.rst +2 -0
  41. docs/docs_building.md +64 -0
  42. docs/examples/open_llms/README.md +56 -0
  43. docs/examples/open_llms/langchain_interface.py +17 -0
  44. docs/examples/open_llms/openai_api_interface.py +21 -0
  45. docs/index.rst +41 -0
  46. docs/installation.rst +63 -0
  47. docs/introduction.md +20 -0
  48. docs/make.bat +36 -0
  49. docs/open_models.md +148 -0
  50. docs/quickstart.rst +68 -0
.dockerignore ADDED
@@ -0,0 +1,60 @@
+ # See https://help.github.com/ignore-files/ for more about ignoring files.
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # Distribution / packaging
+ dist/
+ build/
+ *.egg-info/
+ *.egg
+
+ # Virtual environments
+ .env
+ .env.sh
+ venv/
+ ENV/
+
+ # IDE-specific files
+ .vscode/
+ .idea/
+
+ # Compiled Python modules
+ *.pyc
+ *.pyo
+ *.pyd
+
+ # Python testing
+ .pytest_cache/
+ .ruff_cache/
+ .coverage
+ .mypy_cache/
+
+ # macOS specific files
+ .DS_Store
+
+ # Windows specific files
+ Thumbs.db
+
+ # this application's specific files
+ archive
+
+ # any log file
+ *log.txt
+ todo
+ scratchpad
+
+ # Ignore GPT Engineer files
+ projects
+ !projects/example
+
+ # Pyenv
+ .python-version
+
+ # Benchmark files
+ benchmark
+ !benchmark/*/prompt
+
+ .gpte_consent
.env.template ADDED
@@ -0,0 +1,5 @@
+ ### OpenAI Setup ###
+
+ # OPENAI_API_KEY=Your personal OpenAI API key from https://platform.openai.com/account/api-keys
+ OPENAI_API_KEY=...
+ ANTHROPIC_API_KEY=...
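The template above lists the API keys the application expects as environment variables. A minimal sketch of how such a `KEY=VALUE` file can be parsed and applied to the process environment (the helper names `load_env_file` and `export_env` are hypothetical, not part of gpt-engineer, which may instead rely on a library such as python-dotenv):

```python
import os


def load_env_file(path):
    """Parse KEY=VALUE lines from a .env-style file.

    Blank lines and '#' comments are ignored. This is an illustrative
    sketch only; it does not handle quoting or multi-line values.
    """
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
    return values


def export_env(values):
    """Copy parsed values into os.environ without overwriting
    variables that are already set in the real environment."""
    for key, value in values.items():
        os.environ.setdefault(key, value)
```

Using `setdefault` means a key exported in the shell takes precedence over the file, which is the usual convention for `.env` loaders.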
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tests/test_data/mona_lisa.jpg filter=lfs diff=lfs merge=lfs -text
.github/CODEOWNERS ADDED
@@ -0,0 +1 @@
+ .github/workflows/ @ATheorell
.github/CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,131 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our
+ community a harassment-free experience for everyone, regardless of age, body
+ size, visible or invisible disability, ethnicity, sex characteristics, gender
+ identity or expression, level of experience, education, socio-economic status,
+ nationality, personal appearance, race, caste, color, religion, or sexual
+ identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming,
+ diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our
+ community include:
+
+ * Demonstrating empathy and kindness toward other people
+ * Being respectful of differing opinions, viewpoints, and experiences
+ * Giving and gracefully accepting constructive feedback
+ * Accepting responsibility and apologizing to those affected by our mistakes,
+   and learning from the experience
+ * Focusing on what is best not just for us as individuals, but for the overall
+   community
+
+ Examples of unacceptable behavior include:
+
+ * The use of sexualized language or imagery, and sexual attention or advances of
+   any kind
+ * Trolling, insulting or derogatory comments, and personal or political attacks
+ * Public or private harassment
+ * Publishing others' private information, such as a physical or email address,
+   without their explicit permission
+ * Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of
+ acceptable behavior and will take appropriate and fair corrective action in
+ response to any behavior that they deem inappropriate, threatening, offensive,
+ or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject
+ comments, commits, code, wiki edits, issues, and other contributions that are
+ not aligned to this Code of Conduct, and will communicate reasons for moderation
+ decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when
+ an individual is officially representing the community in public spaces.
+ Examples of representing our community include using an official e-mail address,
+ posting using an official social media account, or acting as an appointed
+ representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported to the community leaders responsible for enforcement at
+ All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of reporters of incidents.
+
+ ## Enforcement Guidelines
+
+ Community leaders will follow these Community Impact Guidelines in determining
+ the consequences for any action they deem in violation of this Code of Conduct:
+
+ ### 1. Correction
+
+ **Community Impact**: Use of inappropriate language or other behavior deemed
+ unprofessional or unwelcome in the community.
+
+ **Consequence**: A private, written warning from community leaders, providing
+ clarity around the nature of the violation and an explanation of why the
+ behavior was inappropriate. A public apology may be requested.
+
+ ### 2. Warning
+
+ **Community Impact**: A violation through a single incident or series of
+ actions.
+
+ **Consequence**: A warning with consequences for continued behavior. No
+ interaction with the people involved, including unsolicited interaction with
+ those enforcing the Code of Conduct, for a specified period of time. This
+ includes avoiding interactions in community spaces as well as external channels
+ like social media. Violating these terms may lead to a temporary or permanent
+ ban.
+
+ ### 3. Temporary Ban
+
+ **Community Impact**: A serious violation of community standards, including
+ sustained inappropriate behavior.
+
+ **Consequence**: A temporary ban from any sort of interaction or public
+ communication with the community for a specified period of time. No public or
+ private interaction with the people involved, including unsolicited interaction
+ with those enforcing the Code of Conduct, is allowed during this period.
+ Violating these terms may lead to a permanent ban.
+
+ ### 4. Permanent Ban
+
+ **Community Impact**: Demonstrating a pattern of violation of community
+ standards, including sustained inappropriate behavior, harassment of an
+ individual, or aggression toward or disparagement of classes of individuals.
+
+ **Consequence**: A permanent ban from any sort of public interaction within the
+ community.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+ version 2.1, available at
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
+
+ Community Impact Guidelines were inspired by
+ [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+
+ For answers to common questions about this code of conduct, see the FAQ at
+ [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
+ [https://www.contributor-covenant.org/translations][translations].
+
+ [homepage]: https://www.contributor-covenant.org
+ [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
+ [Mozilla CoC]: https://github.com/mozilla/diversity
+ [FAQ]: https://www.contributor-covenant.org/faq
+ [translations]: https://www.contributor-covenant.org/translations
.github/CONTRIBUTING.md ADDED
@@ -0,0 +1,135 @@
+ # Contributing to gpt-engineer
+
+ gpt-engineer is a community project and thrives on your contributions, which are warmly appreciated. The main contribution avenues are:
+ - Pull request: implement code and have it reviewed and potentially merged by the maintainers. Implementations of existing feature requests or fixes to bug reports are likely to be merged.
+ - Bug report: report when something in gpt-engineer doesn't work. Do not report errors in programs written _by_ gpt-engineer.
+ - Feature request: provide a detailed sketch of something you want to have implemented in gpt-engineer. There is no guarantee that features will be implemented.
+ - Discussion: raise awareness of a potential improvement. This is often a good starting point before making a detailed feature request.
+
+ By participating in this project, you agree to abide by the [code of conduct](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/.github/CODE_OF_CONDUCT.md).
+
+ ## Merge Policy for Pull Requests
+ Code that is likely to introduce breaking changes, or to significantly change the user experience for users and developers, requires [board approval](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/GOVERNANCE.md) to be merged. Smaller code changes can be merged directly.
+ As a rule, cosmetic pull requests that do not yield clear practical improvements, for example rephrasing the readme or introducing more compact syntax, are not merged. Such pull requests are generally discouraged, both to save time for the maintainers and to establish a lower bar for becoming a contributor.
+
+ ## Getting Started with Pull Requests to gpt-engineer
+
+ To get started with contributing, please follow these steps:
+
+ 1. Fork the repository and clone it to your local machine.
+ 2. Install any necessary dependencies.
+ 3. Create a new branch for your changes: `git checkout -b my-branch-name`.
+ 4. Make your desired changes or additions.
+ 5. Run the tests to ensure everything is working as expected.
+ 6. Commit your changes: `git commit -m "Descriptive commit message"`.
+ 7. Push to the branch: `git push origin my-branch-name`.
+ 8. Submit a pull request to the `main` branch of the original repository.
+
+ ## Code Style
+
+ Please make sure to follow the established code style guidelines for this project. Consistent code style helps maintain readability and makes it easier for others to contribute to the project.
+
+ To enforce this we use [`pre-commit`](https://pre-commit.com/) to run [`black`](https://black.readthedocs.io/en/stable/index.html) and [`ruff`](https://beta.ruff.rs/docs/) on every commit.
+
+ To install gpt-engineer as a developer, clone the repository and install the dependencies with:
+
+ ```bash
+ $ poetry install
+ $ poetry shell
+ ```
+
+ And then install the `pre-commit` hooks with:
+
+ ```bash
+ $ pre-commit install
+
+ # output:
+ pre-commit installed at .git/hooks/pre-commit
+ ```
+
+ If you are not familiar with the concept of [git hooks](https://git-scm.com/docs/githooks) and/or [`pre-commit`](https://pre-commit.com/), please read the documentation to understand how they work.
+
+ As an introduction to the actual workflow, here is an example of the process you will encounter when you make a commit:
+
+ Let's add a file we have modified with some errors and see how the pre-commit hook runs `black`, which fails.
+ `black` is set to automatically fix the issues it finds:
+
+ ```bash
+ $ git add chat_to_files.py
+ $ git commit -m "commit message"
+ black....................................................................Failed
+ - hook id: black
+ - files were modified by this hook
+
+ reformatted chat_to_files.py
+
+ All done! ✨ 🍰 ✨
+ 1 file reformatted.
+ ```
+
+ You can see that `chat_to_files.py` is both staged and not staged for commit. This is because `black` has formatted it and now it is different from the version you have in your working directory. To fix this you can simply run `git add chat_to_files.py` again and now you can commit your changes.
+
+ ```bash
+ $ git status
+ On branch pre-commit-setup
+ Changes to be committed:
+   (use "git restore --staged <file>..." to unstage)
+         modified:   chat_to_files.py
+
+ Changes not staged for commit:
+   (use "git add <file>..." to update what will be committed)
+   (use "git restore <file>..." to discard changes in working directory)
+         modified:   chat_to_files.py
+ ```
+
+ Now let's add the file again to include the latest changes and see how `ruff` fails.
+
+ ```bash
+ $ git add chat_to_files.py
+ $ git commit -m "commit message"
+ black....................................................................Passed
+ ruff.....................................................................Failed
+ - hook id: ruff
+ - exit code: 1
+ - files were modified by this hook
+
+ Found 2 errors (2 fixed, 0 remaining).
+ ```
+
+ Same as before, you can see that `chat_to_files.py` is both staged and not staged for commit. This is because `ruff` has formatted it and now it is different from the version you have in your working directory. To fix this you can simply run `git add chat_to_files.py` again and now you can commit your changes.
+
+ ```bash
+ $ git add chat_to_files.py
+ $ git commit -m "commit message"
+ black....................................................................Passed
+ ruff.....................................................................Passed
+ fix end of files.........................................................Passed
+ [pre-commit-setup f00c0ce] testing
+  1 file changed, 1 insertion(+), 1 deletion(-)
+ ```
+
+ Now your file has been committed and you can push your changes.
+
+ At the beginning this might seem like a tedious process (having to add the file again after `black` and `ruff` have modified it) but it is actually very useful. It allows you to see what changes `black` and `ruff` have made to your files and make sure that they are correct before you commit them.
+
+ ### Important Note When `pre-commit` Fails in the Build Pipeline
+ Sometimes `pre-commit` will seemingly run successfully, as follows:
+
+ ```bash
+ black................................................(no files to check)Skipped
+ ruff.................................................(no files to check)Skipped
+ check toml...........................................(no files to check)Skipped
+ check yaml...........................................(no files to check)Skipped
+ detect private key...................................(no files to check)Skipped
+ fix end of files.....................................(no files to check)Skipped
+ trim trailing whitespace.............................(no files to check)Skipped
+ ```
+
+ However, you may see `pre-commit` fail in the build pipeline upon submitting a PR. The solution to this is to run `pre-commit run --all-files` to force `pre-commit` to execute these checks, and make any necessary file modifications, on all files.
+
+ ## Licensing
+
+ By contributing to gpt-engineer, you agree that your contributions will be licensed under the [LICENSE](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/LICENSE) file of the project.
+
+ Thank you for your interest in contributing to gpt-engineer! We appreciate your support and look forward to your contributions.
.github/FUNDING.yml ADDED
@@ -0,0 +1,4 @@
+ # These are supported funding model platforms
+
+ github: [antonosika]
+ patreon: gpt_eng
.github/ISSUE_TEMPLATE/bug-report.md ADDED
@@ -0,0 +1,32 @@
+ ---
+ name: Bug report
+ about: Create a report to help us improve
+ title: ''
+ labels: bug, triage
+ assignees: ''
+
+ ---
+
+ ## Policy and info
+ - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
+ - Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
+
+ ## Expected Behavior
+
+ Please describe the behavior you are expecting.
+
+ ## Current Behavior
+
+ What is the current behavior?
+
+ ## Failure Information
+
+ Information about the failure, including environment details, such as the LLM used.
+
+ ### Failure Logs
+
+ If your project includes a debug_log_file.txt, kindly upload it from your_project/.gpteng/memory/. This file encompasses all the necessary logs. Should the file prove extensive, consider utilizing GitHub's "add files" functionality.
+
+ ## System Information
+
+ Please copy and paste the output of the `gpte --sysinfo` command as part of your bug report.
.github/ISSUE_TEMPLATE/documentation-clarification.md ADDED
@@ -0,0 +1,19 @@
+ ---
+ name: Documentation improvement
+ about: Inaccuracies, inadequacies in the docs pages
+ title: ''
+ labels: documentation, triage
+ assignees: ''
+
+ ---
+
+ ## Policy and info
+ - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
+ - Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
+
+
+ ## Description
+ A clear and concise description of how the documentation at https://gpt-engineer.readthedocs.io/en/latest/ is providing wrong/insufficient information.
+
+ ## Suggestion
+ How can it be improved?
.github/ISSUE_TEMPLATE/feature-request.md ADDED
@@ -0,0 +1,19 @@
+ ---
+ name: Feature request
+ about: Suggest an idea for this project
+ title: ''
+ labels: enhancement, triage
+ assignees: ''
+
+ ---
+
+ ## Policy and info
+ - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
+ - Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
+ - Consider adding the label "good first issue" for interesting but easy features.
+
+ ## Feature description
+ A clear and concise description of what you would like to have.
+
+ ## Motivation/Application
+ Why is this feature useful?
.github/PULL_REQUEST_TEMPLATE/PULL_REQUEST_TEMPLATE.md ADDED
@@ -0,0 +1,9 @@
+ **YOU MAY DELETE THE ENTIRE TEMPLATE BELOW.**
+
+ ## How Has This Been Tested?
+
+ Please describe if you have either:
+
+ - Generated the "example" project
+ - Ran the entire benchmark suite
+ - Something else
.github/workflows/automation.yml ADDED
@@ -0,0 +1,29 @@
+ name: Automation Workflow
+
+ on:
+   schedule:
+     - cron: '0 0 * * *'
+   issues:
+     types: [opened, edited, reopened]
+   pull_request:
+     types: [opened, edited, reopened]
+
+ jobs:
+   mark-stale-issues:
+     runs-on: ubuntu-latest
+     steps:
+       - name: Mark stale issues
+         uses: actions/stale@v4
+         with:
+           stale-issue-message: 'This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.'
+           days-before-stale: 60
+
+ # Add additional jobs as needed
+ # job-name:
+ #   runs-on: ubuntu-latest
+ #   steps:
+ #     - name: Job step name
+ #       uses: action-name@version
+ #       with:
+ #         parameter1: value1
+ #
.github/workflows/ci.yaml ADDED
@@ -0,0 +1,49 @@
+ name: Tox pytest all python versions
+
+ on:
+   push:
+     branches: [ main ]
+     paths:
+       - gpt_engineer/**
+       - tests/**
+   pull_request:
+     branches: [ main ]
+
+ concurrency:
+   group: ${{ github.workflow }} - ${{ github.ref }}
+   cancel-in-progress: true
+
+ jobs:
+   test:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         python-version: ['3.10', '3.11', '3.12']
+
+     steps:
+       - name: Checkout repository
+         uses: actions/checkout@v3
+
+       - name: Set up Python ${{ matrix.python-version }}
+         uses: actions/setup-python@v4
+         with:
+           python-version: ${{ matrix.python-version == '3.12' && '3.12.3' || matrix.python-version }} # Using 3.12.3 to resolve Pydantic ForwardRef issue
+           cache: 'pip' # Note that pip is for the tox level. Poetry is still used for installing the specific environments (tox.ini)
+
+       - name: Check Python Version
+         run: python --version
+
+       - name: Install dependencies
+         run: |
+           python -m pip install --upgrade pip
+           pip install tox==4.15.0 poetry
+
+       - name: Run tox
+         env:
+           OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
+         run: tox
+
+       # Temporarily disabling codecov until we resolve codecov rate limiting issue
+       # - name: Report coverage
+       #   run: |
+       #     bash <(curl -s https://codecov.io/bash)
.github/workflows/pre-commit.yaml ADDED
@@ -0,0 +1,22 @@
+ name: pre-commit
+
+ on:
+   pull_request:
+   push:
+     branches: [main]
+
+ jobs:
+   pre-commit:
+     runs-on: ubuntu-latest
+
+     permissions:
+       contents: write
+
+     steps:
+       - uses: actions/checkout@v3
+
+       - uses: actions/setup-python@v4
+
+       - uses: pre-commit/action@v3.0.0
+         with:
+           extra_args: --all-files
.github/workflows/release.yaml ADDED
@@ -0,0 +1,73 @@
+ name: Build and publish Python packages to PyPI
+
+ on:
+   workflow_dispatch:
+   release:
+     types:
+       - published
+
+ jobs:
+   build:
+     runs-on: ubuntu-latest
+     strategy:
+       matrix:
+         python-version:
+           - "3.10"
+     steps:
+       - uses: actions/checkout@v3
+
+       - uses: actions/setup-python@v4
+         with:
+           python-version: ${{ matrix.python-version }}
+           # Removed the cache line that was here
+
+       # Install Poetry
+       - name: Install Poetry
+         run: |
+           curl -sSL https://install.python-poetry.org | python3 -
+
+       # Add Poetry to PATH
+       - name: Add Poetry to PATH
+         run: echo "$HOME/.local/bin" >> $GITHUB_PATH
+
+       # Cache Poetry's dependencies based on the lock file
+       - name: Set up Poetry cache
+         uses: actions/cache@v3
+         with:
+           path: ~/.cache/pypoetry
+           key: ${{ runner.os }}-poetry-${{ hashFiles('**/poetry.lock') }}
+           restore-keys: |
+             ${{ runner.os }}-poetry-
+
+       # Install dependencies using Poetry (if any)
+       - name: Install dependencies
+         run: poetry install
+
+       # Build package using Poetry
+       - name: Build package
+         run: poetry build --format sdist
+
+       # Upload package as build artifact
+       - uses: actions/upload-artifact@v3
+         with:
+           name: package
+           path: dist/
+
+   publish:
+     runs-on: ubuntu-latest
+     needs: build
+     environment:
+       name: pypi
+       url: https://pypi.org/p/gpt-engineer
+     permissions:
+       id-token: write
+     steps:
+       - uses: actions/download-artifact@v3
+         with:
+           name: package
+           path: dist/
+
+       - name: Publish packages to PyPI
+         uses: pypa/gh-action-pypi-publish@release/v1
+         with:
+           password: ${{ secrets.PYPI_API_TOKEN }}
.gitignore ADDED
@@ -0,0 +1,98 @@
+ # See https://help.github.com/ignore-files/ for more about ignoring files.
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ .history/
+
+ # Distribution / packaging
+ dist/
+ build/
+ *.egg-info/
+ *.egg
+
+ # Virtual environments
+ .env
+ .env.sh
+ venv/
+ ENV/
+ venv_test_installation/
+
+ # IDE-specific files
+ .vscode/
+ .idea/
+
+ # Compiled Python modules
+ *.pyc
+ *.pyo
+ *.pyd
+
+ # Python testing
+ .pytest_cache/
+ .ruff_cache/
+ .mypy_cache/
+ .coverage
+ coverage.*
+
+ # macOS specific files
+ .DS_Store
+
+ # Windows specific files
+ Thumbs.db
+
+ # this application's specific files
+ archive
+
+ # any log file
+ *log.txt
+ todo
+ scratchpad
+
+ # Pyenv
+ .python-version
+
+ .gpte_consent
+
+ # projects folder apart from default prompt
+
+ projects/*
+ !projects/example/prompt
+ !projects/example-improve
+ !projects/example-vision
+
+ # docs
+
+ docs/_build
+ docs/applications
+ docs/benchmark
+ docs/cli
+ docs/core
+ docs/intro
+ docs/tools
+
+ # coding assistants
+ .aider*
+ .gpteng
+
+ # webapp specific
+ webapp/node_modules
+ webapp/package-lock.json
+
+ webapp/.next/
+
+ .langchain.db
+
+ # TODO files
+ /!todo*
+
+ # ignore tox files
+ .tox
+
+ # locally saved datasets
+ gpt_engineer/benchmark/benchmarks/apps/dataset
+ gpt_engineer/benchmark/benchmarks/mbpp/dataset
+
+ gpt_engineer/benchmark/minimal_bench_config.toml
+
+ test.json
.pre-commit-config.yaml ADDED
@@ -0,0 +1,27 @@
+ # See https://pre-commit.com for more information
+ # See https://pre-commit.com/hooks.html for more hooks
+ fail_fast: true
+ default_stages: [commit]
+
+ repos:
+   - repo: https://github.com/psf/black
+     rev: 23.3.0
+     hooks:
+       - id: black
+         args: [--config, pyproject.toml]
+         types: [python]
+
+   - repo: https://github.com/charliermarsh/ruff-pre-commit
+     rev: "v0.0.272"
+     hooks:
+       - id: ruff
+         args: [--fix, --exit-non-zero-on-fix]
+
+   - repo: https://github.com/pre-commit/pre-commit-hooks
+     rev: v4.4.0
+     hooks:
+       - id: check-toml
+       - id: check-yaml
+       - id: detect-private-key
+       - id: end-of-file-fixer
+       - id: trailing-whitespace
.readthedocs.yaml ADDED
@@ -0,0 +1,39 @@
+ # .readthedocs.yaml
+ # Read the Docs configuration file
+ # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+ # Required
+ version: 2
+
+ # Set the OS, Python version and other tools you might need
+ build:
+   os: ubuntu-22.04
+   tools:
+     python: "3.11"
+   # You can also specify other tool versions:
+   # nodejs: "19"
+   # rust: "1.64"
+   # golang: "1.19"
+   jobs:
+     post_create_environment:
+       - pip install poetry
+     post_install:
+       - VIRTUAL_ENV=$READTHEDOCS_VIRTUALENV_PATH poetry install --with docs
+     pre_build:
+       - python docs/create_api_rst.py
+
+ # Build documentation in the "docs/" directory with Sphinx
+ sphinx:
+   configuration: docs/conf.py
+
+ # Optionally build your docs in additional formats such as PDF and ePub
+ # formats:
+ #   - pdf
+ #   - epub
+
+ # Optional but recommended, declare the Python requirements required
+ # to build your documentation
+ # See https://docs.readthedocs.io/en/stable/guides/reproducible-builds.html
+ # python:
+ #   install:
+ #     - requirements: docs/requirements.txt
Acknowledgements.md ADDED
@@ -0,0 +1,5 @@
+ # We thank the following people for inspiration
+
+ | Person | Content | File(s) | Source |
+ |----|---|---|---|
+ | Paul Gauthier | The prompt for the `improve code` step is strongly based on Paul's prompt in Aider | /preprompts/improve.txt | https://github.com/paul-gauthier/aider/blob/main/aider/coders/editblock_coder.py |
DISCLAIMER.md ADDED
@@ -0,0 +1,11 @@
+ # Disclaimer
+
+ gpt-engineer is an experimental application and is provided "as-is" without any warranty, express or implied. By using this software, you agree to assume all risks associated with its use, including but not limited to data loss, system failure, or any other issues that may arise.
+
+ The developers and contributors of this project do not accept any responsibility or liability for any losses, damages, or other consequences that may occur as a result of using this software. You are solely responsible for any decisions and actions taken based on the information provided by gpt-engineer.
+
+ Please note that the use of the GPT-4 language model can be expensive due to its token usage. By utilizing this project, you acknowledge that you are responsible for monitoring and managing your own token usage and the associated costs. It is highly recommended to check your OpenAI API usage regularly and set up any necessary limits or alerts to prevent unexpected charges.
+
+ As an autonomous experiment, gpt-engineer may generate code or take actions that are not in line with real-world business practices or legal requirements. It is your responsibility to ensure that any actions or decisions made by the generated code comply with all applicable laws, regulations, and ethical standards. The developers and contributors of this project shall not be held responsible for any consequences arising from the use of this software.
+
+ By using gpt-engineer, you agree to indemnify, defend, and hold harmless the developers, contributors, and any affiliated parties from and against any and all claims, damages, losses, liabilities, costs, and expenses (including reasonable attorneys' fees) arising from your use of this software or your violation of these terms.
GOVERNANCE.md ADDED
@@ -0,0 +1,76 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Governance Model of GPT-Engineer
2
+
3
+ ## I. Project Board Structure
4
+
5
+ ### Project Board
6
+
7
+ The Project Board is the central decision-making body for the project, overseeing both strategic and technical decisions of the open source project GPT-Engineer.
8
+
9
+ #### Composition:
10
+ - The Board consists of the project's founder, Anton Osika, and representatives from each significant contributing entity, including individual contributors and commercial partners.
11
+ - The board is restricted to a maximum of 7 seats.
12
+ - New board members are admitted by majority vote.
13
+ - Board members may be expelled by majority vote.
14
+
15
+ ## II. Roles and Responsibilities
16
+
17
+ ### Veto due to Ethical Considerations
18
+ - The founder has veto right over any decisions made by the Board.
19
+ - This veto power is a safeguard to ensure the project's direction remains true to its original vision and ethos.
20
+
21
+ ### Contribution-Conditioned Decision Making
22
+ - Each board member has one vote as long as they qualify as active contributors.
23
+ - To qualify as an active contributor, a board member or the entity they represent, must have made 6 significant contributions on the GPT-Engineer GitHub page over the past 90 days.
24
+ - A significant contribution is:
25
+ - A merged pull request with at least 3 lines of code.
26
+ - Engagement in a GitHub/Discord bug report, where the board members' input leads to the confirmed resolution of the bug. If the solution is in terms of a merged pull request, the bug resolution together with the merged pull request counts as one significant contribution.
27
+ - A non-code, but necessary, community activity agreed on by the board, such as administration, corporate design, workflow design etc., deemed to take more than 1 hour. Participation in meetings or discussions does not count as a significant contribution.
28
+ - A board member may retain their seat on the board without voting rights.
29
+
30
+ ## III. Decision-Making Process
31
+
32
+ ### Majority Voting
33
+ - Decisions are made based on a simple majority vote. Majority means more than half of board members with voting rights agree on one decision, regardless of the number of choices.
34
+ - The founder's veto can override the majority decision if exercised.
35
+
36
+ ### Regular Meetings and Reporting
37
+ - The Board will convene regularly, with the frequency of meetings decided by the Board members.
38
+ - Decisions, discussion points, and contributions will be transparently documented and shared within the project community.
39
+
40
+ ## IV. Data Access and Confidentiality
41
+
42
+ ### Board Members' Right to Access Data
43
+ - Any confidential data collected by GPT-Engineer is accessible to the board members after signing a relevant non-disclosure agreement (NDA).
44
+ - A relevant NDA requires a board member to erase any copies of confidential data obtained by the time of leaving the board.
45
+
46
+ ## V. Scope of Voting
47
+
48
+ ### Essential Topics
49
+ - Board voting is restricted to essential topics.
50
+ - Essential topics include essential technical topics and essential community topics.
51
+ - An essential technical topic is a change in the GPT-engineer code base that is likely to introduce breaking changes, or significantly change the user experience for users and developers.
52
+ - Essential community topics are changes to the community's governance or other central policy documents such as the readme or license.
53
+ - Day-to-day tasks such as bug fixes or implementation of new features outside the core module do not require voting.
54
+
55
+ ## VI. Transparency
56
+
57
+ ### Commitment to Transparency
58
+ - The governance process will be transparent, with key decisions, meeting minutes, and voting results publicly available, except for sensitive or confidential matters.
59
+
60
+ ## VII. Amendments
61
+
62
+ ### Changes to Governance Structure
63
+ - The governance model can be revised as the project evolves. Proposals for changes can be made by any Board member and will require a majority vote for adoption.
64
+
65
+ ## VIII. The GPT-Engineer Brand
66
+
67
+ ### Copyright and Stewardship
68
+ - The creator of GPT-engineer (Anton Osika) will be the steward of the GPT-engineer brand to decide when and how it can be used, and is committed to never jeopardizing the interest of the open source community in this stewardship.
69
+ - Anton Osika possesses the exclusive intellectual property rights for the trademark 'GPT-engineer,' encompassing all case variations such as 'gpt-engineer,' 'GPT-engineer,' and 'GPTE.' This ownership extends to the exclusive legal authority to utilize the 'GPT-engineer' trademark in the establishment and branding of both commercial and non-profit entities. It includes, but is not limited to, the use of the trademark in business names, logos, marketing materials, and other forms of corporate identity. Any use of the 'GPT-engineer' trademark, in any of its case variations, by other parties for commercial or non-commercial purposes requires express permission or a license agreement from Anton Osika.
70
+
71
+ # Current Board Members
72
+ - Anton Osika
73
+ - Axel Theorell
74
+ - Corey Gallon
75
+ - Peter Harrington
76
+ - Theo McCabe
LICENSE ADDED
@@ -0,0 +1,21 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ MIT License
2
+
3
+ Copyright (c) 2023 Anton Osika
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
MANIFEST.in ADDED
@@ -0,0 +1 @@
 
 
1
+ recursive-include gpt_engineer/preprompts *
Makefile ADDED
@@ -0,0 +1,52 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #Sets the default shell for executing recipe commands to /bin/bash.
2
+ SHELL := /bin/bash
3
+
4
+ # Color codes for terminal output
5
+ COLOR_RESET=\033[0m
6
+ COLOR_CYAN=\033[1;36m
7
+ COLOR_GREEN=\033[1;32m
8
+
9
+ # Defines the targets help, install, and run as phony targets.
10
+ .PHONY: help install run
11
+
12
+ #Sets the default goal to help when no target is specified on the command line.
13
+ .DEFAULT_GOAL := help
14
+
15
+ #Disables echoing of commands.
16
+ .SILENT:
17
+
18
+ #Sets the variable name to the second word from the MAKECMDGOALS.
19
+ name := $(word 2,$(MAKECMDGOALS))
20
+
21
+ #Defines a target named help.
22
+ help:
23
+ @echo "Please use 'make <target>' where <target> is one of the following:"
24
+ @echo " help Return this message with usage instructions."
25
+ @echo " install Will install the dependencies using Poetry."
26
+ @echo " run <folder_name> Runs GPT Engineer on the folder with the given name."
27
+
28
+ #Defines a target named install. This target will install the project using Poetry.
29
+ install: poetry-install install-pre-commit farewell
30
+
31
+ #Defines a target named poetry-install. This target will install the project dependencies using Poetry.
32
+ poetry-install:
33
+ @echo -e "$(COLOR_CYAN)Installing project with Poetry...$(COLOR_RESET)" && \
34
+ poetry install
35
+
36
+ #Defines a target named install-pre-commit. This target will install the pre-commit hooks.
37
+ install-pre-commit:
38
+ @echo -e "$(COLOR_CYAN)Installing pre-commit hooks...$(COLOR_RESET)" && \
39
+ poetry run pre-commit install
40
+
41
+ #Defines a target named farewell. This target will print a farewell message.
42
+ farewell:
43
+ @echo -e "$(COLOR_GREEN)All done!$(COLOR_RESET)"
44
+
45
+ #Defines a target named run. This target will run GPT Engineer on the folder with the given name.
46
+ run:
47
+ @echo -e "$(COLOR_CYAN)Running GPT Engineer on $(COLOR_GREEN)$(name)$(COLOR_CYAN) folder...$(COLOR_RESET)" && \
48
+ poetry run gpt-engineer projects/$(name)
49
+
50
+ # Counts the lines of code in the project
51
+ cloc:
52
+ cloc . --exclude-dir=node_modules,dist,build,.mypy_cache,benchmark --exclude-list-file=.gitignore --fullpath --not-match-d='docs/_build' --by-file
README.md CHANGED
@@ -1,12 +1,112 @@
1
- ---
2
- title: Gpt Eng
3
- emoji: 👁
4
- colorFrom: yellow
5
- colorTo: green
6
- sdk: gradio
7
- sdk_version: 4.38.1
8
- app_file: app.py
9
- pinned: false
10
- ---
11
-
12
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # gpt-engineer
2
+
3
+ [![GitHub Repo stars](https://img.shields.io/github/stars/gpt-engineer-org/gpt-engineer?style=social)](https://github.com/gpt-engineer-org/gpt-engineer)
4
+ [![Discord Follow](https://dcbadge.vercel.app/api/server/8tcDQ89Ej2?style=flat)](https://discord.gg/8tcDQ89Ej2)
5
+ [![License](https://img.shields.io/github/license/gpt-engineer-org/gpt-engineer)](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/LICENSE)
6
+ [![GitHub Issues or Pull Requests](https://img.shields.io/github/issues/gpt-engineer-org/gpt-engineer)](https://github.com/gpt-engineer-org/gpt-engineer/issues)
7
+ ![GitHub Release](https://img.shields.io/github/v/release/gpt-engineer-org/gpt-engineer)
8
+ [![Twitter Follow](https://img.shields.io/twitter/follow/antonosika?style=social)](https://twitter.com/antonosika)
9
+
10
+ gpt-engineer lets you:
11
+ - Specify software in natural language
12
+ - Sit back and watch as an AI writes and executes the code
13
+ - Ask the AI to implement improvements
14
+
15
+ ## Getting Started
16
+
17
+ ### Install gpt-engineer
18
+
19
+ For **stable** release:
20
+
21
+ - `python -m pip install gpt-engineer`
22
+
23
+ For **development**:
24
+ - `git clone https://github.com/gpt-engineer-org/gpt-engineer.git`
25
+ - `cd gpt-engineer`
26
+ - `poetry install`
27
+ - `poetry shell` to activate the virtual environment
28
+
29
+ We actively support Python 3.10 - 3.12. The last version to support Python 3.8 - 3.9 was [0.2.6](https://pypi.org/project/gpt-engineer/0.2.6/).
30
+
31
+ ### Setup API key
32
+
33
+ Choose **one** of:
34
+ - Export env variable (you can add this to .bashrc so that you don't have to do it each time you start the terminal)
35
+ - `export OPENAI_API_KEY=[your api key]`
36
+ - .env file:
37
+ - Create a copy of `.env.template` named `.env`
38
+ - Add your OPENAI_API_KEY in .env
39
+ - Custom model:
40
+ - See [docs](https://gpt-engineer.readthedocs.io/en/latest/open_models.html), supports local model, azure, etc.
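For the `.env` route, the file is a plain `KEY=value` list; a minimal sketch (the key below is a placeholder, not a real credential) looks like:

```shell
# .env — copied from .env.template; replace the placeholder with your own key
OPENAI_API_KEY=sk-your-key-here
```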
41
+
42
+ Check the [Windows README](./WINDOWS_README.md) for Windows usage.
43
+
44
+ **Other ways to run:**
45
+ - Use Docker ([instructions](docker/README.md))
46
+ - Do everything in your browser:
47
+ [![Open in GitHub Codespaces](https://github.com/codespaces/badge.svg)](https://github.com/gpt-engineer-org/gpt-engineer/codespaces)
48
+
49
+ ### Create new code (default usage)
50
+ - Create an empty folder for your project anywhere on your computer
51
+ - Create a file called `prompt` (no extension) inside your new folder and fill it with instructions
52
+ - Run `gpte <project_dir>` with a relative path to your folder
53
+ - For example: `gpte projects/my-new-project` from the gpt-engineer directory root with your new folder in `projects/`
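The steps above can be sketched as a short shell session (the project name and prompt text here are only examples):

```shell
# Create an empty project folder (any location works; this name is hypothetical)
mkdir -p projects/my-new-project

# Write the natural-language specification into a file named exactly `prompt`
printf 'A snake game in Python using the curses library\n' > projects/my-new-project/prompt

# With gpt-engineer installed, you would then run:
# gpte projects/my-new-project
```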
54
+
55
+ ### Improve existing code
56
+ - Locate a folder with code which you want to improve anywhere on your computer
57
+ - Create a file called `prompt` (no extension) inside that folder and fill it with instructions for how you want to improve the code
58
+ - Run `gpte <project_dir> -i` with a relative path to your folder
59
+ - For example: `gpte projects/my-old-project -i` from the gpt-engineer directory root with your folder in `projects/`
60
+
61
+ ### Benchmark custom agents
62
+ - gpt-engineer installs the binary 'bench', which gives you a simple interface for benchmarking your own agent implementations against popular public datasets.
63
+ - The easiest way to get started with benchmarking is by checking out the [template](https://github.com/gpt-engineer-org/gpte-bench-template) repo, which contains detailed instructions and an agent template.
64
+ - Currently supported benchmarks:
65
+ - [APPS](https://github.com/hendrycks/apps)
66
+ - [MBPP](https://github.com/google-research/google-research/tree/master/mbpp)
67
+
68
+ By running gpt-engineer, you agree to our [terms](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/TERMS_OF_USE.md).
69
+
70
+
71
+ ## Relation to gptengineer.app (GPT Engineer)
72
+ [gptengineer.app](https://gptengineer.app/) is a commercial project for the automatic generation of web apps.
73
+ It features a UI for non-technical users connected to a git-controlled codebase.
74
+ The gptengineer.app team is actively supporting the open source community.
75
+
76
+
77
+ ## Features
78
+
79
+ ### Pre Prompts
80
+ You can specify the "identity" of the AI agent by overriding the `preprompts` folder with your own version. You can do so via the `--use-custom-preprompts` argument.
81
+
82
+ Editing the `preprompts` is how you make the agent remember things between projects.
83
+
84
+ ### Vision
85
+
86
+ By default, gpt-engineer expects text input via a `prompt` file. It can also accept image inputs for vision-capable models. This can be useful for adding UX or architecture diagrams as additional context for GPT Engineer. You can do this by specifying an image directory with the `--image_directory` flag and setting a vision-capable model in the second CLI argument.
87
+
88
+ E.g. `gpte projects/example-vision gpt-4-vision-preview --prompt_file prompt/text --image_directory prompt/images -i`
89
+
90
+ ### Open source, local and alternative models
91
+
92
+ By default, gpt-engineer supports OpenAI Models via the OpenAI API or Azure OpenAI API, as well as Anthropic models.
93
+
94
+ With a little extra setup, you can also run with open source models like WizardCoder. See the [documentation](https://gpt-engineer.readthedocs.io/en/latest/open_models.html) for example instructions.
95
+
96
+ ## Mission
97
+
98
+ The gpt-engineer community mission is to **maintain tools that coding agent builders can use and facilitate collaboration in the open source community**.
99
+
100
+ If you are interested in contributing to this, we are interested in having you.
101
+
102
+ If you want to see our broader ambitions, check out the [roadmap](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/ROADMAP.md), and join
103
+ [discord](https://discord.gg/8tcDQ89Ej2)
104
+ to learn how you can [contribute](.github/CONTRIBUTING.md) to it.
105
+
106
+ gpt-engineer is [governed](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/GOVERNANCE.md) by a board of long-term contributors. If you contribute routinely and have an interest in shaping the future of gpt-engineer, you will be considered for the board.
107
+
108
+ ## Example
109
+
110
+
111
+
112
+ https://github.com/gpt-engineer-org/gpt-engineer/assets/4467025/40d0a9a8-82d0-4432-9376-136df0d57c99
ROADMAP.md ADDED
@@ -0,0 +1,31 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Roadmap
2
+
3
+ <img width="800" alt="image" src="https://github.com/gpt-engineer-org/gpt-engineer/assets/48092564/06bce891-00ef-4052-bbbd-77af8b843fff">
4
+
5
+
6
+ This document is a general roadmap guide to the gpt-engineer project's strategic direction.
7
+ Our goal is to continually improve by focusing on three main pillars:
8
+ - User Experience,
9
+ - Technical Features, and
10
+ - Performance Tracking/Testing.
11
+
12
+ Each pillar is supported by a set of epics, reflecting our major goals and initiatives.
13
+
14
+
15
+ ## Tracking Progress with GitHub Projects
16
+
17
+ We are using [GitHub Projects](https://github.com/orgs/gpt-engineer-org/projects/3) to track the progress of our roadmap.
18
+
19
+ Each issue within our project is categorized under one of the main pillars and, in most cases, associated epics. You can check our [Project's README](https://github.com/orgs/gpt-engineer-org/projects/3?pane=info) section to better understand our logic and organization.
20
+
21
+
22
+
23
+ # How you can help out
24
+
25
+ You can:
26
+
27
+ - Post a "design" as a Google Doc in our [Discord](https://discord.com/channels/1119885301872070706/1120698764445880350), and ask for feedback to address one of the items in the roadmap
28
+ - Submit PRs to address one of the items in the roadmap
29
+ - Do a review of someone else's PR and propose next steps (further review, merge, close)
30
+
31
+ 🙌 Volunteer work in any of these will get acknowledged. 🙌
TERMS_OF_USE.md ADDED
@@ -0,0 +1,13 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Terms of Use
2
+
3
+ Welcome to gpt-engineer! By utilizing this powerful tool, you acknowledge and agree to the following comprehensive Terms of Use. We also encourage you to review the linked [disclaimer of warranty](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/DISCLAIMER.md) for additional information.
4
+
5
+ Both OpenAI, L.L.C. and the dedicated creators behind the remarkable gpt-engineer have implemented a data collection process focused on enhancing the product's capabilities. This endeavor is undertaken with utmost care and dedication to safeguarding user privacy. Rest assured that no information that could be directly attributed to any individual is stored.
6
+
7
+ It's important to be aware that the utilization of natural text inputs, including the 'prompt' and 'feedback' files, may be subject to storage. While it's theoretically possible to establish connections between a person's writing style or content within these files and their real-life identity, please note that the creators of gpt-engineer explicitly assure that such attempts will never be made.
8
+
9
+ For a deeper understanding of OpenAI's overarching terms of use, we encourage you to explore the details available [here](https://openai.com/policies/terms-of-use).
10
+
11
+ Optionally, gpt-engineer collects usage data for the purpose of improving gpt-engineer. Data collection only happens when a consent file called .gpte_consent is present in the gpt-engineer directory. Note that gpt-engineer cannot prevent data that passes through it to a third party (for example OpenAI) from being stored by that third party.
12
+
13
+ Your engagement with gpt-engineer is an acknowledgment and acceptance of these terms, demonstrating your commitment to using this tool responsibly and within the bounds of ethical conduct. We appreciate your trust and look forward to the exciting possibilities that gpt-engineer can offer in your endeavors.
WINDOWS_README.md ADDED
@@ -0,0 +1,68 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Windows Setup
2
+ ## Short version
3
+
4
+ On Windows, follow the standard [README.md](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/README.md), but to set the API key, do one of:
5
+ - `set OPENAI_API_KEY=[your api key]` on cmd
6
+ - `$env:OPENAI_API_KEY="[your api key]"` on powershell
7
+
8
+ ## Full setup guide
9
+
10
+ Choose either **stable** or **development**.
11
+
12
+ For **stable** release:
13
+
14
+ Run `pip install gpt-engineer` in the command line as an administrator
15
+
16
+ Or:
17
+
18
+ 1. Open your web browser and navigate to the Python Package Index (PyPI) website: <https://pypi.org/project/gpt-engineer/>.
19
+ 2. On the PyPI page for the gpt-engineer package, locate the "Download files" section. Here you'll find a list of available versions and their corresponding download links.
20
+ 3. Identify the version of gpt-engineer you want to install and click on the associated download link. This will download the package file (usually a .tar.gz or .whl file) to your computer.
21
+ 4. Once the package file is downloaded, open your Python development environment or IDE.
22
+ 5. In your Python development environment, look for an option to install packages or manage dependencies. The exact location and terminology may vary depending on your IDE. For example, in PyCharm, you can go to "File" > "Settings" > "Project: \<project-name>" > "Python Interpreter" to manage packages.
23
+ 6. In the package management interface, you should see a list of installed packages. Look for an option to add or install a new package.
24
+ 7. Click on the "Add Package" or "Install Package" button.
25
+ 8. In the package installation dialog, choose the option to install from a file or from a local source.
26
+ 9. Browse and select the downloaded gpt-engineer package file from your computer.
27
+
28
+ For **development**:
29
+
30
+ - `git clone [email protected]:gpt-engineer-org/gpt-engineer.git`
31
+ - `cd gpt-engineer`
32
+ - `poetry install`
33
+ - `poetry shell` to activate the virtual environment
34
+
35
+ ### Setup
36
+
37
+ With an api key from OpenAI:
38
+
39
+ Run `set OPENAI_API_KEY=[your API key]` in the command line
40
+
41
+ Or:
42
+
43
+ 1. In the Start Menu, type to search for "Environment Variables" and click on "Edit the system environment variables".
44
+ 2. In the System Properties window, click on the "Environment Variables" button.
45
+ 3. In the Environment Variables window, you'll see two sections: User variables and System variables.
46
+ 4. To set a user-specific environment variable, select the "New" button under the User variables section.
47
+ 5. To set a system-wide environment variable, select the "New" button under the System variables section.
48
+ 6. Enter the variable name "OPENAI_API_KEY" in the "Variable name" field.
49
+ 7. Enter the variable value (e.g., your API key) in the "Variable value" field.
50
+ 8. Click "OK" to save the changes.
51
+ 9. Close any open command prompt or application windows and reopen them for the changes to take effect.
52
+
53
+ Now you can use `%OPENAI_API_KEY%` when prompted to input your key.
54
+
55
+ ### Run
56
+
57
+ - Create an empty folder. If inside the repo, you can:
58
+ - Run `xcopy /E projects\example projects\my-new-project` in the command line
59
+ - Or hold CTRL and drag the folder down to create a copy, then rename to fit your project
60
+ - Fill in the `prompt` file in your new folder
61
+ - `gpt-engineer projects/my-new-project`
62
+ - (Note, `gpt-engineer --help` lets you see all available options. For example `--steps use_feedback` lets you improve/fix code in a project)
63
+
64
+ By running gpt-engineer you agree to our [ToS](https://github.com/gpt-engineer-org/gpt-engineer/blob/main/TERMS_OF_USE.md).
65
+
66
+ ### Results
67
+
68
+ - Check the generated files in `projects/my-new-project/workspace`
citation.cff ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ cff-version: 1.0.0
2
+ message: "If you use this software, please cite it as below."
3
+ authors:
4
+ - family-names: Osika
5
+ given-names: Anton
6
+ title: gpt-engineer
7
+ version: 0.1.0
8
+ date-released: 2023-04-23
9
+ repository-code: https://github.com/gpt-engineer-org/gpt-engineer
10
+ url: https://gpt-engineer.readthedocs.io
docker-compose.yml ADDED
@@ -0,0 +1,16 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ services:
2
+ gpt-engineer:
3
+ build:
4
+ context: .
5
+ dockerfile: docker/Dockerfile
6
+ stdin_open: true
7
+ tty: true
8
+ # Set the API key from the .env file
9
+ env_file:
10
+ - .env
11
+ ## OR set the API key directly
12
+ # environment:
13
+ # - OPENAI_API_KEY=YOUR_API_KEY
14
+ image: gpt-engineer
15
+ volumes:
16
+ - ./projects/example:/project
docker/Dockerfile ADDED
@@ -0,0 +1,29 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Stage 1: Builder stage
2
+ FROM python:3.11-slim AS builder
3
+
4
+ RUN apt-get update && apt-get install -y --no-install-recommends \
5
+ tk \
6
+ tcl \
7
+ curl \
8
+ git \
9
+ && rm -rf /var/lib/apt/lists/*
10
+
11
+ WORKDIR /app
12
+
13
+ COPY . .
14
+
15
+ RUN pip install --no-cache-dir -e .
16
+
17
+ # Stage 2: Final stage
18
+ FROM python:3.11-slim
19
+
20
+ WORKDIR /app
21
+
22
+ COPY --from=builder /usr/local/lib/python3.11/site-packages /usr/local/lib/python3.11/site-packages
23
+ COPY --from=builder /usr/local/bin /usr/local/bin
24
+ COPY --from=builder /usr/bin /usr/bin
25
+ COPY --from=builder /app .
26
+
27
+ COPY docker/entrypoint.sh .
28
+
29
+ ENTRYPOINT ["bash", "/app/entrypoint.sh"]
docker/README.md ADDED
@@ -0,0 +1,69 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Getting Started Using Docker
2
+
3
+ This guide provides step-by-step instructions on how to set up and run the Docker environment for your GPT-Engineer project.
4
+
5
+ ## Prerequisites
6
+
7
+ - Docker installed on your machine.
8
+ - Git (for cloning the repository).
9
+
10
+ ## Setup Instructions
11
+
12
+ ### Using Docker CLI
13
+
14
+ 1. **Clone the Repository**
15
+
16
+ ```bash
17
+ git clone https://github.com/gpt-engineer-org/gpt-engineer.git
18
+ cd gpt-engineer
19
+ ```
20
+
21
+ 2. **Build the Docker Image**
22
+
23
+ ```bash
24
+ docker build --rm -t gpt-engineer -f docker/Dockerfile .
25
+ ```
26
+
27
+ 3. **Run the Docker Container**
28
+
29
+ ```bash
30
+ docker run -it --rm -e OPENAI_API_KEY="YOUR_OPENAI_KEY" -v ./your-project:/project gpt-engineer
31
+ ```
32
+
33
+ Replace `YOUR_OPENAI_KEY` with your actual OpenAI API key. The `-v` flag mounts your local `your-project` directory inside the container. Replace this with your actual project directory. Ensure this directory contains all necessary files, including the `prompt` file.
34
+
35
+ ### Using Docker Compose
36
+
37
+ 1. **Clone the Repository** (if not already done)
38
+
39
+ ```bash
40
+ git clone https://github.com/gpt-engineer-org/gpt-engineer.git
41
+ cd gpt-engineer
42
+ ```
43
+
44
+ 2. **Build and Run using Docker Compose**
45
+
46
+ ```bash
47
+ docker-compose -f docker-compose.yml build
48
+ docker-compose run --rm gpt-engineer
49
+ ```
50
+
51
+ Set the `OPENAI_API_KEY` in `docker-compose.yml` using an `.env` file or as an environment variable. Mount your project directory to the container using volumes, e.g., `"./projects/example:/project"` where `./projects/example` is the path to your project directory.
52
+
53
+ 3. **Another alternative using Docker Compose**
54
+
55
+ Since there is only one `docker-compose.yml` file, you could run it without the -f option.
56
+ - `docker compose up -d --build` - To build and start the containers defined in your `docker-compose.yml` file in detached mode
57
+ - `docker compose up -d` - To start the containers defined in your `docker-compose.yml` file in detached mode
58
+ - `docker compose down` - To stop and remove all containers, networks, and volumes associated with the `docker-compose.yml`
59
+ - `docker compose restart` - To restart the containers defined in the `docker-compose.yml` file
60
+
61
+ ## Debugging
62
+
63
+ To facilitate debugging, you can run a shell inside the built Docker image:
64
+
65
+ ```bash
66
+ docker run -it --entrypoint /bin/bash gpt-engineer
67
+ ```
68
+
69
+ This opens a shell inside the Docker container, allowing you to execute commands and inspect the environment manually.
docker/entrypoint.sh ADDED
@@ -0,0 +1,10 @@
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env bash
2
+ # -*- coding: utf-8 -*-
3
+
4
+ project_dir="/project"
5
+
6
+ # Run the gpt engineer script
7
+ gpt-engineer $project_dir "$@"
8
+
9
+ # Patch the permissions of the generated files to be owned by nobody except prompt file
10
+ find "$project_dir" -mindepth 1 -maxdepth 1 ! -path "$project_dir/prompt" -exec chown -R nobody:nogroup {} + -exec chmod -R 777 {} +
docs/Makefile ADDED
@@ -0,0 +1,20 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Minimal makefile for Sphinx documentation
2
+ #
3
+
4
+ # You can set these variables from the command line.
5
+ SPHINXOPTS =
6
+ SPHINXBUILD = python -msphinx
7
+ SPHINXPROJ = gpt_engineer
8
+ SOURCEDIR = .
9
+ BUILDDIR = _build
10
+
11
+ # Put it first so that "make" without argument is like "make help".
12
+ help:
13
+ @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
14
+
15
+ .PHONY: help Makefile
16
+
17
+ # Catch-all target: route all unknown targets to Sphinx using the new
18
+ # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
19
+ %: Makefile
20
+ @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
docs/api_reference.rst ADDED
@@ -0,0 +1,144 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ .. _api_reference:
+
+ =============
+ API Reference
+ =============
+
+ :mod:`gpt_engineer.applications`: Applications
+ ===============================================
+
+ .. automodule:: gpt_engineer.applications
+     :no-members:
+     :no-inherited-members:
+
+ Classes
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: applications
+     :template: class.rst
+
+     applications.cli.cli_agent.CliAgent
+     applications.cli.file_selector.DisplayablePath
+
+ Functions
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: applications
+
+     applications.cli.collect.collect_and_send_human_review
+     applications.cli.collect.collect_learnings
+     applications.cli.collect.send_learning
+     applications.cli.learning.ask_collection_consent
+     applications.cli.learning.ask_for_valid_input
+     applications.cli.learning.check_collection_consent
+     applications.cli.learning.extract_learning
+     applications.cli.learning.get_session
+     applications.cli.learning.human_review_input
+     applications.cli.main.get_preprompts_path
+     applications.cli.main.load_env_if_needed
+     applications.cli.main.load_prompt
+     applications.cli.main.main
+
+ :mod:`gpt_engineer.benchmark`: Benchmark
+ =========================================
+
+ .. automodule:: gpt_engineer.benchmark
+     :no-members:
+     :no-inherited-members:
+
+ Functions
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: benchmark
+
+     benchmark.__main__.get_agent
+     benchmark.__main__.main
+     benchmark.benchmarks.gpteng.eval_tools.assert_exists_in_source_code
+     benchmark.benchmarks.gpteng.eval_tools.check_evaluation_component
+     benchmark.benchmarks.gpteng.eval_tools.check_language
+     benchmark.benchmarks.gpteng.eval_tools.run_code_class_has_property
+     benchmark.benchmarks.gpteng.eval_tools.run_code_class_has_property_w_value
+     benchmark.benchmarks.gpteng.eval_tools.run_code_eval_function
+     benchmark.benchmarks.gpteng.load.eval_to_task
+     benchmark.benchmarks.gpteng.load.expect_to_assertion
+     benchmark.benchmarks.gpteng.load.load_gpteng
+     benchmark.benchmarks.gptme.load.load_gptme
+     benchmark.benchmarks.load.get_benchmark
+     benchmark.run.print_results
+     benchmark.run.run
+
+ :mod:`gpt_engineer.core`: Core
+ ===============================
+
+ .. automodule:: gpt_engineer.core
+     :no-members:
+     :no-inherited-members:
+
+ Classes
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: core
+     :template: class.rst
+
+     core.base_agent.BaseAgent
+     core.base_execution_env.BaseExecutionEnv
+     core.default.disk_execution_env.DiskExecutionEnv
+     core.default.disk_memory.DiskMemory
+     core.default.simple_agent.SimpleAgent
+     core.files_dict.FilesDict
+     core.version_manager.BaseVersionManager
+
+ Functions
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: core
+
+     core.ai.serialize_messages
+     core.chat_to_files.apply_diffs
+     core.chat_to_files.chat_to_files_dict
+     core.chat_to_files.parse_diff_block
+     core.chat_to_files.parse_diffs
+     core.chat_to_files.parse_hunk_header
+     core.default.paths.memory_path
+     core.default.paths.metadata_path
+     core.default.simple_agent.default_config_agent
+     core.default.steps.curr_fn
+     core.default.steps.execute_entrypoint
+     core.default.steps.gen_code
+     core.default.steps.gen_entrypoint
+     core.default.steps.improve
+     core.default.steps.salvage_correct_hunks
+     core.default.steps.setup_sys_prompt
+     core.default.steps.setup_sys_prompt_existing_code
+     core.diff.count_ratio
+     core.diff.is_similar
+     core.files_dict.file_to_lines_dict
+
+ :mod:`gpt_engineer.tools`: Tools
+ =================================
+
+ .. automodule:: gpt_engineer.tools
+     :no-members:
+     :no-inherited-members:
+
+ Functions
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: tools
+
+     tools.custom_steps.clarified_gen
+     tools.custom_steps.get_platform_info
+     tools.custom_steps.lite_gen
+     tools.custom_steps.self_heal
docs/code_conduct_link.rst ADDED
@@ -0,0 +1,2 @@
+ .. include:: ../.github/CODE_OF_CONDUCT.md
+     :parser: myst_parser.sphinx_
docs/conf.py ADDED
@@ -0,0 +1,204 @@
+ #!/usr/bin/env python
+ #
+ # gpt_engineer documentation build configuration file, created by
+ # sphinx-quickstart on Fri Jun 9 13:47:02 2017.
+ #
+ # This file is execfile()d with the current directory set to its
+ # containing dir.
+ #
+ # Note that not all possible configuration values are present in this
+ # autogenerated file.
+ #
+ # All configuration values have a default; values that are commented out
+ # serve to show the default.
+
+ # If extensions (or modules to document with autodoc) are in another
+ # directory, add these directories to sys.path here. If the directory is
+ # relative to the documentation root, use os.path.abspath to make it
+ # absolute, like shown here.
+ #
+ import os
+ import sys
+
+ from pathlib import Path
+
+ import toml
+
+ sys.path.insert(0, os.path.abspath(".."))
+
+ ROOT_DIR = Path(__file__).parents[1].absolute()
+
+ with open("../pyproject.toml") as f:
+     data = toml.load(f)
+
+
+ # The master toctree document.
+ master_doc = "index"
+
+ # General information about the project.
+ project = data["tool"]["poetry"]["name"]
+ copyright = "2023 Anton Osika"
+ author = "Anton Osika & Contributors"
+
+ # The version info for the project you're documenting, acts as replacement
+ # for |version| and |release|, also used in various other places throughout
+ # the built documents.
+ #
+ # The short X.Y version.
+ version = data["tool"]["poetry"]["version"]
+ # The full version, including alpha/beta/rc tags.
+ release = data["tool"]["poetry"]["version"]
+
+
+ # -- General configuration ---------------------------------------------
+
+ # If your documentation needs a minimal Sphinx version, state it here.
+ #
+ # needs_sphinx = '1.0'
+
+ # Add any Sphinx extension module names here, as strings. They can be
+ # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
+ extensions = [
+     "sphinx.ext.autodoc",
+     "sphinx.ext.autodoc.typehints",
+     "sphinx.ext.autosummary",
+     "sphinx.ext.napoleon",
+     "sphinx.ext.viewcode",
+     "sphinx_copybutton",
+     "myst_parser",
+     "IPython.sphinxext.ipython_console_highlighting",
+ ]
+
+ # The suffix(es) of source filenames.
+ # You can specify multiple suffixes as a list of strings:
+
+ source_suffix = [".rst", ".md"]
+
+ autodoc_pydantic_model_show_json = False
+ autodoc_pydantic_field_list_validators = False
+ autodoc_pydantic_config_members = False
+ autodoc_pydantic_model_show_config_summary = False
+ autodoc_pydantic_model_show_validator_members = False
+ autodoc_pydantic_model_show_validator_summary = False
+ autodoc_pydantic_model_signature_prefix = "class"
+ autodoc_pydantic_field_signature_prefix = "param"
+ autodoc_member_order = "groupwise"
+ autoclass_content = "both"
+ autodoc_typehints_format = "short"
+
+ autodoc_default_options = {
+     "members": True,
+     "show-inheritance": True,
+     "inherited-members": "BaseModel",
+     "undoc-members": False,
+ }
+
+ # Add any paths that contain templates here, relative to this directory.
+ templates_path = ["_templates"]
+
+
+ # source_suffix = '.rst'
+
+
+ # The language for content autogenerated by Sphinx. Refer to documentation
+ # for a list of supported languages.
+ #
+ # This is also used if you do content translation via gettext catalogs.
+ # Usually you set "language" from the command line for these cases.
+ language = "en"
+
+ # List of patterns, relative to source directory, that match files and
+ # directories to ignore when looking for source files.
+ # These patterns also affect html_static_path and html_extra_path
+ exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"]
+
+ # The name of the Pygments (syntax highlighting) style to use.
+ pygments_style = "sphinx"
+
+ # If true, `todo` and `todoList` produce output, else they produce nothing.
+ todo_include_todos = False
+
+
+ # -- Options for HTML output -------------------------------------------
+
+ # The theme to use for HTML and HTML Help pages. See the documentation for
+ # a list of builtin themes.
+ #
+ # html_theme = 'alabaster'
+ html_theme = "sphinx_rtd_theme"
+
+ # Theme options are theme-specific and customize the look and feel of a
+ # theme further. For a list of options available for each theme, see the
+ # documentation.
+ #
+ # html_theme_options = {}
+
+ # Add any paths that contain custom static files (such as style sheets) here,
+ # relative to this directory. They are copied after the builtin static files,
+ # so a file named "default.css" will overwrite the builtin "default.css".
+ # html_static_path = ["_static"]
+
+
+ # -- Options for HTMLHelp output ---------------------------------------
+
+ # Output file base name for HTML help builder.
+ htmlhelp_basename = "gpt_engineerdoc"
+
+
+ # -- Options for LaTeX output ------------------------------------------
+
+ latex_elements = {
+     # The paper size ('letterpaper' or 'a4paper').
+     #
+     # 'papersize': 'letterpaper',
+     # The font size ('10pt', '11pt' or '12pt').
+     #
+     # 'pointsize': '10pt',
+     # Additional stuff for the LaTeX preamble.
+     #
+     # 'preamble': '',
+     # Latex figure (float) alignment
+     #
+     # 'figure_align': 'htbp',
+ }
+
+ # Grouping the document tree into LaTeX files. List of tuples
+ # (source start file, target name, title, author, documentclass
+ # [howto, manual, or own class]).
+ latex_documents = [
+     (master_doc, "gpt_engineer.tex", "GPT-Engineer Documentation", author, "manual"),
+ ]
+
+
+ # -- Options for manual page output ------------------------------------
+
+ # One entry per manual page. List of tuples
+ # (source start file, name, description, authors, manual section).
+ man_pages = [(master_doc, "gpt_engineer", "GPT-Engineer Documentation", [author], 1)]
+
+
+ # -- Options for Texinfo output ----------------------------------------
+
+ # Grouping the document tree into Texinfo files. List of tuples
+ # (source start file, target name, title, author,
+ # dir menu entry, description, category)
+ texinfo_documents = [
+     (
+         master_doc,
+         "gpt_engineer",
+         "GPT-Engineer Documentation",
+         author,
+         "gpt_engineer",
+         "One line description of project.",
+         "Miscellaneous",
+     ),
+ ]
+
+ # generate autosummary even if no references
+ autosummary_generate = True
+
+ myst_enable_extensions = [
+     "colon_fence",
+ ]
+
+ myst_all_links_external = True
docs/contributing_link.rst ADDED
@@ -0,0 +1,2 @@
+ .. include:: ../.github/CONTRIBUTING.md
+     :parser: myst_parser.sphinx_
docs/create_api_rst.py ADDED
@@ -0,0 +1,98 @@
+ """Script for auto-generating api_reference.rst"""
+ import glob
+ import re
+
+ from pathlib import Path
+
+ ROOT_DIR = Path(__file__).parents[1].absolute()
+ print(ROOT_DIR)
+ PKG_DIR = ROOT_DIR / "gpt_engineer"
+ WRITE_FILE = Path(__file__).parent / "api_reference.rst"
+
+
+ def load_members() -> dict:
+     members: dict = {}
+     for py in glob.glob(str(PKG_DIR) + "/**/*.py", recursive=True):
+         module = py[len(str(PKG_DIR)) + 1 :].replace(".py", "").replace("/", ".")
+         top_level = module.split(".")[0]
+         if top_level not in members:
+             members[top_level] = {"classes": [], "functions": []}
+         with open(py, "r") as f:
+             for line in f.readlines():
+                 cls = re.findall(r"^class ([^_].*)\(", line)
+                 members[top_level]["classes"].extend([module + "." + c for c in cls])
+                 func = re.findall(r"^def ([^_].*)\(", line)
+                 afunc = re.findall(r"^async def ([^_].*)\(", line)
+                 func_strings = [module + "." + f for f in func + afunc]
+                 members[top_level]["functions"].extend(func_strings)
+     return members
+
+
+ def construct_doc(members: dict) -> str:
+     full_doc = """\
+ .. _api_reference:
+
+ =============
+ API Reference
+ =============
+
+ """
+     for module, _members in sorted(members.items(), key=lambda kv: kv[0]):
+         classes = _members["classes"]
+         functions = _members["functions"]
+         if not (classes or functions):
+             continue
+
+         module_title = module.replace("_", " ").title()
+         if module_title == "Llms":
+             module_title = "LLMs"
+         section = f":mod:`gpt_engineer.{module}`: {module_title}"
+         full_doc += f"""\
+ {section}
+ {'=' * (len(section) + 1)}
+
+ .. automodule:: gpt_engineer.{module}
+     :no-members:
+     :no-inherited-members:
+
+ """
+
+         if classes:
+             cstring = "\n    ".join(sorted(classes))
+             full_doc += f"""\
+ Classes
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: {module}
+     :template: class.rst
+
+     {cstring}
+
+ """
+         if functions:
+             fstring = "\n    ".join(sorted(functions))
+             full_doc += f"""\
+ Functions
+ --------------
+ .. currentmodule:: gpt_engineer
+
+ .. autosummary::
+     :toctree: {module}
+
+     {fstring}
+
+ """
+     return full_doc
+
+
+ def main() -> None:
+     members = load_members()
+     full_doc = construct_doc(members)
+     with open(WRITE_FILE, "w") as f:
+         f.write(full_doc)
+
+
+ if __name__ == "__main__":
+     main()
docs/disclaimer_link.rst ADDED
@@ -0,0 +1,2 @@
+ .. include:: ../DISCLAIMER.md
+     :parser: myst_parser.sphinx_
docs/docs_building.md ADDED
@@ -0,0 +1,64 @@
+ Building Docs with Sphinx
+ =========================
+
+ This example shows a basic Sphinx docs project with Read the Docs. The project uses `sphinx` with the `readthedocs` project template.
+
+ Some useful links are given below to learn about and contribute to the project.
+
+ 📚 [docs/](https://www.sphinx-doc.org/en/master/usage/quickstart.html)<br>
+ A basic Sphinx project lives in `docs/`; it was generated using Sphinx defaults. All the `*.rst` & `*.md` files make up sections in the documentation. Both `.rst` and `.md` formats are supported in this project.
+
+ ⚙️ [.readthedocs.yaml](https://docs.readthedocs.io/en/stable/config-file/v2.html)<br>
+ The Read the Docs build configuration is stored in `.readthedocs.yaml`.
+
+
+ Example Project usage
+ ---------------------
+
+ ``Poetry`` is the package manager for ``gpt-engineer``. In order to build the documentation, we have to add the docs requirements to the development environment.
+
+ This project has a standard readthedocs layout which is built by Read the Docs almost the same way that you would build it locally (on your own laptop!).
+
+ You can build and view this documentation project locally - we recommend that you activate a ``poetry shell``.
+
+ Update the ``repository_stats.md`` file under ``docs/intro``
+
+ ```console
+ # Install required Python dependencies (Sphinx etc.)
+ poetry install
+ cd docs/
+
+ # Create the `api_reference.rst`
+ python create_api_rst.py
+
+ # Build the docs
+ make html
+
+ ## Alternatively, to rebuild the docs on changes with live-reload in the browser
+ sphinx-autobuild . _build/html
+ ```
+
+ Project Docs Structure
+ ----------------------
+ If you are new to Read the Docs, you may want to refer to the [Read the Docs User documentation](https://docs.readthedocs.io/).
+
+ Below is the rundown of the documentation structure for `gpt-engineer`:
+
+ 1. Place your `docs/` folder alongside your Python project.
+ 2. Copy `.readthedocs.yaml` and the `docs/` folder into your project root.
+ 3. `docs/api_reference.rst` contains the API documentation created from the docstrings. Run `create_api_rst.py` to update the API reference file.
+ 4. The project uses the standard Google docstring style.
+ 5. Rebuild the documentation locally to see that it works.
+ 6. Documentation is hosted on [Read the Docs](https://docs.readthedocs.io/).
+
+
+ Read the Docs tutorial
+ ----------------------
+
+ To get started with Read the Docs, you may also refer to the
+ [Read the Docs tutorial](https://docs.readthedocs.io/en/stable/tutorial/).
+
+ With every release, build the documentation manually.
docs/examples/open_llms/README.md ADDED
@@ -0,0 +1,56 @@
+ # Test that the Open LLM is running
+
+ First start the server using only the CPU:
+
+ ```bash
+ export model_path="TheBloke/CodeLlama-13B-GGUF/codellama-13b.Q8_0.gguf"
+ python -m llama_cpp.server --model $model_path
+ ```
+
+ Or with GPU support (recommended):
+
+ ```bash
+ python -m llama_cpp.server --model TheBloke/CodeLlama-13B-GGUF/codellama-13b.Q8_0.gguf --n_gpu_layers 1
+ ```
+
+ If you have more GPU layers available, set `--n_gpu_layers` to a higher number.
+
+ To find the number of available layers, run the above command and look for `llm_load_tensors: offloaded 1/41 layers to GPU` in the output.
+
+ ## Test API call
+
+ Set the environment variables:
+
+ ```bash
+ export OPENAI_API_BASE="http://localhost:8000/v1"
+ export OPENAI_API_KEY="sk-xxx"
+ export MODEL_NAME="CodeLlama"
+ ```
+
+ Then ping the model via `python` using the `OpenAI` API:
+
+ ```bash
+ python examples/open_llms/openai_api_interface.py
+ ```
+
+ If you're not using `CodeLlama` make sure to change the `MODEL_NAME` parameter.
+
+ Or using `curl`:
+
+ ```bash
+ curl --request POST \
+   --url http://localhost:8000/v1/chat/completions \
+   --header "Content-Type: application/json" \
+   --data '{ "model": "CodeLlama", "prompt": "Who are you?", "max_tokens": 60}'
+ ```
+
+ If this works, also make sure that the `langchain` interface works, since that's how `gpte` interacts with LLMs.
+
+ ## Langchain test
+
+ ```bash
+ export MODEL_NAME="CodeLlama"
+ python examples/open_llms/langchain_interface.py
+ ```
+
+ That's it 🤓 time to go back to [the open models guide](/docs/open_models.md#running-the-example) and give `gpte` a try.
docs/examples/open_llms/langchain_interface.py ADDED
@@ -0,0 +1,17 @@
+ import os
+
+ from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
+ from langchain_openai import ChatOpenAI
+
+ model = ChatOpenAI(
+     model=os.getenv("MODEL_NAME"),
+     temperature=0.1,
+     callbacks=[StreamingStdOutCallbackHandler()],
+     streaming=True,
+ )
+
+ prompt = (
+     "Provide me with only the code for a simple python function that sums two numbers."
+ )
+
+ model.invoke(prompt)
docs/examples/open_llms/openai_api_interface.py ADDED
@@ -0,0 +1,21 @@
+ import os
+
+ from openai import OpenAI
+
+ client = OpenAI(
+     base_url=os.getenv("OPENAI_API_BASE"), api_key=os.getenv("OPENAI_API_KEY")
+ )
+
+ response = client.chat.completions.create(
+     model=os.getenv("MODEL_NAME"),
+     messages=[
+         {
+             "role": "user",
+             "content": "Provide me with only the code for a simple python function that sums two numbers.",
+         },
+     ],
+     temperature=0.7,
+     max_tokens=200,
+ )
+
+ print(response.choices[0].message.content)
docs/index.rst ADDED
@@ -0,0 +1,41 @@
+ Welcome to GPT-ENGINEER's Documentation
+ =======================================
+
+ .. toctree::
+     :maxdepth: 2
+     :caption: GET STARTED:
+
+     introduction.md
+     installation
+     quickstart
+
+ .. toctree::
+     :maxdepth: 2
+     :caption: USER GUIDES:
+
+     windows_readme_link
+     open_models.md
+     tracing_debugging.md
+
+ .. toctree::
+     :maxdepth: 2
+     :caption: CONTRIBUTE:
+
+     contributing_link
+     roadmap_link
+     code_conduct_link
+     disclaimer_link
+     docs_building.md
+     terms_link
+
+ .. toctree::
+     :maxdepth: 2
+     :caption: PACKAGE API:
+
+     api_reference
+
+ Indices and tables
+ ==================
+ * :ref:`genindex`
+ * :ref:`modindex`
+ * :ref:`search`
docs/installation.rst ADDED
@@ -0,0 +1,63 @@
+ .. highlight:: shell
+
+ ============
+ Installation
+ ============
+
+
+ Stable release
+ --------------
+
+ To install ``gpt-engineer``, run this command in your terminal:
+
+ .. code-block:: console
+
+     $ python -m pip install gpt-engineer
+
+ This is the preferred method to install ``gpt-engineer``, as it will always install the most recent stable release.
+
+ If you don't have `pip`_ installed, this `Python installation guide`_ can guide
+ you through the process.
+
+ .. _pip: https://pip.pypa.io
+ .. _Python installation guide: http://docs.python-guide.org/en/latest/starting/installation/
+
+
+ From sources
+ ------------
+
+ The sources for ``gpt-engineer`` can be downloaded from the `Github repo`_.
+
+ You can clone the public repository:
+
+ .. code-block:: console
+
+     $ git clone https://github.com/gpt-engineer-org/gpt-engineer.git
+
+ Once you have a copy of the source, you can install it with:
+
+ .. code-block:: console
+
+     $ cd gpt-engineer
+     $ poetry install
+     $ poetry shell
+
+
+ .. _Github repo: https://github.com/gpt-engineer-org/gpt-engineer.git
+
+ Troubleshooting
+ ---------------
+
+ On macOS and Linux there are sometimes slim Python installations that do not include tkinter, which ``gpt-engineer`` requires. Since tkinter is part of the standard library, it is not pip-installable.
+
+ To install tkinter on macOS, you can for example use brew:
+
+ .. code-block:: console
+
+     $ brew install python-tk
+
+ On Debian-based Linux systems you can use:
+
+ .. code-block:: console
+
+     $ sudo apt-get install python3-tk
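To check whether your Python installation already ships tkinter before reaching for the system package manager, a quick probe like the following works (a minimal sketch; it only looks the module up, it does not open any window):

```python
import importlib.util

# Probe for tkinter without importing it fully
# (a full import may require a display on some systems)
spec = importlib.util.find_spec("tkinter")
if spec is None:
    print("tkinter is missing - install python-tk (brew) or python3-tk (apt)")
else:
    print("tkinter is available")
```

If the probe reports tkinter missing, apply the platform-specific command above and re-run it.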
docs/introduction.md ADDED
@@ -0,0 +1,20 @@
+ # Introduction
+ ``gpt-engineer`` is a project that uses LLMs (such as GPT-4) to automate the process of software engineering. It includes several Python scripts that interact with the LLM to generate code, clarify requirements, generate specifications, and more.
+
+ <br>
+
+ ## Get started
+ [Here’s](/en/latest/installation.html) how to install ``gpt-engineer``, set up your environment, and start building.
+
+ We recommend following our [Quickstart](/en/latest/quickstart.html) guide to familiarize yourself with the framework by building your first application with ``gpt-engineer``.
+
+ <br>
+
+ ## Example
+ You can find an example of the project in action below.
+
+ <video width="100%" controls>
+     <source src="https://github.com/gpt-engineer-org/gpt-engineer/assets/4467025/6e362e45-4a94-4b0d-973d-393a31d92d9b" type="video/mp4">
+     Your browser does not support the video tag.
+ </video>
docs/make.bat ADDED
@@ -0,0 +1,36 @@
+ @ECHO OFF
+
+ pushd %~dp0
+
+ REM Command file for Sphinx documentation
+
+ if "%SPHINXBUILD%" == "" (
+ 	set SPHINXBUILD=python -msphinx
+ )
+ set SOURCEDIR=.
+ set BUILDDIR=_build
+ set SPHINXPROJ=gpt_engineer
+
+ if "%1" == "" goto help
+
+ %SPHINXBUILD% >NUL 2>NUL
+ if errorlevel 9009 (
+ 	echo.
+ 	echo.The Sphinx module was not found. Make sure you have Sphinx installed,
+ 	echo.then set the SPHINXBUILD environment variable to point to the full
+ 	echo.path of the 'sphinx-build' executable. Alternatively you may add the
+ 	echo.Sphinx directory to PATH.
+ 	echo.
+ 	echo.If you don't have Sphinx installed, grab it from
+ 	echo.http://sphinx-doc.org/
+ 	exit /b 1
+ )
+
+ %SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+ goto end
+
+ :help
+ %SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS%
+
+ :end
+ popd
docs/open_models.md ADDED
@@ -0,0 +1,148 @@
+ Using with open/local models
+ ============================
+
+ **Use `gpte` first with OpenAI models to get a feel for the `gpte` tool.**
+
+ **Then go play with experimental Open LLMs 🐉 support and try not to get 🔥!!**
+
+ At the moment the best option for coding is still the use of `gpt-4` models provided by OpenAI. But open models are catching up and are a good free and privacy-oriented alternative if you possess the proper hardware.
+
+ You can integrate `gpt-engineer` with open-source models by leveraging an OpenAI-compatible API.
+
+ We provide the minimal and cleanest solution below. What is described is not the only way to use open/local models, but the one we tested and would recommend to most users.
+
+ More details on why the solution below is recommended can be found in [this blog post](https://zigabrencic.com/blog/2024-02-21).
+
+ Setup
+ -----
+
+ As the inference engine we recommend [llama.cpp](https://github.com/ggerganov/llama.cpp) with its `python` bindings `llama-cpp-python`.
+
+ We choose `llama.cpp` because:
+
+ 1. It supports the largest number of hardware acceleration backends.
+ 2. It supports a diverse set of open LLMs.
+ 3. Its bindings are written in `python`, directly on top of the `llama.cpp` inference engine.
+ 4. It supports the OpenAI API and the `langchain` interface.
+
+ To install `llama-cpp-python` follow the official [installation docs](https://llama-cpp-python.readthedocs.io/en/latest/) and [these docs](https://llama-cpp-python.readthedocs.io/en/latest/install/macos/) for MacOS with Metal support.
+
+ If you want to benefit from proper hardware acceleration on your machine, make sure to set the proper compiler flags before installing the package:
+
+ - `linux`: `CMAKE_ARGS="-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS"`
+ - `macos` with Metal support: `CMAKE_ARGS="-DLLAMA_METAL=on"`
+ - `windows`: `$env:CMAKE_ARGS = "-DLLAMA_BLAS=ON -DLLAMA_BLAS_VENDOR=OpenBLAS"`
+
+ This will enable the `pip` installer to compile `llama.cpp` with the proper hardware acceleration backend.
+
+ Then run:
+
+ ```bash
+ pip install llama-cpp-python
+ ```
+
+ For our use case we also need to set up the web server that the `llama-cpp-python` library provides. To install:
+
+ ```bash
+ pip install 'llama-cpp-python[server]'
+ ```
+
+ For detailed use consult the [`llama-cpp-python` docs](https://llama-cpp-python.readthedocs.io/en/latest/server/).
+
+ Before we proceed we need to obtain the model weights in the `gguf` format. That should be a single file on your disk.
+
+ In case you have weights in other formats, check the `llama-cpp-python` docs for conversion to the `gguf` format.
+
+ Models in other formats (`ggml`, `.safetensors`, etc.) won't work with the solution described below without prior conversion to the `gguf` file format!
+
+ Which open model to use?
+ ========================
+
+ Your best choice would be:
+
+ - CodeLlama 70B
+ - Mixtral 8x7B
+
+ We are still testing this part, but the larger the model you can run the better. The responses might be slower in terms of (tokens/s), but code quality will be higher.
+
+ For testing that the open LLM `gpte` setup works we recommend starting with a smaller model. You can download the weights of [CodeLlama-13B-GGUF by `TheBloke`](https://huggingface.co/TheBloke/CodeLlama-13B-GGUF); choose the largest model version you can run (for example `Q6_K`), since quantisation will degrade LLM performance.
+
+ Feel free to try out larger models on your hardware and see what happens.
+
+ Running the Example
+ ===================
+
+ To see that your setup works, check the [test open LLM setup](examples/open_llms/README.md).
+
+ If the above tests work, proceed 😉
+
+ For checking that `gpte` works with `CodeLlama` we recommend creating a project with this `prompt` file content:
+
+ ```
+ Write a python script that sums up two numbers. Provide only the `sum_two_numbers` function and nothing else.
+
+ Provide two tests:
+
+ assert(sum_two_numbers(100, 10) == 110)
+ assert(sum_two_numbers(10.1, 10) == 20.1)
+ ```
+
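For reference, a correct completion of this prompt is tiny; a generated `sum_two_numbers` should behave like the sketch below (the asserts are the two tests requested in the prompt):

```python
def sum_two_numbers(a, b):
    # Return the sum of the two inputs
    return a + b

# The two tests requested in the prompt file
assert sum_two_numbers(100, 10) == 110
assert sum_two_numbers(10.1, 10) == 20.1
```

If the generated project contains a function that passes these asserts, your open LLM setup is working end to end.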
+ Now run the LLM in a separate terminal:
+
+ ```bash
+ python -m llama_cpp.server --model $model_path --n_batch 256 --n_gpu_layers 30
+ ```
+
+ Then in another terminal window set the following environment variables:
+
+ ```bash
+ export OPENAI_API_BASE="http://localhost:8000/v1"
+ export OPENAI_API_KEY="sk-xxx"
+ export MODEL_NAME="CodeLlama"
+ export LOCAL_MODEL=true
+ ```
+
+ And run `gpt-engineer` with the following command:
+
+ ```bash
+ gpte <project_dir> $MODEL_NAME --lite --temperature 0.1
+ ```
+
+ The `--lite` mode is needed for now, since open models currently behave worse when given too many instructions. Temperature is set to `0.1` to get the most consistent results.
+
+ That's it.
+
+ *If something doesn't work as expected, or you figure out how to improve the open LLM support, please let us know.*
+
+ Using OpenRouter models
+ =======================
+
+ In case you don't possess the hardware to run local LLMs yourself, you can use hosting on [OpenRouter](https://openrouter.ai) and pay as you go for the tokens.
+
+ To set it up you need to sign in and purchase 💰 LLM credits. Pricing per token differs for [each model](https://openrouter.ai/models), but is mostly cheaper than OpenAI.
+
+ Then create the API key.
+
+ To use, for example, [Meta: Llama 3 8B Instruct (extended)](https://openrouter.ai/models/meta-llama/llama-3-8b-instruct:extended) with `gpte`, we need to set:
+
+ ```bash
+ export OPENAI_API_BASE="https://openrouter.ai/api/v1"
+ export OPENAI_API_KEY="sk-key-from-open-router"
+ export MODEL_NAME="meta-llama/llama-3-8b-instruct:extended"
+ export LOCAL_MODEL=true
+ ```
+
+ ```bash
+ gpte <project_dir> $MODEL_NAME --lite --temperature 0.1
+ ```
+
+ Using Azure models
+ ==================
+
+ Set your Azure OpenAI key:
+
+ - `export OPENAI_API_KEY=[your api key]`
+
+ Then call `gpt-engineer` with your service endpoint `--azure https://aoi-resource-name.openai.azure.com` and set your deployment name (which you created in the Azure AI Studio) as the model name (the last `gpt-engineer` argument).
+
+ Example:
+
+ `gpt-engineer --azure https://myairesource.openai.azure.com ./projects/example/ my-gpt4-project-name`
docs/quickstart.rst ADDED
@@ -0,0 +1,68 @@
+ ==========
+ Quickstart
+ ==========
+
+ Installation
+ ============
+
+ To install ``gpt-engineer`` run:
+
+ .. code-block:: console
+
+     $ python -m pip install gpt-engineer
+
+ For more details, see our `Installation guide <installation.html>`_.
+
+ Setup API Key
+ =============
+
+ Choose one of the following:
+
+ - Export an env variable (you can add this to ``.bashrc`` so that you don't have to do it each time you start the terminal)
+
+   .. code-block:: console
+
+       $ export OPENAI_API_KEY=[your api key]
+
+ - Add it to the ``.env`` file:
+
+   - Create a copy of ``.env.template`` named ``.env``
+   - Add your ``OPENAI_API_KEY`` in ``.env``
+
+ - If you want to use a custom model, visit our docs on `using open models and azure models <./open_models.html>`_.
+
+ - To set the API key on Windows, check the `Windows README <./windows_readme_link.html>`_.
+
+ Building with ``gpt-engineer``
+ ==============================
+
+ Create new code (default usage)
+ -------------------------------
+
+ - Create an empty folder for your project anywhere on your computer
+ - Create a file called ``prompt`` (no extension) inside your new folder and fill it with instructions
+ - Run ``gpte <project_dir>`` with a relative path to your folder
+ - For example, if you create a new project inside the gpt-engineer ``/projects`` directory:
+
+   .. code-block:: console
+
+       $ gpte projects/my-new-project
+
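The steps above can be sketched in the shell; the prompt text here is just an illustrative placeholder, and the final ``gpte`` call is shown commented since it starts an interactive LLM session:

```shell
# Create the project folder and the `prompt` file described above
mkdir -p projects/my-new-project
printf 'Write a python script that sums up two numbers.\n' > projects/my-new-project/prompt

# Then hand the folder to gpt-engineer:
# gpte projects/my-new-project
```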
+ Improve Existing Code
+ ---------------------
+
+ - Locate a folder with code which you want to improve anywhere on your computer
+ - Create a file called ``prompt`` (no extension) inside the folder and fill it with instructions for how you want to improve the code
+ - Run ``gpte <project_dir> -i`` with a relative path to your folder
+ - For example, if you want to run it against an existing project inside the gpt-engineer ``/projects`` directory:
+
+   .. code-block:: console
+
+       $ gpte projects/my-old-project -i
+
+ By running ``gpt-engineer`` you agree to our `terms <./terms_link.html>`_.
+
+ To **run in the browser** you can simply:
+
+ .. image:: https://github.com/codespaces/badge.svg
+     :target: https://github.com/gpt-engineer-org/gpt-engineer/codespaces