> **Note**
>
> This English README is automatically generated by the markdown translation plugin in this project, and may not be 100% correct.
>
# <img src="logo.png" width="40" > ChatGPT Academic Optimization
**If you like this project, please give it a Star. If you have come up with more useful academic shortcuts or function plugins, feel free to open an issue or pull request. We also have a [README in English](docs/README_EN.md) translated by this project itself.**
> **Note**
>
> 1. Please note that only **functions highlighted in red** support reading files, and some functions are located in the **dropdown menu** of the plugin area. Additionally, we welcome any new plugin PRs and handle them with the **highest priority**!
>
> 2. The functionality of each file in this project is detailed in the project's self-translation report [`self_analysis.md`](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A). As the version iterates, you can also click the relevant function plugins at any time to call GPT to regenerate the project's self-analysis report. Frequently asked questions are summarized in the [`wiki`](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%B8%B8%E8%A7%81%E9%97%AE%E9%A2%98).
>
<div align="center">
Function | Description
--- | ---
One-Click Polishing | Supports one-click polishing and grammar checking of academic papers
One-Click Chinese-English Translation | One-click translation between Chinese and English
One-Click Code Interpretation | Can correctly display and interpret code
[Custom Shortcut Keys](https://www.bilibili.com/video/BV14s4y1E7jN) | Supports custom shortcut keys
[Configure Proxy Server](https://www.bilibili.com/video/BV1rc411W7Dr) | Supports configuring proxy servers
Modular Design | Supports custom high-order function plugins, and plugins support [hot updates](https://github.com/binary-husky/chatgpt_academic/wiki/%E5%87%BD%E6%95%B0%E6%8F%92%E4%BB%B6%E6%8C%87%E5%8D%97)
[Self-Analysis of the Program](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] [One-click analysis](https://github.com/binary-husky/chatgpt_academic/wiki/chatgpt-academic%E9%A1%B9%E7%9B%AE%E8%87%AA%E8%AF%91%E8%A7%A3%E6%8A%A5%E5%91%8A) of this project's own source code
[Program Analysis](https://www.bilibili.com/video/BV1cj411A7VW) | [Function Plugin] One-click analysis of the project tree of other Python/C/C++/Java/Lua/... projects
Read the Paper | [Function Plugin] One-click interpretation of a full LaTeX paper and generation of its abstract
LaTeX Full-Text Translation / Proofreading | [Function Plugin] One-click translation or proofreading of LaTeX papers
Batch Comment Generation | [Function Plugin] One-click batch generation of function comments
Chat Analysis Report Generation | [Function Plugin] Automatically generates a summary report after running
[Arxiv Assistant](https://www.bilibili.com/video/BV1LM4y1279X) | [Function Plugin] Enter an arXiv article URL to translate the abstract and download the PDF with one click
[PDF Paper Full-Text Translation](https://www.bilibili.com/video/BV1KT411x7Wn) | [Function Plugin] Extract the title and abstract of a PDF paper and translate the full text (multithreaded)
[Google Scholar Integration Assistant](https://www.bilibili.com/video/BV19L411U7ia) | [Function Plugin] Given any Google Scholar search page URL, let GPT help you pick the interesting articles
Formula / Image / Table Display | Displays both the TeX source and the rendered form of formulas at the same time; supports formula and code highlighting
Multithreaded Function Plugin Support | Supports multithreaded calls to ChatGPT for one-click processing of large amounts of text or code
Dark Gradio [Theme](https://github.com/binary-husky/chatgpt_academic/issues/173) | Append ```/?__dark-theme=true``` to the browser URL to switch to the dark theme
[Multiple LLM Models](https://www.bilibili.com/video/BV1wT411p7yf) support, [API2D](https://api2d.com/) interface support | It must feel nice to be served by GPT-3.5, GPT-4, and [Tsinghua ChatGLM](https://github.com/THUDM/ChatGLM-6B) at the same time!
Hugging Face [Online Experience](https://huggingface.co./spaces/qingxu98/gpt-academic) (no proxy needed) | After logging in to Hugging Face, copy (duplicate) [this space](https://huggingface.co./spaces/qingxu98/gpt-academic)
... | ...
</div>
- New interface (switch between "left-right layout" and "up-down layout" by modifying the LAYOUT option in config.py)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230361456-61078362-a966-4eb5-b49e-3c62ef18b860.gif" width="700" >
</div>
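For reference, the corresponding line in `config.py` looks roughly like the sketch below; the exact accepted values may differ between versions, so check the comments in your own `config.py` before editing.

```python
# config.py -- interface layout (value names assumed from the project's comments; verify in your copy)
LAYOUT = "LEFT-RIGHT"   # "LEFT-RIGHT" for the left-right layout, "TOP-DOWN" for the up-down layout
```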
- All buttons are dynamically generated by reading `functional.py`, so custom functionality can be added at will, freeing up the clipboard
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231975334-b4788e91-4887-412f-8b43-2b9c5f41d248.gif" width="700" >
</div>
- Proofreading / correcting
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/231980294-f374bdcb-3309-4560-b424-38ef39f04ebd.gif" width="700" >
</div>
- If the output contains formulas, they are displayed in both TeX form and rendered form at the same time, which is convenient for copying and reading
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/230598842-1d7fcddd-815d-40ee-af60-baf488a199df.png" width="700" >
</div>
- Don't want to read the project code? Just feed the whole project to ChatGPT
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="700" >
</div>
- Mixed calls to multiple large language models (ChatGLM + OpenAI GPT-3.5 + [API2D](https://api2d.com/) GPT-4)
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/232537274-deca0563-7aa6-4b5d-94a2-b7c453c47794.png" width="700" >
</div>
Mixed calls to multiple large language models, [Hugging Face beta version](https://huggingface.co./spaces/qingxu98/academic-chatgpt-beta) (the Hugging Face version does not support ChatGLM)
---
## Installation-Method 1: Run directly (Windows, Linux or macOS)
1. Download project
```sh
git clone https://github.com/binary-husky/chatgpt_academic.git
cd chatgpt_academic
```
2. Configure API_KEY and proxy settings
In `config.py`, configure the overseas Proxy and OpenAI API KEY as follows:
```
1. If you are in China, you need to set up an overseas proxy to use the OpenAI API smoothly. Please read config.py carefully for setup details (1. Modify USE_PROXY to True; 2. Modify proxies according to the instructions).
2. Configure the OpenAI API KEY. You need to register and obtain an API KEY on the OpenAI website. Once you get the API KEY, you can configure it in the config.py file.
3. Issues related to proxy networks (network timeouts, proxy failures) are summarized at https://github.com/binary-husky/chatgpt_academic/issues/1
```
(P.S. When the program runs, it first checks for a private configuration file named `config_private.py` and, if it exists, uses its same-named settings to override those in `config.py`. If you understand this reading logic, we strongly recommend creating a new configuration file named `config_private.py` next to `config.py` and copying the settings from `config.py` into it. `config_private.py` is not tracked by git, which keeps your private information more secure.)
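For illustration, a minimal `config_private.py` might look like the sketch below; the option names `API_KEY` and `USE_PROXY` mirror `config.py` at the time of writing and the key value is a placeholder, so verify both against your own copy of `config.py`.

```python
# config_private.py -- same-named settings here override config.py (this file is not tracked by git)
API_KEY = "sk-xxxxxxxxxxxxxxxxxxxxxxxx"   # placeholder: paste your own OpenAI API key
USE_PROXY = False                         # set to True and define `proxies` if you need a proxy (see the proxy section below)
```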
3. Install dependencies
```sh
# (Option One) Recommended
python -m pip install -r requirements.txt
# (Option Two) If you use anaconda, the steps are similar:
# (Option Two.1) conda create -n gptac_venv python=3.11
# (Option Two.2) conda activate gptac_venv
# (Option Two.3) python -m pip install -r requirements.txt
# Note: Use the official pip source or the Aliyun pip mirror. Other pip sources (such as some university mirrors) may cause problems; to temporarily switch sources:
# python -m pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/
```
If you need to support Tsinghua ChatGLM, you will need to install additional dependencies (if you are not familiar with Python or your machine is not powerful enough, we recommend skipping this step):
```sh
python -m pip install -r request_llm/requirements_chatglm.txt
```
4. Run
```sh
python main.py
```
5. Test function plugins
```
- Test Python project analysis
    In the input area, enter `./crazy_functions/test_project/python/dqn`, then click "Analyze the entire Python project"
- Test the project's self-interpretation
    Click "[Multithreading Demo] Interpretation of This Project Itself (Source Code Interpretation)"
- Test the experimental function template (asks GPT what happened in history on this day); you can use this function as a template to implement more complex functions
    Click "[Function Plugin Template Demo] Today in History"
- More functions are available in the function plugin area drop-down menu
```
## Installation-Method 2: Use Docker (Linux)
1. ChatGPT only (recommended for most people)
``` sh
# download project
git clone https://github.com/binary-husky/chatgpt_academic.git
cd chatgpt_academic
# configure overseas Proxy and OpenAI API KEY
# Edit config.py with any text editor
# Install
docker build -t gpt-academic .
# Run
docker run --rm -it --net=host gpt-academic
# Test function plugins
## Test the function plugin template (asks GPT what happened in history on this day); you can use this function as a template to implement more complex functions
#    Click "[Function Plugin Template Demo] Today in History"
## Test abstract writing for LaTeX projects
#    Enter ./crazy_functions/test_project/latex/attention in the input area, then click "Read Tex Paper and Write Abstract"
## Test Python project analysis
#    Enter ./crazy_functions/test_project/python/dqn in the input area, then click "Analyze the entire Python project"
# More functions are available in the function plugin area drop-down menu
```
2. ChatGPT + ChatGLM (requires familiarity with Docker and a reasonably powerful machine)
``` sh
# Modify the Dockerfile
cd docs && nano Dockerfile+ChatGLM
# How to build (Dockerfile+ChatGLM is under the docs path, so cd docs first)
docker build -t gpt-academic --network=host -f Dockerfile+ChatGLM .
# How to run (1) Run directly:
docker run --rm -it --net=host --gpus=all gpt-academic
# How to run (2) Enter the container to make adjustments before running:
docker run --rm -it --net=host --gpus=all gpt-academic bash
```
## Installation-Method 3: Other Deployment Methods
1. Remote Cloud Server Deployment
Please visit [Deployment Wiki-1](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BA%91%E6%9C%8D%E5%8A%A1%E5%99%A8%E8%BF%9C%E7%A8%8B%E9%83%A8%E7%BD%B2%E6%8C%87%E5%8D%97)
2. Use WSL2 (Windows Subsystem for Linux)
Please visit [Deployment Wiki-2](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BD%BF%E7%94%A8WSL2%EF%BC%88Windows-Subsystem-for-Linux-%E5%AD%90%E7%B3%BB%E7%BB%9F%EF%BC%89%E9%83%A8%E7%BD%B2)
## Installation-Proxy Configuration
### Method 1: Conventional method
[Configure Proxy](https://github.com/binary-husky/chatgpt_academic/issues/1)
### Method 2: Step-by-step tutorial for newcomers
[Step-by-step tutorial for newcomers](https://github.com/binary-husky/chatgpt_academic/wiki/%E4%BB%A3%E7%90%86%E8%BD%AF%E4%BB%B6%E9%97%AE%E9%A2%98%E7%9A%84%E6%96%B0%E6%89%8B%E8%A7%A3%E5%86%B3%E6%96%B9%E6%B3%95%EF%BC%88%E6%96%B9%E6%B3%95%E5%8F%AA%E9%80%82%E7%94%A8%E4%BA%8E%E6%96%B0%E6%89%8B%EF%BC%89)
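For reference, a working proxy setup in `config.py` (or `config_private.py`) usually follows the pattern sketched below; the option names mirror `config.py` at the time of writing, and the protocol, address and port are placeholders that must match whatever port your local proxy software actually listens on:

```python
# Proxy settings -- option names follow config.py at the time of writing; address/port are placeholders
USE_PROXY = True
if USE_PROXY:
    proxies = {
        # Replace the protocol (http / socks5h) and port with the ones exposed by your proxy client
        "http":  "socks5h://localhost:11284",
        "https": "socks5h://localhost:11284",
    }
```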
---
## Customizing Convenient Buttons (Customizing Academic Shortcuts)
Open `core_functional.py` with any text editor, add an entry as follows, and then restart the program. (If the button has already been added and is visible, both the prefix and the suffix support hot modification and take effect without restarting the program.) For example:
```
"Super English to Chinese translation": {
# Prefix, which will be added before your input. For example, to describe your requirements, such as translation, code interpretation, polishing, etc.
"Prefix": "Please translate the following content into Chinese and use a markdown table to interpret the proprietary terms in the text one by one:\n\n",
# Suffix, which will be added after your input. For example, combined with the prefix, you can put your input content in quotes.
"Suffix": "",
},
```
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226899272-477c2134-ed71-4326-810c-29891fe4a508.png" width="500" >
</div>
---
## Some Function Displays
### Image Display:
Example prompt: "You are a professional academic paper translator."
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/228737599-bf0a9d9c-1808-4f43-ae15-dfcc7af0f295.png" width="800" >
</div>
### If a program can understand and analyze itself:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936850-c77d7183-0749-4c1c-9875-fd4891842d0c.png" width="800" >
</div>
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226936618-9b487e4b-ab5b-4b6e-84c6-16942102e917.png" width="800" >
</div>
### Analysis of any Python/C++ project:
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226935232-6b6a73ce-8900-4aee-93f9-733c7e6fef53.png" width="800" >
</div>
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/226969067-968a27c1-1b9c-486b-8b81-ab2de8d3f88a.png" width="800" >
</div>
### One-click reading comprehension and summary generation of LaTeX papers
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227504406-86ab97cd-f208-41c3-8e4a-7000e51cf980.png" width="800" >
</div>
### Automatic report generation
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/227503770-fe29ce2c-53fd-47b0-b0ff-93805f0c2ff4.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504617-7a497bb3-0a2a-4b50-9a8a-95ae60ea7afd.png" height="300" >
<img src="https://user-images.githubusercontent.com/96192199/227504005-efeaefe0-b687-49d0-bf95-2d7b7e66c348.png" height="300" >
</div>
### Modular functional design
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229288270-093643c1-0018-487a-81e6-1d7809b6e90f.png" height="400" >
<img src="https://user-images.githubusercontent.com/96192199/227504931-19955f78-45cd-4d1c-adac-e71e50957915.png" height="400" >
</div>
### Source code translation to English
<div align="center">
<img src="https://user-images.githubusercontent.com/96192199/229720562-fe6c3508-6142-4635-a83d-21eb3669baee.png" height="400" >
</div>
## Todo and version planning:
- version 3.2+ (todo): Function plugin supports more parameter interfaces
- version 3.1: Support for querying multiple GPT models at the same time! Support for API2D and for load balancing across multiple API keys
- version 3.0: Support for ChatGLM and other small LLMs
- version 2.6: Refactored the plugin structure, improved interactivity, added more plugins
- version 2.5: Self-updating, solves the problem of text being too long and token overflowing when summarizing large project source code
- version 2.4: (1) Added PDF full text translation function; (2) Added function to switch input area position; (3) Added vertical layout option; (4) Multi-threaded function plugin optimization.
- version 2.3: Enhanced multi-threaded interactivity
- version 2.2: Function plugin supports hot reloading
- version 2.1: Foldable layout
- version 2.0: Introduction of modular function plugins
- version 1.0: Basic functions
## Reference and learning
```
The code design of this project has referenced many other excellent projects, including:
# Reference project 1: Borrowed many tips from ChuanhuChatGPT
https://github.com/GaiZhenbiao/ChuanhuChatGPT
# Reference project 2: Tsinghua ChatGLM-6B:
https://github.com/THUDM/ChatGLM-6B
```