Add Docker Compose Quick Start Guide and Optimize Configuration (#103)

* Update Dockerfile and docker-compose.yaml for backend and frontend services
* Update docker-compose.yaml for backend and frontend services
* Update docker-compose.yaml for backend and frontend services
* Rewrite comments in English and update base image note with GPU compatibility guidance
* chore: Update Docker Compose guide with CORS considerations and temporary solutions
- .dockerignore +12 -0
- docker/README.md +113 -0
- docker/README_zh-CN.md +111 -0
- docker/backend.dockerfile +29 -0
- docker/docker-compose.yaml +64 -0
- docker/frontend.dockerfile +38 -0
- mindsearch/agent/models.py +3 -1
.dockerignore
ADDED
@@ -0,0 +1,12 @@
```
**/node_modules
**/dist
**/.git
**/.gitignore
**/.vscode
**/README.md
**/LICENSE
**/.env
**/npm-debug.log
**/yarn-debug.log
**/yarn-error.log
**/.pnpm-debug.log
```
docker/README.md
ADDED
@@ -0,0 +1,113 @@
# MindSearch Docker Compose User Guide

## 🚀 Quick Start with Docker Compose

MindSearch now supports quick deployment and startup using Docker Compose. This method simplifies the environment configuration process, allowing you to easily run the entire system.

### Prerequisites

- Docker installed (Docker Compose V2 is integrated into Docker)
- NVIDIA GPU and NVIDIA Container Toolkit (required for NVIDIA GPU support)

Note: Newer versions of Docker have integrated Docker Compose V2, so you can directly use the `docker compose` command without installing docker-compose separately.

### First-time Startup

Execute the following commands in the project root directory:

```bash
cd docker
docker compose up --build -d
```

This will build the necessary Docker images and start the services in the background.

### Daily Use

Start services:

```bash
cd docker
docker compose up -d
```

View running services:

```bash
docker ps
```

Stop services:

```bash
docker compose down
```

### Configuration Instructions

1. **Environment Variable Settings**:
   The system will automatically read the following variables from your environment and pass them to the containers:

   - `OPENAI_API_KEY`: Your OpenAI API key (required when using GPT models)
   - `OPENAI_API_BASE`: Base URL for the OpenAI API (default is https://api.openai.com/v1)
   - `LANG`: Set language, e.g., 'en' or 'zh'
   - `MODEL_FORMAT`: Set model format, e.g., 'gpt4' or 'internlm_server'

   You can set these variables before running the Docker Compose command, for example:

   ```bash
   export OPENAI_API_KEY=your_api_key_here
   export OPENAI_API_BASE=https://your-custom-endpoint.com/v1
   export LANG=en
   export MODEL_FORMAT=gpt4
   docker compose up -d
   ```

2. **Model Cache**:
   The container maps the `/root/.cache:/root/.cache` path. If you use the local large model mode (`internlm_server`), model files will be downloaded to this directory. To change the storage location, please modify the corresponding configuration in docker-compose.yaml.

3. **GPU Support**:
   The current configuration defaults to using NVIDIA GPUs. For other GPU types (such as AMD or Apple M series), please refer to the comments in docker-compose.yaml for appropriate adjustments.

4. **Service Ports**:
   The default API service address is `http://0.0.0.0:8002`. To change this, please modify the corresponding configuration in docker-compose.yaml.

### Notes

- During the first run, depending on your chosen model and network conditions, it may take some time to download the necessary model files.
- Ensure you have sufficient disk space to store model files and Docker images.
- If you encounter permission issues, you may need to use sudo to run Docker commands.

### Cross-Origin Access Considerations

When accessing the frontend, it's important to be aware of potential cross-origin issues. The current Docker Compose configuration is a starting point for the project but doesn't fully resolve all cross-origin problems that might be encountered in a production environment. Please note the following points:

1. **API Service Address Consistency**:
   Ensure that the API service address matches the address you use to access the frontend. For example:

   - For local deployment: Use `0.0.0.0` or `127.0.0.1`
   - For LAN or public network deployment: Use the same IP address or domain name

2. **Current Limitations**:
   The current configuration is primarily suitable for development and testing environments. You may still encounter cross-origin issues in certain deployment scenarios.

3. **Future Improvements**:
   To enhance the system's robustness and adapt to more deployment scenarios, we plan to implement the following improvements in future versions:

   - Modify server-side code to properly configure CORS (Cross-Origin Resource Sharing)
   - Adjust client-side code to handle API requests more flexibly
   - Consider introducing reverse proxy solutions

4. **Temporary Solutions**:
   Before we implement these improvements, if you encounter cross-origin issues in specific environments, you can consider using browser plugins to temporarily disable cross-origin restrictions (for testing purposes only) or using a simple reverse proxy server.

5. **Docker Environment Settings**:
   In the `docker-compose.yaml` file, ensure that the `API_URL` environment variable is set correctly, for example:

   ```yaml
   environment:
     - API_URL=http://your-server-address:8002
   ```

We appreciate your understanding and patience. MindSearch is still in its early stages, and we are working hard to improve various aspects of the system. Your feedback is very important to us as it helps us continually refine the project. If you encounter any issues or have any suggestions during use, please feel free to provide feedback.

By using Docker Compose, you can quickly deploy MindSearch without worrying about complex environment configurations. This method is particularly suitable for rapid testing and development environment deployments. If you encounter any problems during deployment, please refer to our troubleshooting guide or seek community support.
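The guide above requires `OPENAI_API_KEY` only when `MODEL_FORMAT=gpt4`; a minimal pre-flight check before `docker compose up` might look like the following sketch (a hypothetical helper, not part of the repository):

```bash
# Hypothetical pre-flight check: warn if GPT mode is selected without an API key.
MODEL_FORMAT="gpt4"   # example value; use your actual setting
OPENAI_API_KEY=""     # e.g. not yet exported
if [ "$MODEL_FORMAT" = "gpt4" ] && [ -z "$OPENAI_API_KEY" ]; then
  echo "missing OPENAI_API_KEY for MODEL_FORMAT=$MODEL_FORMAT"
else
  echo "environment ok for MODEL_FORMAT=$MODEL_FORMAT"
fi
```

Running the check with the example values prints the "missing" warning; after exporting a real key it reports that the environment is ok.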
docker/README_zh-CN.md
ADDED
@@ -0,0 +1,111 @@
# MindSearch Docker Compose 使用指南

## 🚀 使用 Docker Compose 快速启动

MindSearch 现在支持使用 Docker Compose 进行快速部署和启动。这种方法简化了环境配置过程,使您能够轻松运行整个系统。

### 前提条件

- 安装 Docker(Docker Compose V2 已集成到 Docker 中)
- NVIDIA GPU 和 NVIDIA Container Toolkit(对于 NVIDIA GPU 支持是必需的)

注意:较新版本的 Docker 已经整合了 Docker Compose V2,因此您可以直接使用 `docker compose` 命令,无需单独安装 docker-compose。

### 首次启动

在项目根目录下执行以下命令:

```bash
cd docker
docker compose up --build -d
```

这将构建必要的 Docker 镜像并在后台启动服务。

### 日常使用

启动服务:

```bash
cd docker
docker compose up -d
```

查看运行中的服务:

```bash
docker ps
```

停止服务:

```bash
docker compose down
```

### 配置说明

1. **环境变量设置**:
   系统会自动从您的环境中读取以下变量并传递给容器:

   - `OPENAI_API_KEY`:您的 OpenAI API 密钥(使用 GPT 模型时需要)
   - `OPENAI_API_BASE`:OpenAI API 的基础 URL(默认为 https://api.openai.com/v1)
   - `LANG`:设置语言,如 'en' 或 'zh'
   - `MODEL_FORMAT`:设置模型格式,如 'gpt4' 或 'internlm_server'

   您可以在运行 Docker Compose 命令前设置这些变量,例如:

   ```bash
   export OPENAI_API_KEY=your_api_key_here
   export OPENAI_API_BASE=https://your-custom-endpoint.com/v1
   export LANG=en
   export MODEL_FORMAT=gpt4
   docker compose up -d
   ```

2. **模型缓存**:
   容器会映射 `/root/.cache:/root/.cache` 路径。如果您使用本地运行大模型模式(`internlm_server`),模型文件将下载到此目录。如需更改存储位置,请修改 docker-compose.yaml 中的相应配置。

3. **GPU 支持**:
   当前配置默认使用 NVIDIA GPU。对于其他 GPU 类型(如 AMD 或 Apple M 系列),请参考 docker-compose.yaml 中的注释进行相应调整。

4. **服务端口**:
   默认 API 服务地址为 `http://0.0.0.0:8002`。如需更改,请修改 docker-compose.yaml 中的相应配置。

### 注意事项

- 首次运行时,根据您选择的模型和网络状况,可能需要一些时间来下载必要的模型文件。
- 确保您有足够的磁盘空间来存储模型文件和 Docker 镜像。
- 如果遇到权限问题,可能需要使用 sudo 运行 Docker 命令。

### 跨域访问注意事项

在访问前端时,需要特别注意避免跨域问题。目前的 Docker Compose 配置是项目的一个起点,但还没有完全解决所有生产环境中可能遇到的跨域问题。请注意以下几点:

1. **API 服务地址一致性**:
   确保 API 服务地址与您访问前端的服务地址保持一致。例如:

   - 本地部署时:使用 `0.0.0.0` 或 `127.0.0.1`
   - 局域网或公网部署时:使用相同的 IP 地址或域名

2. **当前限制**:
   目前的配置主要适用于开发和测试环境。在某些部署场景下,您可能仍会遇到跨域问题。

3. **未来改进**:
   为了提高系统的鲁棒性和适应更多的部署场景,我们计划在未来的版本中进行以下改进:

   - 修改服务端代码以适当配置 CORS(跨源资源共享)
   - 调整客户端代码以更灵活地处理 API 请求
   - 考虑引入反向代理方案

4. **临时解决方案**:
   在我们实现这些改进之前,如果您在特定环境中遇到跨域问题,可以考虑使用浏览器插件暂时禁用跨域限制(仅用于测试),或者使用简单的反向代理服务器。

5. **Docker 环境中的设置**:
   在 `docker-compose.yaml` 文件中,确保 `API_URL` 环境变量设置正确,例如:

   ```yaml
   environment:
     - API_URL=http://your-server-address:8002
   ```

我们感谢您的理解和耐心。MindSearch 仍处于早期阶段,我们正在努力改进系统的各个方面。您的反馈对我们非常重要,它帮助我们不断完善项目。如果您在使用过程中遇到任何问题或有任何建议,请随时向我们反馈。

通过使用 Docker Compose,您可以快速部署 MindSearch,而无需担心复杂的环境配置。这种方法特别适合快速测试和开发环境部署。如果您在部署过程中遇到任何问题,请查阅我们的故障排除指南或寻求社区支持。
docker/backend.dockerfile
ADDED
@@ -0,0 +1,29 @@
```dockerfile
# Use openmmlab/lmdeploy:latest-cu12 as the base image
# Note: Before using this Dockerfile, you should visit https://hub.docker.com/r/openmmlab/lmdeploy/tags
# to select a base image that's compatible with your specific GPU architecture.
# The 'latest-cu12' tag is used here as an example, but you should choose the most
# appropriate tag for your setup (e.g., cu11 for CUDA 11, cu12 for CUDA 12, etc.)
FROM openmmlab/lmdeploy:latest-cu12

# Set the working directory
WORKDIR /root

# Install Git
RUN apt-get update && apt-get install -y git && apt-get clean && rm -rf /var/lib/apt/lists/*

# Copy the mindsearch folder to the /root directory of the container
COPY mindsearch /root/mindsearch

# Install specified dependency packages
# Note: lmdeploy dependency is already included in the base image, no need to reinstall
RUN pip install --no-cache-dir \
    duckduckgo_search==5.3.1b1 \
    einops \
    fastapi \
    gradio \
    janus \
    pyvis \
    sse-starlette \
    termcolor \
    uvicorn \
    git+https://github.com/InternLM/lagent.git
```
docker/docker-compose.yaml
ADDED
@@ -0,0 +1,64 @@
```yaml
services:
  backend:
    container_name: mindsearch-backend
    build:
      context: ..
      dockerfile: docker/backend.dockerfile
    image: mindsearch/backend:latest
    restart: unless-stopped
    # Uncomment the following line to force using local build
    # pull: never
    ports:
      - "8002:8002"
    environment:
      - PYTHONUNBUFFERED=1
      - OPENAI_API_KEY=${OPENAI_API_KEY:-}
      - OPENAI_API_BASE=${OPENAI_API_BASE:-https://api.openai.com/v1}
    command: python -m mindsearch.app --lang ${LANG:-zh} --model_format ${MODEL_FORMAT:-internlm_server}
    volumes:
      - /root/.cache:/root/.cache
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
    # GPU support explanation:
    # The current configuration has been tested with NVIDIA GPUs. If you use other types of GPUs, you may need to adjust the configuration.
    # For AMD GPUs, you can try using the ROCm driver by modifying the configuration as follows:
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: amd
    #           count: 1
    #           capabilities: [gpu]
    #
    # For other GPU types, you may need to consult the respective Docker GPU support documentation.
    # In theory, any GPU supported by PyTorch should be configurable here.
    # If you encounter issues, try the following steps:
    # 1. Ensure the correct GPU drivers are installed on the host
    # 2. Check if your Docker version supports your GPU type
    # 3. Install necessary GPU-related libraries in the Dockerfile
    # 4. Adjust the deploy configuration here to match your GPU type
    #
    # Note: After changing GPU configuration, you may need to rebuild the image.

  frontend:
    container_name: mindsearch-frontend
    build:
      context: ..
      dockerfile: docker/frontend.dockerfile
    image: mindsearch/frontend:latest
    restart: unless-stopped
    # Uncomment the following line to force using local build
    # pull: never
    ports:
      - "8080:8080"
    environment:
      - NODE_ENV=production
      - API_URL=http://0.0.0.0:8002
      - SERVE_PORT=8080
    depends_on:
      - backend
```
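The `${VAR:-default}` references in the `environment:` and `command:` entries follow shell-style parameter expansion: the variable's value when it is set and non-empty, otherwise the fallback. Compose implements its own substitution, but the `:-` form behaves the same way, which can be checked directly in a shell:

```bash
# ${VAR:-fallback}: use $VAR if set and non-empty, otherwise the fallback.
unset MODEL_FORMAT
echo "model_format=${MODEL_FORMAT:-internlm_server}"   # prints model_format=internlm_server
MODEL_FORMAT=gpt4
echo "model_format=${MODEL_FORMAT:-internlm_server}"   # prints model_format=gpt4
```

This is why `docker compose up -d` works without any exports: every substituted variable in the file carries a usable default.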
docker/frontend.dockerfile
ADDED
@@ -0,0 +1,38 @@
```dockerfile
# Use Node.js 18 as the base image
FROM node:18-alpine AS build

# Set the working directory
WORKDIR /app

# Copy package.json and package-lock.json
COPY frontend/React/package*.json ./

# Install dependencies
RUN npm install

# Copy frontend source code
COPY frontend/React/ ./

# Build the application
RUN npm run build

# Use Node.js to serve static files
FROM node:18-alpine

WORKDIR /app

# Install serve package and gettext (for envsubst)
RUN apk add --no-cache gettext && \
    npm install -g serve

# Copy build artifacts
COPY --from=build /app/dist ./dist

# Create start script
RUN echo '#!/bin/sh' > start.sh && \
    echo 'find ./dist -type f -exec sed -i "s|http://127.0.0.1:8002|$API_URL|g" {} +' >> start.sh && \
    echo 'serve -s dist -l $SERVE_PORT' >> start.sh && \
    chmod +x start.sh

# Use the start script
CMD ["/bin/sh", "./start.sh"]
```
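The generated start script bakes the API address in at container start by rewriting the built files with `sed` before serving them. That rewrite step can be exercised in isolation on a throwaway directory (the file name and `your-server-address` value below are illustrative):

```bash
# Reproduce the start script's rewrite step on a sample build artifact.
demo_dir=$(mktemp -d)
echo 'fetch("http://127.0.0.1:8002/solve")' > "$demo_dir/app.js"
API_URL="http://your-server-address:8002"   # illustrative deployment address
find "$demo_dir" -type f -exec sed -i "s|http://127.0.0.1:8002|$API_URL|g" {} +
cat "$demo_dir/app.js"   # prints fetch("http://your-server-address:8002/solve")
```

Using `|` as the sed delimiter avoids escaping the slashes in the URLs being swapped.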
mindsearch/agent/models.py
CHANGED
```diff
@@ -37,7 +37,9 @@ internlm_hf = dict(type=HFTransformerCasualLM,
 gpt4 = dict(type=GPTAPI,
             model_type='gpt-4-turbo',
-            key=os.environ.get('OPENAI_API_KEY', 'YOUR OPENAI API KEY')
+            key=os.environ.get('OPENAI_API_KEY', 'YOUR OPENAI API KEY'),
+            openai_api_base=os.environ.get('OPENAI_API_BASE', 'https://api.openai.com/v1'),
+            )

 url = 'https://dashscope.aliyuncs.com/api/v1/services/aigc/text-generation/generation'
 qwen = dict(type=GPTAPI,