1. [Project Overview](#project-overview)
2. [Environment Setup](#environment-setup)
3. [Run the Project](#run-the-project)
4. [API Endpoints and Usage](#api-endpoints-and-usage)
5. [Instructions for Building and Running the Docker Container](#instructions-for-building-and-running-the-docker-container)
6. [Deployment Details](#deployment-details)
7. [Running Tests](#running-tests)
8. [Information on How to Use the UI](#information-on-how-to-use-the-ui)
---

## Project Overview

The Prompt Search Engine addresses the growing need for high-quality prompts for AI-generated content, particularly for models such as Stable Diffusion. By leveraging a database of existing prompts, the search engine helps users discover the most relevant and effective prompts, significantly improving the quality of generated images.

The main goal of the prompt search engine is to return the top `n` prompts most similar to the input query; feeding these better prompts to Stable Diffusion models yields higher-quality images.
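In code, the top-`n` idea can be sketched roughly as follows. This is a minimal illustration, not the project's implementation: the bag-of-words `embed` stands in for the real sentence embeddings, and the function names are hypothetical.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real engine would use learned sentence embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def top_n_similar(query: str, corpus: list, n: int = 5) -> list:
    # Score every stored prompt against the query and keep the n best, best first.
    q = embed(query)
    scored = sorted(((cosine(q, embed(p)), p) for p in corpus), reverse=True)
    return scored[:n]

corpus = [
    "a photo of a cat on a sofa",
    "oil painting of a mountain lake at sunset",
    "a cat wearing sunglasses, studio photo",
]
best = top_n_similar("photo of a cat", corpus, n=2)
# best[0] is the highest-scoring (score, prompt) pair
```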

### Technology Used

This project leverages a modern tech stack to deliver efficient search functionality:

1. **FastAPI**: A high-performance web framework for building the backend API.
2. **Gradio**: A lightweight UI framework for creating the frontend interface.
3. **Hugging Face Spaces**: Hosts the application using Docker.
4. **Hugging Face Datasets**: Downloads and processes the `google-research-datasets/conceptual_captions` dataset at runtime.
5. **Uvicorn**: An ASGI server for running the FastAPI application.
6. **Python**: The core language used for development and scripting.
---

## Environment Setup

To set up the environment for the Prompt Search Engine, follow these steps:

### Prerequisites

1. **Python**: Ensure Python >= 3.9 is installed. You can download it from [Python.org](https://www.python.org/downloads/).
2. **Docker**: Install Docker to containerize and deploy the application. Visit [Docker's official site](https://www.docker.com/get-started) for installation instructions.
3. **Conda (optional)**: Install Miniconda or Anaconda for managing a local virtual environment.

### Steps to Install Dependencies

1. Navigate to the project directory:
   ```bash
   cd <project-directory>
   ```

2. Create and activate a Conda environment (optional):
   ```bash
   conda create -n prompt_search_env python={version} -y
   conda activate prompt_search_env
   ```
   Replace `{version}` with your desired Python version (e.g., 3.9).

3. Install the dependencies inside the Conda environment using `pip`:
   ```bash
   pip install -r requirements.txt
   ```

4. Review and update the `config.py` file to match your environment, such as specifying API keys or dataset paths.
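The exact contents of `config.py` are project-specific; as a purely hypothetical sketch of the kind of settings it might hold (none of these names are confirmed by the project):

```python
# config.py — hypothetical example; the actual settings depend on your project.
DATASET_NAME = "google-research-datasets/conceptual_captions"  # dataset used at runtime
TOP_N_DEFAULT = 5        # default number of results (matches the /search default)
API_HOST = "0.0.0.0"     # host the FastAPI backend binds to
API_PORT = 8000          # port the FastAPI backend listens on
```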

## Run the Project

You can run the application locally using either a Conda environment or Docker:

- **Using a Conda environment:**
  1. Start the backend API. Swagger documentation will be accessible at `http://0.0.0.0:8000/docs`:
     ```bash
     python run.py
     ```
  2. Run the frontend application:
     ```bash
     python -m fe.gradio_app
     ```
     The frontend will be accessible at `http://0.0.0.0:7860`.

- **Using Docker:**
  Refer to the instructions in the next section for building and running the Docker container.

## Instructions for Building and Running the Docker Container

1. Build the Docker image:
   ```bash
   docker build -t prompt-search-engine .
   ```

2. Run the Docker container:
   ```bash
   docker run -p 8000:8000 -p 7860:7860 prompt-search-engine
   ```

   - The backend API will be accessible at `http://0.0.0.0:8000/docs`.
   - The frontend will be accessible at `http://0.0.0.0:7860`.

Your environment is now ready to use the Prompt Search Engine.

## API Endpoints and Usage

### `/search` (GET)
Endpoint for querying the search engine.

#### Parameters

- `query` (str, required): The search query.
- `n` (int, default 5): Number of results to return; must be greater than or equal to 1.

#### Example Request

```bash
curl -X GET "http://0.0.0.0:8000/search?query=example+prompt&n=5"
```

#### Example Response

```json
{
  "query": "example prompt",
  "results": [
    {"score": 0.95, "prompt": "example similar prompt 1"},
    {"score": 0.92, "prompt": "example similar prompt 2"}
  ]
}
```
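The same request can be issued from Python. The sketch below builds the query string with the standard library; sending the URL (e.g., with `urllib.request` or `requests`) returns JSON shaped like the example response:

```python
from urllib.parse import urlencode

# Build the /search request URL; urlencode handles spaces and special characters.
base = "http://0.0.0.0:8000/search"
url = f"{base}?{urlencode({'query': 'example prompt', 'n': 5})}"
print(url)  # http://0.0.0.0:8000/search?query=example+prompt&n=5
```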

---

## Deployment Details

### Overview
This section outlines the steps to deploy the **Prompt Search Engine** application using Docker and Hugging Face Spaces. The application comprises a backend (API) and a frontend (Gradio-based UI) that run together in a single Docker container.

### Prerequisites

1. A [Hugging Face account](https://huggingface.co/).
2. Git installed locally.
3. Access to the project repository on GitHub.
4. Docker installed locally for testing.
5. A Hugging Face **Access Token** (needed for authentication).

### Deployment Steps

1. **Create a Hugging Face Space:**
   - Log in to [Hugging Face Spaces](https://huggingface.co/spaces).
   - Click **Create Space**.
   - Fill in the details:
     - **Space Name**: Choose a name like `promptsearchengine`.
     - **SDK**: Select `Docker`.
     - **Visibility**: Choose public or private.
   - Click **Create Space** to generate a new repository.

2. **Create a Hugging Face Access Token:**
   - Log in to [Hugging Face](https://huggingface.co/).
   - Navigate to **Settings** > **Access Tokens**.
   - Click **New Token**:
     - **Name**: `Promptsearchengine Deployment`.
     - **Role**: Select `Write`.
   - Copy the token; you'll need it for pushing to Hugging Face Spaces.

3. **Test the application locally:**

   ```bash
   docker build -t promptsearchengine .
   docker run -p 8000:8000 -p 7860:7860 promptsearchengine
   ```

   - **Backend**: Test at `http://localhost:8000`.
   - **Frontend**: Test at `http://localhost:7860`.

4. **Prepare the project for Hugging Face Spaces:**
   - Ensure the `Dockerfile` is updated for Hugging Face Spaces:
     - Set environment variables for writable directories (e.g., `HF_HOME=/tmp/huggingface`).
   - Ensure a valid `README.md` is present at the root with the Hugging Face configuration:

     ```markdown
     ---
     title: Promptsearchengine
     emoji: 🔍
     colorFrom: blue
     colorTo: indigo
     sdk: docker
     pinned: false
     ---
     ```

5. **Push the project to Hugging Face Spaces:**

   ```bash
   git remote add space https://huggingface.co/spaces/<your-username>/promptsearchengine
   git push space main
   ```

6. **Monitor the build logs:**
   - Navigate to your Space on Hugging Face.
   - Watch the "Logs" tab to ensure the build completes successfully.

### Testing the Deployment

Once deployed, test the application at `https://huggingface.co/spaces/<your-username>/promptsearchengine`.

## Running Tests

Run the full test suite from a terminal in your local project environment:

```bash
python -m pytest -vv tests/
```

### Test Structure

- **Unit Tests**: Focus on isolated functionality, such as individual endpoints or methods.
- **Integration Tests**: Verify end-to-end behavior using real components.
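As an illustration of the unit-test shape, a test of ranking logic might look like the following (the file name, function names, and the `top_n` helper are hypothetical, not the project's actual tests):

```python
# tests/test_ranking.py — hypothetical example of a unit test for ranking logic.
def top_n(scored_prompts, n):
    # Keep the n highest-scoring (score, prompt) pairs, best first.
    return sorted(scored_prompts, key=lambda pair: pair[0], reverse=True)[:n]

def test_top_n_returns_best_matches_first():
    scored = [(0.42, "b"), (0.95, "a"), (0.61, "c")]
    assert top_n(scored, 2) == [(0.95, "a"), (0.61, "c")]
```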

## Information on How to Use the UI

The **Prompt Search Engine** interface is designed for simplicity and ease of use. Follow these steps to interact with the application:

1. **Enter your query**: In the "Enter your query" field, type a phrase or keywords for which you want to find related prompts.
2. **Set the number of results**: Use the "Number of top results" field to specify how many similar prompts to retrieve (default: 5).
3. **Submit the query**: Click the **Search** button to run the query and display results in real time.
4. **View results**: Results appear in a table with two columns:
   - **Prompt**: The retrieved prompts most similar to your query.
   - **Similarity**: The similarity score between your query and each retrieved prompt.
5. **Interpret results**: Higher similarity scores indicate a closer match to your query. Use these prompts to refine or inspire new input for your task.

The clean, dark theme is optimized for readability, making it easier to analyze and use the results effectively.