---
title: Search Engine
emoji: 🔍
colorFrom: blue
colorTo: indigo
sdk: docker
pinned: false
---

Prompt Search Engine

Table of Contents

  1. Project Overview
  2. Environment Setup
  3. Run the Project
  4. Instructions for Building and Running the Docker Container
  5. API Endpoints and Usage
  6. Deployment Details
  7. Running Tests
  8. Information on How to Use the UI

Project Overview

The Prompt Search Engine is designed to address the growing need for high-quality prompts used in AI-generated content, particularly for models like Stable Diffusion. By leveraging a database of existing prompts, this search engine helps users discover the most relevant and effective prompts, significantly enhancing the quality of generated images.

The main goal of the prompt search engine is to return the top n prompts most similar to an input query. Supplying these better prompts to Stable Diffusion models in turn yields higher-quality generated images.
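
For illustration, the core ranking step could look like the sketch below. This is not the project's actual implementation: the embedding model (sentence-transformers' all-MiniLM-L6-v2), the tiny sample corpus, and cosine-similarity ranking are assumptions used only to show the idea of returning the top n most similar prompts.

```python
# Illustrative ranking sketch (assumed, not the project's actual code).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

corpus = [
    "a watercolor painting of a fox in a snowy forest",
    "portrait photo of an astronaut, studio lighting",
    "a fox sitting in snow, digital art, highly detailed",
]

def top_n_similar(query: str, n: int = 5):
    # Embed the query and the corpus, then rank prompts by cosine similarity.
    query_vec = model.encode([query], normalize_embeddings=True)
    corpus_vecs = model.encode(corpus, normalize_embeddings=True)
    scores = (corpus_vecs @ query_vec.T).ravel()
    ranked = np.argsort(scores)[::-1][:n]
    return [(float(scores[i]), corpus[i]) for i in ranked]

print(top_n_similar("a fox in the snow", n=2))
```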

Technology Used

This project leverages a modern tech stack to deliver efficient search functionality:

  1. FastAPI: A high-performance web framework for building the backend API.
  2. Gradio: A lightweight UI framework for creating the frontend interface.
  3. Hugging Face Spaces: For hosting the application using Docker.
  4. Hugging Face Datasets: Library used to download and process the google-research-datasets/conceptual_captions dataset at runtime (a minimal loading sketch follows this list).
  5. Uvicorn: ASGI server for running the FastAPI application.
  6. Python: Core language used for development and scripting.
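
As referenced in item 4 above, the prompt corpus can be pulled with the Hugging Face datasets library. The snippet below is a minimal sketch rather than the project's actual loader; streaming mode and the 1,000-caption sample size are assumptions.

```python
# Minimal loading sketch (assumed): stream Conceptual Captions and collect
# a sample of captions to index as prompts.
from datasets import load_dataset

dataset = load_dataset(
    "google-research-datasets/conceptual_captions",
    split="train",
    streaming=True,  # avoid downloading the full dataset up front
)

# "caption" is the dataset's caption column.
captions = [row["caption"] for _, row in zip(range(1_000), dataset)]
print(captions[:3])
```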

Environment Setup

To set up the environment for the Prompt Search Engine, follow these steps:

Prerequisites

  1. Python: Ensure Python >= 3.9 is installed. You can download it from Python.org.
  2. Docker: Install Docker to containerize and deploy the application. Visit Docker's official site for installation instructions.
  3. Conda (Optional): Install Miniconda or Anaconda for managing a virtual environment locally.

Steps to Install Dependencies

  1. Navigate to the project directory:

    cd <project-directory>
    
  2. Create and activate a Conda environment (optional):

    conda create -n prompt_search_env python={version} -y
    conda activate prompt_search_env
    
    • Replace {version} with your desired Python version (e.g., 3.9).
  3. Install dependencies inside the Conda environment using pip:

    pip install -r requirements.txt
    
  4. Review and update the config.py file to match your environment, such as specifying API keys or dataset paths.
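
As step 4 above notes, config.py holds environment-specific settings. The following is a purely hypothetical sketch of what such a file might contain; the actual variable names and values in this project may differ.

```python
# Hypothetical config.py sketch; the project's real settings may differ.
DATASET_NAME = "google-research-datasets/conceptual_captions"  # prompt corpus to index
HF_TOKEN = ""          # optional Hugging Face token, if a gated dataset/model is used
TOP_N_DEFAULT = 5      # default number of results returned by /search
API_HOST = "0.0.0.0"   # bind address for the FastAPI backend
API_PORT = 8000        # backend port
UI_PORT = 7860         # Gradio frontend port
```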

Run the Project

You can run the application locally using either a Conda environment or Docker:

  • Using Conda Environment:

    1. Start the backend API. Swagger documentation will be accessible at http://0.0.0.0:8000/docs:
      python run.py
      
    2. Run the frontend application:
      python -m fe.gradio_app
      

    The frontend will be accessible at http://0.0.0.0:7860.

  • Using Docker: Refer to the instructions in the next section for building and running the Docker container.

Instructions for Building and Running the Docker Container

  1. Build the Docker image:

    docker build -t prompt-search-engine .
    
  2. Run the Docker container:

    docker run -p 8000:8000 -p 7860:7860 prompt-search-engine
    
    • The backend API will be accessible at http://0.0.0.0:8000/docs.
    • The frontend will be accessible at http://0.0.0.0:7860.

Your environment is now ready to use the Prompt Search Engine.

API Endpoints and Usage

/search (GET)

Endpoint for querying the search engine.

Parameters:

  • query (str): The search query. Required.
  • n (int): Number of results to return (default: 5). Must be greater than or equal to 1.

Example Request:

curl -X GET "http://0.0.0.0:8000/search?query=example+prompt&n=5"

Example Response:

{
    "query": "example prompt",
    "results": [
        {"score": 0.95, "prompt": "example similar prompt 1"},
        {"score": 0.92, "prompt": "example similar prompt 2"}
    ]
}
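
The same request can also be issued from Python with the requests library. This client snippet is a sketch that assumes the service is running locally on port 8000; it is not part of the project code.

```python
# Minimal client sketch for the /search endpoint (assumes a local server).
import requests

response = requests.get(
    "http://localhost:8000/search",
    params={"query": "example prompt", "n": 5},
    timeout=30,
)
response.raise_for_status()

for result in response.json()["results"]:
    print(f'{result["score"]:.2f}  {result["prompt"]}')
```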

Deployment Details

Overview

This section outlines the steps to deploy the Prompt Search Engine application using Docker and Hugging Face Spaces. The application comprises a backend (API) and a frontend (Gradio-based UI) that run together in a single Docker container.

Prerequisites

  1. A Hugging Face account.
  2. Git installed locally.
  3. Access to the project repository on GitHub.
  4. Docker installed locally for testing.
  5. A Hugging Face Access Token (needed for authentication).

Deployment Steps

  1. Create a Hugging Face Space:

    • Log in to Hugging Face Spaces.
    • Click on Create Space.
    • Fill in the details:
      • Space Name: Choose a name like promptsearchengine.
      • SDK: Select Docker.
      • Visibility: Choose between public or private.
    • Click Create Space to generate a new repository.
  2. Create a Hugging Face Access Token:

    • Log in to Hugging Face.
    • Navigate to Settings > Access Tokens.
    • Click New Token:
      • Name: Promptsearchengine Deployment.
      • Role: Select Write.
    • Copy the token. You’ll need it for pushing to Hugging Face Spaces.
  3. Test the Application Locally:

    docker build -t promptsearchengine . 
    docker run -p 8000:8000 -p 7860:7860 promptsearchengine
    
    • Backend: Test at http://localhost:8000.
    • Frontend: Test at http://localhost:7860.
  4. Prepare the Project for Hugging Face Spaces:

    • Ensure the Dockerfile is updated for Hugging Face Spaces:
      • Set environment variables for writable directories (e.g., HF_HOME=/tmp/huggingface).
    • Ensure a valid README.md is present at the root with the Hugging Face configuration:

      ```markdown
      ---
      title: Promptsearchengine
      emoji: 🔍
      colorFrom: blue
      colorTo: indigo
      sdk: docker
      pinned: false
      ---
      ```

  5. Push the Project to Hugging Face Spaces:

    git remote add space https://huggingface.co./spaces/<your-username>/promptsearchengine
    git push space main
    
  6. Monitor the Build Logs:

    • Navigate to your Space on Hugging Face.
    • Monitor the "Logs" tab to ensure the build completes successfully.

Testing the Deployment

Once deployed, test the application at https://huggingface.co./spaces/<your-username>/promptsearchengine.

Running Tests

Run all the tests from a terminal in your local project environment:

python -m pytest -vv tests/

Test Structure

  • Unit Tests: Focus on isolated functionality, like individual endpoints or methods.
  • Integration Tests: Verify end-to-end behavior using real components.
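
As an example of the unit-test style, a test for the /search endpoint might look like the sketch below. The import path of the FastAPI app is an assumption, and the real tests in tests/ may be structured differently.

```python
# Hypothetical unit-test sketch; the real app module path may differ.
from fastapi.testclient import TestClient

from app import app  # assumed location of the FastAPI instance

client = TestClient(app)

def test_search_returns_requested_number_of_results():
    response = client.get("/search", params={"query": "sunset over mountains", "n": 3})
    assert response.status_code == 200
    body = response.json()
    assert body["query"] == "sunset over mountains"
    # The engine may return fewer results if the corpus is small.
    assert len(body["results"]) <= 3
```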

Information on How to Use the UI

The Prompt Search Engine interface is designed for simplicity and ease of use. Follow these steps to interact with the application:

  1. Enter Your Query:
    • In the "Enter your query" field, type a phrase or keywords for which you want to find related prompts.
  2. Set the Number of Results:
    • Use the "Number of top results" field to specify how many similar prompts you want to retrieve. Default is 5.
  3. Submit a Query:
    • Click the Search button to execute your query; results are displayed in real time.
  4. View Results:
    • The results will display in a table with the following columns:
      • Prompt: The retrieved prompts that are most similar to your query.
      • Similarity: The similarity score between your query and each retrieved prompt.
  5. Interpreting Results:
    • Higher similarity scores indicate a closer match to your query.
    • Use these prompts to refine or inspire new input for your task.

The clean, dark theme is optimized for readability, making it easier to analyze and use the results effectively.
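
To make the wiring between this UI and the backend concrete, the sketch below shows how such an interface could be built with Gradio. It is a hypothetical illustration, not the project's fe/gradio_app.py: the backend URL, function names, and component choices are assumptions.

```python
# Hypothetical Gradio UI sketch; the project's actual frontend may differ.
import gradio as gr
import requests

def search(query: str, n: float):
    # Call the backend /search endpoint and shape results for the table.
    resp = requests.get(
        "http://localhost:8000/search",
        params={"query": query, "n": int(n)},
        timeout=30,
    )
    resp.raise_for_status()
    return [[r["prompt"], r["score"]] for r in resp.json()["results"]]

demo = gr.Interface(
    fn=search,
    inputs=[
        gr.Textbox(label="Enter your query"),
        gr.Number(label="Number of top results", value=5, precision=0),
    ],
    outputs=gr.Dataframe(headers=["Prompt", "Similarity"]),
    title="Prompt Search Engine",
)

if __name__ == "__main__":
    demo.launch(server_name="0.0.0.0", server_port=7860)
```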