## How to Contribute Your Chatbot
### Fork the Repository
- Go to our GitHub repository and fork it to your own GitHub account.
### Clone Your Fork
- Clone your fork to your local machine.
### Add Your Chatbot
- In the `app.py` file, add your chatbot's integration (a wiring sketch follows below).
- If using a Hugging Face model, specify the model ID in the appropriate section.
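In practice, the integration usually means registering your `respond` function with the Gradio `ChatInterface` that `app.py` builds. The snippet below is only a sketch of that wiring; the placeholder `respond` body and the exact `additional_inputs` controls are illustrative, not the Space's actual code:

```python
import gradio as gr

def respond(message, history, system_message, max_tokens, temperature, top_p):
    # Placeholder body; the real logic goes here (see the example code section below).
    return f"Echo: {message}"

# Each additional input maps onto one of respond()'s extra parameters.
demo = gr.ChatInterface(
    respond,
    additional_inputs=[
        gr.Textbox(value="You are a friendly chatbot.", label="System message"),
        gr.Slider(minimum=1, maximum=2048, value=512, step=1, label="Max new tokens"),
        gr.Slider(minimum=0.1, maximum=4.0, value=0.7, step=0.1, label="Temperature"),
        gr.Slider(minimum=0.1, maximum=1.0, value=0.95, step=0.05, label="Top-p (nucleus sampling)"),
    ],
)

if __name__ == "__main__":
    demo.launch()
```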
### Test Your Chatbot
- Run the application locally and test your chatbot's functionality.
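To check the plumbing before opening a browser, you can call `respond` directly. The sketch below assumes `app.py` defines `respond` with the signature from the example code section and keeps `demo.launch()` behind an `if __name__ == "__main__":` guard so the import has no side effects; the argument values are placeholders:

```python
from app import respond  # assumes the app file is app.py, as in the Space config

reply = respond(
    "Hello!",                       # message
    [],                             # history: no previous turns
    "You are a friendly chatbot.",  # system_message
    128,                            # max_tokens
    0.7,                            # temperature
    0.95,                           # top_p
)

# If respond() streams (yields progressively longer strings), keep the final one;
# otherwise it is already a plain string.
print(reply if isinstance(reply, str) else list(reply)[-1])
```

Running the app itself (for example with `python app.py`) serves the chat UI locally so you can also try the bot interactively.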
### Submit a Pull Request
- Once satisfied with your chatbot integration, push your changes to your fork and submit a pull request to the main repository.
### Review Process
- Your submission will be reviewed by our team. Please be available for any questions or required changes.
## Example Code for Adding a Chatbot
```python
from huggingface_hub import InferenceClient

client = InferenceClient("your-huggingface-model-id")

def respond(message, history, system_message, max_tokens, temperature, top_p):
    # Your chatbot logic here
    ...
```
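One common way to fill in the "chatbot logic" stub is to forward the conversation to the Hugging Face Inference API with `InferenceClient.chat_completion` and stream tokens back to Gradio. The version below is a sketch under two assumptions: the model behind `your-huggingface-model-id` supports chat completion, and `history` arrives as (user, assistant) pairs the way `gr.ChatInterface` passes it by default:

```python
from huggingface_hub import InferenceClient

client = InferenceClient("your-huggingface-model-id")  # placeholder model ID

def respond(message, history, system_message, max_tokens, temperature, top_p):
    # Build an OpenAI-style message list from the system prompt and chat history.
    messages = [{"role": "system", "content": system_message}]
    for user_turn, assistant_turn in history:
        if user_turn:
            messages.append({"role": "user", "content": user_turn})
        if assistant_turn:
            messages.append({"role": "assistant", "content": assistant_turn})
    messages.append({"role": "user", "content": message})

    # Stream the completion and yield the growing reply so the UI updates live.
    response = ""
    for chunk in client.chat_completion(
        messages,
        max_tokens=max_tokens,
        stream=True,
        temperature=temperature,
        top_p=top_p,
    ):
        token = chunk.choices[0].delta.content or ""
        response += token
        yield response
```

Yielding the accumulating `response` string is what lets the Gradio chat window render the reply incrementally.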
---
title: Chatbots
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 4.39.0
app_file: app.py
pinned: false
---
An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co./docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co./docs/api-inference/index).