## How to Contribute Your Chatbot
1. **Fork the Repository**
- Go to our GitHub repository and fork it to your own GitHub account.
2. **Clone Your Fork**
- Clone your fork to your local machine.
3. **Add Your Chatbot**
- Add your chatbot's integration in the `app.py` file.
- If you are using a Hugging Face model, specify its model ID when creating the `InferenceClient` (see the example below).
4. **Test Your Chatbot**
- Run the application locally (e.g. with `python app.py`) and test your chatbot's functionality.
5. **Submit a Pull Request**
- Once satisfied with your chatbot integration, push your changes to your fork and submit a pull request to the main repository.
6. **Review Process**
- Your submission will be reviewed by our team. Please be available for any questions or required changes.
## Example Code for Adding a Chatbot
```python
from huggingface_hub import InferenceClient

client = InferenceClient("your-huggingface-model-id")

def respond(message, history, system_message, max_tokens, temperature, top_p):
    # Your chatbot logic here: build a message list from the system prompt,
    # the chat history, and the new user message, then query the model
    # (e.g. with client.chat_completion) and return the reply.
    ...
```
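For reference, here is a minimal end-to-end sketch of how the skeleton above could be filled in and wired into a Gradio chat UI. It assumes a chat-capable model behind the placeholder model ID and uses `InferenceClient.chat_completion`; the system-message default and slider ranges are illustrative choices, not requirements of this Space.

```python
import gradio as gr
from huggingface_hub import InferenceClient

client = InferenceClient("your-huggingface-model-id")  # placeholder model ID

def respond(message, history, system_message, max_tokens, temperature, top_p):
    # Build an OpenAI-style message list from the system prompt, prior turns,
    # and the new user message
    messages = [{"role": "system", "content": system_message}]
    for user_msg, bot_msg in history:
        messages.append({"role": "user", "content": user_msg})
        messages.append({"role": "assistant", "content": bot_msg})
    messages.append({"role": "user", "content": message})

    # Query the model through the Inference API and return the assistant's reply
    out = client.chat_completion(
        messages, max_tokens=max_tokens, temperature=temperature, top_p=top_p
    )
    return out.choices[0].message.content

# Expose respond() through a chat UI; the extra inputs map to its parameters
demo = gr.ChatInterface(
    respond,
    additional_inputs=[
        gr.Textbox(value="You are a friendly chatbot.", label="System message"),
        gr.Slider(minimum=1, maximum=2048, value=512, step=1, label="Max new tokens"),
        gr.Slider(minimum=0.1, maximum=4.0, value=0.7, step=0.1, label="Temperature"),
        gr.Slider(minimum=0.1, maximum=1.0, value=0.95, step=0.05, label="Top-p"),
    ],
)

if __name__ == "__main__":
    demo.launch()
```

With something like this in `app.py`, running the app locally (step 4 above) should serve the chat interface so you can try your bot before opening a pull request.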
---
title: Chatbots
emoji: 💬
colorFrom: yellow
colorTo: purple
sdk: gradio
sdk_version: 4.39.0
app_file: app.py
pinned: false
---
An example chatbot using [Gradio](https://gradio.app), [`huggingface_hub`](https://huggingface.co./docs/huggingface_hub/v0.22.2/en/index), and the [Hugging Face Inference API](https://huggingface.co./docs/api-inference/index).