---
tags:
- vision
- zero-shot-image-classification
- endpoints-template
library_name: generic
---
# Fork of [openai/clip-vit-base-patch32](https://huggingface.co./openai/clip-vit-base-patch32) for a `zero-shot-image-classification` Inference Endpoint.
This repository implements a `custom` task for `zero-shot-image-classification` for 🤗 Inference Endpoints. The code for the customized pipeline is in the [pipeline.py](https://huggingface.co./philschmid/clip-zero-shot-image-classification/blob/main/pipeline.py).
To deploy this model as an Inference Endpoint, you have to select `Custom` as the task so that the `pipeline.py` file is used. -> _double check if it is selected_
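For orientation, below is a minimal sketch of what such a custom `pipeline.py` could look like. It assumes the generic custom-pipeline interface (a `PreTrainedPipeline` class whose `__call__` receives the request payload as a dict) and uses `CLIPModel`/`CLIPProcessor` from `transformers`; the actual implementation is the linked `pipeline.py` and may differ in detail.
```python
import base64
from io import BytesIO
from typing import Any, Dict, List

from PIL import Image
from transformers import CLIPModel, CLIPProcessor


class PreTrainedPipeline:
    def __init__(self, path: str = ""):
        # load the CLIP model and processor from the repository directory
        self.model = CLIPModel.from_pretrained(path)
        self.processor = CLIPProcessor.from_pretrained(path)

    def __call__(self, inputs: Dict[str, Any]) -> List[Dict[str, Any]]:
        # decode the base64-encoded image and read the candidate labels
        # (the "candiates" key matches the payload spelling used below)
        image = Image.open(BytesIO(base64.b64decode(inputs["image"])))
        candidates = inputs["candiates"]

        # score the image against every candidate label
        processed = self.processor(
            text=candidates, images=image, return_tensors="pt", padding=True
        )
        outputs = self.model(**processed)
        probs = outputs.logits_per_image.softmax(dim=1)[0].tolist()

        # return label/score pairs, highest score first
        results = [
            {"label": label, "score": score} for label, score in zip(candidates, probs)
        ]
        return sorted(results, key=lambda x: x["score"], reverse=True)
```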
### Expected request payload
```json
{
"image": "/9j/4AAQSkZJRgABAQEBLAEsAAD/2wBDAAMCAgICAgMC....", // base64 image as bytes
"candiates":["sea","palace","car","ship"]
}
```
Below is an example of how to run a request using Python and `requests`.
## Run Request
1. Prepare an image.
```bash
wget https://huggingface.co./datasets/mishig/sample_images/resolve/main/palace.jpg
```
2. Run the request.
```python
import base64
from typing import List

import requests as r

ENDPOINT_URL = ""  # URL of your deployed Inference Endpoint
HF_TOKEN = ""  # Hugging Face access token with permission to call the endpoint


def predict(path_to_image: str = None, candiates: List[str] = None):
    # read the image and base64-encode it so it can be sent as JSON
    with open(path_to_image, "rb") as i:
        b64 = base64.b64encode(i.read())
    payload = {"inputs": {"image": b64.decode("utf-8"), "candiates": candiates}}
    response = r.post(
        ENDPOINT_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, json=payload
    )
    return response.json()


prediction = predict(
    path_to_image="palace.jpg", candiates=["sea", "palace", "car", "ship"]
)
```
Expected output:
```python
[{'label': 'palace', 'score': 0.9996134638786316},
{'label': 'car', 'score': 0.0002602009626571089},
{'label': 'ship', 'score': 0.00011758189066313207},
{'label': 'sea', 'score': 8.666840585647151e-06}]
```
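The endpoint returns the candidates already sorted by score, so picking the best match from the `prediction` above is a one-liner:
```python
# take the highest-scoring candidate from the returned list
best = max(prediction, key=lambda item: item["score"])
print(f"{best['label']} ({best['score']:.4f})")  # palace (0.9996)
```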