Create repo card with prompt-templates library
README.md
CHANGED
@@ -1,90 +1,9 @@
---
license: mit
library_name: prompt-templates
tags:
- prompts
---

This repo illustrates how you can use the `prompt_templates` library to load prompts from YAML files in open-weight model repositories.
Several open-weight models have been tuned on specific tasks with specific prompts.
For example, the InternVL2 vision language models are among the very few VLMs that have been trained for zero-shot bounding box prediction of arbitrary objects.
To elicit this capability, users need to use this special prompt: `Please provide the bounding box coordinate of the region this sentence describes: <ref>{region_to_detect}</ref>`

These kinds of task-specific special prompts are currently reported unsystematically in model cards, GitHub repos, .txt files, etc.

The `prompt_templates` library standardises the sharing of prompts in YAML files.
I recommend sharing these special prompts directly in the model repository of the respective model.
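Conceptually, such a YAML prompt file just stores a messages structure with placeholders, and populating it amounts to recursive string substitution. A minimal sketch of the idea (the template structure shown here is an illustrative assumption, not the library's actual schema or implementation):

```python
# Minimal sketch of how a prompt template maps to the OpenAI messages format
# and how placeholders get filled in. The structure below is an illustrative
# assumption, not the exact schema used by prompt_templates.
template = {
    "role": "user",
    "content": [
        {"type": "image_url", "image_url": {"url": "{image_url}"}},
        {
            "type": "text",
            "text": (
                "Please provide the bounding box coordinate of the region "
                "this sentence describes: <ref>{region_to_detect}</ref>"
            ),
        },
    ],
}

def populate(node, **variables):
    """Recursively substitute {placeholders} in every string field."""
    if isinstance(node, str):
        return node.format(**variables)
    if isinstance(node, dict):
        return {key: populate(value, **variables) for key, value in node.items()}
    if isinstance(node, list):
        return [populate(item, **variables) for item in node]
    return node

messages = [populate(template,
                     image_url="https://example.com/bird.jpg",
                     region_to_detect="the bird")]
print(messages[0]["content"][1]["text"])
# -> Please provide the bounding box coordinate of the region this sentence describes: <ref>the bird</ref>
```

Keeping the variables as keyword arguments means the same populate call works for any template, regardless of which placeholders it defines.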

Below is an example for the InternVL2 model.

Note that this model card is not actively maintained; the latest documentation for `prompt_templates` is available at https://github.com/MoritzLaurer/prompt_templates

#### Prompt for extracting bounding boxes of specific objects of interest with InternVL2
```py
#!pip install prompt_templates
from prompt_templates import PromptTemplateLoader

# download image prompt template
prompt_template = PromptTemplateLoader.from_hub(repo_id="MoritzLaurer/open_models_special_prompts", filename="internvl2-bbox-prompt.yaml")

# populate prompt
image_url = "https://unsplash.com/photos/ZVw3HmHRhv0/download?ixid=M3wxMjA3fDB8MXxhbGx8NHx8fHx8fDJ8fDE3MjQ1NjAzNjl8&force=true&w=1920"
region_to_detect = "the bird"
messages = prompt_template.populate_template(image_url=image_url, region_to_detect=region_to_detect)

print(messages)
# out: [{'role': 'user',
#        'content': [{'type': 'image_url',
#                     'image_url': {'url': 'https://unsplash.com/photos/ZVw3HmHRhv0/download?ixid=M3wxMjA3fDB8MXxhbGx8NHx8fHx8fDJ8fDE3MjQ1NjAzNjl8&force=true&w=1920'}},
#                    {'type': 'text',
#                     'text': 'Please provide the bounding box coordinate of the region this sentence describes: <ref>the bird</ref>'}]
#       }]
```

#### Prompt for extracting bounding boxes of any object in an image with InternVL2
```py
# download image prompt template
prompt_template = PromptTemplateLoader.from_hub(repo_id="MoritzLaurer/open_models_special_prompts", filename="internvl2-objectdetection-prompt.yaml")

# populate prompt
image_url = "https://unsplash.com/photos/ZVw3HmHRhv0/download?ixid=M3wxMjA3fDB8MXxhbGx8NHx8fHx8fDJ8fDE3MjQ1NjAzNjl8&force=true&w=1920"
messages = prompt_template.populate_template(image_url=image_url)

print(messages)
# out: [{'role': 'user',
#        'content': [{'type': 'image_url',
#                     'image_url': {'url': 'https://unsplash.com/photos/ZVw3HmHRhv0/download?ixid=M3wxMjA3fDB8MXxhbGx8NHx8fHx8fDJ8fDE3MjQ1NjAzNjl8&force=true&w=1920'}},
#                    {'type': 'text',
#                     'text': 'Please detect and label all objects in the following image and mark their positions.'}]}]
```

#### Using the prompt with an open inference container like vLLM or TGI

These populated prompts in the OpenAI messages format are directly compatible with vLLM or TGI containers.
When you host one of these containers on an HF Endpoint, for example, you can call the model with the OpenAI client or with the HF Inference Client.

```py
from openai import OpenAI
import os

ENDPOINT_URL = "https://tkuaxiztuv9pl4po.us-east-1.aws.endpoints.huggingface.cloud" + "/v1/"

# initialize the OpenAI client, but point it to an endpoint running vLLM or TGI
client = OpenAI(
    base_url=ENDPOINT_URL,
    api_key=os.getenv("HF_TOKEN")
)

response = client.chat.completions.create(
    model="/repository",  # with vLLM deployed on an HF endpoint, this needs to be "/repository", since that is where the model artifacts are stored
    messages=messages,
)

response.choices[0].message.content
# out: 'the bird[[54, 402, 515, 933]]'
```
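The model returns the coordinates inline in the response text, as in `'the bird[[54, 402, 515, 933]]'` above. A small helper for pulling labels and boxes out of such a response (the output grammar is an assumption based on this single example; adjust the pattern if your model's responses differ):

```python
import re

# Matches InternVL2-style spans such as "the bird[[54, 402, 515, 933]]".
# The pattern is inferred from the single example output above and may need
# adjusting for other response formats.
_BBOX_PATTERN = re.compile(r"([^\[\]]+)\[\[(\d+),\s*(\d+),\s*(\d+),\s*(\d+)\]\]")

def parse_bboxes(text):
    """Return a list of (label, [x1, y1, x2, y2]) pairs found in `text`."""
    return [
        (label.strip(), [int(x1), int(y1), int(x2), int(y2)])
        for label, x1, y1, x2, y2 in _BBOX_PATTERN.findall(text)
    ]

print(parse_bboxes("the bird[[54, 402, 515, 933]]"))
# -> [('the bird', [54, 402, 515, 933])]
```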
---
library_name: prompt-templates
tags:
- prompts
- prompt-templates
---

This repository was created with the `prompt-templates` library and contains prompt templates in the `Files` tab.
To easily reuse these templates, see the [prompt-templates documentation](https://github.com/MoritzLaurer/prompt-templates).