Commit 112226e
Parent(s): 849ed3a
Upload README.md with huggingface_hub
README.md
CHANGED
````diff
@@ -1,6 +1,5 @@
 ---
 inference: false
-pipeline_tag: visual-question-answering
 ---
 
 <br>
@@ -55,6 +54,20 @@ predictor = huggingface_model.deploy(
 )
 ```
 
+## Inference on SageMaker
+The default `conv_mode` for llava-1.5 is set to `llava_v1`, which processes `raw_prompt` into a meaningful `prompt`. You can also set `conv_mode` to `raw` to use `raw_prompt` directly.
+```python
+data = {
+    "image" : 'https://raw.githubusercontent.com/haotian-liu/LLaVA/main/images/llava_logo.png',
+    "question" : "Describe the image and color details.",
+    # "max_new_tokens" : 1024,
+    # "temperature" : 0.2,
+    # "conv_mode" : "llava_v1"
+}
+output = predictor.predict(data)
+print(output)
+```
+Or use [SageMakerRuntime](https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime/client/invoke_endpoint.html#invoke-endpoint) to set up an endpoint-invoking client.
 
 ## License
 Llama 2 is licensed under the LLAMA 2 Community License,
````
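The SageMakerRuntime route mentioned in the added section can be sketched as below. This is a minimal illustration, not part of the commit: the endpoint name and the `build_payload`/`ask` helpers are assumptions, and only the request shape (the `data` dict above) comes from the README.

```python
import json


def build_payload(image_url, question, **overrides):
    """Assemble the JSON body the endpoint expects (same keys as the `data` dict above)."""
    body = {"image": image_url, "question": question}
    body.update(overrides)  # optional keys: max_new_tokens, temperature, conv_mode
    return json.dumps(body)


def ask(endpoint_name, image_url, question, **overrides):
    """Invoke the endpoint via boto3's low-level SageMakerRuntime client."""
    import boto3  # imported lazily so build_payload works without the AWS SDK installed

    runtime = boto3.client("sagemaker-runtime")
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,  # placeholder: use the name from predictor.endpoint_name
        ContentType="application/json",
        Body=build_payload(image_url, question, **overrides),
    )
    return response["Body"].read().decode("utf-8")


# Example call (requires AWS credentials and a deployed endpoint):
# print(ask(
#     "my-llava-endpoint",
#     "https://raw.githubusercontent.com/haotian-liu/LLaVA/main/images/llava_logo.png",
#     "Describe the image and color details.",
#     temperature=0.2,
# ))
```

This does the same thing as `predictor.predict(data)`, but needs only boto3, which is convenient when the calling application does not use the SageMaker Python SDK.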