mjbuehler committed
Commit 4cb69dc · verified · 1 Parent(s): b99ea8e

Update README.md

Files changed (1): README.md +10 -0
README.md CHANGED
@@ -13,6 +13,8 @@ tags:
 - bio-inspired
 - text-generation-inference
 - materials science
+base_model:
+- meta-llama/Llama-3.2-11B-Vision-Instruct
 pipeline_tag: image-text-to-text
 inference:
   parameters:
@@ -21,6 +23,7 @@ widget:
 - messages:
   - role: user
     content: <|image_1|>Can you describe what you see in the image?
+
 ---
 ## Model Summary
 
@@ -62,9 +65,16 @@ The raw input text is:
 
 ### Sample inference code
 
+Update your transformers installation if necessary:
+```
+pip install -U transformers
+```
+
 This code snippets show how to get quickly started on a GPU:
 
 ```python
+from transformers import MllamaForConditionalGeneration, AutoProcessor
+
 DEVICE='cuda:0'
 model_id='lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k'
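For context, here is a minimal sketch of how the import added by this commit and the widget prompt in the front matter fit together. Everything beyond the diff itself (the chat-message structure, the commented-out model/processor loading) is an assumption based on the standard transformers Mllama API, not part of the commit:

```python
# Sketch only: the commit adds this import to the README's sample code.
# Loading the 11B checkpoint downloads large weights, so the assumed usage
# is left as comments here:
#
#   from transformers import MllamaForConditionalGeneration, AutoProcessor
#   model = MllamaForConditionalGeneration.from_pretrained(model_id).to(DEVICE)
#   processor = AutoProcessor.from_pretrained(model_id)

DEVICE = 'cuda:0'
model_id = 'lamm-mit/Cephalo-Llama-3.2-11B-Vision-Instruct-128k'

# The widget prompt "<|image_1|>Can you describe what you see in the image?"
# would correspond to a chat-message structure like this for Llama 3.2 Vision
# processors (an assumption; the README's full example would pass it to
# processor.apply_chat_template together with the image):
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Can you describe what you see in the image?"},
        ],
    }
]
```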
80