**InternLM-XComposer2.5** excels in various text-image comprehension and composition applications, achieving GPT-4V-level capabilities with a mere 7B LLM backend. IXC-2.5 is trained with 24K interleaved image-text contexts and can seamlessly extend to 96K long contexts via RoPE extrapolation. This long-context capability allows IXC-2.5 to excel in tasks requiring extensive input and output contexts.
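The RoPE extrapolation idea mentioned above can be sketched in a few lines. This is an illustrative position-interpolation example only, not IXC-2.5's actual implementation; the function names and the scale factor are assumptions for demonstration:

```python
import math

def rope_frequencies(dim, base=10000.0, scale=1.0):
    # Inverse frequencies for rotary position embeddings.
    # A scale > 1 stretches positions so a model trained on 24K tokens
    # can be probed at 96K (scale=4) without exceeding trained angles.
    return [1.0 / (base ** (2 * i / dim)) / scale for i in range(dim // 2)]

def rotate_pair(x0, x1, pos, freq):
    # Apply the 2D rotation RoPE uses on one (even, odd) feature pair.
    angle = pos * freq
    c, s = math.cos(angle), math.sin(angle)
    return (x0 * c - x1 * s, x0 * s + x1 * c)
```

With `scale=4`, position 96000 produces the same rotation angle that position 24000 produces at `scale=1`, which is why the extended window stays within the angle range seen during training.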
## 4-Bit Model
We offer 4-bit quantized models via LMDeploy to reduce memory requirements. For a memory usage comparison, please refer to [here](example_code/4bit/README.md).

```python
from lmdeploy import TurbomindEngineConfig, pipeline
from lmdeploy.vl import load_image

engine_config = TurbomindEngineConfig(model_format='awq')
pipe = pipeline('internlm/internlm-xcomposer2d5-7b-4bit', backend_config=engine_config)
image = load_image('examples/dubai.png')
response = pipe(('describe this image', image))
print(response.text)
```
### Import from Transformers
To load the InternLM-XComposer2.5 model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM