czczup committed on
Commit bc71a12
Parent(s): 3b721e0

Upload folder using huggingface_hub

Files changed (1):
  README.md +26 -0
README.md CHANGED
@@ -280,6 +280,32 @@ print(f'User: {question}')
  print(f'Assistant: {response}')
  ```
 
+ ## Deployment
+
+ ### LMDeploy
+
+ LMDeploy is a toolkit for compressing, deploying, and serving LLMs, developed by the MMRazor and MMDeploy teams.
+
+ ```sh
+ pip install lmdeploy
+ ```
+
+ You can run batch inference locally with the following Python code:
+
+ ```python
+ from lmdeploy import ChatTemplateConfig, pipeline
+ from lmdeploy.vl import load_image
+
+ model = 'OpenGVLab/InternVL2-8B'
+ # System prompt (in Chinese): "I am InternVL (书生·万象), a multimodal foundation model
+ # jointly developed by Shanghai AI Laboratory and multiple partner organizations. The
+ # Laboratory is committed to original technological innovation, open source and open
+ # sharing, and to advancing scientific progress and industrial development."
+ system_prompt = '我是书生·万象,英文名是InternVL,是由上海人工智能实验室及多家合作单位联合开发的多模态基础模型。人工智能实验室致力于原始技术创新,开源开放,共享共创,推动科技进步和产业发展。'
+ image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
+ chat_template_config = ChatTemplateConfig('internlm2-chat')
+ chat_template_config.meta_instruction = system_prompt
+ pipe = pipeline(model, chat_template_config=chat_template_config)
+ response = pipe(('describe this image', image))
+ print(response)
+ ```
+
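The added example covers local pipeline inference; the prose also describes LMDeploy as a toolkit for deploying and serving. A minimal serving sketch, assuming LMDeploy's `api_server` subcommand (not part of this commit; the port value is arbitrary):

```sh
# Launch an OpenAI-compatible HTTP server for the model (requires lmdeploy to be installed).
lmdeploy serve api_server OpenGVLab/InternVL2-8B --server-port 23333
```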
  ## License
 
  This project is released under the MIT license, while InternLM is licensed under the Apache-2.0 license.