Update README.md
README.md
CHANGED
@@ -22,10 +22,6 @@ P.S. Recently we have received a lot of inquiries on accelerating customized mod
 
 ****
 
-## New Features (2023-06-20)
-- We now support cuda version of both 11.X and 12.X
-- lyraChatGLM has been further optimized, with faster model load speed from few minutes to less than 10s for non-int8 mode, and around 1 min for int8 mode!
-
 ## Model Card for lyraChatGLM
 
 lyraChatGLM is currently the **fastest ChatGLM-6B** available. To the best of our knowledge, it is the **first accelerated version of ChatGLM-6B**.
@@ -35,7 +31,10 @@ The inference speed of lyraChatGLM has achieved **300x** acceleration u
 Among its main features are:
 - weights: original ChatGLM-6B weights released by THUDM.
 - device: Nvidia GPU with Amperer architecture or Volta architecture (A100, A10, V100...).
-- batch_size: compiled with dynamic batch size, maximum depends on device.
+- batch_size: compiled with dynamic batch size, maximum depends on device.
+## New Features (2023-06-20)
+- We now support cuda version of both 11.X and 12.X
+- lyraChatGLM has been further optimized, with faster model load speed from few minutes to less than 10s for non-int8 mode, and around 1 min for int8 mode!
 
 ## Speed
 - orginal version(fixed batch infer): commit id 1d240ba
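The relocated feature list above highlights dynamic batch size and the fp16/int8 load-time difference. For readers landing on this commit, here is a minimal usage sketch; the `LyraChatGLM6B` class name, constructor arguments, paths, and `generate()` parameters are assumptions modeled on the repo's demo script, not details confirmed by this diff.

```python
# Hypothetical sketch of dynamic-batch inference with lyraChatGLM.
# All names, paths, and parameters below are assumptions; see the repo's
# demo script and model card for the authoritative API.
from lyraChatGLM import LyraChatGLM6B  # assumed import path

MODEL_PATH = "./models/1-gpu-fp16.bin"  # assumed converted-weight file
TOKENIZER_PATH = "./models"             # assumed tokenizer directory
DATA_TYPE = "fp16"                      # non-int8 mode: loads in seconds per the note above
INT8_MODE = 0                           # 1 enables int8 mode (slower load, lower memory)

model = LyraChatGLM6B(MODEL_PATH, TOKENIZER_PATH, DATA_TYPE, INT8_MODE)

# Dynamic batch size: pass as many prompts as the GPU can hold in one call;
# the maximum depends on the device (A100, A10, V100, ...).
prompts = ["List three machine learning algorithms and when to use each."] * 8

outputs = model.generate(
    prompts,
    output_length=150,
    top_k=30,
    top_p=0.85,
    temperature=0.35,
    repetition_penalty=1.2,
    do_sample=False,
)
print(outputs)
```

Submitting all prompts in a single `generate()` call is the pattern the dynamic-batch compilation mentioned above is meant to enable, rather than looping over prompts one at a time.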