YongganFu committed (verified)
Commit d253bfd · Parent(s): ac10307

Update README.md

Files changed (1): README.md (+3 −3)
README.md CHANGED
@@ -12,7 +12,7 @@ Developed by Deep Learning Efficiency Research (DLER) team at NVIDIA Research.
 - Fuse attention heads and SSM heads within the same layer, offering parallel and complementary processing of the same inputs

 <div align="center">
-<img src="https://huggingface.co/nvidia/Hymba-1.5B/resolve/main/images/module.png" alt="Hymba Module" width="600">
+<img src="https://huggingface.co/nvidia/Hymba-1.5B-Instruct/resolve/main/images/module.png" alt="Hymba Module" width="600">
 </div>

 - Introduce meta tokens that are prepended to the input sequences and interact with all subsequent tokens, thus storing important information and alleviating the burden of "forced-to-attend" in attention
@@ -20,7 +20,7 @@ Developed by Deep Learning Efficiency Research (DLER) team at NVIDIA Research.
 - Integrate with cross-layer KV sharing and global-local attention to further boost memory and computation efficiency

 <div align="center">
-<img src="https://huggingface.co/nvidia/Hymba-1.5B/resolve/main/images/macro_arch.png" alt="Hymba Model" width="600">
+<img src="https://huggingface.co/nvidia/Hymba-1.5B-Instruct/resolve/main/images/macro_arch.png" alt="Hymba Model" width="600">
 </div>


@@ -32,7 +32,7 @@ Developed by Deep Learning Efficiency Research (DLER) team at NVIDIA Research.
 </div>


-- Hymba-1.5B-Instruct: Outperform all sub-2B public models.
+- Hymba-1.5B-Instruct: Outperform SOTA small LMs.


 <div align="center">
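The first hunk's context describes Hymba's core design: attention heads and SSM heads fused within one layer, processing the same input in parallel. A minimal PyTorch sketch of that parallel-fusion idea follows. It is an illustrative toy, not the Hymba implementation: the diagonal linear recurrence standing in for the SSM branch, the per-branch LayerNorm, and the averaged fusion are all assumptions.

```python
# Toy sketch of a fused attention + SSM block: both branches read the SAME
# input in parallel, and their normalized outputs are combined (fusion scheme
# assumed for illustration).
import torch
import torch.nn as nn

class HybridHeadBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        # Toy SSM branch: per-channel diagonal recurrence (stand-in for a
        # real Mamba-style SSM).
        self.in_proj = nn.Linear(d_model, d_model)
        self.decay = nn.Parameter(torch.rand(d_model))
        self.out_proj = nn.Linear(d_model, d_model)
        self.norm_attn = nn.LayerNorm(d_model)
        self.norm_ssm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Attention branch: global, high-resolution view of the tokens.
        attn_out, _ = self.attn(x, x, x, need_weights=False)
        # SSM branch over the same input: h_t = a * h_{t-1} + (1 - a) * u_t.
        u = self.in_proj(x)
        a = torch.sigmoid(self.decay)
        h = torch.zeros_like(u[:, 0])
        states = []
        for t in range(u.size(1)):
            h = a * h + (1 - a) * u[:, t]
            states.append(h)
        ssm_out = self.out_proj(torch.stack(states, dim=1))
        # Combine the two parallel, complementary views (averaging assumed).
        return x + 0.5 * (self.norm_attn(attn_out) + self.norm_ssm(ssm_out))

x = torch.randn(2, 16, 64)              # (batch, seq, d_model)
print(HybridHeadBlock(64, 4)(x).shape)  # torch.Size([2, 16, 64])
```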
 
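The meta-token bullet in the same hunk describes learned tokens prepended to every input sequence, giving attention heads informative positions to attend to instead of being "forced to attend" elsewhere. A minimal sketch of that prepending step, with an assumed meta-token count and illustrative names:

```python
# Toy sketch of meta-token prepending: a small set of learned, input-
# independent embeddings is concatenated in front of every sequence.
import torch
import torch.nn as nn

class MetaTokenPrepend(nn.Module):
    def __init__(self, n_meta: int, d_model: int):
        super().__init__()
        # One learned embedding row per meta token.
        self.meta = nn.Parameter(torch.randn(n_meta, d_model) * 0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model) token embeddings.
        meta = self.meta.unsqueeze(0).expand(x.size(0), -1, -1)
        # Meta tokens come first, so under a causal mask every real token
        # can attend to them; they act as a learned cache of priors.
        return torch.cat([meta, x], dim=1)

x = torch.randn(2, 16, 64)
print(MetaTokenPrepend(8, 64)(x).shape)  # torch.Size([2, 24, 64])
```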
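The second hunk's context mentions cross-layer KV sharing combined with global-local attention. A rough sketch of both efficiency tricks in one toy module follows; the pairing of exactly two sublayers per shared K/V projection and the window size are assumptions, and none of this mirrors the actual Hymba code:

```python
# Toy sketch of (a) sliding-window "local" causal attention and (b) cross-
# layer KV sharing: two attention sublayers reuse one K/V projection, so the
# KV cache is stored once per pair.
import torch
import torch.nn as nn
import torch.nn.functional as F

def split_heads(t: torch.Tensor, n_heads: int) -> torch.Tensor:
    b, s, d = t.shape
    return t.view(b, s, n_heads, d // n_heads).transpose(1, 2)  # (b, h, s, hd)

class SharedKVLocalPair(nn.Module):
    def __init__(self, d_model: int, n_heads: int, window: int = 8):
        super().__init__()
        self.n_heads = n_heads
        self.window = window
        self.kv = nn.Linear(d_model, 2 * d_model)  # computed once per pair
        self.q_projs = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(2))
        self.o_projs = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, s, d = x.shape
        k, v = (split_heads(t, self.n_heads) for t in self.kv(x).chunk(2, dim=-1))
        # Sliding-window causal mask: token i attends to j in (i - window, i].
        i = torch.arange(s).unsqueeze(1)
        j = torch.arange(s).unsqueeze(0)
        keep = (j <= i) & (i - j < self.window)  # True = attend
        for q_proj, o_proj in zip(self.q_projs, self.o_projs):
            q = split_heads(q_proj(x), self.n_heads)
            # Both sublayers read the SAME k, v: no second KV cache needed.
            y = F.scaled_dot_product_attention(q, k, v, attn_mask=keep)
            x = x + o_proj(y.transpose(1, 2).reshape(b, s, d))
        return x

x = torch.randn(2, 16, 64)                # (batch, seq, d_model)
print(SharedKVLocalPair(64, 4)(x).shape)  # torch.Size([2, 16, 64])
```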