lcw99 committed
Commit 09245b4
1 Parent(s): 60cc967

Update README.md

Files changed (1): README.md +5 -3
README.md CHANGED
@@ -1,6 +1,8 @@
 ---
 library_name: transformers
-tags: []
+license: apache-2.0
+language:
+- ko
 ---
 
 # Model Card for Model ID
@@ -10,7 +12,7 @@ tags: []
 
 ### Model Description
 
-The Gemma Self-Attention Merged model is a large language model created by merging the self-attention layers of an English-based Gemma 7B model and a Korean-based Gemma 7B model. This merger allows the model to leverage the capabilities of both the English and Korean models, resulting in a more versatile and capable language model that can perform well on tasks involving both English and Korean text.
+The Gemma Self-Attention Merged model is a large language model created by merging the self-attention layers of an [English-based Gemma 7B model](https://huggingface.co/google/gemma-1.1-7b-it) and a [Korean-based Gemma 7B model](https://huggingface.co/beomi/gemma-ko-7b). This merger allows the model to leverage the capabilities of both the English and Korean models, resulting in a more versatile and capable language model that can perform well on tasks involving both English and Korean text.
 
 The key features of this merged model include:
 
@@ -26,4 +28,4 @@ The key features of this merged model include:
 
 ### Model Sources [optional]
 
-- **Repository:** https://github.com/lcw99/merge-gemma-attn.git
+- **Repository:** https://github.com/lcw99/merge-gemma-attn.git
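
The card only names the technique; the authoritative merge script lives in the repository linked above. For orientation, here is a minimal sketch of what merging the self-attention layers of the two source checkpoints could look like. The two model IDs come from the model card itself; the simple weight-averaging strategy, the `.self_attn.` name filter, and the output path are assumptions for illustration, not the author's confirmed procedure.

```python
# Minimal sketch, assuming the merge averages the self-attention
# projections of the two checkpoints. The averaging strategy, name
# filter, and output path are hypothetical; see the merge-gemma-attn
# repository for the actual method.
import torch
from transformers import AutoModelForCausalLM

en_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-1.1-7b-it", torch_dtype=torch.bfloat16
)
ko_model = AutoModelForCausalLM.from_pretrained(
    "beomi/gemma-ko-7b", torch_dtype=torch.bfloat16
)

merged_state = ko_model.state_dict()
for name, en_tensor in en_model.state_dict().items():
    # Gemma attention parameters are named e.g.
    # "model.layers.0.self_attn.q_proj.weight". Everything else
    # (embeddings, MLP blocks, norms) is kept from the Korean model.
    if ".self_attn." in name:
        merged_state[name] = (merged_state[name] + en_tensor) / 2

ko_model.load_state_dict(merged_state)
ko_model.save_pretrained("./gemma-ko-en-attn-merged")  # hypothetical path
```

This kind of element-wise merge is only well-defined because both checkpoints derive from the same Gemma 7B base, so every parameter tensor has an identical shape and the two state dicts align key for key.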