nilq committed
Commit ee05493
1 Parent(s): 2f4c6bd

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,7 +1,7 @@
 ---
 base_model:
-- nilq/mistral-1L-tiny
 - nilq/lua-mistral-1L-tiny
+- nilq/mistral-1L-tiny
 library_name: transformers
 tags:
 - mergekit
@@ -20,8 +20,8 @@ This model was merged using the SLERP merge method.
 ### Models Merged
 
 The following models were included in the merge:
-* [nilq/mistral-1L-tiny](https://huggingface.co/nilq/mistral-1L-tiny)
 * [nilq/lua-mistral-1L-tiny](https://huggingface.co/nilq/lua-mistral-1L-tiny)
+* [nilq/mistral-1L-tiny](https://huggingface.co/nilq/mistral-1L-tiny)
 
 ### Configuration
 
@@ -35,7 +35,7 @@ merge_method: slerp
 base_model: nilq/mistral-1L-tiny
 parameters:
   t:
-  - value: 0.9
+  - value: 0.6
 dtype: float16
 
 ```
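The configuration above is the merge recipe that produced the checkpoint in this repository. As a quick sanity check after a merge like this, the result can be loaded with transformers; the repository id below is a hypothetical placeholder (the diff does not name the target repo), and the prompt is arbitrary.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id: substitute the repository this commit was pushed to.
repo_id = "nilq/lua-mistral-1L-tiny-slerp"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.float16)

# Arbitrary prompt, just to confirm the merged weights load and generate.
inputs = tokenizer("print('hello, world')", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```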
mergekit_config.yml CHANGED
@@ -5,5 +5,5 @@ merge_method: slerp
 base_model: nilq/mistral-1L-tiny
 parameters:
   t:
-  - value: 0.9
+  - value: 0.6
 dtype: float16
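The only functional change in the config is the interpolation factor t, lowered from 0.9 to 0.6. As a rough illustration of what that parameter controls, here is a minimal NumPy sketch of spherical linear interpolation (SLERP) between two flattened weight tensors; the slerp helper, the eps guard, and the random stand-in arrays are illustrative assumptions, not mergekit's actual implementation, which works tensor-by-tensor on the real checkpoints.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation: t = 0 returns v0, t = 1 returns v1."""
    v0_unit = v0 / (np.linalg.norm(v0) + eps)
    v1_unit = v1 / (np.linalg.norm(v1) + eps)
    # Angle between the two weight vectors on the unit hypersphere.
    omega = np.arccos(np.clip(np.dot(v0_unit, v1_unit), -1.0, 1.0))
    if np.abs(np.sin(omega)) < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return (1.0 - t) * v0 + t * v1
    return (np.sin((1.0 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

# Toy stand-ins for one tensor from each source model.
a = np.random.randn(16)  # e.g. a tensor from nilq/mistral-1L-tiny (the base)
b = np.random.randn(16)  # e.g. the matching tensor from nilq/lua-mistral-1L-tiny
merged = slerp(0.6, a, b)  # t = 0.6 sits 60% of the way along the arc toward b
```

Under this sketch's convention, lowering t from 0.9 to 0.6 keeps the merge noticeably closer to the base model's weights.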
model-00001-of-00001.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:6d3dc17adf955f9448b99eabfb22e2b2f74d0775d24b8d10a2168b49e37aadae
+oid sha256:07ed871562f23015bd85857be52b149e709c91cca769988ae4387025e387ecd9
 size 70258952