Goekdeniz-Guelmez committed
Commit af19335 · verified · 1 Parent(s): b7cde87

Upload 6 files

Files changed (2)
  1. README.md +7 -6
  2. model.safetensors.index.json +0 -0
README.md CHANGED
@@ -1,14 +1,15 @@
 ---
-base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
 language:
 - en
 - de
 license: apache-2.0
-license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
-pipeline_tag: text-generation
 tags:
 - chat
 - mlx
+- mlx-my-repo
+base_model: Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
+license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
+pipeline_tag: text-generation
 model-index:
 - name: Josiefied-Qwen2.5-14B-Instruct-abliterated-v4
   results:
@@ -106,9 +107,9 @@ model-index:
   name: Open LLM Leaderboard
 ---
 
-# mlx-community/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-4-bit
+# Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-Q4-mlx
 
-The Model [mlx-community/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-4-bit](https://huggingface.co/mlx-community/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-4-bit) was converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-4-bit](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4) using mlx-lm version **0.18.2**.
+The Model [Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-Q4-mlx](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-Q4-mlx) was converted to MLX format from [Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4](https://huggingface.co/Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4) using mlx-lm version **0.19.2**.
 
 ## Use with mlx
 
@@ -119,7 +120,7 @@ pip install mlx-lm
 ```python
 from mlx_lm import load, generate
 
-model, tokenizer = load("mlx-community/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-4-bit")
+model, tokenizer = load("Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-Q4-mlx")
 
 prompt="hello"
 
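The README snippet in the diff above is cut off at `prompt="hello"` by the hunk boundary. For reference, mlx-lm model cards of this vintage typically continue by applying the chat template and calling `generate`; the following is a hedged reconstruction of that continuation, not text taken from this commit:

```python
from mlx_lm import load, generate

# Load the 4-bit MLX conversion published by this repository.
model, tokenizer = load("Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-Q4-mlx")

prompt = "hello"

# Wrap the raw prompt in the model's chat template when one is defined
# (Qwen2.5-based checkpoints ship a chat template).
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )

# Generate a completion and stream it to stdout.
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```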
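The conversion note in the new README records only the source repository and the mlx-lm version (0.19.2). As context, here is a minimal sketch of how a 4-bit conversion like this is usually produced with mlx-lm's Python API; the output path and quantization arguments below are assumptions for illustration, not values recorded in this commit:

```python
from mlx_lm import convert

# Quantize the source checkpoint to 4 bits and write the MLX weights
# to a local directory (path chosen here for illustration only).
convert(
    "Goekdeniz-Guelmez/Josiefied-Qwen2.5-14B-Instruct-abliterated-v4",
    mlx_path="Josiefied-Qwen2.5-14B-Instruct-abliterated-v4-Q4-mlx",
    quantize=True,
    q_bits=4,
)
```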
model.safetensors.index.json CHANGED
The diff for this file is too large to render. See raw diff