OwenArli committed on
Commit
f8256e1
1 Parent(s): 763af4d

Update README.md

Files changed (1): README.md (+2 −4)
README.md CHANGED
````diff
@@ -13,8 +13,6 @@ DPO fine tuning method using the following datasets:
 - https://huggingface.co/datasets/jondurbin/truthy-dpo-v0.1
 
 
-We are happy for anyone to try it out and give some feedback and we will have the model up on https://awanllm.com on our LLM API if it is popular.
-
 
 Instruct format:
 ```
@@ -32,6 +30,6 @@ Instruct format:
 
 Quants:
 
-FP16: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1
+FP16: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1
 
-GGUF: https://huggingface.co/AwanLLM/Awanllm-Llama-3-8B-Instruct-DPO-v0.1-GGUF
+GGUF: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF
````
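The commit only rewrites the quant links to point at the new Hugging Face repos. As a minimal sketch of how such a link maps to a direct download URL (the repo id is taken from the diff; the `model.gguf` filename and the helper function are hypothetical, for illustration only — check the repo's file listing for actual filenames):

```python
def hf_resolve_url(repo_id: str, filename: str, revision: str = "main") -> str:
    """Build the standard Hugging Face 'resolve' download URL for a file in a repo."""
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

# Repo id from the updated GGUF link in the diff; filename is an assumption.
url = hf_resolve_url("OwenArli/ArliAI-Llama-3-8B-Instruct-DPO-v0.1-GGUF", "model.gguf")
print(url)
```

This follows the Hub's `resolve/{revision}/{path}` URL convention, so the same helper works for the FP16 repo as well.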