nazimali committed
Commit f5f3030 • 1 Parent(s): deec710

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -16,7 +16,7 @@ tags:
 - trl
 ---
 
-Experimenting with pre-training Arabic language + finetuning on instructions using the quantized model `mistralai/Mistral-7B-v0.3` from `unsloth`. First time trying pre-training, expect issues and low quality outputs. The repo contains the merged, quantized model and GGUF format.
+Experimenting with pre-training Arabic language + finetuning on instructions using the quantized model `mistralai/Mistral-7B-v0.3` from `unsloth`. First time trying pre-training, expect issues and low quality outputs. The repo contains the merged, quantized model and a GGUF format.
 
 ### Example usage
 
@@ -35,7 +35,7 @@ inference_prompt = """فيما يلي تعليمات تصف مهمة. اكتب
 
 llm = Llama.from_pretrained(
     repo_id="nazimali/mistral-7b-v0.3-instruct-arabic",
-    filename="gguf/Q4_K_M.gguf",
+    filename="Q8_0.gguf",
 )
 
 llm.create_chat_completion(
@@ -53,7 +53,7 @@ llm.create_chat_completion(
 ```shell
 ./llama-cli \
     --hf-repo "nazimali/mistral-7b-v0.3-instruct-arabic" \
-    --hf-file gguf/Q4_K_M.gguf \
+    --hf-file Q8_0.gguf \
     -p "السلام عليكم، هيا نموء" \
     --conversation
 ```
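The substantive change in this diff swaps the referenced GGUF file from a Q4_K_M quant to a Q8_0 quant, which roughly doubles the download size in exchange for less quantization loss. A back-of-envelope sketch of the size difference, assuming ~7.25B parameters for Mistral-7B-v0.3 and approximate llama.cpp bit-widths (Q8_0 is 8.5 bits/weight by construction; ~4.8 bits/weight for Q4_K_M is an approximation, and real file sizes also include metadata):

```python
def approx_gguf_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough GGUF file size in GB: parameters x bits/weight, ignoring metadata."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# Assumed figures: ~7.25B params; ~4.8 bpw for Q4_K_M, 8.5 bpw for Q8_0.
q4 = approx_gguf_size_gb(7.25, 4.8)
q8 = approx_gguf_size_gb(7.25, 8.5)
print(f"Q4_K_M ~{q4:.1f} GB, Q8_0 ~{q8:.1f} GB")
```

This is only an estimate; the authoritative numbers are the actual file sizes listed on the repo's Files tab.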