chore: update README.md content
README.md
@@ -48,7 +48,7 @@ The Ghost 8B Beta model outperforms prominent models such as Llama 3.1 8B Instruct
 
 ### Updates
 
-
+- **16 Aug 2024**: The model has been updated to version 160824, expanding support from 9 languages to 16 languages. It has improved math, reasoning, and instruction following over the previous version.
 
 ### Thoughts
 
@@ -77,18 +77,19 @@ We believe that it is possible to optimize language models that are not too large
 
 We create many distributions to give you the access options that best suit your needs.
 
-| Version | Model card
-| ------- |
+| Version | Model card                                                               |
+| ------- | ------------------------------------------------------------------------ |
 | BF16 | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608) |
 | GGUF | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608-gguf) |
 | AWQ | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608-awq) |
+| MLX | [🤗 HuggingFace](https://huggingface.co/ghost-x/ghost-8b-beta-1608-mlx) |
 
 ### License
 
 The Ghost 8B Beta model is released under the [Ghost Open LLMs LICENSE](https://ghost-x.org/ghost-open-llms-license) and the [Llama 3 LICENSE](https://huggingface.co/meta-llama/Meta-Llama-3-8B/blob/main/LICENSE).
 
-
-
+- For individuals, the model is free to use for personal and research purposes.
+- For commercial use of Ghost 8B Beta, it's also free, but please contact us for confirmation. You can email us at "lamhieu.vk [at] gmail.com" with a brief introduction of your project. If possible, include your logo so we can feature it as a case study. We will confirm your permission to use the model. For commercial use as a service, no email confirmation is needed, but we'd appreciate a notification so we can keep track and potentially recommend your services to partners using the model.
 
 Additionally, it would be great if you could mention or credit the model when it benefits your work.
 
@@ -304,6 +305,41 @@ For direct use with `unsloth`, you can easily get started with the following steps.
 print(results)
 ```
 
+#### Use with MLX
+
+For direct use with `mlx`, you can easily get started with the following steps.
+
+- First, install `mlx-lm` via the `pip` command below.
+
+```bash
+pip install mlx-lm
+```
+
+- Now you can start using the model directly.
+```python
+from mlx_lm import load, generate
+
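+# Fetch the MLX-converted weights from the Hugging Face Hub and load model and tokenizer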
+model, tokenizer = load("ghost-x/ghost-8b-beta-1608-mlx")
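+# Chat turns in role/content format; the system message is left empty here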
+messages = [
+    {"role": "system", "content": ""},
+    {"role": "user", "content": "Why is the sky blue?"},
+]
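+# Render the turns with the model's chat template and append the assistant generation prompt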
+prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
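+# generate() returns the completion as a string; verbose=True also streams it to stdout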
+response = generate(model, tokenizer, prompt=prompt, verbose=True)
+```
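+
+If your installed `mlx-lm` version ships its command-line entry point, you can also run a quick one-off generation from the shell. This is a minimal sketch and the flags may differ between versions:
+
+```bash
+# Assumes the mlx_lm.generate module entry point; check `python -m mlx_lm.generate --help` first
+python -m mlx_lm.generate --model ghost-x/ghost-8b-beta-1608-mlx --prompt "Why is the sky blue?"
+```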
+
+
 ### Instructions
 
 Here are specific instructions and explanations for each use case.