Updates README since the llama.py script has moved in mlx-examples
#7 by lazarustda · opened
README.md
CHANGED
@@ -30,7 +30,7 @@ export HF_HUB_ENABLE_HF_TRANSFER=1
 huggingface-cli download --local-dir CodeLlama-7b-Python-mlx mlx-llama/CodeLlama-7b-Python-mlx
 
 # Run example
-python mlx-examples/llama/llama.py --prompt "def fibonacci(n):" CodeLlama-7b-Python-mlx/ CodeLlama-7b-Python-mlx/tokenizer.model --max-tokens 200
+python mlx-examples/llms/llama/llama.py --prompt "def fibonacci(n):" CodeLlama-7b-Python-mlx/ CodeLlama-7b-Python-mlx/tokenizer.model --max-tokens 200
 ```
 
 Please, refer to the [original model card](https://github.com/facebookresearch/codellama/blob/main/MODEL_CARD.md) for details on CodeLlama.
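
For context, the only change is the script path: the Llama example in mlx-examples now lives under `llms/llama/`. Below is a minimal end-to-end sketch of the README's workflow using the new path, assuming the mlx-examples repository is cloned into the current directory and the example's Python dependencies are already installed:

```sh
# Clone the examples repo so the script path below exists
git clone https://github.com/ml-explore/mlx-examples.git

# Faster downloads from the Hugging Face Hub (as in the README)
export HF_HUB_ENABLE_HF_TRANSFER=1

# Download the converted MLX weights
huggingface-cli download --local-dir CodeLlama-7b-Python-mlx mlx-llama/CodeLlama-7b-Python-mlx

# Run the example with the script's new location under llms/
python mlx-examples/llms/llama/llama.py \
  --prompt "def fibonacci(n):" \
  CodeLlama-7b-Python-mlx/ CodeLlama-7b-Python-mlx/tokenizer.model \
  --max-tokens 200
```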