Update README.md
README.md CHANGED
@@ -23,6 +23,22 @@ Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```
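
If the brew route works, the llama.cpp binaries should already be on your PATH. As a quick sanity check (a minimal sketch, assuming the current package layout where the CLI binary is named `llama-cli`):

```bash
# Print llama.cpp's version/build info to confirm the install is on your PATH.
llama-cli --version
```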

Or compile it to take advantage of Nvidia CUDA hardware:

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Check the docs for other hardware builds, or to confirm that none of this has changed.

cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release  # optionally add -j N (use a number no larger than your core count)

# If your gcc version is newer than 12 and the build errors out, use conda to install
# gcc-12 and activate it, then run the two cmake commands above again. Finally, run
# `conda deactivate` and re-run the build command once more to link the build outside of conda.
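#
# A sketch of that conda route (assumption: the conda-forge gcc/gxx 12 packages suit your setup):
#   conda create -n gcc12 -c conda-forge gcc=12 gxx=12
#   conda activate gcc12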
```
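
To confirm the CUDA build actually reaches the GPU, you can run the freshly built binary with layer offloading enabled. A minimal sketch, where `model.gguf` is a placeholder for any local GGUF file and `-ngl 99` asks llama.cpp to offload all layers; the startup log should list a CUDA device:

```bash
# Full GPU offload; look for a CUDA device line in the startup log.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello" -n 16
```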

Invoke the llama.cpp server or the CLI.

### CLI: