---
license: wtfpl
---
  1. Go to the llama.cpp releases page and download one of the prebuilt binary archives for your system.

  2. If you're going to use CUDA, check which version your card supports (12.2 for any RTX card) and also download the matching CUDA runtime (cudart) archive.

  3. Unpack both archives into one folder, rename it to "LlamaCPP", and put this folder in the same folder where the .py/.exe file is (a small pre-launch check is sketched after this list).

  4. Launch the main.py/main.exe file.
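
If you want to sanity-check the result of steps 1-3 before launching, a minimal sketch like the one below can help. It only assumes the folder layout described above; the exact binary names it looks for (`llama-server` / `server`) are assumptions based on typical llama.cpp release contents, not something defined by this project.

```python
# Minimal pre-launch check (illustrative sketch, not part of the app):
# verifies that the "LlamaCPP" folder sits next to this script and that
# it contains a llama.cpp server binary from the unpacked release archive.
from pathlib import Path
import sys

llamacpp_dir = Path(__file__).resolve().parent / "LlamaCPP"

if not llamacpp_dir.is_dir():
    sys.exit('Folder "LlamaCPP" not found next to this script (see step 3).')

# Binary names vary between llama.cpp releases; these are assumptions.
candidates = ["llama-server.exe", "llama-server", "server.exe", "server"]
server_bin = next(
    (llamacpp_dir / name for name in candidates if (llamacpp_dir / name).exists()),
    None,
)

if server_bin is None:
    sys.exit('No llama.cpp server binary found in "LlamaCPP" (see steps 1-2).')

print(f"Found server binary: {server_bin}")
```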