avemio-digital committed · verified
Commit e12835c · 1 Parent(s): 3ffd2d5

Update README.md

Files changed (1): README.md (+9 −9)
README.md CHANGED
@@ -6,7 +6,7 @@ tags:
 - feature-extraction
 - llama-cpp
 - gguf-my-repo
-base_model: avemio/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI
+base_model: avemio/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI
 base_model_relation: merge
 widget:
 - source_sentence: 'search_query: i love autotrain'
@@ -16,16 +16,16 @@ widget:
 - 'search_query: i love autotrain'
 pipeline_tag: sentence-similarity
 datasets:
-- avemio/GRAG-EMBEDDING-TRIPLES-HESSIAN-AI
+- avemio/German-RAG-EMBEDDING-TRIPLES-HESSIAN-AI
 license: mit
 language:
 - de
 - en
 ---
 
-# avemio-digital/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF
-This model was converted to GGUF format from [`avemio/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI`](https://huggingface.co/avemio/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
-Refer to the [original model card](https://huggingface.co/avemio/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI) for more details on the model.
+# avemio-digital/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF
+This model was converted to GGUF format from [`avemio/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI`](https://huggingface.co/avemio/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
+Refer to the [original model card](https://huggingface.co/avemio/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI) for more details on the model.
 
 ## Use with llama.cpp
 Install llama.cpp through brew (works on Mac and Linux)
@@ -38,12 +38,12 @@ Invoke the llama.cpp server or the CLI.
 
 ### CLI:
 ```bash
-llama-cli --hf-repo avemio-digital/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file grag-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
+llama-cli --hf-repo avemio-digital/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 
 ### Server:
 ```bash
-llama-server --hf-repo avemio-digital/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file grag-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -c 2048
+llama-server --hf-repo avemio-digital/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -c 2048
 ```
 
 Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
@@ -60,9 +60,9 @@ cd llama.cpp && LLAMA_CURL=1 make
 
 Step 3: Run inference through the main binary.
 ```
-./llama-cli --hf-repo avemio-digital/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file grag-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
+./llama-cli --hf-repo avemio-digital/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -p "The meaning to life and the universe is"
 ```
 or
 ```
-./llama-server --hf-repo avemio-digital/GRAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file grag-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -c 2048
+./llama-server --hf-repo avemio-digital/German-RAG-BGE-M3-MERGED-x-SNOWFLAKE-ARCTIC-HESSIAN-AI-Q8_0-GGUF --hf-file German-RAG-bge-m3-merged-x-snowflake-arctic-hessian-ai-q8_0.gguf -c 2048
 ```
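The README's sample invocations use a text-generation prompt, but the card's `pipeline_tag: sentence-similarity` marks this as an embedding model, so the usual downstream step is comparing embedding vectors by cosine similarity. A minimal sketch of that comparison, assuming you have already obtained two embedding vectors (e.g. from llama-server run with embeddings enabled; the toy vectors below are placeholders, not real model output):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors: the dot product
    # divided by the product of the vectors' Euclidean norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 2-d vectors standing in for real embeddings of two sentences.
print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # → 1.0 (same direction)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # → 0.0 (orthogonal)
```

Higher values mean more similar sentences; scores near 0 mean unrelated ones.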