---
license: mit
---

# bge-large-en-v1.5-GGUF for llama.cpp

This repository contains a converted version of the BAAI/bge-large-en-v1.5 text-embedding model, prepared for use with `llama.cpp` or the Python `llama-cpp-python` library.

**Original Model:** [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5)

**Conversion Details:**

* The conversion was performed with `llama.cpp`'s `convert-hf-to-gguf.py` script.
* The conversion packages the weights in the GGUF format, which `llama.cpp` can load directly.

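If you download the `.gguf` file manually, a quick sanity check on the conversion artifact is to inspect the GGUF magic number: every GGUF file begins with the four bytes `GGUF`. A minimal sketch (the file path in the comment is hypothetical):

```python
def looks_like_gguf(header: bytes) -> bool:
    """GGUF files begin with the 4-byte magic b'GGUF'."""
    return header[:4] == b"GGUF"

# Hypothetical usage with a downloaded file:
#   with open("bge-large-en-v1.5-f16.gguf", "rb") as f:
#       assert looks_like_gguf(f.read(4))
print(looks_like_gguf(b"GGUF\x03\x00\x00\x00"))  # → True
```
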
**Usage:**

This model can be loaded and used for text-embedding tasks with the `llama-cpp-python` library. For example:

```python
from llama_cpp import Llama

# Load the converted model directly from the Hugging Face Hub
# (requires the huggingface-hub package; embedding=True enables embeddings).
model = Llama.from_pretrained(
    repo_id="rbehzadan/bge-large-en-v1.5-ggml-f16",
    filename="*.gguf",
    embedding=True,
)

# Encode some text
text = "This is a sample sentence."
embedding = model.embed(text)
```
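
BGE embeddings are typically compared with cosine similarity. A minimal pure-Python sketch of that comparison (the toy vectors below stand in for real `model.embed(...)` outputs):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors illustrating the computation; in practice pass
# model.embed(text_a) and model.embed(text_b).
v1 = [1.0, 0.0, 1.0]
v2 = [1.0, 1.0, 0.0]
print(cosine_similarity(v1, v2))  # ≈ 0.5
```
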

**Important Notes:**

* The converted model may show slight performance differences from the original due to the conversion process.
* Ensure the `llama-cpp-python` library is installed before running the example above.
|
35 |
+
|
36 |
+
**License:**
|
37 |
+
|
38 |
+
The license for this model is inherited from the original BAAI/bge-large-en-v1.5 model (refer to the original model's repository for details).
|
39 |
+
|
40 |
+
**Contact:**
|
41 |
+
|
42 |
+
Feel free to create an issue in this repository for any questions or feedback.
|
43 |
+
|