---
license: mit
---
|
# bge-large-en-v1.5-GGUF for llama.cpp

This repository contains a GGUF conversion of the BAAI/bge-large-en-v1.5 text-embedding model, prepared for use with `llama.cpp` or its Python bindings, `llama-cpp-python`.
|
|
|
**Original Model:** [BAAI/bge-large-en-v1.5](https://huggingface.co./BAAI/bge-large-en-v1.5)
|
|
|
**Conversion Details:**

* The conversion was performed with llama.cpp's `convert-hf-to-gguf.py` script.
* The resulting GGUF file is optimized for inference with `llama.cpp`.
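A conversion of this kind can be reproduced with a command along the following lines. This is a sketch only: the input and output paths are placeholders, and exact flag names can vary between llama.cpp versions.

```shell
# Run from inside a llama.cpp checkout; paths below are placeholders
python convert-hf-to-gguf.py /path/to/bge-large-en-v1.5 \
    --outfile bge-large-en-v1.5-f16.gguf \
    --outtype f16
```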
|
|
|
**Usage:**

This model can be loaded for text embedding tasks with the `llama-cpp-python` library (`pip install llama-cpp-python`). Here's an example:
|
|
|
```python
from llama_cpp import Llama

# Load the converted GGUF model with embedding mode enabled
model = Llama(
    model_path="./bge-large-en-v1.5-f16.gguf",  # path to the downloaded GGUF file
    embedding=True,
)

# Encode some text
text = "This is a sample sentence."
embedding = model.embed(text)
```
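Embeddings produced this way are typically compared with cosine similarity. Below is a minimal pure-Python sketch; the sample vectors are stand-ins for real `model.embed(...)` output.

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the L2 norms
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Stand-in vectors; in practice these come from model.embed(...)
query = [0.2, 0.1, 0.4]
doc = [0.2, 0.1, 0.4]
print(cosine_similarity(query, doc))  # identical vectors give 1.0 (up to floating point)
```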
|
|
|
**Important Notes:**

* This converted model may show slight performance variations compared to the original model due to the conversion process.
* Ensure the `llama-cpp-python` library is installed before running the example above.
|
|
|
**License:**

The license for this model is inherited from the original BAAI/bge-large-en-v1.5 model (refer to the original model's repository for details).
|
|
|
**Contact:**

Feel free to open an issue in this repository with any questions or feedback.