Commit · 207aa3b
Parent(s): fbbeaf6
Update README
README.md CHANGED
@@ -29,9 +29,9 @@ Part of a collection of query expansion models available in different architectures
 - [Llama-3.2-3B](https://huggingface.co/s-emanuilov/query-expansion-Llama-3.2-3B)
 
 ### GGUF variants
-- [Qwen2.5-3B-GGUF](https://huggingface.co/
-- [Qwen2.5-7B-GGUF](https://huggingface.co/
-- [Llama-3.2-3B-GGUF](https://huggingface.co/
+- [Qwen2.5-3B-GGUF](https://huggingface.co/s-emanuilov/query-expansion-Qwen2.5-3B-GGUF)
+- [Qwen2.5-7B-GGUF](https://huggingface.co/s-emanuilov/query-expansion-Qwen2.5-7B-GGUF)
+- [Llama-3.2-3B-GGUF](https://huggingface.co/s-emanuilov/query-expansion-Llama-3.2-3B-GGUF)
 
 Each GGUF model is available in several quantization formats: F16, Q8_0, Q5_K_M, Q4_K_M, Q3_K_M
 
@@ -42,4 +42,12 @@ It could be useful for:
 - Advanced RAG systems
 - Search enhancement
 - Query preprocessing
-- Low-latency query expansion
+- Low-latency query expansion
+
+## Citation
+
+If you find my work helpful, feel free to give me a citation.
+
+```
+
+```
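The GGUF links and the quantization list in this diff are the pieces needed to run one of these models locally. As a minimal sketch of that workflow, assuming llama-cpp-python as the runtime and treating the GGUF filename and the prompt wording as placeholders (neither is specified in this commit), one quantization could be pulled from the Hub and queried like this:

```python
# Hedged sketch: download one quantization of the Qwen2.5-3B GGUF variant and
# run it locally with llama-cpp-python. The GGUF filename and the prompt text
# below are assumptions, not part of this commit; check the repo's file list
# and model card for the actual names and expected prompt format.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quantization (Q4_K_M here) from the Hub.
model_path = hf_hub_download(
    repo_id="s-emanuilov/query-expansion-Qwen2.5-3B-GGUF",
    filename="query-expansion-Qwen2.5-3B.Q4_K_M.gguf",  # assumed filename
)

# Load the model; a small context window keeps latency low for short queries.
llm = Llama(model_path=model_path, n_ctx=512, verbose=False)

# Ask for expansions of a search query (prompt wording is illustrative only).
out = llm(
    "Expand the search query into related queries: best hiking trails",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

Lower-bit quantizations such as Q3_K_M and Q4_K_M give smaller downloads and faster CPU inference at some cost in output quality, while F16 is the unquantized reference; that trade-off is what makes the GGUF builds suited to the low-latency use case listed in the README.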