Update README.md
README.md

tags:
- sentence-transformers
library_name: transformers
---

## MiniCPM-Reranker-Light

**MiniCPM-Reranker-Light** 是面壁智能与清华大学自然语言处理实验室(THUNLP)、东北大学信息检索小组(NEUIR)共同开发的中英双语言文本重排序模型,有如下特点:

- 出色的中文、英文重排序能力。
- 出色的中英跨语言重排序能力。
- 支持长文本(最长 8192 token)。

MiniCPM-Reranker-Light 基于 [MiniCPM-1B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16) 训练,结构上采用双向注意力,并采取多阶段训练方式,共使用包括开源数据、机造数据、闭源数据在内的约 500 万条训练数据。

欢迎关注 UltraRAG 系列:

- 检索模型:[MiniCPM-Embedding-Light](https://huggingface.co/openbmb/MiniCPM-Embedding-Light)
- 重排模型:[MiniCPM-Reranker-Light](https://huggingface.co/openbmb/MiniCPM-Reranker-Light)
- 领域自适应 RAG 框架:[UltraRAG](https://github.com/openbmb/UltraRAG)

**MiniCPM-Reranker-Light** is a bilingual & cross-lingual text re-ranking model developed by ModelBest Inc., THUNLP, and NEUIR, featuring:

- Exceptional Chinese and English re-ranking capabilities.
- Outstanding cross-lingual re-ranking capabilities between Chinese and English.
- Long-text support (up to 8192 tokens).

MiniCPM-Reranker-Light is trained from [MiniCPM-1B-sft-bf16](https://huggingface.co/openbmb/MiniCPM-1B-sft-bf16) and adopts bidirectional attention in its architecture. It was trained in multiple stages on approximately 6 million examples spanning open-source, synthetic, and proprietary data.

We also invite you to explore the UltraRAG series:

- Retrieval Model: [MiniCPM-Embedding-Light](https://huggingface.co/openbmb/MiniCPM-Embedding-Light)
- Re-ranking Model: [MiniCPM-Reranker-Light](https://huggingface.co/openbmb/MiniCPM-Reranker-Light)
- Domain Adaptive RAG Framework: [UltraRAG](https://github.com/openbmb/UltraRAG)

本模型支持指令,输入格式如下:

MiniCPM-Reranker-Light supports instructions in the following format:

```
<s>Instruction: {{ instruction }} Query: {{ query }}</s>{{ document }}
```

也可以不提供指令,即采取如下格式:

MiniCPM-Reranker-Light also works in instruction-free mode, using the following format:

```
<s>Query: {{ query }}</s>{{ document }}
```
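
For illustration, a minimal sketch of filling these two templates in Python; the `format_input` helper and the sample strings are hypothetical, not part of the model card:

```python
# Hypothetical helper: fills the two re-ranker input templates shown above.
def format_input(query: str, document: str, instruction: str = "") -> str:
    if instruction:
        return f"<s>Instruction: {instruction} Query: {query}</s>{document}"
    return f"<s>Query: {query}</s>{document}"

print(format_input("Which city is the capital of China?", "Beijing is the capital of China."))
# <s>Query: Which city is the capital of China?</s>Beijing is the capital of China.
```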

The usage examples assume `transformers==4.37.2`.

### Huggingface Transformers

```python
from transformers import AutoModelForSequenceClassification
import torch

model_name = "OpenBMB/MiniCPM-Reranker-Light"
# Load in fp16; swap in the commented line below to enable flash_attention_2.
model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True, torch_dtype=torch.float16).to("cuda")
# model = AutoModelForSequenceClassification.from_pretrained(model_name, trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.float16).to("cuda")
model.eval()
```
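
The excerpt stops after loading. Continuing from it, a hedged scoring sketch under assumptions that are not from the model card: the pair is pre-formatted as shown earlier, `add_special_tokens=False` avoids a duplicate BOS, and the classification head emits a single relevance logit:

```python
# Hedged sketch: score one pre-formatted query-document pair.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
pair = "<s>Query: Which city is the capital of China?</s>Beijing is the capital of China."
inputs = tokenizer(pair, add_special_tokens=False, return_tensors="pt", truncation=True, max_length=8192).to("cuda")
with torch.no_grad():
    score = model(**inputs).logits[0].item()
print(score)  # higher = more relevant, assuming num_labels == 1
```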

### Sentence Transformers

```python
from sentence_transformers import CrossEncoder
from transformers import LlamaTokenizer
import torch

model_name = "OpenBMB/MiniCPM-Reranker-Light"
model = CrossEncoder(model_name, max_length=1024, trust_remote_code=True, automodel_args={"torch_dtype": torch.float16})
# You can also use flash_attention_2:
# model = CrossEncoder(model_name, max_length=1024, trust_remote_code=True, automodel_args={"attn_implementation": "flash_attention_2", "torch_dtype": torch.float16})
```
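
Continuing from the snippet above, a possible way to score pairs, shown as a sketch rather than the card's reference usage; the sample strings are illustrative, and whether the `Query: ` prefix must be added manually here is an assumption to verify:

```python
# Illustrative: rank two passages against one query with CrossEncoder.predict.
query = "Query: Which city is the capital of China?"  # prefix per the input format above
passages = ["Beijing is the capital of China.", "Shanghai is a major Chinese city."]
scores = model.predict([(query, p) for p in passages])
print(scores)  # one relevance score per (query, passage) pair
```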

### Infinity

```python
# Imports and the sample `query`/`docs` are supplied here to keep the sketch
# self-contained; the strings are illustrative, not the card's original data.
import asyncio
from infinity_emb import AsyncEmbeddingEngine, AsyncEngineArray, EngineArgs

query = "Which city is the capital of China?"
docs = ["Beijing is the capital of China.", "Shanghai is a major Chinese city."]

INSTRUCTION = "Query:"
query = f"{INSTRUCTION} {query}"

array = AsyncEngineArray.from_args(
    [EngineArgs(model_name_or_path="OpenBMB/MiniCPM-Reranker-Light", engine="torch", dtype="float16", bettertransformer=False, trust_remote_code=True, model_warmup=False)]
)

async def rerank(engine: AsyncEmbeddingEngine):
    async with engine:
        ranking, usage = await engine.rerank(query=query, docs=docs)
        print(ranking)

asyncio.run(rerank(array[0]))  # [(RerankReturnType(relevance_score=0.017917344,...
```

### FlagEmbedding

```python
from FlagEmbedding import FlagReranker

model_name = "OpenBMB/MiniCPM-Reranker-Light"
model = FlagReranker(model_name, use_fp16=True, query_instruction_for_rerank="Query: ", trust_remote_code=True)
# You can hack the __init__() method of the FlagEmbedding BaseReranker class to use flash_attention_2 for faster inference, e.g.:
# self.model = AutoModelForSequenceClassification.from_pretrained(
#     model_name, trust_remote_code=True, attn_implementation="flash_attention_2", torch_dtype=torch.float16
# )
```
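
Continuing from the snippet above, a hedged usage sketch with an illustrative sentence pair; that `query_instruction_for_rerank` prepends the `Query: ` prefix automatically is an assumption worth verifying:

```python
# Illustrative: score one (query, passage) pair with FlagReranker.
score = model.compute_score(["Which city is the capital of China?", "Beijing is the capital of China."])
print(score)
```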

We re-rank the top-100 documents retrieved by `bge-large-zh-v1.5` in C-MTEB/Retrieval and fro…

| bge-reranker-v2-gemma | 71.74 | 60.71 |
| bge-reranker-v2.5-gemma2 | - | 63.67 |
| MiniCPM-Reranker | 76.79 | 61.32 |
| MiniCPM-Reranker-Light | 76.19 | 61.34 |

### 中英跨语言重排序结果 CN-EN Cross-lingual Re-ranking Results

We re-rank the top-100 documents retrieved by `bge-m3` (Dense).

| bge-reranker-v2-m3 | 69.75 | 40.98 | 49.67 |
| gte-multilingual-reranker-base | 68.51 | 38.74 | 45.30 |
| MiniCPM-Reranker | 71.73 | 43.65 | 50.59 |
| MiniCPM-Reranker-Light | 71.34 | 46.04 | 51.86 |

## 许可证 License

- 本仓库中代码依照 [Apache-2.0 协议](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE)开源。
- MiniCPM-Reranker-Light 模型权重的使用则需要遵循 [MiniCPM 模型协议](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md)。
- MiniCPM-Reranker-Light 模型权重对学术研究完全开放。如需将模型用于商业用途,请填写[此问卷](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g)。

- The code in this repo is released under the [Apache-2.0](https://github.com/OpenBMB/MiniCPM/blob/main/LICENSE) License.
- The usage of the MiniCPM-Reranker-Light model weights must strictly follow the [MiniCPM Model License](https://github.com/OpenBMB/MiniCPM/blob/main/MiniCPM%20Model%20License.md).
- The models and weights of MiniCPM-Reranker-Light are completely free for academic research. After filling out a [questionnaire](https://modelbest.feishu.cn/share/base/form/shrcnpV5ZT9EJ6xYjh3Kx0J6v8g) for registration, MiniCPM-Reranker-Light weights are also available for free commercial use.