---
benchmark: mteb
type: evaluation
submission_name: MTEB
---

> [!NOTE]  
> Previously, it was possible to submit model results to MTEB by adding them to the metadata of the model card on Hugging Face. This is no longer supported, as we want to ensure that results can be matched to the model implementation. If you want to add your model, please follow the [guide](https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md) on how to do so.

This repository contains the results of the embedding benchmark, evaluated using the `mteb` package.
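
Results like the ones stored here are produced by running an evaluation with `mteb`. The snippet below is a minimal sketch based on the `mteb` documentation; the model name, task selection, and `results/` output folder are placeholders you would replace with your own.

```python
import mteb
from sentence_transformers import SentenceTransformer

# Placeholder model; any SentenceTransformer-compatible embedding model works.
model_name = "sentence-transformers/all-MiniLM-L6-v2"
model = SentenceTransformer(model_name)

# Select one or more MTEB tasks to evaluate on.
tasks = mteb.get_tasks(tasks=["Banking77Classification"])
evaluation = mteb.MTEB(tasks=tasks)

# Run the evaluation and write per-task result files to the output folder.
results = evaluation.run(model, output_folder=f"results/{model_name}")
```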


| Reference            | Description                                                                               |
| -------------------- | ----------------------------------------------------------------------------------------- |
| 🦾 **[Leaderboard]** | An up-to-date leaderboard of embedding models                                             |
| 📚 **[mteb]**        | Guides and instructions on how to use `mteb`, including running, submitting scores, etc.  |
| 🙋 **[Questions]**   | Questions about the results                                                               |
| 🙋 **[Issues]**      | Issues or bugs you have found                                                             |


[Leaderboard]: https://huggingface.co./spaces/mteb/leaderboard
[mteb]: https://github.com/embeddings-benchmark/mteb
[Questions]: https://github.com/embeddings-benchmark/mteb/discussions
[Issues]: https://github.com/embeddings-benchmark/mteb/issues