---
benchmark: mteb
type: evaluation
submission_name: MTEB
---

> [!NOTE]
> Previously, it was possible to submit model results to MTEB by adding them to the metadata of the model card on Hugging Face. However, this is no longer possible, as we want to ensure that results can be matched with the model implementation. If you want to add your model, please follow the [guide](https://github.com/embeddings-benchmark/mteb/blob/main/docs/adding_a_model.md) on how to do so.

This repository contains the results of the embedding benchmark evaluated using the package `mteb`.

| Reference            |                                                                                              |
| -------------------- | -------------------------------------------------------------------------------------------- |
| 🦾 **[Leaderboard]**  | An up-to-date leaderboard of embedding models                                                 |
| 📚 **[mteb]**         | Guides and instructions on how to use `mteb`, including running evaluations and submitting scores |
| 🙋 **[Questions]**    | Questions about the results                                                                   |
| 🙋 **[Issues]**       | Issues or bugs you have found                                                                 |

[Leaderboard]: https://huggingface.co/spaces/mteb/leaderboard
[mteb]: https://github.com/embeddings-benchmark/mteb
[Questions]: https://github.com/embeddings-benchmark/mteb/discussions
[Issues]: https://github.com/embeddings-benchmark/mteb/issues
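
For orientation, below is a minimal sketch of how results like the ones in this repository are typically produced with `mteb`. The model and task names are chosen purely as examples, and exact API details may vary between `mteb` versions; see the [mteb] documentation for the authoritative usage.

```python
# Minimal sketch: evaluate an embedding model on an MTEB task.
# Assumptions: the `mteb` and `sentence-transformers` packages are installed;
# the model and task names below are illustrative examples only.
import mteb

model_name = "sentence-transformers/all-MiniLM-L6-v2"  # example model

# Load the model; falls back to a SentenceTransformer wrapper if the
# model has no dedicated implementation in mteb.
model = mteb.get_model(model_name)

# Select one or more benchmark tasks by name.
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

# Run the evaluation; scores are written as JSON files under output_folder,
# in the same layout as the result files stored in this repository.
evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder=f"results/{model_name}")
```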