---
benchmark: mteb
type: evaluation
submission_name: MTEB
---

Previously, it was possible to submit model results to MTEB by adding them to the metadata of the model card on Hugging Face. This is no longer supported, as we want to ensure that results can be matched with the model implementation. If you want to add your model, please follow the guide on how to do so.

This repository contains the results of the embedding benchmark, evaluated using the mteb package.
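
As a rough illustration of how result files like these are produced, the sketch below runs an evaluation with the mteb package and writes the scores to a local results folder. The model name and task are placeholders chosen for the example, not a required configuration.

```python
# Minimal sketch of producing result files with the mteb package.
# The model and task names below are illustrative placeholders.
import mteb

model = mteb.get_model("sentence-transformers/all-MiniLM-L6-v2")
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

evaluation = mteb.MTEB(tasks=tasks)
# Scores are typically written as JSON files under the given output folder,
# organized by model name and revision.
results = evaluation.run(model, output_folder="results")
```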

## Reference

| | |
| --- | --- |
| 🦾 Leaderboard | An up-to-date leaderboard of embedding models |
| 📚 mteb | Guides and instructions on how to use mteb, including running evaluations, submitting scores, etc. |
| 🙋 Questions | Questions about the results |
| 🙋 Issues | Issues or bugs you have found |