Solon Embeddings — large 0.1
State-of-the-art (SOTA) open-source French embedding model.
Instructions:
Prepend "query : " to each query at retrieval time to improve retrieval performance; see the usage sketch below.
No prefix is needed for passages.
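A minimal retrieval sketch of this convention, assuming the model loads through the sentence-transformers library (the query and passages below are illustrative, not from the benchmarks):

```python
from sentence_transformers import SentenceTransformer, util

# Load the model from the Hugging Face Hub.
model = SentenceTransformer("OrdalieTech/Solon-embeddings-large-0.1")

# Queries get the "query : " prefix; passages are encoded as-is.
query = 'query : Quelle est la capitale de la France ?'
passages = [
    "Paris est la capitale de la France.",
    "Le mont Blanc est le plus haut sommet des Alpes.",
]

query_emb = model.encode(query)
passage_embs = model.encode(passages)

# Rank passages by cosine similarity to the query.
scores = util.cos_sim(query_emb, passage_embs)
print(scores)
```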
| Model | Mean Score |
|---|---|
| OrdalieTech/Solon-embeddings-large-0.1 | 0.7490 |
| cohere/embed-multilingual-v3 | 0.7402 |
| OrdalieTech/Solon-embeddings-base-0.1 | 0.7306 |
| openai/ada-002 | 0.7290 |
| cohere/embed-multilingual-light-v3 | 0.6945 |
| antoinelouis/biencoder-camembert-base-mmarcoFR | 0.6826 |
| dangvantuan/sentence-camembert-large | 0.6756 |
| voyage/voyage-01 | 0.6753 |
| intfloat/multilingual-e5-large | 0.6660 |
| intfloat/multilingual-e5-base | 0.6597 |
| Sbert/paraphrase-multilingual-mpnet-base-v2 | 0.5975 |
| dangvantuan/sentence-camembert-base | 0.5456 |
| EuropeanParliament/eubert_embedding_v1 | 0.5063 |
These results were obtained on 9 French benchmarks covering a variety of text-similarity tasks (classification, reranking, STS):
- AmazonReviewsClassification (MTEB)
- MassiveIntentClassification (MTEB)
- MassiveScenarioClassification (MTEB)
- MTOPDomainClassification (MTEB)
- MTOPIntentClassification (MTEB)
- STS22 (MTEB)
- MiraclFRRerank (Miracl)
- OrdalieFRSTS (Ordalie)
- OrdalieFRReranking (Ordalie)
We created OrdalieFRSTS and OrdalieFRReranking to strengthen the benchmarking of French STS and reranking.
(Evaluation script available here: github.com/OrdalieTech/mteb)
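A sketch of how such an evaluation could be run, assuming the fork at github.com/OrdalieTech/mteb keeps the upstream mteb interface (the task selection below is illustrative):

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

# Load the embedding model to be evaluated.
model = SentenceTransformer("OrdalieTech/Solon-embeddings-large-0.1")

# Evaluate on two of the French benchmarks listed above, restricting
# multilingual tasks to their French splits (task_langs is an upstream
# mteb parameter; availability in the fork is assumed).
evaluation = MTEB(
    tasks=["AmazonReviewsClassification", "STS22"],
    task_langs=["fr"],
)
evaluation.run(model, output_folder="results/solon-large-0.1")
```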
Evaluation results (self-reported)

| Metric | Dataset | Score |
|---|---|---|
| Recall@500 | mMARCO-fr (validation) | 92.700 |
| Recall@100 | mMARCO-fr (validation) | 82.700 |
| Recall@10 | mMARCO-fr (validation) | 55.500 |
| MAP@10 | mMARCO-fr (validation) | 29.400 |
| nDCG@10 | mMARCO-fr (validation) | 35.800 |
| MRR@10 | mMARCO-fr (validation) | 29.900 |
| v_measure | MTEB AlloProfClusteringP2P (test) | 64.169 |
| v_measure | MTEB AlloProfClusteringS2S (test) | 38.171 |
| map | MTEB AlloprofReranking (test) | 64.877 |
| mrr | MTEB AlloprofReranking (test) | 66.391 |