Model Card for st-polish-kartonberta-base-alpha-v1
This sentence transformer model maps text into a 768-dimensional vector space, producing embeddings suited to sentence and document similarity tasks.
The model is released as an alpha version. Numerous potential enhancements could boost its performance, such as tuning the training hyperparameters or extending the training duration (currently limited to a single epoch); the main constraint so far has been limited GPU resources.
Model Description
- Developed by: Bartłomiej Orlik, https://www.linkedin.com/in/bartłomiej-orlik/
- Model type: RoBERTa Sentence Transformer
- Language: Polish
- License: LGPL-3.0
- Trained from model: sdadas/polish-roberta-base-v2: https://huggingface.co./sdadas/polish-roberta-base-v2
How to Get Started with the Model
Use the code below to get started with the model.
Using Sentence-Transformers
You can use the model with sentence-transformers:
pip install -U sentence-transformers
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('OrlikB/st-polish-kartonberta-base-alpha-v1')
text_1 = 'Jestem wielkim fanem opakowań tekturowych'
text_2 = 'Bardzo podobają mi się kartony'
embeddings_1 = model.encode(text_1, normalize_embeddings=True)
embeddings_2 = model.encode(text_2, normalize_embeddings=True)
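# With normalize_embeddings=True, the dot product below equals cosine similarity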
similarity = embeddings_1 @ embeddings_2.T
print(similarity)
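If you want to compare one query against several documents at once, a minimal sketch along the lines below works with the model loaded above (the Polish sentences and variable names here are illustrative, not from the original examples):

from sentence_transformers import util

# Illustrative example: score one query against several candidate documents
query = 'Gdzie mogę kupić opakowania kartonowe?'
documents = [
    'Sklep internetowy z kartonami i opakowaniami tekturowymi',
    'Prognoza pogody na najbliższy weekend',
    'Hurtownia pudełek i tektury falistej',
]

query_emb = model.encode(query, normalize_embeddings=True)
doc_embs = model.encode(documents, normalize_embeddings=True)

# Cosine similarity between the query and every document: tensor of shape (1, 3)
scores = util.cos_sim(query_emb, doc_embs)
print(scores)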
Using HuggingFace Transformers
from transformers import AutoTokenizer, AutoModel
import torch
import numpy as np
def encode_text(text):
    # Tokenize and run the model, then take the first token's embedding (CLS-style pooling)
    encoded_input = tokenizer(text, padding=True, truncation=True, return_tensors='pt', max_length=512)
    with torch.no_grad():
        model_output = model(**encoded_input)
    sentence_embeddings = model_output[0][:, 0]
    # L2-normalize so that the dot product corresponds to cosine similarity
    sentence_embeddings = torch.nn.functional.normalize(sentence_embeddings, p=2, dim=1)
    return sentence_embeddings.squeeze().numpy()
def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
tokenizer = AutoTokenizer.from_pretrained('OrlikB/st-polish-kartonberta-base-alpha-v1')
model = AutoModel.from_pretrained('OrlikB/st-polish-kartonberta-base-alpha-v1')
model.eval()
text_1 = 'Jestem wielkim fanem opakowań tekturowych'
text_2 = 'Bardzo podobają mi się kartony'
embeddings_1 = encode_text(text_1)
embeddings_2 = encode_text(text_2)
print(cosine_similarity(embeddings_1, embeddings_2))
Note: The encode_text function above is meant for demonstration purposes. For the best experience, it is recommended to process texts in batches, as sketched below.
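For reference, a batched variant could look roughly like the sketch below (the encode_batch helper and its batch_size default are illustrative assumptions, reusing the tokenizer, model, and cosine_similarity defined above):

def encode_batch(texts, batch_size=32):
    # Hypothetical helper (not from the original card): encode a list of texts in mini-batches
    all_embeddings = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        encoded_input = tokenizer(batch, padding=True, truncation=True, return_tensors='pt', max_length=512)
        with torch.no_grad():
            model_output = model(**encoded_input)
        # Same pooling as encode_text: first-token embedding, then L2 normalization
        embeddings = torch.nn.functional.normalize(model_output[0][:, 0], p=2, dim=1)
        all_embeddings.append(embeddings.cpu().numpy())
    return np.concatenate(all_embeddings, axis=0)

batch_embeddings = encode_batch([text_1, text_2])
print(cosine_similarity(batch_embeddings[0], batch_embeddings[1]))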
Evaluation
MTEB for Polish Language
Rank | Model | Model Size (GB) | Embedding Dimensions | Sequence Length | Average (26 datasets) | Classification Average (7 datasets) | Clustering Average (1 dataset) | Pair Classification Average (4 datasets) | Retrieval Average (11 datasets) | STS Average (3 datasets) |
---|---|---|---|---|---|---|---|---|---|---|
1 | multilingual-e5-large | 2.24 | 1024 | 514 | 58.25 | 60.51 | 24.06 | 84.58 | 47.82 | 67.52 |
2 | st-polish-kartonberta-base-alpha-v1 | 0.5 | 768 | 514 | 56.92 | 60.44 | 32.85 | 87.92 | 42.19 | 69.47 |
3 | multilingual-e5-base | 1.11 | 768 | 514 | 54.18 | 57.01 | 18.62 | 82.08 | 42.5 | 65.07 |
4 | multilingual-e5-small | 0.47 | 384 | 512 | 53.15 | 54.35 | 19.64 | 81.67 | 41.52 | 66.08 |
5 | st-polish-paraphrase-from-mpnet | 0.5 | 768 | 514 | 53.06 | 57.49 | 25.09 | 87.04 | 36.53 | 67.39 |
6 | st-polish-paraphrase-from-distilroberta | 0.5 | 768 | 514 | 52.65 | 58.55 | 31.11 | 87 | 33.96 | 68.78 |
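The scores above come from the Polish portion of MTEB. To reproduce a single score, a sketch like the one below with the mteb package should be close (the task selection is only an illustrative assumption; consult the MTEB documentation for the full Polish task list and the current API):

from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('OrlikB/st-polish-kartonberta-base-alpha-v1')

# Illustrative task choice: the Polish clustering task reported in the table above
evaluation = MTEB(tasks=['8TagsClustering'])
evaluation.run(model, output_folder='results/st-polish-kartonberta-base-alpha-v1')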
More Information
I developed this model as a personal scientific initiative.
I plan to start development of a new ST model. However, due to limited computational resources, I have suspended further work on creating a larger or enhanced version of the current model.
Evaluation results
- v_measure on MTEB 8TagsClustering (test set), self-reported: 32.852
- accuracy on MTEB AllegroReviews (test set), self-reported: 40.189
- f1 on MTEB AllegroReviews (test set), self-reported: 34.711
- map_at_1 on MTEB ArguAna-PL (test set), self-reported: 30.939
- map_at_10 on MTEB ArguAna-PL (test set), self-reported: 47.468
- map_at_100 on MTEB ArguAna-PL (test set), self-reported: 48.303
- map_at_1000 on MTEB ArguAna-PL (test set), self-reported: 48.308
- map_at_3 on MTEB ArguAna-PL (test set), self-reported: 43.220
- map_at_5 on MTEB ArguAna-PL (test set), self-reported: 45.616
- mrr_at_1 on MTEB ArguAna-PL (test set), self-reported: 31.863