|
--- |
|
base_model: BEE-spoke-data/bert-plus-L8-v1.0-allNLI_matryoshka |
|
library_name: sentence-transformers |
|
pipeline_tag: sentence-similarity |
|
tags: |
|
- sentence-transformers |
|
- feature-extraction |
|
- sentence-similarity |
|
- transformers |
|
- 4k |
|
- '4096' |
|
- document embedding |
|
- synthetic data |
|
license: apache-2.0 |
|
datasets: |
|
- pszemraj/synthetic-text-similarity |
|
language: |
|
- en |
|
--- |
|
|
|
# BEE-spoke-data/bert-plus-L8-v1.0-syntheticSTS-4k |
|
|
|
<a href="https://colab.research.google.com/gist/pszemraj/492e96baa289ba2f8326369153f3fd34/inference_bert_synthsts.ipynb"> |
|
<img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> |
|
</a> |
|
|
|
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences and paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
|
|
|
- A continued fine-tune of [BEE-spoke-data/bert-plus-L8-v1.0-allNLI_matryoshka](https://hf.co/BEE-spoke-data/bert-plus-L8-v1.0-allNLI_matryoshka)

- Trained at a context length of 4096 tokens on a synthetic text-similarity dataset with `text1`, `text2`, and `label` columns

- Matryoshka dims: [768, 512, 256, 128, 64] (see the truncation sketch below)
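
Because the model was trained with a Matryoshka objective, embeddings can be truncated to any of the listed dimensionalities and re-normalized, usually with only a small loss in quality. A minimal sketch (the choice of 256 dims is just an example):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer('BEE-spoke-data/bert-plus-L8-v1.0-syntheticSTS-4k')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Keep only the first `dim` components, then re-normalize so cosine similarity still works
dim = 256  # any of [768, 512, 256, 128, 64]
truncated = embeddings[:, :dim]
truncated = truncated / np.linalg.norm(truncated, axis=1, keepdims=True)
print(truncated.shape)  # (2, 256)
```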
|
|
|
|
|
|
## Usage (Sentence-Transformers) |
|
|
|
Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:
|
|
|
``` |
|
pip install -U sentence-transformers |
|
``` |
|
|
|
Then you can use the model like this: |
|
|
|
```python |
|
from sentence_transformers import SentenceTransformer |
|
sentences = ["This is an example sentence", "Each sentence is converted"] |
|
|
|
model = SentenceTransformer('BEE-spoke-data/bert-plus-L8-v1.0-syntheticSTS-4k') |
|
embeddings = model.encode(sentences) |
|
print(embeddings) |
|
``` |
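
Since the model was tuned on scored text pairs at up to 4096 tokens of context, a common use is scoring the similarity of two (possibly long) documents. A minimal sketch using the `util.cos_sim` helper; the example texts here are placeholders:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('BEE-spoke-data/bert-plus-L8-v1.0-syntheticSTS-4k')

# These would typically be long documents (up to ~4096 tokens); short strings keep the example readable
doc_a = "The quarterly report shows revenue growth across all regions."
doc_b = "Revenue increased in every region this quarter, according to the report."

embeddings = model.encode([doc_a, doc_b])
print(util.cos_sim(embeddings[0], embeddings[1]))  # cosine similarity in [-1, 1]
```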
|
|
|
|
|
|
|
## Usage (HuggingFace Transformers) |
|
Without [sentence-transformers](https://www.SBERT.net), you can use the model as follows: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
|
|
|
```python |
|
from transformers import AutoTokenizer, AutoModel |
|
import torch |
|
|
|
|
|
# Mean pooling: take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
|
|
|
|
|
# Sentences we want sentence embeddings for |
|
sentences = ['This is an example sentence', 'Each sentence is converted'] |
|
|
|
# Load model from HuggingFace Hub |
|
tokenizer = AutoTokenizer.from_pretrained('BEE-spoke-data/bert-plus-L8-v1.0-syntheticSTS-4k') |
|
model = AutoModel.from_pretrained('BEE-spoke-data/bert-plus-L8-v1.0-syntheticSTS-4k') |
|
|
|
# Tokenize sentences |
|
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') |
|
|
|
# Compute token embeddings |
|
with torch.no_grad(): |
|
model_output = model(**encoded_input) |
|
|
|
# Perform pooling. In this case, mean pooling. |
|
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) |
|
|
|
print("Sentence embeddings:") |
|
print(sentence_embeddings) |
|
``` |
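
If you intend to compare these embeddings with cosine similarity, it can help to L2-normalize them first. A short sketch, continuing from the snippet above:

```python
import torch.nn.functional as F

# After L2 normalization, a plain dot product equals cosine similarity
sentence_embeddings = F.normalize(sentence_embeddings, p=2, dim=1)
print(sentence_embeddings @ sentence_embeddings.T)  # pairwise cosine similarities
```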
|
|
|
## Training |
|
|
|
The model was trained with the following parameters:
|
|
|
**Loss**: |
|
|
|
`sentence_transformers.losses.MatryoshkaLoss.MatryoshkaLoss` with parameters: |
|
``` |
|
{'loss': 'CosineSimilarityLoss', 'matryoshka_dims': [768, 512, 256, 128, 64], 'matryoshka_weights': [1, 1, 1, 1, 1], 'n_dims_per_step': -1} |
|
``` |
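
For reference, a hedged sketch of how this loss configuration could be assembled in sentence-transformers; this is an illustration of the listed parameters, not the exact training script:

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer('BEE-spoke-data/bert-plus-L8-v1.0-allNLI_matryoshka')

# CosineSimilarityLoss is wrapped so every listed dimensionality is optimized jointly
base_loss = losses.CosineSimilarityLoss(model)
loss = losses.MatryoshkaLoss(
    model,
    base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
)
```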
|
|
|
See [the training run on wandb](https://wandb.ai/pszemraj/test-sbert-v3-api/runs/suv4fd2p) for more details.
|
|
|
--- |