ODeNy committed · Commit 1f86fe4 · verified · Parent(s): 9486e37

Update README.md

Files changed (1):
  1. README.md (+6 -6)
README.md CHANGED
````diff
@@ -181,7 +181,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [s
 ### Model Description
 - **Model Type:** Sentence Transformer
 - **Base model:** [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) <!-- at revision 75c57757a97f90ad739aca51fa8bfea0e485a7f2 -->
-- **Maximum Sequence Length:** 256 tokens
+- **Maximum Sequence Length:** 128 tokens
 - **Output Dimensionality:** 768 dimensions
 - **Similarity Function:** Cosine Similarity
 <!-- - **Training Dataset:** Unknown -->
@@ -198,7 +198,7 @@ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [s
 
 ```
 SentenceTransformer(
-  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
+  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
 )
 ```
@@ -297,7 +297,7 @@ You can finetune this model on your own dataset.
 |         | sentence1 | sentence2 | score |
 |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------|
 | type    | string | string | float |
-| details | <ul><li>min: 19 tokens</li><li>mean: 37.07 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 66.53 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
+| details | <ul><li>min: 19 tokens</li><li>mean: 37.07 tokens</li><li>max: 116 tokens</li></ul> | <ul><li>min: 14 tokens</li><li>mean: 66.53 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
 * Samples:
 | sentence1 | sentence2 | score |
 |:----------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
@@ -323,7 +323,7 @@ You can finetune this model on your own dataset.
 |         | sentence1 | sentence2 | score |
 |:--------|:------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|:--------------------------------------------------------------|
 | type    | string | string | float |
-| details | <ul><li>min: 19 tokens</li><li>mean: 38.15 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 67.62 tokens</li><li>max: 256 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
+| details | <ul><li>min: 19 tokens</li><li>mean: 38.15 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 11 tokens</li><li>mean: 67.62 tokens</li><li>max: 128 tokens</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.5</li><li>max: 1.0</li></ul> |
 * Samples:
 | sentence1 | sentence2 | score |
 |:-------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-----------------|
@@ -473,7 +473,7 @@ You can finetune this model on your own dataset.
 ### Training Logs
 | Epoch     | Step     | Training Loss | Validation Loss | spearman_cosine |
 |:---------:|:--------:|:-------------:|:---------------:|:---------------:|
-| 0.3793    | 256      | -             | 5.9158          | 0.8422          |
+| 0.3793    | 128      | -             | 5.9158          | 0.8422          |
 | 0.7407    | 500      | 5.9128        | -               | -               |
 | 0.7585    | 512      | -             | 5.6544          | 0.8537          |
 | 1.1378    | 768      | -             | 5.9536          | 0.8595          |
@@ -487,7 +487,7 @@ You can finetune this model on your own dataset.
 | 3.0341    | 2048     | -             | 6.3380          | 0.8682          |
 | 3.4133    | 2304     | -             | 6.9139          | 0.8676          |
 | 3.7037    | 2500     | 4.6428        | -               | -               |
-| 3.7926    | 2560     | -             | 6.7426          | 0.8676          |
+| 3.7926    | 1280     | -             | 6.7426          | 0.8676          |
 
 * The bold row denotes the saved checkpoint.
````
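For reference, a minimal sketch of how the 128-token limit documented above shows up at load time, assuming a hypothetical repository id (the commit page does not name the model repo) and the standard sentence-transformers API:

```python
from sentence_transformers import SentenceTransformer, util

# "ODeNy/finetuned-mpnet" is a placeholder id; substitute the actual
# repository this README belongs to.
model = SentenceTransformer("ODeNy/finetuned-mpnet")

# The card documents a 128-token truncation length; the loaded Transformer
# module should report the same value, and longer inputs are cut to it.
print(model.max_seq_length)  # expected: 128

# Mean pooling + cosine similarity, as listed under Model Description.
sentences = [
    "A short example query.",
    "A longer passage that gets truncated to 128 tokens before encoding.",
]
embeddings = model.encode(sentences)  # shape: (2, 768)
print(util.cos_sim(embeddings[0], embeddings[1]))
```

Since `max_seq_length` is a writable property on `SentenceTransformer`, the shorter limit can also be enforced explicitly with `model.max_seq_length = 128` if needed.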