---
base_model: google-bert/bert-base-uncased
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:97043
  - loss:DenoisingAutoEncoderLoss
widget:
  - source_sentence: ढचणच𑀟च𑀟
    sentences:
      - ढ𑀢ढल𑀢𑁣ब𑀪चध𑀫ण ढचणच𑀟च𑀟 𑀞नलच𑀠च𑀟च𑀤च𑀪पच𑀯
      - ' णच 𑀪𑀢𑀞𑁦 𑀱च𑀟𑀟च𑀟 𑀠न𑀞च𑀠𑀢𑀟 𑀫च𑀪 𑀤न𑀱च 𑀭थ𑁢𑀰𑀯'
      - ' च त𑀢𑀞𑀢𑀟 𑀠च𑀘चल𑀢𑀳च𑀪𑀠च𑀟च𑀤च𑀪पच𑀯'
  - source_sentence: त𑁣𑀠
    sentences:
      - ' 𑀲𑀪𑁦𑁦𑀣𑁣𑀠 𑀫𑁣न𑀳𑁦 पच ढच𑀢𑀱च 𑀳न𑀣च𑀟 𑀠चप𑀳चण𑀢 𑀠च𑀲𑀢 झच𑀳झच𑀟त𑀢 च प𑀳च𑀞च𑀟𑀢𑀟 ब𑀱च𑀠𑀟चप𑁣त𑀢𑀟 𑀣च𑀟𑀟𑀢णच च 𑀳𑀫𑁦𑀞च𑀪च पच ठ𑀧𑀭ठ𑀯'
      - 𑀖𑀖फ𑀮𑀦 𑁣𑀪𑁣𑀠𑁣 𑀝ठ𑀗𑀯
      - त𑁣𑀠 𑀯
  - source_sentence: 𑀣च𑀟णच𑀟 𑀝𑀭थथ𑀬षठ𑀧𑀧ठ𑀮
    sentences:
      - ' 𑀣च𑀟णच𑀟 𑀝𑀭थथ𑀬षठ𑀧𑀧ठ𑀮 ध𑀪𑁣𑀲𑀯'
      - 𑀳त𑀯
      - ' 𑀳𑀫𑀢 ञच 𑀟𑁦 बच लच𑀲पच𑀟च𑀪 त𑁣ल𑀯'
  - source_sentence: 𑀠च𑀟च𑀤च𑀪पच𑀯
    sentences:
      - ' धच𑀪𑀞𑁦𑀪𑀦 लचनणच𑀟 ढ𑁣𑀳प𑁣𑀟𑀯'
      - ब𑀪𑁦चपषधण𑀪च𑀠𑀢𑀣𑀯
      - 𑀠च𑀟च𑀤च𑀪पच𑀯
  - source_sentence: 𑀫च𑀢𑀲𑀢 𑀳न𑀪𑁦𑀟𑀦 
    sentences:
      - ' 𑀳𑀫त𑀫𑁦𑀪ढचप𑀢न𑀞 पच 𑀫च𑀢𑀲𑀢 ञच𑀦 𑀳न𑀪𑁦𑀟𑀦 च त𑀢𑀞𑀢𑀟 𑀭थ𑀖𑀗𑀯'
      - 𑀯
      - 𑀯
---

SentenceTransformer based on google-bert/bert-base-uncased

This is a sentence-transformers model finetuned from google-bert/bert-base-uncased. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

Model Details

Model Description

  • Model Type: Sentence Transformer
  • Base model: google-bert/bert-base-uncased
  • Maximum Sequence Length: 512 tokens
  • Output Dimensionality: 768 dimensions
  • Similarity Function: Cosine Similarity

Model Sources

  • Documentation: Sentence Transformers Documentation (https://sbert.net)
  • Repository: Sentence Transformers on GitHub (https://github.com/UKPLab/sentence-transformers)
  • Hugging Face: Sentence Transformers on Hugging Face (https://huggingface.co/models?library=sentence-transformers)

Full Model Architecture

SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel 
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
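
The Pooling module above uses pooling_mode_cls_token, so a sentence embedding is simply the final hidden state of the [CLS] token. Below is a minimal sketch of the equivalent computation in plain transformers, assuming the repository exposes standard BERT weights at its root (the English input string is illustrative only); loading via Sentence Transformers, as shown under Usage, remains the supported path.

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("T-Blue/tsdae_pro_mbert")
model = AutoModel.from_pretrained("T-Blue/tsdae_pro_mbert")

inputs = tokenizer("an example sentence", return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# CLS pooling: take the final hidden state of the first ([CLS]) token
# as the 768-dimensional sentence embedding.
embedding = outputs.last_hidden_state[:, 0]
print(embedding.shape)  # torch.Size([1, 768])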

Usage

Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

pip install -U sentence-transformers

Then you can load this model and run inference.

from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("T-Blue/tsdae_pro_mbert")
# Run inference
sentences = [
    '𑀫च𑀢𑀲𑀢 𑀳न𑀪𑁦𑀟𑀦 च',
    ' 𑀳𑀫त𑀫𑁦𑀪ढचप𑀢न𑀞 पच 𑀫च𑀢𑀲𑀢 ञच𑀦 𑀳न𑀪𑁦𑀟𑀦 च त𑀢𑀞𑀢𑀟 𑀭थ𑀖𑀗𑀯',
    '𑀯',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
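
Because every input lands in the same 768-dimensional space, tasks such as semantic search reduce to encoding and comparing vectors. Here is a short sketch; the corpus and query strings are placeholders that stand in for text in the script the model was trained on.

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("T-Blue/tsdae_pro_mbert")

# Hypothetical corpus and query; replace with real text.
corpus = ["first document", "second document", "third document"]
query = "a search query"

corpus_embeddings = model.encode(corpus)
query_embedding = model.encode([query])

# model.similarity applies the model's similarity function (cosine here).
scores = model.similarity(query_embedding, corpus_embeddings)  # shape [1, 3]
best = int(scores.argmax())
print(corpus[best], float(scores[0, best]))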

Training Details

Training Dataset

Unnamed Dataset

  • Size: 97,043 training samples
  • Columns: sentence_0 and sentence_1
  • Approximate statistics based on the first 1000 samples:

    |         | sentence_0                                       | sentence_1                                       |
    |---------|--------------------------------------------------|--------------------------------------------------|
    | type    | string                                           | string                                           |
    | details | min: 3 tokens, mean: 5.12 tokens, max: 30 tokens | min: 3 tokens, mean: 9.06 tokens, max: 56 tokens |
  • Samples:

    | sentence_0 | sentence_1 |
    |------------|------------|
    | च𑀞𑀱च𑀢     | च𑀞𑀱च𑀢 𑀭ठ𑀯 |
    | ठ𑀧𑀧𑁢𑀯     | ठ𑀧𑀧𑁢𑀯     |
    | 𑁢𑀗𑀯       | 𑁢𑀗𑀯       |
  • Loss: DenoisingAutoEncoderLoss
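
The original training script is not published. The following is a hedged sketch of how such a dataset and loss are typically wired together with the Sentence Transformers v3 trainer; the noise function, placeholder sentence list, and dataset construction are assumptions (TSDAE's reference setup deletes roughly 60% of words, which would produce the short sentence_0 column above).

import random

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer
from sentence_transformers.losses import DenoisingAutoEncoderLoss

def delete_words(text, del_ratio=0.6):
    # Word-deletion noise as in the TSDAE paper; keep at least one word.
    words = text.split()
    kept = [w for w in words if random.random() > del_ratio]
    return " ".join(kept or words[:1])

raw_sentences = ["..."]  # the ~97k training sentences (not published)

# DenoisingAutoEncoderLoss expects (damaged, original) pairs, matching
# the sentence_0 / sentence_1 columns above.
train_dataset = Dataset.from_dict({
    "sentence_0": [delete_words(s) for s in raw_sentences],
    "sentence_1": raw_sentences,
})

model = SentenceTransformer("google-bert/bert-base-uncased")
loss = DenoisingAutoEncoderLoss(model, tie_encoder_decoder=True)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()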

Training Hyperparameters

Non-Default Hyperparameters

  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • num_train_epochs: 5
  • multi_dataset_batch_sampler: round_robin
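
These map directly onto SentenceTransformerTrainingArguments. A brief sketch, with output_dir as a placeholder, that could be passed to a trainer like the one sketched above:

from sentence_transformers import SentenceTransformerTrainingArguments
from sentence_transformers.training_args import MultiDatasetBatchSamplers

args = SentenceTransformerTrainingArguments(
    output_dir="tsdae_pro_mbert",  # placeholder
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=5,
    multi_dataset_batch_sampler=MultiDatasetBatchSamplers.ROUND_ROBIN,
)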

All Hyperparameters

  • overwrite_output_dir: False
  • do_predict: False
  • eval_strategy: no
  • prediction_loss_only: True
  • per_device_train_batch_size: 16
  • per_device_eval_batch_size: 16
  • per_gpu_train_batch_size: None
  • per_gpu_eval_batch_size: None
  • gradient_accumulation_steps: 1
  • eval_accumulation_steps: None
  • learning_rate: 5e-05
  • weight_decay: 0.0
  • adam_beta1: 0.9
  • adam_beta2: 0.999
  • adam_epsilon: 1e-08
  • max_grad_norm: 1
  • num_train_epochs: 5
  • max_steps: -1
  • lr_scheduler_type: linear
  • lr_scheduler_kwargs: {}
  • warmup_ratio: 0.0
  • warmup_steps: 0
  • log_level: passive
  • log_level_replica: warning
  • log_on_each_node: True
  • logging_nan_inf_filter: True
  • save_safetensors: True
  • save_on_each_node: False
  • save_only_model: False
  • restore_callback_states_from_checkpoint: False
  • no_cuda: False
  • use_cpu: False
  • use_mps_device: False
  • seed: 42
  • data_seed: None
  • jit_mode_eval: False
  • use_ipex: False
  • bf16: False
  • fp16: False
  • fp16_opt_level: O1
  • half_precision_backend: auto
  • bf16_full_eval: False
  • fp16_full_eval: False
  • tf32: None
  • local_rank: 0
  • ddp_backend: None
  • tpu_num_cores: None
  • tpu_metrics_debug: False
  • debug: []
  • dataloader_drop_last: False
  • dataloader_num_workers: 0
  • dataloader_prefetch_factor: None
  • past_index: -1
  • disable_tqdm: False
  • remove_unused_columns: True
  • label_names: None
  • load_best_model_at_end: False
  • ignore_data_skip: False
  • fsdp: []
  • fsdp_min_num_params: 0
  • fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
  • fsdp_transformer_layer_cls_to_wrap: None
  • accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
  • deepspeed: None
  • label_smoothing_factor: 0.0
  • optim: adamw_torch
  • optim_args: None
  • adafactor: False
  • group_by_length: False
  • length_column_name: length
  • ddp_find_unused_parameters: None
  • ddp_bucket_cap_mb: None
  • ddp_broadcast_buffers: False
  • dataloader_pin_memory: True
  • dataloader_persistent_workers: False
  • skip_memory_metrics: True
  • use_legacy_prediction_loop: False
  • push_to_hub: False
  • resume_from_checkpoint: None
  • hub_model_id: None
  • hub_strategy: every_save
  • hub_private_repo: False
  • hub_always_push: False
  • gradient_checkpointing: False
  • gradient_checkpointing_kwargs: None
  • include_inputs_for_metrics: False
  • eval_do_concat_batches: True
  • fp16_backend: auto
  • push_to_hub_model_id: None
  • push_to_hub_organization: None
  • mp_parameters:
  • auto_find_batch_size: False
  • full_determinism: False
  • torchdynamo: None
  • ray_scope: last
  • ddp_timeout: 1800
  • torch_compile: False
  • torch_compile_backend: None
  • torch_compile_mode: None
  • dispatch_batches: None
  • split_batches: None
  • include_tokens_per_second: False
  • include_num_input_tokens_seen: False
  • neftune_noise_alpha: None
  • optim_target_modules: None
  • batch_eval_metrics: False
  • eval_on_start: False
  • batch_sampler: batch_sampler
  • multi_dataset_batch_sampler: round_robin

Training Logs

Epoch Step Training Loss
0.0824 500 1.1372
0.1649 1000 0.8075
0.2473 1500 0.7708
0.3297 2000 0.7464
0.4121 2500 0.7286
0.4946 3000 0.7187
0.5770 3500 0.7089
0.6594 4000 0.6942
0.7418 4500 0.7022
0.8243 5000 0.6939
0.9067 5500 0.6859
0.9891 6000 0.6807
1.0715 6500 0.6841
1.1540 7000 0.6764
1.2364 7500 0.6705
1.3188 8000 0.6712
1.4013 8500 0.6683
1.4837 9000 0.6662
1.5661 9500 0.6635
1.6485 10000 0.655
1.7310 10500 0.6667
1.8134 11000 0.6533
1.8958 11500 0.6564
1.9782 12000 0.646
2.0607 12500 0.6522
2.1431 13000 0.6466
2.2255 13500 0.6464
2.3079 14000 0.647
2.3904 14500 0.6408
2.4728 15000 0.6415
2.5552 15500 0.6397
2.6377 16000 0.6303
2.7201 16500 0.6465
2.8025 17000 0.6287
2.8849 17500 0.6358
2.9674 18000 0.6247
3.0498 18500 0.6318
3.1322 19000 0.627
3.2146 19500 0.6222
3.2971 20000 0.6262
3.3795 20500 0.6197
3.4619 21000 0.6234
3.5443 21500 0.6193
3.6268 22000 0.6088
3.7092 22500 0.624
3.7916 23000 0.6089
3.8741 23500 0.6184
3.9565 24000 0.6047
4.0389 24500 0.6066
4.1213 25000 0.6082
4.2038 25500 0.5999
4.2862 26000 0.6046
4.3686 26500 0.6038
4.4510 27000 0.5978
4.5335 27500 0.5948
4.6159 28000 0.5887
4.6983 28500 0.6031
4.7807 29000 0.5823
4.8632 29500 0.5953
4.9456 30000 0.5793

Framework Versions

  • Python: 3.10.12
  • Sentence Transformers: 3.0.1
  • Transformers: 4.42.4
  • PyTorch: 2.3.1+cu121
  • Accelerate: 0.33.0
  • Datasets: 2.18.0
  • Tokenizers: 0.19.1

Citation

BibTeX

Sentence Transformers

@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}

DenoisingAutoEncoderLoss

@inproceedings{wang-2021-TSDAE,
    title = "TSDAE: Using Transformer-based Sequential Denoising Auto-Encoder for Unsupervised Sentence Embedding Learning",
    author = "Wang, Kexin and Reimers, Nils and Gurevych, Iryna", 
    booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021",
    month = nov,
    year = "2021",
    address = "Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    pages = "671--688",
    url = "https://arxiv.org/abs/2104.06979",
}