# allNLI-sbert

---
language:
  - en
license: odc-by
size_categories:
  - 100K<n<1M
task_categories:
  - sentence-similarity
dataset_info:
  - config_name: default
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
      - name: label
        dtype: string
    splits:
      - name: train
        num_bytes: 144780011.33594054
        num_examples: 942069
      - name: validation
        num_bytes: 3020947.173540986
        num_examples: 19657
      - name: test
        num_bytes: 3020793.490518473
        num_examples: 19656
    download_size: 72629620
    dataset_size: 150821752
  - config_name: float-labels
    features:
      - name: sentence1
        dtype: string
      - name: sentence2
        dtype: string
      - name: label
        dtype: float64
    splits:
      - name: train
        num_bytes: 138755142
        num_examples: 942069
      - name: validation
        num_bytes: 3034127
        num_examples: 19657
      - name: test
        num_bytes: 3142127
        num_examples: 19656
    download_size: 72653539
    dataset_size: 144931396
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
  - config_name: float-labels
    data_files:
      - split: train
        path: float-labels/train-*
      - split: validation
        path: float-labels/validation-*
      - split: test
        path: float-labels/test-*
---

This is the allNLI example dataset from sentence-transformers, parsed and reformatted as Hugging Face `datasets`-compatible Parquet. The `default` config keeps the original string labels; the `float-labels` config stores the label column as `float64`.

## Token counts

### `sentence1` column

`bert-base-uncased` token counts:

```text
         token_count
count  942069.000000
mean       20.834934
std        12.953432
min         3.000000
25%        13.000000
50%        17.000000
75%        25.000000
max       428.000000
```

- Total count: 19.63 M tokens

`google/bigbird-roberta-base` token counts:

```text
         token_count
count  942069.000000
mean       20.678186
std        12.618819
min         3.000000
25%        13.000000
50%        17.000000
75%        25.000000
max       407.000000
```

- Total count: 19.48 M tokens

### `sentence2` column

`bert-base-uncased` token counts:

```text
         token_count
count  942069.000000
mean       12.058493
std         4.507284
min         0.000000
25%         9.000000
50%        11.000000
75%        14.000000
max        77.000000
```

- Total count: 11.36 M tokens

`google/bigbird-roberta-base` token counts:

```text
         token_count
count  942069.000000
mean       12.003818
std         4.423798
min         0.000000
25%         9.000000
50%        11.000000
75%        14.000000
max        79.000000
```

- Total count: 11.31 M tokens