---
license: mit
size_categories:
  - 100K<n<1M
pretty_name: 'n'
dataset_info:
  - config_name: c10-16
    features:
      - name: data
        sequence: float16
      - name: test_loss
        dtype: float16
      - name: test_acc
        dtype: float16
      - name: train_loss
        dtype: float16
      - name: train_acc
        dtype: float16
    splits:
      - name: '0'
        num_bytes: 73741472
        num_examples: 2513
      - name: '1'
        num_bytes: 73741472
        num_examples: 2513
      - name: '2'
        num_bytes: 73741472
        num_examples: 2513
    download_size: 208172765
    dataset_size: 221224416
  - config_name: default
    features:
      - name: data
        sequence: float16
      - name: test_loss
        dtype: float16
      - name: test_acc
        dtype: float16
      - name: train_loss
        dtype: float16
      - name: train_acc
        dtype: float16
    splits:
      - name: '0'
        num_bytes: 77328384
        num_examples: 2688
      - name: '1'
        num_bytes: 77328384
        num_examples: 2688
      - name: '2'
        num_bytes: 77328384
        num_examples: 2688
    download_size: 218320869
    dataset_size: 231985152
  - config_name: fm-16
    features:
      - name: data
        sequence: float16
      - name: test_loss
        dtype: float16
      - name: test_acc
        dtype: float16
      - name: train_loss
        dtype: float16
      - name: train_acc
        dtype: float16
    splits:
      - name: '0'
        num_bytes: 77328384
        num_examples: 2688
      - name: '1'
        num_bytes: 77328384
        num_examples: 2688
      - name: '2'
        num_bytes: 77328384
        num_examples: 2688
    download_size: 218320869
    dataset_size: 231985152
  - config_name: lm1b-2-32
    features:
      - name: data
        sequence: float16
      - name: train_loss
        dtype: float16
    splits:
      - name: '0'
        num_bytes: 413283816
        num_examples: 124
      - name: '1'
        num_bytes: 413283816
        num_examples: 124
      - name: '2'
        num_bytes: 413283816
        num_examples: 124
    download_size: 1163916640
    dataset_size: 1239851448
  - config_name: lm1b-3-24
    features:
      - name: data
        sequence: float16
      - name: train_loss
        dtype: float16
    splits:
      - name: '0'
        num_bytes: 310611816
        num_examples: 124
      - name: '1'
        num_bytes: 310611816
        num_examples: 124
      - name: '2'
        num_bytes: 310611816
        num_examples: 124
    download_size: 874816124
    dataset_size: 931835448
configs:
  - config_name: c10-16
    data_files:
      - split: '0'
        path: c10-16/0-*
      - split: '1'
        path: c10-16/1-*
      - split: '2'
        path: c10-16/2-*
  - config_name: default
    data_files:
      - split: '0'
        path: data/0-*
      - split: '1'
        path: data/1-*
      - split: '2'
        path: data/2-*
  - config_name: fm-16
    data_files:
      - split: '0'
        path: fm-16/0-*
      - split: '1'
        path: fm-16/1-*
      - split: '2'
        path: fm-16/2-*
  - config_name: lm1b-2-32
    data_files:
      - split: '0'
        path: lm1b-2-32/0-*
      - split: '1'
        path: lm1b-2-32/1-*
      - split: '2'
        path: lm1b-2-32/2-*
  - config_name: lm1b-3-24
    data_files:
      - split: '0'
        path: lm1b-3-24/0-*
      - split: '1'
        path: lm1b-3-24/1-*
      - split: '2'
        path: lm1b-3-24/2-*
---

Note: the dataset is being prepared and uploaded.

This is the dataset of trained neural network checkpoints used to meta-train the NiNo model from https://github.com/SamsungSAILMontreal/nino/.
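
As a quick reference, here is a minimal loading sketch using the Hugging Face `datasets` library. The config and split names are taken from the metadata above; the repository id `SamsungSAILMontreal/nino_metatrain` is an assumption based on the GitHub organization name.

```python
from datasets import load_dataset

# Pick a task via its config name: 'c10-16', 'fm-16', 'lm1b-2-32' or 'lm1b-3-24'
# (or 'default'); split names are '0', '1' and '2'.
ds = load_dataset("SamsungSAILMontreal/nino_metatrain", "c10-16", split="0")
print(ds)  # features: data, test_loss, test_acc, train_loss, train_acc
```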

It contains 1,000 models in total:

- 300 small convnets with 3 layers and 16, 32 and 32 channels (14,378 parameters in each model), trained on FashionMNIST (FM-16)
- 300 small convnets with 3 layers and 16, 32 and 32 channels (14,666 parameters in each model), trained on CIFAR10 (C10-16)
- 200 small GPT2-based transformers with 3 layers, 24 hidden units and 3 heads (1,252,464 parameters in each model), trained on LM1B (LM1B-3-24)
- 200 small GPT2-based transformers with 2 layers, 32 hidden units and 2 heads (1,666,464 parameters in each model), trained on LM1B (LM1B-2-32)

Each model contains multiple checkpoints:

- 2,688 checkpoints per model in FM-16 (one every 4 steps of Adam)
- 2,513 checkpoints per model in C10-16 (one every 4 steps of Adam)
- 124 checkpoints per model in LM1B-3-24 (one every 200 steps of Adam)
- 124 checkpoints per model in LM1B-2-32 (one every 200 steps of Adam)

In total, there are 1,609,900 model checkpoints.
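
This total is consistent with the per-model counts above:

```python
# Total checkpoints = models per task x checkpoints per model, summed over tasks.
total = 300 * 2688 + 300 * 2513 + 200 * 124 + 200 * 124
assert total == 1_609_900
```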

The dataset also contains the training loss for each checkpoint; for FM-16 and C10-16 it additionally contains the training accuracy, test loss and test accuracy.
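
Continuing the loading sketch above, a single record can be read as follows. Interpreting `data` as the flattened float16 parameter vector of one checkpoint is an assumption based on the description above.

```python
import numpy as np

rec = ds[0]  # one checkpoint record
# 'data' is stored as a float16 sequence; assumed to be the flattened parameters.
params = np.asarray(rec["data"], dtype=np.float16)
print(params.size, rec["train_loss"], rec["test_acc"])
```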

The dataset corresponds to the first 4 columns (in-distribution tasks) of Table 1 in the paper *Accelerating Training with Neuron Interaction and Nowcasting Networks*; see https://arxiv.org/abs/2409.04434 for details.