---
language: en
tags:
  - clip
  - vision
  - transformers
  - interpretability
  - sparse autoencoder
  - sae
  - mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
  - type: explained_variance
    value: 82.6
    pretty_name: Explained Variance %
    range:
      min: 0
      max: 100
  - type: l0
    value: 57.329
    pretty_name: L0
---

# CLIP-B-32 Sparse Autoencoder x64 vanilla (L1 = 0.0001)


## Training Details

- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 10
- Component: hook_mlp_out

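A minimal sketch of collecting the activations this SAE was trained on, using Hugging Face `transformers` rather than the original training code. It assumes the base checkpoint is `laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K` and that the `hook_mlp_out` hook point corresponds to the output of `encoder.layers[10].mlp`:

```python
# Sketch: capture the layer-10 MLP output of CLIP-ViT-B-32 (the SAE input).
# Assumptions: checkpoint repo id and the mapping of "hook_mlp_out" to
# vision_model.encoder.layers[10].mlp are not stated on this card.
import torch
from PIL import Image
from transformers import CLIPVisionModel, CLIPImageProcessor

model_id = "laion/CLIP-ViT-B-32-DataComp.XL-s13B-b90K"
vision = CLIPVisionModel.from_pretrained(model_id).eval()
processor = CLIPImageProcessor.from_pretrained(model_id)

acts = {}

def save_mlp_out(_module, _inputs, output):
    # output: [batch, 50, 768] -- 1 CLS token + 49 patch tokens for a 224x224 image
    acts["mlp_out_10"] = output.detach()

handle = vision.vision_model.encoder.layers[10].mlp.register_forward_hook(save_mlp_out)

image = Image.new("RGB", (224, 224))  # stand-in for a real image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    vision(**inputs)
handle.remove()

print(acts["mlp_out_10"].shape)  # torch.Size([1, 50, 768])
```
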
## Model Architecture

- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens

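A minimal sketch of the vanilla ReLU SAE described above. The exact bias handling and the precise meaning of `encoder_transpose_decoder` follow the training code; here it is read, as an assumption, as initializing the encoder to the transpose of the decoder:

```python
# Sketch of a vanilla SAE: 768 -> 49,152 -> 768 with a ReLU bottleneck.
# The tied-transpose initialization is an assumption about "encoder_transpose_decoder".
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    def __init__(self, d_in: int = 768, expansion: int = 64):
        super().__init__()
        d_sae = d_in * expansion  # 49,152 latents
        self.W_enc = nn.Parameter(torch.empty(d_in, d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.W_dec = nn.Parameter(torch.empty(d_sae, d_in))
        self.b_dec = nn.Parameter(torch.zeros(d_in))
        nn.init.kaiming_uniform_(self.W_dec)
        with torch.no_grad():
            self.W_enc.copy_(self.W_dec.t())  # encoder initialized as decoder transpose (assumption)

    def forward(self, x: torch.Tensor):
        # x: [..., 768] activations from hook_mlp_out
        feats = torch.relu((x - self.b_dec) @ self.W_enc + self.b_enc)  # [..., 49152]
        recon = feats @ self.W_dec + self.b_dec                         # [..., 768]
        return recon, feats
```
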
## Performance Metrics

- L1 Coefficient: 0.0001
- L0 Sparsity: 57.3289
- Explained Variance: 0.8263 (82.63%)

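The two reported numbers follow the usual SAE definitions: L0 is the mean count of non-zero latents per token, and explained variance is one minus the ratio of residual variance to input variance. A sketch of both (the exact reduction over batch and token dimensions is an assumption):

```python
# Sketch of the two reported metrics under standard SAE definitions.
import torch

def l0_sparsity(feats: torch.Tensor) -> torch.Tensor:
    # Mean number of non-zero latents per token (reported here as ~57.3 of 49,152)
    return (feats != 0).float().sum(dim=-1).mean()

def explained_variance(x: torch.Tensor, recon: torch.Tensor) -> torch.Tensor:
    # 1 - Var(residual) / Var(input), reported here as ~0.826
    resid_var = (x - recon).var(dim=-1).mean()
    total_var = x.var(dim=-1).mean()
    return 1 - resid_var / total_var
```
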
## Training Configuration

- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000

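A minimal sketch of how these hyperparameters fit together in one training step. The optimizer choice (Adam), the total step count, and the exact warmup/cosine shape are assumptions; the learning rate, 200 warmup steps, gradient clipping, and L1 coefficient are taken from this card:

```python
# Sketch of the training step: MSE reconstruction + L1 sparsity penalty,
# linear warmup then cosine annealing, gradient clipping at 1.0.
import math
import torch

total_steps = 10_000   # assumed; not stated on the card
warmup_steps = 200
base_lr = 4e-4
l1_coeff = 1e-4

sae = VanillaSAE()  # from the architecture sketch above
optimizer = torch.optim.Adam(sae.parameters(), lr=base_lr)  # Adam is an assumption

def lr_lambda(step: int) -> float:
    # Linear warmup for 200 steps, then cosine decay (assumed schedule shape)
    if step < warmup_steps:
        return (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

def training_step(x: torch.Tensor) -> torch.Tensor:
    recon, feats = sae(x)
    mse = (recon - x).pow(2).mean()          # reconstruction term
    l1 = feats.abs().sum(dim=-1).mean()      # sparsity penalty
    loss = mse + l1_coeff * l1
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(sae.parameters(), max_norm=1.0)
    optimizer.step()
    scheduler.step()
    return loss
```
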
## Experiment Tracking

## Citation

@misc{2024josephsparseautoencoders,
    title={Sparse Autoencoders for CLIP-ViT-B-32},
    author={Joseph, Sonia},
    year={2024},
    publisher={Prisma-Multimodal},
    url={https://huggingface.co./Prisma-Multimodal},
    note={Layer 10, hook_mlp_out, Run ID: 849j1ijv}
}