# CLIP-B-32 Sparse Autoencoder, x64 vanilla, L1 = 5e-05

## Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 9
- Component: hook_mlp_out
## Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
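A minimal sketch of the vanilla ReLU architecture listed above, assuming the standard single-hidden-layer SAE formulation (linear encoder, ReLU, linear decoder) and the `encoder_transpose_decoder` initialization, where the decoder weights start as the encoder's transpose. Parameter names and the `init_sae`/`sae_forward` helpers are illustrative, not the checkpoint's actual keys:

```python
import numpy as np

def init_sae(d_in, d_sae, seed=0):
    """Vanilla SAE parameters; the decoder starts as the encoder's
    transpose (the 'encoder_transpose_decoder' init named on this card)."""
    rng = np.random.default_rng(seed)
    W_enc = rng.normal(scale=0.02, size=(d_in, d_sae)).astype(np.float32)
    return {
        "W_enc": W_enc,
        "b_enc": np.zeros(d_sae, dtype=np.float32),
        "W_dec": W_enc.T.copy(),
        "b_dec": np.zeros(d_in, dtype=np.float32),
    }

def sae_forward(params, x):
    # ReLU encoder -> sparse feature vector -> linear decoder reconstruction
    feats = np.maximum(x @ params["W_enc"] + params["b_enc"], 0.0)
    x_hat = feats @ params["W_dec"] + params["b_dec"]
    return feats, x_hat

# The trained SAE uses d_in=768 (ViT-B/32 hidden size) and d_sae=49_152
# (x64 expansion); a smaller instance keeps this sketch cheap to run.
params = init_sae(d_in=768, d_sae=49_152 // 64)
x = np.random.default_rng(1).normal(size=(4, 768)).astype(np.float32)
feats, x_hat = sae_forward(params, x)
```

The input batch stands in for `hook_mlp_out` activations collected from layer 9 of the base model.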
## Performance Metrics
- L1 Coefficient: 5e-05
- L0 Sparsity: 195.4962 (mean number of active features per token)
- Explained Variance: 0.8862 (88.62%)
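One common way to compute these metrics, sketched under the assumption that L0 counts nonzero features per example, explained variance is taken over all activation elements, and the training loss is reconstruction MSE plus the L1 penalty with the coefficient from this card (the run's exact definitions, e.g. per-dimension centering, may differ):

```python
import numpy as np

def l0_sparsity(feats):
    # L0: mean count of nonzero SAE features per example
    return float((feats != 0).sum(axis=-1).mean())

def explained_variance(x, x_hat):
    # fraction of activation variance captured by the reconstruction
    resid = x - x_hat
    return float(1.0 - resid.var() / x.var())

def sae_loss(x, x_hat, feats, l1_coef=5e-5):
    # reconstruction MSE plus the L1 sparsity penalty
    mse = ((x - x_hat) ** 2).mean()
    l1 = np.abs(feats).sum(axis=-1).mean()
    return float(mse + l1_coef * l1)

# tiny worked example
x = np.array([[1.0, 2.0], [3.0, 4.0]])
x_hat = np.array([[1.0, 2.0], [3.0, 3.0]])
feats = np.array([[0.0, 2.0, 0.0], [1.0, 0.0, 3.0]])
```

On this toy batch, `l0_sparsity(feats)` is 1.5 and `explained_variance(x, x_hat)` is 0.85.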
## Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
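The learning-rate schedule above (cosine annealing with a 200-step warmup) can be sketched as a pure function of the step index. The base LR and warmup length come from this card; `total_steps` is a placeholder, since the card gives epochs but not steps per epoch, and the run's exact warmup shape may differ:

```python
import math

def lr_at_step(step, base_lr=4e-4, warmup_steps=200, total_steps=10_000):
    # linear warmup to base_lr, then cosine decay toward zero
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

With PyTorch, the equivalent is typically built from `LinearLR` and `CosineAnnealingLR` chained via `SequentialLR`.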
## Experiment Tracking
- Weights & Biases Run ID: fv5729c1
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/fv5729c1/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation

```bibtex
@misc{2024josephsparseautoencoders,
  title     = {Sparse Autoencoders for CLIP-ViT-B-32},
  author    = {Joseph, Sonia},
  year      = {2024},
  publisher = {Prisma-Multimodal},
  url       = {https://huggingface.co./Prisma-Multimodal},
  note      = {Layer 9, hook_mlp_out, Run ID: fv5729c1}
}
```