---
license: mit
datasets:
- togethercomputer/RedPajama-Data-V2
language:
- en
library_name: transformers
---

This is a set of sparse autoencoders (SAEs) trained on [Llama 3.1 8B](https://huggingface.co./meta-llama/Meta-Llama-3.1-8B) using the 10B sample of the [RedPajama v2 corpus](https://huggingface.co./datasets/togethercomputer/RedPajama-Data-V2), which comes out to roughly 8.5B tokens using the Llama 3 tokenizer. The SAEs are organized by hookpoint, and can be loaded using the EleutherAI [`sae` library](https://github.com/EleutherAI/sae).

Unlike [EleutherAI/sae-llama-3.1-8b-32x](https://huggingface.co./EleutherAI/sae-llama-3.1-8b-32x), these SAEs were trained with the MultiTopK loss, which allows them to be used at varying sparsity levels at inference time. For more information, see OpenAI's description of the loss in [this paper](https://cdn.openai.com/papers/sparse-autoencoders.pdf).

With the `sae` library installed, you can access an SAE like this:
```python
from sae import Sae

sae = Sae.load_from_hub("EleutherAI/sae-llama-3.1-8b-64x", hookpoint="layers.23.mlp")
```
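
Because these SAEs were trained with the MultiTopK loss, the number of active latents per token can be chosen at inference time rather than being fixed to the training value of k. The sketch below is a rough illustration, not official usage: it captures the layer-23 MLP output with a standard PyTorch forward hook and then encodes it with the SAE loaded above. The `sae.encode(...)` call and the `sae.cfg.k` field are assumptions about the `sae` library's interface; check the library's README for the exact API.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sae import Sae

# Load the SAE trained on the output of layer 23's MLP.
sae = Sae.load_from_hub("EleutherAI/sae-llama-3.1-8b-64x", hookpoint="layers.23.mlp")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B")
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B", torch_dtype=torch.bfloat16
)

# Capture the output of the module this SAE was trained on (layer 23's MLP)
# with an ordinary PyTorch forward hook.
mlp_outputs = []
handle = model.model.layers[23].mlp.register_forward_hook(
    lambda module, inputs, output: mlp_outputs.append(output)
)

inputs = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.inference_mode():
    model(**inputs)
handle.remove()

# Assumption: the number of active latents per token is exposed as `sae.cfg.k`;
# thanks to the MultiTopK loss it can be changed at inference time.
sae.cfg.k = 128

# Flatten (batch, seq, d_model) -> (tokens, d_model) and encode.
# `latents` holds the sparse code; in the sae library this is an object with the
# selected latent activations and their indices (field names may differ by version).
latents = sae.encode(mlp_outputs[0].flatten(0, 1).float())
```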