These transcoders were trained on the outputs of the first 15 MLPs in deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B. They were trained on 10 billion tokens from deduplicated FineWeb-Edu at a context length of 2048. Each transcoder has 65,536 latents and includes a linear skip connection.
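As a rough illustration, a transcoder with a linear skip connection maps an MLP's input to a sparse latent code and reconstructs the MLP's output, adding a direct linear path from input to output. The sketch below is an assumption about the general architecture, not the training code for these checkpoints; the encoder nonlinearity, initialization, and variable names are illustrative, and toy dimensions are used in place of the real ones.

```python
import torch

# Toy sizes for illustration; these checkpoints use d_model = 1536
# and 65,536 latents.
d_model, n_latents = 64, 256

W_enc = torch.randn(n_latents, d_model) / d_model**0.5
b_enc = torch.zeros(n_latents)
W_dec = torch.randn(d_model, n_latents) / n_latents**0.5
W_skip = torch.randn(d_model, d_model) / d_model**0.5  # linear skip connection

def transcoder(x: torch.Tensor) -> torch.Tensor:
    """Approximate the MLP's output from its input x."""
    latents = torch.relu(x @ W_enc.T + b_enc)   # sparse latent activations
    return latents @ W_dec.T + x @ W_skip.T     # decoder output + skip term

x = torch.randn(d_model)
y_hat = transcoder(x)
```

The skip term lets the dense, linear part of the MLP's behavior bypass the sparse bottleneck, so the latents only need to capture the nonlinear residual.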
Fraction of variance unexplained (FVU) ranges from 0.01 to 0.37 across layers.
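FVU is the reconstruction error normalized by the variance of the target activations, so 0 means perfect reconstruction and 1 means no better than predicting the mean. A minimal sketch of the computation (function name and shapes are illustrative):

```python
import numpy as np

def fvu(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """Fraction of variance unexplained: residual sum of squares
    divided by total variance around the per-dimension mean."""
    residual = ((y_true - y_pred) ** 2).sum()
    total = ((y_true - y_true.mean(axis=0)) ** 2).sum()
    return float(residual / total)

rng = np.random.default_rng(0)
y = rng.standard_normal((1000, 64))   # stand-in for MLP output activations
fvu(y, y)                              # perfect reconstruction -> 0.0
```

Predicting the per-dimension mean for every token yields an FVU of exactly 1.0, which is the natural baseline against which the 0.01–0.37 range above is read.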
The deduplicated FineWeb-Edu sample used is EleutherAI/fineweb-edu-dedup-10b.