These transcoders were trained on the outputs of the first 15 MLPs in deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B. Training used 10 billion tokens from the deduplicated FineWeb-Edu dataset at a context length of 2048. Each transcoder has 65,536 latents and includes a linear skip connection.
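
To make the architecture concrete, below is a minimal PyTorch sketch of a transcoder with a linear skip connection: it maps an MLP's input activations to a reconstruction of that MLP's output through a sparse latent bottleneck, with the skip term added directly from the input. This is an illustrative sketch, not the training code; the class name, the TopK sparsity mechanism, and `k=32` are assumptions not stated above, and 1536 is the hidden size of DeepSeek-R1-Distill-Qwen-1.5B.

```python
import torch
import torch.nn as nn


class SkipTranscoder(nn.Module):
    """Sketch of a transcoder: MLP input -> sparse latents -> MLP output,
    plus a linear skip connection from input to output."""

    def __init__(self, d_model: int = 1536, n_latents: int = 65_536, k: int = 32):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_latents)
        self.decoder = nn.Linear(n_latents, d_model)
        # Linear skip connection straight from the MLP input to the reconstruction.
        self.skip = nn.Linear(d_model, d_model, bias=False)
        self.k = k  # number of active latents per token (assumed mechanism)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pre_acts = self.encoder(x)
        # Keep only the top-k latent activations per token; zero out the rest.
        topk = torch.topk(pre_acts, self.k, dim=-1)
        latents = torch.zeros_like(pre_acts)
        latents.scatter_(-1, topk.indices, torch.relu(topk.values))
        # Reconstruct the MLP output and add the linear skip term.
        return self.decoder(latents) + self.skip(x)
```

The skip connection lets the decoder's latents focus on the nonlinear part of the MLP's computation, since any linear component of the input-output map can be absorbed by the skip weights.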