Collections
Collections including paper arxiv:1905.05950
- Prompt-to-Prompt Image Editing with Cross Attention Control (Paper • 2208.01626 • Published • 2)
- BERT Rediscovers the Classical NLP Pipeline (Paper • 1905.05950 • Published • 2)
- A Multiscale Visualization of Attention in the Transformer Model (Paper • 1906.05714 • Published • 2)
- Analyzing Transformers in Embedding Space (Paper • 2209.02535 • Published • 3)

- Neural networks behave as hash encoders: An empirical study (Paper • 2101.05490 • Published • 2)
- A Multiscale Visualization of Attention in the Transformer Model (Paper • 1906.05714 • Published • 2)
- BERT Rediscovers the Classical NLP Pipeline (Paper • 1905.05950 • Published • 2)
- Using Explainable AI and Transfer Learning to understand and predict the maintenance of Atlantic blocking with limited observational data (Paper • 2404.08613 • Published • 1)

- JoMA: Demystifying Multilayer Transformers via JOint Dynamics of MLP and Attention (Paper • 2310.00535 • Published • 2)
- Interpretability in the Wild: a Circuit for Indirect Object Identification in GPT-2 small (Paper • 2211.00593 • Published • 2)
- Rethinking Interpretability in the Era of Large Language Models (Paper • 2402.01761 • Published • 21)
- Does Circuit Analysis Interpretability Scale? Evidence from Multiple Choice Capabilities in Chinchilla (Paper • 2307.09458 • Published • 10)

- Analyzing Transformers in Embedding Space (Paper • 2209.02535 • Published • 3)
- Locating and Editing Factual Associations in GPT (Paper • 2202.05262 • Published • 1)
- A Multiscale Visualization of Attention in the Transformer Model (Paper • 1906.05714 • Published • 2)
- BERT Rediscovers the Classical NLP Pipeline (Paper • 1905.05950 • Published • 2)