REROUTING LLM ROUTERS
A PREPRINT
Avital Shafran
The Hebrew University
of Jerusalem
Roei Schuster
Wild Moose
Thomas Ristenpart
Cornell Tech
Vitaly Shmatikov
Cornell Tech
ABSTRACT
LLM routers aim to balance quality and cost of generation by classifying queries and routing them to
a cheaper or more expensive LLM depending on their complexity. Routers represent one type of what
we call LLM control planes: systems that orchestrate use of one or more LLMs. In this paper, we
investigate routers’ adversarial robustness.
We first define LLM control plane integrity, i.e., robustness of LLM orchestration to adversarial in-
puts, as a distinct problem in AI safety. Next, we demonstrate that an adversary can generate query-
independent token sequences we call “confounder gadgets” that, when added to any query, cause LLM
routers to send the query to a strong LLM.
Our quantitative evaluation shows that this attack is successful both in white-box and black-box settings
against a variety of open-source and commercial routers, and that confounding queries do not affect
the quality of LLM responses. Finally, we demonstrate that gadgets can be effective while maintaining
low perplexity; thus, perplexity-based filtering is not an effective defense. We finish by investigating
alternative defenses.
1 Introduction
Large language models (LLMs) exhibit remarkable capabilities on many tasks. Today, hundreds of open-source and
proprietary LLMs are available at different prices, ranging from expensive, state-of-the-art models to cheaper, smaller,
less capable ones. LLM operators typically provide API access to their models (especially higher-quality models) on a
pay-per-query basis. This imposes non-trivial costs on LLM-based applications and systems.
Developers who want to integrate LLMs into their applications must therefore consider both utility and cost. They want
to maximize the quality of responses to their queries while minimizing the cost. The two objectives conflict with each
other: larger models tend to generate higher-quality answers but charge more per query. For example, at the time of
this writing, GPT-3.5-turbo costs $0.5/$1.5 per 1M input/output tokens, GPT-4o-mini $0.15/$0.6, GPT-4o $2.5/$10,
o1-preview $15/$60. The difference in quality between models is not uniform across queries. For some queries, even a
cheap model can generate an acceptable response. More complex queries require an expensive model to obtain a quality
answer.
A natural solution to balancing performance and economic considerations is to take advantage of the availability of mul-
tiple LLMs at different price-performance points. Recently proposed LLM routing systems [5, 12, 27, 47, 53] orchestrate
two or more LLMs and adaptively route each query to the cheapest LLM they deem likely to generate a response of
sufficient quality. In the two-LLM case, let Ms be an expensive, high-quality model and Mw a weaker, lower-grade one.
Given query q, the routing algorithm R(·) applies a classifier to q that outputs 0 if Mw is sufficient for answering q, or 1
if Ms is required. The system then routes q accordingly.
LLM routing is an example of a general class of systems we call LLM control planes, which orchestrate the use of multiple
LLMs to process inputs, as further described in Section 2.
Our contributions. First, we introduce LLM control plane integrity as a novel problem in AI safety. Recently proposed
LLM control-plane algorithms are learned, calibrated classifiers (see Section 2). Their inputs are queries from potentially
adversarial users. Robustness of control-plane algorithms to adversarial queries is a new problem, distinct from adversarial
robustness of the underlying LLMs.
Figure 1: LLM routers classify queries and route complex ones to an expensive/strong model, others to a cheaper/weak
model. To control costs, LLM routers can be calibrated to maintain (for an expected workload) a specific ratio between
queries sent to the strong and weak models.
To initiate the study of this problem, we show that existing LLM routing algorithms are not adversarially robust. We
design, implement, and evaluate a method that generates query-independent adversarial token sequences we call “con-
founder gadgets.” If a gadget is added to any query, this query is routed to the strong model with high probability. Next,
we show that this attack is effective even in the transfer setting where the adversary does not have full knowledge of the
target LLM router (it is black-box), but has access to another router (e.g., an internally trained surrogate). We also evaluate
the integrity of commercial LLM routers, showing that they can be confounded as well.
Third, we investigate defenses. Our basic method generates gadgets that have anomalously high perplexity. Confounded
queries are thus easily distinguished from normal queries and can be filtered out by the routing system. Unfortunately, this
defense can be evaded by an adversary who incorporates a low-perplexity objective into the gadget generation algorithm,
producing gadgets that have low perplexity—and yet are effective at re-routing queries to the strong model. We also
discuss higher-level defenses, such as identifying users whose queries are routed to the strong model with abnormal
frequency.
Routing attacks can be deployed for various adversarial objectives, e.g., to ensure that the adversary always obtains the
highest-quality answer regardless of the target application’s internal routing policies and cost constraints, or to mali-
ciously inflate the target’s LLM costs. As LLM control planes grow in importance and sophistication, we hope that this
work will motivate further research on their adversarial robustness.
2 LLM Control Planes and Routing
Inference using large language models (LLMs) is traditionally monolithic: a single model is applied to an input or se-
quence of inputs. This methodology can be sub-optimal for various reasons. State-of-the-art models are often expensive,
with API access to LLMs costing as much as several dollars for each query. Elsewhere, distinct LLMs may excel at dif-
ferent tasks, and selectively using them may improve overall quality on a diverse workload. Finally, combining multiple
LLMs, even all trained for similar tasks, may become increasingly prevalent as performance improvements of individual
LLMs plateau [8–10].
Researchers and practitioners are therefore now developing inference architectures that use multiple LLMs to answer
queries. These LLMs are orchestrated by what we call an LLM control plane (borrowing the terminology from network-
ing [13]). The control plane may route queries or parts of queries to different LLMs, derive new strings to query the
underlying LLMs, combine answers from underlying LLMs, and more.
LLM routers. A prominent example of this emerging class of LLM control planes are LLM routers [27, 41, 47, 53, 59].
LLM routers decide which of the two (or, sometimes, more) LLMs to use to answer a query. In prescriptive routing,
the router applies some lightweight classifier to the input query that determines which underlying LLM to utilize for a
response. The classifier is itself a learned function that scores the complexity of the query. Deployments can then configure
a score threshold for when to route a query to the more expensive LLM. This threshold can be tuned using representative
workloads to achieve a desired cost-performance trade-off. Figure 1 shows the basic workflow of binary LLM routers.
Non-prescriptive routing [15, 20, 68] uses the responses from one or more underlying LLMs to determine which response
to return to the user. For example, FrugalGPT [20] submits the query to a sequence of models (ordered by price) called a
cascade, stopping when it obtains a response classified by the router as sufficient.
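For concreteness, a minimal Python sketch of cascade-style (non-prescriptive) routing is shown below. The names models_by_price and is_sufficient are illustrative placeholders, not FrugalGPT’s actual interface: the former is a list of model callables ordered from cheapest to most expensive, and the latter stands in for the learned quality check.

```python
def cascade_route(query, models_by_price, is_sufficient):
    """Cascade routing sketch: try models from cheapest to most expensive and
    stop at the first response judged sufficient."""
    response = None
    for model in models_by_price:
        response = model(query)              # query the next-cheapest model
        if is_sufficient(query, response):
            return response                  # first acceptable answer wins
    return response                          # otherwise fall back to the last (strongest) model's answer
```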
In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of
responses [31, 45, 57, 58].
The LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes
do. Ensemble approaches such as mixture-of-expert (MoE) [29, 30, 52, 56] architectures select a subset of underlying
models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly,
but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference
costs by using fewer and/or less complex underlying models.
Applications of LLM routers. A key use case for LLM routers is to help LLM-based applications reduce costs. Several
commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. By replacing a
few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. The
service selects the optimal LLM and forwards the queries. Commercial router services claim that this results in significant
cost savings: up to 98% in the case of Martian [5], and 10× in the case of NotDiamond [7].
3 LLM Control Plane Integrity
In this section, we define LLM control plane integrity. Informally, it means that the decisions a control-plane algorithm
makes about queries to the underlying LLMs cannot be subverted by adversarial queries. Looking ahead, we will focus
on one class of control plane: predictive LLM routing as used to manage cost.
Formalizing control planes. An LLM control plane Rω is a potentially randomized algorithm. It is parameterized by
a string ω, called the parameters. It utilizes some number n of LLMs denoted by M. We will mostly focus on the
case of n = 2, and, for reasons that will be clear in a moment, use Ms (“strong”) and Mw (“weak”) to denote the two
underlying LLMs. Then inference on an input x ∈ X, for some set X of allowed queries, is performed by computing
a response via y ←$ R^M_ω(x). Here we use ←$ to denote running R with fresh random coins; we use ← when R is
deterministic. We focus on inference for a single query, but it is straightforward to extend our abstraction for control
planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a
function of a sequence of queries and responses.
LLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying
LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive
control plane would eat into cost savings incurred by utilizing cheaper underlying LLMs for some queries. For example,
predictive binary routers use relatively simple classifiers to determine which of Ms or Mw should be used to respond to a
query.
Inference flow. Given a set of LLMs M, a control plane Rω, and an input x, an LLM inference flow is the sequence of
LLM invocations Mij(zj) for 1 ≤ j ≤ m and ij ∈ {w, s} made when executing R^M_ω(x). Here m is the total number of
LLM invocations, and z1, . . . , zm are the queries made to the underlying LLMs. Should R be randomized, the sequence
and its length are random variables. An inference flow can be written as a transcript
T = (i1, z1), (i2, z2), . . . ,(im, zm)
of pairs of model indexes ij ∈ {w, s} and model inputs zj. Note that for simplicity we ignore the potential for paral-
lelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ {(w, x), (s, x)}. We write
submitting a sequence of inferences ⃗x = ⃗x1, . . . , ⃗xq to a control plane as

    R^M_ω(⃗x) = (R^M_ω(⃗x1), . . . , R^M_ω(⃗xq))
where note that each invocation could result in multiple underlying LLM invocations. In the binary router case, however,
each invocation results in a single LLM invocation.
An inference flow policy dictates the control plane designer’s intention regarding use of the underlying models. For
example, an application may want to ensure that only a small fraction of queries go to the expensive model Ms. We can
define this as a predicate over a sequence of transcripts. In our binary router example, the policy can be more simply
defined as a predicate P over (input, model) pairs (⃗ x1, i1), . . . ,(⃗ xq, iq) since this fully defines the sequence of transcripts.
For example, a policy might specify that the strong model is used in at most an ϵ fraction of inferences:
    P((⃗x1, i1), . . . , (⃗xq, iq)) = ( (1/q) · Σ_{j=1}^{q} I(ij) ≤ ϵ )
where I(ij) = 1 if ij = s and I(ij) = 0 if ij = w. In other words, the predicate is that the fraction of queries routed to the
strong model is bounded by ϵ.
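For concreteness, this policy predicate can be evaluated as in the following sketch, where the transcript is represented simply as the list of model indices i1, . . . , iq:

```python
def policy_holds(model_indices, epsilon):
    """Inference-flow policy sketch: at most an epsilon fraction of queries
    may be routed to the strong model (index 's')."""
    frac_strong = sum(1 for i in model_indices if i == "s") / len(model_indices)
    return frac_strong <= epsilon

# Example: 2 of 5 queries went to the strong model, so the policy with epsilon = 0.5 holds.
assert policy_holds(["w", "s", "w", "s", "w"], epsilon=0.5)
```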
Control plane integrity. A control plane integrity adversaryis a randomized algorithm A that seeks to maliciously guide
inference flow.
In an unconstrained LLM control plane integrity attack, the adversary A seeks to generate inputs ⃗x = ⃗x1, . . . , ⃗xq such
that running R^M_ω(⃗x) generates a transcript for which P((x1, i1), . . . , (xq, iq)) = 0. This attack could be launched by an
adversary who wants to maximize inference costs for a victim application using an LLM router.
A harder setting requires input adaptation, where the adversary is given inputs x1, . . . , xq and must find new inputs
x̂1, . . . , x̂q for which the resulting transcript satisfies P((x̂1, i1), . . . , (x̂q, iq)) = 0. There will be some competing constraint,
such as that xj and x̂j are very similar for each j, or that the outputs yj ←$ R^M_ω(xj) and ŷj ←$ R^M_ω(x̂j) are close. In the
routing context, the adversary’s goal is to increase the fraction of queries that get routed to the strong model, in order to
improve the overall quality of responses, drive up the victim application’s inference costs, or both.
Relationship to evasion attacks. Evasion attacks [25, 43, 60] against an inference system (also called adversarial exam-
ples [32, 48, 49]) would, in our setting, seek to find a small modification ∆ to an input x such that R^M_ω(x + ∆) ≠ R^M_ω(x)
where addition is appropriately defined based on input type (e.g., slight changes to text).
Our attack setting is not the same. The control plane integrity adversary seeks to maliciously control the inferenceflow, not
necessarily the output of inference. In an unconstrained attack, the adversary does not care what outputs are generated.
In the input adaptation attack, the adversary seeks to craft inputs that modify the inference flow yet do not change the
responses of the strong underlying LLM to the extent possible. Looking ahead, we will use evasion techniques in our
adaptation attacks against learned control plane routers, but, importantly, not the overall inference.
In the other direction, undermining LLM control plane integrity could be a stepping stone toward evasion attacks. For
example, if R^M_ω is used to classify malicious content by combining LLMs each tuned to different types of harm categories,
then modifying inputs to force inference flows away from appropriate models could aid evasion. We leave evaluation of
how control-plane integrity attacks can enable evasion to future work.
Threat models. Within the context of control plane integrity attacks against LLM routers, we identify several threat
models that differ in terms of the adversary’s goals and their knowledge about the target control plane R^M_ω.
In terms of goals, an adversary may seek to inflate the costs of a victim application that utilizes an LLM control plane.
As a kind of denial-of-service attack, such cost inflation would penalize the application developer who expects routing
to control costs. Another adversarial goal could be arbitrage: consider an application that charges X dollars per query,
whereas directly using Ms costs Y > X. The application’s lower rate X makes economic sense assuming it uses a router
to route the bulk of queries to a cheaper model Mw. An input adaptation attack in this setting can gain (indirect) access to
Ms, obtaining an arbitrage advantage of Y − X per query. To be effective, this arbitrage adversary would want to ensure
that adaptations do not lower response quality (i.e., it extracts all the value out of rerouting to Ms). As before, the victim
in this case is the application that relies on routing to lower its costs (unsuccessfully, under this attack).
We now discuss adversarial capabilities. We assume that our victim application’s prompt includes a substring that can be
controlled by the adversary. This represents many real-world apps such as chatbots, coding assistants, writing assistants,
and others, that insert user inputs into an LLM prompt. In crafting adversarial portions of prompts, an adversary may have
various levels of knowledge about the victim application’s router. We consider the following knowledge settings:
• White-box setting: The adversary knows the control plane algorithm and its parameters ω.
• Black-box (transfer) setting: The adversary does not know the control plane algorithm R and ω for the target model,
but instead knows another control plane algorithm R′_ω′ and its parameters. We refer to R′_ω′ as the surrogate. For
example, this could arise if an adversary trains their own router using available data. In this setting our attacks are
also zero-shot in that they do not require any interaction with the target control plane before the query that is being
rerouted.
4 Confounding Control Planes with Gadgets
We now turn to our main contribution: a methodology for attacking LLM control plane integrity. The key insight is that
an adversary can modify queries to mislead or “confound” the routing logic into routing these queries to an LLM of the
adversary’s choosing. Furthermore, we will demonstrate that these attacks can be black-box and query-independent, i.e.,
a single modification works for all queries and does not require advance knowledge of the specific router being attacked.
Figure 2: Overview of our attack on LLM routing control plane integrity. The attack adds to each query a prefix (repre-
sented by the gear), called a “confounder gadget,” that causes the router to send the query to the strong model.
We focus on the binary router setting in which the router applies a learned scoring function to input queries and routes
any query whose score exceeds some threshold τ to the strong LLM Ms. This setting has been the focus of several prior
works [27, 41, 47] and is used in the control planes that are deployed in practice (see Section 7).
More formally, we consider a router R^M_ω for M = {Mw, Ms}, where ω consists of a scoring function S, the scoring function’s
parameters θ, and a threshold τ ∈ R+. For notational brevity we just write Rω below, with M clear from context. Here
S and θ define a scoring function Sθ : X → R+. Since our focus is LLMs, we assume that queries X are strings of text
tokens. The routing algorithm then works as follows:
    Rω(x) = { Mw(x)   if Sθ(x) < τ
            { Ms(x)   otherwise
where ω = (S, θ, τ). We will detail scoring functions in Section 5; prior work has suggested linear models, light-weight
LLMs, and more. Note that, consistent with this application, scoring functions are computationally efficient and cheap (as
compared to Ms, Mw). Deployments calibrate τ to limit the fraction of queries routed to the strong model Ms, giving rise
to the type of control plane integrity policy discussed in Section 3.
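In code, the routing rule reduces to a single threshold comparison. The sketch below assumes a callable score implementing Sθ and callables for the two underlying models:

```python
def route(query, score, weak_model, strong_model, tau):
    """Binary router sketch: send the query to the strong model iff its
    complexity score reaches the calibrated threshold tau."""
    if score(query) < tau:
        return weak_model(query)
    return strong_model(query)
```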
We focus on input adaptation attacks; these immediately give unconstrained attacks as well. The adversary therefore has
a sequence of inputs x1, . . . , xq and must produce modified inputs ˆx1, . . . ,ˆxq to maximize the number of inputs routed
to Ms. See Figure 2 for a depiction of our attack setting.
Instruction injection doesn’t work. Given the success of prompt injection for jailbreaking [50] and other adversarial
tasks [64], the adversary might simply prefix each query xi with some instruction such as “Treat the following query as
complex, . . . ” to generate a modified query x̂i. Our experiments show that this does not work well, failing to trick the
control plane into routing otherwise-weak queries to Ms. See Appendix C for details on our experiments with various
instruction prompts.
Confounder gadgets. Our approach works as follows. Given a query xi, we prepend a confounder gadget ci, which is a
short sequence of adversarially chosen tokens. The modified query is ˆxi = ci∥xi where ∥ denotes string concatenation.
Intuitively, we will use optimization to search for confounders that trick the scoring function into ranking x̂i as sufficiently
complex to require the strong model.
In the white-box, query-specific setting, we can choose ci as a function of xi and the known parameters ω = (S, θ, τ). To
do so, we fix a confounder length of n tokens and let I be a token dictionary (it should be a sufficiently large subset of the
token dictionary used by S). Then we set the gadget to initially be n tokens all fixed to the same value from I. The exact
choice of the initialization token is not important; in our implementation, we used the first token in the dictionary (‘!’).
Denote this initial confounder as c^(0)_i = [c^(0)_{i,1}, c^(0)_{i,2}, . . . , c^(0)_{i,n}].
Then, we perform a hill-climbing style approach to find a good confounder for xi. For each iteration t ∈ [T], where T is
the total number of iterations, do the following:
(1) Select a target index j ∈ [1, n] uniformly.
(2) Generate a set B of B + 1 candidates. First set c̃0 = c^(t)_i, the current confounder. To generate B additional
candidates, select replacement tokens from I uniformly, forming the set {t_b ← I}^B_{b=1}. Replace the j-th token in the
current confounder c̃0 with t_b:

        c̃_b = [c^(t)_{i,1}, . . . , c^(t)_{i,j−1}, t_b, c^(t)_{i,j+1}, . . . , c^(t)_{i,n}] .
    Let B = {c̃0, . . . , c̃B}.

(3) Find the candidate that maximizes the score:

        c^(t+1)_i ← arg max_{c ∈ B} Sθ(c∥xi) .        (1)

The final confounder c^(T)_i is used with query xi. We abort early if, after 25 iterations, there is no update to the confounder
gadget. Technically, we could abort early if we find a confounder whose score exceeds τ. Running further can be useful
when an adversary does not know τ.
The attack’s runtime is dominated by T · B times the cost of executing S. In practice, scoring functions are designed to be fast (otherwise
routers would significantly increase the latency of applications that use them). We report precise timings later; in summary,
the attack is fast because we can set T to be relatively small and still find high-scoring confounders.
Due to the randomness in index and token selection, the method converges to different, yet similarly effective, confounder
gadgets on each run. Our evaluation will thus measure average performance over multiple gadgets.
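The search procedure can be sketched in a few lines of Python. This is a sketch rather than our exact implementation: score stands in for the scoring function Sθ applied to a text string (in the black-box setting described below, a surrogate’s scoring function is passed instead), vocab for the token dictionary I, and plain string concatenation approximates detokenization; the gadget length n = 10 is an illustrative choice, while B = 32, T = 100, and the 25-iteration early abort mirror the settings used in our experiments.

```python
import random

def generate_confounder(score, vocab, query="", n=10, B=32, T=100, patience=25):
    """Hill-climbing search for a confounder gadget (sketch)."""
    gadget = [vocab[0]] * n                      # initialize every position to one token, e.g. '!'
    stale = 0
    for _ in range(T):
        j = random.randrange(n)                  # (1) pick a position to mutate
        candidates = [list(gadget)]              # (2) current gadget plus B single-token mutations
        for _ in range(B):
            cand = list(gadget)
            cand[j] = random.choice(vocab)
            candidates.append(cand)
        # (3) keep the highest-scoring candidate (ties keep the current gadget)
        best = max(candidates, key=lambda c: score("".join(c) + query))
        if best == gadget:
            stale += 1
            if stale >= patience:                # early abort: no update for `patience` iterations
                break
        else:
            gadget, stale = best, 0
    return "".join(gadget)
```

Passing an empty query yields the query-independent variant discussed next.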
Query-independent confounders. One downside of the per-query approach is that the adversary must repeat, for each
query, the search for a good confounder. In practice, the adversary might prefer a query-independent attack. Our con-
founder gadget approach extends to this setting readily: perform the search routine above for an empty query. In other
words, just ignore xi in the query-dependent attack above, replacing Sθ(c∥xi) in Eq. 1 with Sθ(c). This finds a sin-
gle query-independent confounder c that can be prefixed to all queries, i.e., ˆxi = c∥xi. We will show that this works
surprisingly well.
It is tempting to assume the reason a query-independent confounder works well is that a good scoring function should be
roughly monotonic in query extensions, i.e., one might expect that Sθ(c∥x) ≥ Sθ(c) for almost any suffix x. This intuition
is not correct. In our experiments, we found that Sθ(c∥x) < Sθ(c) for many x and some of the routers discussed below.
Nevertheless, by ensuring that Sθ(c) is sufficiently high (by setting the number of iterations T higher), the resulting query-independent
confounder works well. That is, we at least get that Sθ(c∥x) > Sθ(x).
The black-box setting: confounders that transfer. Finally, the attacks so far are in the white-box setting, where the
attacker can optimize directly against Sθ. While in some cases routing control planes will be public knowledge, in others,
including the proprietary control planes we explore in Section 7, they are hidden. This gives rise to the black-box setting.
While an attacker might seek to perform model extraction attacks [43, 65] to learn θ, we instead explore attacks that
transfer from one router to another.
In more detail, we assume the adversary has access to a router R′_ω′, called the surrogate, that is trained on data similar to
that used for the target router. Then the attack is the same as above, except that we use the surrogate’s scoring function
S′_θ′ instead of the target’s Sθ. Again, we will see that this works surprisingly well: the query-independent confounders
found for the surrogate transfer to successfully reroute queries against the target router.
Putting it all together. In summary, our methodology for input adaptation attacks is:
(1) (Preprocessing) Develop a single query-independent confounder gadget c, using either the target router or surrogate
to score the confounder.
(2) (Input adaptation) For each query xi, submit ˆxi = c∥xi instead to obtain a response ˆyi.
The confounder is applied to all queries, i.e., the adversary does not need to guess whether the original query would
have been routed to the weak or strong model. In the rest of the paper, we demonstrate that confounders rarely result in
“downgrades,” i.e., rerouting of queries from the strong to weak model.
We have experimented with variations of this approach that don’t work quite as well, for example adding c as a suffix
instead of a prefix. See Appendix B for details.
5 Open-Source Routers: Experimental Setup
To evaluate efficacy of confounder gadgets generated using the method from Section 4, we perform experiments with
several LLM routers. This section explains our experimental setup for the open-source routers proposed in the research
literature [47]; results of this evaluation appear in Section 6. In Section 7, we discuss experiments with proprietary,
commercial routers. Figure 3 shows the summary of our experimental setup.
Routers                     | Notation
Similarity-weighted ranking | RSW
Matrix factorization        | RMF
BERT classifier             | RCLS
LLM scoring                 | RLLM

LLM pair | Strong (Ms)        | Weak (Mw)
1        | Llama-3.1-8B       | 4-bit Mixtral 8x7B
2        | Llama-3.1-8B       | Mistral-7B-Instruct-v0.3
3        | Llama-3.1-8B       | Llama-2-7B-chat-hf
4        | GPT-4-1106-preview | 4-bit Mixtral 8x7B

Benchmark     | Description
MT-Bench [71] | 160 open-ended questions
MMLU [35]     | 14,042 multi-choice questions
GSM8K [24]    | 1,319 grade-school math problems
Figure 3: Summary of our setup for routers, underlying LLMs, and benchmark datasets used in the experiments.
In all experiments, we assume that the adversary’s goal is to reroute queries to the strong model. In Appendix E, we
evaluate efficacy of the attack when the goal is to reroute to the weak model.
Target routers. We focus our evaluation on the four prescriptive routing algorithms proposed by Ong et al. [47],
which provides open-source code and trained parameters, and does so for a representative variety of routing ap-
proaches: similarity-based classification [41, 59], an MLP constructed via matrix factorization [59], BERT-based clas-
sification [27, 53, 59], and a fine-tuned LLM.
The routers we evaluate were trained in a supervised fashion using a set of reference (training) queries whose performance
score on each of the considered models is known. The scores were computed from a collection of human pairwise rankings
of model answers for each of the queries. We note that while the routers we consider are all learned using this training
set, there is no reason to believe a non-learning-based approach (e.g., rule based) to routing would be more adversarially
robust.
We now outline the routing methods considered in this work. See Ong et al. [47] for their full implementation details.
Similarity-weighted ranking: The first method is based on the Bradley-Terry (BT) model [17]. For a given user query,
this model derives a function to compute the probability of the weak model being preferred over the strong model. The
probability-function expressions all share parameters, which are optimized to minimize the sum of cross-entropy losses
over the training-set queries, where each element in the sum is weighted by the respective query’s similarity to the
user’s query (computed as cosine similarity between embeddings, derived using OpenAI’s text-embedding-3-small [6]).
We denote this method as RSW .
Matrix factorization: The second method is based on matrix factorization. The training queries are used to train a bilinear
function mapping a model’s embedding and a query’s embedding to a score corresponding to how well the model performs
on the query. Routing is done by computing the score of the input query for each model, and choosing the highest-scoring
model. We denote this method as RMF .
BERT classifier: The third method involves fine-tuning a classifier, based on the BERT-base architecture [26], to predict
which of the two models produces a better response for the given query or whether they do equally well (a tie). The
routing decision is based on the probability of the weak model providing a better response versus the strong model or the
tie. We denote this method as RCLS .
LLM classifier: The last method is based on asking an LLM to provide a score in the range 1–5 indicating how much an AI
expert would struggle to respond to a given query, based on the query’s complexity. For this, Ong et al. fine-tuned a Llama-3-8B
model [4] using their reference set of queries and corresponding scores. We denote this method as RLLM .
Underlying LLMs. In [47], Ong et al. trained the routers with GPT-4-1106-preview [14] as the strong model and Mixtral
8x7B [39] as the weak model. They report successful generalization between the underlying LLMs, stating that their
routers trained for a particular strong-weak LLM pair can be used with other strong-weak LLM pairs.
To allow our evaluation to scale, we use as the strong model Ms the open-sourced Llama-3.1-8B [3] and as Mw the
4-bit quantized version of Mixtral 8x7B (for efficiency reasons). This reduced the cost of our experiments by avoiding
expensive GPT API calls and lowering the computational costs of Mixtral. Unless mentioned otherwise, all of our results
will be evaluated with respect to this pair, which we refer to as LLM pair 1. We performed more limited experiments with
the original strong, weak model pair (LLM pair 4) and had similar success in rerouting.
We additionally performed experiments with two further weaker models, in order to better evaluate the case where weak
models produce much lower-quality responses for queries (compared to the strong model). In particular, we define LLM
pair 2 as the strong model plus Mistral-7B-Instruct-v0.3 [38] and LLM pair 3 as the strong model plus Llama-2-7B-chat-
hf [63]. The weaker models in pairs 2 and 3 were chosen to represent smaller (Mistral 7B) and older-generation (Llama-2)
models: according to the Chatbot Arena LLM ranking leaderboard [1, 21], Llama-3.1-8B is ranked in the 58th place,
Mixtral 8x7B at the 88th place, Mistral-7B at the 108th place, and Llama-2-7B at the 125th place.
The LLM strong-weak pairs with which we performed experiments are summarized in Figure 3.
Evaluation datasets. We will evaluate our attacks using three standard LLM benchmarks as workloads: MT-Bench [71],
a dataset of 160 open-ended questions, MMLU [35], a dataset of 14,042 multi-choice questions, and GSM8K [24], a
dataset of 1,319 grade-school math problems. Note that Ong et al. [47] flagged that some data points are “contaminated”,
i.e., they are too similar to the ones used in their training of the routers. We use these datasets without these contaminated
elements, resulting in 72 MT-bench queries, 14,037 MMLU queries, and 1,307 GSM8K queries.
For MMLU and GSM8K, we will require that the LLMs respond in a predefined format so we can parse and compare
the responses to ground-truth answers. To facilitate this, we prepended formatting instructions to the query, inserted as
a prefix before the gadget in the case of confounded queries. In other words, a confounded query ends up defined as
ˆxi = instr∥c∥xi for instruction template instr, confounder gadget c, and original query xi. Thus in this case we model
a scenario where the adversary only controls a part of the prompt rather than the entire prompt. See Appendix B for
formatting examples and ablations.
Router calibration. For each workload, we must calibrate each router by setting the threshold τ to achieve some target
fraction ϵ of queries routed to the strong model. Note that the calibration process we use is agnostic to the underlying
LLM pair. We therefore must define 12 distinct thresholds, one for each (router, dataset) pair. For our experiments here,
we set ϵ = 0.5, meaning the goal is to have about half the queries routed to the strong model. This reflects an application
developer that seeks to control for costs, even if it may mean sacrificing some performance for some workloads.
To calibrate for MT-bench, we use the Chatbot Arena [21] dataset as the calibration set, computing the threshold using
the 55K queries for which Ong et al. precomputed the scoring function outputs. To calibrate for MMLU and GSM8K,
we select 1,000 queries uniformly at random and use these to set thresholds. Looking ahead, we do not use these queries
during evaluation of the attacks.
Note that it is important that the distribution of calibration queries be similar to the distribution of the target workload (and,
in our experiments, the test queries). We observed that the Chatbot Arena-based threshold did not transfer well to MMLU
and GSM8K, resulting in the majority of queries (≈ 98%) routed to the strong model.
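Concretely, calibration amounts to choosing τ as a quantile of the scores on the calibration set. A minimal sketch, assuming a callable score for Sθ and a target fraction ϵ of queries to be routed to the strong model:

```python
def calibrate_threshold(score, calibration_queries, epsilon=0.5):
    """Pick tau so that roughly an epsilon fraction of calibration queries
    score at or above it (and would thus be routed to the strong model)."""
    scores = sorted(score(q) for q in calibration_queries)
    cutoff = int((1 - epsilon) * len(scores))    # the (1 - epsilon) quantile
    return scores[min(cutoff, len(scores) - 1)]
```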
6 Rerouting Open-Source Routers
We now empirically evaluate our rerouting attack against the open-source routers described in the previous section. Unless
otherwise specified, our evaluation focuses on the query-independent attack setting where the attacker first finds a fixed
set of gadgets and then uses them to attack arbitrarily many queries. This is the conservative setting, and query-specific
gadgets — which carry a higher computational cost — generally work better.
In Appendix C we evaluate optimization-free alternatives for generating our confounding gadgets, and show they signifi-
cantly underperform our optimization-based approach.
White-box confounder gadget generation. Following our attack framework described in Section 4, we construct a
query-independent control-plane gadget designed to confuse each router. We start with the white-box setting, setting the
batch size to B = 32 and the number of iterations to T = 100, ignoring thresholds. We generate four sets of n = 10
gadgets, i.e., ten for each router. Examples of generated gadgets can be found in Appendix A.
When reporting scores below, we therefore report the average over the n gadgets used with all 72 MT-bench queries, 100
randomly selected MMLU queries, and 100 randomly selected GSM8K queries. None of these testing queries were used
in the training of the routers or their calibration.
Runtime and convergence. Figure 4 shows the convergence rates for 10 different gadgets, against different routing
algorithms. The overall average number of iterations before convergence is 58. Generation against RSW converges the
[Figure 4 plots: routing score vs. optimization iteration for 10 gadget-generation runs (Attack #0–#9) against each router; panels (a) RSW, (b) RMF, (c) RCLS, (d) RLLM.]
Figure 4: Convergence of gadget generation against different routing algorithms.
         | RSW Upgrade | RSW Strong   | RMF Upgrade | RMF Strong   | RCLS Upgrade | RCLS Strong  | RLLM Upgrade | RLLM Strong
MT-Bench | 100 ± 0     | 81 → 100 ± 0 | 100 ± 0     | 58 → 100 ± 0 | 100 ± 0      | 67 → 100 ± 0 | 73 ± 5       | 57 → 88 ± 2
MMLU     | 90 ± 1      | 43 → 94 ± 1  | 78 ± 4      | 53 → 90 ± 2  | 100 ± 0      | 47 → 100 ± 0 | 95 ± 1       | 53 → 98 ± 1
GSM8K    | 98 ± 0      | 52 → 99 ± 0  | 100 ± 0     | 54 → 100 ± 0 | 100 ± 0      | 56 → 100 ± 0 | 94 ± 3       | 53 → 97 ± 1
Table 1: The white-box attack’s rerouting success rate. “Upgrade” is the percentage of “Weak” queries successfully
rerouted to the strong model by adding a confounder gadget. “Strong” shows the change in the percentage of queries that
are routed to the strong LLM Ms before and after our rerouting attack.
fastest (50 iterations on average), RMF the slowest (66 iterations on average). Interestingly, the score of RSW does not
increase much during optimization but is still sufficient for a successful attack.
Runtime varies significantly when generating gadgets against different routing methods. On a machine with one A40 GPU,
4 CPUs, and 180G RAM, a single iteration takes 36.9 s, 8.4 s, 0.8 s, and 6.9 s for the RSW , RMF , RCLS , and RLLM
routers, respectively. On average, it takes around 31 minutes to generate a gadget for theRSW router, 9 minutes forRMF ,
50s for RCLS , and 6 minutes for RLLM .
Rerouting success rates. Recall that our attack adds the same confounder gadget to all queries. We start by evaluating the
reroute success rates: What fraction of confounded queries does the target router send to the strong model Ms? We show
the results for our attack in Table 1. Focusing first on the columns labeled ‘Strong’, the notation “X → Y ± Z” indicates
that for unmodified queries, X% are routed to the strong model Ms, and that for the n = 10 confounders we achieved an
average of Y % of queries sent to Ms. The standard error is Z. Note that while calibration targets ϵ = 0.5, we see some
natural variance in X for the test sets.
To break down success further, we additionally report the upgrade rate, which focuses on the percentage of queries that
were (a) originally routed to the weak model, and (b) routed to the strong model after they were modified with the
confounder gadget. Because in our attacks few queries get “downgraded” (confounders cause them to be rerouted to the
weak model instead of strong), the upgrade rate dictates the success rate.
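Concretely, both rates can be computed from the routing decisions before and after confounding; in the sketch below, a boolean True means the query was routed to the strong model:

```python
def upgrade_downgrade_rates(strong_before, strong_after):
    """Upgrade rate: fraction of originally weak-routed queries rerouted to the
    strong model after confounding. Downgrade rate: the reverse direction."""
    pairs = list(zip(strong_before, strong_after))
    weak_orig = [after for before, after in pairs if not before]
    strong_orig = [after for before, after in pairs if before]
    upgrade = sum(weak_orig) / len(weak_orig) if weak_orig else 0.0
    downgrade = sum(not a for a in strong_orig) / len(strong_orig) if strong_orig else 0.0
    return upgrade, downgrade
```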
As can be seen, the gadgets reroute almost all weak queries to the strong model. In most cases we see 100% success, or
close to it. The worst case still achieves 88% rerouting success, boosting the fraction of queries sent to the strong LLM by
1.5x. Rerouting fails only for some queries that even after confounding are sent to the weak model: the fixed gadget did
not sufficiently increase the router’s estimate of those queries’ complexity. This is the only source of error for the attack:
no queries in these experiments got “downgraded”, i.e., a query that would otherwise be sent to Ms ends up rerouted to
Mw. This also means that adding the confounder to every single query does not have a negative impact on rerouting efficacy.
We report standard error values for both the upgrade rates and the total percentage of queries routed to the strong model.
The maximal standard error is in the low single digits, indicating similar success rates across gadgets.
Quality of attack responses. We now turn to evaluating the quality of the responses generated by the attack. Note that
because we have calibrated the routers to target ϵ = 0.5, our attacks can improve response quality by rerouting to the
stronger model. In the other direction, our attacks add confounder gadgets which might degrade response quality.
         | RSW Original | RSW Confounded | RMF Original | RMF Confounded | RCLS Original | RCLS Confounded | RLLM Original | RLLM Confounded
MT-Bench | 13.8         | 12.3 ± 0.2     | 12.6         | 12.3 ± 0.2     | 13.1          | 12.1 ± 0.2      | 12.7          | 12.7 ± 0.4
MMLU     | 20.4         | 20.1 ± 0.1     | 20.0         | 20.3 ± 0.1     | 20.2          | 20.5 ± 0.1      | 21.0          | 19.6 ± 0.1
GSM8K    | 17.1         | 15.1 ± 0.3     | 17.0         | 15.2 ± 0.3     | 17.0          | 15.0 ± 0.2      | 16.4          | 15.2 ± 0.3
Table 2: Average perplexity of responses to the original and confounded queries, in the white-box setting for LLM pair 1.
Response perplexity does not change significantly when adding the confounder gadget.
         | RSW Original | RSW Confounded | RMF Original | RMF Confounded | RCLS Original | RCLS Confounded | RLLM Original | RLLM Confounded
MT-Bench | 8.4          | 8.3 ± 0.0      | 8.4          | 8.4 ± 0.0      | 8.4           | 8.3 ± 0.0       | 8.3           | 8.2 ± 0.1
MMLU     | 61           | 66 ± 0         | 64           | 64 ± 1         | 63            | 65 ± 0          | 67            | 66 ± 0
GSM8K    | 46           | 64 ± 1         | 50           | 67 ± 1         | 50            | 63 ± 1          | 44            | 64 ± 1
Table 3: Average benchmark-specific scores of responses to the original and confounded queries, in the white-box setting
for LLM pair 1. Rerouting to the strong model improves quality of responses as long as there is a significant gap between
the benchmark performance of the weak and strong LLMs.
As a first measure of response quality, we compare the perplexity scores for unmodified responses and confounded query
responses. Text perplexity [37] is a well-known method for approximating “naturalness” of text sequences. Perplexity
can be computed using an LLM; we use GPT-2 [51] for this purpose as it is a standard choice [16, 69].¹ Table 2 shows the
results. As can be seen, adding the confounder gadget to queries does not significantly change response perplexity. To the
extent that it does, it usually somewhat decreases response perplexity, i.e., makes it more “natural”. That said, perplexity
is a coarse measure of “naturalness,” and it does not measure whether the response is correct. In particular, responses of
strong and weak LLMs tend to have similar perplexities. We further discuss this issue in Appendix D.
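Perplexity under GPT-2 can be computed with the Hugging Face transformers library; the following is a minimal sketch (the exact model variant, truncation length, and outlier filtering are implementation choices):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2: exp of the mean token-level negative log-likelihood."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()
```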
We thus also evaluate using the following benchmark-specific metrics to assess response quality:
• MT-bench: We score the responses on a scale of 1–10 using an LLM-as-a-judge methodology [71]. We use
GPT-4o [2] as the judge and ask it to provide a score given a pair of a query and a corresponding response.
• MMLU: We parse the responses and compare the answer to the ground truth. In cases where the response did not
fit any known multi-choice format, we marked the response as a mistake. We report accuracy as the percentage of
responses that match the ground truth.
• GSM8K: similar to MMLU, except that the questions are math problems rather than multiple choice; we parse the
answers according to the expected format.
Table 3 shows that, according to these metrics, in most cases responses to the confounded queries are no worse, and in
some cases even better, than responses to the original queries. We attribute the improvement on the GSM8K benchmark
to the fact that the strong model performs significantly better than the weak model on this benchmark (57% vs. 33%). On
the MT-bench and MMLU benchmarks, strong and weak models have comparable performance (8.5 vs. 7.6 for MT-bench
and 66% vs. 64% for MMLU), thus routing does not degrade quality of responses and, consequently, the attack cannot
improve it.
To further demonstrate that the attack improves the quality of responses when there is a significant gap between the weak
and strong LLMs, we perform an additional evaluation with Mistral-7B-Instruct-v0.3 [38] and Llama-2-7B-chat-hf [63]
as the weak LLMs (LLM pairs 2 and 3). Mistral-7B achieves 7.4, 57%, and 25% on MT-bench, MMLU, and GSM8K,
respectively. Llama-2-7B achieves 6.4, 44%, and 21%. Table 4 shows that the rerouting attack improves quality of
responses when either of these LLMs is the weak model, and in particular for the weaker Llama-2-7B model.
LLM responses are sometimes affected by the confounder gadget. In some cases, the LLM responded with, for example,
“I can’t answer that question as it appears to be a jumbled mix of characters”. Still, the response continued with “However,
I can help you with the actual question you’re asking,” followed by the actual answer. We observed very few cases where
an LLM refused to answer due to the presence of the gadget. In most cases, the response did not mention anything
¹Some responses had abnormally high perplexity values (> 100), which we found do not correlate with quality, but these variations
disproportionately contribute to the average. We thus filter out such high-perplexity responses as outliers in both benign and attack
settings. We provide examples of filtered responses in Appendix D.
         | RSW Orig. | RSW Conf. | RMF Orig. | RMF Conf. | RCLS Orig. | RCLS Conf. | RLLM Orig. | RLLM Conf.
LLM pair 2
MT-Bench | 8.5       | 8.3 ± 0.0 | 8.4       | 8.3 ± 0.1 | 8.4        | 8.4 ± 0.1  | 8.4        | 8.3 ± 0.1
MMLU     | 55        | 64 ± 1    | 63        | 64 ± 0    | 58         | 66 ± 1     | 62         | 66 ± 0
GSM8K    | 46        | 64 ± 1    | 51        | 67 ± 1    | 49         | 63 ± 1     | 38         | 63 ± 2
LLM pair 3
MT-Bench | 8.4       | 8.3 ± 0.0 | 8.1       | 8.3 ± 0.1 | 8.3        | 8.4 ± 0.1  | 8.1        | 8.2 ± 0.1
MMLU     | 51        | 64 ± 1    | 57        | 63 ± 1    | 52         | 66 ± 1     | 59         | 66 ± 1
GSM8K    | 40        | 64 ± 1    | 44        | 67 ± 1    | 45         | 63 ± 1     | 37         | 64 ± 1
Table 4: Average benchmark-specific scores of responses to the original and confounded queries with Mistral-7B-Instruct-
v0.3 (LLM pair 2) or Llama-2-7B-chat-hf (LLM pair 3) as the weak model, in the white-box setting. Results further
emphasize that the rerouting attack improves quality of responses when there is a significant gap between the weak and
strong LLMs.
Surrogate |       ˆRSW         |       ˆRMF         |       ˆRCLS        |       ˆRLLM
Target    | RMF   RCLS   RLLM  | RSW   RCLS   RLLM  | RSW   RMF    RLLM  | RSW   RMF   RCLS
MT-Bench  | 99±1  88±5   45±5  | 100±0 96±2   39±3  | 100±0 79±9   51±5  | 100±0 83±5  85±7
MMLU      | 66±5  44±11  81±3  | 82±4  56±7   74±2  | 64±6  16±7   80±5  | 53±4  20±5  46±11
GSM8K     | 99±1  72±11  63±4  | 92±2  88±3   62±4  | 76±6  60±9   65±8  | 60±8  70±7  73±10
Table 5: Average upgrade rates for our attack in the black-box setting. This is the average percentage of queries rerouted
from the weak to strong model under the target router due to a confounder gadget generated using the surrogate. The
average downgrade rate (i.e., strong-to-weak rerouting) is 1.2% across all routers. Upgrade rates are lower than in the
white-box setting but still high, indicating that the attack transfers.
abnormal about the query. Intuitively, this reflects the fact that while LLMs are built to be robust to noisy inputs, the
router itself is not.
In summary, the attack is highly successful at rerouting queries from the weak to the strong model. Overall, quality
improves if there is a significant gap between the strong and weak LLMs used by the router. Either way, confounding has
no negative impact on the quality of responses.
Black-box attack results. Next, we consider the black-box attack, where the attacker does not know the algorithm
used by the target router. We assume that the attacker has access to another, surrogate router that it can use to generate
confounder gadgets. In effect, we evaluate transferability of the attack from a known, white-box router to unknown,
black-box routers.
Table 5 shows the results for all combinations of surrogate (denoted by ˆR) and target routers. For conciseness we focus
on the upgrade and downgrade rates for the remainder of this work. Upgrade rates are lower than in the white-box setting
but still high, indicating that the attack transfers. The LLM-based routing algorithm RLLM has the lowest rates, perhaps
because it is the most complex of the four. The downgrade rate is 0 in most cases and is 1.2% on average.
Table 6 shows that the black-box attack does not increase the average perplexity of responses as generated by LLM
pair 1. Table 7 shows that the attack does not decrease benchmark-specific scores, other than some small decrease in
some cases for the MMLU benchmark. For GSM8K, similar to the behaviour observed in the white-box setting, we see
an improvement with our attack due to the performance difference between the strong and weak models for this task. This
indicates that confounding affects only the routing, not the quality of responses. When the weak model is significantly
weaker than the strong model, i.e., LLM pairs 2 and 3, the attack can improve the quality of responses significantly.
Query-specific gadgets. By default, our gadget generation method is query-independent and the same gadget can be used
to reroute any query. An adversary with more resources may instead generate a dedicated gadget for each query (using
the same algorithm).
Table 8 and Table 9 show the results for the white-box and black-box settings, respectively. (Here, percentage numbers
are not averaged and there is no standard error since we used a single gadget per query.) The white-box results are nearly
perfect; the black-box results are often better but sometimes somewhat worse than those for query-independent gadgets.
We conjecture that this is due to some level of overfitting.
Surrogate |      ˆRSW       |      ˆRMF       |      ˆRCLS      |      ˆRLLM
Target    | RMF  RCLS RLLM  | RSW  RCLS RLLM  | RSW  RMF  RLLM  | RSW  RMF   RCLS
MT-Bench  | 0.4  0.8  0.6   | 1.4  0.7  0.3   | 1.7  0.3  0.7   | 0.8  −0.6  0.0
MMLU      | 0.1  0.8  1.1   | 0.2  0.2  1.1   | 0.3  0.8  0.9   | 1.3  1.2   0.9
GSM8K     | 1.9  1.7  0.6   | 1.6  1.7  0.2   | 1.7  1.0  0.4   | 1.3  1.3   1.7
Table 6: Differences between average perplexity of responses to the original and confounded queries, in the black-box
setting, when the confounder gadget was generated for a different surrogate router than the target, for LLM pair 1. Positive
values indicate a lower average perplexity (more natural) of responses to the confounded queries; higher values are better
for the attacker. Standard errors were omitted for readability but are0.2 on average. As in the white-box setting, the attack
does not increase the average response perplexity.
Surrogate |       ˆRSW        |       ˆRMF        |       ˆRCLS       |       ˆRLLM
Target    | RMF   RCLS  RLLM  | RSW   RCLS  RLLM  | RSW   RMF   RLLM  | RSW   RMF   RCLS
LLM pair 1
MT-Bench  | −0.1  −0.1  0.0   | −0.1  −0.1  0.0   | −0.1  0.0   0.1   | −0.2  −0.1  −0.2
MMLU      | −0.1  0.3   −0.2  | 4.8   1.0   0.5   | 2.5   −1.3  −0.8  | 2.6   −0.9  0.3
GSM8K     | 14.9  9.6   15.2  | 18.6  13.8  14.7  | 13.4  6.8   12.6  | 13.6  11.3  10.4
LLM pair 2
MT-Bench  | −0.1  −0.1  −0.1  | −0.2  −0.2  −0.2  | −0.1  −0.1  0.0   | −0.2  −0.2  −0.2
MMLU      | 1.6   4.0   4.2   | 7.9   5.0   4.4   | 5.0   −2.9  3.2   | 5.2   −0.9  3.8
GSM8K     | 13.6  8.7   18.5  | 18.9  14.4  18.3  | 13.1  4.0   15.5  | 11.3  8.4   10.8
LLM pair 3
MT-Bench  | 0.2   0.0   0.1   | −0.1  −0.1  0.0   | 0.0   0.2   0.2   | −0.1  0.1   −0.1
MMLU      | 5.0   6.8   5.8   | 11.3  9.1   4.7   | 8.1   −3.7  4.8   | 7.8   0.1   7.2
GSM8K     | 20.5  13.4  20.9  | 24.3  18.6  21.6  | 17.9  11.2  18.9  | 16.7  15.2  14.2
Table 7: Differences between average benchmark specific scores of responses to the original and confounded queries,
when the confounder gadget was generated for a different surrogate router than the target (black-box setting) for three
LLM pairs. Positive values indicate a higher average score for responses to the confounded queries; higher values are
better for the attacker. Results are averaged across gadgets. Standard errors were omitted for readability and are on
average 0.1, 0.8, and 1.8 for MT-bench, MMLU and GSM8K, respectively. Aligned with the white-box setting, results
show almost no decrease in performance, and improvement when there is a performance gap for the LLM pair.
Results for LLM pair 4. As discussed in Section 5, we replace the strong model that was used by Ong et al. [47], GPT-4-
1106-preview (rank 28 in the Chatbot Arena leaderboard [1, 21]), with the open-sourced Llama-3.1-8B (rank 58) to reduce
the costs of our extensive set of evaluations. In this section we perform a smaller-scale evaluation of the quality-enhancing
attack performance when using GPT as the strong model, i.e., LLM pair 4. We evaluate this setting using three of the
n = 10 confounder gadgets for each router.
Table 10 shows the results across benchmarks in the white-box setting. Compared to the pair 1 setting (Table 3), the attack
results in a larger increase in benchmark performance. This further demonstrates that the attack’s effect on response quality
grows when the performance gap between the weak and strong models is larger.
7 Rerouting Commercial Routers
We evaluate our rerouting attack on several commercial routers: Unify [12], NotDiamond [7], OpenRouter [11], and
Martian [5]. These routers are available through black-box APIs. Therefore, we use our black-box attack with the
40 gadgets optimized for the open-sourced routers RSW , RMF , RCLS , and RLLM (10 per router). We perform this
evaluation using the MT-bench benchmark.
Unify. This router lets users specify a list of models from different providers and a metric configuration for routing
decisions. The available metrics are quality, time to first token, inter-token latency, and cost. The user can specify the
weight for each metric. Time, latency, and cost metrics are static and precomputed. The quality metric is computed for
RSW RMF RCLS RLLM
MT-Bench 100 100 100 100
MMLU 100 96 100 100
GSM8K 100 100 100 100
Table 8: Upgrade rates for query-specific gadgets, in the white-box setting. Results are nearly perfect, i.e. nearly all
confounded queries are routed to the strong model.
Surrogate |      ˆRSW       |      ˆRMF       |      ˆRCLS      |      ˆRLLM
Target    | RMF  RCLS RLLM  | RSW  RCLS RLLM  | RSW  RMF  RLLM  | RSW  RMF  RCLS
MT-Bench  | 100  83   71    | 100  83   48    | 100  73   52    | 100  67   83
MMLU      | 96   57   89    | 95   43   83    | 74   13   83    | 77   11   30
GSM8K     | 100  68   74    | 100  73   68    | 81   65   70    | 88   54   64
Table 9: Upgrade rates for query-specific gadgets, in the black-box setting. In most cases results are better than in the
query-independent setting, at the cost of a more resource intensive process.
each query using a neural scoring function that was trained on prompts from several open datasets (e.g., Open Hermes [62])
and labeled using an LLM-as-a-judge [71].
For our evaluation, we configure the router to choose between GPT-4o [2] as the strong model and Mixtral 8x7B [39] as
the weak model. We focus on the cost and quality metrics, and set the weight of time and latency to 0 so that they are
not factored into routing decisions. We manually calibrate the weights to 1 for the quality metric and 0.02 for the cost
metric. These weights result in 49% of the original, unmodified queries being routed to the strong model and 51% to the
weak model, resulting in a total cost of $0.13 for the 72 MT-bench queries. Adding confounder gadgets generated for the
four open-sourced evaluated routers results in upgrade rates of 79%, 88%, 91%, and 89%, respectively, averaged across
10 gadgets. The downgrade rate is zero in all cases. In terms of costs, the addition of the confounder gadget increased
the cost to $0.22, $0.23, $0.22, and $0.21, respectively, averaged across 10 gadgets. In other words, the rerouting attack
increased the cost of processing the queries, on average, by a factor of 1.7×.
NotDiamond. This router lets users route their queries to a list of predefined models. Available objectives are to maximize
quality, or balance quality and cost, or balance quality and latency. The exact details of the routing logic are not specified.
We focus on cost-aware routing, for which the API docs state that “NotDiamond will automatically determine when a
query is simple enough to use a cheaper model without degrading the quality of the response.” NotDiamond provides a
router selection tool which gives the routing decision for a particular query without forwarding the query to the chosen
model (thereby incurring no costs). We use this for our evaluation—of course a real attack would target the NotDiamond
API when used for actual routing.
Similar to the Unify experiments, we set GPT-4o as the strong model and Mixtral-8x7b as the weak model. Cost-aware
routing routes 82% of the original queries to the strong model, 18% to the weak model. Confounded queries generated for
RSW , RMF , RCLS , and RLLM achieve upgrade rates of 21%, 18%, 21%, and 15%, respectively. The downgrade rates
are 1–3%.
As opposed to our calibrated routers, NotDiamond aggressively routes to the stronger model even for unmodified queries
in most settings. We tried several strong/weak model pairs including GPT-4o/Mistral-7B-Instruct-v0.2, GPT-4o/GPT-4o-
mini, and Claude-3-Opus/Claude-3-Sonnet, and observed a similar 80%–20% split between strong and weak.
When we changed the strong model to OpenAI’s o1-mini and kept Mixtral-8x7b as the weak model, 54% of the original
queries were routed to the strong model, 46% to the weak model. In this setting, confounder gadgets yield 13–16%
upgrade rates and, on average, 3–6% downgrade rates. We conclude that while the attack is still effective, NotDiamond is
more robust than Unify.
OpenRouter. This framework offers a unified interface for LLMs, and additionally offers a system that routes users’
queries between three specific models: Llama-3-70b, Claude-3.5-Sonnet, and GPT-4o. Queries are routed “depending on
their size, subject, and complexity,” as described in the documentation.²
With OpenRouter, 96% of the original queries are routed to Llama, 4% to GPT, and none to Claude. Based on the pricing
and number of input-output tokens, the queries’ total cost is $0.03 for processing all evaluated queries. After adding
2https://openrouter.ai/openrouter/auto
RSW RMF RCLS RLLM
Original Confounded Original Confounded Original Confounded Original Confounded
MT-Bench 9.2 9.2 ± 0.0 9.1 9.3 ± 0.0 9.2 9.1 ± 0.0 8.9 9.1 ± 0.1
MMLU 76 84 ± 1 76 81 ± 0 76 84 ± 0 78 84 ± 1
GSM8K 62 86 ± 0 65 88 ± 1 68 90 ± 2 66 85 ± 2
Table 10: Benchmark-specific average scores of responses to the original and confounded queries with GPT-4-1106-
preview as the strong model (LLM pair 4), in the white-box setting. Results demonstrate a higher increase in performance
with respect to the LLM pair 1 setting, due to the larger performance gap between the models.
confounder gadgets, queries originally routed to GPT are still routed to GPT and no queries are ever routed to Claude. For
queries originally routed to Llama, some gadgets result in all of them being rerouted to GPT, and some have no impact.
Specifically, 4 out of the 10 gadgets we optimized using RSW caused all queries to be rerouted to GPT, 2/10 using RMF, and 3/10 using RLLM. None of the gadgets optimized using RCLS had any impact on routing. In terms of costs, having all queries rerouted to GPT results in an average cost of $0.25, a more than 8× increase over the cost of the
original queries. Given the lack of documentation of the routing algorithm being used, we are unsure what explains the
variability across gadgets.
Martian. This router is supposed to let the user provide a list of models and to specify the maximum amount the user is
willing to pay for a query or for 1M tokens. Unfortunately, as of November 14, 2024, the router appears to ignore the list of models provided by the user and forwards the input to the same LLM regardless of that list. We tested this in settings including
one, two, or multiple models. While responses do not specify which LLM was used, they were identical across settings,
so we excluded Martian from our evaluation. We notified Martian about the seemingly buggy behavior.
8 Defenses
Defenses against rerouting should be cheap. If the per-query cost of the defense is comparable to the per-query cost of a
strong LLM, deploying the defense will defeat the main purpose of LLM routing, which is to reduce the cost of responding
to queries.
Perplexity-based filtering. As explained in Section 6, perplexity is a measure of how “natural” the text looks. Perplexity-
based filtering has been suggested in many contexts as a defense against adversarial text inputs [16, 36]. This defense
computes the perplexity of multiple “trusted” texts, then compares it with the perplexity of the suspicious text. If the latter
is significantly higher, or above some predefined threshold, the text is considered adversarial. Specifically, we assume the
defender has access to a set of unmodified queries. The defender computes their perplexity values and uses these values
to establish a threshold. Given a new query, the defender checks if its perplexity exceeds the threshold. If so, the query
is flagged as adversarial. The defender can then decide how to handle such queries. Options include rejecting them or
routing them all to the weak model. Computing the perplexity of a query is cheap, e.g., using GPT-2 as we do in this work, which makes the defense viable without undermining the cost benefits of routing.
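To make the mechanics concrete, the following is a minimal sketch of such a filter using GPT-2 via the Hugging Face transformers library; the percentile-based calibration and the helper names are our own illustrative choices, not part of any deployed router.

```python
# Minimal sketch of perplexity-based filtering with GPT-2 (Hugging Face
# transformers). The calibration percentile and helper names are
# illustrative assumptions, not values taken from any deployed router.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Perplexity = exp(average negative log-likelihood of the tokens).
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def calibrate_threshold(trusted_queries, percentile=99):
    # Set the threshold at a high percentile of the perplexities of
    # known-benign queries, so that few benign queries are flagged.
    vals = sorted(perplexity(q) for q in trusted_queries)
    idx = min(len(vals) - 1, int(len(vals) * percentile / 100))
    return vals[idx]

def is_suspicious(query: str, threshold: float) -> bool:
    # Flag queries whose perplexity exceeds the calibrated threshold; the
    # router can then reject them or force-route them to the weak model.
    return perplexity(query) > threshold
```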
To evaluate the effectiveness of such a defense against our attack, we compare the perplexity values of original and
confounded queries. Figure 5 presents histograms of perplexity values for both the original evaluated GSM8K queries and
their corresponding confounded versions, generated using one of the confounder gadgets, sampled uniformly at random.
Additionally, the figure displays the ROC curve for the defense that detects confounded queries by checking if their
perplexity exceeds a threshold. As can be seen, the confounded queries exhibit significantly higher perplexity values,
making them readily distinguishable from the original queries. For instance, in the case of the RSW router, setting the
threshold value at 55 yields a false-positive rate of 3% and a true-positive rate of 97%. Results are similar for other gadgets and benchmarks and are omitted due to space constraints.
Unfortunately, this defense can be evaded if an adversary incorporates a perplexity constraint into the gadget generation
process. To demonstrate the feasibility of this evasion strategy, we modify gadget generation to maximize the score of the
routing algorithm R while simultaneously keeping the gadget's perplexity close to some predefined value. In more detail, in each iteration t ∈ [T], we uniformly sample a target index j ∈ [1, n] and generate a set B of B + 1 candidates as
explained in Section 4. We then modify Eq. 1 such that we now find the candidate that maximizes the difference between
the router’s score and the perplexity constraint for the confounder:
$$
c^{(t+1)} \leftarrow \arg\max_{c \in \mathcal{B}} \Big[ S_\theta(c \,\|\, x_i) \;-\; \alpha \cdot \big|\mathrm{PPL}(c) - \rho\big| \Big],
$$
Figure 5: Perplexity of the original queries in the GSM8K benchmark compared to the perplexity of confounded queries using a single uniformly sampled gadget, together with the ROC curve of the defense that detects confounded queries by checking whether they cross a perplexity threshold, and its corresponding ROC AUC score (RSW: 1.00, RMF: 0.97, RCLS: 0.99, RLLM: 0.95). Confounded queries have significantly higher perplexity values and are thus easy to recognize and filter out.
where PPL(·) denotes the perplexity function computed using GPT-2, the value ρ denotes a target perplexity value to
which we want gadgets’ perplexity to be close, and the value α is a balancing coefficient. For the experiments below, we
set ρ to be the average perplexity value of 100 uniformly sampled queries3 from the GSM8K benchmark.
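As a concrete illustration, a minimal sketch of this candidate-selection step is given below; score_fn stands in for the router scoring function Sθ, perplexity() for the GPT-2 perplexity helper sketched earlier, and all names and defaults are illustrative assumptions rather than our exact implementation.

```python
# Minimal sketch of the perplexity-constrained candidate selection step.
# score_fn stands in for the (white-box) router score S_theta, perplexity()
# for the GPT-2 perplexity helper; names and defaults are illustrative.
def select_candidate(candidates, query, score_fn, rho, alpha=0.01):
    # Choose the candidate gadget that maximizes the router score while
    # keeping its perplexity close to the target value rho.
    def objective(gadget):
        return score_fn(gadget + query) - alpha * abs(perplexity(gadget) - rho)
    return max(candidates, key=objective)
```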
Figure 6 shows the results when setting α = 0.01, for the GSM8K benchmark and one confounder gadget. The results
demonstrate that modified queries can no longer be easily distinguished from normal queries by their perplexity alone.
For instance, in the case of the RSW router, setting the threshold value at 55 as before, no confounded queries are flagged
as anomalous, meaning the true-positive rate is zero. We note that there is some variability across gadgets. The average
ROC AUC scores of the defense across ten gadgets, with standard deviations indicated parenthetically, are 0.66 (±0.04),
0.69 (±0.02), 0.71 (±0.02), and 0.69 (±0.03) for the RSW , RMF , RCLS , and RLLM routers, respectively.
At the same time, optimizing for low perplexity does not significantly impact the attack success rate. Table 11 compares
the average upgrade rates (over n = 10 gadgets) of the original perplexity-agnostic optimization approach from Section 4
and the perplexity-minimizing one described above. Attack efficacy might be improved further by tuning α to a sweet spot that evades the defense while maintaining a high rerouting success rate.
The attack is not particularly sensitive to the choice of queries used to obtain the calibration value ρ. Although ρ was
computed using GSM8K queries, we observe similar performance when evaluating on the MT-bench and MMLU bench-
marks, with average ROC AUC scores of 0.50 (±0.01), 0.51 (±0.01), 0.52 (±0), and 0.51 (±0.01) for MT-bench, and 0.52 (±0.03), 0.54 (±0.02), 0.55 (±0.01), and 0.53 (±0.02) for MMLU. One might also try removing the calibration value altogether, instead simply minimizing the gadget's perplexity value. However, this can result in an “overshooting” effect, where the gadget's perplexity ends up significantly lower than that of normal queries, making it still distinguishable from standard queries.
In summary, perplexity-based filtering is not an effective defense against rerouting.
3The perplexity calibration queries were chosen such that they do not overlap with the queries used for evaluation.
Figure 6: Perplexity values of the original and confounded queries, and the corresponding ROC curves of the defense that detects confounded queries by checking whether they cross a perplexity threshold, when the confounder gadget is optimized for low perplexity, in the GSM8K benchmark and for one gadget sampled uniformly at random (ROC AUC: RSW 0.65, RMF 0.73, RCLS 0.64, RLLM 0.65). Confounded queries have perplexity values similar to the original queries and can no longer be easily distinguished based on perplexity alone.
RSW RMF RCLS RLLM
Orig. PPL-opt. Orig. PPL-opt. Orig. PPL-opt. Orig. PPL-opt.
MT-Bench 100 ± 0 100 ± 0 100 ± 0 98 ± 2 100 ± 0 98 ± 1 73 ± 5 51 ± 8
MMLU 90 ± 1 59 ± 5 78 ± 4 74 ± 5 100 ± 0 66 ± 12 95 ± 1 89 ± 3
GSM8K 98 ± 0 70 ± 7 100 ± 0 98 ± 2 100 ± 0 88 ± 6 94 ± 3 81 ± 8
Table 11: Average upgrade rates for gadgets generated without (“Orig.”) and with (“PPL-opt.”) low-perplexity optimiza-
tion, for the balancing coefficient α = 0.01. In some cases, optimizing for low perplexity has a negative effect on the
attack success rate; however, the attack can still be considered successful. A more careful choice of α can potentially limit
the effect on the attack success.
LLM-based filtering. Even though adversarially modified queries cannot be easily detected using perplexity, they may
still be “unnatural.” A possible defense is to employ an oracle LLM to determine if the query is natural or not. This defense
requires the router to invoke an additional LLM for every processed query, which is computationally expensive in the case
of a high-quality open-source LLM or financially costly in the case of a high-quality commercial LLM. Therefore, this
defense is unlikely to be practical. Furthermore, it is possible to optimize gadgets so that they both have low perplexity
and appear “natural” to LLM evaluators [69].
Paraphrasing. Filtering defenses like those discussed above are passive. An active alternative is to paraphrase queries
using an oracle LLM. LLMs are trained to generate natural text and are thus likely to remove unnatural substrings when
paraphrasing a query. This defense is likely impractical for two reasons. First, and as with LLM-based filtering, it requires
an extra potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality
of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.
Detecting anomalous user workloads. Another possible defense requires the router to monitor individual user work-
loads, and identify those users whose queries are routed to the strongest model with an abnormally high frequency. The
router can then impose a user-specific threshold. Of course such workloads may have a benign explanation, e.g., the user’s
queries may be unusually complex. Even so, routers could potentially be designed to perform user-specific routing. For
example, one could imagine using per-user thresholds that are calibrated dynamically to attempt to maintain a consistent
fraction of queries being routed to the strong model.
Such user-specific routing would complicate implementations, and would make inaccurate decisions for a user until there
is sufficient data about their queries. The latter matters in adversarial settings, since such an approach could still be circumvented by an attacker who mounts Sybil attacks, creating a new user for, in the limit, each query.
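A minimal sketch of such per-user calibration is given below; it assumes the router exposes a raw score where higher means “route to the strong model,” and the target fraction, window size, and class names are illustrative assumptions rather than part of any existing router.

```python
# Minimal sketch of per-user dynamic threshold calibration. It assumes a raw
# router score where higher means "route to the strong model"; the target
# fraction, window size, and names are illustrative assumptions.
from collections import defaultdict, deque

class PerUserRouter:
    def __init__(self, target_strong_fraction=0.3, window=500):
        self.target = target_strong_fraction
        self.history = defaultdict(lambda: deque(maxlen=window))

    def route(self, user_id, score):
        hist = self.history[user_id]
        hist.append(score)
        # Threshold = the (1 - target) quantile of this user's recent scores,
        # so that roughly `target` of their queries reach the strong model.
        ranked = sorted(hist)
        idx = min(len(ranked) - 1, int(len(ranked) * (1 - self.target)))
        threshold = ranked[idx]
        return "strong" if score >= threshold else "weak"
```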
9 Related Work
Evasion attacks against ML systems. A large body of work has investigated evasion attacks against ML systems [25,
43, 60], also referred to as adversarial examples [32, 48, 49], and these attacks are now being explored in the context of
multi-modal LLMs [28] as well as text-only LLMs (for just one example, see [22]). We discussed in Section 3 how our
results compare: LLM control plane integrity is a distinct AI safety issue, but related in that: (1) control plane integrity
attacks may use evasion-style techniques, and (2) control plane integrity attacks might be useful for performing evasion.
Prompt injection against LLMs. Prompt injection is a class of attacks against LLMs in which the adversary manipulates
the prompt, i.e., the textual input fed directly to the LLM, causing the LLM to generate outputs that satisfy some adver-
sarial objective [50, 64]. Evasion attacks as discussed above can use prompt injection, jailbreaking attacks being a widely
explored example in which the adversary aims to bypass some safety guardrail included in the LLM system, such as “do
not output expletives” [23, 42, 54, 66, 72, 73].
Prompt injection is also used for extraction attacks that aim to infer some information from or about the model, for
example, the system prompt [50, 54, 70], training data samples [46], or model parameters [18]. In indirect prompt injection
attacks [33], the adversaries do not directly interact with the target LLM, and instead inject adversarial inputs into third-
party data, which is then added to the LLM prompt (intentionally or unintentionally) by the victim application and/or its
users. This relates to another category of attacks that target LLM-based applications, such as RAG systems, and invalidate
their integrity by exploiting the weaknesses of the underlying LLM [19, 55].
Our attacks also modify queries, but with a different aim than the above types of attacks: undermining the integrity of the
control plane routing, rather than the LLM itself. Future work might investigate indirect control plane integrity attacks
that, analogously to indirect prompt injection, serve to somehow trick users of a routing system into forming control-
plane-confounding queries.
Attacks against MoE. Mixture-of-Experts (MoE) architectures enable using multiple expert modules for processing a
given query with a lower computational cost by including an inner routing mechanism that in every layer routes different
tokens to a small number of experts [29, 30, 52, 56]. This can be thought of as an internal router within a single LLM,
rather than an external control plane that orchestrates multiple LLMs. MoE has grown in popularity because it allows building larger models at a fixed compute budget: not all parameters are used at the same time.
Hayes et al. [34] identified a vulnerability in MoE architectures that can be exploited for denial-of-service attacks. Thus
control plane integrity issues appear to extend to the context of single-LLM MoE systems, and future work could explore
this connection further.
Yona et al. [67] presented a side-channel attack on MoE that enables an attacker to reveal other users’ prompts. We expect
that side-channel attacks against LLM control planes exist as well, for example, to infer which models are used via timing
of responses. Such attacks, which target confidentiality, are outside the scope of control plane integrity.
10 Conclusion
LLM routers balance quality and cost of LLM inference by routing different queries to different LLMs. They are an
example of a broader, emerging class of systems we call “LLM control planes” that aim to achieve various quality,
efficiency, and cost objectives by orchestrating use of multiple LLMs to respond to a query.
We introduced and defined a new safety property, LLM control plane integrity . Informally, this property holds if an
adversarial user cannot influence routing decisions made by the control plane. To show that existing LLM routers do not
satisfy this property, we designed, implemented, and evaluated a black-box optimization method for generating query-
independent “confounder gadgets.” When added to any query, the confounder gadget confuses the router into routing the
query to the adversary-chosen LLM.
We evaluated the efficacy of confounder gadgets on multiple open-source and commercial routers and demonstrated that
they successfully reroute queries without a negative impact on the quality of responses. We also discussed defenses against
these attacks and indicated directions for future research.
Acknowledgments
This research was supported in part by the Google Cyber NYC Institutional Research Program, the Israel Science Founda-
tion (Grant No. 1336/22), and the European Union (ERC, FTRC, 101043243). Views and opinions expressed are however
those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council.
Neither the European Union nor the granting authority can be held responsible for them.
References
[1] “Chatbot Arena LLM Leaderboard: Community-driven evaluation for best LLM and AI chatbots,” https://
huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard, accessed: 2024-11-14.
[2] “Hello gpt-4o,” https://openai.com/index/hello-gpt-4o/, published: 2024-05-23.
[3] “Introducing Llama 3.1: Our most capable models to date,” https://ai.meta.com/blog/meta-llama-3-1/, published:
2024-07-23.
[4] “Introducing Meta Llama 3: The most capable openly available LLM to date,” https://ai.meta.com/blog/
meta-llama-3/, published: 2024-04-18.
[5] “Martian LLM router,” https://withmartian.com/.
[6] “New embedding models and API updates,” https://openai.com/index/new-embedding-models-and-api-updates,
published: 2024-01-25.
[7] “Notdiamond LLM router,” https://www.notdiamond.ai/.
[8] “OpenAI and others seek new path to smarter AI as current meth-
ods hit limitations,” https://www.reuters.com/technology/artificial-intelligence/
openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11, published: 2024-11-15.
[9] “OpenAI, Google and Anthropic are struggling to build more advanced AI,” https://www.bloomberg.com/news/
articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai?sref=CrGXSfHu,
published: 2024-11-13.
[10] “OpenAI shifts strategy as rate of ‘GPT’ AI improvements slows,” https://www.theinformation.com/articles/
openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows, published: 2024-11-9.
[11] “Openrouter LLM router,” https://openrouter.ai/.
[12] “Unify LLM router,” https://unify.ai/.
[13] “What is a control plane?” https://www.ibm.com/think/topics/control-plane, published: 2024-10-31.
[14] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman,
S. Anadkat et al., “GPT-4 technical report,” arXiv preprint arXiv:2303.08774, 2023.
[15] P. Aggarwal, A. Madaan, A. Anand, S. P. Potharaju, S. Mishra, P. Zhou, A. Gupta, D. Rajagopal, K. Kappaganthu,
Y . Yanget al., “Automix: Automatically mixing language models,” arXiv preprint arXiv:2310.12963, 2023.
[16] G. Alon and M. Kamfonas, “Detecting language model attacks with perplexity,” arXiv preprint arXiv:2308.14132,
2023.
[17] R. A. Bradley and M. E. Terry, “Rank analysis of incomplete block designs: I. the method of paired comparisons,”
Biometrika, vol. 39, no. 3/4, 1952.
[18] N. Carlini, D. Paleka, K. D. Dvijotham, T. Steinke, J. Hayase, A. F. Cooper, K. Lee, M. Jagielski, M. Nasr, A. Conmy
et al., “Stealing part of a production language model,” arXiv preprint arXiv:2403.06634, 2024.
[19] H. Chaudhari, G. Severi, J. Abascal, M. Jagielski, C. A. Choquette-Choo, M. Nasr, C. Nita-Rotaru, and A. Oprea,
“Phantom: General trigger attacks on retrieval augmented language generation,” arXiv preprint arXiv:2405.20485,
2024.
[20] L. Chen, M. Zaharia, and J. Zou, “FrugalGPT: How to use large language models while reducing cost and improving
performance,” arXiv preprint arXiv:2305.05176, 2023.
[21] W.-L. Chiang, L. Zheng, Y . Sheng, A. N. Angelopoulos, T. Li, D. Li, B. Zhu, H. Zhang, M. Jordan, J. E. Gon-
zalez, and I. Stoica, “Chatbot arena: An open platform for evaluating LLMs by human preference,” in Forty-first
International Conference on Machine Learning (ICML), 2024.
[22] S. Cho, S. Jeong, J. Seo, T. Hwang, and J. C. Park, “Typos that broke the RAG’s back: Genetic attack on RAG
pipeline by simulating documents in the wild via low-level perturbations,”arXiv preprint arXiv:2404.13948, 2024.
[23] J. Chu, Y . Liu, Z. Yang, X. Shen, M. Backes, and Y . Zhang, “Comprehensive assessment of jailbreak attacks against
LLMs,” arXiv preprint arXiv:2402.05668, 2024.
[24] K. Cobbe, V . Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakanoet al.,
“Training verifiers to solve math word problems,”arXiv preprint arXiv:2110.14168, 2021.
[25] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, “Adversarial classification,” inProceedings of the tenth
ACM SIGKDD international conference on Knowledge discovery and data mining, 2004.
[26] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for
language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
[27] D. Ding, A. Mallick, C. Wang, R. Sim, S. Mukherjee, V. Rühle, L. V. Lakshmanan, and A. H. Awadallah, “Hybrid
LLM: Cost-efficient and quality-aware query routing,” in International Conference on Learning Representations
(ICLR), 2024.
[28] Y . Dong, H. Chen, J. Chen, Z. Fang, X. Yang, Y . Zhang, Y . Tian, H. Su, and J. Zhu, “How robust is Google’s Bard
to adversarial image attacks?” arXiv preprint arXiv:2309.11751, 2023.
[29] N. Du, Y . Huang, A. M. Dai, S. Tong, D. Lepikhin, Y . Xu, M. Krikun, Y . Zhou, A. W. Yu, O. Firat et al., “Glam:
Efficient scaling of language models with mixture-of-experts,” in International Conference on Machine Learning
(ICML), 2022.
[30] W. Fedus, B. Zoph, and N. Shazeer, “Switch transformers: Scaling to trillion parameter models with simple and
efficient sparsity,”Journal of Machine Learning Research (JMLR), 2022.
[31] T. Feng, Y . Shen, and J. You, “Graphrouter: A graph-based router for LLM selections,” arXiv preprint
arXiv:2410.03834, 2024.
[32] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in International
Conference on Learning Representations (ICLR), 2015.
[33] K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and M. Fritz, “Not what you’ve signed up for: Compro-
mising real-world LLM-integrated applications with indirect prompt injection,” in ACM AISec, 2023.
[34] J. Hayes, I. Shumailov, and I. Yona, “Buffer overflow in mixture of experts,”arXiv preprint arXiv:2402.05526, 2024.
[35] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt, “Measuring massive multitask
language understanding,” in International Conference on Learning Representations (ICLR), 2021.
[36] N. Jain, A. Schwarzschild, Y . Wen, G. Somepalli, J. Kirchenbauer, P.-y. Chiang, M. Goldblum, A. Saha, J. Geip-
ing, and T. Goldstein, “Baseline defenses for adversarial attacks against aligned language models,” arXiv preprint
arXiv:2309.00614, 2023.
[37] F. Jelinek, “Interpolated estimation of Markov source parameters from sparse data,” 1980. [Online]. Available:
https://api.semanticscholar.org/CorpusID:61012010
[38] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel,
G. Lample, L. Saulnier et al., “Mistral 7B,” arXiv preprint arXiv:2310.06825, 2023.
[39] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l. Casas, E. B. Hanna,
F. Bressand et al., “Mixtral of experts,” arXiv preprint arXiv:2401.04088, 2024.
[40] D. Jiang, X. Ren, and B. Y . Lin, “LLM-Blender: Ensembling large language models with pairwise ranking and
generative fusion,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), 2023.
[41] C.-H. Lee, H. Cheng, and M. Ostendorf, “OrchestraLLM: Efficient orchestration of language models for dialogue
state tracking,” in Proceedings of the 2024 Conference of the North American Chapter of the Association for Com-
putational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024.
[42] Y . Liu, G. Deng, Z. Xu, Y . Li, Y . Zheng, Y . Zhang, L. Zhao, T. Zhang, K. Wang, and Y . Liu, “Jailbreaking ChatGPT
via prompt engineering: An empirical study,” arXiv preprint arXiv:2305.13860, 2023.
[43] D. Lowd and C. Meek, “Adversarial learning,” in ACM International Conference on Knowledge Discovery in Data
Mining (SIGKDD), 2005.
[44] S. Merity, C. Xiong, J. Bradbury, and R. Socher, “Pointer sentinel mixture models,” in International Conference on
Learning Representations (ICLR), 2016.
[45] S. Narayanan Hari and M. Thomson, “Tryage: Real-time, intelligent routing of user prompts to large language
models,” arXiv e-prints, 2023.
[46] M. Nasr, N. Carlini, J. Hayase, M. Jagielski, A. F. Cooper, D. Ippolito, C. A. Choquette-Choo, E. Wallace,
F. Tramèr, and K. Lee, “Scalable extraction of training data from (production) language models,” arXiv preprint
arXiv:2311.17035, 2023.
[47] I. Ong, A. Almahairi, V . Wu, W.-L. Chiang, T. Wu, J. E. Gonzalez, M. W. Kadous, and I. Stoica, “RouteLLM:
Learning to route LLMs with preference data,” arXiv preprint arXiv:2406.18665, 2024.
[48] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against
machine learning,” in Proceedings of the 2017 ACM on Asia conference on computer and communications security,
2017.
[49] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in
adversarial settings,” in IEEE European symposium on security and privacy (EuroS&P), 2016.
[50] F. Perez and I. Ribeiro, “Ignore previous prompt: Attack techniques for language models,” in NeurIPS ML Safety
Workshop, 2022.
[51] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised
multitask learners,” https://cdn.openai.com/better-language-models/language models are unsupervised multitask
learners.pdf, 2019.
[52] C. Riquelme, J. Puigcerver, B. Mustafa, M. Neumann, R. Jenatton, A. Susano Pinto, D. Keysers, and N. Houlsby,
“Scaling vision with sparse mixture of experts,” Advances in Neural Information Processing Systems (NeurIPS) ,
2021.
[53] M. Šakota, M. Peyrard, and R. West, “Fly-swat or cannon? cost-effective language model choice via meta-modeling,”
in Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 2024.
[54] S. Schulhoff, J. Pinto, A. Khan, L.-F. Bouchard, C. Si, S. Anati, V . Tagliabue, A. Kost, C. Carnahan, and J. Boyd-
Graber, “Ignore this title and HackAPrompt: Exposing systemic vulnerabilities of LLMs through a global prompt
hacking competition,” in EMNLP, 2023.
[55] A. Shafran, R. Schuster, and V . Shmatikov, “Machine against the RAG: Jamming retrieval-augmented generation
with blocker documents,” arXiv preprint arXiv:2406.05870, 2024.
[56] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean, “Outrageously large neural net-
works: The sparsely-gated mixture-of-experts layer,” in International Conference on Learning Representations ,
2016.
[57] T. Shnitzer, A. Ou, M. Silva, K. Soule, Y . Sun, J. Solomon, N. Thompson, and M. Yurochkin, “Large language model
routing with benchmark datasets,” arXiv preprint arXiv:2309.15789, 2023.
[58] K. Srivatsa, K. K. Maurya, and E. Kochmar, “Harnessing the power of multiple minds: Lessons learned from LLM
routing,” arXiv preprint arXiv:2405.00467, 2024.
[59] D. Stripelis, Z. Hu, J. Zhang, Z. Xu, A. Shah, H. Jin, Y . Yao, S. Avestimehr, and C. He, “Tensoropera router: A
multi-model router for efficient LLM inference,” arXiv preprint arXiv:2408.12320, 2024.
[60] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of
neural networks,” arXiv preprint arXiv:1312.6199, 2013.
[61] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican
et al., “Gemini: a family of highly capable multimodal models,” arXiv preprint arXiv:2312.11805, 2023.
[62] Teknium, “Openhermes 2.5: An open dataset of synthetic data for generalist LLM assistants,” 2023. [Online].
Available: https://huggingface.co./datasets/teknium/OpenHermes-2.5
[63] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale
et al., “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288, 2023.
[64] S. Toyer, O. Watkins, E. A. Mendes, J. Svegliato, L. Bailey, T. Wang, I. Ong, K. Elmaaroufi, P. Abbeel, T. Darrell
et al., “Tensor Trust: Interpretable prompt injection attacks from an online game,” in International Conference on
Learning Representations (ICLR), 2023.
[65] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction
APIs,” in USENIX Security Symposium, 2016.
[66] A. Wei, N. Haghtalab, and J. Steinhardt, “Jailbroken: How does LLM safety training fail?” in Advances in Neural
Information Processing Systems (NeurIPS), 2023.
[67] I. Yona, I. Shumailov, J. Hayes, and N. Carlini, “Stealing user prompts from mixture of experts,” arXiv preprint
arXiv:2410.22884, 2024.
[68] M. Yue, J. Zhao, M. Zhang, L. Du, and Z. Yao, “Large language model cascades with mixture of thought represen-
tations for cost-efficient reasoning,” in International Conference on Learning Representations (ICLR), 2024.
[69] C. Zhang, T. Zhang, and V . Shmatikov, “Controlled generation of natural adversarial documents for stealthy retrieval
poisoning,” arXiv preprint arXiv:2410.02163, 2024.
[70] Y . Zhang, N. Carlini, and D. Ippolito, “Effective prompt extraction from language models,” in First Conference on
Language Modeling, 2024.
[71] L. Zheng, W.-L. Chiang, Y . Sheng, S. Zhuang, Z. Wu, Y . Zhuang, Z. Lin, Z. Li, D. Li, E. Xinget al., “Judging LLM-
as-a-judge with MT-Bench and chatbot arena,” Advances in Neural Information Processing Systems (NeurIPS) ,
2023.
[72] S. Zhu, R. Zhang, B. An, G. Wu, J. Barrow, Z. Wang, F. Huang, A. Nenkova, and T. Sun, “AutoDAN: Automatic
and interpretable adversarial attacks on large language models,” arXiv preprint arXiv:2310.15140, 2023.
[73] A. Zou, Z. Wang, J. Z. Kolter, and M. Fredrikson, “Universal and transferable adversarial attacks on aligned language
models,” arXiv preprint arXiv:2307.15043, 2023.
A Gadget Examples
Below are a few examples of the optimized confounder gadgets c we obtained for the different routing algorithms:
• RSW :
legationbelongs967reglo’hui
(DictionaryizedNameantal bidi.numberOf
demanding RESULTS! XVI-ioibili Written Hormchriftlatable
• RMF :
041 personne.nama tomeTN Exact relac ¸˜ao tinha seis.Description
NUMAGEMENTMedian157 mut-camerauacao nije Hir Ter
• RCLS :
contiene DEM USERNAME fourteen chees781 consultants200 inici DOJ
571:
Ord:nth Norwegian Mercer docs Abr226 METADATA
• RLLM :
dated:frameifyumi345 Kurdasciiuzeiphertext
Midnightexecution431!784 below1 unwrap : / n / n
B Ablation Study
In this section, we evaluate the effect of different hyperparameters and design choices (in the white-box setting).
Prefix vs. suffix. As described in Section 4, we prepend the confounder gadget to the query. An alternative is to append
it. This is straightforward for MT-bench and GSM8K, but MMLU consists of multi-choice questions followed by a list
of possible answers, and the term “Answer:”. We insert the gadget at the end of the question text and before the possible
answers. If we append it at the very end, after “Answer:”, the LLM assumes the query was answered and in many cases
does not generate any output at all.
Table 12 shows that average upgrade rates are similar regardless of whether the gadget was inserted as a prefix or a suffix.
For MMLU, prefix works better. The downgrade rate is 0% in all cases.
RSW RMF RCLS RLLM
MT-Bench Prefix 100 ± 0 100 ± 0 100 ± 0 73 ± 5
Suffix 100 ± 0 100 ± 0 100 ± 0 84 ± 4
MMLU Prefix 90 ± 1 78 ± 4 100 ± 0 95 ± 1
Suffix 82 ± 2 63 ± 3 93 ± 1 93 ± 1
GSM8K Prefix 98 ± 0 100 ± 0 100 ± 0 100 ± 0
Suffix 94 ± 1 100 ± 0 100 ± 0 94 ± 3
Table 12: Average upgrade rates for different ways of adding the gadget to queries, in the white-box setting. Results are
similar for both methods, with a slight preference for the prefix approach.
RSW RMF RCLS RLLM
MT-Bench Uniform 100 ± 0 100 ± 0 100 ± 0 73 ± 5
Natural Prob. 100 ± 0 97 ± 2 100 ± 0 70 ± 5
MMLU Uniform 90 ± 1 78 ± 4 100 ± 0 95 ± 1
Natural Prob. 77 ± 2 41 ± 3 96 ± 2 87 ± 4
GSM8K Uniform 98 ± 0 100 ± 0 100 ± 0 94 ± 3
Natural Prob. 88 ± 2 92 ± 3 100 ± 0 83 ± 9
Table 13: Average upgrade rates for different ways of sampling candidate tokens during gadget generation, in the white-
box setting. Uniformly sampling the tokens yields better upgrade rates in most cases.
As mentioned in Section 5, to encourage the LLMs to follow the specific format in their responses (so they can be
parsed and compared with the ground-truth answers), we add a short prefix to the MMLU and GSM8K queries that
instructs the model how to respond. We phrase this instruction as follows: “ Answer the question using the format:
“Answer: [A/B/C/D]. Explanation: [EXPLANATION]” ” for the multi-choice queries of the MMLU benchmark, and a
similar version for GSM8K. We add this instruction after modifying the queries with the confounder gadget, i.e. the
instruction is prepended to the gadget.
An alternative is to insert the instruction after the gadget but before the query; however, we observed this to slightly underperform its counterpart. In the white-box setting, we observe a slight decrease in the average (across all four routers) upgrade
rate from 91% to 89% for the MMLU benchmark, and from 98% to 91% for the GSM8K benchmark. In the black-box
setting, the average upgrade rate on MMLU reduces from 57% to 49% and on GSM8K from 73% to 64%.
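The sketch below illustrates how a confounded query is assembled in these experiments (instruction first, then the gadget, then the original query); the instruction string is the MMLU variant quoted above, while the function name and the plain concatenation (which ignores the MMLU-specific placement of suffix gadgets before the answer options) are illustrative simplifications.

```python
# Illustrative sketch of query assembly: instruction, then gadget, then the
# original query. The instruction string is the MMLU variant quoted in the
# text; the helper name and plain concatenation (which does not handle the
# MMLU-specific placement of suffix gadgets before the answer options) are
# simplifications.
MMLU_INSTRUCTION = ('Answer the question using the format: '
                    '"Answer: [A/B/C/D]. Explanation: [EXPLANATION]"')

def confound(query: str, gadget: str, as_prefix: bool = True) -> str:
    body = f"{gadget} {query}" if as_prefix else f"{query} {gadget}"
    return f"{MMLU_INSTRUCTION}\n{body}"
```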
Token sampling method. When generating the confounder gadget (see Section 4), we iteratively replace tokens with the
goal of maximizing the routing algorithm’s score for the gadget. Candidate replacement tokens are chosen uniformly at
random. An alternative is to choose candidates based on their probability of appearing in natural text. To evaluate this
method, we compute token probabilities by parsing and tokenizing the wikitext-103-raw-v1 dataset [44].
Table 13 shows that in most cases uniform sampling of replacement tokens yields better upgrade rates. We conjecture that
uniform sampling produces more unnatural text, confusing the router. For example, for the RSW routing algorithm, uni-
form sampling produces the following gadget: “legationbelongs967reglo’hui(DictionaryizedNameantal bidi.numberOf”,
whereas sampling according to natural probabilities produces “ total occurred According number Letar final Bab named
remainder”.
Number of tokens in the gadget. In our main evaluation, the gadgets are composed of n = 10 tokens. We evaluate the
effect of using fewer (n = 5) or more (n = 20 or n = 50) tokens. We observed that 5 tokens were insufficient to meaningfully change the routing algorithm's score, so we were unable to optimize the gadget in this setting. With 20 tokens, we observe a small improvement in the white-box setting, increasing the average upgrade rate from 93.9% to 95.8%, and a bigger improvement in the black-box setting, increasing the average upgrade rate from 70.2% to 81.3%. Using 50 tokens further increases the upgrade rates, to 98.2% in the white-box setting and 84.2% in the black-box setting. The average number of iterations to converge increases as well, from 60 for 10 tokens, to 70 for 20 tokens, and 100 for 50 tokens. Overall, this evaluation suggests that our rerouting attack can be improved further by using longer gadgets, but it is important not to make them so long that they degrade the performance of the underlying LLM.
gadget RSW RMF RCLS RLLM
MT-Bench Init 7 3 8 3
Random 97 ± 2 37 ± 8 62 ± 10 38 ± 4
MMLU Init 21 4 0 13
Random 49 ± 5 6 ± 3 14 ± 7 68 ± 5
GSM8K Init 21 20 0 9
Random 58 ± 8 34 ± 8 37 ± 9 41 ± 7
Table 14: Average upgrade rates when the gadget is not optimized and is either defined to be the initial set of tokens
or a set of uniformly sampled tokens. The optimization-based approach outperforms these optimization-free approaches.
intro type RSW RMF RCLS RLLM
Up. Down. Up. Down. Up. Down. Up. Down.
MT-Bench
Ours-1 100 0 0 31 33 8 26 7
Ours-2 100 0 0 60 75 0 35 5
Gemini 100 0 0 50 100 0 55 0
GPT 100 0 0 48 46 2 19 7
MMLU
Ours-1 28 0 0 57 2 47 0 42
Ours-2 32 0 0 66 19 26 0 42
Gemini 35 0 0 60 100 0 21 21
GPT 54 0 0 51 0 66 26 23
GSM8K
Ours-1 4 46 0 100 0 77 4 36
Ours-2 6 63 0 100 16 43 2 43
Gemini 4 56 0 100 98 0 9 9
GPT 4 77 0 100 0 95 6 25
Table 15: Average upgrade and downgrade rates of gadgets containing injected instructions to the router. This method
significantly underperforms the optimization-based approach in most cases.
C Optimization-Free Gadget Generation
We evaluate optimization-free alternatives to our black-box optimization method for generating confounder gadgets.
Fixed gadget. A simple way to create a gadget without resorting to optimization is to repeat n tokens. We use ! as the
initialization token, so the gadget in this case is !!!!!!!!!!. Another possibility is to select n tokens uniformly at random.
Table 14 shows the upgrade rates for both options, where in the latter setting we repeat the process 10 times and report the
average result and the standard error. While they are non-negligible, especially for the randomly sampled gadgets, they
significantly underperform the upgrade rates reported in Table 1 for optimized gadgets.
Instruction injection. Prompt injection is a known attack on LLMs [50, 64], thus we consider a gadget consisting of a
direct instruction to the router to treat the query as a complex one and obtain a high-quality response.
We evaluated 4 differently phrased instructions: two created manually and two generated by, respectively, Gemini [61]
and GPT-4o [2], denoted as “ours-1”, “ours-2”, “Gemini”, and “GPT”.
Table 15 reports the results. This method works well in a few cases but poorly in most. This highlights the difference
between attacking LLMs and attacking LLM routers.
D Perplexity issues
In Section 5 we present perplexity as one of the metrics we use for evaluating the effect of our attack over the quality of
the generated response. However, perplexity is intended to measure the naturalness of text, and as such it is ill-suited for
comparing the quality of multiple natural texts. As a result, the perplexity values of the responses of both the weak and the strong model are close and within the margin of error. Figure 7 shows the distribution of perplexity values of the clean responses generated by both models, and the ROC AUC score computed on these two sets of values. As can be
seen, the perplexity values are quite similar for both models, with ROC AUC scores ranging from 0.38 to 0.47.
Figure 7: Histograms of the perplexity values of clean responses generated by the strong and weak models. Panels: (a) MT-bench (ROC AUC = 0.38), (b) MMLU (ROC AUC = 0.47), (c) GSM8K (ROC AUC = 0.38).
RSW RMF RCLS RLLM
Up. Down. Up. Down. Up. Down. Up. Down.
MT-Bench 0 ± 0 24 ± 2 0 ± 0 67 ± 6 0 ± 0 29 ± 3 24 ± 3 1 ± 0
MMLU 8 ± 3 9 ± 2 0 ± 0 77 ± 7 0 ± 0 50 ± 4 55 ± 4 5 ± 1
GSM8K 4 ± 2 48 ± 9 1 ± 1 78 ± 11 0 ± 0 80 ± 4 21 ± 4 4 ± 2
Table 16: Upgrade and downgrade rates for the downgrading variant of our rerouting attack, where the goal is to reroute
queries to the weak model (white-box).
As mentioned in Section 5, throughout our evaluations we filter out responses with perplexity values higher than 100.
This is because a few valid responses receive arbitrarily high perplexity values. For
example, for the query:
Suppose you are a mathematician and poet. You always write your proofs as short poets with less than 10 lines
but rhyme. Prove the square root of 2 is irrational number.
The weak model responds with:
In squares, two seeks a home, Two whole numbers, never roam. If it did, in pairs combined, A different square
would it find. But take root, two’s square, or four, Still leaves a remainder’s roar. The square root of two’s no child
of two, Its irrational nature ever true.
which results in a high perplexity value of 166. We also observed a few responses containing lists, code, or math
equations with abnormally high perplexity values.
E Rerouting to the Weak Model
In this section we evaluate the generality of our attack and show that generation of confounder gadgets can be optimized
for the opposite objective from what we consider so far: reroute queries to the weak model. For this, we repeat the same
optimization process as in Section 4 but minimize the router’s score. Table 16 shows the upgrade and downgrade rates
for this variant of the attack, in the white-box setting. In most cases we see a significant downgrade rate and a minimal
upgrade rate, meaning that most of the modified queries were routed to the weak model. One notable exception is the
LLM-based router RLLM , for which the attack does not work well. Future work will be needed to explore improving
confounder generation for this setting further.
A Primer in BERTology: What We Know About How BERT Works
Anna Rogers
Center for Social Data Science
University of Copenhagen
[email protected]
Olga Kovaleva
Dept. of Computer Science
University of Massachusetts Lowell
[email protected]
Anna Rumshisky
Dept. of Computer Science
University of Massachusetts Lowell
[email protected]
Abstract
Transformer-based models have pushed state
of the art in many areas of NLP, but our un-
derstanding of what is behind their success
is still limited. This paper is the first sur-
vey of over 150 studies of the popular BERT
model. We review the current state of knowl-
edge about how BERT works, what kind
of information it learns and how it is repre-
sented, common modifications to its training
objectives and architecture, the overparame-
terization issue and approaches to compres-
sion. We then outline directions for future
research.
1 Introduction
Since their introduction in 2017, Transformers
(Vaswani et al., 2017) have taken NLP by storm,
offering enhanced parallelization and better model-
ing of long-range dependencies. The best known
Transformer-based model is BERT (Devlin et al.,
2019); it obtained state-of-the-art results in numer-
ous benchmarks and is still a must-have baseline.
While it is clear that BERT works remarkably
well, it is less clear why, which limits further
hypothesis-driven improvement of the architecture.
Unlike CNNs, the Transformers have little cogni-
tive motivation, and the size of these models limits
our ability to experiment with pre-training and per-
form ablation studies. This explains a large number
of studies over the past year that attempted to un-
derstand the reasons behind BERT’s performance.
In this paper, we provide an overview of what
has been learned to date, highlighting the questions
which are still unresolved. We first consider the
linguistic aspects of it, i.e., the current evidence
regarding the types of linguistic and world knowl-
edge learned by BERT, as well as where and how
this knowledge may be stored in the model. We
then turn to the technical aspects of the model and
provide an overview of the current proposals to
improve BERT’s architecture, pre-training and fine-
tuning. We conclude by discussing the issue of
overparameterization, the approaches to compress-
ing BERT, and the nascent area of pruning as a
model analysis technique.
2 Overview of BERT architecture
Fundamentally, BERT is a stack of Transformer
encoder layers (Vaswani et al., 2017) which consist
of multiple self-attention "heads". For every input
token in a sequence, each head computes key, value
and query vectors, used to create a weighted repre-
sentation. The outputs of all heads in the same layer
are combined and run through a fully-connected
layer. Each layer is wrapped with a skip connection
and followed by layer normalization.
The conventional workflow for BERT consists
of two stages: pre-training and fine-tuning. Pre-
training uses two self-supervised tasks: masked
language modeling (MLM, prediction of randomly
masked input tokens) and next sentence prediction
(NSP, predicting if two input sentences are adjacent
to each other). In fine-tuning for downstream ap-
plications, one or more fully-connected layers are
typically added on top of the final encoder layer.
The input representations are computed as fol-
lows: each word in the input is first tokenized into
wordpieces (Wu et al., 2016), and then three em-
bedding layers (token, position, and segment) are
combined to obtain a fixed-length vector. Special
token [CLS] is used for classification predictions,
and [SEP] separates input segments.
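As a small illustration (ours, not part of the surveyed work), this input pipeline is easy to inspect with the Hugging Face transformers library:

```python
# A small illustration (not from the surveyed papers) of BERT's input
# pipeline using Hugging Face transformers: wordpiece tokenization plus the
# special [CLS] and [SEP] tokens, and the per-token hidden states.
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

enc = tokenizer("A first segment.", "A second segment.", return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist()))
# ['[CLS]', 'a', 'first', 'segment', '.', '[SEP]', 'a', 'second', 'segment', '.', '[SEP]']

outputs = model(**enc)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```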
Google1 and HuggingFace (Wolf et al., 2020)
provide many variants of BERT, including the orig-
inal "base" and "large" versions. They vary in the
number of heads, layers, and hidden state size.
1https://github.com/google-research/bert
3 What knowledge does BERT have?
A number of studies have looked at the knowledge
encoded in BERT weights. The popular approaches
include fill-in-the-gap probes of MLM, analysis of
self-attention weights, and probing classifiers with
different BERT representations as inputs.
3.1 Syntactic knowledge
Lin et al. (2019) showed that BERT represen-
tations are hierarchical rather than linear , i.e.
there is something akin to syntactic tree structure
in addition to the word order information. Ten-
ney et al. (2019b) and Liu et al. (2019a) also
showed that BERT embeddings encode informa-
tion about parts of speech, syntactic chunks
and roles. Enough syntactic information seems
to be captured in the token embeddings themselves
to recover syntactic trees (Vilares et al., 2020; Kim
et al., 2020; Rosa and Mareček, 2019), although
probing classifiers could not recover the labels of
distant parent nodes in the syntactic tree (Liu et al.,
2019a). Warstadt and Bowman (2020) report evi-
dence of hierarchical structure in three out of four
probing tasks.
As far as how syntax is represented, it seems
that syntactic structure is not directly encoded
in self-attention weights. Htut et al. (2019) were
unable to extract full parse trees from BERT heads
even with the gold annotations for the root. Jawahar
et al. (2019) include a brief illustration of a depen-
dency tree extracted directly from self-attention
weights, but provide no quantitative evaluation.
However, syntactic information can be recov-
ered from BERT token representations. Hewitt
and Manning (2019) were able to learn transfor-
mation matrices that successfully recovered syn-
tactic dependencies in PennTreebank data from
BERT’s token embeddings (see also Manning et al.,
2020). Jawahar et al. (2019) experimented with
transformations of the [CLS] token using Tensor
Product Decomposition Networks (McCoy et al.,
2019a), concluding that dependency trees are the
best match among 5 decomposition schemes (al-
though the reported MSE differences are very
small). Miaschi and Dell’Orletta (2020) performs
a range of syntactic probing experiments with con-
catenated token representations as input.
Note that all these approaches look for the
evidence of gold-standard linguistic structures,
and add some amount of extra knowledge to the
probe. Most recently, Wu et al. (2020) proposed a
parameter-free approach based on measuring the impact that one word has on predicting another word within a sequence in the MLM task (Figure 1).

Figure 1: Parameter-free probe for syntactic knowledge: words sharing syntactic subtrees have larger impact on each other in the MLM prediction (Wu et al., 2020).
They concluded that BERT "naturally" learns
some syntactic information, although it is not
very similar to linguistically annotated resources.
The fill-in-the-gap probes of MLM showed that
BERT takes subject-predicate agreement into
account when performing the cloze task (Gold-
berg, 2019; van Schijndel et al., 2019), even for
meaningless sentences and sentences with distrac-
tor clauses between the subject and the verb (Gold-
berg, 2019). A study of negative polarity items
(NPIs) by Warstadt et al. (2019) showed that BERT
is better able to detect the presence of NPIs (e.g.
"ever") and the words that allow their use (e.g.
"whether") than scope violations.
The above claims of syntactic knowledge are be-
lied by the evidence that BERT does not "under-
stand" negation and is insensitive to malformed
input. In particular, its predictions were not al-
tered2 even with shuffled word order, truncated
sentences, removed subjects and objects (Ettinger,
2019). This could mean that either BERT’s syn-
tactic knowledge is incomplete, or it does not
need to rely on it for solving its tasks. The latter
seems more likely, since Glavaš and Vulić (2020)
2See also the recent findings on adversarial triggers, which
get the model to produce a certain output even though they
are not well-formed from the point of view of a human reader
(Wallace et al., 2019a). | 1 | 1 | arxiv2_taclccby4_license.pdf |
report that an intermediate fine-tuning step with
supervised parsing does not make much difference
for downstream task performance.
3.2 Semantic knowledge
To date, more studies have been devoted to BERT’s
knowledge of syntactic rather than semantic phe-
nomena. However, we do have evidence from an
MLM probing study that BERT has some knowl-
edge of semantic roles (Ettinger, 2019). BERT
even displays some preference for the incorrect
fillers for semantic roles that are semantically re-
lated to the correct ones, as opposed to those that
are unrelated (e.g. "to tip a chef" is better than "to
tip a robin", but worse than "to tip a waiter").
Tenney et al. (2019b) showed that BERT en-
codes information about entity types, relations,
semantic roles, and proto-roles, since this infor-
mation can be detected with probing classifiers.
BERT struggles with representations of num-
bers. Addition and number decoding tasks showed
that BERT does not form good representations for
floating point numbers and fails to generalize away
from the training data (Wallace et al., 2019b). A
part of the problem is BERT’s wordpiece tokeniza-
tion, since numbers of similar values can be divided
up into substantially different word chunks.
Out-of-the-box BERT is surprisingly brittle to
named entity replacements: e.g. replacing names
in the coreference task changes 85% of predictions
(Balasubramanian et al., 2020). This suggests that
the model does not actually form a generic idea of
named entities, although its F1 scores on NER prob-
ing tasks are high (Tenney et al., 2019a). Broscheit
(2019) find that fine-tuning BERT on Wikipedia
entity linking "teaches" it additional entity knowl-
edge, which would suggest that it did not absorb all
the relevant entity information during pre-training
on Wikipedia.
3.3 World knowledge
The bulk of evidence about commonsense knowl-
edge captured in BERT comes from practitioners
using it to extract such knowledge. One direct prob-
ing study of BERT reports that BERT struggles
with pragmatic inference and role-based event
knowledge (Ettinger, 2019). BERT also struggles
with abstract attributes of objects, as well as visual
and perceptual properties that are likely to be as-
sumed rather than mentioned (Da and Kasai, 2019).
The MLM component of BERT is easy to
adapt for knowledge induction by filling in the
Figure 2: BERT world knowledge (Petroni et al., 2019): querying a structured knowledge base versus querying a masked language model for factual knowledge, e.g. completing "Dante was born in [MASK]." with Florence.
blanks (e.g. "Cats like to chase [___]"). Petroni
et al. (2019) showed that, for some relation types,
vanilla BERT is competitive with methods rely-
ing on knowledge bases (Figure 2), and Roberts
et al. (2020) show the same for open-domain QA
using T5 model (Raffel et al., 2019). Davison et al.
(2019) suggest that it generalizes better to unseen
data. In order to retrieve BERT’s knowledge, we
need good template sentences, and there is work
on their automatic extraction and augmentation
(Bouraoui et al., 2019; Jiang et al., 2019b).
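As a concrete illustration of this fill-in-the-blank setup, the sketch below queries a pretrained BERT through its masked-language-modeling head; it is a minimal example using the HuggingFace transformers fill-mask pipeline with the standard bert-base-uncased checkpoint, and the prompts are illustrative rather than taken from the cited studies.

```python
from transformers import pipeline

# Load a pretrained BERT together with its masked-language-modeling head.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Cloze-style queries: BERT ranks candidate fillers for the [MASK] slot.
for prompt in [
    "Dante was born in [MASK].",   # factual (relational) knowledge
    "Cats like to chase [MASK].",  # commonsense knowledge
]:
    print(prompt)
    for p in fill_mask(prompt, top_k=3):
        print(f"  {p['token_str']:>10}  score={p['score']:.3f}")
```

The ranked fillers are exactly the "knowledge" such probing studies inspect; the wording of the template matters a great deal, which is why automatic template extraction and augmentation are active research directions.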
However, BERT cannot reason based on its
world knowledge. Forbes et al. (2019) show that
BERT can "guess" the affordances and properties of
many objects, but can not reason about the relation-
ship between properties and affordances. For ex-
ample, it "knows" that people can walk into houses,
and that houses are big, but it cannot infer that
houses are bigger than people. Zhou et al. (2020)
and Richardson and Sabharwal (2019) also show
that the performance drops with the number of nec-
essary inference steps. Some of BERT’s world
knowledge success comes from learning stereotypi-
cal associations (Poerner et al., 2019), e.g., a person
with an Italian-sounding name is predicted to be
Italian, even when it is incorrect.
3.4 Limitations
Multiple probing studies in section 3 and section 4
report that BERT possesses a surprising amount of
syntactic, semantic, and world knowledge. How-
ever, Tenney et al. (2019a) remarks, “the fact that
a linguistic pattern is not observed by our probing
classifier does not guarantee that it is not there, and
the observation of a pattern does not tell us how it
is used." There is also the issue of how complex a
probe should be allowed to be (Liu et al., 2019a). If
a more complex probe recovers more information,
to what extent are we still relying on the original
model?
Furthermore, different probing methods may
lead to complementary or even contradictory con-
clusions, which makes a single test (as in most stud- | 2 | 2 | arxiv2_taclccby4_license.pdf |
Figure 3: Attention patterns in BERT (Kovaleva et al., 2019): vertical, diagonal, vertical + diagonal, block, and heterogeneous.
ies) insufficient (Warstadt et al., 2019). A given
method might also favor one model over another,
e.g., RoBERTa trails BERT with one tree extraction
method, but leads with another (Htut et al., 2019).
The choice of linguistic formalism also matters
(Kuznetsov and Gurevych, 2020).
In view of all that, the alternative is to focus on
identifying what BERT actually relies on at infer-
ence time. This direction is currently pursued both
at the level of architecture blocks (to be discussed
in detail in subsection 6.3), and at the level of in-
formation encoded in model weights. Amnesic
probing (Elazar et al., 2020) aims to specifically
remove certain information from the model and see
how it changes performance, finding, for example,
that language modeling does rely on part-of-speech
information.
Another direction is information-theoretic prob-
ing. Pimentel et al. (2020) operationalize prob-
ing as estimating mutual information between the
learned representation and a given linguistic prop-
erty, which highlights that the focus should be not
on the amount of information contained in a rep-
resentation, but rather on how easily it can be ex-
tracted from it. Voita and Titov (2020) quantify
the amount of effort needed to extract information
from a given representation as minimum descrip-
tion length needed to communicate both the probe
size and the amount of data required for it to do
well on a task.
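In slightly more formal terms (our paraphrase, not the exact notation of either paper), the two proposals can be summarized as estimating the mutual information between a representation R and a linguistic property Y, and as the online (prequential) codelength accumulated by probes trained on growing prefixes of the data:

```latex
I(R; Y) \;=\; H(Y) - H(Y \mid R)

L_{\mathrm{online}} \;=\; t_1 \log_2 |\mathcal{Y}|
  \;+\; \sum_{i=1}^{k-1} -\log_2 p_{\theta_i}\!\left(y_{t_i+1:t_{i+1}} \mid r_{t_i+1:t_{i+1}}\right)
```

Here θ_i is a probe trained on the first t_i examples; representations from which Y is easy to extract yield both high mutual information and a short codelength.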
4 Localizing linguistic knowledge
4.1 BERT embeddings
In studies of BERT, the term "embedding" refers
to the output of a Transformer layer (typically, the
final one). Both conventional static embeddings
(Mikolov et al., 2013) and BERT-style embeddings
can be viewed in terms of mutual information max-
imization (Kong et al., 2019), but the latter are
contextualized. Every token is represented by a
vector dependent on the particular context of occur-
rence, and contains at least some information about
that context (Miaschi and Dell’Orletta, 2020).
Several studies reported that distilled contex-
tualized embeddings better encode lexical se-
mantic information (i.e. they are better at tra-
ditional word-level tasks such as word similarity).
The methods to distill a contextualized represen-
tation into static include aggregating the informa-
tion across multiple contexts (Akbik et al., 2019;
Bommasani et al., 2020), encoding "semantically
bleached" sentences that rely almost exclusively on
the meaning of a given word (e.g. "This is <>")
(May et al., 2019), and even using contextualized
embeddings to train static embeddings (Wang et al.,
2020d).
But this is not to say that there is no room for
improvement. Ethayarajh (2019) measure how
similar the embeddings for identical words are in
every layer, reporting that later BERT layers pro-
duce more context-specific representations3. They
also find that BERT embeddings occupy a narrow
cone in the vector space, and this effect increases
from the earlier to later layers. That is, two ran-
dom words will on average have a much higher
cosine similarity than expected if embeddings
were directionally uniform (isotropic) . Since
isotropy was shown to be beneficial for static word
embeddings (Mu and Viswanath, 2018), this might
be a fruitful direction to explore for BERT.
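A rough way to observe this effect (a simplified sketch under our own setup, not Ethayarajh's exact protocol) is to embed a few unrelated sentences and measure the average cosine similarity between token vectors drawn from different sentences, layer by layer; for isotropic embeddings the values would be close to zero.

```python
import torch
from itertools import combinations
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

sentences = [
    "The cat sat on the mat.",
    "Stock prices fell sharply on Monday.",
    "She plays the violin beautifully.",
]

# Per-layer token embeddings for each sentence, excluding [CLS] and [SEP].
per_sentence = []
with torch.no_grad():
    for s in sentences:
        enc = tokenizer(s, return_tensors="pt")
        hidden = model(**enc).hidden_states          # tuple: embeddings + 12 layers
        per_sentence.append([h[0, 1:-1] for h in hidden])

for layer in range(len(per_sentence[0])):
    sims = []
    # Only compare tokens coming from *different* sentences.
    for a, b in combinations(range(len(sentences)), 2):
        x = torch.nn.functional.normalize(per_sentence[a][layer], dim=-1)
        y = torch.nn.functional.normalize(per_sentence[b][layer], dim=-1)
        sims.append((x @ y.T).mean().item())
    print(f"layer {layer:2d}: mean inter-sentence cosine = {sum(sims) / len(sims):.3f}")
```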
Since BERT embeddings are contextualized, an
interesting question is to what extent they cap-
ture phenomena like polysemy and homonymy.
There is indeed evidence that BERT’s contextu-
alized embeddings form distinct clusters corre-
sponding to word senses (Wiedemann et al., 2019;
Schmidt and Hofmann, 2020), making BERT suc-
cessful at word sense disambiguation task. How-
ever, Mickus et al. (2019) note that the representa-
tions of the same word depend on the position
of the sentence in which it occurs , likely due to
the NSP objective. This is not desirable from the
linguistic point of view, and could be a promising
3 Voita et al. (2019a) look at the evolution of token embed-
dings, showing that in the earlier Transformer layers, MLM
forces the acquisition of contextual information at the expense
of the token identity, which gets recreated in later layers. | 3 | 3 | arxiv2_taclccby4_license.pdf |
avenue for future work.
The above discussion concerns token embed-
dings, but BERT is typically used as a sentence or
text encoder. The standard way to generate sen-
tence or text representations for classification is
to use the [CLS] token, but alternatives are also
being discussed, including concatenation of token
representations (Tanaka et al., 2020), normalized
mean (Tanaka et al., 2020), and layer activations
(Ma et al., 2019). See Toshniwal et al. (2020) for a
systematic comparison of several methods across
tasks and sentence encoders.
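The sketch below contrasts two of these options, the [CLS] vector and a masked mean over token vectors, for a single input; it is an illustrative recipe rather than a recommendation, using the standard base checkpoint.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

enc = tokenizer("BERT is typically used as a sentence encoder.", return_tensors="pt")
with torch.no_grad():
    last_hidden = model(**enc).last_hidden_state       # (1, seq_len, 768)

# Option 1: the final-layer [CLS] token.
cls_vector = last_hidden[:, 0]

# Option 2: mean over real tokens (padding masked out), then L2-normalized.
mask = enc["attention_mask"].unsqueeze(-1)              # (1, seq_len, 1)
mean_vector = (last_hidden * mask).sum(1) / mask.sum(1)
mean_vector = torch.nn.functional.normalize(mean_vector, dim=-1)

print(cls_vector.shape, mean_vector.shape)              # both torch.Size([1, 768])
```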
4.2 Self-attention heads
Several studies proposed classification of attention
head types. Raganato and Tiedemann (2018) dis-
cuss attending to the token itself, previous/next
tokens and the sentence end. Clark et al. (2019)
distinguish between attending to previous/next to-
kens, [CLS], [SEP], punctuation, and "attending
broadly" over the sequence. Kovaleva et al. (2019)
propose 5 patterns shown in Figure 3.
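Such taxonomies are built from raw attention maps, which are easy to extract; as a simplified stand-in for the clustering used in these studies, the sketch below measures for every head how much attention mass falls on [CLS], [SEP], and punctuation.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)
model.eval()

enc = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**enc).attentions    # 12 tensors of shape (1, heads, seq, seq)

tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
special = torch.tensor([t in ("[CLS]", "[SEP]", ".", ",") for t in tokens])

for layer, att in enumerate(attentions):
    # For each head: total attention to special tokens, averaged over query positions.
    to_special = att[0, :, :, special].sum(-1).mean(-1)
    print(f"layer {layer:2d}:", [round(x, 2) for x in to_special.tolist()])
```

Heads whose mass concentrates on [CLS]/[SEP]/punctuation correspond to the "vertical" pattern above; the remainder are candidates for the "heterogeneous" pattern discussed next.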
4.2.1 Heads with linguistic functions
The "heterogeneous" attention pattern shown in
Figure 3 could potentially be linguistically inter-
pretable, and a number of studies focused on iden-
tifying the functions of self-attention heads. In
particular, some BERT heads seem to specialize
in certain types of syntactic relations. Htut et al.
(2019) and Clark et al. (2019) report that there
are BERT heads that attended significantly more
than a random baseline to words in certain syntac-
tic positions. The datasets and methods used in
these studies differ, but they both find that there are
heads that attend to words in obj role more than
the positional baseline. The evidence for nsubj,
advmod, and amod varies between these two stud-
ies. The overall conclusion is also supported by
Voita et al. (2019b)’s study of the base Transformer
in machine translation context. Hoover et al. (2019)
hypothesize that even complex dependencies like
dobj are encoded by a combination of heads
rather than a single head, but this work is limited
to qualitative analysis. Zhao and Bethard (2020)
looked specifically for the heads encoding negation
scope.
Both Clark et al. (2019) and Htut et al. (2019)
conclude that no single head has the complete
syntactic tree information, in line with evidence
of partial knowledge of syntax (cf. subsection 3.1).
However, Clark et al. (2019) identify a BERT head
that can be directly used as a classifier to perform
coreference resolution on par with a rule-based
system, which by itself would seem to require quite
a lot of syntactic knowledge.
Lin et al. (2019) present evidence that atten-
tion weights are weak indicators of subject-
verb agreement and reflexive anaphora. Instead
of serving as strong pointers between tokens that
should be related, BERT’s self-attention weights
were close to a uniform attention baseline, but there
was some sensitivity to different types of distrac-
tors coherent with psycholinguistic data. This is
consistent with conclusions by Ettinger (2019).
To our knowledge, morphological information
in BERT heads has not been addressed, but with
the sparse attention variant by Correia et al. (2019)
in the base Transformer, some attention heads ap-
pear to merge BPE-tokenized words. For semantic
relations, there are reports of self-attention heads
encoding core frame-semantic relations (Kovaleva
et al., 2019), as well as lexicographic and common-
sense relations (Cui et al., 2020).
The overall popularity of self-attention as an in-
terpretability mechanism is due to the idea that
"attention weight has a clear meaning: how much
a particular word will be weighted when comput-
ing the next representation for the current word"
(Clark et al., 2019). This view is currently debated
(Jain and Wallace, 2019; Serrano and Smith, 2019;
Wiegreffe and Pinter, 2019; Brunner et al., 2020),
and in a multi-layer model where attention is fol-
lowed by non-linear transformations, the patterns
in individual heads do not provide a full picture.
Also, while many current papers are accompanied
by attention visualizations, and there is a growing
number of visualization tools (Vig, 2019; Hoover
et al., 2019), the visualization is typically limited
to qualitative analysis (often with cherry-picked
examples) (Belinkov and Glass, 2019), and should
not be interpreted as definitive evidence.
4.2.2 Attention to special tokens
Kovaleva et al. (2019) show that most self-
attention heads do not directly encode any non-
trivial linguistic information, at least when fine-
tuned on GLUE (Wang et al., 2018), since only less
than 50% of heads exhibit the "heterogeneous" pat-
tern. Much of the model produced the vertical pat-
tern (attention to [CLS], [SEP], and punctuation
tokens), consistent with the observations by Clark
et al. (2019). This redundancy is likely related to
the overparameterization issue (see section 6). | 4 | 4 | arxiv2_taclccby4_license.pdf |
More recently, Kobayashi et al. (2020) showed
that the norms of attention-weighted input vec-
tors, which yield a more intuitive interpretation
of self-attention, reduce the attention to special to-
kens. However, even when the attention weights
are normed, it is still not the case that most heads
that do the "heavy lifting" are even potentially in-
terpretable (Prasanna et al., 2020).
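In our notation (a simplification of the formulation in Kobayashi et al. (2020)), each position i in a head is updated as a weighted sum of transformed input vectors, and the proposal is to analyze the norm of each weighted summand rather than the bare attention weight:

```latex
y_i \;=\; \sum_j \alpha_{i,j}\, f(x_j),
\qquad
\text{contribution}(j \to i) \;=\; \bigl\lVert \alpha_{i,j}\, f(x_j) \bigr\rVert
```

Because the transformed vectors f(x_j) of frequent special tokens tend to have small norms, their apparent dominance in the raw weights α largely disappears under this measure.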
One methodological choice in many studies
of attention is to focus on inter-word attention and
simply exclude special tokens (e.g. Lin et al. (2019)
and Htut et al. (2019)). However, if attention to
special tokens actually matters at inference time,
drawing conclusions purely from inter-word atten-
tion patterns does not seem warranted.
The functions of special tokens are not yet well
understood. [CLS] is typically viewed as an ag-
gregated sentence-level representation (although
all token representations also contain at least some
sentence-level information, as discussed in subsec-
tion 4.1); in that case, we may not see e.g. full
syntactic trees in inter-word attention because part
of that information is actually packed in [CLS].
Clark et al. (2019) experiment with encoding
Wikipedia paragraphs with base BERT to consider
specifically the attention to special tokens, noting
that heads in early layers attend more to [CLS],
in middle layers to [SEP], and in final layers to
periods and commas. They hypothesize that its
function might be one of "no-op", a signal to ig-
nore the head if its pattern is not applicable to the
current case. As a result, for example, [SEP]
gets increased attention starting in layer 5, but its
importance for prediction drops. However, after
fine-tuning both [SEP] and [CLS] get a lot of
attention, depending on the task (Kovaleva et al.,
2019). Interestingly, BERT also pays a lot of at-
tention to punctuation, which Clark et al. (2019)
explain by the fact that periods and commas are
simply almost as frequent as the special tokens, and
so the model might learn to rely on them for the
same reasons.
4.3 BERT layers
The first layer of BERT receives as input a combina-
tion of token, segment, and positional embeddings.
It stands to reason that the lower layers have
the most information about linear word order.
Lin et al. (2019) report a decrease in the knowledge
of linear word order around layer 4 in BERT-base.
This is accompanied by an increased knowledge
Figure 4: BERT layer transferability; columns correspond to probing tasks (Liu et al., 2019a).
of hierarchical sentence structure, as detected by
the probing tasks of predicting the token index, the
main auxiliary verb and the sentence subject.
There is a wide consensus in studies with differ-
ent tasks, datasets and methodologies that syntac-
tic information is most prominent in the middle
layers of BERT.4 Hewitt and Manning (2019) had
the most success reconstructing syntactic tree depth
from the middle BERT layers (6-9 for base-BERT,
14-19 for BERT-large). Goldberg (2019) reports
the best subject-verb agreement around layers 8-
9, and the performance on syntactic probing tasks
used by Jawahar et al. (2019) also seems to peak
around the middle of the model. The prominence
of syntactic information in the middle BERT layers
is related to Liu et al. (2019a)’s observation that the
middle layers of Transformers are best-performing
overall and the most transferable across tasks (see
Figure 4).
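The generic recipe behind such layer-wise comparisons is to freeze BERT, take each layer's hidden states as features, and fit a lightweight classifier per layer on some labelled property. The sketch below illustrates this with a toy sentence-level property and a logistic-regression probe; the data and the property are placeholders, not the benchmarks used in the cited studies.

```python
import torch
from sklearn.linear_model import LogisticRegression
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

# Toy probing data: (sentence, binary label for some placeholder property).
train = [("the cat sleeps", 0), ("the cats sleep", 1),
         ("a dog barks", 0), ("two dogs bark", 1)]

def layer_features(text):
    """One mean-pooled sentence vector per layer (embeddings + 12 layers)."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).hidden_states
    return [h[0, 1:-1].mean(0).numpy() for h in hidden]

features = [layer_features(t) for t, _ in train]
labels = [y for _, y in train]

for layer in range(len(features[0])):
    X = [f[layer] for f in features]
    probe = LogisticRegression(max_iter=1000).fit(X, labels)
    print(f"layer {layer:2d}: train accuracy = {probe.score(X, labels):.2f}")
```

In real probing studies the probe is evaluated on held-out data and compared against random and non-contextual baselines; the resulting per-layer curves are what figures such as Figure 4 summarize.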
There is conflicting evidence about syntactic
chunks. Tenney et al. (2019a) conclude that "the
basic syntactic information appears earlier in the
network while high-level semantic features appear
at the higher layers", drawing parallels between
this order and the order of components in a typical
NLP pipeline – from POS-tagging to dependency
parsing to semantic role labeling. Jawahar et al.
(2019) also report that the lower layers were more
useful for chunking, while middle layers were more
useful for parsing. At the same time, the probing
experiments by Liu et al. (2019a) find the opposite:
both POS-tagging and chunking were performed
best at the middle layers, in both BERT-base and
BERT-large. However, all three studies use differ-
ent suites of probing tasks.
The final layers of BERT are the most task-
specific. In pre-training, this means specificity to
the MLM task, which explains why the middle
4These BERT results are also compatible with findings by
Vig and Belinkov (2019), who report the highest attention to
tokens in dependency relations in the middle layers of GPT-2. | 5 | 5 | arxiv2_taclccby4_license.pdf |
layers are more transferable (Liu et al., 2019a). In
fine-tuning, it explains why the final layers change
the most (Kovaleva et al., 2019), and why restoring
the weights of lower layers of fine-tuned BERT to
their original values does not dramatically hurt the
model performance (Hao et al., 2019).
Tenney et al. (2019a) suggest that while syntactic
information appears early in the model and can be
localized, semantics is spread across the entire
model, which explains why certain non-trivial ex-
amples get solved incorrectly at first but correctly
at the later layers. This is rather to be expected:
semantics permeates all language, and linguists de-
bate whether meaningless structures can exist at
all (Goldberg, 2006, p.166-182). But this raises
the question of what stacking more Transformer
layers in BERT actually achieves in terms of the
spread of semantic knowledge, and whether that
is beneficial. Tenney et al. compared BERT-base
and BERT-large, and found that the overall pattern
of cumulative score gains is the same, only more
spread out in the larger model.
Note that Tenney et al. (2019a)’s experiments
concern sentence-level semantic relations; Cui et al.
(2020) report that the encoding of ConceptNet se-
mantic relations is the worst in the early layers and
increases towards the top. Jawahar et al. (2019)
place "surface features in lower layers, syntactic
features in middle layers and semantic features in
higher layers", but their conclusion is surprising,
given that only one semantic task in this study actu-
ally topped at the last layer, and three others peaked
around the middle and then considerably degraded
by the final layers.
5 Training BERT
This section reviews the proposals to optimize the
training and architecture of the original BERT.
5.1 Model architecture choices
To date, the most systematic study of BERT archi-
tecture was performed by Wang et al. (2019b), who
experimented with the number of layers, heads, and
model parameters, varying one option and freez-
ing the others. They concluded that the number
of heads was not as significant as the number
of layers. That is consistent with the findings
of Voita et al. (2019b) and Michel et al. (2019)
(section 6), and also the observation by Liu et al.
(2019a) that the middle layers were the most trans-
ferable. Larger hidden representation size was con-
sistently better, but the gains varied by setting.
All in all, changes in the number of heads
and layers appear to perform different func-
tions. The issue of model depth must be related to
the information flow from the most task-specific
layers closer to the classifier (Liu et al., 2019a),
to the initial layers which appear to be the most
task-invariant (Hao et al., 2019), and where the
tokens resemble the input tokens the most (Brun-
ner et al., 2020) (see subsection 4.3). If that is the
case, a deeper model has more capacity to encode
information that is not task-specific.
On the other hand, many self-attention heads
in vanilla BERT seem to naturally learn the same
patterns (Kovaleva et al., 2019). This explains
why pruning them does not have too much impact.
The question that arises from this is how far we
could get with intentionally encouraging diverse
self-attention patterns: theoretically, this would
mean increasing the amount of information in the
model with the same number of weights. Raganato
et al. (2020) show for Transformer-based machine
translation we can simply pre-set the patterns that
we already know the model would learn, instead of
learning them from scratch.
Vanilla BERT is symmetric and balanced in
terms of self-attention and feed-forward layers, but
it may not have to be. For the base Transformer,
Press et al. (2020) report benefits from more self-
attention sublayers at the bottom and more feedfor-
ward sublayers at the top.
5.2 Improvements to the training regime
Liu et al. (2019b) demonstrate the benefits of
large-batch training: with a batch size of 8k, both the
language model perplexity and downstream task
performance are improved. They also publish their
recommendations for other parameters. You et al.
(2019) report that with a batch size of 32k BERT’s
training time can be significantly reduced with no
degradation in performance. Zhou et al. (2019) ob-
serve that the normalization of the trained [CLS]
token stabilizes the training and slightly improves
performance on text classification tasks.
Gong et al. (2019) note that, since self-attention
patterns in higher and lower layers are similar, the
model training can be done in a recursive man-
ner, where the shallower version is trained first and
then the trained parameters are copied to deeper
layers. Such a "warm-start" can lead to a 25% faster
training without sacrificing performance. | 6 | 6 | arxiv2_taclccby4_license.pdf |
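A minimal version of the warm-start idea (our sketch, not Gong et al.'s exact procedure) trains a shallow encoder first and then initializes a deeper one by copying each trained layer into the corresponding upper block:

```python
from transformers import BertConfig, BertModel

# Step 1: train (or load) a shallow 6-layer encoder.
shallow = BertModel(BertConfig(num_hidden_layers=6))

# Step 2: build a 12-layer encoder and copy the trained layers into both halves.
deep = BertModel(BertConfig(num_hidden_layers=12))
deep.embeddings.load_state_dict(shallow.embeddings.state_dict())
for i, layer in enumerate(shallow.encoder.layer):
    deep.encoder.layer[i].load_state_dict(layer.state_dict())      # lower half
    deep.encoder.layer[i + 6].load_state_dict(layer.state_dict())  # upper half

# `deep` then continues pre-training from this warm start.
```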
5.3 Pre-training BERT
The original BERT is a bidirectional Transformer
pre-trained on two tasks: next sentence prediction
(NSP) and masked language model (MLM) (sec-
tion 2). Multiple studies have come up with alter-
native training objectives to improve on BERT,
which could be categorized as follows:
• How to mask. Raffel et al. (2019) systemati-
cally experiment with corruption rate and cor-
rupted span length. Liu et al. (2019b) propose
diverse masks for training examples within
an epoch, while Baevski et al. (2019) mask
every token in a sequence instead of a random
selection. Clinchant et al. (2019) replace the
MASK token with [UNK] token, to help the
model learn a representation for unknowns
that could be useful for translation. Song et al.
(2020) maximize the amount of information
available to the model by conditioning on both
masked and unmasked tokens, and letting the
model see how many tokens are missing.
• What to mask. Masks can be applied to full
words instead of word-pieces (Devlin et al.,
2019; Cui et al., 2019). Similarly, we can
mask spans rather than single tokens (Joshi
et al., 2020), predicting how many are missing
(Lewis et al., 2019). Masking phrases and
named entities (Sun et al., 2019b) improves
representation of structured knowledge (a whole-word masking sketch follows this list).
• Where to mask. Lample and Conneau (2019)
use arbitrary text streams instead of sentence
pairs and subsample frequent outputs similar
to Mikolov et al. (2013). Bao et al. (2020)
combine the standard autoencoding MLM
with partially autoregressive LM objective us-
ing special pseudo mask tokens.
• Alternatives to masking. Raffel et al. (2019)
experiment with replacing and dropping spans,
Lewis et al. (2019) explore deletion, infilling,
sentence permutation and document rotation,
and Sun et al. (2019c) predict whether a to-
ken is capitalized and whether it occurs in
other segments of the same document. Yang
et al. (2019) train on different permutations
of word order in the input sequence, maximiz-
ing the probability of the original word order
(cf. the n-gram word order reconstruction task
(Wang et al., 2019a)). Clark et al. (2020) de-
tect tokens that were replaced by a generator
network rather than masked.
• NSP alternatives. Removing NSP does not
hurt or slightly improves performance (Liu
et al., 2019b; Joshi et al., 2020; Clinchant
et al., 2019). Wang et al. (2019a) and Cheng
et al. (2019) replace NSP with the task of
predicting both the next and the previous sen-
tences. Lan et al. (2020a) replace the negative
NSP examples by swapped sentences from
positive examples, rather than sentences from
different documents. ERNIE 2.0 includes sen-
tence reordering and sentence distance pre-
diction. Bai et al. (2020) replace both NSP
and token position embeddings by a combina-
tion of paragraph, sentence, and token index
embeddings. Li and Choi (2020) experiment
with utterance order prediction task for multi-
party dialogue (and also MLM at the level of
utterances and the whole dialogue).
• Other tasks. Sun et al. (2019c) propose si-
multaneous learning of 7 tasks, including dis-
course relation classification and predicting
whether a segment is relevant for IR. Guu
et al. (2020) include a latent knowledge re-
triever in language model pretraining. Wang
et al. (2020c) combine MLM with knowledge
base completion objective. Glass et al. (2020)
replace MLM with span prediction task (as
in extractive question answering), where the
model is expected to provide the answer not
from its own weights, but from a different pas-
sage containing the correct answer (a relevant
search engine query snippet).
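As an illustration of the "what to mask" choice above, the sketch below implements a simplified whole-word masking variant: instead of masking wordpieces independently, all pieces of a sampled word are masked together (a toy sketch, not the exact implementation of any cited system).

```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

def whole_word_mask(text, mask_prob=0.15):
    pieces = tokenizer.tokenize(text)
    # Group wordpieces into words: a piece starting with "##" continues the previous word.
    words, current = [], []
    for p in pieces:
        if p.startswith("##") and current:
            current.append(p)
        else:
            if current:
                words.append(current)
            current = [p]
    if current:
        words.append(current)
    # Mask whole words rather than individual pieces.
    masked = []
    for word in words:
        masked.extend(["[MASK]"] * len(word) if random.random() < mask_prob else word)
    return masked

print(whole_word_mask("philammon played the lute"))
```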
Another obvious source of improvement is pre-
training data. Several studies explored the ben-
efits of increasing the corpus volume (Liu et al.,
2019b; Conneau et al., 2019; Baevski et al., 2019)
and longer training (Liu et al., 2019b). The data
also does not have to be raw text: there is a num-
ber of efforts to incorporate explicit linguistic in-
formation, both syntactic (Sundararaman et al.,
2019) and semantic (Zhang et al., 2020). Wu et al.
(2019b) and Kumar et al. (2020) include the label
for a given sequence from an annotated task dataset.
Schick and Schütze (2020) separately learn repre-
sentations for rare words.
Although BERT is already actively used as a
source of world knowledge (see subsection 3.3),
there is also work on explicitly supplying struc-
tured knowledge. One approach is entity-
enhanced models. For example, Peters et al.
(2019a); Zhang et al. (2019) include entity em- | 7 | 7 | arxiv2_taclccby4_license.pdf |
Figure 5: Pre-trained weights help BERT find wider
optima in fine-tuning on MRPC (right) than training
from scratch (left) (Hao et al., 2019)
beddings as input for training BERT, while Po-
erner et al. (2019) adapt entity vectors to BERT
representations. As mentioned above, Wang et al.
(2020c) integrate knowledge not through entity em-
beddings, but through additional pre-training ob-
jective of knowledge base completion. Sun et al.
(2019b,c) modify the standard MLM task to mask
named entities rather than random words, and Yin
et al. (2020) train with MLM objective over both
text and linearized table data. Wang et al. (2020a)
enhance RoBERTa with both linguistic and factual
knowledge with task-specific adapters.
Pre-training is the most expensive part of train-
ing BERT, and it would be informative to know
how much benefit it provides. On some tasks, a
randomly initialized and fine-tuned BERT obtains
competitive or higher results than the pre-trained
BERT with the task classifier and frozen weights
(Kovaleva et al., 2019). The consensus in the com-
munity is that pre-training does help in most situa-
tions, but the degree and its exact contribution re-
quires further investigation. Prasanna et al. (2020)
found that most weights of pre-trained BERT are
useful in fine-tuning, although there are "better"
and "worse" subnetworks. One explanation is that
pre-trained weights help the fine-tuned BERT find
wider and flatter areas with smaller generalization
error, which makes the model more robust to over-
fitting (see Figure 5 from Hao et al. (2019)).
Given the large number and variety of proposed
modifications, one would wish to know how much
impact each of them has. However, due to the
overall trend towards large model sizes, systematic
ablations have become expensive. Most new mod-
els claim superiority on standard benchmarks, but
gains are often marginal, and estimates of model
stability and significance testing are very rare.
5.4 Fine-tuning BERT
Pre-training + fine-tuning workflow is a crucial
part of BERT. The former is supposed to provide
task-independent knowledge, and the latter would
presumably teach the model to rely more on the
representations useful for the task at hand.
Kovaleva et al. (2019) did not find that to be the
case for BERT fine-tuned on GLUE tasks 5: dur-
ing fine-tuning for 3 epochs, most changes oc-
curred in the last two layers of the model, but those
changes caused self-attention to focus on [SEP]
rather than on linguistically interpretable patterns.
It is understandable why fine-tuning would increase
the attention to [CLS], but not [SEP]. If Clark
et al. (2019) are correct that [SEP] serves as "no-
op" indicator, fine-tuning basically tells BERT what
to ignore.
Several studies explored the possibilities of im-
proving the fine-tuning of BERT:
• Taking more layers into account: learning
a complementary representation of the infor-
mation in deep and output layers (Yang and
Zhao, 2019), using a weighted combination
of all layers instead of the final one (Su and
Cheng, 2019; Kondratyuk and Straka, 2019),
and layer dropout (Kondratyuk and Straka,
2019).
• Two-stage fine-tuning introduces an inter-
mediate supervised training stage between
pre-training and fine-tuning (Phang et al.,
2019; Garg et al., 2020; Arase and Tsujii,
2019; Pruksachatkun et al., 2020; Glavaš and
Vulić, 2020). Ben-David et al. (2020) propose
a pivot-based variant of MLM to fine-tune
BERT for domain adaptation.
• Adversarial token perturbations improve
robustness of the model (Zhu et al., 2019).
• Adversarial regularization in combination
with Bregman Proximal Point Optimization
helps alleviate pre-trained knowledge forget-
ting and therefore prevents BERT from overfit-
ting to downstream tasks (Jiang et al., 2019a).
• Mixout regularization improves the stability
of BERT fine-tuning even for a small number
of training examples (Lee et al., 2019).
With large models, even fine-tuning becomes ex-
pensive, but Houlsby et al. (2019) show that it can
5Kondratyuk and Straka (2019) suggest that fine-tuning on
Universal Dependencies does result in syntactically meaning-
ful attention patterns, but there was no quantitative evaluation. | 8 | 8 | arxiv2_taclccby4_license.pdf |
be successfully approximated with adapter mod-
ules. They achieve competitive performance on
26 classification tasks at a fraction of the computa-
tional cost. Adapters in BERT were also used for
multi-task learning (Stickland and Murray, 2019)
and cross-lingual transfer (Artetxe et al., 2019). An
alternative to fine-tuning is extracting features from
frozen representations, but fine-tuning works better
for BERT (Peters et al., 2019b).
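The adapter idea is easy to state in code: a small bottleneck network with a residual connection is inserted after a Transformer sublayer, and only these added parameters (plus layer norms and the task head) are updated during fine-tuning. Below is a generic bottleneck adapter in the spirit of Houlsby et al. (2019); the dimensions and placement are illustrative.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: project down, apply a nonlinearity, project back up,
    and add the result to the input (residual connection)."""

    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()
        # Near-identity initialization so the pretrained model is unchanged at first.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# During fine-tuning the backbone is frozen and only adapters (and the task head)
# receive gradients, e.g.: for p in bert.parameters(): p.requires_grad = False
x = torch.randn(2, 16, 768)
print(Adapter()(x).shape)   # torch.Size([2, 16, 768])
```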
A big methodological challenge in the current
NLP is that the reported performance improve-
ments of new models may well be within varia-
tion induced by environment factors (Crane, 2018).
BERT is not an exception. Dodge et al. (2020)
report significant variation for BERT fine-tuned
on GLUE tasks due to both weight initialization
and training data order. They also propose early
stopping on the less-promising seeds.
Although we hope that the above observations
may be useful for the practitioners, this section
does not exhaust the current research on fine-tuning
and its alternatives. For example, we do not cover
such topics as Siamese architectures, policy gradi-
ent training, automated curriculum learning, and
others.
6 How big should BERT be?
6.1 Overparameterization
Transformer-based models keep growing by or-
ders of magnitude: the 110M parameters of base
BERT are now dwarfed by 17B parameters of
Turing-NLG (Microsoft, 2020), which is dwarfed
by 175B of GPT-3 (Brown et al., 2020). This trend
raises concerns about computational complexity
of self-attention (Wu et al., 2019a), environmental
issues (Strubell et al., 2019; Schwartz et al., 2019),
fair comparison of architectures (Aßenmacher and
Heumann, 2020), and reproducibility.
Human language is incredibly complex, and
would perhaps take many more parameters to de-
scribe fully, but the current models do not make
good use of the parameters they already have. Voita
et al. (2019b) showed that all but a few Trans-
former heads could be pruned without signif-
icant losses in performance. For BERT, Clark
et al. (2019) observe that most heads in the same
layer show similar self-attention patterns (perhaps
related to the fact that the output of all self-attention
heads in a layer is passed through the same MLP),
which explains why Michel et al. (2019) were able
to reduce most layers to a single head.
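In practice such head-ablation experiments can be run by masking or physically removing heads; the sketch below uses the head-pruning utility available in the HuggingFace implementation, with an arbitrary (illustrative) choice of which heads to drop.

```python
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
print("heads per layer before pruning:", model.config.num_attention_heads)

# Remove heads 0-10 in layer 0 (leaving a single head) and heads 2 and 5 in layer 11.
model.prune_heads({0: list(range(11)), 11: [2, 5]})

# The attention projections in those layers are now physically smaller.
print("layer 0 query out_features :", model.encoder.layer[0].attention.self.query.out_features)
print("layer 11 query out_features:", model.encoder.layer[11].attention.self.query.out_features)
```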
Depending on the task, some BERT heads/layers
are not only redundant (Kao et al., 2020), but also
harmful to the downstream task performance. Pos-
itive effect from head disabling was reported for
machine translation (Michel et al., 2019), abstrac-
tive summarization (Baan et al., 2019), and GLUE
tasks (Kovaleva et al., 2019). Additionally, Ten-
ney et al. (2019a) examine the cumulative gains of
their structural probing classifier, observing that in
5 out of 8 probing tasks some layers cause a drop
in scores (typically in the final layers). Gordon
et al. (2020) find that 30–40% of the weights can
be pruned without impact on downstream tasks.
In general, larger BERT models perform better
(Liu et al., 2019a; Roberts et al., 2020), but not
always: BERT-base outperformed BERT-large on
subject-verb agreement (Goldberg, 2019) and sen-
tence subject detection (Lin et al., 2019). Given
the complexity of language, and amounts of pre-
training data, it is not clear why BERT ends up with
redundant heads and layers. Clark et al. (2019) sug-
gest that one possible reason is the use of attention
dropouts, which causes some attention weights to
be zeroed-out during training.
6.2 Compression techniques
Given the above evidence of overparameteriza-
tion, it does not come as a surprise that BERT
can be efficiently compressed with minimal ac-
curacy loss, which would be highly desirable for
real-world applications. Such efforts to date are
summarized in Table 1. The main approaches are
knowledge distillation, quantization, and pruning.
The studies in the knowledge distillation
framework (Hinton et al., 2014) use a smaller
student-network trained to mimic the behavior of
a larger teacher-network. For BERT, this has been
achieved through experiments with loss functions
(Sanh et al., 2019b; Jiao et al., 2019), mimicking
the activation patterns of individual portions of the
teacher network (Sun et al., 2019a), and knowledge
transfer at the pre-training (Turc et al., 2019; Jiao
et al., 2019; Sun et al., 2020) or fine-tuning stage
(Jiao et al., 2019). McCarley et al. (2020) suggest
that distillation has so far worked better for GLUE
than for reading comprehension, and report good
results for QA from a combination of structured
pruning and task-specific distillation.
Quantization decreases BERT’s memory foot-
print through lowering the precision of its weights
(Shen et al., 2019; Zafrir et al., 2019). Note that | 9 | 9 | arxiv2_taclccby4_license.pdf |
Type | Model | Compression | Performance | Speedup | Architecture | Evaluation
- | BERT-base (Devlin et al., 2019) | ×1 | 100% | ×1 | BERT12 | All GLUE tasks, SQuAD
- | BERT-small | ×3.8 | 91% | - | BERT4† | All GLUE tasks
Distillation | DistilBERT (Sanh et al., 2019a) | ×1.5 | 90%§ | ×1.6 | BERT6 | All GLUE tasks, SQuAD
Distillation | BERT6-PKD (Sun et al., 2019a) | ×1.6 | 98% | ×1.9 | BERT6 | No WNLI, CoLA, STS-B; RACE
Distillation | BERT3-PKD (Sun et al., 2019a) | ×2.4 | 92% | ×3.7 | BERT3 | No WNLI, CoLA, STS-B; RACE
Distillation | Aguilar et al. (2019), Exp. 3 | ×1.6 | 93% | - | BERT6 | CoLA, MRPC, QQP, RTE
Distillation | BERT-48 (Zhao et al., 2019) | ×62 | 87% | ×77 | BERT12∗† | MNLI, MRPC, SST-2
Distillation | BERT-192 (Zhao et al., 2019) | ×5.7 | 93% | ×22 | BERT12∗† | MNLI, MRPC, SST-2
Distillation | TinyBERT (Jiao et al., 2019) | ×7.5 | 96% | ×9.4 | BERT4† | No WNLI; SQuAD
Distillation | MobileBERT (Sun et al., 2020) | ×4.3 | 100% | ×4 | BERT24† | No WNLI; SQuAD
Distillation | PD (Turc et al., 2019) | ×1.6 | 98% | ×2.5‡ | BERT6† | No WNLI, CoLA and STS-B
Distillation | WaLDORf (Tian et al., 2019) | ×4.4 | 93% | ×9 | BERT8†∥ | SQuAD
Distillation | MiniLM (Wang et al., 2020b) | ×1.65 | 99% | ×2 | BERT6 | No WNLI, STS-B, MNLI-mm; SQuAD
Distillation | MiniBERT (Tsai et al., 2019) | ×6∗∗ | 98% | ×27∗∗ | mBERT3† | CoNLL-18 POS and morphology
Distillation | BiLSTM-soft (Tang et al., 2019) | ×110 | 91% | ×434‡ | BiLSTM1 | MNLI, QQP, SST-2
Quantization | Q-BERT-MP (Shen et al., 2019) | ×13 | 98%¶ | - | BERT12 | MNLI, SST-2, CoNLL-03, SQuAD
Quantization | BERT-QAT (Zafrir et al., 2019) | ×4 | 99% | - | BERT12 | No WNLI, MNLI; SQuAD
Quantization | GOBO (Zadeh and Moshovos, 2020) | ×9.8 | 99% | - | BERT12 | MNLI
Pruning | McCarley et al. (2020), ff2 | ×2.2‡ | 98%‡ | ×1.9‡ | BERT24 | SQuAD, Natural Questions
Pruning | RPP (Guo et al., 2019) | ×1.7‡ | 99%‡ | - | BERT24 | No WNLI, STS-B; SQuAD
Pruning | Soft MvP (Sanh et al., 2020) | ×33 | 94%¶ | - | BERT12 | MNLI, QQP, SQuAD
Pruning | IMP (Chen et al., 2020), rewind 50% | ×1.4–2.5 | 94–100% | - | BERT12 | No MNLI-mm; SQuAD
Other | ALBERT-base (Lan et al., 2020b) | ×9 | 97% | - | BERT12† | MNLI, SST-2
Other | ALBERT-xxlarge (Lan et al., 2020b) | ×0.47 | 107% | - | BERT12† | MNLI, SST-2
Other | BERT-of-Theseus (Xu et al., 2020) | ×1.6 | 98% | ×1.9 | BERT6 | No WNLI
Other | PoWER-BERT (Goyal et al., 2020) | N/A | 99% | ×2–4.5 | BERT12 | No WNLI; RACE
Table 1: Comparison of BERT compression studies. Compression, performance retention, and inference-time speedup figures are given with respect to BERT-base, unless indicated otherwise. Performance retention is measured as a ratio of the average scores achieved by a given model and by BERT-base. The subscript in the architecture column reflects the number of layers used. ∗ Smaller vocabulary used. † The dimensionality of the hidden layers is reduced. ∥ Convolutional layers used. ‡ Compared to BERT-large. ∗∗ Compared to mBERT. § As reported in Jiao et al. (2019). ¶ In comparison to the dev set.
this strategy often requires compatible hardware.
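As one common recipe (a sketch of post-training dynamic quantization with standard PyTorch tooling, not of the cited methods, which use more elaborate schemes), the linear layers of a fine-tuned model can be converted to 8-bit integer weights in a single call:

```python
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# Store nn.Linear weights in int8; activations remain float and are quantized
# dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)
print(quantized.bert.encoder.layer[0].attention.self.query)
# e.g. DynamicQuantizedLinear(in_features=768, out_features=768, dtype=torch.qint8, ...)
```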
As discussed in section 6, individual self-
attention heads and BERT layers can be disabled
without significant drop in performance (Michel
et al., 2019; Kovaleva et al., 2019; Baan et al.,
2019). Pruning is a compression technique that
takes advantage of that fact, typically reducing the
amount of computation via zeroing out of certain
parts of the large model. In structured pruning,
architecture blocks are dropped, as in LayerDrop
(Fan et al., 2019). In unstructured, the weights in
the entire model are pruned irrespective of their lo-
cation, as in magnitude pruning (Chen et al., 2020)
or movement pruning (Sanh et al., 2020).
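Unstructured magnitude pruning can be reproduced with PyTorch's pruning utilities; the sketch below zeroes out the 30% smallest-magnitude weights of every linear layer (a generic recipe, not the iterative rewinding procedure of Chen et al. (2020)).

```python
import torch.nn as nn
import torch.nn.utils.prune as prune
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero the 30% of weights with the smallest absolute value in this layer.
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")   # bake the mask into the weights permanently

linear = [m for m in model.modules() if isinstance(m, nn.Linear)]
zeros = sum((m.weight == 0).sum().item() for m in linear)
total = sum(m.weight.numel() for m in linear)
print(f"{zeros / total:.1%} of linear weights are now exactly zero")
```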
Prasanna et al. (2020) and Chen et al. (2020)
explore BERT from the perspective of the lottery
ticket hypothesis (Frankle and Carbin, 2019), look-
ing specifically at the "winning" subnetworks in
pre-trained BERT. They independently find that
such subnetworks do exist, and that transferability
between subnetworks for different tasks varies.
If the ultimate goal of training BERT is compres-
sion, Li et al. (2020) recommend training larger
models and compressing them heavily rather than
compressing smaller models lightly.
Other techniques include decomposing BERT’s
embedding matrix into smaller matrices (Lan et al.,
2020a), progressive module replacing (Xu et al.,
2020) and dynamic elimination of intermediate en-
coder outputs (Goyal et al., 2020). See Ganesh et al.
(2020) for a more detailed discussion of compres-
sion methods.
6.3 Pruning and model analysis
There is a nascent discussion around pruning as a
model analysis technique. The basic idea is that
a compressed model a priori consists of elements
that are useful for prediction; therefore by finding
out what they do we may find out what the whole
network does. For instance, BERT has heads that
seem to encode frame-semantic relations, but dis-
abling them might not hurt downstream task per-
formance Kovaleva et al. (2019); this suggests that
this knowledge is not actually used.
For the base Transformer, Voita et al. (2019b)
identify the functions of self-attention heads and | 10 | 10 | arxiv2_taclccby4_license.pdf |
then check which of them survive the pruning, find-
ing that the syntactic and positional heads are the
last ones to go. For BERT, Prasanna et al. (2020)
go in the opposite direction: pruning on the basis of
importance scores, and interpreting the remaining
"good" subnetwork. With respect to self-attention
heads specifically, it does not seem to be the case
that only the heads that potentially encode non-
trivial linguistic patterns survive the pruning.
The models and methodology in these studies
differ, so the evidence is inconclusive. In particular,
Voita et al. (2019b) find that before pruning the
majority of heads are syntactic, and Prasanna et al.
(2020) – that the majority of heads do not have
potentially non-trivial attention patterns.
An important limitation of the current head and
layer ablation studies (Michel et al., 2019; Koval-
eva et al., 2019) is that they inherently assume
that certain knowledge is contained in heads/layers.
However, there is evidence of more diffuse rep-
resentations spread across the full network, such
as the gradual increase in accuracy on difficult se-
mantic parsing tasks (Tenney et al., 2019a) or the
absence of heads that would perform parsing "in
general" (Clark et al., 2019; Htut et al., 2019). If so,
ablating individual components harms the weight-
sharing mechanism. Conclusions from component
ablations are also problematic if the same informa-
tion is duplicated elsewhere in the network.
7 Directions for further research
BERTology has clearly come a long way, but it
is fair to say we still have more questions than
answers about how BERT works. In this section,
we list what we believe to be the most promising
directions for further research.
Benchmarks that require verbal reasoning.
While BERT enabled breakthroughs on many NLP
benchmarks, a growing list of analysis papers are
showing that its language skills are not as impres-
sive as it seems. In particular, it was shown to rely
on shallow heuristics in natural language inference
(McCoy et al., 2019b; Zellers et al., 2019; Jin et al.,
2020), reading comprehension (Si et al., 2019a;
Rogers et al., 2020; Sugawara et al., 2020; Si et al.,
2019b; Yogatama et al., 2019), argument reason-
ing comprehension (Niven and Kao, 2019), and
text classification (Jin et al., 2020). Such heuristics
can even be used to reconstruct a non-publicly-
available model (Krishna et al., 2020). As with
any optimization method, if there is a shortcut in
the data, we have no reason to expect BERT to not
learn it. But harder datasets that cannot be resolved
with shallow heuristics are unlikely to emerge if
their development is not as valued as modeling
work.
Benchmarks for the full range of linguistic
competence. While the language models seem to
acquire a great deal of knowledge about language,
we do not currently have comprehensive stress tests
for different aspects of linguistic knowledge. A
step in this direction is the "Checklist" behavioral
testing (Ribeiro et al., 2020), the best paper at ACL
2020. Ideally, such tests would measure not only
errors, but also sensitivity (Ettinger, 2019).
Developing methods to "teach" reasoning.
While large pre-trained models have a lot of knowl-
edge, they often fail if any reasoning needs to be
performed on top of the facts they possess (Tal-
mor et al., 2019, see also subsection 3.3). For in-
stance, Richardson et al. (2020) propose a method
to "teach" BERT quantification, conditionals, com-
paratives, and boolean coordination.
Learning what happens at inference time.
Most BERT analysis papers focus on different
probes of the model, with the goal to find what
the language model "knows". However, probing
studies have limitations (subsection 3.4), and to this
point, far fewer papers have focused on discovering
what knowledge actually gets used. Several promis-
ing directions are the "amnesic probing" (Elazar
et al., 2020), identifying features important for pre-
diction for a given task (Arkhangelskaia and Dutta,
2019), and pruning the model to remove the non-
important components (V oita et al., 2019b; Michel
et al., 2019; Prasanna et al., 2020).
8 Conclusion
In a little over a year, BERT has become a ubiq-
uitous baseline in NLP experiments and inspired
numerous studies analyzing the model and propos-
ing various improvements. The stream of papers
seems to be accelerating rather than slowing down,
and we hope that this survey helps the community
to focus on the biggest unresolved questions.
9 Acknowledgements
We thank the anonymous reviewers for their valu-
able feedback. This work is funded in part by
the NSF award number IIS-1844740 to Anna
Rumshisky. | 11 | 11 | arxiv2_taclccby4_license.pdf |
References
Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin
Yao, Xing Fan, and Edward Guo. 2019. Knowl-
edge Distillation from Internal Representations.
arXiv preprint arXiv:1910.03723.
Alan Akbik, Tanja Bergmann, and Roland Voll-
graf. 2019. Pooled Contextualized Embeddings
for Named Entity Recognition. In Proceedings
of the 2019 Conference of the North Ameri-
can Chapter of the Association for Computa-
tional Linguistics: Human Language Technolo-
gies, Volume 1 (Long and Short Papers), pages
724–728, Minneapolis, Minnesota. Association
for Computational Linguistics.
Yuki Arase and Jun’ichi Tsujii. 2019. Transfer
Fine-Tuning: A BERT Case Study. In Proceed-
ings of the 2019 Conference on Empirical Meth-
ods in Natural Language Processing and the
9th International Joint Conference on Natural
Language Processing (EMNLP-IJCNLP), pages
5393–5404, Hong Kong, China. Association for
Computational Linguistics.
Ekaterina Arkhangelskaia and Sourav Dutta. 2019.
Whatcha lookin’at? DeepLIFTing BERT’s At-
tention in Question Answering. arXiv preprint
arXiv:1910.06431.
Mikel Artetxe, Sebastian Ruder, and Dani Yo-
gatama. 2019. On the Cross-lingual Trans-
ferability of Monolingual Representations.
arXiv:1911.03310 [cs].
Matthias Aßenmacher and Christian Heumann.
2020. On the comparability of Pre-Trained Lan-
guage Models. arXiv:2001.00781 [cs, stat].
Joris Baan, Maartje ter Hoeve, Marlies van der
Wees, Anne Schuth, and Maarten de Rijke.
2019. Understanding Multi-Head Attention
in Abstractive Summarization. arXiv preprint
arXiv:1911.03898.
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke
Zettlemoyer, and Michael Auli. 2019. Cloze-
driven Pretraining of Self-Attention Networks.
In Proceedings of the 2019 Conference on Em-
pirical Methods in Natural Language Process-
ing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-
IJCNLP), pages 5360–5369, Hong Kong, China.
Association for Computational Linguistics.
He Bai, Peng Shi, Jimmy Lin, Luchen
Tan, Kun Xiong, Wen Gao, and Ming Li.
2020. SegaBERT: Pre-training of Segment-
aware BERT for Language Understanding.
arXiv:2004.14996 [cs].
Sriram Balasubramanian, Naman Jain, Gaurav Jin-
dal, Abhijeet Awasthi, and Sunita Sarawagi.
2020. What’s in a Name? Are BERT Named En-
tity Representations just as Good for any other
Name? In Proceedings of the 5th Workshop on
Representation Learning for NLP , pages 205–
214, Online. Association for Computational Lin-
guistics.
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang,
Nan Yang, Xiaodong Liu, Yu Wang, Songhao
Piao, Jianfeng Gao, Ming Zhou, and Hsiao-
Wuen Hon. 2020. UniLMv2: Pseudo-Masked
Language Models for Unified Language Model
Pre-Training. arXiv:2002.12804 [cs].
Yonatan Belinkov and James Glass. 2019. Anal-
ysis Methods in Neural Language Processing:
A Survey. Transactions of the Association for
Computational Linguistics, 7:49–72.
Eyal Ben-David, Carmel Rabinovitz, and Roi Re-
ichart. 2020. PERL: Pivot-based Domain Adap-
tation for Pre-trained Deep Contextualized Em-
bedding Models. arXiv:2006.09075 [cs].
Rishi Bommasani, Kelly Davis, and Claire Cardie.
2020. Interpreting Pretrained Contextualized
Representations via Reductions to Static Em-
beddings. In Proceedings of the 58th Annual
Meeting of the Association for Computational
Linguistics, pages 4758–4781.
Zied Bouraoui, Jose Camacho-Collados, and
Steven Schockaert. 2019. Inducing Relational
Knowledge from BERT. arXiv:1911.12753
[cs].
Samuel Broscheit. 2019. Investigating Entity
Knowledge in BERT with Simple Neural End-
To-End Entity Linking. In Proceedings of the
23rd Conference on Computational Natural Lan-
guage Learning (CoNLL), pages 677–685, Hong
Kong, China. Association for Computational
Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder,
Melanie Subbiah, Jared Kaplan, Prafulla Dhari-
wal, Arvind Neelakantan, Pranav Shyam, Girish
Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan,
Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christo-
pher Hesse, Mark Chen, Eric Sigler, Mateusz
Litwin, Scott Gray, Benjamin Chess, Jack Clark,
Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei.
2020. Language Models are Few-Shot Learners.
arXiv:2005.14165 [cs].
Gino Brunner, Yang Liu, Damian Pascual, Oliver
Richter, Massimiliano Ciaramita, and Roger
Wattenhofer. 2020. On Identifiability in Trans-
formers. In International Conference on Learn-
ing Representations.
Tianlong Chen, Jonathan Frankle, Shiyu Chang,
Sijia Liu, Yang Zhang, Zhangyang Wang, and
Michael Carbin. 2020. The Lottery Ticket
Hypothesis for Pre-trained BERT Networks.
arXiv:2007.12223 [cs, stat].
Xingyi Cheng, Weidi Xu, Kunlong Chen, Wei
Wang, Bin Bi, Ming Yan, Chen Wu, Luo Si, Wei
Chu, and Taifeng Wang. 2019. Symmetric Reg-
ularization based BERT for Pair-Wise Semantic
Reasoning. arXiv:1909.03405 [cs].
Kevin Clark, Urvashi Khandelwal, Omer Levy,
and Christopher D. Manning. 2019. What Does
BERT Look at? An Analysis of BERT’s Atten-
tion. In Proceedings of the 2019 ACL Workshop
BlackboxNLP: Analyzing and Interpreting Neu-
ral Networks for NLP, pages 276–286, Florence,
Italy. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and
Christopher D. Manning. 2020. ELECTRA: Pre-
Training Text Encoders as Discriminators Rather
Than Generators. In International Conference
on Learning Representations.
Stephane Clinchant, Kweon Woo Jung, and Vas-
silina Nikoulina. 2019. On the use of BERT
for Neural Machine Translation. In Proceedings
of the 3rd Workshop on Neural Generation and
Translation, pages 108–117, Hong Kong. Asso-
ciation for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman
Goyal, Vishrav Chaudhary, Guillaume Wen-
zek, Francisco Guzmán, Edouard Grave, Myle
Ott, Luke Zettlemoyer, and Veselin Stoyanov.
2019. Unsupervised Cross-Lingual Representa-
tion Learning at Scale. arXiv:1911.02116 [cs].
Gonçalo M. Correia, Vlad Niculae, and André F. T.
Martins. 2019. Adaptively Sparse Transform-
ers. In Proceedings of the 2019 Conference on
Empirical Methods in Natural Language Pro-
cessing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-
IJCNLP), pages 2174–2184, Hong Kong, China.
Association for Computational Linguistics.
Matt Crane. 2018. Questionable Answers in Ques-
tion Answering Research: Reproducibility and
Variability of Published Results. Transactions of
the Association for Computational Linguistics,
6:241–252.
Leyang Cui, Sijie Cheng, Yu Wu, and Yue Zhang.
2020. Does BERT Solve Commonsense Task via
Commonsense Knowledge? arXiv:2008.03945
[cs].
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin,
Ziqing Yang, Shijin Wang, and Guoping Hu.
2019. Pre-Training with Whole Word Masking
for Chinese BERT. arXiv:1906.08101 [cs].
Jeff Da and Jungo Kasai. 2019. Cracking the
Contextual Commonsense Code: Understand-
ing Commonsense Reasoning Aptitude of Deep
Contextual Representations. In Proceedings of
the First Workshop on Commonsense Inference
in Natural Language Processing , pages 1–12,
Hong Kong, China. Association for Computa-
tional Linguistics.
Joe Davison, Joshua Feldman, and Alexander Rush.
2019. Commonsense Knowledge Mining from
Pretrained Models. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 1173–1178, Hong
Kong, China. Association for Computational
Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training
of Deep Bidirectional Transformers for Lan-
guage Understanding. In Proceedings of the
2019 Conference of the North American Chapter
of the Association for Computational Linguis-
tics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 4171–4186.
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali
Farhadi, Hannaneh Hajishirzi, and Noah Smith.
2020. Fine-Tuning Pretrained Language Models:
Weight Initializations, Data Orders, and Early
Stopping. arXiv:2002.06305 [cs].
Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and
Yoav Goldberg. 2020. When Bert Forgets How
To POS: Amnesic Probing of Linguistic Proper-
ties and MLM Predictions. arXiv:2006.00995
[cs].
Kawin Ethayarajh. 2019. How Contextual are
Contextualized Word Representations? Compar-
ing the Geometry of BERT, ELMo, and GPT-2
Embeddings. In Proceedings of the 2019 Con-
ference on Empirical Methods in Natural Lan-
guage Processing and the 9th International Joint
Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 55–65, Hong Kong,
China. Association for Computational Linguis-
tics.
Allyson Ettinger. 2019. What BERT is
not: Lessons from a new suite of psy-
cholinguistic diagnostics for language models.
arXiv:1907.13528 [cs].
Angela Fan, Edouard Grave, and Armand Joulin.
2019. Reducing Transformer Depth on Demand
with Structured Dropout. In International Con-
ference on Learning Representations.
Maxwell Forbes, Ari Holtzman, and Yejin Choi.
2019. Do Neural Language Representations
Learn Physical Commonsense? In Proceedings
of the 41st Annual Conference of the Cognitive
Science Society (CogSci 2019), page 7.
Jonathan Frankle and Michael Carbin. 2019. The
Lottery Ticket Hypothesis: Finding Sparse,
Trainable Neural Networks. In International
Conference on Learning Representations.
Prakhar Ganesh, Yao Chen, Xin Lou, Moham-
mad Ali Khan, Yin Yang, Deming Chen, Mari-
anne Winslett, Hassan Sajjad, and Preslav Nakov.
2020. Compressing large-scale transformer-
based models: A case study on BERT. arXiv
preprint arXiv:2002.11985.
Siddhant Garg, Thuy Vu, and Alessandro Moschitti.
2020. TANDA: Transfer and Adapt Pre-Trained
Transformer Models for Answer Sentence Selec-
tion. In AAAI.
Michael Glass, Alfio Gliozzo, Rishav Chakravarti,
Anthony Ferritto, Lin Pan, G P Shrivatsa Bhar-
gav, Dinesh Garg, and Avi Sil. 2020. Span
Selection Pre-training for Question Answering.
In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics,
pages 2773–2782, Online. Association for Com-
putational Linguistics.
Goran Glavaš and Ivan Vulić. 2020. Is Super-
vised Syntactic Parsing Beneficial for Language
Understanding? An Empirical Investigation.
arXiv:2008.06788 [cs].
Adele Goldberg. 2006. Constructions at Work: The
Nature of Generalization in Language. Oxford
University Press, USA.
Yoav Goldberg. 2019. Assessing BERT’s syntactic
abilities. arXiv preprint arXiv:1901.05287.
Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei
Wang, and Tieyan Liu. 2019. Efficient training
of BERT by progressively stacking. In Interna-
tional Conference on Machine Learning, pages
2337–2346.
Mitchell A Gordon, Kevin Duh, and Nicholas An-
drews. 2020. Compressing BERT: Studying the
effects of weight pruning on transfer learning.
arXiv preprint arXiv:2002.08307.
Saurabh Goyal, Anamitra Roy Choudhary, Venkate-
san Chakaravarthy, Saurabh ManishRaje, Yogish
Sabharwal, and Ashish Verma. 2020. Power-
bert: Accelerating BERT inference for classifi-
cation tasks. arXiv preprint arXiv:2001.08950.
Fu-Ming Guo, Sijia Liu, Finlay S. Mungall, Xue
Lin, and Yanzhi Wang. 2019. Reweighted Prox-
imal Pruning for Large-Scale Language Repre-
sentation. arXiv:1909.12486 [cs, stat].
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pa-
supat, and Ming-Wei Chang. 2020. REALM:
Retrieval-Augmented Language Model Pre-
Training. arXiv:2002.08909 [cs].
Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019.
Visualizing and Understanding the Effective-
ness of BERT. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 4143–4152, Hong
Kong, China. Association for Computational
Linguistics.
John Hewitt and Christopher D. Manning. 2019.
A Structural Probe for Finding Syntax in Word
Representations. In Proceedings of the 2019
Conference of the North American Chapter of
the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
and Short Papers), pages 4129–4138.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean.
2014. Distilling the Knowledge in a Neural Net-
work. In Deep Learning and Representation
Learning Workshop: NIPS 2014.
Benjamin Hoover, Hendrik Strobelt, and Sebastian
Gehrmann. 2019. exBERT: A Visual Analy-
sis Tool to Explore Learned Representations in
Transformers Models. arXiv:1910.05276 [cs].
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzeb-
ski, Bruna Morrone, Quentin de Laroussilhe, An-
drea Gesmundo, Mona Attariyan, and Sylvain
Gelly. 2019. Parameter-Efficient Transfer Learn-
ing for NLP. arXiv:1902.00751 [cs, stat].
Phu Mon Htut, Jason Phang, Shikha Bordia, and
Samuel R Bowman. 2019. Do attention heads
in BERT track syntactic dependencies? arXiv
preprint arXiv:1911.12246.
Sarthak Jain and Byron C. Wallace. 2019. Atten-
tion is not Explanation. In Proceedings of the
2019 Conference of the North American Chapter
of the Association for Computational Linguis-
tics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 3543–3556.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does BERT learn about the structure
of language? In 57th Annual Meeting of the
Association for Computational Linguistics (ACL),
Florence, Italy.
Haoming Jiang, Pengcheng He, Weizhu Chen, Xi-
aodong Liu, Jianfeng Gao, and Tuo Zhao. 2019a.
SMART: Robust and Efficient Fine-Tuning for
Pre-trained Natural Language Models through
Principled Regularized Optimization. arXiv
preprint arXiv:1911.03437.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Gra-
ham Neubig. 2019b. How Can We Know What
Language Models Know? arXiv:1911.12543
[cs].
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang,
Xiao Chen, Linlin Li, Fang Wang, and Qun
Liu. 2019. TinyBERT: Distilling BERT for nat-
ural language understanding. arXiv preprint
arXiv:1909.10351.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter
Szolovits. 2020. Is BERT Really Robust? A
Strong Baseline for Natural Language Attack
on Text Classification and Entailment. In AAAI
2020.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S.
Weld, Luke Zettlemoyer, and Omer Levy. 2020.
SpanBERT: Improving Pre-Training by Repre-
senting and Predicting Spans. Transactions of
the Association for Computational Linguistics,
8:64–77.
Wei-Tsung Kao, Tsung-Han Wu, Po-Han Chi,
Chun-Cheng Hsieh, and Hung-Yi Lee. 2020.
Further boosting BERT-based models by du-
plicating existing layers: Some intriguing
phenomena inside BERT. arXiv preprint
arXiv:2001.09309.
Taeuk Kim, Jihun Choi, Daniel Edmiston, and
Sang-goo Lee. 2020. Are pre-trained language
models aware of phrases? simple but strong
baselines for grammar induction. In ICLR 2020.
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi,
and Kentaro Inui. 2020. Attention Module is
Not Only a Weight: Analyzing Transformers
with Vector Norms. arXiv:2004.10102 [cs].
Dan Kondratyuk and Milan Straka. 2019. 75 Lan-
guages, 1 Model: Parsing Universal Dependen-
cies Universally. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 2779–2795, Hong
Kong, China. Association for Computational
Linguistics.
Lingpeng Kong, Cyprien de Masson d’Autume, Lei
Yu, Wang Ling, Zihang Dai, and Dani Yogatama.
2019. A mutual information maximization per-
spective of language representation learning. In
International Conference on Learning Represen-
tations.
Olga Kovaleva, Alexey Romanov, Anna Rogers,
and Anna Rumshisky. 2019. Revealing the Dark
Secrets of BERT. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 4356–4365, Hong
Kong, China. Association for Computational
Linguistics.
Kalpesh Krishna, Gaurav Singh Tomar, Ankur P.
Parikh, Nicolas Papernot, and Mohit Iyyer. 2020.
Thieves on Sesame Street! Model Extraction of
BERT-Based APIs. In ICLR 2020.
Varun Kumar, Ashutosh Choudhary, and Eunah
Cho. 2020. Data Augmentation using Pre-
Trained Transformer Models. arXiv:2003.02245
[cs].
Ilia Kuznetsov and Iryna Gurevych. 2020. A Mat-
ter of Framing: The Impact of Linguistic For-
malism on Probing Results. arXiv:2004.14999
[cs].
Guillaume Lample and Alexis Conneau. 2019.
Cross-Lingual Language Model Pretraining.
arXiv:1901.07291 [cs].
Zhenzhong Lan, Mingda Chen, Sebastian Good-
man, Kevin Gimpel, Piyush Sharma, and Radu
Soricut. 2020a. ALBERT: A Lite BERT for
Self-Supervised Learning of Language Repre-
sentations. In ICLR.
Zhenzhong Lan, Mingda Chen, Sebastian Good-
man, Kevin Gimpel, Piyush Sharma, and Radu
Soricut. 2020b. ALBERT: A Lite BERT for
Self-supervised Learning of Language Represen-
tations. In ICLR 2020.
Cheolhyoung Lee, Kyunghyun Cho, and Wanmo
Kang. 2019. Mixout: Effective regularization to
finetune large-scale pretrained language models.
arXiv preprint arXiv:1909.11299.
Mike Lewis, Yinhan Liu, Naman Goyal, Mar-
jan Ghazvininejad, Abdelrahman Mohamed,
Omer Levy, Ves Stoyanov, and Luke Zettle-
moyer. 2019. BART: Denoising Sequence-to-
Sequence Pre-Training for Natural Language
Generation, Translation, and Comprehension.
arXiv:1910.13461 [cs, stat].
Changmao Li and Jinho D. Choi. 2020. Transform-
ers to Learn Hierarchical Contexts in Multiparty
Dialogue for Span-based Question Answering.
In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics,
pages 5709–5714, Online. Association for Com-
putational Linguistics.
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin,
Kurt Keutzer, Dan Klein, and Joseph E Gonzalez.
2020. Train large, then compress: Rethinking
model size for efficient training and inference of
transformers. arXiv preprint arXiv:2002.11794.
Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019.
Open Sesame: Getting inside BERT’s Linguistic
Knowledge. In Proceedings of the 2019 ACL
Workshop BlackboxNLP: Analyzing and Inter-
preting Neural Networks for NLP , pages 241–
253.
Nelson F. Liu, Matt Gardner, Yonatan Belinkov,
Matthew E. Peters, and Noah A. Smith. 2019a.
Linguistic Knowledge and Transferability of
Contextual Representations. In Proceedings
of the 2019 Conference of the North Ameri-
can Chapter of the Association for Computa-
tional Linguistics: Human Language Technolo-
gies, Volume 1 (Long and Short Papers), pages
1073–1094, Minneapolis, Minnesota. Associa-
tion for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du,
Mandar Joshi, Danqi Chen, Omer Levy, Mike
Lewis, Luke Zettlemoyer, and Veselin Stoyanov.
2019b. RoBERTa: A Robustly Optimized BERT
Pretraining Approach. arXiv:1907.11692 [cs].
Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh
Nallapati, and Bing Xiang. 2019. Universal Text
Representation from BERT: An Empirical Study.
arXiv:1910.07973 [cs].
Christopher D. Manning, Kevin Clark, John He-
witt, Urvashi Khandelwal, and Omer Levy. 2020.
Emergent linguistic structure in artificial neural
networks trained by self-supervision. Proceed-
ings of the National Academy of Sciences, page
201907367.
Chandler May, Alex Wang, Shikha Bordia,
Samuel R. Bowman, and Rachel Rudinger. 2019.
On Measuring Social Biases in Sentence En-
coders. In Proceedings of the 2019 Confer-
ence of the North American Chapter of the As-
sociation for Computational Linguistics: Hu-
man Language Technologies, Volume 1 (Long
and Short Papers), pages 622–628, Minneapo-
lis, Minnesota. Association for Computational
Linguistics.
J. S. McCarley, Rishav Chakravarti, and Avirup
Sil. 2020. Structured Pruning of a BERT-based
Question Answering Model. arXiv:1910.06360
[cs].
R. Thomas McCoy, Tal Linzen, Ewan Dunbar, and
Paul Smolensky. 2019a. RNNs implicitly imple-
ment tensor-product representations. In Interna-
tional Conference on Learning Representations.
Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019b.
Right for the Wrong Reasons: Diagnosing Syn-
tactic Heuristics in Natural Language Inference.
In Proceedings of the 57th Annual Meeting of
the Association for Computational Linguistics,
pages 3428–3448, Florence, Italy. Association
for Computational Linguistics.
Alessio Miaschi and Felice Dell’Orletta. 2020.
Contextual and Non-Contextual Word Embed-
dings: An in-depth Linguistic Investigation. In
Proceedings of the 5th Workshop on Representa-
tion Learning for NLP, pages 110–119.
Paul Michel, Omer Levy, and Graham Neubig.
2019. Are Sixteen Heads Really Better than
One? Advances in Neural Information Process-
ing Systems 32 (NIPS 2019).
Timothee Mickus, Denis Paperno, Mathieu Con-
stant, and Kees van Deemeter. 2019. What do
you mean, BERT? assessing BERT as a dis-
tributional semantics model. arXiv preprint
arXiv:1911.05758.
Microsoft. 2020. Turing-NLG: A 17-billion-
parameter language model by Microsoft.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S.
Corrado, and Jeff Dean. 2013. Distributed repre-
sentations of words and phrases and their compo-
sitionality. In Advances in Neural Information
Processing Systems 26 (NIPS 2013), pages 3111–
3119.
Jiaqi Mu and Pramod Viswanath. 2018. All-but-
the-top: Simple and effective postprocessing for
word representations. In International Confer-
ence on Learning Representations.
Timothy Niven and Hung-Yu Kao. 2019. Probing
Neural Network Comprehension of Natural Lan-
guage Arguments. In Proceedings of the 57th
Annual Meeting of the Association for Computa-
tional Linguistics, pages 4658–4664, Florence,
Italy. Association for Computational Linguistics.
Matthew E. Peters, Mark Neumann, Robert Logan,
Roy Schwartz, Vidur Joshi, Sameer Singh, and
Noah A. Smith. 2019a. Knowledge Enhanced
Contextual Word Representations. In Proceed-
ings of the 2019 Conference on Empirical Meth-
ods in Natural Language Processing and the
9th International Joint Conference on Natural
Language Processing (EMNLP-IJCNLP), pages
43–54, Hong Kong, China. Association for Com-
putational Linguistics.
Matthew E. Peters, Sebastian Ruder, and Noah A.
Smith. 2019b. To Tune or Not to Tune? Adapt-
ing Pretrained Representations to Diverse Tasks.
In Proceedings of the 4th Workshop on Repre-
sentation Learning for NLP (RepL4NLP-2019),
pages 7–14, Florence, Italy. Association for
Computational Linguistics.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel,
Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and
Alexander Miller. 2019. Language Models as
Knowledge Bases? In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 2463–2473, Hong
Kong, China. Association for Computational
Linguistics.
Jason Phang, Thibault Févry, and Samuel R. Bow-
man. 2019. Sentence Encoders on STILTs: Sup-
plementary Training on Intermediate Labeled-
Data Tasks. arXiv:1811.01088 [cs].
Tiago Pimentel, Josef Valvoda, Rowan Hall Maud-
slay, Ran Zmigrod, Adina Williams, and Ryan
Cotterell. 2020. Information-Theoretic Probing
for Linguistic Structure. arXiv:2004.03061 [cs].
Nina Poerner, Ulli Waltinger, and Hinrich Schütze.
2019. BERT is not a knowledge base
(yet): Factual knowledge vs. name-based rea-
soning in unsupervised qa. arXiv preprint
arXiv:1911.03681.
Sai Prasanna, Anna Rogers, and Anna Rumshisky.
2020. When BERT Plays the Lottery, All Tick-
ets Are Winning. In Proceedings of the 2020
Conference on Empirical Methods in Natural
Language Processing, Online. Association for
Computational Linguistics.
Ofir Press, Noah A. Smith, and Omer Levy. 2020.
Improving Transformer Models by Reordering
their Sublayers. In Proceedings of the 58th An-
nual Meeting of the Association for Computa-
tional Linguistics, pages 2996–3005, Online. As-
sociation for Computational Linguistics.
Yada Pruksachatkun, Jason Phang, Haokun Liu,
Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe
Pang, Clara Vania, Katharina Kann, and
Samuel R. Bowman. 2020. Intermediate-Task
Transfer Learning with Pretrained Language
Models: When and Why Does It Work? In Pro-
ceedings of the 58th Annual Meeting of the As-
sociation for Computational Linguistics, pages
5231–5247, Online. Association for Computa-
tional Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts,
Katherine Lee, Sharan Narang, Michael Matena,
Yanqi Zhou, Wei Li, and Peter J. Liu.
2019. Exploring the Limits of Transfer Learn-
ing with a Unified Text-to-Text Transformer.
arXiv:1910.10683 [cs, stat].
Alessandro Raganato, Yves Scherrer, and Jörg
Tiedemann. 2020. Fixed Encoder Self-Attention
Patterns in Transformer-Based Machine Transla-
tion. arXiv:2002.10260 [cs].
Alessandro Raganato and Jörg Tiedemann. 2018.
An Analysis of Encoder Representations in
Transformer-Based Machine Translation. In Pro-
ceedings of the 2018 EMNLP Workshop Black-
boxNLP: Analyzing and Interpreting Neural Net-
works for NLP, pages 287–297, Brussels, Bel-
gium. Association for Computational Linguis-
tics.
Marco Tulio Ribeiro, Tongshuang Wu, Carlos
Guestrin, and Sameer Singh. 2020. Beyond Ac-
curacy: Behavioral Testing of NLP Models with
CheckList. In Proceedings of the 58th Annual
Meeting of the Association for Computational
Linguistics, pages 4902–4912, Online. Associa-
tion for Computational Linguistics.
Kyle Richardson, Hai Hu, Lawrence S. Moss, and
Ashish Sabharwal. 2020. Probing Natural Lan-
guage Inference Models through Semantic Frag-
ments. In AAAI 2020.
Kyle Richardson and Ashish Sabharwal. 2019.
What Does My QA Model Know? Devis-
ing Controlled Probes using Expert Knowledge.
arXiv:1912.13337 [cs].
Adam Roberts, Colin Raffel, and Noam Shazeer.
2020. How Much Knowledge Can You Pack Into
the Parameters of a Language Model? arXiv
preprint arXiv:2002.08910.
Anna Rogers, Olga Kovaleva, Matthew Downey,
and Anna Rumshisky. 2020. Getting Closer to
AI Complete Question Answering: A Set of Pre-
requisite Real Tasks. In AAAI, page 11.
Rudolf Rosa and David Mareček. 2019. Induc-
ing syntactic trees from BERT representations.
arXiv preprint arXiv:1906.11511.
Victor Sanh, Lysandre Debut, Julien Chaumond,
and Thomas Wolf. 2019a. Distilbert, a distilled
version of BERT: smaller, faster, cheaper and
lighter. arXiv preprint arXiv:1910.01108.
Victor Sanh, Lysandre Debut, Julien Chaumond,
and Thomas Wolf. 2019b. DistilBERT, a dis-
tilled version of BERT: Smaller, faster, cheaper
and lighter. In 5th Workshop on Energy Efficient
Machine Learning and Cognitive Computing -
NeurIPS 2019.
Victor Sanh, Thomas Wolf, and Alexander M.
Rush. 2020. Movement Pruning: Adaptive Spar-
sity by Fine-Tuning. arXiv:2005.07683 [cs].
Timo Schick and Hinrich Schütze. 2020.
BERTRAM: Improved Word Embeddings
Have Big Impact on Contextualized Model
Performance. In Proceedings of the 58th
Annual Meeting of the Association for Compu-
tational Linguistics, pages 3996–4007, Online.
Association for Computational Linguistics.
Florian Schmidt and Thomas Hofmann. 2020.
BERT as a Teacher: Contextual Embeddings
for Sequence-Level Reward. arXiv preprint
arXiv:2003.02738.
Roy Schwartz, Jesse Dodge, Noah A. Smith,
and Oren Etzioni. 2019. Green AI.
arXiv:1907.10597 [cs, stat].
Sofia Serrano and Noah A. Smith. 2019. Is Atten-
tion Interpretable? arXiv:1906.03731 [cs].
Sheng Shen, Zhen Dong, Jiayu Ye, Linjian Ma,
Zhewei Yao, Amir Gholami, Michael W Ma-
honey, and Kurt Keutzer. 2019. Q-BERT: Hes-
sian Based Ultra Low Precision Quantization of
BERT. arXiv preprint arXiv:1909.05840.
Chenglei Si, Shuohang Wang, Min-Yen Kan, and
Jing Jiang. 2019a. What does BERT learn
from multiple-choice reading comprehension
datasets? arXiv preprint arXiv:1910.12391.
Chenglei Si, Shuohang Wang, Min-Yen Kan, and
Jing Jiang. 2019b. What does BERT Learn
from Multiple-Choice Reading Comprehension
Datasets? arXiv:1910.12391 [cs].
Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and
Tie-Yan Liu. 2020. MPNet: Masked and Per-
muted Pre-training for Language Understanding.
arXiv:2004.09297 [cs].
Asa Cooper Stickland and Iain Murray. 2019.
BERT and PALs: Projected Attention Layers for
Efficient Adaptation in Multi-Task Learning. In
International Conference on Machine Learning,
pages 5986–5995.
Emma Strubell, Ananya Ganesh, and Andrew Mc-
Callum. 2019. Energy and Policy Considera-
tions for Deep Learning in NLP. In ACL 2019.
Ta-Chun Su and Hsiang-Chih Cheng. 2019.
SesameBERT: Attention for Anywhere.
arXiv:1910.03176 [cs].
Saku Sugawara, Pontus Stenetorp, Kentaro Inui,
and Akiko Aizawa. 2020. Assessing the Bench-
marking Capacity of Machine Reading Compre-
hension Datasets. In AAAI.
Siqi Sun, Yu Cheng, Zhe Gan, and Jingjing Liu.
2019a. Patient Knowledge Distillation for BERT
Model Compression. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 4314–4323.
Yu Sun, Shuohuan Wang, Yukun Li, Shikun Feng,
Xuyi Chen, Han Zhang, Xin Tian, Danxiang
Zhu, Hao Tian, and Hua Wu. 2019b. ERNIE:
Enhanced Representation through Knowledge
Integration. arXiv:1904.09223 [cs].
Yu Sun, Shuohuan Wang, Yukun Li, Shikun
Feng, Hao Tian, Hua Wu, and Haifeng Wang.
2019c. ERNIE 2.0: A Continual Pre-
Training Framework for Language Understand-
ing. arXiv:1907.12412 [cs].
Zhiqing Sun, Hongkun Yu, Xiaodan Song, Ren-
jie Liu, Yiming Yang, and Denny Zhou. 2020.
MobileBERT: Task-Agnostic Compression of
BERT for Resource Limited Devices.
Dhanasekar Sundararaman, Vivek Subramanian,
Guoyin Wang, Shijing Si, Dinghan Shen, Dong
Wang, and Lawrence Carin. 2019. Syntax-
Infused Transformer and BERT models for Ma-
chine Translation and Natural Language Under-
standing. arXiv:1911.06156 [cs, stat].
Alon Talmor, Yanai Elazar, Yoav Goldberg,
and Jonathan Berant. 2019. oLMpics – On
what Language Model Pre-Training Captures.
arXiv:1912.13283 [cs].
Hirotaka Tanaka, Hiroyuki Shinnou, Rui Cao, Jing
Bai, and Wen Ma. 2020. Document Classifica-
tion by Word Embeddings of BERT. In Compu-
tational Linguistics, Communications in Com-
puter and Information Science, pages 145–154,
Singapore. Springer.
Raphael Tang, Yao Lu, Linqing Liu, Lili Mou,
Olga Vechtomova, and Jimmy Lin. 2019. Dis-
tilling Task-Specific Knowledge from BERT
into Simple Neural Networks. arXiv preprint
arXiv:1903.12136.
Ian Tenney, Dipanjan Das, and Ellie Pavlick. 2019a.
BERT Rediscovers the Classical NLP Pipeline.
In Proceedings of the 57th Annual Meeting of
the Association for Computational Linguistics,
pages 4593–4601.
Ian Tenney, Patrick Xia, Berlin Chen, Alex Wang,
Adam Poliak, R. Thomas McCoy, Najoung Kim,
Benjamin Van Durme, Samuel R. Bowman, Di-
panjan Das, and Ellie Pavlick. 2019b. What do
you learn from context? Probing for sentence
structure in contextualized word representations.
In International Conference on Learning Repre-
sentations.
James Yi Tian, Alexander P Kreuzer, Pai-Hung
Chen, and Hans-Martin Will. 2019. WaL-
DORf: Wasteless Language-model Distillation
On Reading-comprehension. arXiv preprint
arXiv:1912.06638.
Shubham Toshniwal, Haoyue Shi, Bowen Shi,
Lingyu Gao, Karen Livescu, and Kevin Gim-
pel. 2020. A Cross-Task Analysis of Text Span
Representations. In Proceedings of the 5th Work-
shop on Representation Learning for NLP, pages
166–176, Online. Association for Computational
Linguistics.
Henry Tsai, Jason Riesa, Melvin Johnson, Naveen
Arivazhagan, Xin Li, and Amelia Archer. 2019.
Small and Practical BERT Models for Sequence
Labeling. arXiv preprint arXiv:1909.00100.
Iulia Turc, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. Well-Read Students
Learn Better: The Impact of Student Initializa-
tion on Knowledge Distillation. arXiv preprint
arXiv:1908.08962.
Marten van Schijndel, Aaron Mueller, and Tal
Linzen. 2019. Quantity doesn’t buy quality syn-
tax with neural language models. In Proceed-
ings of the 2019 Conference on Empirical Meth-
ods in Natural Language Processing and the
9th International Joint Conference on Natural
Language Processing (EMNLP-IJCNLP), pages
5831–5837, Hong Kong, China. Association for
Computational Linguistics.
Ashish Vaswani, Noam Shazeer, Niki Parmar,
Jakob Uszkoreit, Llion Jones, Aidan N Gomez,
Łukasz Kaiser, and Illia Polosukhin. 2017. At-
tention is All you Need. In Advances in neu-
ral information processing systems, pages 5998–
6008.
Jesse Vig. 2019. Visualizing Attention in
Transformer-Based Language Representation
Models. arXiv:1904.02679 [cs, stat].
Jesse Vig and Yonatan Belinkov. 2019. Analyzing
the Structure of Attention in a Transformer Lan-
guage Model. In Proceedings of the 2019 ACL
Workshop BlackboxNLP: Analyzing and Inter-
preting Neural Networks for NLP, pages 63–76,
Florence, Italy. Association for Computational
Linguistics.
David Vilares, Michalina Strzyz, Anders Søgaard,
and Carlos Gómez-Rodríguez. 2020. Parsing as
pretraining. In Thirty-Fourth AAAI Conference
on Artificial Intelligence (AAAI-20).
Elena Voita, Rico Sennrich, and Ivan Titov. 2019a.
The Bottom-up Evolution of Representations in
the Transformer: A Study with Machine Transla-
tion and Language Modeling Objectives. In Pro-
ceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and
the 9th International Joint Conference on Nat-
ural Language Processing (EMNLP-IJCNLP),
pages 4387–4397.
Elena Voita, David Talbot, Fedor Moiseev, Rico
Sennrich, and Ivan Titov. 2019b. Analyzing
Multi-Head Self-Attention: Specialized Heads
Do the Heavy Lifting, the Rest Can Be Pruned.
arXiv preprint arXiv:1905.09418.
Elena Voita and Ivan Titov. 2020. Information-
Theoretic Probing with Minimum Description
Length. arXiv:2003.12298 [cs].
Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gard-
ner, and Sameer Singh. 2019a. Universal Ad-
versarial Triggers for Attacking and Analyzing
NLP. In Proceedings of the 2019 Conference
on Empirical Methods in Natural Language Pro-
cessing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-
IJCNLP), pages 2153–2162, Hong Kong, China.
Association for Computational Linguistics.
Eric Wallace, Yizhong Wang, Sujian Li, Sameer
Singh, and Matt Gardner. 2019b. Do NLP Mod-
els Know Numbers? Probing Numeracy in Em-
beddings. arXiv preprint arXiv:1909.07940.
Alex Wang, Amapreet Singh, Julian Michael, Felix
Hill, Omer Levy, and Samuel R. Bowman. 2018.
GLUE: A Multi-Task Benchmark and Analysis
Platform for Natural Language Understanding.
In Proceedings of the 2018 EMNLP Workshop
BlackboxNLP: Analyzing and Interpreting Neu-
ral Networks for NLP, pages 353–355, Brussels,
Belgium. Association for Computational Lin-
guistics.
Ruize Wang, Duyu Tang, Nan Duan, Zhongyu
Wei, Xuanjing Huang, Jianshu ji, Guihong Cao,
Daxin Jiang, and Ming Zhou. 2020a. K-Adapter:
Infusing Knowledge into Pre-Trained Models
with Adapters. arXiv:2002.01808 [cs].
Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi
Bao, Liwei Peng, and Luo Si. 2019a. Struct-
BERT: Incorporating Language Structures into
Pre-Training for Deep Language Understanding.
arXiv:1908.04577 [cs].
Wenhui Wang, Furu Wei, Li Dong, Hangbo
Bao, Nan Yang, and Ming Zhou. 2020b.
MiniLM: Deep Self-Attention Distillation for
Task-Agnostic Compression of Pre-Trained
Transformers. arXiv preprint arXiv:2002.10957.
Xiaozhi Wang, Tianyu Gao, Zhaocheng Zhu,
Zhiyuan Liu, Juanzi Li, and Jian Tang. 2020c.
KEPLER: A Unified Model for Knowledge Em-
bedding and Pre-trained Language Representa-
tion. arXiv:1911.06136 [cs].
Yile Wang, Leyang Cui, and Yue Zhang. 2020d.
How Can BERT Help Lexical Semantics Tasks?
arXiv:1911.02929 [cs].
Zihan Wang, Stephen Mayhew, Dan Roth, et al.
2019b. Cross-Lingual Ability of Multilingual
BERT: An Empirical Study. arXiv preprint
arXiv:1912.07840.
Alex Warstadt and Samuel R. Bowman. 2020. Can
neural networks acquire a structural bias from
raw linguistic data? In Proceedings of the 42nd
Annual Virtual Meeting of the Cognitive Science
Society, Online.
Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng,
Hagen Blix, Yining Nie, Anna Alsop, Shikha
Bordia, Haokun Liu, Alicia Parrish, et al. 2019.
Investigating BERT’s Knowledge of Language:
Five Analysis Methods with NPIs. In Proceed-
ings of the 2019 Conference on Empirical Meth-
ods in Natural Language Processing and the
9th International Joint Conference on Natural
Language Processing (EMNLP-IJCNLP), pages
2870–2880.
Gregor Wiedemann, Steffen Remus, Avi Chawla,
and Chris Biemann. 2019. Does BERT Make
Any Sense? Interpretable Word Sense Dis-
ambiguation with Contextualized Embeddings.
arXiv preprint arXiv:1909.10430.
Sarah Wiegreffe and Yuval Pinter. 2019. Atten-
tion is not not Explanation. In Proceedings of
the 2019 Conference on Empirical Methods in
Natural Language Processing and the 9th In-
ternational Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 11–
20, Hong Kong, China. Association for Compu-
tational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien
Chaumond, Clement Delangue, Anthony Moi,
Pierric Cistac, Tim Rault, Rémi Louf, Morgan
Funtowicz, and Jamie Brew. 2020. Hugging-
Face’s Transformers: State-of-the-Art Natural
Language Processing. arXiv:1910.03771 [cs].
Felix Wu, Angela Fan, Alexei Baevski, Yann
Dauphin, and Michael Auli. 2019a. Pay Less At-
tention with Lightweight and Dynamic Convolu-
tions. In International Conference on Learning
Representations.
Xing Wu, Shangwen Lv, Liangjun Zang, Jizhong
Han, and Songlin Hu. 2019b. Conditional BERT
Contextual Augmentation. In ICCS 2019: Com-
putational Science – ICCS 2019 , pages 84–95.
Springer.
Yonghui Wu, Mike Schuster, Zhifeng Chen,
Quoc V Le, Mohammad Norouzi, Wolfgang
Macherey, Maxim Krikun, Yuan Cao, Qin Gao,
Klaus Macherey, et al. 2016. Google’s Neural
Machine Translation System: Bridging the Gap
between Human and Machine Translation. arXiv
preprint arXiv:1609.08144.
Zhiyong Wu, Yun Chen, Ben Kao, and Qun Liu.
2020. Perturbed Masking: Parameter-free Prob-
ing for Analyzing and Interpreting BERT. In
Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics,
pages 4166–4176, Online. Association for Com-
putational Linguistics.
Canwen Xu, Wangchunshu Zhou, Tao Ge, Furu
Wei, and Ming Zhou. 2020. BERT-of-Theseus:
Compressing BERT by Progressive Module Re-
placing. arXiv preprint arXiv:2002.02925.
Junjie Yang and Hai Zhao. 2019. Deepening Hid-
den Representations from Pre-Trained Language
Models for Natural Language Understanding.
arXiv:1911.01940 [cs].
Zhilin Yang, Zihang Dai, Yiming Yang, Jaime
Carbonell, Ruslan Salakhutdinov, and Quoc V.
Le. 2019. XLNet: Generalized Autoregres-
sive Pretraining for Language Understanding.
arXiv:1906.08237 [cs].
Pengcheng Yin, Graham Neubig, Wen-tau Yih, and
Sebastian Riedel. 2020. TaBERT: Pretraining
for Joint Understanding of Textual and Tabular
Data. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Lin-
guistics, pages 8413–8426, Online. Association
for Computational Linguistics.
Dani Yogatama, Cyprien de Masson d’Autume,
Jerome Connor, Tomas Kocisky, Mike
Chrzanowski, Lingpeng Kong, Angeliki Lazari-
dou, Wang Ling, Lei Yu, Chris Dyer, and Phil
Blunsom. 2019. Learning and Evaluating Gen-
eral Linguistic Intelligence. arXiv:1901.11373
[cs, stat].
Yang You, Jing Li, Sashank Reddi, Jonathan Hseu,
Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan
Song, James Demmel, and Cho-Jui Hsieh. 2019.
Large Batch Optimization for Deep Learning:
Training BERT in 76 Minutes. arXiv preprint
arXiv:1904.00962, 1(5).
Ali Hadi Zadeh and Andreas Moshovos. 2020.
GOBO: Quantizing Attention-Based NLP Mod-
els for Low Latency and Energy Efficient Infer-
ence. arXiv:2005.03842 [cs, stat].
Ofir Zafrir, Guy Boudoukh, Peter Izsak, and Moshe
Wasserblat. 2019. Q8BERT: Quantized 8bit
BERT. arXiv preprint arXiv:1910.06188.
Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali
Farhadi, and Yejin Choi. 2019. HellaSwag: Can
a Machine Really Finish Your Sentence? In Pro-
ceedings of the 57th Annual Meeting of the As-
sociation for Computational Linguistics, pages
4791–4800.
Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang,
Maosong Sun, and Qun Liu. 2019. ERNIE: En-
hanced Language Representation with Informa-
tive Entities. In Proceedings of the 57th Annual
Meeting of the Association for Computational
Linguistics, pages 1441–1451, Florence, Italy.
Association for Computational Linguistics.
Zhuosheng Zhang, Yuwei Wu, Hai Zhao, Zuchao
Li, Shuailiang Zhang, Xi Zhou, and Xiang Zhou.
2020. Semantics-aware BERT for Language
Understanding. In AAAI 2020.
Sanqiang Zhao, Raghav Gupta, Yang Song,
and Denny Zhou. 2019. Extreme Lan-
guage Model Compression with Optimal Sub-
words and Shared Projections. arXiv preprint
arXiv:1909.11687.
Yiyun Zhao and Steven Bethard. 2020. How does
BERT’s attention change when you fine-tune?
An analysis methodology and a case study in
negation scope. In Proceedings of the 58th An-
nual Meeting of the Association for Computa-
tional Linguistics, pages 4729–4747, Online. As-
sociation for Computational Linguistics.
Wenxuan Zhou, Junyi Du, and Xiang Ren.
2019. Improving BERT Fine-tuning with
Embedding Normalization. arXiv preprint
arXiv:1911.03918.
Xuhui Zhou, Yue Zhang, Leyang Cui, and Dandan
Huang. 2020. Evaluating Commonsense in Pre-
Trained Language Models. In AAAI 2020.
Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Tom
Goldstein, and Jingjing Liu. 2019. FreeLB: En-
hanced Adversarial Training for Language Un-
derstanding. arXiv:1909.11764 [cs].
Revisiting Feature Prediction for Learning Visual
Representations from Video
Adrien Bardes1,2,3, Quentin Garrido1,4, Jean Ponce3,5,6, Xinlei Chen1, Michael Rabbat1, Yann LeCun1,5,6,
Mahmoud Assran1,†, Nicolas Ballas1,†
1 FAIR at Meta, 2 Inria, 3 École normale supérieure, CNRS, PSL Research University, 4 Univ. Gustave Eiffel,
CNRS, LIGM, 5 Courant Institute, New York University, 6 Center for Data Science, New York University
†Joint last author
This paper explores feature prediction as a stand-alone objective for unsupervised learning from video and
introduces V-JEPA, a collection of vision models trained solely using a feature prediction objective, without
the use of pretrained image encoders, text, negative examples, reconstruction, or other sources of supervision.
The models are trained on 2 million videos collected from public datasets and are evaluated on downstream
image and video tasks. Our results show that learning by predicting video features leads to versatile visual
representations that perform well on both motion and appearance-based tasks, without adaptation of the
model's parameters; e.g., using a frozen backbone. Our largest model, a ViT-H/16 trained only on videos,
obtains 81.9% on Kinetics-400, 72.2% on Something-Something-v2, and 77.9% on ImageNet1K.
Date: April 15, 2024
Correspondence: {abardes, massran, ballasn}@meta.com
Code: https://github.com/facebookresearch/jepa
1 Introduction
Humans possess the remarkable ability to map low-level
signals originating from the retina into a semantic spatio-
temporal understanding of the world; synthesizing no-
tions such as objects and global motion (Spelke et al.,
1995). A long-standing goal of the machine learning
community is to identify the principles or objectives that
may guide such unsupervised learning in humans (Field,
1994; Berkes and Wiskott, 2005; Hinton, 1989). One
related hypothesis is based on the predictive feature
principle (Rao and Ballard, 1999), which posits that
representations of temporally adjacent sensory stimuli
should be predictive of each other.
In this work, we revisit feature prediction as a stand-
alone objective for unsupervised learning of visual repre-
sentations from video. Numerous advances in the field —
such as the standard use of transformer architectures in
vision (Dosovitskiy et al., 2020), the maturing of masked
autoencoding frameworks (Xie et al., 2021; Bao et al.,
2021; He et al., 2021), query-based feature pooling (Chen
et al., 2022), joint-embedding predictive architectures
(JEPA) (LeCun, 2022; Assran et al., 2023; Baevski et al.,
2022b), and larger datasets — form a unique arsenal of
tools, which we integrate in a modern and conceptually
simple method, the video joint-embedding predictive ar-
chitecture or V-JEPA, which is based solely on feature
prediction, without using pretrained image encoders,
text, negative examples, human annotations, or pixel-
level reconstruction.
[Figure 1 plot: frozen-evaluation top-1 accuracy on Kinetics 400 (x-axis, roughly 70–92) vs. Something-Something-v2 (y-axis, roughly 40–70) for V-JEPA ViT-L/16 and ViT-H/16, video pixel-prediction models (VideoMAE ViT-H/16, VideoMAEv2 ViT-g/14, OmniMAE ViT-H/16, Hiera-H), and image models (DINOv2 ViT-g/14, OpenCLIP ViT-G/14, I-JEPA ViT-H/16); SOTA fine-tuned task-specific models (MVD on SSv2, UniFormer on K400) are marked for reference.]
Figure 1 V-JEPA models pretrained on video learn versatile
visual representations. It performs well on motion-based
tasks (Something-Something-v2) and appearance-based tasks
(Kinetics 400) without adaptation of the model's parameters,
i.e., using the same frozen backbone for both tasks.
We seek to answer the simple question:
How effective is feature prediction as a stand-
alone objective for unsupervised learning from
video with modern tools?
To that end, we pretrain a family of V-JEPA models
on a dataset of 2 million videos collected from pub-
licly available datasets by combining a masked modeling
prediction task with a joint-embedding predictive ar-
chitecture (see Figure 2). We measure performance on
several downstream image and video tasks, using both
frozen evaluation and end-to-end fine-tuning. Our find-
ings suggest that feature prediction can indeed serve as
an effective stand-alone objective for unsupervised learn-
ing from video, while using significantly shorter training
schedules than pixel prediction methods. Specifically:
• Feature prediction leads to versatile visual repre-
sentations that perform well across downstream
image and video tasks without adaptation of the
model’s weights; i.e., using a frozen backbone.
V-JEPA achieves the best performance among
methods we consider (+6% accuracy) on the
Something-Something-v2 task, which requires fine-
grained temporal understanding. V-JEPA is
also competitive on tasks like Kinetics400, where
appearance-based features are sufficient and hence
state-of-the-art image models such as DINOv2 excel
(Figure 1 and Table 6).
• Models trained with feature prediction are supe-
rior to pixel prediction approaches under a frozen
evaluation protocol (attentive probing) and are com-
petitive with pixel prediction under full fine-tuning,
while using significantly shorter training schedules
(Tables 5 and 6).
• Models trained with feature prediction are more
label-efficient than pixel prediction approaches. De-
creasing the available number of labeled examples re-
sults in an increase in the performance gap between
V-JEPA and pixel-reconstruction models (Table 7).
2 Related Works
Slow Features. One way to encourage temporally
adjacent representations to be predictive of each other
is to ensure that they vary slowly over time. Early
works targeting predictive features encouraged represen-
tations of individual video frames to be locally tempo-
rally invariant, while preventing representation collapse
by using spectral methods, as in SFA (Wiskott and Se-
jnowski, 2002), SSA (Kayser et al., 2001), and Simulated
Fixations (Zou et al., 2012). More recently, Goroshin
et al. (2015); Wang et al. (2010) train a siamese con-
volutional network to map the representations of two
subsequent frames to the same point, while encouraging
distant frames to have diverse representations via a pair-
wise margin loss and a triplet loss, respectively. Other
works (Oord et al., 2018; Surís et al., 2021; Feichtenhofer
et al., 2021) implement temporal invariance using noise-
contrastive estimation (Gutmann and Hyvärinen, 2012).
Our exploration in this paper goes beyond temporal in-
variance and explores feature prediction using masked
modeling.
Predictive Features. Going beyond local invariance,
a family of works trains a predictor network to map the
representation of a frame or clip at one time-step to a
distinct representation at another time-step. Srivastava
et al. (2015); Vondrick et al. (2016); Wang et al. (2023b)
train such a video feature predictor network on top of
a frozen pretrained image or video encoder. Unfreezing
the target feature extractor, several methods train the
video encoder and the predictor network simultaneously,
while preventing collapse by using a supervised action
forecasting loss (Girdhar and Grauman, 2021), or by
using the representations of distant clips as negative
samples in a contrastive loss (Han et al., 2019, 2020;
Tan et al., 2023), often focusing on small convolutional
encoders (Han et al., 2019, 2020). The idea of learning a
representation by predicting missing information in fea-
ture space is also core to the joint-embedding predictive
architecture (JEPA) (LeCun, 2022), which combines a
siamese encoder with a predictor network. JEPAs have
been successfully instantiated in several modalities, such
as with audio data (Baevski et al., 2022b) and image
data (Zhou et al., 2021; Oquab et al., 2023; Assran et al.,
2023). In this work, we extend this paradigm to video
data by leveraging recent advances in self-supervised
learning.
Advances in Self-Supervised Learning. The use
of vision transformers (Dosovitskiy et al., 2020; Li et al.,
2022) has become standard practice in self-supervised
learning with joint-embedding architectures (Chen et al.,
2021; Caron et al., 2021; Oquab et al., 2023; Zhou et al.,
2021; Assran et al., 2022), and unlocked masked image
modeling in pixel space by parameterizing the pixel de-
coder as a transformer with learnable mask tokens (Doso-
vitskiy et al., 2020; Xie et al., 2021; He et al., 2021; Bao
et al., 2021), demonstrating a step-change in the rep-
resentation quality of autoencoding methods (Vincent
et al., 2010). This line of generative methods was sub-
sequently extended to video data using spatio-temporal
masking (Tong et al., 2022; Feichtenhofer et al., 2022;
Wang et al., 2023a; Kalluri et al., 2023; Gupta et al.,
2023). It was also recently shown that the representations
of masked image autoencoders could be significantly
improved by using learnable pooling mechanisms based
on cross-attention (Chen et al., 2022). Finally, through
careful selection of design choices, the non-contrastive
collapse prevention strategy in BYOL (Grill et al., 2020)
was recently made to work with image feature prediction
methods (Baevski et al., 2022b; Assran et al., 2023),
which demonstrated the ability to learn representations
that can be leveraged for various downstream tasks with-
out relying on invariance to hand-crafted image trans-
formations.
Feature Prediction versus Pixel Reconstruction.
Approaches that predict in pixel space must dedicate
significant model capacity and compute to capture all
the low-level detail in the visual input. By contrast, ap-
proaches that predict in latent space have the flexibility
to eliminate irrelevant or unpredictable pixel-level details
from the target representation (Vondrick et al., 2016).
Predicting in representation space has been shown to
lead to versatile representations that perform well across
many downstream tasks through linear probing or low-
shot adaptation (Assran et al., 2023; Oquab et al., 2023;
Assran et al., 2022), while demonstrating an efficiency
gain during pretraining compared to pixel level recon-
struction (Assran et al., 2023; Baevski et al., 2022b,a).
The works of Baevski et al. (2022a,b) additionally show
that predicting in representation space results in compet-
itive end-to-end fine-tuning performance in the image,
audio and text domains. In this work, we extend these
findings to the video modality.
3 Methodology: Video-JEPA
[Figure 2 diagram: input x → x-encoder → predictor (conditioned on z) → prediction ŝy, compared via D(ŝy, sy) to the target sy produced by the y-encoder from input y.]
Figure 2 Joint-Embedding Predictive Architectures are
trained to predict the representation of an input y from
the representation of another input x. The additional
variable z provides the predictor with information about the
transformation that computes y from x.
Our goal is to explore the effectiveness of feature pre-
diction as a stand-alone objective for learning visual
representations from video. To that end, we use a
joint-embedding predictive architecture (JEPA) (LeCun,
2022); see Figure 2. The main idea behind a JEPA is
to learn by predicting the representation of an input y
from the representation of another input x. The basic
architecture is made up of an encoder, Eθ(·), which computes
the representation of the inputs, and a predictor, Pϕ(·),
which predicts the representation of y from the representation
of x, conditioned on a variable z indicating the transformation
(or corruption) between x and y. Conditioning on z enables the
generation of distinct predictions for various transformations of x.
3.1 Training Objective
We train our visual encoder Eθ(·) to satisfy the constraint
that representations computed from one part of the video, y,
should be predictable from representations computed from
another part of the video, x. The predictor network Pϕ(·),
which maps the representation of x to the representation of y,
is trained simultaneously with the encoder, and is provided
specification of the spatio-temporal positions of y through the
conditioning variable z ← Δy.
Naively implementing the objective using the regression
\[ \operatorname*{minimize}_{\theta,\phi}\; \big\| P_\phi\big(E_\theta(x), \Delta_y\big) - E_\theta(y) \big\|_1 \]
would admit a trivial solution, where the encoder outputs a
constant representation, regardless of its input. In practice,
we use the following modified objective to prevent
representation collapse,
\[ \operatorname*{minimize}_{\theta,\phi}\; \big\| P_\phi\big(E_\theta(x), \Delta_y\big) - \mathrm{sg}\big(\overline{E}_\theta(y)\big) \big\|_1 , \tag{1} \]
where sg(·) denotes a stop-gradient operation, which does not
backpropagate through its argument, and \(\overline{E}_\theta(\cdot)\) is an
exponential moving average of the network Eθ(·). The use of an
exponential-moving-average feature extractor along with a
stop-gradient and a predictor has been used as a collapse
prevention strategy for image pretraining (Grill et al., 2020),
and studied empirically (Xie et al., 2021) and theoretically
(Tian et al., 2021). In fact, the objective in equation (1) is
similar to the loss of Assran et al. (2023) used for image
pretraining, but we modify it to use an ℓ1 regression, which we
found to be more stable.
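To make the recipe concrete, here is a minimal PyTorch-style sketch of one V-JEPA update step implementing equation (1): a stop-gradient on the target branch, an ℓ1 regression, and an exponential-moving-average update of the y-encoder. The tiny MLP modules, tensor shapes, and the ema_momentum value are illustrative placeholders rather than the actual ViT backbone or hyperparameters, and masking/positional conditioning is abstracted into the mask_tokens argument.

```python
# Minimal sketch of a V-JEPA-style training step (equation 1), assuming small
# placeholder modules in place of the ViT encoder and narrow transformer predictor.
import copy
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Placeholder for the ViT encoder: maps a token sequence to embeddings."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    def forward(self, tokens):              # tokens: [batch, n_tokens, dim]
        return self.net(tokens)

class TinyPredictor(nn.Module):
    """Placeholder for the predictor, conditioned on mask tokens that encode
    the spatio-temporal positions of the target region y."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
    def forward(self, context_emb, mask_tokens):
        # The real model concatenates context embeddings with the mask tokens and
        # processes them jointly; here we simply add pooled context to each query.
        return self.net(mask_tokens + context_emb.mean(dim=1, keepdim=True))

x_encoder = TinyEncoder()
y_encoder = copy.deepcopy(x_encoder)         # EMA target encoder; never receives gradients
for p in y_encoder.parameters():
    p.requires_grad_(False)
predictor = TinyPredictor()
opt = torch.optim.AdamW(list(x_encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

def training_step(x_tokens, y_tokens, mask_tokens, ema_momentum=0.999):
    with torch.no_grad():                    # stop-gradient on the target branch
        targets = y_encoder(y_tokens)
    preds = predictor(x_encoder(x_tokens), mask_tokens)
    loss = (preds - targets).abs().mean()    # L1 regression of equation (1)
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                    # EMA update of the y-encoder
        for p_ema, p in zip(y_encoder.parameters(), x_encoder.parameters()):
            p_ema.mul_(ema_momentum).add_(p, alpha=1.0 - ema_momentum)
    return loss.item()

# Toy usage: batch of 2, 8 context tokens, 4 target tokens, embedding dim 64.
loss = training_step(torch.randn(2, 8, 64), torch.randn(2, 4, 64), torch.randn(2, 4, 64))
```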
Theoretical motivation. A theoretical motivation for the
effectiveness of this collapse prevention strategy was proposed
in Grill et al. (2020) for the BYOL method. We provide a simple
adaptation of their analysis for our ℓ1 loss. For ease of
exposition, we will disregard the effect of the conditioning
variable z and consider one-dimensional representations. Denote
the representation Eθ(y) by a random variable Y. The optimal
predictor under equation (1) is thus given by the following
functional expression,
\[ P^\star(E_\theta(x)) = \operatorname*{argmin}_P \big\| P(E_\theta(x)) - Y \big\|_1 = \operatorname{median}\big(Y \,\big|\, E_\theta(x)\big). \]
Substituting this expression for the optimal predictor into the
loss function and evaluating the expected gradient of the
encoder gives
\[ \nabla_\theta\, \mathbb{E}\big\| P^\star(E_\theta(x)) - Y \big\|_1 = \nabla_\theta\, \mathrm{MAD}\big(Y \,\big|\, E_\theta(x)\big), \]
where MAD(· | Eθ(x)) is the median absolute deviation of a
random variable conditioned on Eθ(x). Thus, in the case where
the predictor is optimal, the encoder must learn to capture as
much information about the video as possible to minimize the
deviation of the target. The hypothesis is that incorporating an
exponential moving average to compute the representation of y
ensures that the predictor evolves faster than the encoder and
remains close to optimal, thereby preventing collapse.
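As a sanity check on the first step of this argument, the toy numerical sketch below verifies, in the unconditional one-dimensional case (i.e., ignoring the conditioning on Eθ(x)), that the constant predictor minimizing the expected ℓ1 error of Y is the median of Y. The choice of distribution and grid resolution is arbitrary and purely illustrative.

```python
# Numerical illustration: argmin_c E|Y - c| is the median of Y for an L1 loss.
import numpy as np

rng = np.random.default_rng(0)
Y = rng.exponential(scale=2.0, size=100_000)        # a skewed target distribution

candidates = np.linspace(Y.min(), Y.max(), 2001)    # candidate constant predictors c
l1 = np.array([np.abs(Y - c).mean() for c in candidates])
best = candidates[int(np.argmin(l1))]

print("argmin of E|Y - c|:", best)                  # close to the median of Y
print("median(Y):         ", np.median(Y))
print("min expected loss: ", l1.min())              # equals E|Y - median(Y)|
print("E|Y - median(Y)|:  ", np.abs(Y - np.median(Y)).mean())
```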
[Figure 3 diagram: a binary mask of shape [T×H×W] removes masked tokens from the L input tokens; the x-encoder Eθ embeds the remaining N tokens ([N×d]); learnable mask tokens are concatenated ([L×d]) and processed by the predictor Pϕ, whose M outputs ([M×d]) are matched with an L1 loss to the M masked outputs of the y-encoder Ēθ ([M×d], stop-grad).]
Figure 3 V-JEPA. Training operates on a video clip of T frames with spatial resolution H × W, flattened into a sequence
of L tokens. (Left to right): We first obtain the input of the x-encoder by dropping tokens from the video clip. The
x-encoder then processes the masked video sequence, and outputs an embedding vector for each input token. Next, the
outputs of the x-encoder are concatenated with a set of learnable mask tokens containing positional embeddings of the masked
spatio-temporal patches. The predictor network processes the combined token sequence, and outputs an embedding vector for
each mask token. The outputs of the predictor are then regressed to the prediction targets using an L1 loss. The prediction
targets correspond to the output of the y-encoder.
3.2 Prediction Task: Predicting y from x
The feature prediction task is based on a masked modeling
formulation (He et al., 2021; Tong et al., 2022); i.e., regions
x and y from the video are sampled using masking. To sample y
from a video, we sample several (possibly overlapping) spatially
continuous blocks with various aspect ratios and repeat the
spatial blocks across the entire temporal dimension of the
video; x is taken to be the complement. Masking a large
continuous block that covers the full temporal dimension limits
information leakage due to the spatial and temporal redundancy
of videos, and results in a harder prediction task (Tong et al., 2022).
We leverage two types of masks: short-range masks, where we take
the union of 8 randomly sampled target blocks covering 15% of
each frame, and long-range masks, where we take the union of 2
randomly sampled target blocks covering 70% of each frame. In
both cases, the aspect ratio for all sampled blocks is randomly
chosen in the range (0.75, 1.5). Given that both short-range and
long-range masks are produced by sampling many blocks and taking
their union, the result is an average masking ratio of ∼90%. We
refer to our masking strategy as multi-block, and compare it to
other possible masking strategies in Section 4.
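The following short Python sketch illustrates how such a multi-block mask could be sampled on a grid of spatio-temporal tokens. The 8×14×14 token grid (16 frames with a tubelet size of 2, 224-pixel resolution with 16-pixel patches) and the rounding/placement details are illustrative assumptions, not taken from the released implementation.

```python
# Sketch of multi-block mask sampling over a T_tok x H_tok x W_tok token grid.
import random

def sample_block(h_tok, w_tok, frame_coverage, aspect_min=0.75, aspect_max=1.5):
    """Sample one spatial block covering roughly frame_coverage of a frame."""
    aspect = random.uniform(aspect_min, aspect_max)
    area = frame_coverage * h_tok * w_tok
    bh = max(1, min(h_tok, round((area * aspect) ** 0.5)))
    bw = max(1, min(w_tok, round((area / aspect) ** 0.5)))
    top = random.randint(0, h_tok - bh)
    left = random.randint(0, w_tok - bw)
    return {(r, c) for r in range(top, top + bh) for c in range(left, left + bw)}

def sample_mask(t_tok, h_tok, w_tok, num_blocks, frame_coverage):
    """Union of spatial blocks, repeated over the full temporal extent.
    Returns the set of masked (t, r, c) token coordinates (the target y)."""
    spatial = set()
    for _ in range(num_blocks):
        spatial |= sample_block(h_tok, w_tok, frame_coverage)
    return {(t, r, c) for t in range(t_tok) for (r, c) in spatial}

# Short-range mask: union of 8 blocks, each covering ~15% of a frame.
short = sample_mask(t_tok=8, h_tok=14, w_tok=14, num_blocks=8, frame_coverage=0.15)
# Long-range mask: union of 2 blocks, each covering ~70% of a frame.
long_ = sample_mask(t_tok=8, h_tok=14, w_tok=14, num_blocks=2, frame_coverage=0.70)
print(len(short) / (8 * 14 * 14), len(long_) / (8 * 14 * 14))  # resulting masking ratios
```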
3.3 Network Parameterization
We use a Vision Transformer (ViT) (Dosovitskiy et al.,
2020; Arnab et al., 2021) as our video backbone. To
process a video with a transformer network, we split the
video clip into a 3D grid of L spatio-temporal patches, where a patch consists of a 16 × 16 pixel block spanning 2 consecutive frames; we refer to these spatio-temporal patches as tokens. This sequence of tokens is then directly processed by the stack of transformer blocks. Since inputs x and y correspond to masked regions of a video, we apply the video masks by simply dropping a subset of the tokens. We apply masking at the input of the x-encoder, and at the output of the y-encoder to construct contextualized targets (Baevski et al., 2022b). The encoder is parameterized using standard ViT networks, while the predictor is a narrow transformer implemented using 12 blocks with an embedding dimension of 384. Taking inspiration from masked autoencoders (He et al., 2021), our predictor takes as input the sequence of embeddings produced by the x-encoder as well as a sequence of learnable mask tokens with positional embeddings indicating the spatio-temporal positions of the y tokens. The output of the predictor is an embedding vector for each mask token; see Figure 3 and refer to Appendix B for more details.
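A minimal sketch (ours) of how the predictor input is assembled from these pieces is given below; the class name, head count, and feed-forward width are illustrative assumptions, and the x-encoder outputs are assumed to already be projected to the predictor's embedding dimension.

import torch
import torch.nn as nn

class NarrowPredictor(nn.Module):
    """Narrow transformer predictor: 12 blocks, embedding dimension 384."""
    def __init__(self, dim=384, depth=12, num_heads=6):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))   # shared learnable vector
        layer = nn.TransformerEncoderLayer(dim, num_heads, dim * 4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, depth)

    def forward(self, x_tokens, masked_pos_embed):
        # x_tokens:         [B, N, dim] embeddings of the visible patches
        # masked_pos_embed: [B, M, dim] 3D sin-cos positions of the masked patches
        mask_tokens = self.mask_token + masked_pos_embed          # [B, M, dim]
        z = torch.cat([x_tokens, mask_tokens], dim=1)             # [B, N + M, dim]
        z = self.blocks(z)
        return z[:, x_tokens.shape[1]:]                           # predictions for the M masked patches

pred = NarrowPredictor()
out = pred(torch.randn(2, 160, 384), torch.randn(2, 1408, 384))   # ~90% of 1568 tokens masked
print(out.shape)                                                  # torch.Size([2, 1408, 384])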
3.4 Pretraining Data and Evaluation Setup
Pretraining. We combine several public datasets to
construct an unsupervised video pretraining dataset,
which we refer to as VideoMix2M. Specifically, we com-
bine the videos from HowTo100M (HT) (Miech et al.,
2019), Kinetics-400/600/700 (K710) (Kay et al., 2017),
and Something-Something-v2 (SSv2) (Goyal et al., 2017),
and remove any overlap with the validation sets of
Kinetics-400/600/700 and Something-Something-v2, re-
sulting in approximately 2 million videos. We train a
ViT-L/16, a ViT-H/16, and a ViT-H/16384 transformer
model on VideoMix2M. We use a batch size of 3072 for
the ViT-L/16 and ViT-H/16 models, and a batch size
of 2400 for the ViT-H/16384 model. Each model takes
as input a video clip of 16 frames sampled with a frame-
skip of 4, corresponding to roughly 3 second clips on
average. The ViT-L/16 and ViT-H/16 process the video
at a spatial resolution of 224, while the ViT-H/16384
uses an input resolution of 384; cf. Appendix C.
Table 1 Pixels vs. Featurized Targets. We ablate the effect of computing the prediction loss in feature space vs. pixel space. All models are trained on VideoMix2M for 90K iterations with a batch size of 3072 using the multi-block prediction task. We examine downstream performance using a frozen backbone with attentive probing, and report top-1 accuracy using a single center view. We also examine end-to-end fine-tuning performance of the models on K400. Predicting in feature space provides a consistent improvement over pixel space prediction.
                               Frozen Evaluation                      Fine-Tuning
Target     Arch.       K400 (16×1×1)   SSv2 (16×1×1)   IN1K    K400-ft (16×5×3)
Pixels     ViT-L/16    68.6            66.0            73.3    85.4
Features   ViT-L/16    73.7            66.2            74.8    85.6
Table 2 Pretraining Data Distribution. We pretrain all models for 90K iterations using a batch size of 3072, and evaluate downstream performance of the frozen backbones with an attentive probe using a single center view. Average performance across tasks increases with the pretraining dataset size.
                                        Frozen Evaluation
Arch.       Data          #Samples   K400 (16×1×1)   SSv2 (16×1×1)   IN1K    Avg.
ViT-L/16    K710          700K       75.8            63.2            73.7    70.9
ViT-L/16    K710+SSv2     900K       72.9            67.4            72.8    71.0
ViT-L/16    K710+HT       1900K      74.5            64.2            74.8    71.1
ViT-L/16    VideoMix2M    2000K      73.7            66.2            74.8    71.5
ViT-H/16    K710+SSv2     900K       75.7            66.8            73.7    72.0
ViT-H/16    VideoMix2M    2000K      74.0            68.5            75.9    72.8
Evaluations. Pretrained models are evaluated on
downstream video and image tasks. On video tasks,
we use a subset of the VideoGLUE benchmark (Yuan
et al., 2023) to test for various capabilities; specif-
ically, we investigate action recognition on Kinetics-
400 (K400) (Kay et al., 2017), motion classification on
Something-Something-v2 (SSv2) (Goyal et al., 2017),
and action localization on AVA (Gu et al., 2018). Action
classification on Kinetics evaluates the appearance-based
understanding of the model, as many action classes in
the dataset can be inferred from the presence of specific
objects in the video (Sevilla-Lara et al., 2021). Motion
classification on Something-Something-v2 evaluates the
temporal understanding of the model, as action classes
in the dataset are decoupled from the appearance/pres-
ence of specific objects in the video (Goyal et al., 2017).
Finally, action localization on AVA evaluates the ability
of the model to understand and localize motions in the
video. We follow standard practice and report accu-
racy on K400 and SSv2 by sampling several spatial and
temporal views. For static image tasks, we explore ob-
ject recognition on ImageNet (Russakovsky et al., 2015),
scene classification on Places205 (Zhou et al., 2014), and
fine-grained recognition on iNaturalist 2021 (Van Horn
et al., 2018).
4 What Matters for Learning Represen-
tations from Video?
In this section we isolate the contributions of several de-
sign choices, including: a) the use of a feature prediction
versus pixel prediction objective, b) the construction of
the pretraining data distribution, c) the feature pooling
strategy for leveraging the model’s representations in
downstream tasks, and d) the masking strategy, towards
identifying: what to predict from what?
4.1 Predicting Representations versus Pixels
We first ablate the effect of computing the prediction
loss in representation space. We train a pair of ViT-L/16
models using either a V-JEPA feature prediction loss,
or a mean-squared error loss with the normalized pixel
values, as in masked autoencoders (He et al., 2021), and
perform a sweep over the learning rate and weight decay
schedules for both approaches. All models are pretrained
on VideoMix2M for 90K iterations with a batch size of
3072 using multi-block masking. We examine perfor-
mance on Kinetics-400 (K400), Something-Something-v2
(SSv2), and ImageNet-1K (IN1K), using a frozen back-
bone with an attentive probe, and report top-1 accuracy
using a single center view. We also examine end-to-end
fine-tuning performance of the models on Kinetics-400.
Results of this comparison are reported in Table 1 and
indicate that predicting in feature space provides a con-
sistent performance improvement over pixel space pre-
diction in both frozen evaluation of the video backbone,
as well as end-to-end fine-tuning.
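A compact sketch (ours) of the two objectives being compared is shown below; the shapes are illustrative, and the per-patch pixel normalization follows the masked-autoencoder recipe referenced above.

import torch

# Feature-space target (V-JEPA): L1 between predicted and target features.
pred_feat = torch.randn(2, 1408, 1024)       # predictions for the M masked tokens
target_feat = torch.randn(2, 1408, 1024)     # y-encoder outputs at the same tokens
feature_loss = (pred_feat - target_feat).abs().mean()

# Pixel-space target (MAE-style baseline): MSE against per-patch-normalized pixels.
pred_pix = torch.randn(2, 1408, 2 * 16 * 16 * 3)
target_pix = torch.randn(2, 1408, 2 * 16 * 16 * 3)
mu = target_pix.mean(dim=-1, keepdim=True)
std = target_pix.std(dim=-1, keepdim=True)
pixel_loss = ((pred_pix - (target_pix - mu) / (std + 1e-6)) ** 2).mean()
print(feature_loss.item(), pixel_loss.item())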
4.2 Pretraining Data Distribution
Next we study the impact of the pretraining data dis-
tribution in Table 2. Leveraging large scale datasets
Table 3 Average Pooling vs. Adaptive Pooling. We pool the feature map output by the frozen V-JEPA encoder using an attentive probe, which is then fed into a linear classifier for downstream supervised tasks (K400 and SSv2). We evaluate two pooling strategies: 1) average pooling (Avg.), and 2) attentive pooling (Att.). Results are reported using a single center view. Using adaptive pooling with a cross-attention layer leads to improvements of +17.0 points on K400 and +16.1 points on SSv2.
                               Frozen Evaluation
                         K400 (16×1×1)        SSv2 (16×1×1)
Method    Arch.          Avg.     Att.        Avg.     Att.
V-JEPA    ViT-L/16       56.7     73.7        50.1     66.2
has been critical for enabling the surge of advancements
in other modalities, such as text and images (Kaplan
et al., 2020; Cherti et al., 2023). We investigate whether
a similar trend holds for video data. To control for the
possible confounding variable of compute budget, we
pretrain all models in Table 2 for 90K iterations using
a batch-size of 3072. We report downstream results on
K400, SSv2, and IN1K using a frozen backbone with an
attentive probe, and report top-1 accuracy using a single
center view.
Table 2 shows that average performance across tasks
monotonically increases as we increase the size of the
pretraining dataset, but the best task-specific perfor-
mance is obtained by independently selecting the pre-
training data for each specific downstream task. For
instance, the L/16 obtains its best SSv2 performance
when pretrained on K710+SSv2, its best K400 perfor-
mance when pretrained only on K710, and its best IN1K
performance when pretrained only on K710+HT. The
best average performance across all tasks is achieved by pretraining on VideoMix2M, which combines all the data sources. Similarly, the H/16 pretrained on K710+SSv2 achieves a greater K400 score than the H/16 pretrained on VideoMix2M; however, the top-performing H/16 on average is pretrained on VideoMix2M.
4.3 Evaluation: Attentive Probing
Next we explore the feature pooling strategy for apply-
ing the model’s representations in downstream tasks.
Since the prediction objective in equation (1) is unnormalized, there is no a priori reason for the encoder to yield a linearly separable subspace (Chen et al., 2020). Thus, rather than using a linear operation (averaging) to pool the features output by the frozen backbone, we explore a learnable non-linear pooling strategy. Specifically, when evaluating the frozen pretrained backbone on downstream tasks, we learn a cross-attention layer with a learnable query token. The output of the cross-attention layer is then added back to the query token (residual connection), and then fed into a two-layer MLP
Table 4 Ablating Prediction Task. Models are ViT-L/16 networks pretrained on K710 and SSv2 and evaluated with an attentive probe using a single center view. The region x is sampled by masking spatio-temporal regions in the video; y is the mask complement. 1) random-tube[r]: x is obtained by masking a fraction r of tubes (spatial patches extended across the entire temporal duration) from the video, 2) causal multi-block[p]: x is restricted to the first p frames of the 16-frame video, which are then masked with a random set of spatio-temporal blocks, 3) multi-block: x is obtained by masking a random set of spatio-temporal blocks from the entire video. The best performance is obtained with multi-block masking.
                                Frozen Evaluation
Masking                    K400 (16×1×1)   SSv2 (16×1×1)   IN1K
random-tube[0.9]           51.5            46.4            55.6
causal multi-block[6]      61.3            49.8            66.9
causal multi-block[12]     71.9            63.6            72.2
multi-block                72.9            67.4            72.8
with a single GeLU activation, followed by a LayerNorm,
and finally a linear classifier.
In Table 3 we see that using adaptive pooling with
a learnable cross-attention layer leads to a significant
improvement of +17 points on K400 and +16.1 points
on SSv2. Using an attentive probe is also beneficial for
other baseline models as reported in Appendix E.
4.4 Prediction Task: Predictingy from x
We conduct an ablation on the masking strategy used in
V-JEPA pretraining. We examine the following masking
strategies: random-tube[r], in which x is obtained by removing a random fraction r of tubes (spatial patches extended across the entire temporal duration) from the video; causal multi-block[p], in which x is restricted to the first p frames of the 16-frame video, which are then masked with a random set of spatio-temporal blocks; and multi-block, in which x is obtained by masking a random set of spatio-temporal blocks from the entire video.
Spatio-temporal blocks are sampled using the parame-
ters described in Section 3.2; an ablation on the size and
quantity of masked spatio-temporal blocks is provided
in Appendix E.4.
Table 4 indicates that the best results are obtained by
sampling x using a multi-block strategy, wherein the
network is forced to make predictions after removing
large continuous blocks in the video. When x is only sampled from the first few frames of the video, as in the causal multi-block strategy, we observe a decrease in downstream performance. Finally, the random-tube strategy, wherein 90% of the tubes in the video are randomly masked, leads to features of low semantic quality
when combined with our feature prediction objective.
Table 5 Comparison with Pixel Prediction Methods. We compare V-JEPA with OmniMAE (Girdhar et al., 2023), VideoMAE (Tong et al., 2022), and Hiera (Ryali et al., 2023), which leverage a pixel-reconstruction loss. All models are trained using a ViT-L architecture or a comparable Hiera-L. We evaluate the approaches on downstream image tasks (IN1K, Places205, iNat21) and video tasks (K400, SSv2, AVA) in both frozen evaluation (with a frozen backbone) and end-to-end fine-tuning. All models are evaluated at resolution 224. On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views of the video. In frozen evaluation, V-JEPA outperforms the baselines on all downstream tasks, except ImageNet, where the model achieves 74.8% compared to 75.1% of an OmniMAE model trained directly on ImageNet. V-JEPA also achieves the best fine-tuning performance among all ViT-L models and matches the Hiera-L on SSv2. The V-JEPA results are achieved while processing significantly fewer examples during pretraining.
                                              Frozen Evaluation w/ Att. Pooling                                Fine-Tuning
Method     Arch.      Samples Seen  Iter.    K400 (16×8×3)  SSv2 (16×2×3)  AVA    IN1K   Places205  iNat21   K400-ft (16×5×3)  SSv2-ft (16×2×3)
Methods pretrained using pixel prediction
OmniMAE    ViT-L/16   2400M         1170K    65.6           60.6           14.4   75.1   59.8       66.1     84.0              74.2
VideoMAE   ViT-L/16   410M          400K     77.8           65.5           21.6   71.1   59.3       64.6     85.4              74.3
Hiera      Hiera-L    770M          1500K    75.5           64.2           15.8   68.9   58.5       56.9     87.3              75.1
V-JEPA     ViT-L/16   270M          90K      80.8           69.5           25.6   74.8   60.3       67.8     85.6              75.1
Table 6 Comparison with State-of-the-Art Models. We compare V-JEPA with state-of-the-art baselines in frozen evaluation with an attentive probe on downstream image tasks (IN1K, Places205, iNat21) and video tasks (K400, SSv2, AVA). All models are evaluated at resolution 224, except I-JEPA512 and V-JEPA384, which are evaluated at resolutions 512 and 384, respectively. On K400 and SSv2 we follow the standard practice of reporting accuracy from several spatial and temporal views of the video. Compared to other video baselines, V-JEPA exhibits a consistent improvement across all downstream tasks. Compared to image models that excel under the frozen evaluation, V-JEPA shows a significant performance improvement on tasks requiring motion understanding (+21 points on SSv2), and reduces the gap between video and image models on tasks requiring static appearance-based features.
                                                       Video Tasks                          Image Tasks
Method       Arch.          Params.  Data          K400 (16×8×3)  SSv2 (16×2×3)  AVA     IN1K    Places205   iNat21
Methods pretrained on Images
I-JEPA       ViT-H/16512    630M     IN22K         79.7           50.0           19.8    84.4    66.5        85.7
OpenCLIP     ViT-G/14       1800M    LAION         81.8           34.8           23.2    85.3    70.2        83.6
DINOv2       ViT-g/14       1100M    LVD-142M      83.4           50.6           24.3    86.2    68.4        88.8
Methods pretrained on Videos
MVD          ViT-L/16       200M     IN1K+K400     79.4           66.5           19.7    73.3    59.4        65.7
OmniMAE      ViT-H/16       630M     IN1K+SSv2     71.4           65.4           16.0    76.3    60.6        72.4
VideoMAE     ViT-H/16       630M     K400          79.8           66.2           20.7    72.3    59.1        65.5
VideoMAEv2   ViT-g/14       1100M    Un.Hybrid     71.2           61.2           12.9    71.4    60.6        68.3
Hiera        Hiera-H        670M     K400          77.0           64.7           17.5    71.4    59.5        61.7
V-JEPA       ViT-L/16       200M     VideoMix2M    80.8           69.5           25.6    74.8    60.3        67.8
V-JEPA       ViT-H/16       630M     VideoMix2M    82.0           71.4           25.8    75.9    61.7        67.9
V-JEPA       ViT-H/16384    630M     VideoMix2M    81.9           72.2           25.0    77.4    62.8        72.6
5 Comparison with Prior Work
In Section 5.1, we investigate the impact of feature pre-
diction by comparing V-JEPA with video approaches
that rely on pixel prediction, while using a similar ar-
chitecture for all baselines. Subsequently, in Section 5.2,
we remove the architectural constraint and report the
best performance across architectures for self-supervised
video and image pretraining approaches. Finally, we ex-
plore the label-efficiency of V-JEPA relative to other self-
supervised video pretraining approaches in Section 5.3.
We further detail the evaluation setup in Appendix D.
5.1 Comparison with Pixel Prediction
To investigate the effectiveness of feature prediction pretraining, we first compare V-JEPA to video masked modeling models relying on a pixel prediction loss. We control for the possible confounding factor of model architec-
ture by evaluating all models using either a ViT-L/16
encoder, or a Hiera-L encoder, which has a similar num-
ber of parameters. For the pixel prediction baselines
we consider VideoMAE (Tong et al., 2022; Wang et al.,
2023a), which trains vision transformer autoencoders
exclusively on video, Hiera (Ryali et al., 2023), which
trains a hierarchical transformer autoencoder on video,
and OmniMAE (Girdhar et al., 2023), which trains a
vision transformer autoencoder on static images and
video simultaneously.
Table 5 examines both frozen evaluation with an atten-
tive probe on downstream video and image tasks, as well
as end-to-end fine-tuning. In frozen evaluation, V-JEPA outperforms the baselines on all downstream tasks, except ImageNet, where we achieve 74.8% compared to
75.1% of an OmniMAE model trained directly on Im-
[Figure 4 plot: SSv2 end-to-end fine-tuning accuracy vs. samples seen (M) for V-JEPA ViT-L/16 (video feature prediction) and VideoMAE ViT-L/16, Hiera-L, and OmniMAE ViT-L/16 (video pixel prediction); a horizontal line marks the SOTA fine-tuned task-specific model on SSv2 (MVD).]
Figure 4 SSv2 fine-tuning performance vs. Samples Seen. We report SSv2 fine-tuning for V-JEPA and pixel-reconstruction baselines using a ViT-L/16 or Hiera-L architecture. V-JEPA outperforms all pixel-reconstruction methods using a ViT-L/16 and matches the Hiera-L performance while seeing significantly fewer samples during pretraining.
ageNet; hence, V-JEPA achieves comparable ImageNet performance despite only pretraining on video.
Under the fine-tuning protocol, V-JEPA also achieves the best performance of any model trained with a ViT-L/16, and matches the performance of the Hiera-L on SSv2, which benefits from a hierarchical prior (Ryali et al., 2023).
The V-JEPA models achieve this result while processing
significantly fewer samples during pretraining (Figure 4),
demonstrating the efficiency of feature prediction as a
learning principle.
5.2 Comparison with State-of-the-Art
Next, in Table 6, we inspect how the V-JEPA models
pretrained on video stack up next to the largest state-
of-the-art self-supervised image and video models when
freezing the backbone encoder and training an attentive
probe on top. Our image pretrained baselines include
OpenCLIP (Cherti et al., 2023), DINOv2 (Oquab et al.,
2023), and I-JEPA (Assran et al., 2023). The Open-
CLIP model is trained with a contrastive image-text
alignment objective, DINOv2 and I-JEPA are trained
with self-supervision. These models are known to excel
in their frozen-evaluation performance (Oquab et al.,
2023); i.e., their ability to produce visual features that
can be applied to many downstream tasks simultane-
ously, without end-to-end fine-tuning, and thus pro-
vide highly competitive baselines. Our video pretrained
baselines include VideoMAE (Tong et al., 2022), Omni-
MAE (Girdhar et al., 2023), Hiera (Ryali et al., 2023),
VideoMAEv2 (Wang et al., 2023a), and MVD (Wang
et al., 2023b). The OpenCLIP, DINOv2 and Video-
MAEv2 models are parameterized as Giant/Gigantic
vision transformer architectures containing over 1B pa-
rameters trained on large-scale image or video datasets.
Comparison with video models. Compared to
large-scale video baselines, the V-JEPA models outper-
form all previous models on every downstream video
[Figure 5 plot: SSv2 frozen-evaluation accuracy vs. pretraining time (hrs.) for V-JEPA ViT-H/16384 (video feature prediction) and VideoMAE ViT-H/16 and VideoMAEv2 ViT-g/14 (video pixel prediction).]
Figure 5 SSv2 frozen-evaluation performance vs. Pretraining
Time. Wallclock times for all methods are measured on a
single GPU with a batch size of 10 clips, using the official
codebases for VideoMAE and VideoMAEv2, and linearly
extrapolated assuming a global batch size of 2400 samples.
However, note that the SSv2 accuracies of video pixel pre-
diction methods are actually obtained with small batch sizes
and significantly longer training schedules. V-JEPA out-
performs pixel-reconstruction methods while training signifi-
cantly faster.
and image task with notable margin (see Table 6). Our
H/16 model outperforms the largest publicly available
VideoMAE, VideoMAEv2, OmniMAE, MVD, and Hiera
models by at least +5 points in motion understanding (Something-Something-v2), +2 points in action recognition (Kinetics-400), +5 points on action detection (AVA), +1 point on object recognition (ImageNet-1K), +2 points in scene recognition (Places205), and +0.2 points on fine-grained recognition (iNaturalist). Moreover, when comparing pretraining wallclock time in Figure 5, we see that V-JEPA achieves this performance with a roughly 2×
speedup compared to the large pixel prediction models.
Comparison with image models. On tasks that re-
quire a fine-grained understanding of motion (Something-
Something-v2), the V-JEPA models provide a major improvement (over +21 points) compared to large-scale image baselines, such as DINOv2, OpenCLIP, and I-JEPA. Self-supervised pretraining from videos makes it possible to model dynamic concepts that are not easily learned from static image datasets. Similarly, we observe that the
V-JEPA models outperform image-based pretraining on
action localization.
On Kinetics-400, we find image models to perform well;
e.g., while DINOv2 (Oquab et al., 2023) previously re-
ported 78.4% on K400 with a linear probe, we improve
the frozen evaluation of the g/14 model to 83.4% by
using an attentive probe. In this case, our H/16 model
achieves 82.0% top-1 accuracy. It is worth noting that
the label for many Kinetics videos can be inferred using
appearance-based cues, without requiring an understand-
ing of motion (Sevilla-Lara et al., 2021).
The V-JEPA models narrow the gap with image models
on image classification tasks. In particular, V-JEPA
achieves a score of 77.4% on ImageNet using a one-
Table 7 Low-Shot Frozen Evaluation. Comparing V-JEPA to other video models in frozen evaluation on Kinetics-400 and
Something-Something-v2 as we vary the percentage of labeled examples from each dataset available for training the attentive
probe. We train the probes in several low-shot settings: using either 5% of the train set, 10%, or 50%, and take 3 random
splits in each setting to obtain more robust metrics, resulting in 9 different evaluation experiments for each model. We report
the mean performances and standard deviation using the K400 and SSv2 validation sets. V-JEPA is more label-efficient than
other models; specifically, decreasing the available number of labeled examples from each class increases the performance gap
between V-JEPA and the baselines.
                                                 Frozen Evaluation
                              K400 (16×8×3)                                               SSv2 (16×2×3)
                              5%                  10%                 50%                 5%                  10%                 50%
Method       Arch.            (∼29 samples/class) (∼58 samples/class) (∼287 samples/class)(∼48 samples/class) (∼96 samples/class) (∼440 samples/class)
MVD          ViT-L/16         62.6 ± 0.2          68.3 ± 0.2          77.2 ± 0.3          42.9 ± 0.8          49.5 ± 0.6          61.0 ± 0.2
VideoMAE     ViT-H/16         62.3 ± 0.3          68.5 ± 0.2          78.2 ± 0.1          41.4 ± 0.8          48.1 ± 0.2          60.5 ± 0.4
VideoMAEv2   ViT-g/14         37.0 ± 0.3          48.8 ± 0.4          67.8 ± 0.1          28.0 ± 1.0          37.3 ± 0.3          54.0 ± 0.3
V-JEPA       ViT-H/16         67.0 ± 0.2          72.1 ± 0.1          80.2 ± 0.2          51.9 ± 0.3          57.5 ± 0.4          67.3 ± 0.2
V-JEPA       ViT-H/16384      68.2 ± 0.2          72.8 ± 0.2          80.6 ± 0.2          54.0 ± 0.2          59.3 ± 0.5          67.9 ± 0.2
layer attentive probe, which can be further improved to
77.9% using a two-layer attentive probe. More generally,
we hypothesize that the datasets used to train V-JEPA and other video models are too constrained and lack the visual diversity of the internet-scale pretraining data used by the image models; as such, there is value in focusing
future work on building diverse publicly available video
datasets.
5.3 Label-efficiency
We examine the label-efficiency of V-JEPA compared to
other self-supervised video models by measuring the abil-
ity of the pretrained backbones to adapt to downstream
tasks with few labels. Specifically, we investigate the
performance of the frozen models on Kinetics-400 and
Something-Something-v2 as we vary the percentage of
labeled examples from each dataset available for training
the attentive probe. We train the probes in several low-
shot settings: using either 5% of the train set, 10%, or
50%, and take 3 random splits in each setting to obtain
more robust metrics, resulting in 9 different evaluation
experiments for each model. Table 7 reports the mean
performances and standard deviation using the K400
and SSv2 validation sets.
We find V-JEPA to be more label-efficient than other
self-supervised video models: decreasing the available
number of labeled examples for training the attentive
probe results in an increase in the performance gap
between V-JEPA and the other models. In particular,
the performance of the largest V-JEPA model on K400 drops by 12% to 68.2% top-1 when we reduce the number of labeled examples by a factor of 10× (from roughly
287 examples per class to 29 examples per class). By
contrast, VideoMAEv2 drops by 30% to 37.0% top-1,
VideoMAE drops by 15.9% to 62.3% top-1, and MVD
drops by 14.6% to 62.6% top-1.
Similar observations hold on SSv2. The performance
of the largest V-JEPA model on SSv2 drops by 13.9% to 54.0% top-1 when we reduce the number of labeled examples by a factor of 10× (from roughly 440 examples
per class to 48 examples per class). By contrast, Video-
MAEv2 drops by 26% to 28.0% top-1, VideoMAE drops
by 19.1% to 41.4% top-1, and MVD drops by 18.1% to
42.9% top-1.
6 Evaluating the Predictor
Next, we seek to qualitatively inspect the V-JEPA models. Recall that the predictor network in V-JEPA predicts the representations of a masked spatio-temporal region y from a visible region x, given the positional information
of the masked regions (see Section 3). To qualitatively in-
vestigate the grounding of the feature-space predictions,
we freeze the pretrained encoder and predictor networks
and train a conditional diffusion decoder to map the
V-JEPA predictions to interpretable pixels. Notably, the
decoder is only fed the representations predicted for the
missing regions of the video, and does not have access
to the unmasked regions of the video (see Figure 6a).
Given a masked video, we use the V-JEPA pretrained
models to predict the representations of the missing
regions, and then use the decoder to project the rep-
resentations to pixel space. Figure 6b shows decoder
outputs for various random seeds. Qualities that are
common across samples represent information that is
contained in the predictor representation.
Figure 6b shows that the V-JEPA feature predictions are indeed grounded, and exhibit spatio-temporal consistency with the unmasked regions of the video. Specifically, the samples in Figure 6b show that the V-JEPA predictor correctly captures positional uncertainty and
produces a variety of visual objects at various locations
with consistent motion. Some of the samples also demon-
strate an understanding of object-permanence, as the
visual objects remain consistent after partial occlusion.
[Figure 6a diagram: a masked clip is processed by the frozen x-encoder and predictor; a decoder maps the predicted representations to pixels.]
(a) Visualization Methodology. We train a conditional diffusion model to decode the V-JEPA feature-space predictions to interpretable pixels; the pretrained V-JEPA encoder and predictor networks are kept frozen in this process. The decoder is only fed the representations predicted for the missing regions of the video, and does not have access to the unmasked regions of the video.
(b) Visualizations. First Row: Masked videos used as input to the V-JEPA models (a pretrained ViT-H/16 encoder and its corresponding predictor network). Other rows: Bounding boxes contain various samples from the decoder overlayed on the original video. V-JEPA is not a generative model and the decoder does not have access to the context (first row), so we do not expect samples to exactly match the input. This experiment qualitatively illustrates what information is encoded and predicted by V-JEPA. In particular, characteristics that are common across samples represent information that is encoded in the V-JEPA predictions. V-JEPA generates predictions that are spatially and temporally coherent with the unmasked regions of the video. The predictions also capture consistent motion through time.
Figure 6 Qualitative Analysis. Offline visualizations of the V-JEPA feature-space predictions.
7 Conclusion
In this work, we explored the effectiveness of feature prediction as a stand-alone objective for unsupervised learning from video and introduced V-JEPA, a collection of vision models trained solely using a self-supervised feature prediction objective. The V-JEPA models demonstrate the ability to solve various downstream image and video tasks without adaptation of the model parameters, and outperform previous video representation learning approaches in frozen evaluation on action recognition, spatio-temporal action detection, and image classification tasks. Additionally, we show that pretraining V-JEPA on videos is particularly effective for solving downstream tasks requiring fine-grained motion understanding, while large-scale image models trained on internet-scale datasets fall short on such tasks. Finally, we empirically observed that V-JEPA models are label-efficient learners, and exhibit good performance on downstream tasks even when only a few labeled examples are available.
References
Hassan Akbari, Liangzhe Yuan, Rui Qian, Wei-Hong Chuang,
Shih-Fu Chang, Yin Cui, and Boqing Gong. Vatt: Trans-
formers for multimodal self-supervised learning from raw
video, audio and text.Advances in Neural Information
Processing Systems, 34:24206–24221, 2021.
Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun,
Mario Lucic, and Cordelia Schmid. Vivit: A video vision
transformer. In Proceedings of the IEEE international
conference on computer vision, 2021.
Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr
Bojanowski, Florian Bordes, Pascal Vincent, Armand
Joulin, Michael Rabbat, and Nicolas Ballas. Masked
siamese networks for label-efficient learning.arXiv preprint
arXiv:2204.07141, 2022.
Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bo-
janowski, Pascal Vincent, Michael Rabbat, Yann LeCun,
and Nicolas Ballas. Self-supervised learning from images
with a joint-embedding predictive architecture. InProceed-
ings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 15619–15629, 2023.
Alexei Baevski, Arun Babu, Wei-Ning Hsu, and Michael
Auli. Efficient self-supervised learning with contextualized
target representations for vision, speech and language.
arXiv preprint arXiv:2212.07525, 2022a.
Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu,
Jiatao Gu, and Michael Auli. Data2vec: A general frame-
work for self-supervised learning in speech, vision and
language. arXiv preprint arXiv:2202.03555, 2022b.
Hangbo Bao, Li Dong, and Furu Wei. Beit: Bert pre-training
of image transformers.arXiv preprint arXiv:2106.08254,
2021.
Pietro Berkes and Laurenz Wiskott. Slow feature analysis
yields a rich repertoire of complex cell properties.Journal
of vision, 5(6):9–9, 2005.
Mathilde Caron, Ishan Misra, Julien Mairal, Priya Goyal,
Piotr Bojanowski, and Armand Joulin. Unsupervised learn-
ing of visual features by contrasting cluster assignments.
arXiv preprint arXiv:2006.09882, 2020.
Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jé-
gou, Julien Mairal, Piotr Bojanowski, and Armand Joulin.
Emerging properties in self-supervised vision transformers.
arXiv preprint arXiv:2104.14294, 2021.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Ge-
offrey Hinton. A simple framework for contrastive learning
of visual representations. arXiv preprint arXiv:2002.05709, 2020.
Xiaokang Chen, Mingyu Ding, Xiaodi Wang, Ying Xin,
Shentong Mo, Yunhao Wang, Shumin Han, Ping Luo,
Gang Zeng, and Jingdong Wang. Context autoencoder
for self-supervised representation learning.arXiv preprint
arXiv:2202.03026, 2022.
Xinlei Chen, Saining Xie, and Kaiming He. An empirical
study of training self-supervised vision transformers.arXiv
preprint arXiv:2104.02057, 2021.
Mehdi Cherti, Romain Beaumont, Ross Wightman, Mitchell
Wortsman, Gabriel Ilharco, Cade Gordon, Christoph
Schuhmann, Ludwig Schmidt, and Jenia Jitsev. Repro-
ducible scaling laws for contrastive language-image learn-
ing. InProceedings of the IEEE/CVF Conference on Com-
puter Vision and Pattern Recognition, pages 2818–2829,
2023.
Ekin Dogus Cubuk, Barret Zoph, Dandelion Mané, Vijay
Vasudevan, and Quoc V. Le. Autoaugment: Learning aug-
mentation policies from data. InProceedings of the IEEE
Conference on Computer Vision and Pattern Recognition,
2019.
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk
Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa
Dehghani, Matthias Minderer, Georg Heigold, Sylvain
Gelly, et al. An image is worth 16x16 words: Trans-
formers for image recognition at scale. arXiv preprint
arXiv:2010.11929, 2020.
Christoph Feichtenhofer, Haoqi Fan, Bo Xiong, Ross Girshick,
and Kaiming He. A large-scale study on unsupervised spa-
tiotemporal representation learning. Proceedings of the
IEEE conference on computer vision and pattern recogni-
tion, 2021.
Christoph Feichtenhofer, Yanghao Li, Kaiming He, et al.
Masked autoencoders as spatiotemporal learners.Advances
in neural information processing systems, 35:35946–35958,
2022.
David J Field. What is the goal of sensory coding?Neural
computation, 6(4):559–601, 1994.
Spyros Gidaris, Andrei Bursuc, Nikos Komodakis, Patrick
Pérez, and Matthieu Cord. Learning representations by
predicting bags of visual words. In Proceedings of the
IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pages 6928–6938, 2020.
Rohit Girdhar and Kristen Grauman. Anticipative video
transformer. In Proceedings of the IEEE/CVF interna-
tional conference on computer vision, pages 13505–13515,
2021.
Rohit Girdhar, Alaaeldin El-Nouby, Mannat Singh,
Kalyan Vasudev Alwala, Armand Joulin, and Ishan Misra.
Omnimae: Single model masked pretraining on images
and videos. In Proceedings of the IEEE/CVF Confer-
ence on Computer Vision and Pattern Recognition, pages
10406–10417, 2023.
Ross Goroshin, Joan Bruna, Jonathan Tompson, David Eigen,
and Yann LeCun. Unsupervised learning of spatiotempo-
rally coherent metrics. InProceedings of the IEEE inter-
national conference on computer vision, pages 4086–4093,
2015.
Raghav Goyal, Samira Ebrahimi Kahou, Vincent Michal-
ski, Joanna Materzynska, Susanne Westphal, Heuna Kim,
Valentin Haenel, Ingo Fruend, Peter Yianilos, Moritz
Mueller-Freitag, et al. The "something something" video
database for learning and evaluating visual common sense.
In Proceedings of the IEEE international conference on
computer vision, pages 5842–5850, 2017.
Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin
Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Do-
ersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Moham-
mad Gheshlaghi Azar, et al. Bootstrap your own latent: A
new approach to self-supervised learning.arXiv preprint
arXiv:2006.07733, 2020.
Chunhui Gu, Chen Sun, David A Ross, Carl Vondrick, Caro-
line Pantofaru, Yeqing Li, Sudheendra Vijayanarasimhan,
George Toderici, Susanna Ricco, Rahul Sukthankar, et al.
Ava: A video dataset of spatio-temporally localized atomic
visual actions. InProceedings of the IEEE conference on
computer vision and pattern recognition, pages 6047–6056,
2018.
Agrim Gupta, Jiajun Wu, Jia Deng, and Li Fei-Fei. Siamese
masked autoencoders. arXiv preprint arXiv:2305.14344,
2023.
Michael U Gutmann and Aapo Hyvärinen. Noise-contrastive
estimation of unnormalized statistical models, with appli-
cations to natural image statistics.Journal of machine
learning research, 13(2), 2012.
Tengda Han, Weidi Xie, and Andrew Zisserman. Video
representation learning by dense predictive coding. In
Proceedings of the IEEE/CVF International Conference
on Computer Vision Workshops, pages 0–0, 2019.
Tengda Han, Weidi Xie, and Andrew Zisserman. Memory-
augmented dense predictive coding for video representation
learning. InEuropean conference on computer vision, pages
312–329. Springer, 2020.
Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dol-
lár, and Ross Girshick. Masked autoencoders are scalable
vision learners. arXiv preprint arXiv:2111.06377, 2021.
Geoffrey E Hinton. Connectionist learning procedures. In
Machine learning, pages 555–610. Elsevier, 1989.
Tarun Kalluri, Deepak Pathak, Manmohan Chandraker, and
Du Tran. Flavr: Flow-agnostic video representations for
fast frame interpolation. InProceedings of the IEEE/CVF
Winter Conference on Applications of Computer Vision,
pages 2071–2082, 2023.
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B
Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec
Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for
neural language models.arXiv preprint arXiv:2001.08361,
2020.
Will Kay, Joao Carreira, Karen Simonyan, Brian Zhang,
Chloe Hillier, Sudheendra Vijayanarasimhan, Fabio Vi-
ola, Tim Green, Trevor Back, Paul Natsev, et al. The
kinetics human action video dataset. arXiv preprint
arXiv:1705.06950, 2017.
Christoph Kayser, Wolfgang Einhäuser, Olaf Dümmer, Peter
König, and Konrad Körding. Extracting slow subspaces
from natural videos leads to complex cells. InArtificial
Neural Networks—ICANN 2001: International Conference
Vienna, Austria, August 21–25, 2001 Proceedings 11, pages
1075–1080. Springer, 2001.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich.
Learning representations for automatic colorization. 2016.
Gustav Larsson, Michael Maire, and Gregory Shakhnarovich.
Colorization as a proxy task for visual understanding. 2017.
Yann LeCun. A path towards autonomous machine intelli-
gence, version 0.9.2, 2022-06-27, 2022.
Hsin-Ying Lee, Jia-Bin Huang, Maneesh Singh, and Ming-
Hsuan Yang. Unsupervised representation learning by
sorting sequences. InProceedings of the IEEE international
conference on computer vision, pages 667–676, 2017.
Kunchang Li, Yali Wang, Peng Gao, Guanglu Song, Yu Liu,
Hongsheng Li, and Yu Qiao. Uniformer: Unified trans-
former for efficient spatiotemporal representation learning.
arXiv preprint arXiv:2201.04676, 2022.
Ilya Loshchilov and Frank Hutter. Decoupled weight decay
regularization. arXiv preprint arXiv:1711.05101, 2017.
Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac,
Makarand Tapaswi, Ivan Laptev, and Josef Sivic.
Howto100m: Learning a text-video embedding by watch-
ing hundred million narrated video clips. InProceedings
of the IEEE/CVF international conference on computer
vision, pages 2630–2640, 2019.
Mehdi Noroozi and Paolo Favaro. Unsupervised learning of
visual representations by solving jigsaw puzzles. InEuro-
pean conference on computer vision, pages 69–84. Springer,
2016.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. Represen-
tation learning with contrastive predictive coding.arXiv
preprint arXiv:1807.03748, 2018.
Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy
Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez,
Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al.
Dinov2: Learning robust visual features without supervi-
sion. arXiv preprint arXiv:2304.07193, 2023.
Nikhil Parthasarathy, SM Eslami, João Carreira, and
Olivier J Hénaff. Self-supervised video pretraining
yields strong image representations. arXiv preprint
arXiv:2210.06433, 2022.
Deepak Pathak, Philipp Krahenbuhl, Jeff Donahue, Trevor
Darrell, and Alexei A Efros. Context encoders: Feature
learning by inpainting. InProceedings of the IEEE con-
ference on computer vision and pattern recognition, pages
2536–2544, 2016.
Silvia L Pintea, Jan C van Gemert, and Arnold WM Smeul-
ders. Déja vu: Motion prediction in static images. In
Computer Vision–ECCV 2014: 13th European Conference,
Zurich, Switzerland, September 6-12, 2014, Proceedings,
Part III 13, pages 172–187. Springer, 2014.
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya
Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry,
Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learn-
ing transferable visual models from natural language su-
pervision. InInternational conference on machine learning,
pages 8748–8763. PMLR, 2021.
Rajesh PN Rao and Dana H Ballard. Predictive coding
in the visual cortex: a functional interpretation of some
extra-classical receptive-field effects.Nature neuroscience,
2(1):79–87, 1999.
Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, San-
jeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy,
Aditya Khosla, Michael Bernstein, Alexander C. Berg, and
Li Fei-Fei. Imagenet large scale visual recognition chal-
lenge. International Journal of Computer Vision, 115(3):
211–252, 2015.
Chaitanya Ryali, Yuan-Ting Hu, Daniel Bolya, Chen Wei,
Haoqi Fan, Po-Yao Huang, Vaibhav Aggarwal, Arka-
bandhu Chowdhury, Omid Poursaeed, Judy Hoffman, et al.
Hiera: A hierarchical vision transformer without the bells-
and-whistles. arXiv preprint arXiv:2306.00989, 2023.
Laura Sevilla-Lara, Shengxin Zha, Zhicheng Yan, Vedanuj
Goswami, Matt Feiszli, and Lorenzo Torresani. Only time
can tell: Discovering temporal data for temporal modeling.
In Proceedings of the IEEE/CVF winter conference on
applications of computer vision, pages 535–544, 2021.
Elizabeth S Spelke, Peter Vishton, and Claes Von Hofsten.
Object perception, object-directed action, and physical
knowledge in infancy. 1995.
Nitish Srivastava, Elman Mansimov, and Ruslan Salakhudi-
nov. Unsupervised learning of video representations using
lstms. In International conference on machine learning,
pages 843–852. PMLR, 2015.
Chen Sun, Austin Myers, Carl Vondrick, Kevin Murphy, and
Cordelia Schmid. Videobert: A joint model for video and
language representation learning. In Proceedings of the
IEEE/CVF international conference on computer vision,
pages 7464–7473, 2019.
Dídac Surís, Ruoshi Liu, and Carl Vondrick. Learning the pre-
dictability of the future. InProceedings of the IEEE/CVF
Conference on Computer Vision and Pattern Recognition,
pages 12607–12617, 2021.
Reuben Tan, Matthias De Lange, Michael Iuzzolino, Bryan A
Plummer, Kate Saenko, Karl Ridgeway, and Lorenzo Tor-
resani. Multiscale video pretraining for long-term activity
forecasting. arXiv preprint arXiv:2307.12854, 2023.
Antti Tarvainen and Harri Valpola. Mean teachers are bet-
ter role models: Weight-averaged consistency targets im-
prove semi-supervised deep learning results.arXiv preprint
arXiv:1703.01780, 2017.
Yuandong Tian, Xinlei Chen, and Surya Ganguli. Under-
standing self-supervised learning dynamics without con-
trastive pairs. In International Conference on Machine
Learning, pages 10268–10278. PMLR, 2021.
Zhan Tong, Yibing Song, Jue Wang, and Limin Wang. Video-
mae: Masked autoencoders are data-efficient learners for
self-supervised video pre-training. Advances in neural
information processing systems, 35:10078–10093, 2022.
Grant Van Horn, Oisin Mac Aodha, Yang Song, Yin Cui,
Chen Sun, Alex Shepard, Hartwig Adam, Pietro Perona,
and Serge Belongie. The inaturalist species classification
and detection dataset. InProceedings of the IEEE con-
ference on computer vision and pattern recognition, pages
8769–8778, 2018.
Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-
Antoine Manzagol. Extracting and composing robust fea-
tures with denoising autoencoders. InProceedings of the
25th International Conference on Machine Learning, ICML
’08, page 1096–1103, 2008.
Pascal Vincent, Hugo Larochelle, Isabelle Lajoie, Yoshua Ben-
gio, Pierre-Antoine Manzagol, and Léon Bottou. Stacked
denoising autoencoders: Learning useful representations
in a deep network with a local denoising criterion.Journal
of machine learning research, 11(12), 2010.
Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba.
Anticipating visual representations from unlabeled video.
In Proceedings of the IEEE conference on computer vision
and pattern recognition, pages 98–106, 2016.
Fei Wang, Ping Li, and Arnd Christian Konig. Learning
a bi-stochastic data similarity matrix. In 2010 IEEE
International Conference on Data Mining, pages 551–560.
IEEE, 2010.
Limin Wang, Bingkun Huang, Zhiyu Zhao, Zhan Tong, Yinan
He, Yi Wang, Yali Wang, and Yu Qiao. Videomae v2:
Scaling video masked autoencoders with dual masking. In
Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pages 14549–14560, 2023a.
Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen,
Xiyang Dai, Mengchen Liu, Lu Yuan, and Yu-Gang Jiang.
Masked video distillation: Rethinking masked feature mod-
eling for self-supervised video representation learning. In
Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pages 6312–6322, 2023b.
Yi Wang, Kunchang Li, Yizhuo Li, Yinan He, Bingkun Huang,
Zhiyu Zhao, Hongjie Zhang, Jilan Xu, Yi Liu, Zun Wang,
et al. Internvideo: General video foundation models via
generative and discriminative learning. arXiv preprint
arXiv:2212.03191, 2022.
Laurenz Wiskott and Terrence J Sejnowski. Slow feature
analysis: Unsupervised learning of invariances. Neural
computation, 14(4):715–770, 2002.
Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin.
Unsupervised feature learning via non-parametric instance
discrimination. In Proceedings of the IEEE conference on
computer vision and pattern recognition, pages 3733–3742,
2018.
Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin
Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A sim-
ple framework for masked image modeling.arXiv preprint
arXiv:2111.09886, 2021.
Dejing Xu, Jun Xiao, Zhou Zhao, Jian Shao, Di Xie, and
Yueting Zhuang. Self-supervised spatiotemporal learn-
ing via video clip order prediction. InProceedings of the
IEEE/CVF Conference on Computer Vision and Pattern
Recognition, pages 10334–10343, 2019.
Hu Xu, Gargi Ghosh, Po-Yao Huang, Dmytro Okhonko,
Armen Aghajanyan, Florian Metze, Luke Zettlemoyer,
and Christoph Feichtenhofer. Videoclip: Contrastive pre-
training for zero-shot video-text understanding. arXiv
preprint arXiv:2109.14084, 2021.
Jiahui Yu, Zirui Wang, Vijay Vasudevan, Legg Yeung, Mo-
jtaba Seyedhosseini, and Yonghui Wu. Coca: Contrastive
captioners are image-text foundation models. arXiv
preprint arXiv:2205.01917, 2022.
Liangzhe Yuan, Nitesh Bharadwaj Gundavarapu, Long Zhao,
Hao Zhou, Yin Cui, Lu Jiang, Xuan Yang, Menglin Jia,
Tobias Weyand, Luke Friedman, et al. Videoglue: Video
general understanding evaluation of foundation models.
arXiv preprint arXiv:2307.03166, 2023.
Rowan Zellers, Jiasen Lu, Ximing Lu, Youngjae Yu, Yanpeng
Zhao, Mohammadreza Salehi, Aditya Kusupati, Jack Hes-
sel, Ali Farhadi, and Yejin Choi. Merlot reserve: Neural
script knowledge through vision and language and sound.
In Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, pages 16375–16387, 2022.
Bolei Zhou, Agata Lapedriza, Jianxiong Xiao, Antonio Tor-
ralba, and Aude Oliva. Learning deep features for scene
recognition using places database. In Z. Ghahramani,
M. Welling, C. Cortes, N. Lawrence, and K.Q. Wein-
berger, editors, Advances in Neural Information Pro-
cessing Systems, volume 27. Curran Associates, Inc.,
2014. https://proceedings.neurips.cc/paper/2014/file/
3fe94a002317b5f9259f82690aeea4cd-Paper.pdf.
Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie,
Alan Yuille, and Tao Kong. Ibot: Image bert pre-training
with online tokenizer.arXiv preprint arXiv:2111.07832,
2021.
Will Zou, Shenghuo Zhu, Kai Yu, and Andrew Ng. Deep
learning of invariant features via simulated fixations in
video. Advances in neural information processing systems,
25, 2012.
Appendix
A Extended Related Works
We first review approaches for learning visual perception from static images before discussing strategies for learning
from video.
Weakly-Supervised Learning from Static Images
One family of approaches for learning visual perception from static images trains a visual encoder to predict the
representations of text captions often found accompanying images from the Web, as in CLIP (Radford et al., 2021) or
CoCa (Yu et al., 2022). The largest open source CLIP model to date, numbering 2B parameters and trained on over
2B web-scraped images (Cherti et al., 2023), demonstrates impressive performance on a wide range of downstream
image and video tasks. Notably, this is achieved using only the light-weight adaptation of task-specific heads, also
referred to as frozen-evaluation, and does not require expensive end-to-end fine-tuning of the pretrained model.
Self-Supervised Learning from Static Images
Other approaches for learning from static images leverage unsupervised objectives. Initial works on self-supervised
approaches are based on sparse coding or hand-crafted pretext tasks, such as colorization (Larsson et al., 2016, 2017),
rotation prediction (Gidaris et al., 2020), and jigsaws (Noroozi and Favaro, 2016). More recent approaches leverage
invariance-based objectives by training a visual encoder to be invariant to hand-crafted image transformations (Wu
et al., 2018; Chen et al., 2020).
Another family of methods learn representations using denoising autoencoders (Vincent et al., 2008); image inpainting
is one popular instantiation of this idea (Pathak et al., 2016). More recently, masked autoencoders (He et al., 2021)
train an encoder-decoder transformer to predict missing pixels of a masked image. Follow-up work addresses the
indeterminism of pixel reconstruction by exploring instantiations of masked image modeling in latent space (Baevski
et al., 2022b; Assran et al., 2023; Baevski et al., 2022a). These approaches can be seen as applications of the
predictive feature principle in the image modality.
There are also various methods that combine both masked image modeling and invariance criteria to learn visual
representations from static images, such as iBOT (Zhou et al., 2021) and DINOv2 (Zhou et al., 2021; Oquab et al.,
2023); the latter is currently the most competitive instantiation of self-supervised learning with static images, scaled
to a model with over 1.1B parameters trained on a curated dataset of 142M images.
Weakly-Supervised Learning from Videos
One family of approaches for learning visual perception from videos relies on weakly-supervised guidance from closed
captioning, often computed from an ASR transcription of audio data accompanying internet videos. For instance,
VideoBERT (Sun et al., 2019; Xu et al., 2021) trains a video encoder to predict masked spans in the textual closed
captions. Similarly, VideoCLIP (Xu et al., 2021) trains a video encoder to predict the representation of video
captions computed by a text encoder. Follow-up work such as MERLOT (Zellers et al., 2022), VATT (Akbari et al.,
2021), and InternVideo (Wang et al., 2022) extended VideoCLIP by incorporating additional unsupervised objectives.
Self-Supervised Learning from Videos
Similar to unsupervised learning from images, a family of unsupervised video representation learning approaches
enforces a spatio-temporal representation of a video clip to be invariant to hand-crafted spatio-temporal data
augmentations (Parthasarathy et al., 2022). However, the temporal ordering of visual information in video can itself provide implicit supervision, and this is the key insight leveraged by many works on unsupervised video learning. Towards leveraging temporal information as supervision, some approaches train a visual encoder by predicting the temporal ordering of frames (Xu et al., 2019; Lee et al., 2017). Other approaches seek to predict low-level motion vectors computed from optical flow (Pintea et al., 2014), or to predict missing pixels in video frames, using either a frame-interpolation objective (Kalluri et al., 2023) or a denoising autoencoder (Tong
et al., 2022; Feichtenhofer et al., 2022; Wang et al., 2023a).
B Extended Description of V-JEPA
In this section, we provide an in-depth description of our approachV-JEPA that is illustrated in Figure 3.
Input. Unless stated otherwise, during pretraining we always randomly sample a clip of 16 frames from each input video with a temporal stride of 4 between sampled frames. An input video clip therefore covers 64 frames in total, or roughly 2 seconds of a given video running at 30 frames per second. We then resize the video's spatial dimensions to 224 × 224, resulting in an overall shape of 16 × 224 × 224 × 3 for the entire clip. Since ViT networks process a 1D sequence of tokens, we must convert an input video clip into a 1D token sequence. To do so, we apply a 3D convolution comprising d filters of size 2 × 16 × 16 with a temporal stride of 2 and a spatial stride of 16, resulting in a tensor of shape 8 × 14 × 14 × d. Next we add absolute 3D sin-cos positional embeddings to the spatio-temporal feature map and flatten it, resulting in a 1D token sequence of shape 1568 × d. This process is demonstrated in Figure 7.
[Figure 7 diagram: 16 video frames at resolution 224 × 224 (16 × 224 × 224 × 3) → 3D convolution with d filters of size 2 × 16 × 16 → 8 × 14 × 14 × d feature map → add 3D sin-cos absolute position embeddings → flatten to a 1568 × d token sequence.]
Figure 7 V-JEPA training operates on a video clip flattened into a sequence of tokens. To convert a video clip of size 16 × 224 × 224 × 3 into a 1D token sequence, we apply a 3D convolution comprising d filters of size 2 × 16 × 16 with a temporal stride of 2 and a spatial stride of 16, resulting in a tensor of shape 8 × 14 × 14 × d. Next we add absolute 3D sin-cos positional embeddings to the spatio-temporal feature map and flatten it, resulting in a 1D token sequence of shape 1568 × d.
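A minimal sketch (ours) of this tokenization step is given below; the positional embedding is replaced by a placeholder tensor, and the embedding dimension is an illustrative choice.

import torch
import torch.nn as nn

B, d = 2, 1024                                    # batch size, embedding dim (e.g. ViT-L)
clip = torch.randn(B, 3, 16, 224, 224)            # [B, C, T, H, W]

# 3D convolution with d filters of size 2x16x16, temporal stride 2, spatial stride 16.
patch_embed = nn.Conv3d(3, d, kernel_size=(2, 16, 16), stride=(2, 16, 16))
tokens = patch_embed(clip)                        # [B, d, 8, 14, 14]
tokens = tokens.flatten(2).transpose(1, 2)        # [B, 1568, d] token sequence
pos_embed = torch.zeros(1, 1568, d)               # stand-in for 3D sin-cos embeddings
tokens = tokens + pos_embed
print(tokens.shape)                               # torch.Size([2, 1568, 1024])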
V-JEPA. We sample both a video clip and a video mask in each iteration. We denote a video clip represented as a 1D token sequence of length L = 1568 by x_L = (x_1, . . . , x_L). Similarly, given a mask of M < L patches, leaving N = L − M patches unmasked, we denote the indices of masked patches by (i_1, . . . , i_M) and its complement (the indices of unmasked patches) by (j_1, . . . , j_N).
Computing the x-representations. To compute the V-JEPA loss, we first produce the x-representations by masking the video clip and feeding it into the x-encoder; we denote the masked video by x_N = (x_{j_1}, . . . , x_{j_N}). Applying the x-encoder Eθ(·) to the masked clip gives a sequence of patch representations, denoted as z_N = Eθ(x_N) = (z_{j_1}, . . . , z_{j_N}).
Predicting the target. Next, the V-JEPA predictor network Pϕ(·, ·) takes as input the tokens produced by the x-encoder and predicts the missing regions in the video clip, which are specified by a set of learnable mask tokens. Specifically, the mask tokens are parameterized as the sum of a shared learnable vector and an absolute 3D sin-cos positional embedding, denoted by m_M = (m_{i_1}, . . . , m_{i_M}). The output of the predictor is thus given by ŝ_M = Pϕ(z_N, m_M) = (ŝ_{i_1}, . . . , ŝ_{i_M}), corresponding to a d-dimensional output for each of the M masked patches.
Computing the y-representations. Finally, to compute the prediction targets, the entire unmasked video clip is processed by the y-encoder to obtain a set of target representations, denoted by s_L = Eθ̄(x_L) = (s_1, . . . , s_L). The V-JEPA loss is now computed as

Loss = (1/M) ∑_{k ∈ (i_1, . . . , i_M)} ∥ŝ_k − s_k∥1,    (2)

which is simply the average L1 distance between the output of the predictor and the y-encoder. We then compute a gradient update with respect to the parameters of the x-encoder, θ, and the predictor, ϕ, and subsequently update the parameters of the y-encoder as an exponential moving average of the context-encoder weights (Polyak average).
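The following training-step sketch (ours) ties together the loss in equation (2) and the EMA update; the encoder, predictor, and mask index tensors are assumed to exist with the shapes noted in the comments, and the y-encoder is assumed to be initialized as a copy of the x-encoder.

import torch

def vjepa_step(x_encoder, y_encoder, predictor, optimizer, clip_tokens,
               unmasked_idx, masked_idx, masked_pos_embed, momentum):
    # x-representations: encode only the visible tokens.
    z = x_encoder(clip_tokens[:, unmasked_idx])                 # [B, N, d]
    # Predict representations of the masked tokens.
    s_hat = predictor(z, masked_pos_embed)                      # [B, M, d]
    # y-representations: encode the full clip with no gradient (stop-grad).
    with torch.no_grad():
        s = y_encoder(clip_tokens)[:, masked_idx]               # [B, M, d]

    loss = (s_hat - s).abs().mean()                             # average L1, equation (2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    # EMA (Polyak) update of the y-encoder from the x-encoder weights.
    with torch.no_grad():
        for p_y, p_x in zip(y_encoder.parameters(), x_encoder.parameters()):
            p_y.mul_(momentum).add_(p_x, alpha=1.0 - momentum)
    return loss.item()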
Table 8 pretraining hyper-parameters for V-JEPA.
Hyper-parameter ViT-L/16 224 ViT-H/16224 ViT-H/16384
data
datasets VideoMix2M VideoMix2M VideoMix2M
resolution 224 224 384
num_frames 16 16 16
temporal_stride 4 4 4
horizontal_flip true true true
random_resize_scale (0.3, 1.0) (0.3, 1.0) (0.3, 1.0)
random_resize_aspect_ratio (0.75, 1.35) (0.75, 1.35) (0.75, 1.35)
masking
block_aspect_ratio (0.75, 1.5) (0.75, 1.5) (0.75, 1.5)
shortrange_mask_num_blocks 8 8 8
shortrange_mask_spatial_scale 0.15 0.15 0.15
longrange_mask_num_blocks 2 2 2
longrange_mask_spatial_scale 0.7 0.7 0.7
optimization
batch_size 3072 3072 2400
total_number_of_iterations 90000 90000 90000
warmup_iterations 12000 12000 12000
lr 6.25e-4 6.25e-4 6.25e-4
start_lr 2e-4 2e-4 2e-4
final_lr 1e-6 1e-6 1e-6
start_momentum 0.998 0.998 0.998
final_momentum 1.0 1.0 1.0
start_weight_decay 0.04 0.04 0.04
final_weight_decay 0.4 0.4 0.4
scheduler_scale_factor 1.25 1.25 1.25
architecture
patch_size 16 16 16
tubelet_size 2 2 2
pred_depth 12 12 12
pred_embed_dim 384 384 384
hardware
dtype bfloat16 bfloat16 bfloat16
accelerator A100 80G A100 80G A100 80G
Multi-Mask Prediction. To increase the efficiency of V-JEPA, we use a multi-masking strategy (Caron et al., 2020; Baevski et al., 2022a), which enables us to amortize the cost of the target computation. As mentioned in Section 3, for a given video clip, we sample 2 different masks, short-range and long-range. While we need to forward propagate the x-encoder and predictor separately for each mask, we only need to compute the y-representation once.
C Pretraining details
In this section, we report V-JEPA pretraining details. Table 8 summarizes the main hyperparameters used during
pretraining.
Architectures. We use Vision Transformer (Dosovitskiy et al., 2020) (ViT) architectures for the x-encoder and y-encoder. We train three V-JEPA encoders: a ViT-L/16 224, a ViT-H/16 224 and a ViT-H/16 384. All three encoders take as input a short video clip of 16 frames with a temporal stride of 4 between consecutive frames. The subscripts, 224 and 384, indicate the spatial resolution of the video clip. V-JEPA flattens the video clip into a sequence of non-overlapping spatio-temporal patches of size 16 × 16 × 2 (see Figure 7). For all three models, the predictor is designed as a narrow ViT architecture, consisting of 12 transformer blocks with an embedding dimension of 384. For simplicity, we keep the number of self-attention heads in the predictor equal to that of the backbone used for the context-encoder/target-encoder. V-JEPA is pretrained without using a [cls] token.
Optimization. We use AdamW (Loshchilov and Hutter, 2017) to optimize the x-encoder and predictor weights. The ViT-L/16 224 and ViT-H/16 224 models use a batch size of 3072, while the ViT-H/16 384 uses a batch size of 2400. Models are trained for a total of 90,000 iterations. The learning rate is linearly increased from 2 × 10−4 to 6.25 × 10−4 during the first 12,000 iterations of pretraining, and decayed to 10−6 following a cosine schedule.
Table 9 Frozen Evaluation hyper-parameters.
Hyper-parameter K400 SSv2 IN1K Place205 iNat21
data
num_clips 8 1 N.A. N.A. N.A.
num_frames 16 16 N.A. N.A. N.A.
temporal_stride 4 4 N.A. N.A. N.A.
horizontal_flip true true true true true
random_resize_scale (0.08, 1.0) (0.08, 1.0) (0.08, 1.0) (0.08, 1.0) (0.08, 1.0)
random_resize_aspect_ratio (0.75, 1.33) (0.75, 1.33) (0.75, 1.33) (0.75, 1.33) (0.75, 1.33)
auto_augment false false true true true
optimization
batch_size 256 256 1024 1024 1024
epochs 20 20 20 20 20
lr 1e-3 1e-3 1e-3 1e-3 1e-3
final_lr 0 0 0 0 0
weight_decay 0.01 0.01 0.01 0.01 0.01
Weight decay is also linearly increased from 0.04 to 0.4 throughout pretraining. The y-encoder weights are initialized identically to the x-encoder, and subsequently updated as an exponential moving average (EMA) (Tarvainen and Valpola, 2017) of the x-encoder weights using a momentum value which starts at 0.998 and is linearly increased to 1.0 during training (Caron et al., 2021; Assran et al., 2022). We scale all hyper-parameter schedules 25% beyond the actual training schedule. Specifically, the learning rate schedule, weight-decay schedule, and EMA schedule are computed assuming a training length of 112,500 iterations, even though we only train our model for 90,000 iterations. We found the last 25% of the default scheduler period to update hyper-parameters too aggressively, and simply truncating the schedulers improved performance.
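A sketch of this truncated schedule for the learning rate (the weight-decay and EMA schedules are handled analogously); the helper name and exact interpolation are assumptions, but the constants follow Table 8.

```python
import math

def lr_at(step, start_lr=2e-4, peak_lr=6.25e-4, final_lr=1e-6,
          warmup=12_000, train_iters=90_000, scale_factor=1.25):
    """Warmup + cosine learning-rate schedule stretched by scale_factor (sketch)."""
    scheduled_iters = int(train_iters * scale_factor)        # 112,500 iterations
    if step < warmup:
        # linear warmup from start_lr to peak_lr
        return start_lr + (peak_lr - start_lr) * step / warmup
    # cosine decay computed over the *stretched* horizon; training stops at
    # train_iters, so the most aggressive final 25% of the decay is never reached
    progress = (step - warmup) / (scheduled_iters - warmup)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```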
Masking. As described in Section 3, we propose a 3D Multi-Block masking strategy. We use two types of masks: short-range masks, where we take the union of 8 randomly sampled target blocks with a spatial scale of 0.15, and long-range masks, where we take the union of 2 randomly sampled target blocks with a spatial scale of 0.7. In both cases, the aspect ratio for all sampled blocks is randomly chosen in the range (0.75, 1.5).
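A rough sketch of sampling one such mask on the patch grid (14 × 14 spatial positions and 8 temporal positions for a 224-resolution, 16-frame clip); the helper is illustrative and omits details of the official implementation.

```python
import math
import random

def sample_block_mask(grid_h=14, grid_w=14, grid_t=8,
                      num_blocks=8, spatial_scale=0.15,
                      aspect_ratio=(0.75, 1.5)):
    """Sample one 3D multi-block mask on the patch grid (sketch).

    Each block covers spatial_scale of the frame with a random aspect ratio and
    is extended across the full temporal dimension; the mask is the union of
    num_blocks (possibly overlapping) blocks.
    """
    masked = set()
    for _ in range(num_blocks):
        area = spatial_scale * grid_h * grid_w
        ar = random.uniform(*aspect_ratio)
        h = max(1, min(grid_h, round(math.sqrt(area * ar))))
        w = max(1, min(grid_w, round(math.sqrt(area / ar))))
        top, left = random.randint(0, grid_h - h), random.randint(0, grid_w - w)
        for t in range(grid_t):          # 100% temporal masking ratio
            for i in range(top, top + h):
                for j in range(left, left + w):
                    masked.add(t * grid_h * grid_w + i * grid_w + j)
    return sorted(masked)

short_range_mask = sample_block_mask(num_blocks=8, spatial_scale=0.15)
long_range_mask = sample_block_mask(num_blocks=2, spatial_scale=0.7)
```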
D Evaluation details
D.1 Frozen classification
Attentive Probing. Given an input video xL, the V-JEPA target encoder Eθ(·) outputs a sequence of L tokens, Eθ(xL) = (s1, . . . , sL), where si ∈ Rd. To pool this sequence of tokens into a single feature vector, we apply a lightweight non-linear cross-attention block which replaces the self-attention operation of a transformer block with cross-attention. Specifically, the cross-attention performs the following computation:

\sum_{i=1}^{L} \frac{\exp(q^\top W_k s_i)}{\sum_j \exp(q^\top W_k s_j)} \, W_v s_i,

where Wk, Wv ∈ Rd×d are the key and value matrices, and q ∈ Rd is a learnable query token. The output of the cross-attention is then added back to the query token (residual connection), and then fed into a two-layer MLP with a single GeLU activation, followed by a LayerNorm, and finally a linear classifier. The parameters of the cross-attention block are jointly learned with those of the linear classifier for the downstream task, while the encoder parameters are kept frozen. Note that, in practice, we actually use an attentive probe with 12 heads, each of dimension 12. In Appendix E we show that baselines benefit from the attentive probing protocol.
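A single-head PyTorch sketch of this attentive probe (the paper uses 12 heads); module and parameter names are ours, and the attention follows the unscaled softmax formula above.

```python
import torch
import torch.nn as nn

class AttentiveProbe(nn.Module):
    """Cross-attention pooling + MLP head over frozen features (sketch, single head)."""

    def __init__(self, dim, num_classes):
        super().__init__()
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)  # learnable query q
        self.w_k = nn.Linear(dim, dim, bias=False)                # W_k
        self.w_v = nn.Linear(dim, dim, bias=False)                # W_v
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, s):                                         # s: (B, L, dim) frozen tokens
        # softmax(q^T W_k s_i) weights, as in the formula above (no extra scaling)
        attn = torch.softmax(self.query @ self.w_k(s).transpose(1, 2), dim=-1)  # (B, 1, L)
        pooled = attn @ self.w_v(s)                               # (B, 1, dim)
        pooled = pooled + self.query                              # residual with the query token
        pooled = self.norm(self.mlp(pooled))                      # two-layer MLP, then LayerNorm
        return self.head(pooled.squeeze(1))                       # linear classifier
```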
Optimization. For all the tasks, we use AdamW optimizer with a cosine scheduler (no warmup) that decays the learning rate from 0.001 to 0. We use a fixed weight-decay of 0.01 and apply simple data augmentations (random resized crops and horizontal flips) during training of the attentive probe, except on image tasks, where we apply AutoAugment (Dogus Cubuk et al., 2019). Table 9 reports the hyperparameters for each downstream evaluation.
Extension to multiple clips. Unless stated otherwise, our attentive probe takes 8 clips of 16 frames as input on Kinetics, and 2 clips of 16 frames on Something-Something v2, to increase the temporal coverage of the video.
Table 10 Frozen Detection hyper-parameters.
Hyper-parameter ViT-L/16 ViT-H/16
out_layers [18, 20, 22, 24] [26, 28, 30, 32]
batch_size 64 64
epochs 30 30
opt AdamW AdamW
opt_eps 0.00000001 0.00000001
momentum 0.9 0.9
weight_decay 0.05 0.05
lr 0.0001 0.0001
warmup_lr 0.000001 0.000001
min_lr 0.000001 0.000001
warmup_epochs 2 2
warmup_steps 1 1
Specifically, we first divide a video into 8 (or 2) equal-length temporal segments, and sample 1 clip at random per segment. The video encoder Eθ processes each clip separately and produces a clip-level feature map. The feature maps for each clip are then concatenated together and fed to the attentive probe. At test time, we average the predictions of 3 spatial views, following standard practice in video classification.
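A sketch of this multi-clip feature extraction; boundary handling and batching are simplified, and the function name is illustrative.

```python
import random
import torch

def multi_clip_features(video_encoder, video, num_clips=8, frames_per_clip=16, stride=4):
    """Sample one clip per temporal segment and concatenate the feature maps (sketch).

    video: (C, T, H, W) tensor of decoded frames.
    """
    T = video.shape[1]
    span = frames_per_clip * stride                      # frames covered by one clip
    seg_len = T // num_clips
    feats = []
    for k in range(num_clips):
        # random clip start within segment k (boundary handling simplified)
        start = seg_len * k + random.randint(0, max(0, seg_len - span))
        clip = video[:, start:start + span:stride]       # (C, 16, H, W)
        feats.append(video_encoder(clip.unsqueeze(0)))   # (1, L, d) clip-level feature map
    return torch.cat(feats, dim=1)                       # (1, num_clips * L, d) for the probe
```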
Application of video models to images. To evaluate the video models on image tasks, we simply duplicate input images to generate still video clips of 16 frames. We perform this duplication operation simply for convenience in the evaluation of the video models; however, we find this step to be unnecessary in general. Given a video tokenizer implemented as a 3D-conv with a temporal stride of 2, it is sufficient to simply duplicate the image into a 2-frame video clip. This would result in the same number of input tokens as that produced by a static image model with a 2D-conv tokenizer.
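A minimal sketch of this duplication step (names are ours):

```python
import torch

def image_to_clip(images, num_frames=16):
    """Duplicate a (B, C, H, W) image batch into a (B, C, T, H, W) still clip (sketch)."""
    return images.unsqueeze(2).repeat(1, 1, num_frames, 1, 1)

# with a 3D-conv tokenizer of temporal stride 2, num_frames=2 already gives the same
# token count as a 2D image tokenizer
```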
Application of image models to videos. To evaluate image models such as DINOv2 and OpenCLIP on video tasks, we simply process each frame independently with the image encoder to produce a frame-level feature map. The feature maps for each frame are then concatenated and fed to the attentive probe, just as we do with the clip-level feature maps when evaluating video models.
D.2 Frozen detection
We evaluate our model on the AVA (Gu et al., 2018) spatio-temporal localization of human actions dataset, containing
211k training and 57k validation video segments. We follow the experimental protocol of (Feichtenhofer et al., 2021),
and use precomputed masks from a pretrained Faster-RCNN adapted to videos, which uses a ResNeXt-101-FPN
backbone and is pretrained on ImageNet and COCO. We train a linear classifier on top of the frozen V-JEPA features
to classify the extracted regions of interest and report mean Average Precision (mAP) on the 60 most common
classes. Hyper-parameters are provided in Table 10. Our frozen features are obtained by concatenating the last layer
of the transformer encoder with three intermediate layers. We use a batch size of 64 and pretrain for 30 epochs with
AdamW using a learning rate of 0.0001 with 2 epochs of warmup and a weight decay of 0.05.
D.3 Finetuning
Following Tong et al. (2022), we finetune a linear layer on top of our model, using a layer-decay scheme and mixup
as the data augmentation pipeline. We provide all hyper-parameters for both K400 and SSv2 in Table 11.
E Extra Results
E.1 Frozen Evaluation.
Linear vs. Attentive probe. Table 12 shows that V-JEPA and VideoMAE benefit from using a non-linear attentive probe and multiple clips on the K400 and SSv2 downstream tasks. Additionally, Table 13 shows that attentive probing leads to better performance on average for DINOv2 and OpenCLIP models. Since attentive probing and multi-clip evaluation improve the performance of all models, we use them as our default protocol in frozen evaluation.
Table 11 Finetuning Evaluation hyper-parameters.
Hyper-parameter K400 SSv2
data
num_segments 1
num_frames 16
sampling_rate 4
resolution 224
model
model_name ViT-L/16 ViT-H/16 ViT-L/16 ViT-H/16
drop_path 0.1 0.2 0.2 0.2
head_drop_rate 0. 0. 0.5 0.5
optimization
batch_size 256 1024 256 256
epochs 35 25 15 15
opt adamw
opt_eps 0.00000001
momentum 0.9
weight_decay 0.05
lr 0.002 0.0005 0.0005 0.0005
layer_decay 0.75 0.75 0.75 0.75
warmup_lr 1e-6 1e-8 1e-6 1e-6
min_lr 1e-6 1e-5 1.5e-4 1.5e-3
warmup_epochs 5
augmentations
color_jitter 0.4
horizontal_flip True True False False
num_sample 2
aa rand-m7-n4-mstd0.5-inc1
smoothing 0.1
train_interpolation bicubic
test_num_segment 5 5 2 2
test_num_crop 3 3 3 3
erase
prob 0.25
mode pixel
count 1
split False
mixup
mixup 0.8
cutmix 1.0
mixup_prob 1.0
mixup_switch_prob 0.5
mixup_mode batch
Table 12 Linear vs. Attentive Probe Evaluation for V-JEPA and VideoMAE. We evaluate the effect of linear (Lin.) and attentive (Att.) probing when adapting V-JEPA to the K400 (16 × 5 × 3) and SSv2 (16 × 2 × 2) tasks. V-JEPA and VideoMAE benefit from using a non-linear attentive probe.
K400 SSv2
Method Arch. Lin. Att. Lin. Att.
VideoMAE ViT-L/16 52.5 77.8 41.3 61.2
V-JEPA ViT-L/16 56.7 80.8 50.1 69.5
Table 13 Linear vs. Attentive Probe Evaluation for DINOv2 and OpenCLIP. We evaluate the effect of linear (Lin.) and attentive (Att.) probing when adapting DINOv2 and OpenCLIP. Image baselines benefit from using an attentive probing strategy. Results shown in gray are reported from the linear probe evaluation in Oquab et al. (2023).
K400 SSv2 IN1K Place205 iNat21
Method Arch. Lin. Att. Lin. Att. Lin. Att. Lin. Att. Lin. Att.
DINOv2 ViT-g/14 78.4 83.4 38.3 50.0 86.5 86.2 67.5 68.4 85.7 88.8
OpenCLIP ViT-G/14 78.3 81.8 35.8 34.8 86.2 85.3 69.8 70.2 76.0 83.6
One Clip vs. Multiple Clips. We examine the impact of changing the temporal coverage of a model during downstream evaluation on K400 action classification. In Table 14, we evaluate VideoMAE and V-JEPA models using an attentive probe with access to either the feature map of 1 clip randomly sampled from the video, or the concatenated feature maps of 8 clips randomly sampled from the video. To sample 8 clips from a video, we first divide the video into 8 equal-length temporal segments, and sample 1 clip at random from each segment. A single clip corresponds to ≈ 2 seconds of a video on average, while 8 clips correspond to ≈ 16 seconds. The video encoder processes each clip separately to produce a clip-level feature map; the feature maps are then concatenated at the input to the attentive probe. Increasing the temporal coverage from 1 clip per video to 8 clips improves the performance of both V-JEPA and VideoMAE on K400 action classification. We therefore use the multi-clip attentive probing setup as our default evaluation pipeline.
E.2 Finetuning
In Table 15, we evaluate V-JEPA using finetuning (separately) on K400 and SSv2. We compare V-JEPA with VideoMAEv2 (Wang et al., 2023a), VideoMAE (Tong et al., 2022) and MVD (Wang et al., 2023b) using a ViT-L/16 or a ViT-H/16 architecture. V-JEPA obtains competitive performance using a finetuning protocol. With a ViT-H/16 architecture, V-JEPA outperforms VideoMAE by 1.2% and VideoMAEv2 by 0.3% on the SSv2 dataset, while obtaining comparable performance on K400. V-JEPA also obtains performance similar to MVD on the SSv2 dataset. The MVD model achieves the best performance across models on the K400 dataset, and is trained using the image dataset ImageNet1K, in contrast to the other methods in the table, which only use video data. Additionally, MVD requires processing significantly more samples during pretraining due to the cost of training the teacher encoder networks in a pre-pre-training step.
E.3 Sample Efficiency of pretraining
We compare the sample efficiency of pretraining various state-of-the-art image and video models. Specifically, we
look at the number of samples (image or video clips) processed by the network during pretraining, which is larger
than the size of the pretraining dataset for multi-epoch training. Notably, our results with V-JEPA are obtained
while processing an order of magnitude fewer samples than previous methods, and two orders of magnitude
fewer samples than OpenCLIP. We believe that further investment towards improving the video pretraining data
distribution could lead to substantial gains in downstream image and video tasks.
E.4 Masking Strategy
An important component of the V-JEPA pretraining strategy is the 3D clip masking strategy. In this section, we detail 26 ablation experiments exploring different masks. For all the experiments, we pretrain a ViT-B/16 on K400. Figure 8 presents a summary of these results.
Figure 8c shows the effect of changing the spatial and temporal masking ratio. Figure 8b ablates the number of sampled blocks used to construct the masks given a fixed effective masking ratio of 90%.
Table 14 Temporal Coverage on Kinetics-400. We evaluate the effect of temporal coverage on K400. We train an attentive probe on K400 using either 1 clip (≈ 2 seconds of a video) or 8 clips (≈ 16 seconds of a video). To sample N clips, we first divide a video into N equal-length temporal segments and sample one clip at random per segment. The video encoder processes each clip in parallel and all the encoder output tokens are concatenated at the input of the attentive probe. Increasing the temporal coverage from 1 clip per video to 8 clips significantly improves the performance for both our VideoMAE baseline and V-JEPA.
Method Arch. 1 Clip 8 Clips
VideoMAE ViT-L/16 69.4 77.8
V-JEPA ViT-L/16 73.7 80.9
Table 15 Finetuning results. We evaluate a V-JEPA model with the finetuning protocol on the K400 and SSv2 datasets using 16 frames per clip and multi-view fusion (5×3 or 2×3) for inference. The #Samples Seen entry corresponds to the number of video clips processed during pretraining, which is larger than the size of the pretraining dataset for multi-epoch training. We compare V-JEPA with different video self-supervised learning approaches. We report the VideoMAEv2 results without instruction-tuning for consistency with the other approaches. V-JEPA obtains competitive performance using the finetuning protocol.
Method Arch. Pretraining Data #Samples Seen K400 SSv2
(16×5×3) (16 ×2×3)
VideoMAEv1 ViT-L/16 K400 |SSv2 380M |410M 85.4 74.3
ViT-H/16 K400 |SSv2 380M |410M 86.6 74.8
VideoMAEv2 ViT-H/16 Un.Hybrid 1600M 86.9 76.8
MVD ViT-L/16 K400+IN1K 2400M 86.4 76.7
ViT-H/16 K400+IN1K 2400M 87.2 77.3
V-JEPA ViT-L/16 VideoMix2M 270M 85.6 75.1
ViT-H/16 VideoMix2M 270M 86.6 77.0
Finally, in Figure 8a, we examine our multi-masking strategy and find that sampling two masks for each clip (long-range and short-range) is more effective than sampling just a single mask for each clip.
In Figure 8c, we explore different average spatial and temporal masking ratios, i.e., the fraction of the spatial/temporal extent that is covered by a mask on average for a clip. Recall that each mask is constructed by sampling several (possibly overlapping) blocks and taking their union. We change the average spatial or temporal masking ratio by changing a block's spatial or temporal size, as well as the overall number of blocks. We found that low spatial or temporal coverage results in a trivial prediction task, which degrades downstream performance. Based on these results, we sample masks that remove roughly 90% of the frame and extend along the entire temporal dimension of the clip by default.
In Figure 8b, we explore different block sizes given an effective spatial masking ratio of 90% and a temporal ratio of 100%. We keep the masking ratio approximately constant by changing the block size and the number of blocks at the same time. We find that sampling several smaller blocks performs better than sampling a single large block. Figure 9 visually illustrates the effect of sampling several smaller blocks to construct a mask.
In Figure 8a, we explore the effect of sampling various numbers of masks per sample. We find that sampling two masks for each clip, with different spatial block sizes for each, is more effective than sampling just a single mask. We hypothesize that this masking strategy induces complementary tasks. In our experiments, we use this as our default mask sampling strategy.
Table 16 Sample efficiency. We compare the sample efficiency of pretraining various state-of-the-art image and video models. The #Samples Seen entry corresponds to the number of samples (image or video clips) processed by the network during pretraining, which is larger than the size of the pretraining dataset for multi-epoch training. The V-JEPA results in this paper are obtained while processing an order of magnitude fewer samples than previous methods.
Method Arch. Data #Samples Seen
OpenCLIP ViT-G/14 LAION-2B 39000M
DINOv2 ViT-g/14 LVD 142M 1900M
VideoMAEv2 ViT-g/14 UnlabeledHybrid 1600M
V-JEPA ViT-H/16 384 VideoMix2M 210M
[Figure 8: three panels reporting Kinetics-400 linear-probe accuracy: (a) Ablating Number of Masks per Sample (1 to 3 masks), (b) Ablating Number of Blocks per Mask (1 to 16 blocks), (c) Ablating Masking Ratio (spatial masking ratio from 25% to 90%, with curves for temporal masking ratios of 100%, 75%, and 50%).]
Figure 8 Masking Strategy Ablation. Evaluating a linear probe on a ViT-B/16 pretrained with V-JEPA on K400 under various 3D Multi-Block masking settings. We examine the impact of (a) sampling several masks per video, (b) varying the number of blocks in a mask, and (c) varying the average spatial and temporal masking ratio. A temporal masking ratio of 100% extends the spatial mask across all the frames in the clip. We find it important to maintain a high spatial and temporal masking ratio during pretraining.
(a) Num. Blocks: 8, Spatial Block Size: 32 × 32
(b) Num. Blocks: 4, Spatial Block Size: 80 × 80
(c) Num. Blocks: 2, Spatial Block Size: 160 × 160
Figure 9 Illustration of masks with different numbers of blocks and block sizes. Each mask is constructed by sampling several (possibly overlapping) blocks and taking their union.
MTEB-French: Resources for French Sentence Embedding Evaluation and
Analysis
Mathieu Ciancone
Wikit, France
[email protected]
Imene Kerboua
Esker, France
[email protected]
Marion Schaeffer
Wikit, France
[email protected]
Wissam Siblini
[email protected]
Abstract
Recently, numerous embedding models have
been made available and widely used for var-
ious NLP tasks. The Massive Text Embed-
ding Benchmark (MTEB) has primarily sim-
plified the process of choosing a model that
performs well for several tasks in English, but
extensions to other languages remain challeng-
ing. This is why we expand MTEB to propose
the first massive benchmark of sentence em-
beddings for French. We gather 15 existing
datasets in an easy-to-use interface and create
three new French datasets for a global evalua-
tion of 8 task categories. We compare 51 care-
fully selected embedding models on a large
scale, conduct comprehensive statistical tests,
and analyze the correlation between model per-
formance and many of their characteristics. We
find out that even if no model is the best on all
tasks, large multilingual models pre-trained on
sentence similarity perform exceptionally well.
Our work comes with open-source code, new
datasets and a public leaderboard1.
1 Introduction
Embeddings are dense vector representations that
capture the semantics of an input. The first emblem-
atic example is Word2Vec, introduced by Mikolov
et al. (2013). It consists of neural architectures
trained to learn high-quality word representations
from contextual relationships in vast amounts of
text. Other models were proposed since then, lever-
aging the transformer architecture (Vaswani et al.,
2017) to produce both generic and contextualized
word embeddings using self-attention. Many mod-
els now exist with various architectures, mono-
lingual or multilingual, pre-trained or fine-tuned
(Naseem et al., 2021; Ding et al., 2023).
In this work, our primary objective is to in-
troduce a large-scale embedding benchmark for
1French table on: https://huggingface.co./spaces/
mteb/leaderboard
French to enable the research community and indus-
try to select the most relevant embedding methods
based on one’s specific needs, such as being open-
source, versatile or targeted toward a particular task,
having a small embedding dimension, the ability to
process long texts or their performance. To achieve
this goal, we undertake significant efforts in col-
lecting datasets to conduct a broad comparison of
models. We ensure that the datasets cover various
tasks within a common, easy-to-use framework,
and we create three new quality-checked datasets
to enhance this collection. We select a diverse
range of models, including prominent French and
multilingual models deemed most efficient. The re-
sults of our study already enable the community to
make informed model selections, whether for gen-
eral purposes or specific tasks. Additionally, our
implementation is open to the community and fea-
tures a public leaderboard, allowing the results to
evolve with new models or datasets. With this first
large-scale comparison, we perform an in-depth
analysis of the results, confirming well-known find-
ings such as the correlation between performance
and model/embedding dimensions and uncovering
interesting nuances.
2 Related Work
Sentence Embeddings Sentence embeddings are
required for many language tasks, such as Semantic
Textual Similarity (STS) and knowledge retrieval.
Many models have been proposed in the litera-
ture, leveraging pooling strategies (Devlin et al.,
2019; Muennighoff, 2022) or similarity fine-tuning
(Reimers and Gurevych, 2019) using a contrastive
framework (Gao et al., 2021; Neelakantan et al.,
2022; Ni et al., 2021; Wang et al., 2022; Zhang
et al., 2023), leveraging prompts (Wang et al., 2023)
or a two steps training process (Chen et al., 2024;
Lee et al., 2024). Few French-language models
have been proposed in the literature (Martin et al.,
2019; Le et al., 2020). Most French models for
sentence embeddings have been developed by the
open-source community2, by fine-tuning models
like CamemBERT(Martin et al., 2019) or Crois-
santLLM(Faysse et al., 2024).
Benchmarks Embedding models are generally
compared on specific tasks, such as information
retrieval, STS or reranking (Thakur et al., 2021;
Agirre et al., 2016; Wang et al., 2021). Other
works evaluate embedding models on multiple
tasks (Wang et al., 2018; et al., 2022; Conneau and
Kiela, 2018) or compare meta-embeddings (García-
Ferrero et al., 2021). The most comprehensive
benchmark to date is MTEB (Muennighoff et al.,
2022). MTEB still has a critical limit: it mainly
focuses on English. Some initiatives already ex-
tended this benchmark to other languages, such as
Chinese (Xiao et al., 2024) and German (Wehrli
et al., 2024). Our work comes with the same am-
bition for French. It relies on the MTEB structure
that provides a solid basis for analysis and extends
it to a new language.
3 MTEB for French
In this section, we describe the datasets and the
models that we propose for the French extension
of MTEB. We also list the research questions we
want to discuss with the results.
3.1 New Datasets
We identified 7 datasets relevant to French in the ex-
isting MTEB, which we assume are of good quality.
We complemented these with 8 external relevant
datasets proposed in the literature, such as BSARD
(Louis and Spanakis, 2022) and Alloprof (Lefebvre-
Brossard et al., 2023), which are proven to be good
quality. We created 3 new ones presented in Table 1
and assessed their quality with various procedures
and metrics. In addition to all performed checks,
we run multiple models on these datasets and pro-
vide results to show that they are neither trivial nor
impossible to solve (see Tables 10, 11, 12 and 13).
Therefore, as of today, our French MTEB
runs on 18 datasets. Some datasets are framed
differently according to the task category they
are used with. For example, MasakhaNEWS
dataset (Adelani et al., 2023) is used for
both Classification (MasakhaNEWSClassification)
and Clustering (MasakhaNEWSClusteringS2S and
2Models on the HuggingFace hub: sentence-camebert,
sentence_croissant_alpha_v0.3, Solon-embeddings-large-0.1.
MasakhaNEWSClusteringP2P). Table 3 shows de-
tails of each task data used for running the bench-
mark.
This section describes the 3 new datasets we in-
troduce, quality checks performed and an analysis
of the semantic similarities between datasets.
3.1.1 Syntec (Retrieval)
The Syntec French collective bargaining agree-
ment3 comprises around 90 articles. Despite its
topic, the language used does not feature the speci-
ficity of the legal vocabulary, making the data
suitable for benchmarking general-purpose mod-
els. The articles have been scraped for use as doc-
uments. Four annotators were divided into two
groups. Each group was given half of the articles
and asked to choose an article and write a question
about it. Each annotator wrote 25 questions. Thus,
a hundred questions have been manually created
and paired with the articles containing the answer4.
Examples of the dataset are available in the ap-
pendix Figure 5. This dataset could also be used
for text classification, clustering or topic modeling.
Regarding quality checks, every article’s integrity
has been reviewed while manually creating ques-
tions. We also manually checked that the questions
could only be answered using the annotated article.
3.1.2 HAL (Clustering)
Hyper Articles en Ligne (HAL) is a French open
archive of scholarly documents from all academic
fields. Scraping this resource, we fetched 85,000
publications in French5. We extracted IDs, titles
and the author’s choice among domain labels. The
last 2 are provided by authors when submitting
their papers to HAL. Since domain annotations are
provided, the dataset can be used for many tasks,
such as topic modeling or text classification. To en-
sure the dataset quality is suitable for a benchmark,
further data cleaning has been performed:
• Duplicates are eliminated, retaining unique
publications for each field.
• Irrelevant titles (due to API indexing mistakes)
or titles in languages other than French have
been manually removed.
3https://www.syntec.fr/convention-collective/
4https://huggingface.co./datasets/lyon-nlp/
mteb-fr-retrieval-syntec-s2p
5https://huggingface.co./datasets/lyon-nlp/
clustering-hal-s2s
Syntec. Samples: 100 queries, 90 documents. Creation process: scraping of the Syntec collective bargaining agreement with articles as documents; writing queries corresponding to articles. Annotation process: 4 annotators divided into 2 groups; each group was given half of the articles and asked to choose an article and ask a question about it; each annotator wrote 25 questions. Quality checks: human verification of annotations.
HAL. Samples: 26,233 samples, 10 classes. Creation process: scraping of HAL articles with id, title and domain; further cleaning with deduplication, language filtering and class subsampling. Annotation process: annotations provided by authors when submitting their paper; they choose the domain between existing academic fields. Quality checks: baseline models for classification and topic modeling.
SummEvalFr. Samples: 100 texts, 1,100 human summaries, 1,600 machine summaries. Creation process: translation of the SummEval dataset from English to French with DeepL. Annotation process: detailed annotation process provided in Fabbri et al. (2021). Quality checks: correlation between BLEU and ROUGE scores of the French and the original English datasets; LLM-as-a-judge translation rating and human verification.
Table 1: New datasets details with the number of samples, the creation process, the annotation process and the quality checks. All datasets are test splits.
• Samples belonging to domain classes with
less than 500 samples were removed, which
leads us to keep only 10 classes.
• Subsampling was performed on 2 classes con-
taining more than 10k samples each to lower
the number of samples and mitigate the unbal-
ance of the dataset.
More details about this process are provided in the
appendix A.2 along with some extracts in Figure
6. We make the dataset publicly available in both
their raw and clean versions. We use this dataset in
a clustering setup to cluster publications by their
title and use the domain as ground truth. To ensure
the quality of this dataset, we run 3 baseline mod-
els for classification: TF-IDF + SVM, a fine-tuned
Camembert (Martin et al., 2019) and GPT-4 lever-
aging In-Context Learning (ICL). Furthermore, we
run one baseline model for topic modeling: Latent
Dirichlet Allocation (LDA) (Blei et al., 2003) and
report scores in the appendix A.2.
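As an illustration of the kind of baseline used for this quality check, a TF-IDF + linear SVM classifier can be fit on titles and domains with scikit-learn. The split handling and the column names ("title", "domain") below are assumptions and should be checked against the dataset card.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# hypothetical column names; adjust to the actual dataset schema
ds = load_dataset("lyon-nlp/clustering-hal-s2s", split="test").train_test_split(test_size=0.2)
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(ds["train"]["title"], ds["train"]["domain"])
print("accuracy:", accuracy_score(ds["test"]["domain"], clf.predict(ds["test"]["title"])))
```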
3.1.3 SummEvalFr (Summarization)
The original SummEval dataset (Fabbri et al., 2021)
consists of 100 news articles from the CNN/Dai-
lyMail dataset. Each article has 11 human-written
summaries and 16 machine-generated summaries
annotated by 8 people with a score for coherence,
consistency, fluency, and relevance. We trans-
lated it from English to French using DeepL API6.
Since MTEB evaluation is based on the embedding
similarity between machine-generated and human-
generated summaries, we propose to compute the
ROUGE (Lin, 2004) and BLEU (Papineni et al.,
2002) metrics between machine and human sum-
maries for both French and English version. In Ta-
ble 2, we report the average of the scores as well as
their correlations between the two languages. The
correlation is high (above 0.7), showing that the
word and n-gram overlap between human and ma-
chine summaries is highly preserved in the French
version. One may argue that computing the met-
ric on fully translated texts (human and machine
summaries are both translated from English) may
introduce biases and not assess the quality of the
translations. For this purpose, we ensure the French
human summaries are correctly translated from En-
glish. We use an LLM as-a-judge (Zheng et al.,
6https://www.deepl.com
2023) where given the original human summary
in English and its translation in French, the model
rates the quality of the translation from 0 to 10,
with 0 being of very bad quality and 10 being ex-
cellent. The prompt is available in Figure 8. Ad-
ditionally, we manually check random translations
with ratings between 9 and 10 to ensure the rating
is relevant. We do the same for all translations with
a score less than 9 and correct them7 (see the rating
distribution in Table 6).
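A sketch of the ROUGE correlation check using the rouge_score package and a Pearson correlation. The pairing of references and machine summaries is assumed to be aligned across the English and French versions, and the exact metric variants used in the paper may differ.

```python
from rouge_score import rouge_scorer
from scipy.stats import pearsonr

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=False)

def rouge1(human_refs, machine_summary):
    # best ROUGE-1 F1 of the machine summary against the available human references
    return max(scorer.score(ref, machine_summary)["rouge1"].fmeasure for ref in human_refs)

def en_fr_correlation(pairs_en, pairs_fr):
    """pairs_*: aligned lists of (human_refs, machine_summary) for each language."""
    scores_en = [rouge1(refs, hyp) for refs, hyp in pairs_en]
    scores_fr = [rouge1(refs, hyp) for refs, hyp in pairs_fr]
    return pearsonr(scores_en, scores_fr)[0]
```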
Dataset BLEU ROUGE-1 ROUGE-2 ROUGE-L
SummEval 0.205 0.292 0.099 0.193
SummEvalFr 0.276 0.302 0.117 0.194
Correlation En-Fr 0.70 0.85 0.80 0.84
Table 2: Average ROUGE and BLEU scores computed
between machine summaries and human summaries
for the original English SummEval and its translation
to French. The correlations of the individual scores
between English and French are also reported.
3.1.4 Data for the Reranking task
The reranking task, as evaluated in MTEB, requires
datasets composed of a set of queries, each as-
sociated with relevant and irrelevant documents.
Despite our efforts, we found no French dataset
that natively exhibits such a structure. Thus, to
evaluate this task, we built data for the reranking
task based on the Syntec and Alloprof (Lefebvre-
Brossard et al., 2023) datasets. These already fea-
ture queries and labeled relevant documents. Irrele-
vant ones were added using the following process:
• To avoid bias, we use the BM25 algorithm
(Robertson and Jones, 1976) (which is a deter-
ministic method) to rank documents in terms
of relevance regarding each query.
• The top 10 documents that are not labeled as
relevant constitute the negative samples.
We recognize that this process leads to a high cor-
relation between the retrieval and reranking tasks.
We still think it is essential to make the latter avail-
able, with an open door to future improvement8.
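A sketch of this negative-mining step using the rank_bm25 package; the tokenization and names are simplifications.

```python
from rank_bm25 import BM25Okapi

def mine_negatives(query, documents, relevant_ids, k=10):
    """Keep the top-k BM25-ranked documents that are not labeled relevant (sketch)."""
    bm25 = BM25Okapi([doc.lower().split() for doc in documents])
    scores = bm25.get_scores(query.lower().split())
    ranked = sorted(range(len(documents)), key=lambda i: scores[i], reverse=True)
    return [i for i in ranked if i not in set(relevant_ids)][:k]
```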
7SummEvalFr available at: https://huggingface.co./
datasets/lyon-nlp/summarization-summeval-fr-p2p
8SyntecReranking available at: https:
//huggingface.co/datasets/lyon-nlp/
mteb-fr-reranking-syntec-s2p and AlloprofRerank-
ing available at: https://huggingface.co./datasets/
lyon-nlp/mteb-fr-reranking-alloprof-s2p
3.1.5 Similarity analysis
We investigate the proximity between the datasets’
topics to give insights about the benchmark con-
tents. The methodology introduced by Muen-
nighoff et al. (2022), i.e. computing an average
embedding of samples from each dataset, is used to
build a dataset-similarity matrix (displayed in ap-
pendix Figure 3). The distances between averaged
embedding vectors of each dataset (which range
from 0.89 to 1 in Figure 3) remain hard to interpret
in terms of semantic proximity between datasets. Thus, we com-
plement this by observing the dataset’s clouds of
embedding in a 2D plane using PCA in Figure 4.
Figures 4 and 3 seem to correlate, showing high
similarity between two datasets when the same
underlying data is used in different tasks. Dataset
topics are pretty close, with some exceptions, such
as the Syntec dataset. As more datasets are added
to the benchmark, this analysis will help select new
data that do not produce redundant results. It may
also help to understand the link between the results
and the datasets’ topics.
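A sketch of this analysis: average one embedding per dataset, compare the averages with cosine similarity, and project the pooled sample embeddings to 2D with PCA. The encoder below is an arbitrary model from the benchmark, not necessarily the one used for the figures.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def dataset_similarity(datasets):
    """datasets: dict mapping dataset name -> list of sampled texts."""
    means = np.stack([model.encode(texts).mean(axis=0) for texts in datasets.values()])
    means = means / np.linalg.norm(means, axis=1, keepdims=True)
    return means @ means.T          # cosine similarity between averaged embeddings

def project_2d(datasets):
    points = np.concatenate([model.encode(texts) for texts in datasets.values()])
    return PCA(n_components=2).fit_transform(points)   # 2D clouds of sample embeddings
```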
3.2 Models
For comparison on our benchmark, we selected
various models to fulfil three objectives.
• Quantity: The aim was to compare a substan-
tial number of models (51 in total) to provide
comprehensive results, facilitating the com-
munity in selecting effective French models.
• Relevance: It was imperative to include
top performers from the MTEB benchmark
(Muennighoff et al., 2022). We mainly se-
lected multilingual models and some English
models to asses their language-transferring
abilities. Additionally, we integrated natively
French transformer-based models such as
CamemBERT (Martin et al., 2019),FlauBERT
(Le et al., 2020) and even the very recent
CroissantLLM (Faysse et al., 2024).
• Variety: Diverse model types were included
to offer an insightful analysis across vari-
ous model characteristics (dimension, training
strategy, etc.).
In line with the third objective, we explicit below
the studied characteristics of embedding models
that will be discussed with the results.
• Embedding dimension: This critical element
influences the expressiveness of the represen-
tation and, in practical applications, the under-
lying storage and compute costs. We selected
models with embedding dimensions ranging
from 384 to 4096.
• Sequence length: Being the number of to-
kens that a model can consider as input, the
sequence length is important as it impacts the
unit that can be encoded (sentence, paragraph,
document). However, encoding overly long
sequences requires efficiently storing the rele-
vant information into a single vector. Among
the selected methods, this criterion varies
from 128 tokens to 32768.
• Model parameters: Often correlated with the
two first characteristics, parameter count is im-
portant for practical applications as it affects
usability on resource-efficient machines. The
selected models have a number of parameters
ranging from 20 million (∼100Mb in float32)
to 7 billion (∼28Gb).
• Language: This is a major feature of lan-
guage models. Some are monolingual, and
others are multilingual. Language is usually
acquired during pre-training, but sometimes,
models familiarize themselves with new lan-
guages at tuning. For the benchmark, we
selected French models, as well as bilingual
or multilingual models. We also included a
few ones that claimed to be English (e.g. all-
MiniLM-L12-v29).
• Model types: There are several strategies to
generate text embeddings such as aggregat-
ing (e.g. with average pooling) token-level
embeddings from raw pre-trained models, or
adding an extra contrastive learning step on a
sentence similarity task with, optionally, ad-
ditional transformation layers. We included
models of all types in our benchmark, summa-
rizing the model type information under two
relevant criteria: finetuned vs pretrained, and
trained for sentence similarity or not.
The selected models are visible in Figure 1, and
all of their characteristics are summarized in ap-
pendix Table 7. Overall, the selection includes the
best models from the sentence transformers frame-
work (Reimers and Gurevych, 2019), the most pop-
ular French NLP models (Le et al., 2020; Martin
9https://huggingface.co./sentence-transformers/
all-MiniLM-L12-v2
et al., 2019), their variants optimized for semantic
similarity (Reimers and Gurevych, 2019), numer-
ous multilingual models performing at the top on
MTEB (e.g E5 and T5), Bloom variants (Zhang
et al., 2023), models based on very recent power-
ful LLMs (Wang et al., 2023; Faysse et al., 2024)
and finally the proprietary models of OpenAI, Co-
here and V oyage. Certain models were selected in
multiple sizes to isolate the dimensionality effect
effectively. We provide information on the mod-
els’ licenses as reported in the Hugging Face hub10.
However, we encourage readers to conduct further
research before utilizing a model.
3.3 Evaluation
For the sake of homogeneity, models are evalu-
ated using the same metrics per task as in MTEB
(Muennighoff et al., 2022): Classification (Accu-
racy), Bitext mining (F1 score), Pair classification
(AP), Clustering (V measure), Reranking (MAP),
Retrieval (NDCG@10), Summarization and STS
(Spearman correlation based on cosine similarity).
BitextMining tasks are excluded from the aver-
age performance scores and therefore the figures,
as this task evaluates 2 languages instead of one,
and this benchmark focuses only on one language
(French). We present the results for both DiaBlaBi-
textMining and FloresBitextMining in Table 12.
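Since the benchmark relies on the MTEB framework, running a model on the French tasks reduces to a few lines. The snippet below is a sketch: the task subset is illustrative and the API details may differ across mteb versions.

```python
from mteb import MTEB
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-mpnet-base-v2")

# illustrative subset of the French tasks; see the public leaderboard for the full list
evaluation = MTEB(tasks=["SyntecRetrieval", "HALClusteringS2S", "SummEvalFr"])
evaluation.run(model, output_folder="results/paraphrase-multilingual-mpnet-base-v2")
```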
Using the overall benchmark results, our goal
will be to answer the following research questions:
Q1: Is there a model that stands out on all tasks?
As we are trying to find out whether one embed-
ding model is statistically better than the others for
French, the objective will also be to analyze the
performance of the models by tasks to facilitate
model choice for specific applications.
Q2: Are there any links between the model charac-
teristics and performance?
In section 3.2, we undertook the substantial task of
gathering the characteristics of all evaluated mod-
els. The goal here will be to analyze their impact
on performance and draw conclusions about, for
example, the relationship between embedding di-
mension and model ranking on the benchmark.
Q3: Do monolingual models have multilingual ca-
pabilities?
We interrogate the ability of a model trained exclu-
sively in one language to perform well in another
language.
Q4: Are there any correlations between datasets
10https://huggingface.co./models
with respect to model ranking?
To go further than the correlation analysis among
datasets regarding their topics (see section 3.1.5),
subsequent analysis will be conducted regarding
how they rank models. Additionally, complemen-
tary insights will be derived from examining cor-
relations of models relative to their strengths and
weaknesses across different datasets.
4 Results and discussion
In this section, we present the results through the
prism of our research questions.
Q1: Is there a model that stands out on all
tasks?
Model performances for each task are presented
in appendix Tables 9, 10, 11, 12 and 13. Figure
1 shows the critical difference diagram of average
score ranks.
As in MTEB (Muennighoff et al., 2022), no
model claims state-of-the-art in all tasks even if
the text-embedding-3-large model is in first place
on average on all tasks (see Table 9). It ranks
first for the classification and reranking tasks. For
the clustering task, text-embedding-ada-002 is the
best model. The models voyage-code-2, text-
embedding-3-small and mistral-embed share the
top positions in the retrieval task ranking. For the
pair classification task, laser2 is ahead of its com-
petitors. Finally, sentence-camembert-large leads
on the STS task and multilingual-e5-small has the
best results for summarization.
Figure 1 shows a global model comparison
across all datasets. The models are arranged hori-
zontally according to their performance, with the
best models on the left. The black bars repre-
sent the statistical equivalence between the mod-
els’ performances. The statistically equivalent
top performers for this benchmark are OpenAI’s
models text-embedding-3-large, text-embedding-3-
small and text-embedding-ada-002. Interestingly,
many models do not show a significant perfor-
mance gap between their base and large flavours.
Some French models stand out among the multi-
lingual models, such as Solon-embeddings-large-
0.1, sentence_croissant_alpha_v0.3 and sentence-
camembert-large.
Q2: Are there any links between model
characteristics and performance?
The Spearman correlations between the average
rank of the models and their characteristics are the
following:
• Tuned for sentence similarity: 0.727
• Finetuned vs pretrained: 0.544
• Model number of parameters: 0.49
• Embedding dimension: 0.452
• Closed source: 0.449
• Max sequence length: 0.336
• Multilingual: 0.103
• English: 0.025
• English but tuned on other languages: -0.025
• French: -0.134
• Bilingual: -0.135
Additionally, all cross-correlations between charac-
teristics are reported in appendix Figure 10.
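These coefficients can be reproduced with a straightforward Spearman computation over the table of model characteristics; the sketch below uses illustrative values only.

```python
import pandas as pd
from scipy.stats import spearmanr

# one row per model: average benchmark rank and a few characteristics (values illustrative)
df = pd.DataFrame({
    "avg_rank":      [0.087, 0.25, 0.69, 0.94],
    "tuned_for_sim": [1, 1, 1, 0],
    "embedding_dim": [3072, 1024, 384, 1024],
})
for col in ["tuned_for_sim", "embedding_dim"]:
    rho, _ = spearmanr(df["avg_rank"], df[col])
    print(col, round(rho, 3))
```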
As expected, the score most strongly correlates
with whether the evaluated models were trained on
a sentence similarity task. Of course, this criterion
is connected to the more general Finetuned one.
The only top-performing models solely pre-trained
are from the E5 family, where the pre-training is,
in fact, contrastive and optimized for similarity.
Conversely, models pre-trained on token-level tasks
and generating embeddings via pooling appear less
well-suited for the benchmark tasks.
Furthermore, we observe a performance correla-
tion with the embedding dimension and the model’s
number of parameters, which are often correlated
themselves. This appears very clearly on the rela-
tive ranking of E5 and T5 models (see Figure 1).
However, some small models perform very well
on the benchmark, such as the standard version
of the multilingual universal sentence encoder or
Solon-embeddings-base-1.0. Notably, the maxi-
mum sequence length, while an important criterion
for generative tasks with LLMs, is less correlated
with performance than the other dimensions. This
can be explained by many datasets containing rel-
atively small texts (see appendix Table 3 showing
that 14 datasets have less than 50 tokens).
Regarding language, it is surprising that good
performance is not particularly correlated with
French models in particular. In reality, the other
aspects of the models, such as being fine-tuned
[Figure 1 data: normalized average rank per model (lower is better), from best to worst: text-embedding-3-large (0.087), text-embedding-ada-002 (0.15), text-embedding-3-small (0.17), mistral-embed (0.19), bge-m3 (0.22), voyage-code-2 (0.24), e5-mistral-7b-instruct (0.24), Solon-embeddings-large-0.1 (0.25), sentence_croissant_alpha_v0.3 (0.26), sentence-t5-xxl (0.27), embed-multilingual-v3.0 (0.27), sentence-camembert-large (0.29), bge-m3-custom-fr (0.3), sentence_croissant_alpha_v0.2 (0.31), multilingual-e5-large (0.31), Solon-embeddings-base-0.1 (0.34), multilingual-e5-base (0.34), sentence-t5-xl (0.36), voyage-2 (0.41), sentence-croissant-llm-base (0.42), paraphrase-multilingual-mpnet-base-v2 (0.43), embed-multilingual-light-v3.0 (0.43), multilingual-e5-small (0.44), sentence-t5-large (0.45), sentence-flaubert-base (0.46), universal-sentence-encoder-multilingual-3 (0.49), LaBSE (0.51), universal-sentence-encoder-multilingual-large-3 (0.52), paraphrase-multilingual-MiniLM-L12-v2 (0.54), sentence-t5-base (0.54), distiluse-base-multilingual-cased-v2 (0.56), sentence-camembert-base (0.59), text2vec-base-multilingual (0.62), udever-bloom-1b1 (0.64), bert-base-multilingual-uncased (0.64), laser2 (0.67), all-MiniLM-L6-v2 (0.69), all-MiniLM-L12-v2 (0.69), multi-qa-MiniLM-L6-cos-v1 (0.71), distilbert-base-fr-cased (0.74), distilbert-base-en-fr-cased (0.74), camembert-large (0.75), distilbert-base-25lang-cased (0.75), bert-base-multilingual-cased (0.78), camembert-base (0.84), udever-bloom-560m (0.86), flaubert_base_cased (0.86), xlm-roberta-large (0.86), xlm-roberta-base (0.91), flaubert_base_uncased (0.92), flaubert_large_cased (0.94). Black bars in the diagram link models whose performance is statistically equivalent.]
Figure 1: Critical difference diagram representing the significant rank gaps between models. The axis represents the
normalized average rank of the models (lower is better). The black bars indicate that the difference in models’ rank
is not statistically significant, i.e. lower than the critical difference.
for similarity, prevail. Nevertheless, we can high-
light the excellent performance of a few French
models such as sentence-camembert and sentence-
croissant and Solon-embeddings.
Lastly, we emphasize that closed-source models
perform well on this benchmark (text-embeddings,
mistral-embed and voyage), but we lack informa-
tion about their characteristics. As more open-
source well-performing models get added in the
future, we could expect this correlation to decrease.
Note that the correlation between sequence length
and performance could be dragged by closed-
source models that have generally larger sequence
lengths.
Q3: Do monolingual models have multilingual
capabilities?
[Figure 2: average benchmark performance grouped by training-data language (Multilingual, French, English + tuning on other languages, Bilingual, English); y-axis: average performance.]
Figure 2: Model performance depending on the lan-
guage of the data they have been trained on.
We also studied the capabilities of models on the
French language when the language of the training
data varies. It is surprising to note the absence of a
clear correlation between the language the model
is trained on and its performance on French, as
shown by the large standard deviation in Figure 2.
Furthermore, monolingual models trained exclu-
sively on English such as voyage-code-2 show
very good results on French datasets compared
to models trained exclusively on French such as
flaubert derivatives and distilbert-base-fr-cased
(see Table D.1).
This is explained by the fact that a large part of the
selected French models generate embeddings using
a pooling strategy. Only a few are sentence trans-
former models, for which the pooled representation
is part of the model and trained with it, leading to
higher-quality embeddings. This is endorsed by
the excellent results of sentence-camembert-large,
a sentence transformer model trained on French
corpus and confirms the recent findings in terms of
model architecture (Gao et al., 2021).
Finally, it should be noted that a significant portion
of the French data used to train the selected French
models actually comes from English datasets
that have been machine translated (May, 2021).
Despite the tremendous progress of machine
translation, it is well known that the generated
data may be unrepresentative of the language
used by native speakers and cause a reduced final
performance (Barbosa et al., 2021).
Q4: Are there any correlations between
datasets with respect to model ranking?
The datasets correlation w.r.t model ranking are
presented in appendix Figure 12. Except for
two datasets (MasakhaNEWSClusteringP2P, Sum-
mEvalFr), the correlations, on average, are high.
There is still enough diversity to make each dataset
interesting for the French MTEB benchmark. Two
groups (SyntecReranking/ SyntecRetrieval, Mas-
siveScenarioClassification/ MTOPDomainClassi-
fication/ MassiveIntentClassification) exhibit no-
tably high correlations (∼0.97). It is interesting
to point out some sub-diagonal correlation blocks.
The datasets being arranged by task indicate that
models behave slightly more similarly within the
same task than between two different tasks. This
underscores the importance of having multiple
tasks in the benchmark to select general-purpose
models. For readers interested in specific tasks,
it is more relevant to examine task-specific rank-
ings rather than the overall one. The complemen-
tary results of model correlations w.r.t to strengths
and weaknesses on datasets are displayed in ap-
pendix Figure 11. Strong correlations in behavior
emerge among the variants of the same models
(e.g. DistilBERT, sentence-croissant, sentence-t5,
e5, etc.). Correlations are also generally observed
among numerous models trained using the sentence
transformers framework (Reimers and Gurevych,
2019), as well as proprietary models, e.g. from
Cohere and OpenAI. Conversely, these models fine-
tuned for sentence similarity, show minimal cor-
relation with pre-trained models for which token-
embedding pooling techniques are employed.
5 Conclusion and perspectives
In this work, we introduce a large-scale embed-
ding benchmark for French to enable the research
community and industry to select the most relevant
embedding methods based on their specific needs.
We undertake significant efforts in collecting 15
datasets and create 3 new quality-checked ones to
enhance this collection. The whole French bench-
mark runs on 26 tasks. We select a diverse range of
51 models, including prominent French and multi-
lingual models deemed most efficient to conduct a
broad comparison. Our implementation is open to
the community and features a public leaderboard,
allowing the results to evolve with new models or
datasets. After an in-depth analysis of the results,
OpenAI models perform significantly better than
the other models. However, other models should be
considered for their performance on specific tasks,
being open source or having a small embedding
dimension.
This work opens several doors for future im-
provements. By examining dataset diversity in
terms of topics and model ranking, we observe
that the benchmark would benefit from additional
datasets that introduce higher diversity. Beyond
classification, many tasks focus on semantic simi-
larity, explaining the strong performance of models
trained for similarity. Exploring novel tasks in the
generative spectrum or evaluating token embed-
dings (contextualized or not) on tasks like Named
Entity Recognition could be an interesting path
for future exploration. There are also opportuni-
ties for improvements on the model side. With
numerous existing models that could be added to
the leaderboard and many new proposals awaiting.
For instance, we can already see the promising ca-
pabilities of early variants of recent models (Faysse
et al., 2024) and expect that future proposals will
come to compete strongly with closed-source mod-
els. Ultimately, we hope to see the emergence of
other language-specific MTEB variants (e.g. for
high-resource languages like Spanish and German),
enabling a more comprehensive evaluation of mul-
tilingual model performance.
6 Limitations
Native French resources unavailability The
availability of resources natively in French is an
obvious limitation of our work. Regarding mod-
els, there are far fewer options than with more
widespread languages such as English. Indeed,
most of the existing French embedding models we
found are trained using either older architectures
or methods, unlike most recent multilingual mod-
els such as NV-Embed-v1 (Lee et al., 2024) or e5-
mistral-7b-instruct (Wang et al., 2023). Comparing
models by family would be beneficial, particularly
for evaluating French models against multilingual
models on the same architecture using the same
training technique. Resource limitations also ap-
ply to datasets. For example, the summarization
task dataset is translated, which can be less relevant
than a natively French dataset. We have also built
datasets for reranking tasks using existing ones
from retrieval task because we could not find any
in French. This construction process introduces a
bias as the model performance on both tasks may be
correlated (see Figure 12). We preferred to propose
datasets even if they could introduce biases rather
than not address the task in the benchmark. Note
that each task type can be considered individually.
We hope additional resources will be developed
in the French-speaking community to enrich our
comparison.
Benchmark validity over time As with all
benchmarks, their reliability over time can be dis-
cussed as the field evolves fast. The models se-
lected for the analysis conducted in this paper are
those available at this time, new outperforming
models will be created and shall be evaluated. Our
work extends MTEB and thus simplifies the ad-
dition of new datasets for evaluation and allows
running new models. With this effort, we hope
this will simplify the evaluation of new models pro-
posed by the community to keep our work up to
date.
Data contamination issues Bias may exist for
models that use the training sets of the provided
evaluation datasets for their training. It consider-
ably improves their performance on the benchmark,
favouring them over other models. This is particu-
larly worrying for models that do not communicate
about the datasets used during training, such as pro-
prietary models. Generally speaking, it would be
interesting to calculate the similarity between the
datasets used to train the models and those used to
test them to check that they are far enough apart to
draw general conclusions.
Focus on sentence embeddings Finally, like the
original version of MTEB, the comparison focuses
mainly on sentence embeddings. Other tasks could
be added to cover word embeddings and, therefore,
more NLP tasks.
Acknowledgements
We would like to thank Wikit 11 and Esker12 for
providing compute and funding this research.
References
David Ifeoluwa Adelani, Marek Masiak, Israel Abebe
Azime, Jesujoba Oluwadara Alabi, Atnafu Lam-
bebo Tonja, Christine Mwase, Odunayo Ogun-
depo, Bonaventure F. P. Dossou, Akintunde
Oladipo, Doreen Nixdorf, Chris C. Emezue,
11https://www.wikit.ai/
12https://www.esker.com/
Sana Al-Azzawi, Blessing K. Sibanda, Davis
David, Lolwethu Ndolela, Jonathan Mukiibi,
Tunde Oluwaseyi Ajayi, Tatiana Moteu Ngoli, Brian
Odhiambo, Abraham Toluwase Owodunni, Nnae-
meka Obiefuna, Shamsuddeen Hassan Muham-
mad, Saheed Salahudeen Abdullahi, Mesay Gemeda
Yigezu, Tajuddeen Rabiu Gwadabe, Idris Abdulmu-
min, Mahlet Taye Bame, Oluwabusayo Olufunke
Awoyomi, Iyanuoluwa Shode, Tolulope Anu Ade-
lani, Habiba Abdulganiy Kailani, Abdul-Hakeem
Omotayo, Adetola Adeeko, Afolabi Abeeb, An-
uoluwapo Aremu, Olanrewaju Samuel, Clemen-
cia Siro, Wangari Kimotho, Onyekachi Raphael
Ogbu, Chinedu E. Mbonu, Chiamaka Ijeoma Chuk-
wuneke, Samuel Fanijo, Jessica Ojo, Oyinkansola F.
Awosan, Tadesse Kebede Guge, Sakayo Toadoum
Sari, Pamela Nyatsine, Freedmore Sidume, Oreen
Yousuf, Mardiyyah Oduwole, Ussen Kimanuka,
Kanda Patrick Tshinu, Thina Diko, Siyanda Nx-
akama, Abdulmejid Tuni Johar, Sinodos Gebre,
Muhidin A. Mohamed, Shafie Abdi Mohamed,
Fuad Mire Hassan, Moges Ahmed Mehamed, Evrard
Ngabire, and Pontus Stenetorp. 2023. Masakhanews:
News topic classification for african languages. In
International Joint Conference on Natural Language
Processing.
Eneko Agirre, Carmen Banea, Daniel Cer, Mona Diab,
Aitor Gonzalez-Agirre, Rada Mihalcea, German
Rigau, and Janyce Wiebe. 2016. SemEval-2016
task 1: Semantic textual similarity, monolingual
and cross-lingual evaluation. In Proceedings of the
10th International Workshop on Semantic Evaluation
(SemEval-2016), pages 497–511, San Diego, Califor-
nia. Association for Computational Linguistics.
Arthur Barbosa, Máverick Ferreira, Rafael Fer-
reira Mello, Rafael Dueire Lins, and Dragan Ga-
sevic. 2021. The impact of automatic text transla-
tion on classification of online discussions for social
and cognitive presences. In LAK21: 11th Interna-
tional Learning Analytics and Knowledge Confer-
ence, LAK21, page 77–87, New York, NY , USA.
Association for Computing Machinery.
Rachel Bawden, Eric Bilinski, Thomas Lavergne, and
Sophie Rosset. 2021. Diabla: A corpus of bilingual
spontaneous written dialogues for machine transla-
tion. Language Resources and Evaluation, 55:635–
660.
David M Blei, Andrew Y Ng, and Michael I Jordan.
2003. Latent dirichlet allocation. Journal of machine
Learning research, 3(Jan):993–1022.
Jianlv Chen, Shitao Xiao, Peitian Zhang, Kun Luo, Defu
Lian, and Zheng Liu. 2024. Bge m3-embedding:
Multi-lingual, multi-functionality, multi-granularity
text embeddings through self-knowledge distillation.
Xi Chen, Ali Zeynali, Chico Camargo, Fabian Flöck,
Devin Gaffney, Przemyslaw Grabowicz, Scott Hale,
David Jurgens, and Mattia Samory. 2022. SemEval-
2022 task 8: Multilingual news article similarity. In
Proceedings of the 16th International Workshop on
Semantic Evaluation (SemEval-2022), pages 1094–
1106, Seattle, United States. Association for Compu-
tational Linguistics.
Alexis Conneau and Douwe Kiela. 2018. Senteval: An
evaluation toolkit for universal sentence representa-
tions. ArXiv, abs/1803.05449.
Mathias Creutz. 2018. Open subtitles paraphrase corpus
for six languages. In Proceedings of the Eleventh In-
ternational Conference on Language Resources and
Evaluation (LREC 2018), Miyazaki, Japan. European
Language Resources Association (ELRA).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. Bert: Pre-training of deep
bidirectional transformers for language understand-
ing. In North American Chapter of the Association
for Computational Linguistics.
Ning Ding, Yankai Lin, Zhiyuan Liu, and Maosong
Sun. 2023. Sentence and document representation
learning. In Representation Learning for Natural
Language Processing, pages 81–125. Springer Na-
ture Singapore Singapore.
Aarohi Srivastava et al. 2022. Beyond the imitation
game: Quantifying and extrapolating the capabilities
of language models. ArXiv, abs/2206.04615.
Alexander R Fabbri, Wojciech Kryściński, Bryan Mc-
Cann, Caiming Xiong, Richard Socher, and Dragomir
Radev. 2021. Summeval: Re-evaluating summariza-
tion evaluation. Transactions of the Association for
Computational Linguistics, 9:391–409.
Manuel Faysse, Patrick Fernandes, Nuno M. Guerreiro,
António Loison, Duarte M. Alves, Caio Corro, Nico-
las Boizard, João Alves, Ricardo Rei, Pedro H. Mar-
tins, Antoni Bigata Casademunt, François Yvon, An-
dré F. T. Martins, Gautier Viaud, Céline Hudelot,
and Pierre Colombo. 2024. Croissantllm: A truly
bilingual french-english language model.
Jack FitzGerald, Christopher Hench, Charith Peris,
Scott Mackie, Kay Rottmann, Ana Sanchez, Aaron
Nash, Liam Urbach, Vishesh Kakarala, Richa Singh,
Swetha Ranganath, Laurie Crist, Misha Britan,
Wouter Leeuwis, Gokhan Tur, and Prem Natara-
jan. 2023. MASSIVE: A 1M-example multilin-
gual natural language understanding dataset with
51 typologically-diverse languages. In Proceedings
of the 61st Annual Meeting of the Association for
Computational Linguistics (Volume 1: Long Papers),
pages 4277–4302, Toronto, Canada. Association for
Computational Linguistics.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021.
Simcse: Simple contrastive learning of sentence em-
beddings. In Conference on Empirical Methods in
Natural Language Processing.
Iker García-Ferrero, Rodrigo Agerri, and German Rigau.
2021. Benchmarking meta-embeddings: What works
and what does not. In Findings of the Association
for Computational Linguistics: EMNLP 2021, pages
3957–3972, Punta Cana, Dominican Republic. Asso-
ciation for Computational Linguistics.
Naman Goyal, Cynthia Gao, Vishrav Chaudhary, Peng-
Jen Chen, Guillaume Wenzek, Da Ju, Sanjana Kr-
ishnan, Marc’Aurelio Ranzato, Francisco Guzmán,
and Angela Fan. 2021. The flores-101 evaluation
benchmark for low-resource and multilingual ma-
chine translation.
Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Max-
imin Coavoux, Benjamin Lecouteux, Alexandre Al-
lauzen, Benoît Crabbé, Laurent Besacier, and Didier
Schwab. 2020. Flaubert: Unsupervised language
model pre-training for french.
Chankyu Lee, Rajarshi Roy, Mengyao Xu, Jonathan
Raiman, Mohammad Shoeybi, Bryan Catanzaro, and
Wei Ping. 2024. Nv-embed: Improved techniques
for training llms as generalist embedding models.
Antoine Lefebvre-Brossard, Stephane Gazaille, and
Michel C. Desmarais. 2023. Alloprof: a new french
question-answer education dataset and its use in an
information retrieval case study.
Haoran Li, Abhinav Arora, Shuohui Chen, Anchit
Gupta, Sonal Gupta, and Yashar Mehdad. 2021.
MTOP: A comprehensive multilingual task-oriented
semantic parsing benchmark. In Proceedings of the
16th Conference of the European Chapter of the Asso-
ciation for Computational Linguistics: Main Volume,
pages 2950–2962, Online. Association for Computa-
tional Linguistics.
Chin-Yew Lin. 2004. ROUGE: A package for auto-
matic evaluation of summaries. In Text Summariza-
tion Branches Out, pages 74–81, Barcelona, Spain.
Association for Computational Linguistics.
Antoine Louis and Gerasimos Spanakis. 2022. A statu-
tory article retrieval dataset in French. In Proceed-
ings of the 60th Annual Meeting of the Association
for Computational Linguistics (Volume 1: Long Pa-
pers), pages 6789–6803, Dublin, Ireland. Association
for Computational Linguistics.
Louis Martin, Benjamin Muller, Pedro Ortiz Suarez,
Yoann Dupont, Laurent Romary, Eric Villemonte
de la Clergerie, Djamé Seddah, and Benoît Sagot.
2019. Camembert: a tasty french language model. In
Annual Meeting of the Association for Computational
Linguistics.
Philip May. 2021. Machine translated multilingual sts
benchmark dataset.
Julian McAuley and Jure Leskovec. 2013. Hidden fac-
tors and hidden topics: understanding rating dimen-
sions with review text. In Proceedings of the 7th
ACM Conference on Recommender Systems, RecSys
’13, page 165–172, New York, NY , USA. Association
for Computing Machinery.
Tomas Mikolov, Kai Chen, Gregory S. Corrado, and
Jeffrey Dean. 2013. Efficient estimation of word
representations in vector space. In International Con-
ference on Learning Representations.
Niklas Muennighoff. 2022. Sgpt: Gpt sentence
embeddings for semantic search. arXiv preprint
arXiv:2202.08904.
Niklas Muennighoff, Nouamane Tazi, Loic Magne, and
Nils Reimers. 2022. Mteb: Massive text embedding
benchmark. In Conference of the European Chapter
of the Association for Computational Linguistics.
Usman Naseem, Imran Razzak, Shah Khalid Khan,
and Mukesh Prasad. 2021. A comprehensive survey
on word representation models: From classical to
state-of-the-art word representation language models.
Transactions on Asian and Low-Resource Language
Information Processing, 20(5):1–35.
Arvind Neelakantan, Tao Xu, Raul Puri, Alec Rad-
ford, Jesse Michael Han, Jerry Tworek, Qiming Yuan,
Nikolas Tezak, Jong Wook Kim, Chris Hallacy, et al.
2022. Text and code embeddings by contrastive pre-
training. arXiv preprint arXiv:2201.10005.
Jianmo Ni, Gustavo Hernández Ábrego, Noah Constant,
Ji Ma, Keith B. Hall, Daniel Cer, and Yinfei Yang.
2021. Sentence-t5: Scalable sentence encoders from
pre-trained text-to-text models.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-
Jing Zhu. 2002. Bleu: a method for automatic evalu-
ation of machine translation. In Proceedings of the
40th Annual Meeting of the Association for Compu-
tational Linguistics, pages 311–318, Philadelphia,
Pennsylvania, USA. Association for Computational
Linguistics.
Nils Reimers and Iryna Gurevych. 2019. Sentence-bert:
Sentence embeddings using siamese bert-networks.
In Conference on Empirical Methods in Natural Lan-
guage Processing.
Stephen E. Robertson and Karen Spärck Jones. 1976.
Relevance weighting of search terms. J. Am. Soc. Inf.
Sci., 27:129–146.
Thomas Scialom, Paul-Alexis Dray, Sylvain Lamprier,
Benjamin Piwowarski, and Jacopo Staiano. 2020.
MLSUM: The multilingual summarization corpus.
In Proceedings of the 2020 Conference on Empirical
Methods in Natural Language Processing (EMNLP),
pages 8051–8067, Online. Association for Computa-
tional Linguistics.
Nandan Thakur, Nils Reimers, Andreas Rücklé, Ab-
hishek Srivastava, and Iryna Gurevych. 2021. BEIR:
A heterogenous benchmark for zero-shot evalu-
ation of information retrieval models. CoRR,
abs/2104.08663.
Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob
Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz
Kaiser, and Illia Polosukhin. 2017. Attention is all
you need. In Neural Information Processing Systems.
Alex Wang, Amanpreet Singh, Julian Michael, Felix
Hill, Omer Levy, and Samuel R. Bowman. 2018.
Glue: A multi-task benchmark and analysis plat-
form for natural language understanding. In Black-
boxNLP@EMNLP.
Kexin Wang, Nils Reimers, and Iryna Gurevych. 2021.
TSDAE: Using transformer-based sequential denois-
ing auto-encoderfor unsupervised sentence embed-
ding learning. In Findings of the Association for
Computational Linguistics: EMNLP 2021 , pages
671–688, Punta Cana, Dominican Republic. Associa-
tion for Computational Linguistics.
Liang Wang, Nan Yang, Xiaolong Huang, Binxing
Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder,
and Furu Wei. 2022. Text embeddings by weakly-
supervised contrastive pre-training. arXiv preprint
arXiv:2212.03533.
Liang Wang, Nan Yang, Xiaolong Huang, Linjun Yang,
Rangan Majumder, and Furu Wei. 2023. Improving
text embeddings with large language models. arXiv
preprint arXiv:2401.00368.
Silvan Wehrli, Bert Arnrich, and Christopher Irrgang.
2024. German text embedding clustering benchmark.
Shitao Xiao, Zheng Liu, Peitian Zhang, Niklas Muen-
nighoff, Defu Lian, and Jian-Yun Nie. 2024. C-pack:
Packaged resources to advance general chinese em-
bedding.
Yinfei Yang, Yuan Zhang, Chris Tar, and Jason
Baldridge. 2019. PAWS-X: A cross-lingual adversar-
ial dataset for paraphrase identification. In Proceed-
ings of the 2019 Conference on Empirical Methods
in Natural Language Processing and the 9th Inter-
national Joint Conference on Natural Language Pro-
cessing (EMNLP-IJCNLP), pages 3687–3692, Hong
Kong, China. Association for Computational Linguis-
tics.
Xin Zhang, Zehan Li, Yanzhao Zhang, Dingkun Long,
Pengjun Xie, Meishan Zhang, and Min Zhang. 2023.
Language models are universal embedders. ArXiv,
abs/2310.08232.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan
Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin,
Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang,
Joseph E. Gonzalez, and Ion Stoica. 2023. Judging
llm-as-a-judge with mt-bench and chatbot arena.
A Supplementary materials for datasets
A.1 All datasets
Table 3 displays the size of each dataset along with
the average number of tokens per sample and their
references. The dataset’s content was tokenized
using cl100k_base encoding. For Retrieval, the
two numbers refer to the queries and the docu-
ments. For Reranking, the three numbers refer to
the queries, the pairs of queries with relevant docu-
ments and the pairs of queries with irrelevant ones,
respectively. The pairs of queries and documents
are obtained from the 90 documents extracted. For
SummEvalFr, the three numbers refer to the texts,
human and machine summaries, respectively.
Figure 3 represents the semantic similarity be-
tween each dataset. The methodology was as fol-
lows: 90 random samples per dataset are embedded
using the multilingual-e5-large model. The embed-
dings of each dataset’s samples are averaged. The
similarity between each dataset is then calculated
using cosine similarity as in (Muennighoff et al.,
2022).
We complement this analysis by observing the
dataset’s clouds of embedding in a 2D plane using
PCA in Figure 4.
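For readers who want to reproduce this comparison, the sketch below illustrates the procedure just described: embed 90 random samples per task's data, average the embeddings, compare the averages with cosine similarity, and project them with PCA. It is a minimal illustration rather than the exact evaluation script; the Hugging Face identifier intfloat/multilingual-e5-large, the seed, and the toy dataset dictionary are assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("intfloat/multilingual-e5-large")  # assumed HF id

def dataset_centroid(texts, n=90, seed=42):
    """Embed up to n random samples of a dataset and average the embeddings."""
    rng = np.random.default_rng(seed)
    picked = rng.choice(texts, size=min(n, len(texts)), replace=False)
    embeddings = model.encode(list(picked), normalize_embeddings=True)
    return embeddings.mean(axis=0)

# Toy stand-in for the tasks' data; loading the real datasets is omitted here.
datasets = {
    "SyntecRetrieval": ["Quel est le préavis en période d'essai ?",
                        "Article 14 : Préavis pendant la période d'essai"],
    "HALClusteringS2S": ["Sur l'approximation numérique de quelques problèmes",
                         "La transformation digitale du management des ressources humaines"],
}
centroids = np.stack([dataset_centroid(texts) for texts in datasets.values()])

similarity = cosine_similarity(centroids)                  # matrix behind Figure 3
coords_2d = PCA(n_components=2).fit_transform(centroids)   # projection behind Figure 4
```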
A.2 Created datasets
Syntec Figure 5 shows an extract from the Syntec
dataset with a document and a query relative to this
document.
HAL Figure 6 is an extract from the HAL
dataset. Table 4 lists the distribution of classes
(domain field) for the HAL dataset on raw
subset and mteb_eval subset, which is used
for MTEB evaluation. Labels descriptions
can be found at this URL: https://api.archives-
ouvertes.fr/ref/domain/?q=*:*&rows=393 or in Ta-
ble 4. After pre-processing, mteb_eval covers titles from 10 domains, as classes with fewer than 500 samples were removed. In the MTEB evaluation subset
of the dataset, titles composed of 2 words or less
have been removed (371 samples), resulting in an
average word count of 13.4. Figure 7 shows the
word count distribution per title. Furthermore, the
dataset has been cleaned up by manually remov-
ing all non-French titles. Additionally, it can be
observed in Table 4 that in the original raw dataset,
the shs and sdv classes represent by far the majority
of the dataset samples with respectively 58706 sam-
ples (73%) and 11049 samples (13%). In order to
mitigate the class imbalance while preserving the
majority of those classes, they have been randomly
subsampled to 6701 and 4803 samples. Further-
more, baseline models have been trained and tested
to assess the usability of this dataset in other tasks,
such as classification and topic modeling. Table 5
shows the results obtained.
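The filtering steps described above can be summarised with a short pandas sketch. The column names (domain, title), the seed, and the DataFrame input are illustrative assumptions, and the manual removal of non-French titles is not reproduced; the published mteb_eval subset should be used directly for evaluation.

```python
import pandas as pd

def build_mteb_eval(raw: pd.DataFrame, seed: int = 42) -> pd.DataFrame:
    """Sketch of the HAL pre-processing: class filtering, short-title removal, subsampling."""
    df = raw.copy()
    # Keep only domains with at least 500 samples (10 classes remain).
    counts = df["domain"].value_counts()
    df = df[df["domain"].isin(counts[counts >= 500].index)]
    # Drop titles of two words or fewer.
    df = df[df["title"].str.split().str.len() > 2]
    # Subsample the two majority classes to mitigate class imbalance.
    targets = {"shs": 6701, "sdv": 4803}
    parts = [
        group.sample(n=min(targets.get(domain, len(group)), len(group)), random_state=seed)
        for domain, group in df.groupby("domain")
    ]
    return pd.concat(parts).reset_index(drop=True)
```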
SummEvalFr Extracts of humans and machine
summaries translated in French from SummEvalFr
and the original ones in English from SummEval
(Fabbri et al., 2021) are shown in Figure 9. As ex-
plained in section 3.1.3, we use a LLM to evaluate
the quality of translations for human summaries,
we provide the prompt used with GPT-4 for this
evaluation in Figure 8.
Table 6 shows the distribution of ratings given
by the LLM. On this 10-point scale, we manually verify random samples rated above 9, and we verify all samples rated under 9 as well as those with no rating (N/A) caused by the OpenAI content management policy being triggered. The LLM flags 60 samples as incorrectly translated; after manual verification, fewer than 10 of them actually needed to be corrected.
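As an illustration of this LLM-as-judge check, the sketch below sends one English/French pair to a chat model with a prompt abridged from Figure 8 and parses the returned rating. The OpenAI client usage, the model name "gpt-4", and the parsing regex are assumptions made for the example; they are not the authors' exact code.

```python
import re
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def rate_translation(english_text: str, french_translation: str) -> float | None:
    prompt = (
        "You will be given a couple of texts in English and their translation in French.\n"
        "Give a 'rating' score on how well the English text was translated into French,\n"
        "as a float on a scale of 0 to 10.\n"
        f"Original text in English: {english_text}\n"
        f"Translation in French: {french_translation}\n"
        "Feedback:::\nTotal rating:"
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    answer = response.choices[0].message.content or ""
    match = re.search(r"(\d+(?:\.\d+)?)", answer)
    # A missing rating (e.g. a content-policy refusal) falls into the N/A bucket of Table 6.
    return float(match.group(1)) if match else None
```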
B Supplementary materials for
correlation analysis
This section presents various correlations computed
based on the model results on the proposed bench-
mark.
Figure 10 represents cross-correlations between
models’ performances and their studied character-
istics as a heatmap.
Figure 11 represents the Spearman correlations
in terms of performance across models.
Figure 12 represents the Spearman correlations
in terms of performance across datasets.
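Both heatmaps can be derived from a single models-by-datasets score table. The sketch below assumes the results are available as a pandas DataFrame with one row per model and one column per dataset (this layout is an assumption); the toy values are taken from Tables 11 and 12.

```python
import pandas as pd

def spearman_heatmaps(scores: pd.DataFrame):
    model_corr = scores.T.corr(method="spearman")   # model vs. model (Figure 11)
    dataset_corr = scores.corr(method="spearman")   # dataset vs. dataset (Figure 12)
    return model_corr, dataset_corr

# Tiny example with three models and two datasets (scores taken from Tables 11 and 12).
scores = pd.DataFrame(
    {"SyntecRetrieval": [0.85, 0.18, 0.79], "SICKFr": [0.78, 0.62, 0.78]},
    index=["bge-m3", "distilbert-base-25lang-cased", "sentence-camembert-large"],
)
model_corr, dataset_corr = spearman_heatmaps(scores)
```

With only two datasets the model-level correlations are degenerate; the actual figures use the full set of tasks and models.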
C Supplementary materials for models
We present in this section the model characteristics
we collected for the 46 evaluated models.
For evaluating prompt-based models such as intfloat/e5-mistral-7b-instruct, we provide the prompts we used in Table 8.
D Evaluation results
This section presents the results obtained for each
model on each task. To be relevant, we used the
same metrics as in MTEB, which varies from one
type of task to another:
Dataset x Task | Average # tokens | # samples | Reference | License
AmazonReviewsClassification | 49.6 | 5000 | McAuley and Leskovec (2013) | N/A
MasakhaNEWSClassification | 1398.2 | 422 | Adelani et al. (2023) | AFL-3.0
MassiveIntentClassification | 11.4 | 2974 | FitzGerald et al. (2023) | N/A
MassiveScenarioClassification | 11.4 | 2974 | FitzGerald et al. (2023) | N/A
MTOPDomainClassification | 12.5 | 3193 | Li et al. (2021) | N/A
MTOPIntentClassification | 12.5 | 3193 | Li et al. (2021) | N/A
AlloProfClusteringP2P | 1021.8 | 2556 | Lefebvre-Brossard et al. (2023) | MIT
AlloProfClusteringS2S | 8.8 | 2556 | Lefebvre-Brossard et al. (2023) | MIT
HALClusteringS2S | 25.6 | 26233 | Introduced by our paper | Apache-2.0
MasakhaNEWSClusteringP2P | 1398.1 | 422 | Adelani et al. (2023) | AFL-3.0
MasakhaNEWSClusteringS2S | 21.7 | 422 | Adelani et al. (2023) | AFL-3.0
MLSUMClusteringP2P | 1062.1 | 15828 | Scialom et al. (2020) | Other
MLSUMClusteringS2S | 20.8 | 15828 | Scialom et al. (2020) | Other
OpusparcusPC | 9.7 | 1007 | Creutz (2018) | CC-BY-NC-4.0
PawsX | 34.9 | 2000 | Yang et al. (2019) | Other
STSBenchmarkMultilingualSTS | 18.4 | 1379 | May (2021) | N/A
STS22 | 722.1 | 104 | Chen et al. (2022) | N/A
SICKFr | 15.1 | 4906 | https://huggingface.co./datasets/Lajavaness/SICK-fr | Apache-2.0
DiaBLaBitextMining | 12.02 | 5748 | Bawden et al. (2021) | CC-BY-SA-4.0
FloresBitextMining | 33.42 | 1012 | Goyal et al. (2021) | CC-BY-SA-4.0
AlloprofReranking | 48.3 - 1179.4 - 1196.4 | 2316 - 2975 - 22064 | Lefebvre-Brossard et al. (2023) | MIT
SyntecReranking | 19.2 - 402.2 - 467.2 | 100 - 100 - 917 | Introduced by our paper | Apache-2.0
AlloprofRetrieval | 48.31 - 1117.91 | 2316 - 2556 | Lefebvre-Brossard et al. (2023) | MIT
BSARDRetrieval | 144.03 - 24530.8 | 222 - 22600 | Louis and Spanakis (2022) | CC-BY-NC-SA-4.0
SyntecRetrieval | 19.22 - 295.65 | 100 - 90 | Introduced by our paper | Apache-2.0
SummEvalFr | 657.08 - 71.18 - 107.56 | 100 - 1100 - 1600 | Created from Fabbri et al. (2021) | MIT
Table 3: Details of the data used for each task. The average number of tokens of texts is computed using the
cl100k_base tokenizer. For Reranking, the three numbers refer to the queries, the pairs of queries with relevant
documents and the pairs of queries with irrelevant ones, respectively. The pairs of queries and documents are
obtained from the 90 dataset’s documents. For Retrieval datasets, the two numbers refer to the queries and the
documents, respectively. For SummEvalFr, the three numbers refer to the texts, human and machine summaries.
References to all the datasets used are available.
[Figure 3 shows a 24 x 24 lower-triangular heatmap of pairwise cosine similarities between the tasks' data, with the task names on both axes; the values lie roughly between 0.88 and 1.00.]
Figure 3: Cosine similarity between tasks’ data. Ninety random samples per task’s data are embedded using the
multilingual-e5-small model. The embeddings of each task’s data sample are averaged. The similarity between each
dataset is then calculated using cosine similarity as in (Muennighoff et al., 2022).
Figure 4: 2D projection of tasks’ data. 90 random samples per task’s data are embedded using the multilingual-e5-small
model (Wang et al., 2022). The embeddings are reduced to 2 dimensions using PCA. The centroid of each task’s
data is represented, along with the ellipse showing the standard deviation along each axis.
Label # raw #mteb_evalDescription
shs 58706 6701 Human and social sciences (Sci-
ences humaines et sociales)
sdv 11049 4803 Life science [Biology] (Sciences du
vivant [Biologie])
spi 3601 3451 Engineering science (Sciences de
l’ingénieur [Physics])
info 3446 3263 Computer Science (Informatique)
sde 2830 2754 Environment science (Sciences de
l’environnement)
phys 2003 1926 Physics ( Physique)
sdu 1177 1158 Planet and Universe [Physics]
(Planète et Univers [Physique])
math 862 824 Mathematics ( Mathématiques)
chim 764 734 Chemistry ( Chimie)
scco 652 619 Cognitive sciences (Sciences cogni-
tives)
qfin 183 N/A Economy and quantitative finance
(Économie et finance quantitative)
stat 52 N/A Statistics ( Statistiques)
other 18 N/A Other ( Autre)
stic 14 N/A N/A
nlin 12 N/A Non-linear Science [Physics] (Sci-
ence non linéaire [Physique])
electromag 3 N/A Electro-magnetism (Electro-
magnétisme)
instrum 2 N/A Instrumentation [Physics] (Instru-
mentation [Physique])
image 1 N/A Image
Table 4: Distribution of classes in HAL the raw and
mteb_eval subsets of the dataset.
Task type | Model | Score
Classification (F1-score) | TF-IDF + LR | 0.60 (±0.002)
Classification (F1-score) | TF-IDF + SVC | 0.61 (±0.001)
Classification (F1-score) | CamemBERT (fine-tuned)* | 0.60 (±0.008)
Classification (F1-score) | GPT-4 (ICL)** | 0.30
Topic Modeling | TF-IDF + LDA | 0.49 (Coherence), -8.23 (Perplexity)
Table 5: Baselines results for HAL on a classification
task and topic modeling.
* CamemBERT was fine-tuned for 5 epochs with a learning rate of 1e−4 (plus an lr scheduler) and a batch size of 64.
** Due to a limited budget, we evaluate GPT-4 ICL capabilities on a limited subset of our dataset (the first 600 samples of the test set, generated using the same seed as for the other experiments).
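For reference, a minimal scikit-learn version of the TF-IDF + LR baseline of Table 5 could look as follows. The vectorizer settings, the F1 averaging, and the solver options are assumptions, as they are not specified above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

def tfidf_lr_f1(train_titles, train_labels, test_titles, test_labels) -> float:
    """Fit a TF-IDF + logistic regression classifier and return its test F1 score."""
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(train_titles, train_labels)
    predictions = clf.predict(test_titles)
    return f1_score(test_labels, predictions, average="weighted")  # averaging is assumed
```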
• Bitext Mining: F1 score
• Classification: Accuracy
• Clustering: V measure
• Pair Classification: Average Precision (AP)
• Reranking: Mean Average Precision (MAP)
• Retrieval: Normalized Discounted Cumula-
tive Gain at k (NDCG@k)
• STS: Spearman correlation based on cosine
similarity
Document
id article-14
url https://www.syntec.fr/convention-
collective/resiliation-du-contrat-
de-travail/#article-14
title Article 14 : Préavis pendant la péri-
ode d’essai
section Résiliation du contrat de travail
content Modification Avenant n ° 7 du
5/07/1991 Au cours de cette péri-
ode, les deux parties peuvent se sé-
parer avec un préavis d’une journée
de travail pendant le premier mois.
Après le premier mois, le temps
de préavis réciproque sera d’une
semaine par mois complet passé
dans l’entreprise. Après le pre-
mier mois, le temps de préavis ré-
ciproque sera d’une semaine par
mois passé dans l’entreprise. Le
préavis donne droit au salarié de
s’absenter pour la recherche d’un
emploi dans les conditions fixées à
l’article 16. Le salarié sera payé au
prorata du temps passé pendant la
période d’essai.
Query
article article-14
question Quel est le préavis en période
d’essai ?
Figure 5: Extracts of Syntec dataset.
hal_id Domain Title
hal-02899209 shs La transformation
digitale du manage-
ment des ressources
humaines et de ses
enjeux pour les
entreprises
tel-03993881 math Sur l’approximation
numérique de
quelques problèmes
en mécanique des
fluides
Figure 6: Extracts of HAL dataset.
Figure 7: Distribution of the word count per title in HAL
dataset, mteb_eval subset.
"""
You will be given a couple of texts in
English and their translation in French.
Your task is to provide a 'rating' score on
how well the system translated the
English text into French.
Give your answer as a float on a scale of 0
to 10, where 0 means that the
system_translation is bad and does not
represent what is being said in the
original English text, and 10 means that
the translation is good and represents
the original English text.
No need to mind the quality of the text as
original English text may be of bad
quality.
Provide your feedback as follows:
Feedback:::
Total rating: (your rating, as a float
between 0 and 10)
Now here are the English and French texts.
Original text in English: {english_text}
Translation in French: {french_translation}
Feedback:::
Total rating:
"""
Figure 8: Prompt used for LLM as-judge evaluation of
SummEval dataset translation.
Human summary, original (SummEval): "The whale, Varvara, swam a round trip from Russia to Mexico, nearly 14,000 miles. The previous record was set by a humpback whale that migrated more than 10,000 miles."
Human summary, translated (SummEvalFr): "La baleine, Varvara, a parcouru à la nage un trajet aller-retour entre la Russie et le Mexique, soit près de 14 000 milles. Le précédent record avait été établi par une baleine à bosse qui avait migré sur plus de 10 000 miles."
Machine summary, original (SummEval): "north pacific gray whale has earned a spot in the record for the longest migration of a mammal ever recorded . the whale , named varvara , swam nearly 14,000 miles from the guinness worlds records . the record was set by a whale whale whale that swam a mere 10,190-mile round trip . the north coast of mexico is russian for "barbara"."
Machine summary, translated (SummEvalFr): "la baleine grise du pacifique nord a obtenu une place dans le record de la plus longue migration d'un mammifère jamais enregistrée. la baleine, nommée varvara, a nagé près de 14 000 miles depuis les records du monde guinness. le record a été établi par une baleine baleine qui a nagé un voyage aller-retour de seulement 10 190 miles. la côte nord du mexique est le nom russe pour "barbara"."
Figure 9: Extracts of SummEvalFr dataset.
Quality Rating # samples
Good quality
10.0 186
9.5 661
9.0 193
Not good enough
8.5 16
8.0 5
7.5 7
7.0 3
6.0 3
5.0 2
4.0 1
3.0 1
2.0 3
N/A 19
Table 6: Ratings provided by the LLM judge for
the quality of human summaries translations of Sum-
mEvalFr from English to French.
• Summarization: Spearman correlation based
on cosine similarity
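As an example of the STS and Summarization metric, the sketch below computes the Spearman correlation between gold scores and the cosine similarity of embedded pairs; the embedding step is left to the caller and the function name is illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine_spearman(emb_a: np.ndarray, emb_b: np.ndarray, gold_scores) -> float:
    """Spearman correlation between gold scores and pairwise cosine similarities."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    cosine = (a * b).sum(axis=1)  # cosine similarity of each (a_i, b_i) pair
    return spearmanr(cosine, gold_scores).correlation
```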
D.1 Average performance per task type
Table 9 presents the average performance of each
model on each task type.
D.2 Evaluation results per task
Tables 10, 11, 12 and 13 present the models’ perfor-
mance on each task type. Table 10 presents the per-
formance on classification and pair classification
tasks. Table 11 presents the reranking and retrieval
performance. Table 12 presents the performance
on bitext mining, semantic textual similarity and
summarization. Table 13 presents the performance
on the clustering tasks.
[Figure 10 shows a heatmap whose axes list the studied model characteristics: model ranking, finetuned vs. pretrained, number of parameters, max sequence length, embedding dimension, tuned for sentence similarity, language coverage (bilingual, English, English plus tuning on other languages, French, multilingual), and closed source.]
Figure 10: Heatmap representing cross-correlations between models’ characteristics and models’ performances.
[Figure 11, "Model Correlation Heatmap (Spearman)", shows a model-by-model Spearman correlation heatmap over the evaluated models; the colour scale ranges from about 0.3 to 1.0.]
Figure 11: Heatmap representing the Spearman correlations in terms of performance across models.
[Figure 12, "Dataset Correlation Heatmap (Spearman)", shows a dataset-by-dataset Spearman correlation heatmap over the 24 tasks; the colour scale ranges from about 0.2 to 1.0.]
Figure 12: Heatmap representing the correlation regarding model performance across tasks.
Model | Finetuned | Language | # params | Size (Gb) | Seq. Len. | Emb. dim. | License | Sentence sim
bert-base-multilingual-cased | No | multilingual | 1,78e+08 | 0.71 | 512 | 768 | Apache-2.0 | No
bert-base-multilingual-uncased | No | multilingual | 1,67e+08 | 0.67 | 512 | 768 | Apache-2.0 | No
camembert-base | No | french | 1,11e+08 | 0.44 | 514 | 768 | MIT | No
camembert-large | No | french | 3,37e+08 | 1.35 | 514 | 1024 | MIT | No
sentence-camembert-base | Yes | french | 1,11e+08 | 0.44 | 128 | 768 | Apache-2.0 | Yes
sentence-camembert-large | Yes | french | 3,37e+08 | 1.35 | 514 | 1024 | Apache-2.0 | Yes
sentence-flaubert-base | Yes | french | 1,37e+08 | 0.55 | 512 | 768 | Apache-2.0 | Yes
embed-multilingual-light-v3.0 | N/A | multilingual | N/A | N/A | 512 | 384 | Closed source | N/A
embed-multilingual-v3.0 | N/A | multilingual | N/A | N/A | 512 | 1024 | Closed source | N/A
flaubert-base-cased | No | french | 1,38e+08 | 0.55 | 512 | 768 | MIT | No
flaubert-base-uncased | No | french | 1,37e+08 | 0.55 | 512 | 768 | MIT | No
flaubert-large-cased | No | french | 3,73e+08 | 1.49 | 512 | 1024 | MIT | No
distilbert-base-25lang-cased | No | multilingual | 1,08e+08 | 0.43 | 512 | 768 | Apache-2.0 | No
distilbert-base-en-fr-cased | No | bilingual | 6,86e+07 | 0.27 | 512 | 768 | Apache-2.0 | No
distilbert-base-fr-cased | No | french | 6,17e+07 | 0.25 | 512 | 768 | Apache-2.0 | No
multilingual-e5-base | No | multilingual | 2,78e+08 | 1.11 | 512 | 768 | MIT | Yes
multilingual-e5-large | No | multilingual | 5,60e+08 | 2.24 | 512 | 1024 | MIT | Yes
multilingual-e5-small | No | multilingual | 1,18e+08 | 0.47 | 512 | 384 | MIT | Yes
e5-mistral-7b-instruct | Yes | english-plus | 7,11e+09 | 28.44 | 32768 | 4096 | MIT | Yes
udever-bloom-1b1 | Yes | multilingual | 1,07e+09 | 4.26 | 2048 | 1536 | bloom-rail-1.0 | Yes
udever-bloom-560m | Yes | multilingual | 5,59e+08 | 2.24 | 2048 | 1024 | bloom-rail-1.0 | Yes
laser2 | Yes | multilingual | 4,46e+07 | 0.18 | N/A | 1024 | BSD License | Yes
all-MiniLM-L12-v2 | Yes | english-plus | 3,34e+07 | 0.13 | 128 | 384 | Apache-2.0 | Yes
all-MiniLM-L6-v2 | Yes | english-plus | 2,27e+07 | 0.09 | 256 | 384 | Apache-2.0 | Yes
distiluse-base-multilingual-cased-v2 | Yes | multilingual | 1,35e+08 | 0.54 | 128 | 512 | Apache-2.0 | Yes
LaBSE | Yes | multilingual | 4,72e+08 | 1.89 | 256 | 768 | Apache-2.0 | Yes
multi-qa-MiniLM-L6-cos-v1 | Yes | english | 2,27e+07 | 0.09 | 512 | 384 | N/A | Yes
paraphrase-multilingual-MiniLM-L12-v2 | Yes | multilingual | 1,18e+08 | 0.47 | 128 | 384 | Apache-2.0 | Yes
sentence-t5-base | Yes | multilingual | 1,10e+08 | 0.44 | 256 | 768 | Apache-2.0 | Yes
sentence-t5-large | Yes | multilingual | 3,36e+08 | 1.34 | 256 | 768 | Apache-2.0 | Yes
sentence-t5-xl | Yes | multilingual | 1,24e+09 | 4.97 | 256 | 768 | Apache-2.0 | Yes
sentence-t5-xxl | Yes | multilingual | 4,87e+09 | 19.46 | 256 | 768 | Apache-2.0 | Yes
text2vec-base-multilingual | Yes | multilingual | 1,18e+08 | 0.47 | 256 | 384 | Apache-2.0 | Yes
text-embedding-ada-002 | N/A | multilingual | N/A | N/A | 8191 | 1536 | Closed source | N/A
text-embedding-3-small | N/A | multilingual | N/A | N/A | 8191 | 1536 | Closed source | N/A
text-embedding-3-large | N/A | multilingual | N/A | N/A | 8191 | 3072 | Closed source | N/A
mistral-embed | N/A | multilingual | N/A | N/A | 16384 | 1024 | Closed source | N/A
universal-sentence-encoder-multilingual-3 | Yes | multilingual | 6,89e+07 | 0.28 | N/A | 512 | Apache-2.0 | Yes
universal-sentence-encoder-multilingual-large-3 | Yes | multilingual | 8,52e+07 | 0.34 | N/A | 512 | Apache-2.0 | Yes
xlm-roberta-base | No | multilingual | 2,78e+08 | 1.11 | 514 | 768 | MIT | No
xlm-roberta-large | No | multilingual | 5,60e+08 | 2.24 | 514 | 1024 | MIT | No
sentence-croissant-llm-base | Yes | french | 1,28e+09 | 5.12 | 256 | 2048 | MIT | Yes
paraphrase-multilingual-mpnet-base-v2 | No | multilingual | 2,78e+08 | 1.11 | 128 | 768 | Apache-2.0 | Yes
voyage-2 | N/A | english | N/A | N/A | 4000 | 1024 | Closed source | N/A
voyage-code-2 | N/A | english | N/A | N/A | 16000 | 1536 | Closed source | N/A
Solon-embeddings-large-0.1 | Yes | french | 5.60e+08 | 2.24 | 512 | 1024 | MIT | Yes
Solon-embeddings-base-0.1 | Yes | french | 2.78e+08 | 1.11 | 512 | 768 | MIT | Yes
sentence-croissant-alpha-v0.3 | Yes | french | 1.28e+09 | 5.12 | 1024 | 2048 | MIT | Yes
sentence-croissant-alpha-v0.2 | Yes | french | 1.28e+09 | 5.12 | 1024 | 2048 | MIT | Yes
bge-m3 | Yes | multilingual | 5.68e+08 | 2.27 | 8192 | 1024 | MIT | Yes
bge-m3-custom-fr | Yes | multilingual | 5.68e+08 | 2.27 | 8192 | 1024 | MIT | Yes
Table 7: Models included in the benchmark with their main characteristics. The size in Gb is estimated using the
number of parameters counted as float32 numbers. Sentence sim refers to the fact that the model was trained on a
task that favors semantic similarity.
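The size column follows directly from the parameter count: each float32 parameter takes 4 bytes. A one-line check (the function name is ours):

```python
def size_gb(n_params: float) -> float:
    return n_params * 4 / 1e9  # float32 = 4 bytes per parameter

size_gb(5.60e8)  # ~2.24 Gb, matching e.g. multilingual-e5-large in Table 7
```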
Task type Prompt
Classification "Classify the following task: "
Clustering "Identify the topic or theme based on the text: "
Retrieval "Retrieve semantically similar text: "
Reranking "Re-rank the following text: "
Pair Classification "Classify the following pair of text: "
STS "Determine the similarity between the following text:
"
Summarization "Summarize the following text: "
Bitext Mining "Translate the following text: "
Table 8: Prompts used for the evaluation of e5-mistral-7b-instruct.
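These prompts are prepended to the input text before encoding. The sketch below shows the mechanism with sentence-transformers; the exact prompt template expected by e5-mistral-7b-instruct (for example an "Instruct:"/"Query:" format) may differ, so this is only an illustration.

```python
from sentence_transformers import SentenceTransformer

# Instructions taken from Table 8 (subset shown).
TASK_PROMPTS = {
    "Classification": "Classify the following task: ",
    "Clustering": "Identify the topic or theme based on the text: ",
    "Retrieval": "Retrieve semantically similar text: ",
}

def encode_with_prompt(model: SentenceTransformer, texts, task_type: str):
    """Prepend the task instruction to each text before encoding."""
    prompt = TASK_PROMPTS[task_type]
    return model.encode([prompt + text for text in texts], normalize_embeddings=True)

# model = SentenceTransformer("intfloat/e5-mistral-7b-instruct")  # large download
# embeddings = encode_with_prompt(model, ["Quel est le préavis en période d'essai ?"], "Retrieval")
```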
Average
BitextMining
Classification
Clustering
PairClassification
Reranking
Retrieval
STS
Summarization
bge-m3 0.68 0.95 0.69 0.43 0.77 0.81 0.65 0.81 0.31
distilbert-base-25lang-cased 0.43 0.65 0.46 0.37 0.69 0.34 0.10 0.53 0.31
distilbert-base-en-fr-cased 0.43 0.65 0.46 0.38 0.69 0.34 0.10 0.54 0.31
distilbert-base-fr-cased 0.41 0.45 0.46 0.38 0.69 0.34 0.10 0.54 0.31
sentence-camembert-large 0.65 0.90 0.66 0.43 0.77 0.72 0.56 0.82 0.31
sentence-flaubert-base 0.59 0.80 0.61 0.41 0.76 0.65 0.43 0.79 0.31
Solon-embeddings-base-0.1 0.64 0.95 0.67 0.43 0.76 0.78 0.41 0.78 0.31
Solon-embeddings-large-0.1 0.67 0.96 0.69 0.42 0.77 0.79 0.63 0.80 0.30
sentence-croissant-llm-base 0.62 0.91 0.65 0.43 0.77 0.68 0.52 0.76 0.29
bert-base-multilingual-cased 0.44 0.75 0.46 0.34 0.70 0.38 0.10 0.50 0.29
bert-base-multilingual-uncased 0.49 0.76 0.48 0.41 0.70 0.46 0.19 0.56 0.31
camembert-base 0.35 0.18 0.42 0.34 0.68 0.31 0.02 0.57 0.30
camembert-large 0.37 0.26 0.49 0.36 0.65 0.34 0.07 0.59 0.17
sentence-camembert-base 0.57 0.72 0.57 0.36 0.74 0.66 0.43 0.78 0.29
embed-multilingual-light-v3.0 0.63 0.89 0.61 0.39 0.74 0.76 0.55 0.78 0.31
embed-multilingual-v3.0 0.66 0.94 0.67 0.41 0.77 0.79 0.54 0.81 0.31
flaubert _base _cased 0.34 0.23 0.25 0.27 0.67 0.36 0.08 0.52 0.31
flaubert _base _uncased 0.31 0.12 0.23 0.22 0.68 0.40 0.09 0.43 0.29
flaubert _large _cased 0.27 0.11 0.25 0.25 0.65 0.30 0.01 0.33 0.29
e5-mistral-7b-instruct 0.68 0.95 0.64 0.50 0.76 0.82 0.64 0.79 0.31
multilingual-e5-base 0.65 0.95 0.65 0.43 0.75 0.75 0.56 0.78 0.31
multilingual-e5-large 0.66 0.95 0.66 0.40 0.76 0.76 0.59 0.81 0.31
multilingual-e5-small 0.63 0.94 0.60 0.39 0.75 0.73 0.52 0.78 0.32
udever-bloom-1b1 0.47 0.52 0.55 0.35 0.74 0.43 0.28 0.62 0.29
udever-bloom-560m 0.36 0.32 0.30 0.29 0.71 0.39 0.11 0.51 0.24
laser2 0.52 0.95 0.58 0.30 0.82 0.44 0.13 0.67 0.31
bge-m3-custom-fr 0.66 0.94 0.67 0.40 0.77 0.79 0.59 0.80 0.30
sentence _croissant _alpha _v0.2 0.66 0.92 0.66 0.44 0.80 0.77 0.61 0.74 0.30
sentence _croissant _alpha _v0.3 0.67 0.92 0.66 0.46 0.79 0.78 0.65 0.77 0.31
mistral-embed 0.68 0.92 0.69 0.46 0.78 0.80 0.68 0.80 0.31
LaBSE 0.59 0.96 0.65 0.39 0.74 0.61 0.33 0.74 0.30
all-MiniLM-L12-v2 0.51 0.48 0.52 0.34 0.72 0.68 0.43 0.67 0.27
all-MiniLM-L6-v2 0.50 0.40 0.52 0.35 0.71 0.65 0.38 0.68 0.28
distiluse-base-multilingual-cased-v2 0.60 0.94 0.64 0.39 0.72 0.69 0.40 0.75 0.28
multi-qa-MiniLM-L6-cos-v1 0.49 0.38 0.51 0.33 0.72 0.64 0.39 0.67 0.28
paraphrase-multilingual-MiniLM-L12-v2 0.60 0.93 0.60 0.39 0.74 0.68 0.44 0.75 0.29
paraphrase-multilingual-mpnet-base-v2 0.63 0.94 0.63 0.40 0.76 0.74 0.50 0.78 0.30
sentence-t5-base 0.59 0.83 0.58 0.41 0.72 0.70 0.45 0.75 0.30
sentence-t5-large 0.62 0.90 0.62 0.42 0.76 0.73 0.51 0.75 0.30
sentence-t5-xl 0.65 0.91 0.65 0.43 0.78 0.76 0.55 0.77 0.32
sentence-t5-xxl 0.67 0.94 0.67 0.44 0.79 0.78 0.60 0.78 0.30
text2vec-base-multilingual 0.57 0.92 0.56 0.34 0.79 0.59 0.32 0.78 0.29
text-embedding-3-large 0.71 0.96 0.74 0.48 0.80 0.86 0.73 0.81 0.30
text-embedding-3-small 0.69 0.95 0.70 0.49 0.77 0.81 0.68 0.79 0.30
text-embedding-ada-002 0.69 0.95 0.69 0.51 0.77 0.82 0.67 0.78 0.30
voyage-code-2 0.67 0.86 0.67 0.47 0.77 0.81 0.68 0.78 0.28
universal-sentence-encoder-multilingual-3 0.60 0.94 0.64 0.43 0.72 0.68 0.35 0.75 0.28
universal-sentence-encoder-multilingual-large-3 0.59 0.95 0.66 0.37 0.74 0.67 0.33 0.74 0.28
xlm-roberta-base 0.36 0.48 0.31 0.28 0.68 0.30 0.01 0.51 0.29
xlm-roberta-large 0.35 0.35 0.31 0.29 0.69 0.35 0.03 0.49 0.29
Table 9: Average performance of models per task category.
MassiveScenario
MassiveIntent
MasakhaNEWS
MTOPIntent
MTOPDomain
AmazonReviews
PawsX
OpusparcusPC
Classification PairClassification
bge-m3 0.73 0.67 0.77 0.62 0.89 0.45 0.60 0.93
distilbert-base-25lang-cased 0.44 0.35 0.68 0.35 0.62 0.29 0.51 0.86
distilbert-base-en-fr-cased 0.44 0.35 0.68 0.35 0.62 0.29 0.51 0.86
distilbert-base-fr-cased 0.44 0.35 0.68 0.35 0.62 0.29 0.51 0.86
sentence-camembert-large 0.70 0.64 0.74 0.61 0.87 0.38 0.61 0.94
sentence-flaubert-base 0.63 0.59 0.71 0.53 0.79 0.40 0.58 0.93
Solon-embeddings-base-0.1 0.70 0.65 0.75 0.62 0.87 0.41 0.59 0.93
Solon-embeddings-large-0.1 0.71 0.67 0.76 0.69 0.89 0.42 0.60 0.94
sentence-croissant-llm-base 0.65 0.59 0.79 0.63 0.86 0.35 0.63 0.91
bert-base-multilingual-cased 0.44 0.37 0.64 0.38 0.64 0.29 0.53 0.87
bert-base-multilingual-uncased 0.44 0.38 0.76 0.39 0.64 0.29 0.53 0.87
camembert-base 0.39 0.31 0.66 0.29 0.58 0.30 0.52 0.83
sentence-camembert-base 0.61 0.52 0.70 0.43 0.77 0.36 0.57 0.92
sentence-camembert-large 0.69 0.63 0.81 0.59 0.86 0.38 0.60 0.95
embed-multilingual-light-v3.0 0.59 0.56 0.83 0.50 0.81 0.39 0.57 0.91
embed-multilingual-v3.0 0.67 0.63 0.83 0.61 0.86 0.42 0.61 0.94
flaubert_base_cased 0.11 0.07 0.71 0.09 0.26 0.25 0.52 0.82
flaubert_base_uncased 0.11 0.06 0.63 0.09 0.28 0.24 0.53 0.82
flaubert_large_cased 0.23 0.16 0.56 0.10 0.24 0.22 0.54 0.75
e5-mistral-7b-instruct 0.70 0.60 0.75 0.53 0.82 0.44 0.60 0.92
multilingual-e5-base 0.66 0.61 0.80 0.56 0.85 0.41 0.57 0.93
multilingual-e5-large 0.68 0.64 0.79 0.59 0.86 0.42 0.59 0.94
multilingual-e5-small 0.61 0.56 0.78 0.46 0.81 0.40 0.56 0.93
udever-bloom-1b1 0.50 0.43 0.81 0.51 0.69 0.35 0.62 0.86
udever-bloom-560m 0.22 0.15 0.68 0.16 0.35 0.27 0.60 0.82
laser2 0.59 0.53 0.66 0.57 0.76 0.34 0.70 0.94
bge-m3-custom-fr 0.75 0.67 0.70 0.61 0.90 0.42 0.61 0.93
sentence_croissant_alpha_v0.2 0.70 0.64 0.76 0.61 0.89 0.38 0.67 0.93
sentence_croissant_alpha_v0.3 0.70 0.65 0.76 0.59 0.88 0.36 0.65 0.93
mistral-embed 0.70 0.63 0.81 0.66 0.90 0.42 0.62 0.93
LaBSE 0.65 0.60 0.77 0.62 0.84 0.39 0.55 0.94
all-MiniLM-L12-v2 0.54 0.45 0.72 0.39 0.76 0.28 0.56 0.87
all-MiniLM-L6-v2 0.51 0.43 0.74 0.40 0.75 0.27 0.55 0.87
distiluse-base-multilingual-cased-v2 0.67 0.60 0.77 0.56 0.85 0.36 0.51 0.92
multi-qa-MiniLM-L6-cos-v1 0.50 0.43 0.76 0.37 0.73 0.27 0.57 0.88
paraphrase-multilingual-MiniLM-L12-v2 0.65 0.58 0.76 0.48 0.78 0.37 0.57 0.92
paraphrase-multilingual-mpnet-base-v2 0.68 0.62 0.78 0.52 0.80 0.40 0.58 0.93
sentence-t5-base 0.60 0.51 0.81 0.44 0.75 0.37 0.55 0.89
sentence-t5-large 0.64 0.57 0.80 0.48 0.80 0.41 0.60 0.91
sentence-t5-xl 0.66 0.61 0.80 0.54 0.85 0.44 0.63 0.92
sentence-t5-xxl 0.69 0.66 0.79 0.58 0.86 0.46 0.64 0.94
text2vec-base-multilingual 0.58 0.52 0.74 0.45 0.72 0.34 0.66 0.92
text-embedding-3-large 0.76 0.71 0.82 0.74 0.93 0.46 0.65 0.96
text-embedding-3-small 0.73 0.68 0.76 0.68 0.91 0.43 0.61 0.94
text-embedding-ada-002 0.71 0.65 0.82 0.64 0.89 0.44 0.60 0.94
voyage-code-2 0.70 0.63 0.82 0.59 0.88 0.42 0.61 0.93
universal-sentence-encoder-multilingual-3 0.70 0.61 0.82 0.54 0.85 0.34 0.52 0.91
universal-sentence-encoder-multilingual-large-3 0.73 0.66 0.72 0.64 0.88 0.35 0.54 0.93
xlm-roberta-base 0.23 0.14 0.60 0.19 0.44 0.27 0.51 0.85
xlm-roberta-large 0.24 0.16 0.66 0.15 0.37 0.27 0.53 0.84
Table 10: Performance of each model for Classification and Pair Classification.
SyntecReranking
AlloprofReranking
SyntecRetrieval
BSARDRetrieval
AlloprofRetrieval
Reranking Retrieval
bge-m3 0.88 0.74 0.85 0.60 0.49
distilbert-base-25lang-cased 0.39 0.29 0.18 0.11 0.01
distilbert-base-en-fr-cased 0.39 0.29 0.18 0.11 0.01
distilbert-base-fr-cased 0.39 0.29 0.18 0.11 0.01
sentence-camembert-large 0.82 0.63 0.79 0.56 0.33
sentence-flaubert-base 0.81 0.48 0.69 0.42 0.18
Solon-embeddings-base-0.1 0.85 0.71 0.81 0.00 0.41
Solon-embeddings-large-0.1 0.87 0.72 0.85 0.58 0.47
sentence-croissant-llm-base 0.78 0.57 0.74 0.52 0.30
bert-base-multilingual-cased 0.43 0.32 0.19 0.10 0.02
bert-base-multilingual-uncased 0.59 0.33 0.35 0.16 0.06
camembert-base 0.36 0.26 0.06 0.00 0.00
camembert-large 0.36 0.33 0.18 0.01 0.02
sentence-camembert-base 0.74 0.58 0.69 0.39 0.22
embed-multilingual-light-v3.0 0.82 0.70 0.77 0.52 0.35
embed-multilingual-v3.0 0.84 0.74 0.79 0.44 0.38
flaubert_base_cased 0.43 0.29 0.21 0.02 0.02
flaubert_base_uncased 0.49 0.30 0.22 0.03 0.02
flaubert_large_cased 0.32 0.29 0.02 0.00 0.01
e5-mistral-7b-instruct 0.90 0.74 0.83 0.64 0.45
multilingual-e5-base 0.83 0.67 0.80 0.53 0.36
multilingual-e5-large 0.83 0.69 0.81 0.59 0.38
multilingual-e5-small 0.82 0.65 0.76 0.52 0.27
udever-bloom-1b1 0.48 0.39 0.41 0.32 0.12
udever-bloom-560m 0.47 0.31 0.24 0.06 0.02
laser2 0.49 0.39 0.29 0.08 0.03
bge-m3-custom-fr 0.85 0.74 0.79 0.53 0.45
sentence_croissant_alpha_v0.2 0.82 0.72 0.79 0.60 0.45
sentence_croissant_alpha_v0.3 0.82 0.74 0.80 0.66 0.49
mistral-embed 0.81 0.78 0.79 0.68 0.57
LaBSE 0.68 0.55 0.55 0.23 0.20
all-MiniLM-L12-v2 0.69 0.67 0.61 0.34 0.33
all-MiniLM-L6-v2 0.67 0.63 0.60 0.27 0.28
distiluse-base-multilingual-cased-v2 0.75 0.62 0.65 0.29 0.27
multi-qa-MiniLM-L6-cos-v1 0.65 0.63 0.58 0.30 0.30
paraphrase-multilingual-MiniLM-L12-v2 0.73 0.62 0.66 0.38 0.27
paraphrase-multilingual-mpnet-base-v2 0.81 0.67 0.76 0.43 0.31
sentence-t5-base 0.76 0.63 0.67 0.40 0.28
sentence-t5-large 0.78 0.68 0.71 0.47 0.35
sentence-t5-xl 0.81 0.71 0.74 0.50 0.40
sentence-t5-xxl 0.82 0.75 0.79 0.56 0.46
text2vec-base-multilingual 0.63 0.56 0.50 0.26 0.19
text-embedding-3-large 0.92 0.80 0.87 0.73 0.60
text-embedding-3-small 0.89 0.74 0.87 0.66 0.52
text-embedding-ada-002 0.89 0.76 0.86 0.64 0.52
voyage-code-2 0.87 0.76 0.83 0.68 0.53
universal-sentence-encoder-multilingual-3 0.74 0.62 0.70 0.00 0.35
universal-sentence-encoder-multilingual-large-3 0.69 0.64 0.64 0.00 0.34
xlm-roberta-base 0.32 0.28 0.03 0.00 0.00
xlm-roberta-large 0.39 0.31 0.07 0.01 0.01
Table 11: Performance of each model for Retrieval and Reranking.
Flores_fr-en
Flores_en-fr
DiaBla_fr-en
STSBenchmarkMultilingual
STS22
SICKFr
SummEvalFr
BitextMining STS Summarization
bge-m3 1.00 1.00 0.85 0.82 0.82 0.78 0.31
distilbert-base-25lang-cased 0.92 0.91 0.11 0.57 0.41 0.62 0.31
distilbert-base-en-fr-cased 0.92 0.91 0.11 0.57 0.42 0.62 0.31
distilbert-base-fr-cased 0.63 0.65 0.06 0.57 0.43 0.62 0.31
sentence-camembert-large 0.99 1.00 0.70 0.86 0.82 0.78 0.31
sentence-flaubert-base 0.96 0.97 0.47 0.86 0.74 0.78 0.31
Solon-embeddings-base-0.1 1.00 1.00 0.85 0.79 0.81 0.75 0.31
Solon-embeddings-large-0.1 1.00 1.00 0.87 0.80 0.83 0.77 0.30
sentence-croissant-llm-base 1.00 1.00 0.74 0.79 0.79 0.70 0.29
bert-base-multilingual-cased 0.97 0.98 0.30 0.52 0.39 0.59 0.29
bert-base-multilingual-uncased 0.95 0.98 0.36 0.55 0.56 0.58 0.31
camembert-base 0.26 0.25 0.04 0.55 0.61 0.54 0.30
sentence-camembert-base 0.90 0.90 0.36 0.82 0.78 0.74 0.29
sentence-camembert-large 0.99 1.00 0.68 0.86 0.82 0.78 0.31
embed-multilingual-light-v3.0 1.00 1.00 0.66 0.76 0.83 0.76 0.31
embed-multilingual-v3.0 1.00 1.00 0.83 0.82 0.83 0.79 0.31
flaubert_base_cased 0.31 0.36 0.02 0.37 0.65 0.54 0.31
flaubert_base_uncased 0.25 0.08 0.03 0.33 0.55 0.42 0.29
flaubert_large_cased 0.15 0.17 0.01 0.16 0.49 0.35 0.29
e5-mistral-7b-instruct 1.00 1.00 0.85 0.83 0.76 0.79 0.31
multilingual-e5-base 1.00 1.00 0.85 0.81 0.78 0.76 0.31
multilingual-e5-large 1.00 1.00 0.85 0.83 0.80 0.79 0.31
multilingual-e5-small 1.00 1.00 0.82 0.79 0.80 0.76 0.32
udever-bloom-1b1 0.75 0.78 0.03 0.50 0.77 0.60 0.29
udever-bloom-560m 0.50 0.37 0.08 0.37 0.61 0.55 0.24
laser2 1.00 1.00 0.86 0.70 0.65 0.65 0.31
bge-m3-custom-fr 1.00 1.00 0.83 0.81 0.82 0.76 0.30
sentence_croissant_alpha_v0.2 1.00 1.00 0.75 0.73 0.79 0.69 0.30
sentence_croissant_alpha_v0.3 1.00 1.00 0.77 0.78 0.81 0.72 0.31
mistral-embed 1.00 1.00 0.75 0.80 0.83 0.76 0.31
LaBSE 1.00 1.00 0.88 0.75 0.78 0.70 0.30
all-MiniLM-L12-v2 0.71 0.62 0.10 0.67 0.70 0.63 0.27
all-MiniLM-L6-v2 0.62 0.56 0.03 0.65 0.77 0.62 0.28
distiluse-base-multilingual-cased-v2 1.00 1.00 0.83 0.77 0.76 0.72 0.28
multi-qa-MiniLM-L6-cos-v1 0.55 0.50 0.09 0.64 0.75 0.62 0.28
paraphrase-multilingual-MiniLM-L12-v2 1.00 1.00 0.78 0.80 0.71 0.75 0.29
paraphrase-multilingual-mpnet-base-v2 1.00 1.00 0.81 0.85 0.74 0.76 0.30
sentence-t5-base 0.97 0.96 0.55 0.74 0.78 0.72 0.30
sentence-t5-large 0.99 0.99 0.71 0.78 0.75 0.73 0.30
sentence-t5-xl 0.99 0.99 0.76 0.79 0.77 0.75 0.32
sentence-t5-xxl 1.00 1.00 0.83 0.81 0.77 0.77 0.30
text2vec-base-multilingual 0.99 0.99 0.78 0.83 0.74 0.77 0.29
text-embedding-3-large 1.00 1.00 0.88 0.83 0.82 0.79 0.30
text-embedding-3-small 1.00 1.00 0.86 0.81 0.81 0.76 0.30
text-embedding-ada-002 0.99 0.99 0.86 0.78 0.81 0.76 0.30
voyage-code-2 1.00 0.99 0.60 0.79 0.80 0.74 0.28
universal-sentence-encoder-multilingual-3 1.00 1.00 0.82 0.75 0.78 0.71 0.28
universal-sentence-encoder-multilingual-large-3 1.00 1.00 0.84 0.78 0.71 0.74 0.28
xlm-roberta-base 0.70 0.53 0.21 0.46 0.57 0.49 0.29
xlm-roberta-large 0.65 0.26 0.13 0.42 0.55 0.50 0.29
Table 12: Performance of each model for Bitext Mining, Semantic Textual Similarity (STS) and Summarization.
MasakhaNEWSS2S
MasakhaNEWSP2P
MLSUMS2S
MLSUMP2P
HALS2S
AlloProfS2S
AlloProfP2P
Clustering
bge-m3 0.42 0.45 0.44 0.43 0.31 0.37 0.59
distilbert-base-25lang-cased 0.33 0.32 0.31 0.41 0.24 0.43 0.57
distilbert-base-en-fr-cased 0.34 0.34 0.31 0.41 0.25 0.42 0.57
distilbert-base-fr-cased 0.35 0.34 0.31 0.41 0.24 0.43 0.57
sentence-camembert-large 0.37 0.44 0.43 0.43 0.32 0.40 0.62
sentence-flaubert-base 0.30 0.49 0.41 0.41 0.32 0.40 0.57
Solon-embeddings-base-0.1 0.36 0.50 0.42 0.43 0.30 0.37 0.61
Solon-embeddings-large-0.1 0.31 0.46 0.43 0.43 0.32 0.37 0.63
sentence-croissant-llm-base 0.41 0.54 0.34 0.43 0.29 0.33 0.64
bert-base-multilingual-cased 0.24 0.24 0.32 0.41 0.25 0.43 0.51
bert-base-multilingual-uncased 0.42 0.50 0.31 0.43 0.26 0.35 0.61
camembert-base 0.27 0.44 0.27 0.41 0.16 0.29 0.54
camembert-large 0.33 0.42 0.35 0.44 0.03 0.34 0.59
sentence-camembert-base 0.31 0.36 0.27 0.36 0.25 0.39 0.59
embed-multilingual-light-v3.0 0.29 0.57 0.33 0.43 0.20 0.31 0.62
embed-multilingual-v3.0 0.32 0.53 0.35 0.45 0.24 0.36 0.64
flaubert_base_cased 0.21 0.42 0.17 0.39 0.04 0.14 0.53
flaubert_base_uncased 0.23 0.28 0.15 0.33 0.02 0.13 0.43
flaubert_large_cased 0.25 0.26 0.19 0.38 0.07 0.22 0.41
e5-mistral-7b-instruct 0.65 0.38 0.44 0.45 0.37 0.58 0.64
multilingual-e5-base 0.51 0.48 0.39 0.43 0.28 0.33 0.62
multilingual-e5-large 0.31 0.41 0.38 0.44 0.28 0.32 0.63
multilingual-e5-small 0.39 0.40 0.38 0.43 0.21 0.33 0.61
udever-bloom-1b1 0.27 0.40 0.30 0.44 0.16 0.27 0.62
udever-bloom-560m 0.21 0.38 0.25 0.36 0.08 0.22 0.54
laser2 0.30 0.32 0.27 0.35 0.12 0.26 0.48
bge-m3-custom-fr 0.42 0.29 0.42 0.42 0.31 0.39 0.58
sentence_croissant_alpha_v0.2 0.32 0.56 0.44 0.45 0.33 0.38 0.62
sentence_croissant_alpha_v0.3 0.38 0.58 0.44 0.44 0.35 0.41 0.60
mistral-embed 0.40 0.48 0.43 0.45 0.35 0.49 0.62
LaBSE 0.38 0.46 0.35 0.42 0.25 0.32 0.55
all-MiniLM-L12-v2 0.32 0.43 0.29 0.34 0.25 0.32 0.46
all-MiniLM-L6-v2 0.41 0.35 0.28 0.37 0.23 0.32 0.52
distiluse-base-multilingual-cased-v2 0.33 0.54 0.35 0.40 0.22 0.35 0.56
multi-qa-MiniLM-L6-cos-v1 0.27 0.54 0.26 0.35 0.14 0.26 0.49
paraphrase-multilingual-MiniLM-L12-v2 0.34 0.37 0.37 0.40 0.30 0.42 0.56
paraphrase-multilingual-mpnet-base-v2 0.31 0.42 0.38 0.41 0.31 0.45 0.54
sentence-t5-base 0.36 0.62 0.30 0.41 0.22 0.36 0.58
sentence-t5-large 0.31 0.59 0.32 0.42 0.25 0.40 0.62
sentence-t5-xl 0.32 0.63 0.34 0.42 0.27 0.41 0.60
sentence-t5-xxl 0.38 0.61 0.35 0.42 0.30 0.44 0.61
text2vec-base-multilingual 0.33 0.39 0.30 0.36 0.21 0.33 0.49
text-embedding-3-large 0.40 0.53 0.46 0.46 0.37 0.54 0.62
text-embedding-3-small 0.55 0.45 0.46 0.46 0.36 0.51 0.61
text-embedding-ada-002 0.49 0.68 0.42 0.45 0.35 0.54 0.65
voyage-code-2 0.35 0.57 0.41 0.45 0.35 0.51 0.62
universal-sentence-encoder-multilingual-3 0.40 0.61 0.36 0.44 0.24 0.38 0.57
universal-sentence-encoder-multilingual-large-3 0.40 0.24 0.38 0.41 0.23 0.38 0.54
xlm-roberta-base 0.24 0.29 0.24 0.40 0.09 0.20 0.52
xlm-roberta-large 0.22 0.34 0.19 0.43 0.06 0.21 0.57
Table 13: Performance of each model for Clustering.
NAVAIR 00-80T-80
AERODYNAMICS FOR NAVAL
AVIATORS
BY
H. H. HURT, JR.
UNIVERSITY OF SOUTHERN CALIFORNIA
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited.
DESTRUCTION NOTICE - For unclassified, limited documents, destroy by any method that will
prevent disclosure of contents or reconstruction of the document.
PUBLISHED BY DIRECTION OF COMMANDER, NAVAL AIR SYSTEMS COMMAND
REVISED JANUARY 1965
Reproduction for non-military use of the information or illustrations contained in this publication is not permitted without specific approval of the issuing service (NAVAIR or USAF). The policy for use of Classified Publications is established for the Air Force in AFR 205-1 and for the Navy in Navy Regulations, Article 1509.
LIST OF CHANGED PAGES ISSUED
INSERT LATEST CHANGED PAGES. DESTROY SUPERSEDED PAGES.
NOTE: The portion of the text affected by the current change is indicated by a vertical line in the outer margins of the page.
* The asterisk indicates pages changed, added or deleted by the current change.
ADDITIONAL COPIES OF THIS PUBLICATION MAY BE OBTAINED AS FOLLOWS:
USAF ACTIVITIES - In accordance with Technical Order No. 00-5-1.
NAVY ACTIVITIES - Use DD FORM 1348 and submit in accordance with the instructions contained in NAVSUP PUBLICATION 437 - Military Standard Requisitioning and Issue Procedures.
For information on other available material and details of distribution refer to NAVSUP PUBLICATION 2002, SECTION VIII, PART C and NAVAIR 00-500A.
NAVAIR 00-80T-80
02 JANUARY 1965
NAVAIR 00-80T-80 DATED 01 JANUARY 1965 CHANGED THE DISTRIBUTION STATEMENT AND DESTRUCTION NOTICE ON THE TITLE PAGE. PLEASE REMOVE AND DISCARD THE TITLE AND A PAGES AND REPLACE THEM WITH THE ATTACHED CORRECTED COPY.
PLACE THIS NOTICE SHEET BEHIND THE TITLE PAGE AFTER COMPLETING THE REQUIRED ACTION.
NOTICE
0801LP1093899