REROUTING LLM ROUTERS
A PREPRINT
Avital Shafran
The Hebrew University
of Jerusalem
Roei Schuster
Wild Moose
Thomas Ristenpart
Cornell Tech
Vitaly Shmatikov
Cornell Tech
ABSTRACT
LLM routers aim to balance quality and cost of generation by classifying queries and routing them to
a cheaper or more expensive LLM depending on their complexity. Routers represent one type of what
we call LLM control planes: systems that orchestrate use of one or more LLMs. In this paper, we
investigate routers’ adversarial robustness.
We first define LLM control plane integrity, i.e., robustness of LLM orchestration to adversarial inputs, as a distinct problem in AI safety. Next, we demonstrate that an adversary can generate query-independent token sequences we call “confounder gadgets” that, when added to any query, cause LLM routers to send the query to a strong LLM.
Our quantitative evaluation shows that this attack is successful both in white-box and black-box settings against a variety of open-source and commercial routers, and that confounding queries do not affect the quality of LLM responses. Finally, we demonstrate that gadgets can be effective while maintaining low perplexity, and thus perplexity-based filtering is not an effective defense. We finish by investigating alternative defenses.
1 Introduction
Large language models (LLMs) exhibit remarkable capabilities on many tasks. Today, hundreds of open-source and
proprietary LLMs are available at different prices, ranging from expensive, state-of-the-art models to cheaper, smaller,
less capable ones. LLM operators typically provide API access to their models (especially higher-quality models) on a
pay-per-query basis. This imposes non-trivial costs on LLM-based applications and systems.
Developers who want to integrate LLMs into their applications must therefore consider both utility and cost. They want
to maximize the quality of responses to their queries while minimizing the cost. The two objectives conflict with each
other: larger models tend to generate higher-quality answers but charge more per query. For example, at the time of
this writing, GPT-3.5-turbo costs $0.5/$1.5 per 1M input/output tokens, GPT-4o-mini $0.15/$0.6, GPT-4o $2.5/$10,
o1-preview $15/$60. The difference in quality between models is not uniform across queries. For some queries, even a
cheap model can generate an acceptable response. More complex queries require an expensive model to obtain a quality
answer.
A natural solution to balancing performance and economic considerations is to take advantage of the availability of multiple LLMs at different price-performance points. Recently proposed LLM routing systems [5, 12, 27, 47, 53] orchestrate
two or more LLMs and adaptively route each query to the cheapest LLM they deem likely to generate a response of
sufficient quality. In the two-LLM case, let Ms be an expensive, high-quality model and Mw a weaker, lower-grade one.
Given query q, the routing algorithm R(·) applies a classifier to q that outputs 0 if Mw is sufficient for answering q, or 1
if Ms is required. The system then routes q accordingly.
LLM routing is an example of a general class of systems we call LLM control planes, which orchestrate the use of multiple
LLMs to process inputs, as further described in Section 2.
Our contributions. First, we introduce LLM control plane integrity as a novel problem in AI safety. Recently proposed
LLM control-plane algorithms are learned, calibrated classifiers (see Section 2). Their inputs are queries from potentially
adversarial users. Robustness of control-plane algorithms to adversarial queries is a new problem, distinct from adversarial
robustness of the underlying LLMs.
arXiv:2501.01818v1 [cs.CR] 3 Jan 2025
Figure 1: LLM routers classify queries and route complex ones to an expensive/strong model, others to a cheaper/weak
model. To control costs, LLM routers can be calibrated to maintain (for an expected workload) a specific ratio between
queries sent to the strong and weak models.
To initiate the study of this problem, we show that existing LLM routing algorithms are not adversarially robust. We
design, implement, and evaluate a method that generates query-independent adversarial token sequences we call “confounder gadgets.” If a gadget is added to any query, this query is routed to the strong model with high probability. Next,
we show that this attack is effective even in the transfer setting where the adversary does not have full knowledge of the
target LLM router (it is black-box), but has access to another router (e.g., an internally trained surrogate). We also evaluate
the integrity of commercial LLM routers, showing that they can be confounded as well.
Third, we investigate defenses. Our basic method generates gadgets that have anomalously high perplexity. Confounded
queries are thus easily distinguished from normal queries and can be filtered out by the routing system. Unfortunately, this
defense can be evaded by an adversary who incorporates a low-perplexity objective into the gadget generation algorithm,
producing gadgets that have low perplexity—and yet are effective at re-routing queries to the strong model. We also
discuss higher-level defenses, such as identifying users whose queries are routed to the strong model with abnormal
frequency.
Routing attacks can be deployed for various adversarial objectives, e.g., to ensure that the adversary always obtains the highest-quality answer regardless of the target application’s internal routing policies and cost constraints, or to maliciously inflate the target’s LLM costs. As LLM control planes grow in importance and sophistication, we hope that this work will motivate further research on their adversarial robustness.
2 LLM Control Planes and Routing
Inference using large language models (LLMs) is traditionally monolithic: a single model is applied to an input or sequence of inputs. This methodology can be sub-optimal for various reasons. State-of-the-art models are often expensive, with API access to LLMs costing as much as several dollars for each query. Elsewhere, distinct LLMs may excel at different tasks, and selectively using them may improve overall quality on a diverse workload. Finally, combining multiple LLMs, even ones all trained for similar tasks, may become increasingly prevalent as performance improvements of individual LLMs plateau [8–10].
Researchers and practitioners are therefore now developing inference architectures that use multiple LLMs to answer
queries. These LLMs are orchestrated by what we call an LLM control plane (borrowing the terminology from networking [13]). The control plane may route queries or parts of queries to different LLMs, derive new strings to query the underlying LLMs, combine answers from underlying LLMs, and more.
LLM routers. A prominent example of this emerging class of LLM control planes is the LLM router [27, 41, 47, 53, 59]. LLM routers decide which of two (or, sometimes, more) LLMs to use to answer a query. In prescriptive routing,
the router applies some lightweight classifier to the input query that determines which underlying LLM to utilize for a
response. The classifier is itself a learned function that scores the complexity of the query. Deployments can then configure
a score threshold for when to route a query to the more expensive LLM. This threshold can be tuned using representative
workloads to achieve a desired cost-performance trade-off. Figure 1 shows the basic workflow of binary LLM routers.
Non-prescriptive routing [15, 20, 68] uses the responses from one or more underlying LLMs to determine which response
to return to the user. For example, FrugalGPT [20] submits the query to a sequence of models (ordered by price) called a
cascade, stopping when it obtains a response classified by the router as sufficient.
In contrast to routers motivated by controlling costs, several LLM router designs focus solely on improving quality of
responses [31, 45, 57, 58].
The LLM routers described thus far do not modify the queries or individual LLM responses. Other types of control planes
do. Ensemble approaches such as mixture-of-experts (MoE) [29, 30, 52, 56] architectures select a subset of underlying
models to apply to each token of a query and merge their responses. LLM synthesis [40] architectures operate similarly,
but route the entire query to a subset of underlying LLMs and merge their responses. These approaches reduce inference
costs by using fewer and/or less complex underlying models.
Applications of LLM routers. A key use case for LLM routers is to help LLM-based applications reduce costs. Several
commercial routers, including Unify [12], Martian [5], NotDiamond [7], and others, offer this as a service. By replacing a
few lines of code, the application can send user queries to a router service, rather than directly to some LLM provider. The
service selects the optimal LLM and forwards the queries. Commercial router services claim that this results in significant
cost savings: up to 98% in the case of Martian [5], and 10× in the case of NotDiamond [7].
3 LLM Control Plane Integrity
In this section, we define LLM control plane integrity. Informally, it means that the decisions the control plane’s algorithms make about queries to the underlying LLMs cannot be subverted by adversarial queries. Looking ahead, we will focus
on one class of control plane: predictive LLM routing as used to manage cost.
Formalizing control planes. An LLM control plane Rω is a potentially randomized algorithm. It is parameterized by
a string ω, called the parameters. It utilizes some number n of LLMs denoted by M. We will mostly focus on the
case of n = 2, and, for reasons that will be clear in a moment, use Ms (“strong”) and Mw (“weak”) to denote the two
underlying LLMs. Then inference on an input x ∈ X, for some set X of allowed queries, is performed by computing a response via y ←$ R^M_ω(x). Here we use ←$ to denote running R with fresh random coins; we use ← when R is deterministic. We focus on inference for a single query, but it is straightforward to extend our abstraction for control
planes to include sessions: the controller would maintain state across invocations, potentially adapting its behavior as a
function of a sequence of queries and responses.
LLM control planes should, in general, be relatively computationally lightweight, at least compared to the underlying
LLMs. This is particularly so in the cost-motivated usage of control planes, as a computationally or financially expensive control plane would eat into the cost savings obtained by utilizing cheaper underlying LLMs for some queries. For example,
predictive binary routers use relatively simple classifiers to determine which of Ms or Mw should be used to respond to a
query.
Inference flow. Given a set of LLMs M, a control plane Rω, and an input x, an LLM inference flow is the sequence of LLM invocations M_{i_j}(z_j) for 1 ≤ j ≤ m and i_j ∈ {w, s} made when executing R^M_ω(x). Here m is the total number of LLM invocations, and z_1, . . . , z_m are the queries made to the underlying LLMs. Should R be randomized, the sequence and its length are random variables. An inference flow can be written as a transcript

T = (i_1, z_1), (i_2, z_2), . . . , (i_m, z_m)

of pairs of model indexes i_j ∈ {w, s} and model inputs z_j. Note that for simplicity we ignore the potential for parallelization, assuming execution proceeds serially. For binary routers, we have m = 1 and T ∈ {(w, x), (s, x)}. We write submitting a sequence of inferences x⃗ = x_1, . . . , x_q to a control plane as

R^M_ω(x⃗) = (R^M_ω(x_1), . . . , R^M_ω(x_q)),

where each invocation could result in multiple underlying LLM invocations. In the binary router case, however, each invocation results in a single LLM invocation.
An inference flow policy dictates the control plane designer’s intention regarding use of the underlying models. For example, an application may want to ensure that only a small fraction of queries go to the expensive model Ms. We can define this as a predicate over a sequence of transcripts. In our binary router example, the policy can be more simply defined as a predicate P over (input, model) pairs (x_1, i_1), . . . , (x_q, i_q), since this fully defines the sequence of transcripts. For example, a policy might specify that the strong model is used in at most an ϵ fraction of inferences:

P((x_1, i_1), . . . , (x_q, i_q)) = 1 if and only if (1/q) · Σ_{j=1}^{q} I(i_j) ≤ ϵ,

where I(i_j) = 1 if i_j = s and I(i_j) = 0 if i_j = w. In other words, the predicate holds when the fraction of queries routed to the strong model is bounded by ϵ.
Control plane integrity. A control plane integrity adversary is a randomized algorithm A that seeks to maliciously guide inference flow.

In an unconstrained LLM control plane integrity attack, the adversary A seeks to generate inputs x⃗ = x_1, . . . , x_q such that running R^M_ω(x⃗) generates a transcript for which P((x_1, i_1), . . . , (x_q, i_q)) = 0. This attack could be launched by an adversary who wants to maximize inference costs for a victim application using an LLM router.

A harder setting requires input adaptation, where the adversary is given inputs x_1, . . . , x_q and must find new inputs x̂_1, . . . , x̂_q for which the resulting transcript satisfies P((x̂_1, i_1), . . . , (x̂_q, i_q)) = 0. There will be some competing constraint, such as that x_j and x̂_j are very similar for each j, or that the outputs y_j ←$ R^M_ω(x_j) and ŷ_j ←$ R^M_ω(x̂_j) are close. In the routing context, the adversary’s goal is to increase the fraction of queries that get routed to the strong model, in order to improve the overall quality of responses, drive up the victim application’s inference costs, or both.
Relationship to evasion attacks. Evasion attacks [25, 43, 60] against an inference system (also called adversarial examples [32, 48, 49]) would, in our setting, seek to find a small modification ∆ to an input x such that R^M_ω(x + ∆) ≠ R^M_ω(x), where addition is appropriately defined based on input type (e.g., slight changes to text).

Our attack setting is not the same. The control plane integrity adversary seeks to maliciously control the inference flow, not necessarily the output of inference. In an unconstrained attack, the adversary does not care what outputs are generated. In the input adaptation attack, the adversary seeks to craft inputs that modify the inference flow yet do not change the responses of the strong underlying LLM to the extent possible. Looking ahead, we will use evasion techniques in our adaptation attacks against learned control plane routers, but, importantly, not against the overall inference.

In the other direction, undermining LLM control plane integrity could be a stepping stone toward evasion attacks. For example, if R^M_ω is used to classify malicious content by combining LLMs each tuned to different harm categories, then modifying inputs to force inference flows away from the appropriate models could aid evasion. We leave evaluation of how control-plane integrity attacks can enable evasion to future work.
Threat models. Within the context of control plane integrity attacks against LLM routers, we identify several threat models that differ in terms of the adversary’s goals and their knowledge about the target control plane R^M_ω.
In terms of goals, an adversary may seek to inflate the costs of a victim application that utilizes an LLM control plane.
As a kind of denial-of-service attack, such cost inflation would penalize the application developer who expects routing
to control costs. Another adversarial goal could be arbitrage: consider an application that charges X dollars per query,
whereas directly using Ms costs Y > X. The application’s lower rate X makes economic sense assuming it uses a router
to route the bulk of queries to a cheaper model Mw. An input adaptation attack in this setting can gain (indirect) access to
Ms, obtaining an arbitrage advantage of Y − X per query. To be effective, this arbitrage adversary would want to ensure
that adaptations do not lower response quality (i.e., it extracts all the value out of rerouting to Ms). As before, the victim
in this case is the application that relies on routing to lower its costs (unsuccessfully, under this attack).
We now discuss adversarial capabilities. We assume that our victim application’s prompt includes a substring that can be
controlled by the adversary. This represents many real-world apps such as chatbots, coding assistants, writing assistants,
and others, that insert user inputs into an LLM prompt. In crafting adversarial portions of prompts, an adversary may have
various levels of knowledge about the victim application’s router. We consider the following knowledge settings:
• White-box setting: The adversary knows the control plane algorithm and its parameters ω.
• Black-box (transfer) setting: The adversary does not know the control plane algorithm R and ω for the target model, but instead knows another control plane algorithm R′_ω′ and its parameters. We refer to R′_ω′ as the surrogate. For example, this could arise if an adversary trains their own router using available data. In this setting, our attacks are also zero-shot in that they do not require any interaction with the target control plane before the query that is being rerouted.
4 Confounding Control Planes with Gadgets
We now turn to our main contribution: a methodology for attacking LLM control plane integrity. The key insight is that
an adversary can modify queries to mislead or “confound” the routing logic into routing these queries to an LLM of the
adversary’s choosing. Furthermore, we will demonstrate that these attacks can be black-box and query-independent, i.e.,
a single modification works for all queries and does not require advance knowledge of the specific router being attacked.
Figure 2: Overview of our attack on LLM routing control plane integrity. The attack adds to each query a prefix (represented by the gear), called a “confounder gadget,” that causes the router to send the query to the strong model.
We focus on the binary router setting in which the router applies a learned scoring function to input queries and routes
any query whose score exceeds some threshold τ to the strong LLM Ms. This setting has been the focus of several prior
works [27, 41, 47] and is used in the control planes that are deployed in practice (see Section 7).
More formally, we consider a router R^M_ω for M = {Mw, Ms}, where ω consists of a scoring function S, the scoring function’s parameters θ, and a threshold τ ∈ R+. For notational brevity we just write Rω below, with M clear from context. Here S and θ define a scoring function Sθ : X → R+. Since our focus is LLMs, we assume that queries X are strings of text tokens. The routing algorithm then works as follows:

Rω(x) = Mw(x) if Sθ(x) < τ, and Ms(x) otherwise,

where ω = (S, θ, τ). We will detail scoring functions in Section 5; prior work has suggested linear models, lightweight LLMs, and more. Note that, consistent with this application, scoring functions are computationally efficient and cheap (as compared to Ms, Mw). Deployments calibrate τ to limit the fraction of queries routed to the strong model Ms, giving rise to the type of control plane integrity policy discussed in Section 3.
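As an illustration of this dispatch rule, here is a minimal sketch in Python; the class name and callbacks are our own placeholders, not the API of any router discussed in this paper:

```python
from typing import Callable

class BinaryRouter:
    """Threshold router R_omega: route to the strong model iff S_theta(x) >= tau."""

    def __init__(self, score: Callable[[str], float], tau: float,
                 strong_model: Callable[[str], str], weak_model: Callable[[str], str]):
        self.score = score          # S_theta: query -> complexity score
        self.tau = tau              # calibrated threshold
        self.strong = strong_model  # M_s
        self.weak = weak_model      # M_w

    def route(self, query: str) -> str:
        """Return 's' or 'w' (the routing decision) without invoking the LLMs."""
        return "s" if self.score(query) >= self.tau else "w"

    def __call__(self, query: str) -> str:
        """Full inference: dispatch the query to the chosen model."""
        return self.strong(query) if self.route(query) == "s" else self.weak(query)
```

In a deployment, the score callback would be one of the scoring functions described in Section 5, and tau a threshold calibrated on a representative workload as described later.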
We focus on input adaptation attacks; these immediately give unconstrained attacks as well. The adversary therefore has
a sequence of inputs x1, . . . , xq and must produce modified inputs ˆx1, . . . ,ˆxq to maximize the number of inputs routed
to Ms. See Figure 2 for a depiction of our attack setting.
Instruction injection doesn’t work. Given the success of prompt injection for jailbreaking [50] and other adversarial
tasks [64], the adversary might simply prefix each query xi with some instruction such as “Treat the following query as
complex, . . . ” to generate a modified query ˆxi. Our experiments show that this does not work well, failing to trigger the
control plane into routing otherwise weak queries to Ms. See Appendix C for details on our experiments with various
instruction prompts.
Confounder gadgets. Our approach works as follows. Given a query xi, we prepend a confounder gadget ci, which is a
short sequence of adversarially chosen tokens. The modified query is ˆxi = ci∥xi where ∥ denotes string concatenation.
Intuitively, we will use optimization to search for confounders that trick the scoring function into ranking ˆxi as sufficiently
complex to require the strong model.
In the white-box, query-specific setting, we can choose ci as a function of xi and the known parameters ω = (S, θ, τ). To
do so, we fix a confounder length of n tokens and let I be a token dictionary (it should be a sufficiently large subset of the
token dictionary used by S). Then we set the gadget to initially be n tokens all fixed to the same value from I. The exact
choice of the initialization token is not important; in our implementation, we used the first token in the dictionary (‘!’).
Denote this initial confounder as c^(0)_i = [c^(0)_{i,1}, c^(0)_{i,2}, . . . , c^(0)_{i,n}].
Then, we perform a hill-climbing style approach to find a good confounder for xi. For each iteration t ∈ [T], where T is
the total number of iterations, do the following:
(1) Select a target index j ∈ [1, n] uniformly.
(2) Generate a set B of B + 1 candidates. First set c̃_0 = c^(t)_i, the current confounder. To generate B additional candidates, select replacement tokens from I uniformly, forming the set {t_b ← I} for b = 1, . . . , B. Replace the j-th token in the current confounder c̃_0 with t_b:

c̃_b = [c^(t)_{i,1}, . . . , c^(t)_{i,j−1}, t_b, c^(t)_{i,j+1}, . . . , c^(t)_{i,n}].

Let B = {c̃_0, . . . , c̃_B}.

(3) Find the candidate that maximizes the score:

c^(t+1)_i ← arg max_{c ∈ B} Sθ(c∥xi). (1)
The final confounder c^(T)_i is used with query xi. We early abort if, after 25 iterations, there is no update to the confounder gadget. Technically, we could abort early if we find a confounder whose score exceeds τ. Running further can be useful when an adversary does not know τ.
The attack’s runtime is dominated by T · B times the cost of executing S. In practice, scoring functions are designed to be fast (otherwise routers would significantly increase the latency of applications that use them). We report precise timings later; in summary, the attack is fast because we can set T to be relatively small and still find high-scoring confounders.
Due to the randomness in index and token selection, the method converges to different, yet similarly effective, confounder
gadgets on each run. Our evaluation will thus measure average performance over multiple gadgets.
Query-independent confounders. One downside of the per-query approach is that the adversary must repeat, for each query, the search for a good confounder. In practice, the adversary might prefer a query-independent attack. Our confounder gadget approach extends to this setting readily: perform the search routine above for an empty query. In other words, just ignore xi in the query-dependent attack above, replacing Sθ(c∥xi) in Eq. 1 with Sθ(c). This finds a single query-independent confounder c that can be prefixed to all queries, i.e., ˆxi = c∥xi. We will show that this works surprisingly well.
It is tempting to assume that the reason a query-independent confounder works well is that a good scoring function should be roughly monotonic in query extensions, i.e., one might expect that Sθ(c∥x) ≥ Sθ(c) for almost any suffix x. This intuition is not correct. In our experiments, we found that Sθ(c∥x) < Sθ(c) for many x and some of the routers discussed below. Nevertheless, by ensuring that Sθ(c) is sufficiently high (by setting the number of iterations T higher), the resulting query-independent confounder works well. That is, we at least get that Sθ(c∥x) > Sθ(x).
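The following sketch captures the hill-climbing search described above, for both the per-query and query-independent variants (pass an empty query for the latter). The scoring callback, vocabulary, and default hyperparameter values are placeholders for illustration, not the authors' released implementation:

```python
import random
from typing import Callable, List

def generate_confounder(score: Callable[[str], float],  # S_theta (or a surrogate's scorer)
                        vocab: List[str],                # token dictionary I
                        query: str = "",                 # "" yields a query-independent gadget
                        n: int = 10, T: int = 100, B: int = 32,
                        patience: int = 25) -> str:
    gadget = ["!"] * n                # initialize all n tokens to a fixed vocabulary entry
    stale = 0
    for _ in range(T):
        j = random.randrange(n)       # (1) pick a target position uniformly
        candidates = [list(gadget)]   # (2) keep the current gadget plus B single-token edits
        for _ in range(B):
            cand = list(gadget)
            cand[j] = random.choice(vocab)
            candidates.append(cand)
        # (3) keep the candidate that maximizes the routing score on gadget || query
        scored = [(score("".join(c) + query), c) for c in candidates]
        _, best_candidate = max(scored, key=lambda sc: sc[0])
        stale = stale + 1 if best_candidate == gadget else 0
        gadget = best_candidate
        if stale >= patience:         # early abort if no update for `patience` iterations
            break
    return "".join(gadget)
```

In the white-box setting, score is the target's Sθ; in the black-box setting described next, it is a surrogate's S′_θ′.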
The black-box setting: confounders that transfer. Finally, the attacks so far are in the white-box setting, where the
attacker can optimize directly against Sθ. While in some cases routing control planes will be public knowledge, in others,
including the proprietary control planes we explore in Section 7, they are hidden. This gives rise to the black-box setting.
While an attacker might seek to perform model extraction attacks [43, 65] to learn θ, we instead explore attacks that
transfer from one router to another.
In more detail, we assume the adversary has access to a router R′_ω′, called the surrogate, that is trained on data similar to that used for the target router. Then the attack is the same as above, except that we use the surrogate’s scoring function S′_θ′ instead of the target’s Sθ. Again, we will see that this works surprisingly well: the query-independent confounders found for the surrogate transfer to successfully reroute queries against the target router.
Putting it all together. In summary, our methodology for input adaptation attacks is:
(1) (Preprocessing) Develop a single query-independent confounder gadget c, using either the target router or surrogate
to score the confounder.
(2) (Input adaptation) For each query xi, submit ˆxi = c∥xi instead to obtain a response ˆyi.
The confounder is applied to all queries, i.e., the adversary does not need to guess whether the original query would
have been routed to the weak or strong model. In the rest of the paper, we demonstrate that confounders rarely result in
“downgrades,” i.e., rerouting of queries from the strong to weak model.
We have experimented with variations of this approach that don’t work quite as well, for example adding c as a suffix
instead of a prefix. See Appendix B for details.
5 Open-Source Routers: Experimental Setup
To evaluate the efficacy of confounder gadgets generated using the method from Section 4, we perform experiments with several LLM routers. This section explains our experimental setup for the open-source routers proposed in the research literature [47]; results of this evaluation appear in Section 6. In Section 7, we discuss experiments with proprietary, commercial routers. Figure 3 summarizes our experimental setup.
Routers                       Notation
Similarity-weighted ranking   RSW
Matrix factorization          RMF
BERT classifier               RCLS
LLM scoring                   RLLM

LLM pair   Strong (Ms)          Weak (Mw)
1          Llama-3.1-8B         4-bit Mixtral 8x7B
2          Llama-3.1-8B         Mistral-7B-Instruct-v0.3
3          Llama-3.1-8B         Llama-2-7B-chat-hf
4          GPT-4-1106-preview   4-bit Mixtral 8x7B

Benchmark       Description
MT-Bench [71]   160 open-ended questions
MMLU [35]       14,042 multi-choice questions
GSM8K [24]      1,319 grade-school math problems

Figure 3: Summary of our setup for routers, underlying LLMs, and benchmark datasets used in the experiments.
In all experiments, we assume that the adversary’s goal is to reroute queries to the strong model. In Appendix E, we
evaluate efficacy of the attack when the goal is to reroute to the weak model.
Target routers. We focus our evaluation on the four prescriptive routing algorithms proposed by Ong et al. [47], whose work provides open-source code and trained parameters and covers a representative variety of routing approaches: similarity-based classification [41, 59], an MLP constructed via matrix factorization [59], BERT-based classification [27, 53, 59], and a fine-tuned LLM.
The routers we evaluate were trained in a supervised fashion using a set of reference (training) queries whose performance
score on each of the considered models is known. The scores were computed from a collection of human pairwise rankings
of model answers for each of the queries. We note that while the routers we consider are all learned using this training
set, there is no reason to believe a non-learning-based approach (e.g., rule based) to routing would be more adversarially
robust.
We now outline the routing methods considered in this work. See Ong et al. [47] for their full implementation details.
Similarity-weighted ranking: The first method is based on the Bradley-Terry (BT) model [17]. For a given user query,
this model derives a function to compute the probability of the weak model being preferred over the strong model. The
probability-function expressions all share parameters, which are optimized to minimize the sum of cross-entropy losses
over the training-set queries, where each element in the sum is weighted by the respective query’s similarity with the
user’s query (computed as embedding cosine similarity, with embeddings derived using OpenAI’s text-embedding-3-small [6]). We denote this method as RSW.
Matrix factorization: The second method is based on matrix factorization. The training queries are used to train a bilinear
function mapping a model’s embedding and a query’s embedding to a score corresponding to how well the model performs
on the query. Routing is done by computing the score of the input query for each model, and choosing the highest-scoring
model. We denote this method as RMF .
BERT classifier: The third method involves fine-tuning a classifier, based on the BERT-base architecture [26], to predict
which of the two models produces a better response for the given query or whether they do equally well (a tie). The
routing decision is based on the probability of the weak model providing a better response versus the strong model or the
tie. We denote this method as RCLS .
LLM classifier: The last method is based on asking an LLM to provide a score in the range 1–5 indicating how much an AI expert would struggle to respond to a given query, based on the query’s complexity. For this, Ong et al. fine-tuned a Llama-3-8B model [4] using their reference set of queries and corresponding scores. We denote this method as RLLM.
Underlying LLMs. In [47], Ong et al. trained the routers with GPT-4-1106-preview [14] as the strong model and Mixtral
8x7B [39] as the weak model. They report successful generalization between the underlying LLMs, stating that their
routers trained for a particular strong-weak LLM pair can be used with other strong-weak LLM pairs.
To allow our evaluation to scale, we use as the strong model Ms the open-sourced Llama-3.1-8B [3] and as Mw the
4-bit quantized version of Mixtral 8x7B (for efficiency reasons). This reduced the cost of our experiments by avoiding
expensive GPT API calls and lowering the computational costs of Mixtral. Unless mentioned otherwise, all of our results
will be evaluated with respect to this pair, which we refer to as LLM pair 1. We performed more limited experiments with
the original strong, weak model pair (LLM pair 4) and had similar success in rerouting.
We additionally performed experiments with two even weaker models, in order to better evaluate the case where weak models produce much lower-quality responses than the strong model. In particular, we define LLM pair 2 as the strong model plus Mistral-7B-Instruct-v0.3 [38] and LLM pair 3 as the strong model plus Llama-2-7B-chat-hf [63]. The weaker models in pairs 2 and 3 were chosen to represent smaller (Mistral-7B) and older-generation (Llama-2) models: according to the Chatbot Arena LLM ranking leaderboard [1, 21], Llama-3.1-8B is ranked in 58th place, Mixtral 8x7B in 88th place, Mistral-7B in 108th place, and Llama-2-7B in 125th place.
The LLM strong-weak pairs with which we performed experiments are summarized in Figure 3.
Evaluation datasets. We will evaluate our attacks using three standard LLM benchmarks as workloads: MT-Bench [71],
a dataset of 160 open-ended questions, MMLU [35], a dataset of 14,042 multi-choice questions, and GSM8K [24], a
dataset of 1,319 grade-school math problems. Note that Ong et al. [47] flagged that some data points are “contaminated”,
i.e., they are too similar to the ones used in their training of the routers. We use these datasets without these contaminated
elements, resulting in 72 MT-bench queries, 14,037 MMLU queries, and 1,307 GSM8K queries.
For MMLU and GSM8K, we will require that the LLMs respond in a predefined format so we can parse and compare
the responses to ground-truth answers. To facilitate this, we prepended formatting instructions to the query, inserted as
a prefix before the gadget in the case of confounded queries. In other words, a confounded query ends up defined as
ˆxi = instr∥c∥xi for instruction template instr, confounder gadget c, and original query xi. Thus in this case we model
a scenario where the adversary only controls a part of the prompt rather than the entire prompt. See Appendix B for
formatting examples and ablations.
Router calibration. For each workload, we must calibrate each router by setting the threshold τ to achieve some target fraction ϵ of queries routed to the strong model. Note that the calibration process we use is agnostic to the underlying LLM pair. We therefore must define 12 distinct thresholds, one for each (router, dataset) pair. For our experiments here, we set ϵ = 0.5, meaning the goal is to have about half the queries routed to the strong model. This reflects an application developer who seeks to control costs, even if it may mean sacrificing some performance for some workloads.

To calibrate for MT-bench, we use the Chatbot Arena [21] dataset as the calibration set, computing the threshold using the 55K queries for which Ong et al. precomputed the scoring function outputs. To calibrate for MMLU and GSM8K, we select 1,000 queries uniformly at random and use these to set thresholds. Looking ahead, we do not use these queries during evaluation of the attacks.

Note that it is important that the distribution of calibration queries be similar to the distribution of the target workload (and, in our experiments, the test queries). We observed that the Chatbot Arena-based threshold did not transfer well to MMLU and GSM8K, resulting in the majority of queries (≈ 98%) being routed to the strong model.
6 Rerouting Open-Source Routers
We now empirically evaluate our rerouting attack against the open-source routers described in the previous section. Unless
otherwise specified, our evaluation focuses on the query-independent attack setting where the attacker first finds a fixed
set of gadgets and then uses them to attack arbitrarily many queries. This is the conservative setting, and query-specific
gadgets — which carry a higher computational cost — generally work better.
In Appendix C we evaluate optimization-free alternatives for generating our confounder gadgets, and show that they significantly underperform our optimization-based approach.
White-box confounder gadget generation. Following our attack framework described in Section 4, we construct a
query-independent control-plane gadget designed to confuse each router. We start with the white-box setting, setting the
batch size to B = 32 and the number of iterations to T = 100, ignoring thresholds. We generate four sets of n = 10
gadgets, i.e., ten for each router. Examples of generated gadgets can be found in Appendix A.
When reporting scores below, we therefore report the average over the n gadgets used with all 72 MT-bench queries, 100
randomly selected MMLU queries, and 100 randomly selected GSM8K queries. None of these testing queries were used
in the training of the routers or their calibration.
Runtime and convergence. Figure 4 shows the convergence rates for 10 different gadgets, against different routing
algorithms. The overall average number of iterations before convergence is 58. Generation against RSW converges the
[Figure 4 shows four line plots, one per router: (a) RSW, (b) RMF, (c) RCLS, (d) RLLM. Each plots the routing score against the optimization iteration for ten gadget-generation runs (Attack #0 through #9).]
Figure 4: Convergence of gadget generation against different routing algorithms.
            RSW                    RMF                    RCLS                   RLLM
            Upgrade   Strong       Upgrade   Strong       Upgrade   Strong       Upgrade   Strong
MT-Bench    100 ± 0   81 → 100±0   100 ± 0   58 → 100±0   100 ± 0   67 → 100±0   73 ± 5    57 → 88±2
MMLU        90 ± 1    43 → 94±1    78 ± 4    53 → 90±2    100 ± 0   47 → 100±0   95 ± 1    53 → 98±1
GSM8K       98 ± 0    52 → 99±0    100 ± 0   54 → 100±0   100 ± 0   56 → 100±0   94 ± 3    53 → 97±1
Table 1: The white-box attack’s rerouting success rate. “Upgrade” is the percentage of “weak” queries successfully rerouted to the strong model by adding a confounder gadget. “Strong” shows the change in the percentage of queries that are routed to the strong LLM Ms before and after our rerouting attack.
fastest (50 iterations on average), RMF the slowest (66 iterations on average). Interestingly, the score of RSW does not
increase much during optimization but is still sufficient for a successful attack.
Runtime varies significantly when generating gadgets against different routing methods. On a machine with one A40 GPU, 4 CPUs, and 180 GB RAM, a single iteration takes 36.9 s, 8.4 s, 0.8 s, and 6.9 s for the RSW, RMF, RCLS, and RLLM routers, respectively. On average, it takes around 31 minutes to generate a gadget for the RSW router, 9 minutes for RMF, 50 s for RCLS, and 6 minutes for RLLM.
Rerouting success rates. Recall that our attack adds the same confounder gadget to all queries. We start by evaluating the
reroute success rates: What fraction of confounded queries does the target router send to the strong model Ms? We show
the results for our attack in Table 1. Focusing first on the columns labeled ‘Strong’, the notation “ X → Y ± Z” relays
that for unmodified queries, X% are routed to the strong model Ms, and that for the n = 10 confounders we achieved an
average of Y % of queries sent to Ms. The standard error is Z. Note that while calibration targets ϵ = 0.5, we see some
natural variance in X for the test sets.
To break down success further, we additionally report the upgrade rate, which focuses on the percentage of queries that
were (a) originally routed to the weak model, and (b) routed to the strong model after they were modified with the
confounder gadget. Because in our attacks few queries get “downgraded” (confounders cause them to be rerouted to the
weak model instead of strong), the upgrade rate dictates the success rate.
As can be seen, the gadgets reroute almost all weak queries to the strong model. In most cases we see 100% success, or
close to it. The worst case still achieves 88% rerouting success, boosting the fraction of queries sent to the strong LLM by
1.5x. Rerouting fails only for some queries that even after confounding are sent to the weak model: the fixed gadget did
not sufficiently increase the router’s estimate of those queries’ complexity. This is the only source of error for the attack:
no queries in these experiments got “downgraded”, i.e., a query that would otherwise be sent to Ms ends up rerouted to
Mw. This also means that adding the confounder to every single query does not have a negative impact on rerouting efficacy.
We report standard error values for both the upgrade rates and the total percentage of queries routed to the strong model.
The maximal standard error is in the low single digits, indicating similar success rates across gadgets.
Quality of attack responses. We now turn to evaluating the quality of the responses generated by the attack. Note that
because we have calibrated the routers to target ϵ = 0.5, our attacks can improve response quality by rerouting to the stronger model. In the other direction, our attacks add confounder gadgets, which might degrade response quality.
            RSW                     RMF                     RCLS                    RLLM
            Original   Confounded   Original   Confounded   Original   Confounded   Original   Confounded
MT-Bench    13.8       12.3 ± 0.2   12.6       12.3 ± 0.2   13.1       12.1 ± 0.2   12.7       12.7 ± 0.4
MMLU        20.4       20.1 ± 0.1   20.0       20.3 ± 0.1   20.2       20.5 ± 0.1   21.0       19.6 ± 0.1
GSM8K       17.1       15.1 ± 0.3   17.0       15.2 ± 0.3   17.0       15.0 ± 0.2   16.4       15.2 ± 0.3
Table 2: Average perplexity of responses to the original and confounded queries, in the white-box setting for LLM pair 1. Response perplexity does not change significantly when adding the confounder gadget.
            RSW                     RMF                     RCLS                    RLLM
            Original   Confounded   Original   Confounded   Original   Confounded   Original   Confounded
MT-Bench    8.4        8.3 ± 0.0    8.4        8.4 ± 0.0    8.4        8.3 ± 0.0    8.3        8.2 ± 0.1
MMLU        61         66 ± 0       64         64 ± 1       63         65 ± 0       67         66 ± 0
GSM8K       46         64 ± 1       50         67 ± 1       50         63 ± 1       44         64 ± 1
Table 3: Average benchmark-specific scores of responses to the original and confounded queries, in the white-box setting for LLM pair 1. Rerouting to the strong model improves quality of responses as long as there is a significant gap between the benchmark performance of the weak and strong LLMs.
As a first measure of response quality, we compare the perplexity scores for unmodified responses and confounded query
responses. Text perplexity [37] is a well-known measure of the “naturalness” of text sequences. Perplexity can be computed using an LLM; we use GPT-2 [51] for this purpose, as it is a standard choice [16, 69].1 Table 2 shows the results. As can be seen, adding the confounder gadget to queries does not significantly change response perplexity. To the
extent that it does, it usually somewhat decreases response perplexity, i.e., makes it more “natural”. That said, perplexity
is a coarse measure of “naturalness,” and it does not measure whether the response is correct. In particular, responses of
strong and weak LLMs tend to have similar perplexities. We further discuss this issue in Appendix D.
We thus also evaluate using the following benchmark-specific metrics to assess response quality:
• MT-bench: We score the responses on a scale of 1–10 using an LLM-as-a-judge methodology [71]. We use
GPT-4o [2] as the judge and ask it to provide a score given a pair of a query and a corresponding response.
• MMLU: We parse the responses and compare the answer to the ground truth. In cases where the response did not
fit any known multi-choice format, we marked the response as a mistake. We report accuracy as the percentage of
responses that match the ground truth.
• GSM8K: Similar to MMLU, except that questions are free-form math rather than multiple choice; we parse the answers according to the expected format (a minimal parsing sketch follows this list).
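For illustration, answer extraction in the spirit of the MMLU and GSM8K scoring described above might look as follows; the regular expressions and formats are our own simplifications rather than the exact evaluation harness:

```python
import re
from typing import Optional

def parse_gsm8k_answer(response: str) -> Optional[float]:
    """Extract the final numeric answer from a model response; None if unparsable."""
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response.replace(",", ""))
    return float(numbers[-1]) if numbers else None

def parse_mmlu_answer(response: str) -> Optional[str]:
    """Extract a multiple-choice letter (A-D); an unparsable response counts as a mistake."""
    match = re.search(r"\b([A-D])\b", response.strip().upper())
    return match.group(1) if match else None

# Example: a response ending in "... the answer is 42." scores correct against ground truth 42.
assert parse_gsm8k_answer("After simplifying, the answer is 42.") == 42.0
```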
Table 3 shows that, according to these metrics, in most cases responses to the confounded queries are no worse, and in
some cases even better, than responses to the original queries. We attribute the improvement on the GSM8K benchmark
to the fact that the strong model performs significantly better than the weak model on this benchmark (57% vs. 33%). On
the MT-bench and MMLU benchmarks, strong and weak models have comparable performance (8.5 vs. 7.6 for MT-bench
and 66% vs. 64% for MMLU), thus routing does not degrade quality of responses and, consequently, the attack cannot
improve it.
To further demonstrate that the attack improves the quality of responses when there is a significant gap between the weak
and strong LLMs, we perform an additional evaluation with Mistral-7B-Instruct-v0.3 [38] and Llama-2-7B-chat-hf [63]
as the weak LLMs (LLM pairs 2 and 3). Mistral-7B achieves 7.4, 57%, and 25% on MT-bench, MMLU, and GSM8K,
respectively. Llama-2-7B achieves 6.4, 44%, and 21%. Table 4 shows that the rerouting attack improves quality of
responses when either of these LLMs is the weak model, and in particular for the weaker Llama-2-7B model.
LLM responses are sometimes affected by the confounder gadget. In some cases, the LLM responded with, for example,
“I can’t answer that question as it appears to be a jumbled mix of characters”. Still, the response continued with “However,
I can help you with the actual question you’re asking,” followed by the actual answer. We observed very few cases where
an LLM refused to answer due to the presence of the gadget. In most cases, the response did not mention anything
1 Some responses had abnormally high perplexity values (> 100), which we found do not correlate with quality, but these variations
disproportionately contribute to the average. We thus filter out such high-perplexity responses as outliers in both benign and attack
settings. We provide examples of filtered responses in Appendix D.
              RSW                RMF                RCLS               RLLM
              Orig.  Conf.       Orig.  Conf.       Orig.  Conf.       Orig.  Conf.
LLM pair 2
  MT-Bench    8.5    8.3 ± 0.0   8.4    8.3 ± 0.1   8.4    8.4 ± 0.1   8.4    8.3 ± 0.1
  MMLU        55     64 ± 1      63     64 ± 0      58     66 ± 1      62     66 ± 0
  GSM8K       46     64 ± 1      51     67 ± 1      49     63 ± 1      38     63 ± 2
LLM pair 3
  MT-Bench    8.4    8.3 ± 0.0   8.1    8.3 ± 0.1   8.3    8.4 ± 0.1   8.1    8.2 ± 0.1
  MMLU        51     64 ± 1      57     63 ± 1      52     66 ± 1      59     66 ± 1
  GSM8K       40     64 ± 1      44     67 ± 1      45     63 ± 1      37     64 ± 1
Table 4: Average benchmark-specific scores of responses to the original and confounded queries with Mistral-7B-Instruct-v0.3 (LLM pair 2) or Llama-2-7B-chat-hf (LLM pair 3) as the weak model, in the white-box setting. Results further emphasize that the rerouting attack improves the quality of responses when there is a significant gap between the weak and strong LLMs.
Surrogate    ˆRSW                     ˆRMF                     ˆRCLS                    ˆRLLM
Target       RMF     RCLS    RLLM     RSW     RCLS    RLLM     RSW     RMF     RLLM     RSW     RMF     RCLS
MT-Bench     99±1    88±5    45±5     100±0   96±2    39±3     100±0   79±9    51±5     100±0   83±5    85±7
MMLU         66±5    44±11   81±3     82±4    56±7    74±2     64±6    16±7    80±5     53±4    20±5    46±11
GSM8K        99±1    72±11   63±4     92±2    88±3    62±4     76±6    60±9    65±8     60±8    70±7    73±10
Table 5: Average upgrade rates for our attack in the black-box setting. This is the average percentage of queries rerouted from the weak to strong model under the target router due to a confounder gadget generated using the surrogate. The average downgrade rate (i.e., strong-to-weak rerouting) is 1.2% across all routers. Upgrade rates are lower than in the white-box setting but still high, indicating that the attack transfers.
abnormal about the query. Intuitively, this reflects the fact that while LLMs are built to be robust to noisy inputs, the
router itself is not.
In summary, the attack is highly successful at rerouting queries from the weak to the strong model. Overall, quality
improves if there is a significant gap between the strong and weak LLMs used by the router. Either way, confounding has
no negative impact on the quality of responses.
Black-box attack results. Next, we consider the black-box attack, where the attacker does not know the algorithm
used by the target router. We assume that the attacker has access to another, surrogate router that it can use to generate
confounder gadgets. In effect, we evaluate transferability of the attack from a known, white-box router to unknown,
black-box routers.
Table 5 shows the results for all combinations of surrogate (denoted by ˆR) and target routers. For conciseness we focus
on the upgrade and downgrade rates for the remainder of this work. Upgrade rates are lower than in the white-box setting
but still high, indicating that the attack transfers. The LLM-based routing algorithm RLLM has the lowest rates, perhaps
because it is the most complex of the four. The downgrade rate is 0 in most cases and is 1.2% on average.
Table 6 shows that the black-box attack does not increase the average perplexity of responses as generated by LLM
pair 1. Table 7 shows that the attack does not decrease benchmark-specific scores, other than some small decrease in
some cases for the MMLU benchmark. For GSM8K, similar to the behavior observed in the white-box setting, we see
an improvement with our attack due to the performance difference between the strong and weak models for this task. This
indicates that confounding affects only the routing, not the quality of responses. When the weak model is significantly
weaker than the strong model, i.e., LLM pairs 2 and 3, the attack can improve the quality of responses significantly.
Query-specific gadgets. By default, our gadget generation method is query-independent and the same gadget can be used
to reroute any query. An adversary with more resources may instead generate a dedicated gadget for each query (using
the same algorithm).
Table 8 and Table 9 show the results for the white-box and black-box settings, respectively. (Here, percentage numbers
are not averaged and there is no standard error since we used a single gadget per query.) The white-box results are nearly
perfect; the black-box results are often better but sometimes somewhat worse than those for query-independent gadgets.
We conjecture that this is due to some level of overfitting.
Surrogate    ˆRSW                  ˆRMF                  ˆRCLS                 ˆRLLM
Target       RMF    RCLS   RLLM    RSW    RCLS   RLLM    RSW    RMF    RLLM    RSW    RMF    RCLS
MT-Bench     0.4    0.8    0.6     1.4    0.7    0.3     1.7    0.3    0.7     0.8    −0.6   0.0
MMLU         0.1    0.8    1.1     0.2    0.2    1.1     0.3    0.8    0.9     1.3    1.2    0.9
GSM8K        1.9    1.7    0.6     1.6    1.7    0.2     1.7    1.0    0.4     1.3    1.3    1.7
Table 6: Differences between average perplexity of responses to the original and confounded queries, in the black-box setting, when the confounder gadget was generated for a different surrogate router than the target, for LLM pair 1. Positive values indicate a lower average perplexity (more natural) of responses to the confounded queries; higher values are better for the attacker. Standard errors were omitted for readability but are 0.2 on average. As in the white-box setting, the attack does not increase the average response perplexity.
Surrogate      ˆRSW                    ˆRMF                    ˆRCLS                   ˆRLLM
Target         RMF    RCLS   RLLM      RSW    RCLS   RLLM      RSW    RMF    RLLM      RSW    RMF    RCLS
LLM pair 1
  MT-Bench     −0.1   −0.1   0.0       −0.1   −0.1   0.0       −0.1   0.0    0.1       −0.2   −0.1   −0.2
  MMLU         −0.1   0.3    −0.2      4.8    1.0    0.5       2.5    −1.3   −0.8      2.6    −0.9   0.3
  GSM8K        14.9   9.6    15.2      18.6   13.8   14.7      13.4   6.8    12.6      13.6   11.3   10.4
LLM pair 2
  MT-Bench     −0.1   −0.1   −0.1      −0.2   −0.2   −0.2      −0.1   −0.1   0.0       −0.2   −0.2   −0.2
  MMLU         1.6    4.0    4.2       7.9    5.0    4.4       5.0    −2.9   3.2       5.2    −0.9   3.8
  GSM8K        13.6   8.7    18.5      18.9   14.4   18.3      13.1   4.0    15.5      11.3   8.4    10.8
LLM pair 3
  MT-Bench     0.2    0.0    0.1       −0.1   −0.1   0.0       0.0    0.2    0.2       −0.1   0.1    −0.1
  MMLU         5.0    6.8    5.8       11.3   9.1    4.7       8.1    −3.7   4.8       7.8    0.1    7.2
  GSM8K        20.5   13.4   20.9      24.3   18.6   21.6      17.9   11.2   18.9      16.7   15.2   14.2
Table 7: Differences between average benchmark-specific scores of responses to the original and confounded queries, when the confounder gadget was generated for a different surrogate router than the target (black-box setting), for three LLM pairs. Positive values indicate a higher average score for responses to the confounded queries; higher values are better for the attacker. Results are averaged across gadgets. Standard errors were omitted for readability and are on average 0.1, 0.8, and 1.8 for MT-bench, MMLU, and GSM8K, respectively. Aligned with the white-box setting, results show almost no decrease in performance, and an improvement when there is a performance gap within the LLM pair.
Results for LLM pair 4. As discussed in Section 5, we replace the strong model that was used by Ong et al. [47], GPT-4-
1106-preview (rank 28 in the Chatbot Arena leaderboard [1, 21]), with the open-sourced Llama-3.1-8B (rank 58) to reduce
the costs of our extensive set of evaluations. In this section we perform a smaller-scale evaluation of the quality-enhancing
attack performance when using GPT as the strong model, i.e., LLM pair 4. We evaluate this setting using three of the
n = 10 confounder gadgets for each router.
Table 10 shows the results across benchmarks in the white-box setting. Compared to the LLM pair 1 setting (Table 3), the attack results in a larger increase in benchmark performance. This further demonstrates that the attack has a greater effect on response quality when the performance gap between the weak and strong models is larger.
7 Rerouting Commercial Routers
We evaluate our rerouting attack on several commercial routers: Unify [12], NotDiamond [7], OpenRouter [11], and
Martian [5]. These routers are available through black-box APIs. Therefore, we use our black-box attack with the
40 gadgets optimized for the open-sourced routers RSW , RMF , RCLS , and RLLM (10 per router). We perform this
evaluation using the MT-bench benchmark.
Unify. This router lets users specify a list of models from different providers and a metric configuration for routing
decisions. The available metrics are quality, time to first token, inter-token latency, and cost. The user can specify the
weight for each metric. Time, latency, and cost metrics are static and precomputed. The quality metric is computed for
            RSW   RMF   RCLS   RLLM
MT-Bench    100   100   100    100
MMLU        100   96    100    100
GSM8K       100   100   100    100
Table 8: Upgrade rates for query-specific gadgets, in the white-box setting. Results are nearly perfect, i.e., nearly all confounded queries are routed to the strong model.
Surrogate    ˆRSW                ˆRMF                ˆRCLS               ˆRLLM
Target       RMF   RCLS   RLLM   RSW   RCLS   RLLM   RSW   RMF   RLLM    RSW   RMF   RCLS
MT-Bench     100   83     71     100   83     48     100   73    52      100   67    83
MMLU         96    57     89     95    43     83     74    13    83      77    11    30
GSM8K        100   68     74     100   73     68     81    65    70      88    54    64
Table 9: Upgrade rates for query-specific gadgets, in the black-box setting. In most cases, results are better than in the query-independent setting, at the cost of a more resource-intensive process.
each query using a neural scoring function that was trained on prompts from several open datasets (e.g., Open Hermes [62])
and labeled using an LLM-as-a-judge [71].
For our evaluation, we configure the router to choose between GPT-4o [2] as the strong model and Mixtral 8x7B [39] as
the weak model. We focus on the cost and quality metrics, and set the weight of time and latency to 0 so that they are
not factored into routing decisions. We manually calibrate the weights to 1 for the quality metric and 0.02 for the cost
metric. These weights result in 49% of the original, unmodified queries being routed to the strong model and 51% to the
weak model, resulting in a total cost of $0.13 for the 72 MT-bench queries. Adding confounder gadgets generated for the
four open-sourced evaluated routers results in upgrade rates of 79%, 88%, 91%, and 89%, respectively, averaged across
10 gadgets. The downgrade rate is zero in all cases. In terms of costs, the addition of the confounder gadget increased
the cost to $0.22, $0.23, $0.22, and $0.21, respectively, averaged across 10 gadgets. In other words, the rerouting attack
increased the cost of processing the queries, on average, by a factor of 1.7×.
NotDiamond. This router lets users route their queries to a list of predefined models. Available objectives are to maximize
quality, or balance quality and cost, or balance quality and latency. The exact details of the routing logic are not specified.
We focus on cost-aware routing, for which the API docs state that “NotDiamond will automatically determine when a
query is simple enough to use a cheaper model without degrading the quality of the response.” NotDiamond provides a
router selection tool which gives the routing decision for a particular query without forwarding the query to the chosen
model (thereby incurring no costs). We use this for our evaluation—of course a real attack would target the NotDiamond
API when used for actual routing.
Similar to the Unify experiments, we set GPT-4o as the strong model and Mixtral-8x7b as the weak model. Cost-aware
routing routes 82% of the original queries to the strong model,18% to the weak model. Confounded queries generated for
RSW , RMF , RCLS , and RLLM achieve upgrade rates of 21%, 18%, 21%, and 15%, respectively. The downgrade rates
are 1–3%.
As opposed to our calibrated routers, NotDiamond aggressively routes to the stronger model even for unmodified queries
in most settings. We tried several strong/weak model pairs including GPT-4o/Mistral-7B-Instruct-v0.2, GPT-4o/GPT-4o-
mini, and Claude-3-Opus/Claude-3-Sonnet, and observed a similar 80%–20% split between strong and weak.
When we changed the strong model to OpenAI’s o1-mini and kept Mixtral-8x7b as the weak model, 54% of the original
queries were routed to the strong model, 46% to the weak model. In this setting, confounder gadgets yield 13–16%
upgrade rates and, on average, 3–6% downgrade rates. We conclude that while the attack is still effective, NotDiamond is
more robust than Unify.
OpenRouter. This framework offers a unified interface for LLMs, and additionally offers a system that routes users’
queries between three specific models: Llama-3-70b, Claude-3.5-Sonnet, and GPT-4o. Queries are routed “depending on
their size, subject, and complexity,” as described in the documentation.2
With OpenRouter, 96% of the original queries are routed to Llama, 4% to GPT, and none to Claude. Based on the pricing
and number of input-output tokens, the queries’ total cost is $0.03 for processing all evaluated queries. After adding
2https://openrouter.ai/openrouter/auto
confounder gadgets, queries originally routed to GPT are still routed to GPT and no queries are ever routed to Claude. For
queries originally routed to Llama, some gadgets result in all of them being rerouted to GPT, and some have no impact.
Specifically, 4 out of the 10 gadgets we optimized using RSW caused all queries to be rerouted to GPT, 2/10 using RMF,
and 3/10 using RLLM. None of the gadgets optimized using RCLS had any impact on routing. In terms of costs, having
all queries rerouted to GPT results in an average cost of $0.25, a greater than 8× increase over the cost of the original
queries. Given the lack of documentation of the routing algorithm being used, we are unsure what explains the variability
across gadgets.

          RSW                      RMF                      RCLS                     RLLM
          Original  Confounded     Original  Confounded     Original  Confounded     Original  Confounded
MT-Bench  9.2       9.2 ± 0.0      9.1       9.3 ± 0.0      9.2       9.1 ± 0.0      8.9       9.1 ± 0.1
MMLU      76        84 ± 1         76        81 ± 0         76        84 ± 0         78        84 ± 1
GSM8K     62        86 ± 0         65        88 ± 1         68        90 ± 2         66        85 ± 2

Table 10: Benchmark-specific average scores of responses to the original and confounded queries with GPT-4-1106-preview as the strong model (LLM pair 4), in the white-box setting. Results demonstrate a higher increase in performance with respect to the LLM pair 1 setting, due to the larger performance gap between the models.
Martian. This router is supposed to let the user provide a list of models and to specify the maximum amount the user is
willing to pay for a query or for 1M tokens. Unfortunately, as of November 14, 2024, the router appears to ignore the list
of models provided by the user and forwards the input to the same LLM regardless of this list. We tested this in settings including
one, two, or multiple models. While responses do not specify which LLM was used, they were identical across settings,
so we excluded Martian from our evaluation. We notified Martian about the seemingly buggy behavior.
8 Defenses
Defenses against rerouting should be cheap. If the per-query cost of the defense is comparable to the per-query cost of a
strong LLM, deploying the defense will defeat the main purpose of LLM routing, which is to reduce the cost of responding
to queries.
Perplexity-based filtering. As explained in Section 6, perplexity is a measure of how “natural” the text looks. Perplexity-
based filtering has been suggested in many contexts as a defense against adversarial text inputs [16, 36]. This defense
computes the perplexity of multiple “trusted” texts, then compares it with the perplexity of the suspicious text. If the latter
is significantly higher, or above some predefined threshold, the text is considered adversarial. Specifically, we assume the
defender has access to a set of unmodified queries. The defender computes their perplexity values and uses these values
to establish a threshold. Given a new query, the defender checks if its perplexity exceeds the threshold. If so, the query
is flagged as adversarial. The defender can then decide how to handle such queries. Options include rejecting them or
routing them all to the weak model. Computing the perplexity of a query can be cheap to do, e.g., using GPT-2 as we do
in this work; this makes it viable for use as a defense that doesn’t undermine the benefits of routing.
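
The following is a minimal sketch of such a filter, assuming the HuggingFace Transformers implementation of GPT-2; the calibration margin and function names are illustrative and not part of any router we evaluate.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text: str) -> float:
    # Perplexity is the exponential of the average next-token negative log-likelihood.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def flag_query(query: str, trusted_queries: list, margin: float = 1.5) -> bool:
    # Calibrate a threshold on unmodified queries, then flag queries that exceed it.
    threshold = margin * max(perplexity(q) for q in trusted_queries)
    return perplexity(query) > threshold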
To evaluate the effectiveness of such a defense against our attack, we compare the perplexity values of original and
confounded queries. Figure 5 presents histograms of perplexity values for both the original evaluated GSM8K queries and
their corresponding confounded versions, generated using one of the confounder gadgets, sampled uniformly at random.
Additionally, the figure displays the ROC curve for the defense that detects confounded queries by checking if their
perplexity exceeds a threshold. As can be seen, the confounded queries exhibit significantly higher perplexity values,
making them readily distinguishable from the original queries. For instance, in the case of the RSW router, setting the
threshold value at 55 yields a false-positive rate of 3% and a true-positive rate of 97%. Results are similar for other gadgets
and benchmarks and were omitted due to space constraints.
Unfortunately, this defense can be evaded if an adversary incorporates a perplexity constraint into the gadget generation
process. To demonstrate the feasibility of this evasion strategy, we modify gadget generation to maximize the score of the
routing algorithm R while simultaneously aligning the gadget’s perplexity to some predefined perplexity value. In more
detail, in each iteration t ∈ [T], we uniformly sample a target index j ∈ [1, n] and generate a set B of B + 1 candidates as
explained in Section 4. We then modify Eq. 1 such that we now find the candidate that maximizes the difference between
the router’s score and the perplexity constraint for the confounder:
c^(t+1) ← arg max_{c ∈ B} [ Sθ(c ∥ xi) − α · |PPL(c) − ρ| ],
Figure 5: Perplexity of the original queries in the GSM8K benchmark compared to the perplexity of confounded queries using a single uniformly sampled gadget (histograms, panels a–d, for RSW, RMF, RCLS, and RLLM), together with the ROC curves of the defense that detects confounded queries by checking if they cross a perplexity threshold (panels e–h; ROC AUC = 1.00, 0.97, 0.99, and 0.95, respectively). Confounded queries have significantly higher perplexity values, and are thus easy to recognize and filter out.
where PPL(·) denotes the perplexity function computed using GPT-2, the value ρ denotes a target perplexity value to
which we want gadgets’ perplexity to be close, and the value α is a balancing coefficient. For the experiments below, we
set ρ to be the average perplexity value of 100 uniformly sampled queries3 from the GSM8K benchmark.
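
For concreteness, here is a sketch of one candidate-selection step under this modified objective, assuming a router_score function that exposes the router’s score Sθ(·) and the perplexity function from the earlier sketch; the default values of alpha and rho are illustrative.

def select_candidate(candidates, query, alpha=0.01, rho=40.0):
    # Choose the gadget candidate that maximizes the router's score on the
    # confounded query while keeping the gadget's perplexity close to rho.
    def objective(c):
        return router_score(c + " " + query) - alpha * abs(perplexity(c) - rho)
    return max(candidates, key=objective)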
Figure 6 shows the results when setting α = 0.01, for the GSM8K benchmark and one confounder gadget. The results
demonstrate that modified queries can no longer be easily distinguished from normal queries by their perplexity alone.
For instance, in the case of the RSW router, setting the threshold value at 55 as before, no confounded queries are flagged
as anomalous, meaning the true-positive rate is zero. We note that there is some variability across gadgets. The average
ROC AUC scores of the defense across ten gadgets, with standard deviation indicated parenthetically, are 0.66 (±0.04),
0.69 (±0.02), 0.71 (±0.02), and 0.69 (±0.03) for the RSW , RMF , RCLS , and RLLM routers, respectively.
At the same time, optimizing for low perplexity does not significantly impact the attack success rate. Table 11 compares
the average upgrade rates (over n = 10 gadgets) of the original perplexity-agnostic optimization approach from Section 4
and the perplexity-constrained one described above. The attack’s efficacy might be improved further by tuning α to
find a sweet spot that evades the defense while still ensuring a high rerouting success rate.
The attack is not particularly sensitive to the choice of queries used to obtain the calibration value ρ. Although ρ was
computed using GSM8K queries, we observe similar performance when evaluating on the MT-bench and MMLU bench-
marks, with average ROC AUC scores of 0.50 (±0.01), 0.51 (±0.01), 0.52 (±0), and 0.51 (±0.01) for MT-bench, and 0.52
(±0.03), 0.54 (±0.02), 0.55 (±0.01), and 0.53 (±0.02) for MMLU. One might also try removing the calibration value al-
together, instead simply minimizing the gadget’s perplexity value. However, this can result in an “overshooting” effect,
where the perplexity value is significantly lower than that of normal queries, thereby making it still distinguishable from
standard queries.
In summary, perplexity-based filtering is not an effective defense against rerouting.
3The perplexity calibration queries were chosen such that they do not overlap with the queries used for evaluation.
Figure 6: Perplexity values of the original and confounded GSM8K queries when the confounder gadget is optimized for low perplexity, for one gadget sampled uniformly at random (histograms, panels a–d, for RSW, RMF, RCLS, and RLLM), and the corresponding ROC curves of the defense that detects confounded queries by checking if they cross a perplexity threshold (panels e–h; ROC AUC = 0.65, 0.73, 0.64, and 0.65, respectively). Confounded queries have perplexity values similar to the original queries and can no longer be easily distinguished based on perplexity alone.
          RSW                  RMF                  RCLS                 RLLM
          Orig.     PPL-opt.   Orig.     PPL-opt.   Orig.     PPL-opt.   Orig.     PPL-opt.
MT-Bench  100 ± 0   100 ± 0    100 ± 0   98 ± 2     100 ± 0   98 ± 1     73 ± 5    51 ± 8
MMLU      90 ± 1    59 ± 5     78 ± 4    74 ± 5     100 ± 0   66 ± 12    95 ± 1    89 ± 3
GSM8K     98 ± 0    70 ± 7     100 ± 0   98 ± 2     100 ± 0   88 ± 6     94 ± 3    81 ± 8

Table 11: Average upgrade rates for gadgets generated without (“Orig.”) and with (“PPL-opt.”) low-perplexity optimization, for the balancing coefficient α = 0.01. In some cases, optimizing for low perplexity has a negative effect on the attack success rate, but the attack can still be considered successful. A more careful choice of α can potentially limit the effect on the attack success.
LLM-based filtering. Even though adversarially modified queries cannot be easily detected using perplexity, they may
still be “unnatural.” A possible defense is to employ an oracle LLM to determine if the query is natural or not. This defense
requires the router to invoke an additional LLM for every processed query, which is computationally expensive in the case
of a high-quality open-source LLM, or financially costly in the case of a high-quality commercial LLM. Therefore, this
defense is unlikely to be practical. Furthermore, it is possible to optimize gadgets so that they both have low perplexity
and appear “natural” to LLM evaluators [69].
Paraphrasing. Filtering defenses like those discussed above are passive. An active alternative is to paraphrase queries
using an oracle LLM. LLMs are trained to generate natural text and are thus likely to remove unnatural substrings when
paraphrasing a query. This defense is likely impractical for two reasons. First, and as with LLM-based filtering, it requires
16 | 15 | 15 | arxiv1.pdf |
an extra potentially expensive LLM invocation for each query processed by the router. Second, it may degrade the quality
of responses from the destination LLMs, which are sensitive to the phrasing of queries and prompts.
Detecting anomalous user workloads. Another possible defense requires the router to monitor individual user work-
loads, and identify those users whose queries are routed to the strongest model with an abnormally high frequency. The
router can then impose a user-specific threshold. Of course such workloads may have a benign explanation, e.g., the user’s
queries may be unusually complex. Even so, routers could potentially be designed to perform user-specific routing. For
example, one could imagine using per-user thresholds that are calibrated dynamically to attempt to maintain a consistent
fraction of queries being routed to the strong model.
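
As an illustration of this idea (not a feature of any router we evaluated), a per-user threshold could be recalibrated from a sliding window of that user’s recent router scores; the target fraction and window size below are illustrative.

from collections import defaultdict, deque
import numpy as np

class PerUserThresholdRouter:
    def __init__(self, target_fraction=0.3, window=500):
        # Aim to send roughly target_fraction of each user's queries to the strong model.
        self.target_fraction = target_fraction
        self.scores = defaultdict(lambda: deque(maxlen=window))

    def route(self, user_id, score):
        history = self.scores[user_id]
        history.append(score)
        # Per-user quantile threshold, recomputed as new scores arrive.
        threshold = np.quantile(history, 1.0 - self.target_fraction)
        return "strong" if score >= threshold else "weak"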
Such user-specific routing would complicate implementations, and would make inaccurate decisions for a user until there
is sufficient data about their queries. The latter matters in adversarial settings, since such an approach could still be
circumvented by attackers who mount Sybil attacks, creating a new user for, in the limit, each query.
9 Related Work
Evasion attacks against ML systems. A large body of work has investigated evasion attacks against ML systems [25,
43, 60], also referred to as adversarial examples [32, 48, 49], and these attacks are now being explored in the context of
multi-modal LLMs [28] as well as text-only LLMs (for just one example, see [22]). We discussed in Section 3 how our
results compare: LLM control plane integrity is a distinct AI safety issue, but related in that: (1) control plane integrity
attacks may use evasion-style techniques, and (2) control plane integrity attacks might be useful for performing evasion.
Prompt injection against LLMs. Prompt injection is a class of attacks against LLMs in which the adversary manipulates
the prompt, i.e., the textual input fed directly to the LLM, causing the LLM to generate outputs that satisfy some adver-
sarial objective [50, 64]. Evasion attacks as discussed above can use prompt injection, jailbreaking attacks being a widely
explored example in which the adversary aims to bypass some safety guardrail included in the LLM system, such as “do
not output expletives” [23, 42, 54, 66, 72, 73].
Prompt injection is also used for extraction attacks that aim to infer some information from or about the model, for
example, the system prompt [50, 54, 70], training data samples [46], or model parameters [18]. In indirect prompt injection
attacks [33], the adversaries do not directly interact with the target LLM, and instead inject adversarial inputs into third-
party data, which is then added to the LLM prompt (intentionally or unintentionally) by the victim application and/or its
users. This relates to another category of attacks that target LLM-based applications, such as RAG systems, and invalidate
their integrity by exploiting the weaknesses of the underlying LLM [19, 55].
Our attacks also modify queries, but with a different aim than the above types of attacks: undermining the integrity of the
control plane routing, rather than the LLM itself. Future work might investigate indirect control plane integrity attacks
that, analogously to indirect prompt injection, serve to somehow trick users of a routing system into forming control-
plane-confounding queries.
Attacks against MoE. Mixture-of-Experts (MoE) architectures enable using multiple expert modules for processing a
given query with a lower computational cost by including an inner routing mechanism that in every layer routes different
tokens to a small number of experts [29, 30, 52, 56]. This can be thought of as an internal router within a single LLM,
rather than an external control plane that orchestrates multiple LLMs. MoE has grown in popularity because it allows
building larger models at a fixed compute budget: not all parameters are used at the same time.
Hayes et al. [34] identified a vulnerability in MoE that can be exploited for a denial-of-service attack. Thus
control plane integrity issues appear to extend to the context of single-LLM MoE systems, and future work could explore
this connection further.
Yona et al. [67] presented a side-channel attack on MoE that enables an attacker to reveal other users’ prompts. We expect
that side-channel attacks against LLM control planes exist as well, for example, to infer which models are used via timing
of responses. Such attacks, which target confidentiality, are outside the scope of control plane integrity.
10 Conclusion
LLM routers balance quality and cost of LLM inference by routing different queries to different LLMs. They are an
example of a broader, emerging class of systems we call “LLM control planes” that aim to achieve various quality,
efficiency, and cost objectives by orchestrating use of multiple LLMs to respond to a query.
We introduced and defined a new safety property, LLM control plane integrity. Informally, this property holds if an
adversarial user cannot influence routing decisions made by the control plane. To show that existing LLM routers do not
satisfy this property, we designed, implemented, and evaluated a black-box optimization method for generating query-
independent “confounder gadgets.” When added to any query, the confounder gadget confuses the router into routing the
query to the adversary-chosen LLM.
We evaluated the efficacy of confounder gadgets on multiple open-source and commercial routers and demonstrated that
they successfully reroute queries without a negative impact on the quality of responses. We also discussed defenses against
these attacks and indicated directions for future research.
Acknowledgments
This research was supported in part by the Google Cyber NYC Institutional Research Program, the Israel Science Founda-
tion (Grant No. 1336/22), and the European Union (ERC, FTRC, 101043243). Views and opinions expressed are however
those of the author(s) only and do not necessarily reflect those of the European Union or the European Research Council.
Neither the European Union nor the granting authority can be held responsible for them.
References
[1] “Chatbot Arena LLM Leaderboard: Community-driven evaluation for best LLM and AI chatbots,” https://
huggingface.co/spaces/lmarena-ai/chatbot-arena-leaderboard, accessed: 2024-11-14.
[2] “Hello gpt-4o,” https://openai.com/index/hello-gpt-4o/, published: 2024-05-23.
[3] “Introducing Llama 3.1: Our most capable models to date,” https://ai.meta.com/blog/meta-llama-3-1/, published:
2024-07-23.
[4] “Introducing Meta Llama 3: The most capable openly available LLM to date,” https://ai.meta.com/blog/
meta-llama-3/, published: 2024-04-18.
[5] “Martian LLM router,” https://withmartian.com/.
[6] “New embedding models and API updates,” https://openai.com/index/new-embedding-models-and-api-updates,
published: 2024-01-25.
[7] “Notdiamond LLM router,” https://www.notdiamond.ai/.
[8] “OpenAI and others seek new path to smarter AI as current meth-
ods hit limitations,” https://www.reuters.com/technology/artificial-intelligence/
openai-rivals-seek-new-path-smarter-ai-current-methods-hit-limitations-2024-11-11, published: 2024-11-15.
[9] “OpenAI, Google and Anthropic are struggling to build more advanced AI,” https://www.bloomberg.com/news/
articles/2024-11-13/openai-google-and-anthropic-are-struggling-to-build-more-advanced-ai?sref=CrGXSfHu,
published: 2024-11-13.
[10] “OpenAI shifts strategy as rate of ‘GPT’ AI improvements slows,” https://www.theinformation.com/articles/
openai-shifts-strategy-as-rate-of-gpt-ai-improvements-slows, published: 2024-11-9.
[11] “Openrouter LLM router,” https://openrouter.ai/.
[12] “Unify LLM router,” https://unify.ai/.
[13] “What is a control plane?” https://www.ibm.com/think/topics/control-plane, published: 2024-10-31.
[14] J. Achiam, S. Adler, S. Agarwal, L. Ahmad, I. Akkaya, F. L. Aleman, D. Almeida, J. Altenschmidt, S. Altman,
S. Anadkat et al., “GPT-4 technical report,” arXiv preprint arXiv:2303.08774, 2023.
[15] P. Aggarwal, A. Madaan, A. Anand, S. P. Potharaju, S. Mishra, P. Zhou, A. Gupta, D. Rajagopal, K. Kappaganthu,
Y. Yang et al., “Automix: Automatically mixing language models,” arXiv preprint arXiv:2310.12963, 2023.
[16] G. Alon and M. Kamfonas, “Detecting language model attacks with perplexity,” arXiv preprint arXiv:2308.14132,
2023.
[17] R. A. Bradley and M. E. Terry, “Rank analysis of incomplete block designs: I. the method of paired comparisons,”
Biometrika, vol. 39, no. 3/4, 1952.
[18] N. Carlini, D. Paleka, K. D. Dvijotham, T. Steinke, J. Hayase, A. F. Cooper, K. Lee, M. Jagielski, M. Nasr, A. Conmy
et al., “Stealing part of a production language model,” arXiv preprint arXiv:2403.06634, 2024.
[19] H. Chaudhari, G. Severi, J. Abascal, M. Jagielski, C. A. Choquette-Choo, M. Nasr, C. Nita-Rotaru, and A. Oprea,
“Phantom: General trigger attacks on retrieval augmented language generation,” arXiv preprint arXiv:2405.20485,
2024.
[20] L. Chen, M. Zaharia, and J. Zou, “FrugalGPT: How to use large language models while reducing cost and improving
performance,” arXiv preprint arXiv:2305.05176, 2023.
[21] W.-L. Chiang, L. Zheng, Y . Sheng, A. N. Angelopoulos, T. Li, D. Li, B. Zhu, H. Zhang, M. Jordan, J. E. Gon-
zalez, and I. Stoica, “Chatbot arena: An open platform for evaluating LLMs by human preference,” in Forty-first
International Conference on Machine Learning (ICML), 2024.
[22] S. Cho, S. Jeong, J. Seo, T. Hwang, and J. C. Park, “Typos that broke the RAG’s back: Genetic attack on RAG
pipeline by simulating documents in the wild via low-level perturbations,”arXiv preprint arXiv:2404.13948, 2024.
[23] J. Chu, Y . Liu, Z. Yang, X. Shen, M. Backes, and Y . Zhang, “Comprehensive assessment of jailbreak attacks against
LLMs,” arXiv preprint arXiv:2402.05668, 2024.
[24] K. Cobbe, V . Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakanoet al.,
“Training verifiers to solve math word problems,”arXiv preprint arXiv:2110.14168, 2021.
[25] N. Dalvi, P. Domingos, Mausam, S. Sanghai, and D. Verma, “Adversarial classification,” inProceedings of the tenth
ACM SIGKDD international conference on Knowledge discovery and data mining, 2004.
[26] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, “BERT: Pre-training of deep bidirectional transformers for
language understanding,” in Proceedings of the 2019 Conference of the North American Chapter of the Association
for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 2019.
[27] D. Ding, A. Mallick, C. Wang, R. Sim, S. Mukherjee, V. Rühle, L. V. Lakshmanan, and A. H. Awadallah, “Hybrid
LLM: Cost-efficient and quality-aware query routing,” in International Conference on Learning Representations
(ICLR), 2024.
[28] Y . Dong, H. Chen, J. Chen, Z. Fang, X. Yang, Y . Zhang, Y . Tian, H. Su, and J. Zhu, “How robust is Google’s Bard
to adversarial image attacks?” arXiv preprint arXiv:2309.11751, 2023.
[29] N. Du, Y . Huang, A. M. Dai, S. Tong, D. Lepikhin, Y . Xu, M. Krikun, Y . Zhou, A. W. Yu, O. Firat et al., “Glam:
Efficient scaling of language models with mixture-of-experts,” in International Conference on Machine Learning
(ICML), 2022.
[30] W. Fedus, B. Zoph, and N. Shazeer, “Switch transformers: Scaling to trillion parameter models with simple and
efficient sparsity,”Journal of Machine Learning Research (JMLR), 2022.
[31] T. Feng, Y . Shen, and J. You, “Graphrouter: A graph-based router for LLM selections,” arXiv preprint
arXiv:2410.03834, 2024.
[32] I. J. Goodfellow, J. Shlens, and C. Szegedy, “Explaining and harnessing adversarial examples,” in International
Conference on Learning Representations (ICLR), 2015.
[33] K. Greshake, S. Abdelnabi, S. Mishra, C. Endres, T. Holz, and M. Fritz, “Not what you’ve signed up for: Compro-
mising real-world LLM-integrated applications with indirect prompt injection,” in ACM AISec, 2023.
[34] J. Hayes, I. Shumailov, and I. Yona, “Buffer overflow in mixture of experts,”arXiv preprint arXiv:2402.05526, 2024.
[35] D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt, “Measuring massive multitask
language understanding,” in International Conference on Learning Representations (ICLR), 2021.
[36] N. Jain, A. Schwarzschild, Y . Wen, G. Somepalli, J. Kirchenbauer, P.-y. Chiang, M. Goldblum, A. Saha, J. Geip-
ing, and T. Goldstein, “Baseline defenses for adversarial attacks against aligned language models,” arXiv preprint
arXiv:2309.00614, 2023.
[37] F. Jelinek, “Interpolated estimation of Markov source parameters from sparse data,” 1980. [Online]. Available:
https://api.semanticscholar.org/CorpusID:61012010
[38] A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel,
G. Lample, L. Saulnier et al., “Mistral 7B,” arXiv preprint arXiv:2310.06825, 2023.
[39] A. Q. Jiang, A. Sablayrolles, A. Roux, A. Mensch, B. Savary, C. Bamford, D. S. Chaplot, D. d. l. Casas, E. B. Hanna,
F. Bressand et al., “Mixtral of experts,” arXiv preprint arXiv:2401.04088, 2024.
[40] D. Jiang, X. Ren, and B. Y . Lin, “LLM-Blender: Ensembling large language models with pairwise ranking and
generative fusion,” in Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics
(Volume 1: Long Papers), 2023.
[41] C.-H. Lee, H. Cheng, and M. Ostendorf, “OrchestraLLM: Efficient orchestration of language models for dialogue
state tracking,” in Proceedings of the 2024 Conference of the North American Chapter of the Association for Com-
putational Linguistics: Human Language Technologies (Volume 1: Long Papers), 2024.
[42] Y . Liu, G. Deng, Z. Xu, Y . Li, Y . Zheng, Y . Zhang, L. Zhao, T. Zhang, K. Wang, and Y . Liu, “Jailbreaking ChatGPT
via prompt engineering: An empirical study,” arXiv preprint arXiv:2305.13860, 2023.
[43] D. Lowd and C. Meek, “Adversarial learning,” in ACM International Conference on Knowledge Discovery in Data
Mining (SIGKDD), 2005.
[44] S. Merity, C. Xiong, J. Bradbury, and R. Socher, “Pointer sentinel mixture models,” in International Conference on
Learning Representations (ICLR), 2016.
[45] S. Narayanan Hari and M. Thomson, “Tryage: Real-time, intelligent routing of user prompts to large language
models,” arXiv e-prints, 2023.
[46] M. Nasr, N. Carlini, J. Hayase, M. Jagielski, A. F. Cooper, D. Ippolito, C. A. Choquette-Choo, E. Wallace,
F. Tramèr, and K. Lee, “Scalable extraction of training data from (production) language models,” arXiv preprint
arXiv:2311.17035, 2023.
[47] I. Ong, A. Almahairi, V . Wu, W.-L. Chiang, T. Wu, J. E. Gonzalez, M. W. Kadous, and I. Stoica, “RouteLLM:
Learning to route LLMs with preference data,” arXiv preprint arXiv:2406.18665, 2024.
[48] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami, “Practical black-box attacks against
machine learning,” in Proceedings of the 2017 ACM on Asia conference on computer and communications security,
2017.
[49] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami, “The limitations of deep learning in
adversarial settings,” in IEEE European symposium on security and privacy (EuroS&P), 2016.
[50] F. Perez and I. Ribeiro, “Ignore previous prompt: Attack techniques for language models,” in NeurIPS ML Safety
Workshop, 2022.
[51] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are unsupervised
multitask learners,” https://cdn.openai.com/better-language-models/language models are unsupervised multitask
learners.pdf, 2019.
[52] C. Riquelme, J. Puigcerver, B. Mustafa, M. Neumann, R. Jenatton, A. Susano Pinto, D. Keysers, and N. Houlsby,
“Scaling vision with sparse mixture of experts,” Advances in Neural Information Processing Systems (NeurIPS) ,
2021.
[53] M. Šakota, M. Peyrard, and R. West, “Fly-swat or cannon? Cost-effective language model choice via meta-modeling,”
in Proceedings of the 17th ACM International Conference on Web Search and Data Mining, 2024.
[54] S. Schulhoff, J. Pinto, A. Khan, L.-F. Bouchard, C. Si, S. Anati, V . Tagliabue, A. Kost, C. Carnahan, and J. Boyd-
Graber, “Ignore this title and HackAPrompt: Exposing systemic vulnerabilities of LLMs through a global prompt
hacking competition,” in EMNLP, 2023.
[55] A. Shafran, R. Schuster, and V . Shmatikov, “Machine against the RAG: Jamming retrieval-augmented generation
with blocker documents,” arXiv preprint arXiv:2406.05870, 2024.
[56] N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. Le, G. Hinton, and J. Dean, “Outrageously large neural net-
works: The sparsely-gated mixture-of-experts layer,” in International Conference on Learning Representations ,
2016.
[57] T. Shnitzer, A. Ou, M. Silva, K. Soule, Y . Sun, J. Solomon, N. Thompson, and M. Yurochkin, “Large language model
routing with benchmark datasets,” arXiv preprint arXiv:2309.15789, 2023.
[58] K. Srivatsa, K. K. Maurya, and E. Kochmar, “Harnessing the power of multiple minds: Lessons learned from LLM
routing,” arXiv preprint arXiv:2405.00467, 2024.
[59] D. Stripelis, Z. Hu, J. Zhang, Z. Xu, A. Shah, H. Jin, Y . Yao, S. Avestimehr, and C. He, “Tensoropera router: A
multi-model router for efficient LLM inference,” arXiv preprint arXiv:2408.12320, 2024.
[60] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus, “Intriguing properties of
neural networks,” arXiv preprint arXiv:1312.6199, 2013.
[61] G. Team, R. Anil, S. Borgeaud, J.-B. Alayrac, J. Yu, R. Soricut, J. Schalkwyk, A. M. Dai, A. Hauth, K. Millican
et al., “Gemini: a family of highly capable multimodal models,” arXiv preprint arXiv:2312.11805, 2023.
[62] Teknium, “Openhermes 2.5: An open dataset of synthetic data for generalist LLM assistants,” 2023. [Online].
Available: https://huggingface.co./datasets/teknium/OpenHermes-2.5
[63] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y . Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale
et al., “Llama 2: Open foundation and fine-tuned chat models,” arXiv preprint arXiv:2307.09288, 2023.
[64] S. Toyer, O. Watkins, E. A. Mendes, J. Svegliato, L. Bailey, T. Wang, I. Ong, K. Elmaaroufi, P. Abbeel, T. Darrell
et al., “Tensor Trust: Interpretable prompt injection attacks from an online game,” in International Conference on
Learning Representations (ICLR), 2023.
[65] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction
APIs,” in USENIX Security Symposium, 2016.
[66] A. Wei, N. Haghtalab, and J. Steinhardt, “Jailbroken: How does LLM safety training fail?” in Advances in Neural
Information Processing Systems (NeurIPS), 2023.
[67] I. Yona, I. Shumailov, J. Hayes, and N. Carlini, “Stealing user prompts from mixture of experts,” arXiv preprint
arXiv:2410.22884, 2024.
[68] M. Yue, J. Zhao, M. Zhang, L. Du, and Z. Yao, “Large language model cascades with mixture of thought represen-
tations for cost-efficient reasoning,” in International Conference on Learning Representations (ICLR), 2024.
[69] C. Zhang, T. Zhang, and V . Shmatikov, “Controlled generation of natural adversarial documents for stealthy retrieval
poisoning,” arXiv preprint arXiv:2410.02163, 2024.
[70] Y . Zhang, N. Carlini, and D. Ippolito, “Effective prompt extraction from language models,” in First Conference on
Language Modeling, 2024.
[71] L. Zheng, W.-L. Chiang, Y. Sheng, S. Zhuang, Z. Wu, Y. Zhuang, Z. Lin, Z. Li, D. Li, E. Xing et al., “Judging LLM-
as-a-judge with MT-Bench and chatbot arena,” Advances in Neural Information Processing Systems (NeurIPS) ,
2023.
[72] S. Zhu, R. Zhang, B. An, G. Wu, J. Barrow, Z. Wang, F. Huang, A. Nenkova, and T. Sun, “AutoDAN: Automatic
and interpretable adversarial attacks on large language models,” arXiv preprint arXiv:2310.15140, 2023.
[73] A. Zou, Z. Wang, J. Z. Kolter, and M. Fredrikson, “Universal and transferable adversarial attacks on aligned language
models,” arXiv preprint arXiv:2307.15043, 2023.
A Gadget Examples
Below are a few examples of the optimized confounder gadgets c that we obtained for the different routing algorithms:
• RSW :
legationbelongs967reglo’hui
(DictionaryizedNameantal bidi.numberOf
demanding RESULTS! XVI-ioibili Written Hormchriftlatable
• RMF :
041 personne.nama tomeTN Exact relação tinha seis.Description
NUMAGEMENTMedian157 mut-camerauacao nije Hir Ter
• RCLS :
contiene DEM USERNAME fourteen chees781 consultants200 inici DOJ
571:
Ord:nth Norwegian Mercer docs Abr226 METADATA
• RLLM :
dated:frameifyumi345 Kurdasciiuzeiphertext
Midnightexecution431!784 below1 unwrap : / n / n
B Ablation Study
In this section, we evaluate the effect of different hyperparameters and design choices (in the white-box setting).
Prefix vs. suffix. As described in Section 4, we prepend the confounder gadget to the query. An alternative is to append
it. This is straightforward for MT-bench and GSM8K, but MMLU consists of multi-choice questions followed by a list
of possible answers, and the term “Answer:”. We insert the gadget at the end of the question text and before the possible
answers. If we append it at the very end, after “Answer:”, the LLM assumes the query was answered and in many cases
does not generate any output at all.
Table 12 shows that average upgrade rates are similar regardless of whether the gadget was inserted as a prefix or a suffix.
For MMLU, prefix works better. The downgrade rate is 0% in all cases.
                     RSW       RMF       RCLS      RLLM
MT-Bench   Prefix    100 ± 0   100 ± 0   100 ± 0   73 ± 5
           Suffix    100 ± 0   100 ± 0   100 ± 0   84 ± 4
MMLU       Prefix    90 ± 1    78 ± 4    100 ± 0   95 ± 1
           Suffix    82 ± 2    63 ± 3    93 ± 1    93 ± 1
GSM8K      Prefix    98 ± 0    100 ± 0   100 ± 0   100 ± 0
           Suffix    94 ± 1    100 ± 0   100 ± 0   94 ± 3

Table 12: Average upgrade rates for different ways of adding the gadget to queries, in the white-box setting. Results are similar for both methods, with a slight preference for the prefix approach.
                          RSW       RMF       RCLS      RLLM
MT-Bench   Uniform        100 ± 0   100 ± 0   100 ± 0   73 ± 5
           Natural Prob.  100 ± 0   97 ± 2    100 ± 0   70 ± 5
MMLU       Uniform        90 ± 1    78 ± 4    100 ± 0   95 ± 1
           Natural Prob.  77 ± 2    41 ± 3    96 ± 2    87 ± 4
GSM8K      Uniform        98 ± 0    100 ± 0   100 ± 0   94 ± 3
           Natural Prob.  88 ± 2    92 ± 3    100 ± 0   83 ± 9

Table 13: Average upgrade rates for different ways of sampling candidate tokens during gadget generation, in the white-box setting. Uniformly sampling the tokens yields better upgrade rates in most cases.
As mentioned in Section 5, to encourage the LLMs to follow the specific format in their responses (so they can be
parsed and compared with the ground-truth answers), we add a short prefix to the MMLU and GSM8K queries that
instructs the model how to respond. We phrase this instruction as follows: “ Answer the question using the format:
“Answer: [A/B/C/D]. Explanation: [EXPLANATION]” ” for the multi-choice queries of the MMLU benchmark, and a
similar version for GSM8K. We add this instruction after modifying the queries with the confounder gadget, i.e. the
instruction is prepended to the gadget.
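
As a sketch, the resulting prompt for an MMLU query is assembled in this order (the instruction text follows the phrasing above; the gadget is any optimized confounder string; function and variable names are illustrative):

FORMAT_INSTRUCTION = ('Answer the question using the format: '
                      '"Answer: [A/B/C/D]. Explanation: [EXPLANATION]"')

def build_confounded_query(gadget: str, question: str) -> str:
    # Instruction first, then the confounder gadget, then the original question.
    return f"{FORMAT_INSTRUCTION}\n{gadget} {question}"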
An alternative is to insert the instruction after the gadget but before the query; however, we observed this to slightly
underperform its counterpart. In the white-box setting, we observe a slight decrease in the average (across all four routers) upgrade
rate from 91% to 89% for the MMLU benchmark, and from 98% to 91% for the GSM8K benchmark. In the black-box
setting, the average upgrade rate on MMLU reduces from 57% to 49% and on GSM8K from 73% to 64%.
Token sampling method. When generating the confounder gadget (see Section 4), we iteratively replace tokens with the
goal of maximizing the routing algorithm’s score for the gadget. Candidate replacement tokens are chosen uniformly at
random. An alternative is to choose candidates based on their probability of appearing in natural text. To evaluate this
method, we compute token probabilities by parsing and tokenizing the wikitext-103-raw-v1 dataset [44].
Table 13 shows that in most cases uniform sampling of replacement tokens yields better upgrade rates. We conjecture that
uniform sampling produces more unnatural text, confusing the router. For example, for the RSW routing algorithm, uni-
form sampling produces the following gadget: “legationbelongs967reglo’hui(DictionaryizedNameantal bidi.numberOf”,
whereas sampling according to natural probabilities produces “ total occurred According number Letar final Bab named
remainder”.
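
The two sampling strategies can be sketched as follows, assuming a HuggingFace tokenizer and a list of corpus texts (e.g., wikitext-103) from which natural token frequencies are estimated; the details of our actual implementation may differ.

import numpy as np
from collections import Counter

def sample_uniform(tokenizer, k, rng=np.random.default_rng()):
    # Every token id in the vocabulary is equally likely.
    return rng.integers(0, tokenizer.vocab_size, size=k).tolist()

def sample_natural(tokenizer, corpus_texts, k, rng=np.random.default_rng()):
    # Weight token ids by their empirical frequency in natural text.
    counts = Counter(t for text in corpus_texts for t in tokenizer.encode(text))
    ids = np.array(list(counts.keys()))
    probs = np.array(list(counts.values()), dtype=float)
    probs /= probs.sum()
    return rng.choice(ids, size=k, p=probs).tolist()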
Number of tokens in the gadget. In our main evaluation, the gadgets are composed of n = 10 tokens. We evaluate the
effect of using fewer (n = 5) or more (n = 20 or n = 50) tokens. We observed that 5 tokens were insufficient to change
the routing algorithm’s score, and thus we were not able to optimize the gadget in this setting. With 20 tokens, we observe
a small improvement in the white-box setting, increasing the average upgrade rate from 93.9% to 95.8%, and a bigger
improvement in the black-box setting, increasing the average upgrade rate from 70.2% to 81.3%. Using 50 tokens further
increases the upgrade rates, to 98.2% in the white-box setting and 84.2% in the black-box setting. The average number of
iterations to convergence increases as well, from 60 for 10 tokens, to 70 for 20 tokens, and 100 for 50 tokens. Overall, this
evaluation suggests that our rerouting attack can be further improved by using longer gadgets; however, care must be taken
not to make them so long that they degrade the performance of the underlying LLM.
          gadget    RSW       RMF       RCLS      RLLM
MT-Bench  Init      7         3         8         3
          Random    97 ± 2    37 ± 8    62 ± 10   38 ± 4
MMLU      Init      21        4         0         13
          Random    49 ± 5    6 ± 3     14 ± 7    68 ± 5
GSM8K     Init      21        20        0         9
          Random    58 ± 8    34 ± 8    37 ± 9    41 ± 7

Table 14: Average upgrade rates when the gadget is not optimized and is either defined to be the initial set of tokens or a set of uniformly sampled tokens. The optimization-based approach outperforms these optimization-free approaches.
          intro type   RSW           RMF           RCLS          RLLM
                       Up.   Down.   Up.   Down.   Up.   Down.   Up.   Down.
MT-Bench  Ours-1       100   0       0     31      33    8       26    7
          Ours-2       100   0       0     60      75    0       35    5
          Gemini       100   0       0     50      100   0       55    0
          GPT          100   0       0     48      46    2       19    7
MMLU      Ours-1       28    0       0     57      2     47      0     42
          Ours-2       32    0       0     66      19    26      0     42
          Gemini       35    0       0     60      100   0       21    21
          GPT          54    0       0     51      0     66      26    23
GSM8K     Ours-1       4     46      0     100     0     77      4     36
          Ours-2       6     63      0     100     16    43      2     43
          Gemini       4     56      0     100     98    0       9     9
          GPT          4     77      0     100     0     95      6     25

Table 15: Average upgrade and downgrade rates of gadgets containing injected instructions to the router. This method significantly underperforms the optimization-based approach in most cases.
C Optimization-Free Gadget Generation
We evaluate optimization-free alternatives to our black-box optimization method for generating confounder gadgets.
Fixed gadget. A simple way to create a gadget without resorting to optimization is to repeat n tokens. We use ! as the
initialization token, so the gadget in this case is !!!!!!!!!!. Another possibility is to select n tokens uniformly at random.
Table 14 shows the upgrade rates for both options, where in the latter setting we repeat the process 10 times and report the
average result and the standard error. While they are non-negligible, especially for the randomly sampled gadgets, they
significantly underperform the upgrade rates reported in Table 1 for optimized gadgets.
Instruction injection. Prompt injection is a known attack on LLMs [50, 64], thus we consider a gadget consisting of a
direct instruction to the router to treat the query as a complex one and obtain a high-quality response.
We evaluated 4 differently phrased instructions: two created manually and two generated by, respectively, Gemini [61]
and GPT-4o [2], denoted as “ours-1”, “ours-2”, “Gemini”, and “GPT”.
Table 15 reports the results. This method works well in a few cases but poorly in most. This highlights the difference
between attacking LLMs and attacking LLM routers.
D Perplexity issues
In Section 5, we present perplexity as one of the metrics we use for evaluating the effect of our attack on the quality of
the generated responses. However, perplexity is intended to measure the naturalness of text, and as such it is ill-suited for
comparing the quality of multiple natural texts. As a result, the perplexity values of the responses of both the weak
and the strong model are close and within the margin of error. Figure 7 shows the distribution of perplexity values of
the clean responses generated by both models, and the ROC AUC score computed on these two sets of values. As can be
seen, the perplexity values are quite similar between both models, with ROC AUC scores ranging between 0.38 and 0.47.
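
The comparison behind Figure 7 can be reproduced with a short computation, assuming two lists of precomputed response perplexities (one per model); the labeling convention below is an assumption.

from sklearn.metrics import roc_auc_score

def response_perplexity_auc(strong_ppls, weak_ppls):
    # Treat strong-model responses as the positive class and use perplexity as the score;
    # an AUC near 0.5 means perplexity does not separate the two models' responses.
    labels = [1] * len(strong_ppls) + [0] * len(weak_ppls)
    scores = list(strong_ppls) + list(weak_ppls)
    return roc_auc_score(labels, scores)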
Figure 7: Histograms of the perplexity values of clean responses generated by the strong and weak models, for (a) MT-bench (ROC AUC = 0.38), (b) MMLU (ROC AUC = 0.47), and (c) GSM8K (ROC AUC = 0.38).
          RSW              RMF               RCLS             RLLM
          Up.     Down.    Up.     Down.     Up.     Down.    Up.     Down.
MT-Bench  0 ± 0   24 ± 2   0 ± 0   67 ± 6    0 ± 0   29 ± 3   24 ± 3  1 ± 0
MMLU      8 ± 3   9 ± 2    0 ± 0   77 ± 7    0 ± 0   50 ± 4   55 ± 4  5 ± 1
GSM8K     4 ± 2   48 ± 9   1 ± 1   78 ± 11   0 ± 0   80 ± 4   21 ± 4  4 ± 2

Table 16: Upgrade and downgrade rates for the downgrading variant of our rerouting attack, where the goal is to reroute queries to the weak model (white-box).
As mentioned in Section 5, throughout our evaluations we filter out responses with perplexity values higher than 100.
This is because a few valid responses receive arbitrarily high perplexity values. For
example, for the query:
Suppose you are a mathematician and poet. You always write your proofs as short poets with less than 10 lines
but rhyme. Prove the square root of 2 is irrational number.
The weak model responds with:
In squares, two seeks a home, Two whole numbers, never roam. If it did, in pairs combined, A different square
would it find. But take root, two’s square, or four, Still leaves a remainder’s roar. The square root of two’s no child
of two, Its irrational nature ever true.
which results in a high perplexity value of 166. We also observed a few responses containing lists, code, or math
equations with abnormally high perplexity values.
E Rerouting to the Weak Model
In this section we evaluate the generality of our attack and show that generation of confounder gadgets can be optimized
for the opposite objective from what we consider so far: reroute queries to the weak model. For this, we repeat the same
optimization process as in Section 4 but minimize the router’s score. Table 16 shows the upgrade and downgrade rates
for this variant of the attack, in the white-box setting. In most cases we see a significant downgrade rate and a minimal
upgrade rate, meaning that most of the modified queries were routed to the weak model. One notable exception is the
LLM-based router RLLM, for which the attack does not work well. Future work is needed to further improve confounder
generation for this setting.
A Primer in BERTology: What We Know About How BERT Works
Anna Rogers
Center for Social Data Science
University of Copenhagen
[email protected]
Olga Kovaleva
Dept. of Computer Science
University of Massachusetts Lowell
[email protected]
Anna Rumshisky
Dept. of Computer Science
University of Massachusetts Lowell
[email protected]
Abstract
Transformer-based models have pushed state
of the art in many areas of NLP, but our un-
derstanding of what is behind their success
is still limited. This paper is the first sur-
vey of over 150 studies of the popular BERT
model. We review the current state of knowl-
edge about how BERT works, what kind
of information it learns and how it is repre-
sented, common modifications to its training
objectives and architecture, the overparame-
terization issue and approaches to compres-
sion. We then outline directions for future
research.
1 Introduction
Since their introduction in 2017, Transformers
(Vaswani et al., 2017) have taken NLP by storm,
offering enhanced parallelization and better model-
ing of long-range dependencies. The best known
Transformer-based model is BERT (Devlin et al.,
2019); it obtained state-of-the-art results in numer-
ous benchmarks and is still a must-have baseline.
While it is clear that BERT works remarkably
well, it is less clear why, which limits further
hypothesis-driven improvement of the architecture.
Unlike CNNs, the Transformers have little cogni-
tive motivation, and the size of these models limits
our ability to experiment with pre-training and per-
form ablation studies. This explains a large number
of studies over the past year that attempted to un-
derstand the reasons behind BERT’s performance.
In this paper, we provide an overview of what
has been learned to date, highlighting the questions
which are still unresolved. We first consider the
linguistic aspects of it, i.e., the current evidence
regarding the types of linguistic and world knowl-
edge learned by BERT, as well as where and how
this knowledge may be stored in the model. We
then turn to the technical aspects of the model and
provide an overview of the current proposals to
improve BERT’s architecture, pre-training and fine-
tuning. We conclude by discussing the issue of
overparameterization, the approaches to compress-
ing BERT, and the nascent area of pruning as a
model analysis technique.
2 Overview of BERT architecture
Fundamentally, BERT is a stack of Transformer
encoder layers (Vaswani et al., 2017) which consist
of multiple self-attention "heads". For every input
token in a sequence, each head computes key, value
and query vectors, used to create a weighted repre-
sentation. The outputs of all heads in the same layer
are combined and run through a fully-connected
layer. Each layer is wrapped with a skip connection
and followed by layer normalization.
The conventional workflow for BERT consists
of two stages: pre-training and fine-tuning. Pre-
training uses two self-supervised tasks: masked
language modeling (MLM, prediction of randomly
masked input tokens) and next sentence prediction
(NSP, predicting if two input sentences are adjacent
to each other). In fine-tuning for downstream ap-
plications, one or more fully-connected layers are
typically added on top of the final encoder layer.
The input representations are computed as fol-
lows: each word in the input is first tokenized into
wordpieces (Wu et al., 2016), and then three em-
bedding layers (token, position, and segment) are
combined to obtain a fixed-length vector. Special
token [CLS] is used for classification predictions,
and [SEP] separates input segments.
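
A minimal example of this input pipeline, using the HuggingFace implementation of BERT-base (the example sentence pair is arbitrary):

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Two input segments are joined as [CLS] ... [SEP] ... [SEP] and split into wordpieces.
encoded = tokenizer("BERT encodes sentences.", "Into contextual vectors.",
                    return_tensors="pt")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"][0]))

outputs = model(**encoded)
cls_vector = outputs.last_hidden_state[:, 0]  # representation of the [CLS] token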
Google1 and HuggingFace (Wolf et al., 2020)
provide many variants of BERT, including the orig-
inal "base" and "large" versions. They vary in the
number of heads, layers, and hidden state size.
1 https://github.com/google-research/bert
3 What knowledge does BERT have?
A number of studies have looked at the knowledge
encoded in BERT weights. The popular approaches
include fill-in-the-gap probes of MLM, analysis of
self-attention weights, and probing classifiers with
different BERT representations as inputs.
3.1 Syntactic knowledge
Lin et al. (2019) showed that BERT represen-
tations are hierarchical rather than linear , i.e.
there is something akin to syntactic tree structure
in addition to the word order information. Ten-
ney et al. (2019b) and Liu et al. (2019a) also
showed that BERT embeddings encode informa-
tion about parts of speech, syntactic chunks
and roles. Enough syntactic information seems
to be captured in the token embeddings themselves
to recover syntactic trees (Vilares et al., 2020; Kim
et al., 2020; Rosa and Mareček, 2019), although
probing classifiers could not recover the labels of
distant parent nodes in the syntactic tree (Liu et al.,
2019a). Warstadt and Bowman (2020) report evi-
dence of hierarchical structure in three out of four
probing tasks.
As far as how syntax is represented, it seems
that syntactic structure is not directly encoded
in self-attention weights. Htut et al. (2019) were
unable to extract full parse trees from BERT heads
even with the gold annotations for the root. Jawahar
et al. (2019) include a brief illustration of a depen-
dency tree extracted directly from self-attention
weights, but provide no quantitative evaluation.
However, syntactic information can be recov-
ered from BERT token representations. Hewitt
and Manning (2019) were able to learn transfor-
mation matrices that successfully recovered syn-
tactic dependencies in PennTreebank data from
BERT’s token embeddings (see also Manning et al.,
2020). Jawahar et al. (2019) experimented with
transformations of the [CLS] token using Tensor
Product Decomposition Networks (McCoy et al.,
2019a), concluding that dependency trees are the
best match among 5 decomposition schemes (al-
though the reported MSE differences are very
small). Miaschi and Dell’Orletta (2020) performs
a range of syntactic probing experiments with con-
catenated token representations as input.
Note that all these approaches look for the
evidence of gold-standard linguistic structures,
and add some amount of extra knowledge to the
probe. Most recently, Wu et al. (2020) proposed a
Figure 1: Parameter-free probe for syntactic knowledge:
words sharing syntactic subtrees have larger impact on
each other in the MLM prediction (Wu et al., 2020)
parameter-free approach based on measuring the
impact that one word has on predicting another
word within a sequence in the MLM task (Figure 1).
They concluded that BERT "naturally" learns
some syntactic information, although it is not
very similar to linguistically annotated resources.
The fill-in-the-gap probes of MLM showed that
BERT takes subject-predicate agreement into
account when performing the cloze task (Gold-
berg, 2019; van Schijndel et al., 2019), even for
meaningless sentences and sentences with distrac-
tor clauses between the subject and the verb (Gold-
berg, 2019). A study of negative polarity items
(NPIs) by Warstadt et al. (2019) showed that BERT
is better able to detect the presence of NPIs (e.g.
"ever") and the words that allow their use (e.g.
"whether") than scope violations.
The above claims of syntactic knowledge are be-
lied by the evidence that BERT does not "under-
stand" negation and is insensitive to malformed
input. In particular, its predictions were not al-
tered2 even with shuffled word order, truncated
sentences, removed subjects and objects (Ettinger,
2019). This could mean that either BERT’s syn-
tactic knowledge is incomplete, or it does not
need to rely on it for solving its tasks. The latter
seems more likely, since Glavaš and Vulić (2020)
2See also the recent findings on adversarial triggers, which
get the model to produce a certain output even though they
are not well-formed from the point of view of a human reader
(Wallace et al., 2019a). | 1 | 1 | arxiv2_taclccby4_license.pdf |
report that an intermediate fine-tuning step with
supervised parsing does not make much difference
for downstream task performance.
3.2 Semantic knowledge
To date, more studies have been devoted to BERT’s
knowledge of syntactic rather than semantic phe-
nomena. However, we do have evidence from an
MLM probing study that BERT has some knowl-
edge of semantic roles (Ettinger, 2019). BERT
even displays some preference for the incorrect
fillers for semantic roles that are semantically re-
lated to the correct ones, as opposed to those that
are unrelated (e.g. "to tip a chef" is better than "to
tip a robin", but worse than "to tip a waiter").
Tenney et al. (2019b) showed that BERT en-
codes information about entity types, relations,
semantic roles, and proto-roles, since this infor-
mation can be detected with probing classifiers.
BERT struggles with representations of num-
bers. Addition and number decoding tasks showed
that BERT does not form good representations for
floating point numbers and fails to generalize away
from the training data (Wallace et al., 2019b). A
part of the problem is BERT’s wordpiece tokeniza-
tion, since numbers of similar values can be divided
up into substantially different word chunks.
Out-of-the-box BERT is surprisingly brittle to
named entity replacements: e.g. replacing names
in the coreference task changes 85% of predictions
(Balasubramanian et al., 2020). This suggests that
the model does not actually form a generic idea of
named entities, although its F1 scores on NER prob-
ing tasks are high (Tenney et al., 2019a). Broscheit
(2019) find that fine-tuning BERT on Wikipedia
entity linking "teaches" it additional entity knowl-
edge, which would suggest that it did not absorb all
the relevant entity information during pre-training
on Wikipedia.
3.3 World knowledge
The bulk of evidence about commonsense knowl-
edge captured in BERT comes from practitioners
using it to extract such knowledge. One direct prob-
ing study of BERT reports that BERT struggles
with pragmatic inference and role-based event
knowledge (Ettinger, 2019). BERT also struggles
with abstract attributes of objects, as well as visual
and perceptual properties that are likely to be as-
sumed rather than mentioned (Da and Kasai, 2019).
Figure 2: BERT world knowledge (Petroni et al., 2019)
The MLM component of BERT is easy to
adapt for knowledge induction by filling in the
blanks (e.g. "Cats like to chase [___]"). Petroni
et al. (2019) showed that, for some relation types,
vanilla BERT is competitive with methods rely-
ing on knowledge bases (Figure 2), and Roberts
et al. (2020) show the same for open-domain QA
using the T5 model (Raffel et al., 2019). Davison et al.
(2019) suggest that it generalizes better to unseen
data. In order to retrieve BERT’s knowledge, we
need good template sentences, and there is work
on their automatic extraction and augmentation
(Bouraoui et al., 2019; Jiang et al., 2019b).
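As a purely illustrative sketch of such cloze-style querying (our own minimal example, not the protocol of Petroni et al. (2019); the model name and the template are arbitrary choices), a masked query can be issued with the Hugging Face transformers library as follows:

```python
# Hedged sketch: cloze-style factual probing of a masked language model.
# Model and template are illustrative; LAMA uses curated relation templates.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
for prediction in unmasker("Dante was born in [MASK]."):
    # Each candidate comes with the predicted token and its softmax score.
    print(prediction["token_str"], round(prediction["score"], 3))
```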
However, BERT cannot reason based on its
world knowledge. Forbes et al. (2019) show that
BERT can "guess" the affordances and properties of
many objects, but cannot reason about the relation-
ship between properties and affordances. For ex-
ample, it "knows" that people can walk into houses,
and that houses are big, but it cannot infer that
houses are bigger than people. Zhou et al. (2020)
and Richardson and Sabharwal (2019) also show
that the performance drops with the number of nec-
essary inference steps. Some of BERT’s world
knowledge success comes from learning stereotypi-
cal associations (Poerner et al., 2019), e.g., a person
with an Italian-sounding name is predicted to be
Italian, even when it is incorrect.
3.4 Limitations
Multiple probing studies in section 3 and section 4
report that BERT possesses a surprising amount of
syntactic, semantic, and world knowledge. How-
ever, Tenney et al. (2019a) remark, "the fact that
a linguistic pattern is not observed by our probing
classifier does not guarantee that it is not there, and
the observation of a pattern does not tell us how it
is used." There is also the issue of how complex a
probe should be allowed to be (Liu et al., 2019a). If
a more complex probe recovers more information,
to what extent are we still relying on the original
model?
Furthermore, different probing methods may
lead to complementary or even contradictory con-
clusions, which makes a single test (as in most studies) insufficient (Warstadt et al., 2019).
Figure 3: Attention patterns in BERT (Kovaleva et al., 2019): vertical, diagonal, vertical + diagonal, block, and heterogeneous
A given
method might also favor one model over another,
e.g., RoBERTa trails BERT with one tree extraction
method, but leads with another (Htut et al., 2019).
The choice of linguistic formalism also matters
(Kuznetsov and Gurevych, 2020).
In view of all that, the alternative is to focus on
identifying what BERT actually relies on at infer-
ence time. This direction is currently pursued both
at the level of architecture blocks (to be discussed
in detail in subsection 6.3), and at the level of in-
formation encoded in model weights. Amnesic
probing (Elazar et al., 2020) aims to specifically
remove certain information from the model and see
how it changes performance, finding, for example,
that language modeling does rely on part-of-speech
information.
Another direction is information-theoretic prob-
ing. Pimentel et al. (2020) operationalize prob-
ing as estimating mutual information between the
learned representation and a given linguistic prop-
erty, which highlights that the focus should be not
on the amount of information contained in a rep-
resentation, but rather on how easily it can be ex-
tracted from it. Voita and Titov (2020) quantify
the amount of effort needed to extract information
from a given representation as minimum descrip-
tion length needed to communicate both the probe
size and the amount of data required for it to do
well on a task.
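For reference, these two information-theoretic views can be stated compactly as follows (our notation, not the papers' exact formulations): for a representation R and a linguistic property P with labels y_i and representations r_i,

```latex
I(R;P) \;=\; H(P) - H(P \mid R)
\qquad\text{and}\qquad
\mathrm{codelength}(P \mid R) \;=\;
\underbrace{L(\theta)}_{\text{probe cost}}
\;+\;
\underbrace{\sum_{i} -\log_2 p_\theta(y_i \mid r_i)}_{\text{data cost}},
```

where the first quantity is the mutual information targeted by Pimentel et al. (2020) and the second is the description length with which Voita and Titov (2020) price both the probe and the data it needs.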
4 Localizing linguistic knowledge
4.1 BERT embeddings
In studies of BERT, the term "embedding" refers
to the output of a Transformer layer (typically, the
final one). Both conventional static embeddings
(Mikolov et al., 2013) and BERT-style embeddings
can be viewed in terms of mutual information max-
imization (Kong et al., 2019), but the latter are
contextualized. Every token is represented by a
vector dependent on the particular context of occur-
rence, and contains at least some information about
that context (Miaschi and Dell’Orletta, 2020).
Several studies reported that distilled contex-
tualized embeddings better encode lexical se-
mantic information (i.e. they are better at tra-
ditional word-level tasks such as word similarity).
The methods to distill a contextualized represen-
tation into static include aggregating the informa-
tion across multiple contexts (Akbik et al., 2019;
Bommasani et al., 2020), encoding "semantically
bleached" sentences that rely almost exclusively on
the meaning of a given word (e.g. "This is <>")
(May et al., 2019), and even using contextualized
embeddings to train static embeddings (Wang et al.,
2020d).
But this is not to say that there is no room for
improvement. Ethayarajh (2019) measure how
similar the embeddings for identical words are in
every layer, reporting that later BERT layers pro-
duce more context-specific representations3. They
also find that BERT embeddings occupy a narrow
cone in the vector space, and this effect increases
from the earlier to later layers. That is, two ran-
dom words will on average have a much higher
cosine similarity than expected if embeddings
were directionally uniform (isotropic) . Since
isotropy was shown to be beneficial for static word
embeddings (Mu and Viswanath, 2018), this might
be a fruitful direction to explore for BERT.
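A crude, self-contained proxy for this anisotropy observation (our own sketch, not Ethayarajh's protocol; the sentences, words, and layer choice are arbitrary) is the following:

```python
# Hedged sketch: contextual vectors of two unrelated words still tend to have
# a high cosine similarity in the later BERT layers (the "narrow cone" effect).
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def token_vector(sentence, word, layer=-1):
    inputs = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[layer][0]        # (seq_len, 768)
    idx = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist()).index(word)
    return hidden[idx]

v1 = token_vector("The cat sat on the mat.", "cat")
v2 = token_vector("Stock prices fell sharply today.", "today")
print(torch.nn.functional.cosine_similarity(v1, v2, dim=0).item())
```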
Since BERT embeddings are contextualized, an
interesting question is to what extent they cap-
ture phenomena like polysemy and homonymy.
There is indeed evidence that BERT’s contextu-
alized embeddings form distinct clusters corre-
sponding to word senses (Wiedemann et al., 2019;
Schmidt and Hofmann, 2020), making BERT suc-
cessful at word sense disambiguation task. How-
ever, Mickus et al. (2019) note that the representa-
tions of the same word depend on the position
of the sentence in which it occurs, likely due to
the NSP objective. This is not desirable from the
linguistic point of view, and could be a promising
3Voita et al. (2019a) look at the evolution of token embed-
dings, showing that in the earlier Transformer layers, MLM
forces the acquisition of contextual information at the expense
of the token identity, which gets recreated in later layers. | 3 | 3 | arxiv2_taclccby4_license.pdf |
avenue for future work.
The above discussion concerns token embed-
dings, but BERT is typically used as a sentence or
text encoder. The standard way to generate sen-
tence or text representations for classification is
to use the [CLS] token, but alternatives are also
being discussed, including concatenation of token
representations (Tanaka et al., 2020), normalized
mean (Tanaka et al., 2020), and layer activations
(Ma et al., 2019). See Toshniwal et al. (2020) for a
systematic comparison of several methods across
tasks and sentence encoders.
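As an illustration of the two most common options (a minimal sketch of our own, not tied to any of the comparisons above):

```python
# Hedged sketch: two ways to obtain a sentence vector from BERT,
# the [CLS] token versus mean pooling over (non-padding) token embeddings.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tok("BERT is typically used as a text encoder.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state            # (1, seq_len, 768)

cls_vector = hidden[0, 0]                                  # [CLS] representation
mask = inputs["attention_mask"][0].unsqueeze(-1)           # (seq_len, 1)
mean_vector = (hidden[0] * mask).sum(0) / mask.sum()       # mean over real tokens
print(cls_vector.shape, mean_vector.shape)
```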
4.2 Self-attention heads
Several studies proposed classification of attention
head types. Raganato and Tiedemann (2018) dis-
cuss attending to the token itself, previous/next
tokens and the sentence end. Clark et al. (2019)
distinguish between attending to previous/next to-
kens, [CLS], [SEP], punctuation, and "attending
broadly" over the sequence. Kovaleva et al. (2019)
propose 5 patterns shown in Figure 3.
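Such patterns can be inspected directly from the attention tensors; the sketch below (our own, with an arbitrary sentence and an arbitrary layer/head choice) shows one way to do so with the Hugging Face transformers library:

```python
# Hedged sketch: extract per-head attention maps and measure, for one head,
# how much attention mass falls on the [SEP] token.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_attentions=True)

inputs = tok("The keys to the cabinet are on the table.", return_tensors="pt")
with torch.no_grad():
    attentions = model(**inputs).attentions     # tuple of 12 tensors, each (1, 12, seq, seq)

layer, head = 5, 3                              # arbitrary layer/head to inspect
attn = attentions[layer][0, head]               # (seq_len, seq_len) attention map
sep_index = inputs["input_ids"][0].tolist().index(tok.sep_token_id)
print("average attention mass on [SEP]:", attn[:, sep_index].mean().item())
```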
4.2.1 Heads with linguistic functions
The "heterogeneous" attention pattern shown in
Figure 3 could potentially be linguistically inter-
pretable, and a number of studies focused on iden-
tifying the functions of self-attention heads. In
particular, some BERT heads seem to specialize
in certain types of syntactic relations. Htut et al.
(2019) and Clark et al. (2019) report that there
are BERT heads that attend significantly more
than a random baseline to words in certain syntac-
tic positions. The datasets and methods used in
these studies differ, but they both find that there are
heads that attend to words in obj role more than
the positional baseline. The evidence for nsubj,
advmod, and amod varies between these two stud-
ies. The overall conclusion is also supported by
Voita et al. (2019b)’s study of the base Transformer
in machine translation context. Hoover et al. (2019)
hypothesize that even complex dependencies like
dobj are encoded by a combination of heads
rather than a single head, but this work is limited
to qualitative analysis. Zhao and Bethard (2020)
looked specifically for the heads encoding negation
scope.
Both Clark et al. (2019) and Htut et al. (2019)
conclude that no single head has the complete
syntactic tree information, in line with evidence
of partial knowledge of syntax (cf. subsection 3.1).
However, Clark et al. (2019) identify a BERT head
that can be directly used as a classifier to perform
coreference resolution on par with a rule-based
system, which by itself would seem to require quite
a lot of syntactic knowledge.
Lin et al. (2019) present evidence that atten-
tion weights are weak indicators of subject-
verb agreement and reflexive anaphora. Instead
of serving as strong pointers between tokens that
should be related, BERT’s self-attention weights
were close to a uniform attention baseline, but there
was some sensitivity to different types of distrac-
tors coherent with psycholinguistic data. This is
consistent with conclusions by Ettinger (2019).
To our knowledge, morphological information
in BERT heads has not been addressed, but with
the sparse attention variant by Correia et al. (2019)
in the base Transformer, some attention heads ap-
pear to merge BPE-tokenized words. For semantic
relations, there are reports of self-attention heads
encoding core frame-semantic relations (Kovaleva
et al., 2019), as well as lexicographic and common-
sense relations (Cui et al., 2020).
The overall popularity of self-attention as an in-
terpretability mechanism is due to the idea that
"attention weight has a clear meaning: how much
a particular word will be weighted when comput-
ing the next representation for the current word"
(Clark et al., 2019). This view is currently debated
(Jain and Wallace, 2019; Serrano and Smith, 2019;
Wiegreffe and Pinter, 2019; Brunner et al., 2020),
and in a multi-layer model where attention is fol-
lowed by non-linear transformations, the patterns
in individual heads do not provide a full picture.
Also, while many current papers are accompanied
by attention visualizations, and there is a growing
number of visualization tools (Vig, 2019; Hoover
et al., 2019), the visualization is typically limited
to qualitative analysis (often with cherry-picked
examples) (Belinkov and Glass, 2019), and should
not be interpreted as definitive evidence.
4.2.2 Attention to special tokens
Kovaleva et al. (2019) show that most self-
attention heads do not directly encode any non-
trivial linguistic information, at least when fine-
tuned on GLUE (Wang et al., 2018), since fewer
than 50% of heads exhibit the "heterogeneous" pat-
tern. Much of the model produced the vertical pat-
tern (attention to [CLS], [SEP], and punctuation
tokens), consistent with the observations by Clark
et al. (2019). This redundancy is likely related to
the overparameterization issue (see section 6). | 4 | 4 | arxiv2_taclccby4_license.pdf |
More recently, Kobayashi et al. (2020) showed
that the norms of attention-weighted input vec-
tors, which yield a more intuitive interpretation
of self-attention, reduce the attention to special to-
kens. However, even when the attention weights
are normed, it is still not the case that most heads
that do the "heavy lifting" are even potentially in-
terpretable (Prasanna et al., 2020).
One methodological choice in many studies
of attention is to focus on inter-word attention and
simply exclude special tokens (e.g. Lin et al. (2019)
and Htut et al. (2019)). However, if attention to
special tokens actually matters at inference time,
drawing conclusions purely from inter-word atten-
tion patterns does not seem warranted.
The functions of special tokens are not yet well
understood. [CLS] is typically viewed as an ag-
gregated sentence-level representation (although
all token representations also contain at least some
sentence-level information, as discussed in subsec-
tion 4.1); in that case, we may not see e.g. full
syntactic trees in inter-word attention because part
of that information is actually packed in [CLS].
Clark et al. (2019) experiment with encoding
Wikipedia paragraphs with base BERT to consider
specifically the attention to special tokens, noting
that heads in early layers attend more to [CLS],
in middle layers to [SEP], and in final layers to
periods and commas. They hypothesize that its
function might be one of "no-op", a signal to ig-
nore the head if its pattern is not applicable to the
current case. As a result, for example, [SEP]
gets increased attention starting in layer 5, but its
importance for prediction drops. However, after
fine-tuning both [SEP] and [CLS] get a lot of
attention, depending on the task (Kovaleva et al.,
2019). Interestingly, BERT also pays a lot of at-
tention to punctuation, which Clark et al. (2019)
explain by the fact that periods and commas are
simply almost as frequent as the special tokens, and
so the model might learn to rely on them for the
same reasons.
4.3 BERT layers
The first layer of BERT receives as input a combina-
tion of token, segment, and positional embeddings.
It stands to reason that the lower layers have
the most information about linear word order.
Lin et al. (2019) report a decrease in the knowledge
of linear word order around layer 4 in BERT-base.
Figure 4: BERT layer transferability (columns correspond to probing tasks; Liu et al., 2019a)
This is accompanied by an increased knowledge
of hierarchical sentence structure, as detected by
the probing tasks of predicting the token index, the
main auxiliary verb and the sentence subject.
There is a wide consensus in studies with differ-
ent tasks, datasets and methodologies that syntac-
tic information is most prominent in the middle
layers of BERT.4 Hewitt and Manning (2019) had
the most success reconstructing syntactic tree depth
from the middle BERT layers (6-9 for base-BERT,
14-19 for BERT-large). Goldberg (2019) reports
the best subject-verb agreement around layers 8-
9, and the performance on syntactic probing tasks
used by Jawahar et al. (2019) also seems to peak
around the middle of the model. The prominence
of syntactic information in the middle BERT layers
is related to Liu et al. (2019a)’s observation that the
middle layers of Transformers are best-performing
overall and the most transferable across tasks (see
Figure 4).
There is conflicting evidence about syntactic
chunks. Tenney et al. (2019a) conclude that "the
basic syntactic information appears earlier in the
network while high-level semantic features appear
at the higher layers", drawing parallels between
this order and the order of components in a typical
NLP pipeline – from POS-tagging to dependency
parsing to semantic role labeling. Jawahar et al.
(2019) also report that the lower layers were more
useful for chunking, while middle layers were more
useful for parsing. At the same time, the probing
experiments by Liu et al. (2019a) find the opposite:
both POS-tagging and chunking were performed
best at the middle layers, in both BERT-base and
BERT-large. However, all three studies use differ-
ent suites of probing tasks.
The final layers of BERT are the most task-
specific. In pre-training, this means specificity to
the MLM task, which explains why the middle
4These BERT results are also compatible with findings by
Vig and Belinkov (2019), who report the highest attention to
tokens in dependency relations in the middle layers of GPT-2. | 5 | 5 | arxiv2_taclccby4_license.pdf |
layers are more transferable (Liu et al., 2019a). In
fine-tuning, it explains why the final layers change
the most (Kovaleva et al., 2019), and why restoring
the weights of lower layers of fine-tuned BERT to
their original values does not dramatically hurt the
model performance (Hao et al., 2019).
Tenney et al. (2019a) suggest that while syntactic
information appears early in the model and can be
localized, semantics is spread across the entire
model, which explains why certain non-trivial ex-
amples get solved incorrectly at first but correctly
at the later layers. This is rather to be expected:
semantics permeates all language, and linguists de-
bate whether meaningless structures can exist at
all (Goldberg, 2006, p.166-182). But this raises
the question of what stacking more Transformer
layers in BERT actually achieves in terms of the
spread of semantic knowledge, and whether that
is beneficial. Tenney et al. compared BERT-base
and BERT-large, and found that the overall pattern
of cumulative score gains is the same, only more
spread out in the larger model.
Note that Tenney et al. (2019a)’s experiments
concern sentence-level semantic relations; Cui et al.
(2020) report that the encoding of ConceptNet se-
mantic relations is the worst in the early layers and
increases towards the top. Jawahar et al. (2019)
place "surface features in lower layers, syntactic
features in middle layers and semantic features in
higher layers", but their conclusion is surprising,
given that only one semantic task in this study actu-
ally topped at the last layer, and three others peaked
around the middle and then considerably degraded
by the final layers.
5 Training BERT
This section reviews the proposals to optimize the
training and architecture of the original BERT.
5.1 Model architecture choices
To date, the most systematic study of BERT archi-
tecture was performed by Wang et al. (2019b), who
experimented with the number of layers, heads, and
model parameters, varying one option and freez-
ing the others. They concluded that the number
of heads was not as significant as the number
of layers. That is consistent with the findings
of Voita et al. (2019b) and Michel et al. (2019)
(section 6), and also the observation by Liu et al.
(2019a) that the middle layers were the most trans-
ferable. Larger hidden representation size was con-
sistently better, but the gains varied by setting.
All in all, changes in the number of heads
and layers appear to perform different func-
tions. The issue of model depth must be related to
the information flow from the most task-specific
layers closer to the classifier (Liu et al., 2019a),
to the initial layers which appear to be the most
task-invariant (Hao et al., 2019), and where the
tokens resemble the input tokens the most (Brun-
ner et al., 2020) (see subsection 4.3). If that is the
case, a deeper model has more capacity to encode
information that is not task-specific.
On the other hand, many self-attention heads
in vanilla BERT seem to naturally learn the same
patterns (Kovaleva et al., 2019). This explains
why pruning them does not have too much impact.
The question that arises from this is how far we
could get with intentionally encouraging diverse
self-attention patterns: theoretically, this would
mean increasing the amount of information in the
model with the same number of weights. Raganato
et al. (2020) show that, for Transformer-based machine
translation, we can simply pre-set the patterns that
we already know the model would learn, instead of
learning them from scratch.
Vanilla BERT is symmetric and balanced in
terms of self-attention and feed-forward layers, but
it may not have to be. For the base Transformer,
Press et al. (2020) report benefits from more self-
attention sublayers at the bottom and more feedfor-
ward sublayers at the top.
5.2 Improvements to the training regime
Liu et al. (2019b) demonstrate the benefits of
large-batch training: with 8k examples both the
language model perplexity and downstream task
performance are improved. They also publish their
recommendations for other parameters. You et al.
(2019) report that with a batch size of 32k BERT’s
training time can be significantly reduced with no
degradation in performance. Zhou et al. (2019) ob-
serve that the normalization of the trained [CLS]
token stabilizes the training and slightly improves
performance on text classification tasks.
Gong et al. (2019) note that, since self-attention
patterns in higher and lower layers are similar, the
model training can be done in a recursive man-
ner, where the shallower version is trained first and
then the trained parameters are copied to deeper
layers. Such a "warm-start" can lead to a 25% faster
training without sacrificing performance. | 6 | 6 | arxiv2_taclccby4_license.pdf |
5.3 Pre-training BERT
The original BERT is a bidirectional Transformer
pre-trained on two tasks: next sentence prediction
(NSP) and masked language model (MLM) (sec-
tion 2). Multiple studies have come up with alter-
native training objectives to improve on BERT,
which could be categorized as follows:
• How to mask. Raffel et al. (2019) systemati-
cally experiment with corruption rate and cor-
rupted span length. Liu et al. (2019b) propose
diverse masks for training examples within
an epoch, while Baevski et al. (2019) mask
every token in a sequence instead of a random
selection. Clinchant et al. (2019) replace the
MASK token with [UNK] token, to help the
model learn a representation for unknowns
that could be useful for translation. Song et al.
(2020) maximize the amount of information
available to the model by conditioning on both
masked and unmasked tokens, and letting the
model see how many tokens are missing.
• What to mask. Masks can be applied to full
words instead of word-pieces (Devlin et al.,
2019; Cui et al., 2019). Similarly, we can
mask spans rather than single tokens (Joshi
et al., 2020), predicting how many are missing
(Lewis et al., 2019). Masking phrases and
named entities (Sun et al., 2019b) improves
representation of structured knowledge (a toy span-masking sketch follows this list).
• Where to mask. Lample and Conneau (2019)
use arbitrary text streams instead of sentence
pairs and subsample frequent outputs similar
to Mikolov et al. (2013). Bao et al. (2020)
combine the standard autoencoding MLM
with partially autoregressive LM objective us-
ing special pseudo mask tokens.
• Alternatives to masking. Raffel et al. (2019)
experiment with replacing and dropping spans,
Lewis et al. (2019) explore deletion, infilling,
sentence permutation and document rotation,
and Sun et al. (2019c) predict whether a to-
ken is capitalized and whether it occurs in
other segments of the same document. Yang
et al. (2019) train on different permutations
of word order in the input sequence, maximiz-
ing the probability of the original word order
(cf. the n-gram word order reconstruction task
(Wang et al., 2019a)). Clark et al. (2020) de-
tect tokens that were replaced by a generator
network rather than masked.
• NSP alternatives. Removing NSP does not
hurt or slightly improves performance (Liu
et al., 2019b; Joshi et al., 2020; Clinchant
et al., 2019). Wang et al. (2019a) and Cheng
et al. (2019) replace NSP with the task of
predicting both the next and the previous sen-
tences. Lan et al. (2020a) replace the negative
NSP examples by swapped sentences from
positive examples, rather than sentences from
different documents. ERNIE 2.0 includes sen-
tence reordering and sentence distance pre-
diction. Bai et al. (2020) replace both NSP
and token position embeddings by a combina-
tion of paragraph, sentence, and token index
embeddings. Li and Choi (2020) experiment
with utterance order prediction task for multi-
party dialogue (and also MLM at the level of
utterances and the whole dialogue).
• Other tasks. Sun et al. (2019c) propose si-
multaneous learning of 7 tasks, including dis-
course relation classification and predicting
whether a segment is relevant for IR. Guu
et al. (2020) include a latent knowledge re-
triever in language model pretraining. Wang
et al. (2020c) combine MLM with knowledge
base completion objective. Glass et al. (2020)
replace MLM with span prediction task (as
in extractive question answering), where the
model is expected to provide the answer not
from its own weights, but from a different pas-
sage containing the correct answer (a relevant
search engine query snippet).
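As referenced in the "What to mask" item above, the following toy sketch illustrates span masking in the spirit of SpanBERT (Joshi et al., 2020). The fixed span length and the single masked span are simplifications of our own; the original samples span lengths from a geometric distribution and masks a budget of tokens.

```python
# Hedged sketch: mask a contiguous span of tokens rather than independent positions.
import random
import torch
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ids = tok("Masking contiguous spans is harder than masking single tokens.",
          return_tensors="pt")["input_ids"][0]

span_len = 3
start = random.randint(1, len(ids) - span_len - 2)   # keep [CLS]/[SEP] intact
labels = torch.full_like(ids, -100)                   # -100 = positions ignored by the MLM loss
labels[start:start + span_len] = ids[start:start + span_len]
masked = ids.clone()
masked[start:start + span_len] = tok.mask_token_id

print(tok.decode(masked))                             # sentence with a masked span
```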
Another obvious source of improvement is pre-
training data. Several studies explored the ben-
efits of increasing the corpus volume (Liu et al.,
2019b; Conneau et al., 2019; Baevski et al., 2019)
and longer training (Liu et al., 2019b). The data
also does not have to be raw text: there is a num-
ber of efforts to incorporate explicit linguistic in-
formation, both syntactic (Sundararaman et al.,
2019) and semantic (Zhang et al., 2020). Wu et al.
(2019b) and Kumar et al. (2020) include the label
for a given sequence from an annotated task dataset.
Schick and Schütze (2020) separately learn repre-
sentations for rare words.
Although BERT is already actively used as a
source of world knowledge (see subsection 3.3),
there is also work on explicitly supplying struc-
tured knowledge . One approach is entity-
enhanced models. For example, Peters et al.
(2019a); Zhang et al. (2019) include entity em- | 7 | 7 | arxiv2_taclccby4_license.pdf |
beddings as input for training BERT, while Po-
erner et al. (2019) adapt entity vectors to BERT
representations.
Figure 5: Pre-trained weights help BERT find wider optima in fine-tuning on MRPC (right) than training from scratch (left) (Hao et al., 2019)
(2020c) integrate knowledge not through entity em-
beddings, but through additional pre-training ob-
jective of knowledge base completion. Sun et al.
(2019b,c) modify the standard MLM task to mask
named entities rather than random words, and Yin
et al. (2020) train with MLM objective over both
text and linearized table data. Wang et al. (2020a)
enhance RoBERTa with both linguistic and factual
knowledge with task-specific adapters.
Pre-training is the most expensive part of train-
ing BERT, and it would be informative to know
how much benefit it provides. On some tasks, a
randomly initialized and fine-tuned BERT obtains
competitive or higher results than the pre-trained
BERT with the task classifier and frozen weights
(Kovaleva et al., 2019). The consensus in the com-
munity is that pre-training does help in most situa-
tions, but the degree and its exact contribution re-
quires further investigation. Prasanna et al. (2020)
found that most weights of pre-trained BERT are
useful in fine-tuning, although there are "better"
and "worse" subnetworks. One explanation is that
pre-trained weights help the fine-tuned BERT find
wider and flatter areas with smaller generalization
error, which makes the model more robust to over-
fitting (see Figure 5 from Hao et al. (2019)).
Given the large number and variety of proposed
modifications, one would wish to know how much
impact each of them has. However, due to the
overall trend towards large model sizes, systematic
ablations have become expensive. Most new mod-
els claim superiority on standard benchmarks, but
gains are often marginal, and estimates of model
stability and significance testing are very rare.
5.4 Fine-tuning BERT
Pre-training + fine-tuning workflow is a crucial
part of BERT. The former is supposed to provide
task-independent knowledge, and the latter would
presumably teach the model to rely more on the
representations useful for the task at hand.
Kovaleva et al. (2019) did not find that to be the
case for BERT fine-tuned on GLUE tasks 5: dur-
ing fine-tuning, the most changes for 3 epochs oc-
curred in the last two layers of the models, but those
changes caused self-attention to focus on [SEP]
rather than on linguistically interpretable patterns.
It is understandable why fine-tuning would increase
the attention to [CLS], but not [SEP]. If Clark
et al. (2019) are correct that [SEP] serves as "no-
op" indicator, fine-tuning basically tells BERT what
to ignore.
Several studies explored the possibilities of im-
proving the fine-tuning of BERT:
• Taking more layers into account: learning
a complementary representation of the infor-
mation in deep and output layers (Yang and
Zhao, 2019), using a weighted combination
of all layers instead of the final one (Su and
Cheng, 2019; Kondratyuk and Straka, 2019),
and layer dropout (Kondratyuk and Straka,
2019); a toy sketch of this layer-weighting idea follows this list.
• Two-stage fine-tuning introduces an inter-
mediate supervised training stage between
pre-training and fine-tuning (Phang et al.,
2019; Garg et al., 2020; Arase and Tsujii,
2019; Pruksachatkun et al., 2020; Glavaš and
Vulić, 2020). Ben-David et al. (2020) propose
a pivot-based variant of MLM to fine-tune
BERT for domain adaptation.
• Adversarial token perturbations improve
robustness of the model (Zhu et al., 2019).
• Adversarial regularization in combination
with Bregman Proximal Point Optimization
helps alleviate pre-trained knowledge forget-
ting and therefore prevents BERT from overfit-
ting to downstream tasks (Jiang et al., 2019a).
• Mixout regularization improves the stability
of BERT fine-tuning even for a small number
of training examples (Lee et al., 2019).
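As referenced in the first item above, a learned weighted combination over all layers (a "scalar mix") can be sketched as follows; this is our own simplification, not the exact formulation of Su and Cheng (2019) or Kondratyuk and Straka (2019), and the sizes are illustrative:

```python
# Hedged sketch: softmax-weighted combination of all layer outputs, learned
# during fine-tuning, instead of using only the final layer.
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    def __init__(self, num_layers: int):
        super().__init__()
        self.weights = nn.Parameter(torch.zeros(num_layers))
        self.gamma = nn.Parameter(torch.ones(1))

    def forward(self, layer_outputs):              # list of (batch, seq, hidden)
        w = torch.softmax(self.weights, dim=0)
        return self.gamma * sum(wi * h for wi, h in zip(w, layer_outputs))

mix = ScalarMix(num_layers=13)                      # embeddings + 12 layers
layers = [torch.randn(2, 8, 768) for _ in range(13)]
print(mix(layers).shape)                            # torch.Size([2, 8, 768])
```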
With large models, even fine-tuning becomes ex-
pensive, but Houlsby et al. (2019) show that it can
5Kondratyuk and Straka (2019) suggest that fine-tuning on
Universal Dependencies does result in syntactically meaning-
ful attention patterns, but there was no quantitative evaluation. | 8 | 8 | arxiv2_taclccby4_license.pdf |
be successfully approximated with adapter mod-
ules. They achieve competitive performance on
26 classification tasks at a fraction of the computa-
tional cost. Adapters in BERT were also used for
multi-task learning (Stickland and Murray, 2019)
and cross-lingual transfer (Artetxe et al., 2019). An
alternative to fine-tuning is extracting features from
frozen representations, but fine-tuning works better
for BERT (Peters et al., 2019b).
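A minimal sketch of a bottleneck adapter in the style of Houlsby et al. (2019) is shown below; the hidden and bottleneck sizes are illustrative, and in practice such modules are inserted after each Transformer sublayer while the pre-trained weights stay frozen:

```python
# Hedged sketch: a bottleneck adapter (down-projection, nonlinearity,
# up-projection) with a residual connection.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states):
        # The residual keeps the frozen model's behaviour recoverable.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = Adapter()
x = torch.randn(2, 16, 768)          # (batch, seq_len, hidden)
print(adapter(x).shape)              # torch.Size([2, 16, 768])
```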
A big methodological challenge in the current
NLP is that the reported performance improve-
ments of new models may well be within varia-
tion induced by environment factors (Crane, 2018).
BERT is not an exception. Dodge et al. (2020)
report significant variation for BERT fine-tuned
on GLUE tasks due to both weight initialization
and training data order. They also propose early
stopping on the less-promising seeds.
Although we hope that the above observations
may be useful for the practitioners, this section
does not exhaust the current research on fine-tuning
and its alternatives. For example, we do not cover
such topics as Siamese architectures, policy gradi-
ent training, automated curriculum learning, and
others.
6 How big should BERT be?
6.1 Overparameterization
Transformer-based models keep growing by or-
ders of magnitude: the 110M parameters of base
BERT are now dwarfed by 17B parameters of
Turing-NLG (Microsoft, 2020), which is dwarfed
by 175B of GPT-3 (Brown et al., 2020). This trend
raises concerns about computational complexity
of self-attention (Wu et al., 2019a), environmental
issues (Strubell et al., 2019; Schwartz et al., 2019),
fair comparison of architectures (Aßenmacher and
Heumann, 2020), and reproducibility.
Human language is incredibly complex, and
would perhaps take many more parameters to de-
scribe fully, but the current models do not make
good use of the parameters they already have. Voita
et al. (2019b) showed that all but a few Trans-
former heads could be pruned without signif-
icant losses in performance . For BERT, Clark
et al. (2019) observe that most heads in the same
layer show similar self-attention patterns (perhaps
related to the fact that the output of all self-attention
heads in a layer is passed through the same MLP),
which explains why Michel et al. (2019) were able
to reduce most layers to a single head.
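Head ablation of this kind is easy to emulate; the sketch below (our own, with arbitrarily chosen heads) zeroes out a few heads via the head_mask argument and measures how much the output changes:

```python
# Hedged sketch: disable a subset of attention heads with a binary head mask
# and compare the resulting hidden states to the unablated model.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tok("Pruning a few heads often changes little.", return_tensors="pt")
head_mask = torch.ones(model.config.num_hidden_layers, model.config.num_attention_heads)
head_mask[10, :6] = 0.0                      # disable six heads in layer 10 (arbitrary choice)

with torch.no_grad():
    full = model(**inputs).last_hidden_state
    ablated = model(**inputs, head_mask=head_mask).last_hidden_state
print("mean absolute change:", (full - ablated).abs().mean().item())
```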
Depending on the task, some BERT heads/layers
are not only redundant (Kao et al., 2020), but also
harmful to the downstream task performance. Pos-
itive effect from head disabling was reported for
machine translation (Michel et al., 2019), abstrac-
tive summarization (Baan et al., 2019), and GLUE
tasks (Kovaleva et al., 2019). Additionally, Ten-
ney et al. (2019a) examine the cumulative gains of
their structural probing classifier, observing that in
5 out of 8 probing tasks some layers cause a drop
in scores (typically in the final layers). Gordon
et al. (2020) find that 30–40% of the weights can
be pruned without impact on downstream tasks.
In general, larger BERT models perform better
(Liu et al., 2019a; Roberts et al., 2020), but not
always: BERT-base outperformed BERT-large on
subject-verb agreement (Goldberg, 2019) and sen-
tence subject detection (Lin et al., 2019). Given
the complexity of language, and amounts of pre-
training data, it is not clear why BERT ends up with
redundant heads and layers. Clark et al. (2019) sug-
gest that one possible reason is the use of attention
dropouts, which causes some attention weights to
be zeroed-out during training.
6.2 Compression techniques
Given the above evidence of overparameteriza-
tion, it does not come as a surprise that BERT
can be efficiently compressed with minimal ac-
curacy loss, which would be highly desirable for
real-world applications. Such efforts to date are
summarized in Table 1. The main approaches are
knowledge distillation, quantization, and pruning.
The studies in the knowledge distillation
framework (Hinton et al., 2014) use a smaller
student-network trained to mimic the behavior of
a larger teacher-network. For BERT, this has been
achieved through experiments with loss functions
(Sanh et al., 2019b; Jiao et al., 2019), mimicking
the activation patterns of individual portions of the
teacher network (Sun et al., 2019a), and knowledge
transfer at the pre-training (Turc et al., 2019; Jiao
et al., 2019; Sun et al., 2020) or fine-tuning stage
(Jiao et al., 2019). McCarley et al. (2020) suggest
that distillation has so far worked better for GLUE
than for reading comprehension, and report good
results for QA from a combination of structured
pruning and task-specific distillation.
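The common core of these approaches is the distillation objective of Hinton et al. (2014); a minimal sketch follows, where the temperature, the loss weighting, and the toy tensors are our own choices, and the cited BERT papers add further loss terms (e.g. matching hidden states or attention maps):

```python
# Hedged sketch: the student matches the teacher's softened output distribution
# (soft targets) plus the gold labels (hard targets).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard cross-entropy against gold labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(8, 2)
teacher_logits = torch.randn(8, 2)
labels = torch.randint(0, 2, (8,))
print(distillation_loss(student_logits, teacher_logits, labels).item())
```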
Quantization decreases BERT’s memory foot-
print through lowering the precision of its weights
(Shen et al., 2019; Zafrir et al., 2019). Note that | 9 | 9 | arxiv2_taclccby4_license.pdf |
Compression Performance Speedup Model Evaluation
BERT-base (Devlin et al., 2019) ×1 100% ×1 BERT 12 All GLUE tasks, SQuAD
BERT-small ×3.8 91% - BERT 4† All GLUE tasks
Distillation
DistilBERT (Sanh et al., 2019a) ×1.5 90% § ×1.6 BERT 6 All GLUE tasks, SQuAD
BERT6-PKD (Sun et al., 2019a) ×1.6 98% ×1.9 BERT 6 No WNLI, CoLA, STS-B; RACE
BERT3-PKD (Sun et al., 2019a) ×2.4 92% ×3.7 BERT 3 No WNLI, CoLA, STS-B; RACE
Aguilar et al. (2019), Exp. 3 ×1.6 93% - BERT 6 CoLA, MRPC, QQP, RTE
BERT-48 (Zhao et al., 2019) ×62 87% ×77 BERT 12∗† MNLI, MRPC, SST-2
BERT-192 (Zhao et al., 2019) ×5.7 93% ×22 BERT 12∗† MNLI, MRPC, SST-2
TinyBERT (Jiao et al., 2019) ×7.5 96% ×9.4 BERT 4† No WNLI; SQuAD
MobileBERT (Sun et al., 2020) ×4.3 100% ×4 BERT 24† No WNLI; SQuAD
PD (Turc et al., 2019) ×1.6 98% ×2.5‡ BERT6† No WNLI, CoLA and STS-B
WaLDORf (Tian et al., 2019) ×4.4 93% ×9 BERT 8†∥ SQuAD
MiniLM (Wang et al., 2020b) ×1.65 99% ×2 BERT 6 No WNLI, STS-B, MNLImm; SQuAD
MiniBERT(Tsai et al., 2019) ×6∗∗ 98% ×27∗∗ mBERT3† CoNLL-18 POS and morphology
BiLSTM-soft (Tang et al., 2019) ×110 91% ×434‡ BiLSTM1 MNLI, QQP, SST-2
Quantization
Q-BERT-MP (Shen et al., 2019) ×13 98% ¶ - BERT 12 MNLI, SST-2, CoNLL-03, SQuAD
BERT-QAT (Zafrir et al., 2019) ×4 99% - BERT 12 No WNLI, MNLI; SQuAD
GOBO(Zadeh and Moshovos, 2020) ×9.8 99% - BERT 12 MNLI
Pruning
McCarley et al. (2020), ff2 ×2.2‡ 98%‡ ×1.9‡ BERT24 SQuAD, Natural Questions
RPP (Guo et al., 2019) ×1.7‡ 99%‡ - BERT 24 No WNLI, STS-B; SQuAD
Soft MvP (Sanh et al., 2020) ×33 94% ¶ - BERT 12 MNLI, QQP, SQuAD
IMP (Chen et al., 2020), rewind 50% ×1.4–2.5 94–100% - BERT 12 No MNLI-mm; SQuAD
Other
ALBERT-base (Lan et al., 2020b) ×9 97% - BERT 12† MNLI, SST-2
ALBERT-xxlarge (Lan et al., 2020b) ×0.47 107% - BERT 12† MNLI, SST-2
BERT-of-Theseus (Xu et al., 2020) ×1.6 98% ×1.9 BERT 6 No WNLI
PoWER-BERT (Goyal et al., 2020) N/A 99% ×2–4.5 BERT 12 No WNLI; RACE
Table 1: Comparison of BERT compression studies. Compression, performance retention, inference time speedup
figures are given with respect to BERT base, unless indicated otherwise. Performance retention is measured as
a ratio of average scores achieved by a given model and by BERT base. The subscript in the model description
reflects the number of layers used. ∗Smaller vocabulary used. †The dimensionality of the hidden layers is reduced.
∥Convolutional layers used. ‡Compared to BERT-large. ∗∗Compared to mBERT. §As reported in (Jiao et al., 2019). ¶In
comparison to the dev set.
this strategy often requires compatible hardware.
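As a generic illustration of the idea (not the mixed-precision scheme of Shen et al. (2019) or the quantization-aware training of Zafrir et al. (2019)), post-training dynamic quantization with stock PyTorch looks as follows:

```python
# Hedged sketch: store the weights of all Linear layers as int8 and compare
# the serialized model sizes.
import os
import torch
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8   # int8 weights for all Linear layers
)

def size_mb(m, path="tmp.pt"):
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f"fp32: {size_mb(model):.0f} MB  ->  dynamic int8: {size_mb(quantized):.0f} MB")
```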
As discussed in section 6, individual self-
attention heads and BERT layers can be disabled
without significant drop in performance (Michel
et al., 2019; Kovaleva et al., 2019; Baan et al.,
2019). Pruning is a compression technique that
takes advantage of that fact, typically reducing the
amount of computation via zeroing out of certain
parts of the large model. In structured pruning,
architecture blocks are dropped, as in LayerDrop
(Fan et al., 2019). In unstructured, the weights in
the entire model are pruned irrespective of their lo-
cation, as in magnitude pruning (Chen et al., 2020)
or movement pruning (Sanh et al., 2020).
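A one-shot toy version of unstructured magnitude pruning is sketched below; the studies above prune iteratively, with rewinding or movement scores, so this only conveys the mechanics:

```python
# Hedged sketch: zero out the weights with the smallest absolute values.
import torch
import torch.nn as nn

def magnitude_prune_(linear: nn.Linear, sparsity: float = 0.3):
    w = linear.weight.data
    k = int(sparsity * w.numel())
    threshold = w.abs().flatten().kthvalue(k).values
    w[w.abs() <= threshold] = 0.0            # in-place zeroing of small weights

layer = nn.Linear(768, 768)
magnitude_prune_(layer, sparsity=0.3)
print("fraction of zero weights:", (layer.weight == 0).float().mean().item())
```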
Prasanna et al. (2020) and Chen et al. (2020)
explore BERT from the perspective of the lottery
ticket hypothesis (Frankle and Carbin, 2019), look-
ing specifically at the "winning" subnetworks in
pre-trained BERT. They independently find that
such subnetworks do exist, and that transferability
between subnetworks for different tasks varies.
If the ultimate goal of training BERT is compres-
sion, Li et al. (2020) recommend training larger
models and compressing them heavily rather than
compressing smaller models lightly.
Other techniques include decomposing BERT’s
embedding matrix into smaller matrices (Lan et al.,
2020a), progressive module replacing (Xu et al.,
2020) and dynamic elimination of intermediate en-
coder outputs (Goyal et al., 2020). See Ganesh et al.
(2020) for a more detailed discussion of compres-
sion methods.
6.3 Pruning and model analysis
There is a nascent discussion around pruning as a
model analysis technique. The basic idea is that
a compressed model a priori consists of elements
that are useful for prediction; therefore by finding
out what they do we may find out what the whole
network does. For instance, BERT has heads that
seem to encode frame-semantic relations, but dis-
abling them might not hurt downstream task per-
formance (Kovaleva et al., 2019); this suggests that
this knowledge is not actually used.
For the base Transformer, Voita et al. (2019b)
identify the functions of self-attention heads and | 10 | 10 | arxiv2_taclccby4_license.pdf |
then check which of them survive the pruning, find-
ing that the syntactic and positional heads are the
last ones to go. For BERT, Prasanna et al. (2020)
go in the opposite direction: pruning on the basis of
importance scores, and interpreting the remaining
"good" subnetwork. With respect to self-attention
heads specifically, it does not seem to be the case
that only the heads that potentially encode non-
trivial linguistic patterns survive the pruning.
The models and methodology in these studies
differ, so the evidence is inconclusive. In particular,
Voita et al. (2019b) find that before pruning the
majority of heads are syntactic, and Prasanna et al.
(2020) – that the majority of heads do not have
potentially non-trivial attention patterns.
An important limitation of the current head and
layer ablation studies (Michel et al., 2019; Koval-
eva et al., 2019) is that they inherently assume
that certain knowledge is contained in heads/layers.
However, there is evidence of more diffuse rep-
resentations spread across the full network, such
as the gradual increase in accuracy on difficult se-
mantic parsing tasks (Tenney et al., 2019a) or the
absence of heads that would perform parsing "in
general" (Clark et al., 2019; Htut et al., 2019). If so,
ablating individual components harms the weight-
sharing mechanism. Conclusions from component
ablations are also problematic if the same informa-
tion is duplicated elsewhere in the network.
7 Directions for further research
BERTology has clearly come a long way, but it
is fair to say we still have more questions than
answers about how BERT works. In this section,
we list what we believe to be the most promising
directions for further research.
Benchmarks that require verbal reasoning.
While BERT enabled breakthroughs on many NLP
benchmarks, a growing list of analysis papers are
showing that its language skills are not as impres-
sive as it seems. In particular, it was shown to rely
on shallow heuristics in natural language inference
(McCoy et al., 2019b; Zellers et al., 2019; Jin et al.,
2020), reading comprehension (Si et al., 2019a;
Rogers et al., 2020; Sugawara et al., 2020; Si et al.,
2019b; Yogatama et al., 2019), argument reason-
ing comprehension (Niven and Kao, 2019), and
text classification (Jin et al., 2020). Such heuristics
can even be used to reconstruct a non-publicly-
available model (Krishna et al., 2020). As with
any optimization method, if there is a shortcut in
the data, we have no reason to expect BERT to not
learn it. But harder datasets that cannot be resolved
with shallow heuristics are unlikely to emerge if
their development is not as valued as modeling
work.
Benchmarks for the full range of linguistic
competence. While the language models seem to
acquire a great deal of knowledge about language,
we do not currently have comprehensive stress tests
for different aspects of linguistic knowledge. A
step in this direction is the "Checklist" behavioral
testing (Ribeiro et al., 2020), the best paper at ACL
2020. Ideally, such tests would measure not only
errors, but also sensitivity (Ettinger, 2019).
Developing methods to "teach" reasoning.
While large pre-trained models have a lot of knowl-
edge, they often fail if any reasoning needs to be
performed on top of the facts they possess (Tal-
mor et al., 2019, see also subsection 3.3). For in-
stance, Richardson et al. (2020) propose a method
to "teach" BERT quantification, conditionals, com-
paratives, and boolean coordination.
Learning what happens at inference time.
Most BERT analysis papers focus on different
probes of the model, with the goal to find what
the language model "knows". However, probing
studies have limitations (subsection 3.4), and to this
point, far fewer papers have focused on discovering
what knowledge actually gets used. Several promis-
ing directions are the "amnesic probing" (Elazar
et al., 2020), identifying features important for pre-
diction for a given task (Arkhangelskaia and Dutta,
2019), and pruning the model to remove the non-
important components (V oita et al., 2019b; Michel
et al., 2019; Prasanna et al., 2020).
8 Conclusion
In a little over a year, BERT has become a ubiq-
uitous baseline in NLP experiments and inspired
numerous studies analyzing the model and propos-
ing various improvements. The stream of papers
seems to be accelerating rather than slowing down,
and we hope that this survey helps the community
to focus on the biggest unresolved questions.
9 Acknowledgements
We thank the anonymous reviewers for their valu-
able feedback. This work is funded in part by
the NSF award number IIS-1844740 to Anna
Rumshisky. | 11 | 11 | arxiv2_taclccby4_license.pdf |
References
Gustavo Aguilar, Yuan Ling, Yu Zhang, Benjamin
Yao, Xing Fan, and Edward Guo. 2019. Knowl-
edge Distillation from Internal Representations.
arXiv preprint arXiv:1910.03723.
Alan Akbik, Tanja Bergmann, and Roland Voll-
graf. 2019. Pooled Contextualized Embeddings
for Named Entity Recognition. In Proceedings
of the 2019 Conference of the North Ameri-
can Chapter of the Association for Computa-
tional Linguistics: Human Language Technolo-
gies, Volume 1 (Long and Short Papers), pages
724–728, Minneapolis, Minnesota. Association
for Computational Linguistics.
Yuki Arase and Jun’ichi Tsujii. 2019. Transfer
Fine-Tuning: A BERT Case Study. In Proceed-
ings of the 2019 Conference on Empirical Meth-
ods in Natural Language Processing and the
9th International Joint Conference on Natural
Language Processing (EMNLP-IJCNLP), pages
5393–5404, Hong Kong, China. Association for
Computational Linguistics.
Ekaterina Arkhangelskaia and Sourav Dutta. 2019.
Whatcha lookin’ at? DeepLIFTing BERT’s At-
tention in Question Answering. arXiv preprint
arXiv:1910.06431.
Mikel Artetxe, Sebastian Ruder, and Dani Yo-
gatama. 2019. On the Cross-lingual Trans-
ferability of Monolingual Representations.
arXiv:1911.03310 [cs].
Matthias Aßenmacher and Christian Heumann.
2020. On the comparability of Pre-Trained Lan-
guage Models. arXiv:2001.00781 [cs, stat].
Joris Baan, Maartje ter Hoeve, Marlies van der
Wees, Anne Schuth, and Maarten de Rijke.
2019. Understanding Multi-Head Attention
in Abstractive Summarization. arXiv preprint
arXiv:1911.03898.
Alexei Baevski, Sergey Edunov, Yinhan Liu, Luke
Zettlemoyer, and Michael Auli. 2019. Cloze-
driven Pretraining of Self-Attention Networks.
In Proceedings of the 2019 Conference on Em-
pirical Methods in Natural Language Process-
ing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-
IJCNLP), pages 5360–5369, Hong Kong, China.
Association for Computational Linguistics.
He Bai, Peng Shi, Jimmy Lin, Luchen
Tan, Kun Xiong, Wen Gao, and Ming Li.
2020. SegaBERT: Pre-training of Segment-
aware BERT for Language Understanding.
arXiv:2004.14996 [cs].
Sriram Balasubramanian, Naman Jain, Gaurav Jin-
dal, Abhijeet Awasthi, and Sunita Sarawagi.
2020. What’s in a Name? Are BERT Named En-
tity Representations just as Good for any other
Name? In Proceedings of the 5th Workshop on
Representation Learning for NLP , pages 205–
214, Online. Association for Computational Lin-
guistics.
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang,
Nan Yang, Xiaodong Liu, Yu Wang, Songhao
Piao, Jianfeng Gao, Ming Zhou, and Hsiao-
Wuen Hon. 2020. UniLMv2: Pseudo-Masked
Language Models for Unified Language Model
Pre-Training. arXiv:2002.12804 [cs].
Yonatan Belinkov and James Glass. 2019. Anal-
ysis Methods in Neural Language Processing:
A Survey. Transactions of the Association for
Computational Linguistics, 7:49–72.
Eyal Ben-David, Carmel Rabinovitz, and Roi Re-
ichart. 2020. PERL: Pivot-based Domain Adap-
tation for Pre-trained Deep Contextualized Em-
bedding Models. arXiv:2006.09075 [cs].
Rishi Bommasani, Kelly Davis, and Claire Cardie.
2020. Interpreting Pretrained Contextualized
Representations via Reductions to Static Em-
beddings. In Proceedings of the 58th Annual
Meeting of the Association for Computational
Linguistics, pages 4758–4781.
Zied Bouraoui, Jose Camacho-Collados, and
Steven Schockaert. 2019. Inducing Relational
Knowledge from BERT. arXiv:1911.12753
[cs].
Samuel Broscheit. 2019. Investigating Entity
Knowledge in BERT with Simple Neural End-
To-End Entity Linking. In Proceedings of the
23rd Conference on Computational Natural Lan-
guage Learning (CoNLL), pages 677–685, Hong
Kong, China. Association for Computational
Linguistics.
Tom B. Brown, Benjamin Mann, Nick Ryder,
Melanie Subbiah, Jared Kaplan, Prafulla Dhari-
wal, Arvind Neelakantan, Pranav Shyam, Girish
Sastry, Amanda Askell, Sandhini Agarwal, Ariel
Herbert-Voss, Gretchen Krueger, Tom Henighan,
Rewon Child, Aditya Ramesh, Daniel M.
Ziegler, Jeffrey Wu, Clemens Winter, Christo-
pher Hesse, Mark Chen, Eric Sigler, Mateusz
Litwin, Scott Gray, Benjamin Chess, Jack Clark,
Christopher Berner, Sam McCandlish, Alec
Radford, Ilya Sutskever, and Dario Amodei.
2020. Language Models are Few-Shot Learners.
arXiv:2005.14165 [cs].
Gino Brunner, Yang Liu, Damian Pascual, Oliver
Richter, Massimiliano Ciaramita, and Roger
Wattenhofer. 2020. On Identifiability in Trans-
formers. In International Conference on Learn-
ing Representations.
Tianlong Chen, Jonathan Frankle, Shiyu Chang,
Sijia Liu, Yang Zhang, Zhangyang Wang, and
Michael Carbin. 2020. The Lottery Ticket
Hypothesis for Pre-trained BERT Networks.
arXiv:2007.12223 [cs, stat].
Xingyi Cheng, Weidi Xu, Kunlong Chen, Wei
Wang, Bin Bi, Ming Yan, Chen Wu, Luo Si, Wei
Chu, and Taifeng Wang. 2019. Symmetric Reg-
ularization based BERT for Pair-Wise Semantic
Reasoning. arXiv:1909.03405 [cs].
Kevin Clark, Urvashi Khandelwal, Omer Levy,
and Christopher D. Manning. 2019. What Does
BERT Look at? An Analysis of BERT’s Atten-
tion. In Proceedings of the 2019 ACL Workshop
BlackboxNLP: Analyzing and Interpreting Neu-
ral Networks for NLP, pages 276–286, Florence,
Italy. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and
Christopher D. Manning. 2020. ELECTRA: Pre-
Training Text Encoders as Discriminators Rather
Than Generators. In International Conference
on Learning Representations.
Stephane Clinchant, Kweon Woo Jung, and Vas-
silina Nikoulina. 2019. On the use of BERT
for Neural Machine Translation. In Proceedings
of the 3rd Workshop on Neural Generation and
Translation, pages 108–117, Hong Kong. Asso-
ciation for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman
Goyal, Vishrav Chaudhary, Guillaume Wen-
zek, Francisco Guzmán, Edouard Grave, Myle
Ott, Luke Zettlemoyer, and Veselin Stoyanov.
2019. Unsupervised Cross-Lingual Representa-
tion Learning at Scale. arXiv:1911.02116 [cs].
Gonçalo M. Correia, Vlad Niculae, and André F. T.
Martins. 2019. Adaptively Sparse Transform-
ers. In Proceedings of the 2019 Conference on
Empirical Methods in Natural Language Pro-
cessing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-
IJCNLP), pages 2174–2184, Hong Kong, China.
Association for Computational Linguistics.
Matt Crane. 2018. Questionable Answers in Ques-
tion Answering Research: Reproducibility and
Variability of Published Results. Transactions of
the Association for Computational Linguistics,
6:241–252.
Leyang Cui, Sijie Cheng, Yu Wu, and Yue Zhang.
2020. Does BERT Solve Commonsense Task via
Commonsense Knowledge? arXiv:2008.03945
[cs].
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin,
Ziqing Yang, Shijin Wang, and Guoping Hu.
2019. Pre-Training with Whole Word Masking
for Chinese BERT. arXiv:1906.08101 [cs].
Jeff Da and Jungo Kasai. 2019. Cracking the
Contextual Commonsense Code: Understand-
ing Commonsense Reasoning Aptitude of Deep
Contextual Representations. In Proceedings of
the First Workshop on Commonsense Inference
in Natural Language Processing , pages 1–12,
Hong Kong, China. Association for Computa-
tional Linguistics.
Joe Davison, Joshua Feldman, and Alexander Rush.
2019. Commonsense Knowledge Mining from
Pretrained Models. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 1173–1178, Hong
Kong, China. Association for Computational
Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training
of Deep Bidirectional Transformers for Lan-
guage Understanding. In Proceedings of the
2019 Conference of the North American Chapter
of the Association for Computational Linguis-
tics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 4171–4186.
Jesse Dodge, Gabriel Ilharco, Roy Schwartz, Ali
Farhadi, Hannaneh Hajishirzi, and Noah Smith.
2020. Fine-Tuning Pretrained Language Models:
Weight Initializations, Data Orders, and Early
Stopping. arXiv:2002.06305 [cs].
Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and
Yoav Goldberg. 2020. When Bert Forgets How
To POS: Amnesic Probing of Linguistic Proper-
ties and MLM Predictions. arXiv:2006.00995
[cs].
Kawin Ethayarajh. 2019. How Contextual are
Contextualized Word Representations? Compar-
ing the Geometry of BERT, ELMo, and GPT-2
Embeddings. In Proceedings of the 2019 Con-
ference on Empirical Methods in Natural Lan-
guage Processing and the 9th International Joint
Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 55–65, Hong Kong,
China. Association for Computational Linguis-
tics.
Allyson Ettinger. 2019. What BERT is
not: Lessons from a new suite of psy-
cholinguistic diagnostics for language models.
arXiv:1907.13528 [cs].
Angela Fan, Edouard Grave, and Armand Joulin.
2019. Reducing Transformer Depth on Demand
with Structured Dropout. In International Con-
ference on Learning Representations.
Maxwell Forbes, Ari Holtzman, and Yejin Choi.
2019. Do Neural Language Representations
Learn Physical Commonsense? In Proceedings
of the 41st Annual Conference of the Cognitive
Science Society (CogSci 2019), page 7.
Jonathan Frankle and Michael Carbin. 2019. The
Lottery Ticket Hypothesis: Finding Sparse,
Trainable Neural Networks. In International
Conference on Learning Representations.
Prakhar Ganesh, Yao Chen, Xin Lou, Moham-
mad Ali Khan, Yin Yang, Deming Chen, Mari-
anne Winslett, Hassan Sajjad, and Preslav Nakov.
2020. Compressing large-scale transformer-
based models: A case study on BERT. arXiv
preprint arXiv:2002.11985.
Siddhant Garg, Thuy Vu, and Alessandro Moschitti.
2020. TANDA: Transfer and Adapt Pre-Trained
Transformer Models for Answer Sentence Selec-
tion. In AAAI.
Michael Glass, Alfio Gliozzo, Rishav Chakravarti,
Anthony Ferritto, Lin Pan, G P Shrivatsa Bhar-
gav, Dinesh Garg, and Avi Sil. 2020. Span
Selection Pre-training for Question Answering.
In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics,
pages 2773–2782, Online. Association for Com-
putational Linguistics.
Goran Glavaš and Ivan Vulić. 2020. Is Super-
vised Syntactic Parsing Beneficial for Language
Understanding? An Empirical Investigation.
arXiv:2008.06788 [cs].
Adele Goldberg. 2006. Constructions at Work: The
Nature of Generalization in Language. Oxford
University Press, USA.
Yoav Goldberg. 2019. Assessing BERT’s syntactic
abilities. arXiv preprint arXiv:1901.05287.
Linyuan Gong, Di He, Zhuohan Li, Tao Qin, Liwei
Wang, and Tieyan Liu. 2019. Efficient training
of BERT by progressively stacking. In Interna-
tional Conference on Machine Learning, pages
2337–2346.
Mitchell A Gordon, Kevin Duh, and Nicholas An-
drews. 2020. Compressing BERT: Studying the
effects of weight pruning on transfer learning.
arXiv preprint arXiv:2002.08307.
Saurabh Goyal, Anamitra Roy Choudhary, Venkate-
san Chakaravarthy, Saurabh ManishRaje, Yogish
Sabharwal, and Ashish Verma. 2020. Power-
bert: Accelerating BERT inference for classifi-
cation tasks. arXiv preprint arXiv:2001.08950.
Fu-Ming Guo, Sijia Liu, Finlay S. Mungall, Xue
Lin, and Yanzhi Wang. 2019. Reweighted Prox-
imal Pruning for Large-Scale Language Repre-
sentation. arXiv:1909.12486 [cs, stat].
Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pa-
supat, and Ming-Wei Chang. 2020. REALM:
Retrieval-Augmented Language Model Pre-
Training. arXiv:2002.08909 [cs].
Yaru Hao, Li Dong, Furu Wei, and Ke Xu. 2019.
Visualizing and Understanding the Effective-
ness of BERT. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 4143–4152, Hong
Kong, China. Association for Computational
Linguistics.
John Hewitt and Christopher D. Manning. 2019.
A Structural Probe for Finding Syntax in Word
Representations. In Proceedings of the 2019
Conference of the North American Chapter of
the Association for Computational Linguistics:
Human Language Technologies, Volume 1 (Long
and Short Papers), pages 4129–4138.
Geoffrey Hinton, Oriol Vinyals, and Jeff Dean.
2014. Distilling the Knowledge in a Neural Net-
work. In Deep Learning and Representation
Learning Workshop: NIPS 2014.
Benjamin Hoover, Hendrik Strobelt, and Sebastian
Gehrmann. 2019. exBERT: A Visual Analy-
sis Tool to Explore Learned Representations in
Transformers Models. arXiv:1910.05276 [cs].
Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzeb-
ski, Bruna Morrone, Quentin de Laroussilhe, An-
drea Gesmundo, Mona Attariyan, and Sylvain
Gelly. 2019. Parameter-Efficient Transfer Learn-
ing for NLP. arXiv:1902.00751 [cs, stat].
Phu Mon Htut, Jason Phang, Shikha Bordia, and
Samuel R Bowman. 2019. Do attention heads
in BERT track syntactic dependencies? arXiv
preprint arXiv:1911.12246.
Sarthak Jain and Byron C. Wallace. 2019. Atten-
tion is not Explanation. In Proceedings of the
2019 Conference of the North American Chapter
of the Association for Computational Linguis-
tics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 3543–3556.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does BERT learn about the structure
of language? In 57th Annual Meeting of the
Association for Computational Linguistics (ACL),
Florence, Italy.
Haoming Jiang, Pengcheng He, Weizhu Chen, Xi-
aodong Liu, Jianfeng Gao, and Tuo Zhao. 2019a.
SMART: Robust and Efficient Fine-Tuning for
Pre-trained Natural Language Models through
Principled Regularized Optimization. arXiv
preprint arXiv:1911.03437.
Zhengbao Jiang, Frank F. Xu, Jun Araki, and Gra-
ham Neubig. 2019b. How Can We Know What
Language Models Know? arXiv:1911.12543
[cs].
Xiaoqi Jiao, Yichun Yin, Lifeng Shang, Xin Jiang,
Xiao Chen, Linlin Li, Fang Wang, and Qun
Liu. 2019. TinyBERT: Distilling BERT for nat-
ural language understanding. arXiv preprint
arXiv:1909.10351.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter
Szolovits. 2020. Is BERT Really Robust? A
Strong Baseline for Natural Language Attack
on Text Classification and Entailment. In AAAI
2020.
Mandar Joshi, Danqi Chen, Yinhan Liu, Daniel S.
Weld, Luke Zettlemoyer, and Omer Levy. 2020.
SpanBERT: Improving Pre-Training by Repre-
senting and Predicting Spans. Transactions of
the Association for Computational Linguistics,
8:64–77.
Wei-Tsung Kao, Tsung-Han Wu, Po-Han Chi,
Chun-Cheng Hsieh, and Hung-Yi Lee. 2020.
Further boosting BERT-based models by du-
plicating existing layers: Some intriguing
phenomena inside BERT. arXiv preprint
arXiv:2001.09309.
Taeuk Kim, Jihun Choi, Daniel Edmiston, and
Sang-goo Lee. 2020. Are pre-trained language
models aware of phrases? simple but strong
baselines for grammar induction. In ICLR 2020.
Goro Kobayashi, Tatsuki Kuribayashi, Sho Yokoi,
and Kentaro Inui. 2020. Attention Module is
Not Only a Weight: Analyzing Transformers
with Vector Norms. arXiv:2004.10102 [cs].
Dan Kondratyuk and Milan Straka. 2019. 75 Lan-
guages, 1 Model: Parsing Universal Dependen-
cies Universally. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 2779–2795, Hong
Kong, China. Association for Computational
Linguistics.
Lingpeng Kong, Cyprien de Masson d’Autume, Lei
Yu, Wang Ling, Zihang Dai, and Dani Yogatama.
2019. A mutual information maximization per-
spective of language representation learning. In
International Conference on Learning Represen-
tations.
Olga Kovaleva, Alexey Romanov, Anna Rogers,
and Anna Rumshisky. 2019. Revealing the Dark
Secrets of BERT. In Proceedings of the 2019
Conference on Empirical Methods in Natural
Language Processing and the 9th International
Joint Conference on Natural Language Process-
ing (EMNLP-IJCNLP), pages 4356–4365, Hong
Kong, China. Association for Computational
Linguistics.
Kalpesh Krishna, Gaurav Singh Tomar, Ankur P.
Parikh, Nicolas Papernot, and Mohit Iyyer. 2020.
Thieves on Sesame Street! Model Extraction of
BERT-Based APIs. In ICLR 2020.
Varun Kumar, Ashutosh Choudhary, and Eunah
Cho. 2020. Data Augmentation using Pre-
Trained Transformer Models. arXiv:2003.02245
[cs].
Ilia Kuznetsov and Iryna Gurevych. 2020. A Mat-
ter of Framing: The Impact of Linguistic For-
malism on Probing Results. arXiv:2004.14999
[cs].
Guillaume Lample and Alexis Conneau. 2019.
Cross-Lingual Language Model Pretraining.
arXiv:1901.07291 [cs].
Zhenzhong Lan, Mingda Chen, Sebastian Good-
man, Kevin Gimpel, Piyush Sharma, and Radu
Soricut. 2020a. ALBERT: A Lite BERT for
Self-Supervised Learning of Language Repre-
sentations. In ICLR.
Zhenzhong Lan, Mingda Chen, Sebastian Good-
man, Kevin Gimpel, Piyush Sharma, and Radu
Soricut. 2020b. ALBERT: A Lite BERT for
Self-supervised Learning of Language Represen-
tations. In ICLR 2020.
Cheolhyoung Lee, Kyunghyun Cho, and Wanmo
Kang. 2019. Mixout: Effective regularization to
finetune large-scale pretrained language models.
arXiv preprint arXiv:1909.11299.
Mike Lewis, Yinhan Liu, Naman Goyal, Mar-
jan Ghazvininejad, Abdelrahman Mohamed,
Omer Levy, Ves Stoyanov, and Luke Zettle-
moyer. 2019. BART: Denoising Sequence-to-
Sequence Pre-Training for Natural Language
Generation, Translation, and Comprehension.
arXiv:1910.13461 [cs, stat].
Changmao Li and Jinho D. Choi. 2020. Transform-
ers to Learn Hierarchical Contexts in Multiparty
Dialogue for Span-based Question Answering.
In Proceedings of the 58th Annual Meeting of
the Association for Computational Linguistics,
pages 5709–5714, Online. Association for Com-
putational Linguistics.
Zhuohan Li, Eric Wallace, Sheng Shen, Kevin Lin,
Kurt Keutzer, Dan Klein, and Joseph E Gonzalez.
2020. Train large, then compress: Rethinking
model size for efficient training and inference of
transformers. arXiv preprint arXiv:2002.11794.
Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019.
Open Sesame: Getting inside BERT’s Linguistic
Knowledge. In Proceedings of the 2019 ACL
Workshop BlackboxNLP: Analyzing and Inter-
preting Neural Networks for NLP , pages 241–
253.
Nelson F. Liu, Matt Gardner, Yonatan Belinkov,
Matthew E. Peters, and Noah A. Smith. 2019a.
Linguistic Knowledge and Transferability of
Contextual Representations. In Proceedings
of the 2019 Conference of the North Ameri-
can Chapter of the Association for Computa-
tional Linguistics: Human Language Technolo-
gies, Volume 1 (Long and Short Papers), pages
1073–1094, Minneapolis, Minnesota. Associa-
tion for Computational Linguistics.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du,
Mandar Joshi, Danqi Chen, Omer Levy, Mike
Lewis, Luke Zettlemoyer, and Veselin Stoyanov.
2019b. RoBERTa: A Robustly Optimized BERT
Pretraining Approach. arXiv:1907.11692 [cs].
Xiaofei Ma, Zhiguo Wang, Patrick Ng, Ramesh
Nallapati, and Bing Xiang. 2019. Universal Text
Representation from BERT: An Empirical Study.
arXiv:1910.07973 [cs].
Christopher D. Manning, Kevin Clark, John He-
witt, Urvashi Khandelwal, and Omer Levy. 2020.
Emergent linguistic structure in artificial neural
networks trained by self-supervision. Proceed-
ings of the National Academy of Sciences, page
201907367.
Chandler May, Alex Wang, Shikha Bordia,
Samuel R. Bowman, and Rachel Rudinger. 2019.
On Measuring Social Biases in Sentence En-
coders. In Proceedings of the 2019 Confer-